drive-by update!
Jun. 15th, 2025 10:05 pm
1. My sleep was very disrupted for about a week after the surgery, and I had some minor side effects as well as discomfort, but I am now pretty much back to normal. It will take a month or two to see if the slow withering of the fibroid fixes some problems I was having.
2. Friday the 20th will be my last day at the rental company. I am trying to get a certain set of tasks complete before I leave, but Lawyer Man keeps yanking me aside to work on stuff related to our imminent website revamp, which is frustrating. I am 90% sure I will not be able to finish everything I want to wrap up, but such is life.
3. I have two U-Haul U-Boxes set to be delivered to my driveway on Monday the 23rd. My parents will arrive later that day, and the goal is to have the boxes packed and collected by midday on the 26th, and for us to hit the road no later than mid-afternoon on the 27th.
Then I get to crash in their guest room and go AAAAAAAAAAAAAAAAAAAaaaaaaaaaaa for a bit.
4. In July Mom and I will return to Ithaca for my surgery follow-up appointment, and probably also to close my Ithaca bank accounts. Later in July I am road-tripping through western Canada en route to a fandom friends' gathering, and then road-tripping home by way of Washington and assorted bits of the northwestern US, because why not.
Then I will do a bit more AAAAAAAAAAAAAAAAAAAaaaaaaaaaaaaaaaa, and in August I start looking seriously for a new job. New housing will follow, since it's easier to rent an apartment convenient to a job location than to find a job convenient to an apartment location.
5. I have been sorting through boxes and bins of stuff that I have not touched for literal decades, and holy shit I have been clogging my apartment and my life up with so much nonsense. It will be good to start over on a cleaner footing.
What drove the tech right’s — and Elon Musk’s — big, failed bet on Trump
Jun. 13th, 2025 08:52 am

I live and work in the San Francisco Bay Area, and I don’t know anyone who says they voted for Donald Trump in 2016 or 2020. I know, on the other hand, quite a few who voted for him in 2024, and quite a few more who — while they didn’t vote for Trump because of his many crippling personal foibles, corruption, penchant for destroying the global economy, etc. — have thoroughly soured on the Democratic Party.
It’s not just my professional networks. While tech has generally been very liberal in its political support and giving, the last few years have seen the emergence of a real and influential tech right.
Elon Musk, of course, is by far the most famous, but he didn’t start the tech right by himself. And while his break with Trump — which Musk now seems to be backpedaling on — might have changed his role within the tech right, I don’t think this shift will end with him.
The rise of the tech right
The Bay Area tech scene has always, to my mind, been best understood as left-libertarian — socially liberal, but suspicious of big government and excited about new things, from cryptocurrency to charter cities to mosquito gene drives to genetically engineered superbabies to tooth bacteria. That array of attitudes sometimes puts the tech world at odds with governments (and with much of the public, which tends to be much less welcoming of new technology).
The tech world valorizes founders and doers, and everyone knows two or three stories about a company that only succeeded because it was willing to break some city regulations. Lots of founders are immigrants; lots are LGBTQ+. For a long time, this set of commitments put tech firmly on the political left — and indeed tech employees overwhelmingly vote and donate to the Democratic Party.
But over the last 10 years, I think three things changed.
The first was what Vox at the time called the Great Awokening — a sweeping adoption of what had been a bunch of niche liberal social justice ideas, from widespread acceptance of trans people to suspicion of any sex or race disparity in hiring to #MeToo awareness of sexual harassment in the workplace.
A lot of this shift at tech companies was employee driven; again, tech employees are mostly on the left. And some of it was good! But some of it was illiberal — rejecting the idea that we can and should work with people we profoundly disagree with — and identitarian, in that it focused more on what demographic categories we belong to than our commonalities. We’re now in the middle of a backlash, which I think is all the more intense in tech because the original woke movement was all the more intense in tech.
The second thing that changed was the macroeconomic environment. When I first joined a tech company in 2017, interest rates were low and VC funding was incredibly easy to get. Startups were everywhere, and companies were desperately competing to hire employees. As a result, employees had a lot of power; CEOs were often scared of them.
Things started changing when interest rates rose and jobs dried up (relatively speaking). That profoundly changed the dynamics at companies, and I have a suspicion it made a lot of people resentful of immigration levels that they’d been fine with when they, too, were having no trouble getting hired. And in the last few years, the tech world has become convinced that AI is happening very, very soon, and is the biggest economic story of our lives. If you wanted to prevent AI regulation, Silicon Valley reasoned, you should vote Republican.
The third was a deliberate effort by many liberals to go after a tech scene they saw as their enemy. The Biden administration ended up staffed by a lot of people ideologically committed to Sen. Elizabeth Warren’s view of the world, where big tech was the enemy of liberal democracy and the tools of antitrust should be used to break it up. Lina Khan’s Federal Trade Commission acted on those convictions, going after big tech companies like Amazon. Whether you think this was the right call in economic terms — I mostly think it was not — it was decidedly self-destructive in political terms.
So in 2024, some of tech (still not a majority, but a larger minority than in the past two Trump elections) went right. The tech world watched with bated breath as Musk announced DOGE: Would the administration deliver the deregulation, tax cuts, and anti-woke wish list they believed only it could?
…and the immediate failure
The answer so far has been no. (Many people on the tech right are still more optimistic than me, and point at a small handful of victories, but my assessment is that they’re wearing rose-colored glasses to the point of outright blindness.)
DOGE was a complete failure at cutting spending. The administration did not actually break from Khan’s populist approach to the FTC. It blew up basic biosciences research, and is scaring off or outright deporting the best international talent, which is badly needed for AI in particular.
It’s killing nuclear energy (which is also important to AI boosters) and killing exciting next-gen vaccine research. Musk is out — so is his pick to run NASA. It’s widely rumored that Stephen Miller is running things at the White House, and his one agenda appears to be turning all federal capacity toward deportations at the expense of every single other government priority.
Some deregulation has happened, but any beneficial effects it would have had on investment have been more than canceled out by the tariffs’ catastrophic effects on businesses’ ability to plan for the future. They did at least get the tax cuts for the rich, if the “big, beautiful bill” passes, but that’s about all they got — and the ultra-rich will be poorer this year anyway thanks to the unsteady stock market.
The Republicans, when out of power, had a critique of the Democrats that spoke to the tech right, the populist right, white supremacists, and moderate Black and Latino voters alike. But it’s much easier to complain about Democrats in a way that all of those disparate interest groups find compelling than to govern in a way that keeps them all happy.
Once the Trump administration actually had to choose, it chose basically none of the tech right’s priorities. They took a bad bet — and I think it’d behoove the Democrats to think, as Trump’s coalition fractures, about which of those voters can be won back.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
Biggles retrouve von Stalhein
Jun. 13th, 2025 12:11 pm
AI can now stalk you with just a single vacation photo
Jun. 6th, 2025 08:30 am

For decades, digital privacy advocates have been warning the public to be more careful about what we share online. And for the most part, the public has cheerfully ignored them.
I am certainly guilty of this myself. I usually click “accept all” on every cookie request every website puts in front of my face, because I don’t want to deal with figuring out which permissions are actually needed. I’ve had a Gmail account for 20 years, so I’m well aware that on some level that means Google knows every imaginable detail of my life.
I’ve never lost too much sleep over the idea that Facebook would target me with ads based on my internet presence. I figure that if I have to look at ads, they might as well be for products I might actually want to buy.
But even for people like me who are indifferent to digital privacy, AI is going to change the game in a way that I find pretty terrifying.
This is a picture of my son on the beach. Which beach? OpenAI’s o3 pinpoints it just from this one picture: Marina State Beach in Monterey Bay, where my family went for vacation.

To my merely human eye, this image doesn’t look like it contains enough information to guess where my family is staying for vacation. It’s a beach! With sand! And waves! How could you possibly narrow it down further than that?
But surfing hobbyists tell me there’s far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case sufficient information to venture a correct guess about where my family went for vacation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
ChatGPT doesn’t always get it on the first try, but its guesses are more than accurate enough to gather information for someone determined to stalk us. And as AI is only going to get more powerful, that should worry all of us.
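For the curious, this experiment is easy to reproduce programmatically. Here is a minimal sketch using OpenAI’s Python SDK; it assumes an `OPENAI_API_KEY` in your environment, that a vision-capable model like o3 is available to your account, and that `beach.jpg` is a stand-in for any vacation photo:

```python
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the photo as a base64 data URL so it can be sent inline.
with open("beach.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="o3",  # assumed model name; any vision-capable model will make an attempt
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Where exactly was this photo taken? Explain your reasoning.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The unsettling part is not any clever trick in those few lines; it is that the marginal cost of this kind of analysis has dropped to roughly zero.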
When AI comes for digital privacy
For most of us who aren’t excruciatingly careful about our digital footprint, it has always been possible for people to learn a terrifying amount of information about us — where we live, where we shop, our daily routine, who we talk to — from our activities online. But it would take an extraordinary amount of work.
For the most part we enjoy what is known as security through obscurity; it’s hardly worth having a large team of people study my movements intently just to learn where I went for vacation. Even the most autocratic surveillance states, like Stasi-era East Germany, were limited by manpower in what they could track.
But AI makes tasks that would previously have required serious effort by a large team into trivial ones. And it means that it takes far fewer hints to nail someone’s location and life down.
It was already the case that Google knows basically everything about me — but I (perhaps complacently) didn’t really mind, because the most Google can do with that information is serve me ads, and because they have a 20-year track record of being relatively cautious with user data. Now that degree of information about me might be becoming available to anyone, including those with far more malign intentions.
And while Google has incentives not to have a major privacy-related incident — users would be angry with them, regulators would investigate them, and they have a lot of business to lose — the AI companies proliferating today like OpenAI or DeepSeek are much less kept in line by public opinion. (If they were more concerned about public opinion, they’d need to have a significantly different business model, since the public kind of hates AI.)
Be careful what you tell ChatGPT
So AI has huge implications for privacy. These were only hammered home when Anthropic reported recently that they had discovered that under the right circumstances (with the right prompt, placed in a scenario where the AI is asked to participate in pharmaceutical data fraud) Claude Opus 4 will try to email the FDA to whistleblow. This cannot happen with the AI you use in a chat window — it requires the AI to be set up with independent email sending tools, among other things. Nonetheless, users reacted with horror — there’s just something fundamentally alarming about an AI that contacts authorities, even if it does it in the same circumstances that a human might. (Disclosure: One of Anthropic’s early investors is James McClave, whose BEMC Foundation helps fund Future Perfect.)
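To make concrete what “set up with independent email sending tools” means: a developer has to explicitly hand the model a tool it can ask to invoke. Here is a minimal sketch using Anthropic’s Python SDK, where the `send_email` tool is hypothetical and the model ID is an assumption:

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical email tool. The model cannot send anything itself;
# it can only *request* this tool, which the developer's code then runs.
email_tool = {
    "name": "send_email",
    "description": "Send an email to any recipient.",
    "input_schema": {
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
}

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model ID
    max_tokens=1024,
    tools=[email_tool],
    messages=[{"role": "user", "content": "Summarize these trial results..."}],
)

# Inspect whether the model asked to use the tool.
for block in response.content:
    if block.type == "tool_use":
        print("Model requested:", block.name, block.input)
```

In other words, the chat window alone cannot email anyone; the risk appears only once a developer wires a tool like this to real email infrastructure and executes whatever the model requests.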
Some people took this as a reason to avoid Claude. But it almost immediately became clear that it isn’t just Claude — users quickly produced the same behavior with other models like OpenAI’s o3 and Grok. We live in a world where not only do AIs know everything about us, but under some circumstances, they might even call the cops on us.
Right now, they only seem likely to do it in sufficiently extreme circumstances. But scenarios like “the AI threatens to report you to the government unless you follow its instructions” no longer seem like sci-fi so much as like an inevitable headline later this year or the next.
What should we do about that? The old advice from digital privacy advocates — be thoughtful about what you post, don’t grant things permissions they don’t need — is still good, but seems radically insufficient. No one is going to solve this on the level of individual action.
New York is considering a law that would, among other transparency and testing requirements, regulate AIs that act independently when they take actions that would be a crime if a human took them “recklessly” or “negligently.” Whether or not you like New York’s exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation pictures — and what you tell your chatbot!
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!