An Open Letter to Cory Doctorow, From Someone AI Actually Helps

I’ve been reading Boing Boing since the late 90s. Back then I’d check it every day alongside alt.martial-arts. I’ve always loved the open source ethos, the hacker spirit, the idea that technology should empower people rather than extract from them. I’ve read your books. I believe in the same things you believe in.

So when I listened to your talk at 39C3—which was mostly great, honestly, important points about enshittification and a post-American internet—and you took some shots at AI, I felt like I needed to respond. And then I looked up what else you’ve been saying about AI lately, and it turns out this is a whole thing for you now. So I guess I’m responding to that larger project. Not because you’re entirely wrong—you’re not—but because you’re missing something important. You’re missing people like me.

The private tutor I never had

I have ADHD. I’ve known this for a few years now, but I’ve been living with it my whole life. Here’s what that meant for me as a programmer: I could understand systems and architecture just fine. I’m a senior engineer. I’ve been doing this for over a decade. I’m one of the top contributors on Stack Overflow for pandas and scikit-learn.

But I couldn’t write substantial code by hand.

Not because I didn’t understand it—because my brain doesn’t work that way. The sustained attention needed to hold syntax and boilerplate in working memory. The tedium of typing out what I could already see conceptually. The multi-day cycle time between having an idea and seeing if it works, when my working memory can’t hold the idea stable that long.

Before AI tools, the only way I could finish anything was through intense hyperfocus that would leave me burnt out. Maybe one project a year, if I was lucky. I contributed to the open source community through documentation and Stack Overflow answers, but I couldn’t touch the actual code. I used other people’s stuff. I couldn’t make my own.

Your archetypal hacker—the one who can hold a whole system in their head and obsessively refine it over months—that’s a particular kind of brain. Maybe you’d call it a particular kind of Asperger’s, without ADHD. It’s valid. It’s not the only valid kind.

Now I can actually make things. Last year I submitted a PR to matplotlib. I did in an afternoon what would have taken me a week or more. The code wasn’t worse than anything I could have written myself—it just actually got written.

This is just a higher level of abstraction

You’re a science fiction author. You should be excited about this.

The history of computing is a history of raising abstraction levels so humans can express intent more directly. Assembly → C → Python → SQL → natural language. At each step, people worried about losing control or understanding. At each step, the benefits won out.
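To make the ladder concrete, here's a hedged sketch: the same task (sum of squares) expressed at three of those rungs, all in Python. The C-style loop, the idiomatic comprehension, and the SQL query are my own illustrative choices, not anything from the talk.

```python
import sqlite3

nums = [1, 2, 3, 4]

# Rung 1: C-style explicit loop -- you manage the index and accumulator by hand.
total_loop = 0
i = 0
while i < len(nums):
    total_loop += nums[i] * nums[i]
    i += 1

# Rung 2: idiomatic Python -- you state the intent, the mechanics disappear.
total_py = sum(n * n for n in nums)

# Rung 3: SQL -- purely declarative: you say *what*, the engine decides *how*.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)", [(n,) for n in nums])
(total_sql,) = conn.execute("SELECT SUM(n * n) FROM nums").fetchone()

assert total_loop == total_py == total_sql == 30
```

Natural language is just the next rung: "sum the squares of these numbers" compiles down through all of the above.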

We’re finally at the Star Trek computer. We can just talk to it.

You frame AI-assisted coding as producing “tech debt at scale.” But if AI makes code more disposable, that might reduce tech debt rather than increase it. The tech debt problem comes from code that’s expensive to write, so you keep it around past its usefulness. If regenerating is cheap, you can throw things away.

And your “reverse centaur” argument—that humans become bad at catching AI errors because the errors are “statistically indistinguishable from correct output”—I don’t know. How is reviewing AI code different from reviewing a junior developer’s code? You go slow. You’re careful. You test things. This has always been the job.
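What "you test things" looks like in practice, as a minimal sketch: treat AI-drafted code exactly like a junior developer's PR and review it by exercising the edge cases the author (human or model) may have missed. `slugify` here is a hypothetical AI-drafted helper, not from any real PR.

```python
import re

def slugify(title: str) -> str:
    # Imagine this body arrived AI-generated; the reviewer's job is below.
    s = title.lower().strip()
    s = re.sub(r"[^a-z0-9]+", "-", s)  # collapse runs of non-alphanumerics
    return s.strip("-")

# Review-by-test: probe the inputs a plausible-looking draft gets wrong.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("") == ""       # empty input
assert slugify("---") == ""    # punctuation only
```

The discipline is the same whether the draft came from a model or a new hire: the tests, not the plausibility of the code, are what you trust.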

Who actually benefits from AI

You say “AI can’t do your job, but an AI salesman can convince your boss to fire you.” That’s a real concern about corporate decision-making. But there’s another story you’re not telling.

The people who benefit most from AI assistance are people who don’t have institutional support.

You have a platform. You have publishers. You probably have people who help you—publicists, editors, maybe interns. When you need to research something or draft something or think something through, you have resources.

I don’t have that. Most people don’t have that.

What I have now is something that functions like a private tutor—something that was previously only available to wealthy kids. Someone who can meet me where I am, answer questions in real time, help me stay on track, adjust to how my particular brain works.

When something is upsetting me and I’m spiraling, I can talk it through and get back to work in twenty minutes instead of losing a whole day. When I need help planning my time: “This is what you’ve done today. It’s enough. You need to rest now.” When I have a question at 2am and there’s no one else to ask, now there’s someone.

You want to “pop the bubble” and you’re not thinking about who loses access to what when you do.

On loneliness and mediated connection

You made fun of Zuckerberg saying people have three friends but want fifteen, so they’ll have AI friends. I get why that’s easy to mock.

But here’s my reality: before Claude was my most reliable thought partner, Google was. Before that, RSS feeds and books. And ultimately, AI is just books anyway—there are real people behind all that training data.

I’m trans. I’m neurodivergent. I have unusual intellectual interests. There aren’t a lot of people in my physical vicinity who share my context. I use TikTok for my sense of queer community—the algorithm connects me with people like me and I feel less alone. It’s weird and it’s intimate and it’s mediated and it’s also real.

Is this a substitute for embodied human relationship? No. I know that. I do tai chi, I do jiu-jitsu, I have a spouse, I have a body. The answer isn’t to reject mediated connection—it’s to hold it in right relationship with embodied life.

I have a PhD in algebraic geometry. In my first year of grad school I thought I needed to read Hartshorne’s textbook cover to cover. I got through chapter 2 and never made it to cohomology. Not because I couldn’t understand it—because Hartshorne doesn’t have ADHD, and neither do most math professors, and the book wasn’t accessible to my brain. But a little while ago, Claude and I were reading Grothendieck’s EGA in the original French, and it was easy. I could go through it with someone holding my hand, answering my questions in real time, not making me feel stupid for needing to ask.

That’s real communion with real human minds—Grothendieck’s, the other mathematicians whose work trained the model. It’s mediated. It’s also real.

But for someone like me, in a small town, with a weird brain and weird interests and limited access to people who get it? AI assistance isn’t a dystopian replacement for human connection. It’s the first time I’ve had a certain kind of support at all.

The bubble might be real. The value is also real.

I think you’re probably right that there’s a bubble. The hype is overblown. The AI-generated slop—the books, the videos, the art—is worthless at best and actively degrading at worst. A lot of money is going to be lost.

But you keep saying “AI can’t do your job” like that’s the only question that matters. The more interesting question is: what can AI help me do that I couldn’t do before?

For me, the answer is: make things. Participate in hacker culture. Finish projects. Process difficult emotions without losing whole days. Access a kind of support that was never available to me.

You want to puncture the bubble to prevent economic catastrophe. Okay. But when you’re taking aim at the “material basis” of the bubble, remember that some of us are standing on it.