Daniel Immke

My favorite thing to do with AI doesn't really have a label

Today I came across an attention-grabbing headline on Hacker News: a New York Times op-ed by two criminologists describing a “true crime community” online where mass shooters earn saint status and attack footage gets archived and analyzed. The comment thread decried the article as surface-level and alarmist, pointing instead to a lengthy cultural-criticism essay arguing that the real root cause is America’s nihilism problem. I didn’t actually read either article. Instead, I pasted them into Claude and asked for detailed summaries.

After reading the summaries, I did something I’ve been doing more frequently lately. I started talking to Claude about it — my initial reaction, asking follow-up questions, pushing back when something didn’t track with my intuition. And over about an hour, through a conversation I didn’t plan and couldn’t have predicted, we ended up somewhere genuinely interesting. Not about mass shooters specifically, but about how complex systems produce consequences nobody designed.

I haven’t written a blog post in a few years. In the intervening time, AI has made massive progress. I use it every day now and love it. Just as I wrote about TikTok in 2022, I’ve wanted to write about AI as a technology that has transformed my life. But I resisted because I didn’t feel like I had anything interesting to say.

Now that I find myself having these kinds of meandering conversations with Claude more often, I thought it’d be a good angle to explore. People talk about AI in terms of what it can produce for you: code, images, spreadsheets. This isn’t that. It feels like something different, and worth writing about.


So here’s where the conversation actually went.

I told Claude the nihilism-as-motivation framing bothered me but I couldn’t quite say why. Then I figured it out mid-conversation: if “nothing matters” is really what these shooters believe, the logical response is apathy, not months of meticulous planning. A mass shooting takes enormous effort. That’s not the behavior of someone who thinks life is meaningless. That’s someone who desperately wants to matter.

That reframe opened things up. We talked about radicalization, and how the mechanism is basically the same one that’s always driven people into cults and extremist religious movements: a person in pain finds a community that offers a script for action. I asked how many of these shooters were involved in literally anything, any extracurricular at all. Claude found a study of 177 mass shooters identifying social isolation as the single most important indicator. Then I asked: what about kids in poor environments who feel the same way? Same vulnerability, different scripts. In a poor neighborhood, gangs are right there offering belonging and hierarchy. The suburban kid has none of that. Just a screen and a gore forum.

We landed on a term I found useful: social poverty. Not loneliness, which sounds like a feeling you can fix by reaching out more, but social poverty as a structural condition. The usual critiques of American life all point at it: suburbia, no third places, a phone as your primary social interface. You might have money and test scores, but the built environment is hostile to the kind of unstructured contact humans need. And nobody designed it that way on purpose. They were optimizing for privacy, safety, and square footage.

That’s when I noticed what we were actually talking about. Not mass shooters. Emergent properties of complex systems.


I brought this up with Claude. It’s tempting to look at these problems and conclude our society is uniquely broken, which is basically the nihilism essay’s argument. But what if this is just a common side effect of complexity in any society?

We looked at history. Medieval cities were dense, walkable, full of casual social contact. Also perfect incubators for plague. The printing press was optimized for spreading knowledge. It also enabled witch-hunting manuals and centuries of religious war. Same pattern every time: optimize for one set of values, get unintended consequences in a domain nobody was measuring.

Once I saw it that way, the doom dissolved a little. Obesity, declining birth rates, mass shootings: not evidence that civilization is uniquely sick. Evidence that we’re in a civilizational transition, like every other one, experiencing the emergent costs before we’ve figured out how to address them. None of these were designed. They emerged.

Then I asked Claude the inverse: what are some positive emergent properties of right now? Global supply chains were built for profit. Unplanned consequence: the fastest vaccine development in human history, because the infrastructure already existed. Smartphones are probably damaging teenage mental health. They also put a camera in every pocket, which is why police brutality went from open secret to national political issue. That wasn’t a planned feature of the iPhone.

We live in an age of actual miracles and we barely notice because the emergent costs are so visible and so loud. Both things are true at the same time. None of this was where I expected to end up when I pasted two links into a chat window.


This keeps happening to me. A couple weeks ago I had just finished the new Korean film No Other Choice and felt compelled to ask Claude why every breakout piece of Korean media (Squid Game, Parasite) seems to involve desperate poverty. That simple question turned into a conversation about chaebol wealth concentration, a housing deposit system that forces people into debt just to rent, and a generation of young people who’ve given up on marriage and home ownership. Then it got uncomfortable: South Korea is the best-case scenario for rapid capitalist development, and its own artists keep telling the world the system is crushing them. By the end I was arguing that modern Korean capitalism might be psychologically worse than feudalism for the people at the bottom, because at least a medieval peasant didn’t have to internalize their position as a personal failure. I did not start the evening expecting to get there.

I was watching the TV show For All Mankind and realized I didn’t actually understand why the US and USSR were so opposed. A basic question. Within an hour I was learning about Marx’s stages of history, how every attempt at a moneyless society ended in famine or genocide, and what the 20th century might have looked like without the Bolshevik Revolution. By the end I was asking Claude whether it could even imagine political systems that don’t already exist, and whether its trained biases were shaping everything I’d just learned. I went in curious about the Cold War and came out questioning the tool I was using to think about it. That kind of recursive, self-examining path doesn’t happen when you’re just reading.

These conversations share a structure. I come in curious about something, usually something I just encountered. I react to Claude’s initial response with my own half-formed instincts. Claude engages with those, sometimes agreeing, sometimes pushing back. My reactions to the pushback generate new ideas. By the end I’ve arrived somewhere I couldn’t have predicted at the start.

This was never marketed to me. Nobody told me to use AI this way. It emerged naturally from getting comfortable with the tool through more practical work: writing code, debugging, researching specific questions. At some point I just started talking to it. Not in a parasocial way. More like: I have a thought and I want to develop it, and this is the most responsive surface available to me right now.

The closest label I’ve seen is “thinking partner,” but that implies something structured. A goal, a decision to make, a problem to solve. What I’m describing is more like thinking out loud with no destination in mind. I didn’t sit down to figure out mass shooters. I sat down curious and followed the thread. The value isn’t in arriving at a conclusion. It’s in going somewhere you didn’t know existed.


Most conversations with friends, even smart friends, require social calibration. You’re managing the relationship while trying to think. You worry about monopolizing the conversation or sounding pretentious. The other person has their own tangents. They might not know anything about the topic, or they might know too much and have entrenched opinions. And honestly, most people aren’t up for a 90-minute unstructured conversation about the emergent properties of suburban planning on a Tuesday afternoon.

AI removes all that friction. There’s no social cost to following a weird tangent. No risk of boring the other person. No need to hedge your half-formed ideas. You can just think out loud and get back something useful to think against. The conversation can wander from mass shootings to the printing press to smartphones and nobody’s eyes glaze over.

And then there’s the part that matters most to me: you don’t have to do any work beforehand. I didn’t read those articles. I didn’t go in with a thesis. I had two links and some feelings about the summaries. The AI meets you where you are. With a person, especially a knowledgeable one, there’s an implicit expectation that you’ve done your homework before showing up. With AI, the homework happens during the conversation. You can be ignorant and curious at the same time and nobody judges you for it.


The obvious question is whether the thinking is actually good, or whether I’m just enjoying a mirror that makes me feel smart. I genuinely don’t know. Maybe the emergent-properties framing is shallow. Maybe an actual systems theorist would tear it apart. That’s fine. The point isn’t that every conversation produces a breakthrough. The point is that the thinking happened at all. I went from two article links to a framework for understanding unintended consequences across civilizations in about an hour. There’s a cultural instinct to dismiss this kind of thing as “pseudo-intellectual,” the sort of thing two stoned people cook up convinced they’ve made some profound discovery about the world. But I have really enjoyed my time thinking with Claude.

This post was ideated and partially written with Claude too. The conversation about mass shooters naturally turned into a conversation about the conversation itself, which turned into the idea for this post. I didn’t sit down and decide to write about AI as a thinking tool. The idea emerged from using it as one.

AI didn’t think for me. It couldn’t have. The direction came from my own instincts, my discomfort with the nihilism explanation, my tendency to reach for structural causes instead of individual pathology. What AI did was remove the friction that usually prevents those instincts from going anywhere. It gave me a surface to think against, fast enough to keep up with the pace of my own curiosity.

I don’t think this use case is going to stay obscure. It’s too useful once you’ve experienced it. But right now, if you told most people you spent an hour talking to an AI about mass violence and suburban planning and the printing press and ended up with a theory about emergent properties of civilization, they’d look at you funny. Maybe this post is just me trying to make that less weird.

Hey, my name is Daniel Immke. I'm a software engineer.

If you liked my writing, check out more posts or subscribe.