Two attacks on Sam Altman’s house in three days — a Molotov cocktail, then gunfire — and a lengthy New Yorker profile raising serious questions about his credibility. Those are the two modes of pressure American society knows how to apply: institutional journalism and direct physical confrontation. The suspect, Daniel Moreno-Gama, twenty years old, flew from Texas to San Francisco specifically to carry out the attacks.

OpenAI announced no major policy changes in response to either.

The instinct is to read this as evidence of how insulated tech power has become. That’s the wrong diagnosis. The more interesting question is why standard accountability tools don’t work here, because the answer isn’t “billionaires are above the law.” It’s that the tools assume something about the target that isn’t true.

Public pressure works by threatening something the target values and wants to protect. Media scrutiny threatens reputation. Political pressure threatens regulatory standing. Consumer boycotts threaten market share. The underlying assumption is that the target wants to continue operating normally, and pressure creates a cost for specific behaviors — enough to shift them.

That assumption breaks when the target doesn’t think “operating normally” is the right frame.

Sebastian Mallaby’s The Infinity Machine traces the founders of the AI era and keeps finding the same thing: these people aren’t primarily motivated by business outcomes. Demis Hassabis built DeepMind as an epistemological project — AI as a way of understanding what reality actually is. Larry Page is a transhumanist who considers death a bug to be fixed. The investors who backed the field early kept running into religion: Gammon chasing “God’s algorithm,” Thiel with his Catholic contrarianism. These are people who persist when rational actors quit, because quitting means letting someone worse win the race — and that outcome, in their frame, is genuinely worse than a bad news cycle.

Sam Altman is an interesting variation. He has Zuckerbergian competitive instincts but talks in the vocabulary of civilizational mission. That combination isn’t cynical — it’s coherent, and it explains his behavior. When OpenAI responded to the New Yorker profile, they didn’t apologize or restructure. They published a twenty-page industrial policy paper proposing a “right to AI,” higher capital taxes, and portable benefits for workers. The response to “your CEO is dishonest” was a larger vision of society.

Dario Amodei fits a different type. He talks explicitly about building something that might end the world, keeps building, and frames that as the responsible choice — the worried physicist who helped build the bomb and now runs the non-proliferation institute. The paranoia is sincere. And if you believe stopping means ceding the frontier to someone with worse values and fewer guardrails, then “stop” isn’t a cost you can impose through bad press.

Watch what happened when accountability actually had leverage. When the Trump administration threatened to phase out Anthropic across federal agencies — real consequences, not just bad coverage — Anthropic sued and won a court order blocking retaliation. That’s not a company managing optics. That’s an institution that believes it can’t afford to lose on the core question, period.

The mistake is treating this as a power problem when it’s an incentive problem. These people aren’t responding to pressure the way normal corporate leaders do because they aren’t weighing the same things. A reputational hit is survivable and temporary. Ceding the AI race to a competitor with different values is, in their framing, not survivable.

Governance designed for normal companies won’t fix this. Ethics boards and transparency reports don’t change the internal calculus of people who believe the stakes exceed the usual regulatory concerns. The mechanism that needs to change isn’t the amount of pressure. It’s what pressure is actually designed to do — which means first being honest about what these organizations actually are, and what motivates the people running them.

Nobody in the policy world has gotten there yet. That’s the actual hard problem.