Escalating Anti-AI Radicalism: How AI Risk Narratives, Funding, and Power Are Converging
A Molotov cocktail and gunfire aimed at OpenAI CEO Sam Altman's home signal a troubling escalation of anti-AI radicalism. Fueled by existential fears and apocalyptic rhetoric, these acts raise urgent questions about the diffusion of extremism in technological opposition movements.
A man walked through San Francisco before dawn carrying a bottle filled with fuel, moving toward a house that, for years, had existed mostly as a symbol.
By the time the glass struck the gate outside Sam Altman’s home, the arguments that preceded it, about extinction, survival, and the future of artificial intelligence, had already been circulating across research papers, nonprofit campaigns, policy debates, and private online forums. The fire burned briefly. The ideas did not.
Police say the man, Daniel Moreno-Gama, had been immersed in a digital ecosystem organized around a single premise: that advanced AI systems could end human existence. In that framework, timelines compress and the cost of inaction expands. Less than an hour after the Molotov cocktail attack on Altman’s home, authorities say, Moreno-Gama appeared outside OpenAI’s headquarters and threatened to burn it down. Two days later, a second vehicle pulled up outside Altman’s home, and a passenger appeared to shoot at the property before fleeing.
Moreno-Gama’s online trail shows how that premise travels. Under the alias “Butlerian Jihadist,” he participated in the Discord server of PauseAI, an international campaign calling for a halt to advanced AI development. He wrote that humanity was “close to midnight” and that it was “time to actually act.” In a Substack essay, he described extinction from AI as “nearly certain” and framed developers as the immediate locus of risk. He now faces charges including attempted murder and arson.
PauseAI said in a public statement that it condemns violence and that Moreno-Gama held no formal role in the organization. The group’s messaging operates within a broader institutional network shaped by funding flows and overlapping personnel. According to its publicly disclosed funding page, PauseAI has received approximately €715,000 since its founding, with €422,961, about 59%, coming from the Future of Life Institute.
The Future of Life Institute has been a central node in advancing existential-risk arguments into policy and public discourse, including its 2023 open letter calling for a pause on large-scale AI development. It is part of a wider intellectual ecosystem influenced by effective altruism, a movement focused on long-term global risks, including artificial intelligence. That same framework has informed the founding philosophy of Anthropic, established by former OpenAI researchers with an explicit focus on AI safety. The institutional overlap does not indicate coordination, but it places advocacy groups, funders, and AI developers within a shared conceptual language centered on existential risk.
Neither Anthropic nor the Future of Life Institute responded to requests for comment.
Within that framework, rhetoric has at times moved beyond abstract modeling into direct calls for physical intervention. John Sherman, who briefly served as director of public engagement at the Center for AI Safety, said in recorded remarks, “Walk to the labs across the country and burn them down. Like, literally,” while specifying he was not advocating harm to individuals. In a post on X, he wrote, “AI is an intruder in your home. It's here to kill you, your kids, your pets, and everyone you know and love. You can hide under the covers and pretend it's not there. Or you can act now to defend your family.” In other remarks, he described a catastrophic “warning shot” killing “a few million people” as a potential catalyst for global response.
This framing has not been confined to a single political orientation. Commentary warning of civilizational risk from AI appears across ideological lines. Joe Allen, a writer associated with right-leaning media, has described AI development as a point where “dark signs converge” and has warned of humanity “summoning demons” through advanced systems. The language differs in tone and origin, but it converges on a shared structure: AI as an externalized, existential threat requiring urgent response.
Sherman’s roles connect multiple organizations. GuardRail Now shares leadership ties with the Center for AI Safety, and its board includes individuals who also hold roles in AI safety research and advocacy groups. One board member is also active in PauseAI, linking the organizations through personnel as well as ideology. These overlaps create pathways through which narratives can circulate across institutional boundaries, even when formal coordination is absent.
On April 10, the PauseAI Discord server carried a message announcing the activation of a “Warning Shot Protocol” in response to a newly announced AI system. The message said the world had crossed a “genuinely dangerous threshold” and linked to an article describing the model as posing a “credible risk of civilisational catastrophe.” Available evidence does not establish that this message preceded the attack on Altman’s home, but both appeared within the same rapidly escalating environment on the same day.
Executives building these systems have used similarly heightened language, though directed toward regulation and caution rather than physical action. Dario Amodei, co-founder of Anthropic, has described AI as potentially “the single most serious national security threat we’ve faced in a century, possibly ever,” and warned that humanity stands “on the brink of acquiring almost unimaginable power.” In essays and interviews, he has said progress is advancing “too head-spinningly fast,” that systems could transform society “within two years,” and that “all bets are off” within a decade if current trajectories continue.
Amodei has also framed risk not only as a property of the technology, but of the institutions building it. “I think the next tier of risk is actually AI companies themselves,” he wrote, warning that concentrated control over powerful systems introduces its own set of vulnerabilities. In discussing governance, he has said, “I’m deeply uncomfortable with these decisions being made by a few companies,” adding, when asked who authorized such control, “No one. Honestly, no one.” He has further warned that AI companies could “misuse their AI products to manipulate public opinion.”
He has repeatedly invoked a metaphor of a “country of geniuses in a datacenter,” describing a near future in which millions of AI systems outperform human experts and operate at superhuman speed. “If for some reason it chose to do so,” he said, “this country would have a fairly good shot at taking over the world.” He has also estimated a roughly 25% chance of catastrophic outcomes if development proceeds unchecked, and warned that AI could enable biological threats, disrupt large segments of white-collar employment, and concentrate wealth at unprecedented scale.
This language reflects a tension at the center of the system. The same organizations building advanced AI systems are also articulating the risks those systems may pose, while operating within market structures that reward rapid deployment. “There is so much money to be made with AI, literally trillions of dollars per year, that it is very difficult for human civilization to impose any restraints on it at all,” Amodei has said publicly.
Researchers who study extremism note that the structure of these arguments matters as much as their content. When risk is framed as imminent and total, traditional thresholds for action can shift. Anti-technology extremism, one analysis found, “possesses one remarkable quality: flexibility,” allowing it to bridge ideological divides and align actors who would otherwise remain separate.
That flexibility is emerging alongside rapid expansion of AI infrastructure. Data center investment rose sharply in 2025 and is projected to exceed $1 trillion in 2026, with major technology companies committing hundreds of billions of dollars to new capacity. In the United States, data centers now account for a growing share of electricity consumption, with higher concentrations in certain regions. Rising energy demand and water use have intensified local opposition in some communities.
Those tensions are beginning to surface in governance. In Indianapolis, Ron Gibson discovered bullet holes in his home days after supporting a $500 million data center rezoning project. A note left at the scene read “No Data Centers.” He said in a statement that violence “is never the answer.” Authorities are investigating.
At the federal level, proposals to pause or regulate AI development have gained traction, including legislation introduced by Bernie Sanders and Alexandria Ocasio-Cortez. At the same time, executive policy has designated AI infrastructure as critical, accelerating its buildout. The result is a regulatory landscape that both enables expansion and invites restriction.
Law enforcement agencies have not established a unified framework for classifying violence tied specifically to technological opposition. Incidents are prosecuted under existing statutes, and researchers say available data remains limited. The pattern, however, has drawn attention from analysts monitoring escalation dynamics across decentralized movements.
In a public post following the attack, Altman wrote that while debate over AI’s risks is necessary, “we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.”
The events in San Francisco do not originate from a single organization or ideology. They emerge from an overlapping system of funding, research, advocacy, and rapid technological change. The same language that defines long-term risk can, under certain conditions, be interpreted as immediate instruction. The mechanisms governing that shift remain unclear, even as the scale of both the technology and the response continues to grow.