Red State, Blue Playbook: How Effective Altruist Networks Are Testing AI Regulation in Red States
A coordinated effort by Effective Altruism-aligned organizations is advancing state-level AI safety legislation in deep-red Tennessee, backed by a network of donors and advocacy groups seeking bipartisan footholds for a national regulatory framework. The initiative reflects a growing trend in decentralized policymaking, where state legislatures become testing grounds for ideologically driven policy agendas.
Tennessee’s House of Representatives is poised to vote on HB 1898, a sweeping bill that would regulate advanced artificial intelligence tools under the banner of “public safety” and “child protection.” The legislation, which mirrors a failed attempt in Utah, reflects a broader strategy being tested by advocacy groups rooted in the progressive Effective Altruism (EA) movement, which aims to shape AI regulation state by state. The bill’s reach is ambitious: it targets frontier AI developers with more than $500 million in annual revenue and chatbot providers with more than one million active users, requiring public safety disclosures and crisis intervention measures aimed at minors.
Rep. Jason Zachary, the Republican Deputy Speaker and one of the bill’s sponsors, framed the legislation as urgently needed. “As a father and as Deputy Speaker, protecting Tennessee's children is one of my highest priorities,” he said. “Tennessee families shouldn’t have to wonder whether the AI systems their children use have basic safety measures in place. This legislation is common sense.” Polling cited by supporters shows that 88 percent of Tennessee voters back legislative safeguards for AI, while 94 percent express concern over child safety in particular. At the same time, new national polling shows that voters overwhelmingly want those safeguards delivered through a single national standard, not a patchwork of state laws.
Behind the scenes, however, HB 1898 is part of a larger national effort. Key drivers include the Secure AI Project and Encode AI, advocacy nonprofits linked to the EA ecosystem. Both organizations are part of a progressive, donor-funded apparatus championing AI safety and transparency initiatives. The Secure AI Project, led by former Open Philanthropy program officer Nick Beckstead, has registered lobbyists in Tennessee and maintains active campaigns in other states, including Missouri and Nebraska. Encode AI’s involvement, meanwhile, underscores the depth of EA alignment: despite claiming independence from corporate or foreign funding, Encode has accepted financing, according to public grant records, from the Future of Life Institute, another EA-associated organization with ties to anti-AI violence in California.
The strategic calculus is evident in Tennessee, where advocates leverage conservative credibility and grassroots polling to fast-track legislative support. “Right now, Tennessee families are telling us loud and clear that they're concerned about what AI is doing to their kids. When nine out of 10 voters say they want action, that's not something I need to think twice about,” said Sen. Ken Yager, a Republican co-sponsoring the legislation. Advocacy groups like Tennesseans for AI Safety have amplified the pressure through targeted messaging and a localized push framing the legislation as pro-family and pro-community.
But the Tennessee effort has met significant resistance at the national level. Recent Yale polling finds that more than 60 percent of voters say AI should be regulated by the federal government, while far fewer believe states should take the lead; just 39 percent of Republicans say regulation should be left to the states. A separate national survey from President Trump’s longtime pollster Tony Fabrizio asked voters how best to protect kids from harmful AI content and found that 56 percent prefer nationwide standards, compared with just 31 percent who favor state-level rules, with 13 percent undecided. Taken together, those surveys show a clear public preference for a single national standard on AI—especially when it comes to protecting children—rather than fifty different state regimes.
That sentiment aligns with the AI policy vision advanced by the Trump administration. In late 2025, President Donald Trump signed an executive order directing federal regulators to push back on what the order described as “burdensome and conflicting” state AI initiatives and to prepare a unified national framework instead. In March 2026, the White House followed up with a national AI legislative blueprint that calls for one set of clear federal rules governing AI companies operating across state lines. For Trump-world strategists, the choice is plain: Republicans can stand with voters and back a single national standard, or they can defend the state-by-state patchwork that Effective Altruists and their donors are trying to build.
Industry groups have also voiced concerns, arguing the proposed legislation risks entangling smaller developers and startups in expensive compliance regimes. NetChoice, a technology trade association, testified that the bill would “force large AI developers and chatbot providers to produce detailed safety plans, third-party audits, and public disclosures,” creating what it described as “sweeping and unnecessary” obligations. The Computer & Communications Industry Association raised similar concerns, warning that state-specific mandates like HB 1898 conflict with the broader goal of maintaining U.S. competitiveness under a uniform national framework.
Fiscal conservatives have explicitly linked HB 1898 to a clash with Trump’s national approach. “This legislation also falls contrary to the national AI framework that President Donald Trump announced on March 20, 2026, which would preempt state AI legislation like HB 1898,” said Tom Schatz, President of the Council for Citizens Against Government Waste. “Since AI companies offer their products and services across state lines as interstate commerce, such activity should be solely regulated by the federal government.” In practice, Schatz and other allies argue, that means one national standard for AI rules—including child-safety protections—rather than a proliferation of overlapping state laws.
Some in the national GOP warn that state legislators are being used as pawns by Effective Altruists. “Republicans at the state level are being bought and sold a bag of goods by radical leftists that hate us,” said one national conservative leader, arguing that EA-linked institutions and their donors are using GOP legislators as vehicles for a progressive regulatory agenda. Another Trump-aligned national policy leader was even more blunt: “If we have learned anything during the Trump years, it should be that the left will go to all ends to destroy and undermine us. Republican legislators are being utilized as useful idiots by left-wing billionaires who are trying to enact radical leftist policies through them, all while undermining Trump's AI agenda.” For these critics, Republican-led state AI bills are not just policy mistakes; they are strategic errors that undercut Trump’s national AI agenda and hand leverage to ideological opponents.
The stakes of the fight extend beyond Tennessee. Effective Altruism groups are employing state legislatures as policy laboratories, testing bills tailored to local political landscapes before scaling efforts. Tennessee represents a case study in this approach: targeting conservative states, recruiting credible Republican lawmakers, and appropriating concerns like child safety to galvanize bipartisan action. “This is modular policymaking,” said a Vanderbilt-affiliated legal scholar specializing in AI. “The same draft gets edited minimally and then redeployed for new states, usually focusing on emotional levers to get public buy-in. The playbook is clear.”
More than one thousand AI-related bills have been introduced across U.S. states since January 2026—a patchwork approach that legislators and tech industry groups alike have warned against. For donors in the EA network, the fight in Tennessee is about more than one state’s rules; it represents a deliberate strategy to build a legislative record capable of influencing federal outcomes. By contrast, industry groups and the Trump administration argue that fragmented policies may undercut U.S. leadership in global AI innovation and confuse parents, schools, and businesses about what rules actually apply.
As policymakers at all levels wrestle with the implications of AI regulation, the Tennessee case illustrates broader patterns in American governance. The state’s legislative halls may be the battleground, but the implications could reshape institutional norms nationwide, raising the question of whether decentralized AI policymaking produces innovation or chaos at a moment when voters themselves increasingly say they want one clear national standard instead of fifty competing ones.