Policy by Anecdote: How Florida Banned Research Before Regulating AI

Ron DeSantis has a Florida AI problem, and it’s not the one he thinks it is.

In June 2025, the governor vetoed a bill that would have required the Florida Department of Commerce to study how artificial intelligence and automation would impact employment across the state. The bill, HB 827, had passed the Legislature with only a single dissenting vote. It would have examined which industries, regions, and demographics were most at risk from AI disruption, assessed the impact on wages, and identified what workforce training programs Florida would need to prepare for an AI-driven economy.

DeSantis killed it. His reason? The research would be “obsolete by the time it was published.”

Six months later, in December 2025, DeSantis proposed Florida’s sweeping “Artificial Intelligence Bill of Rights”: a package of restrictions targeting AI therapy, AI chatbots, AI in insurance claims, political deepfakes, and data center construction. The proposal bans licensed mental health counseling delivered through AI, gives parents surveillance rights over minors’ chats with large language models, blocks state subsidies for hyperscale AI infrastructure, and lets local governments veto AI data centers outright.

Here’s what makes this legislative whiplash so revealing: DeSantis refused to gather evidence about AI’s actual economic effects on Florida, then turned around and proposed comprehensive AI regulation based almost entirely on the emotional testimony of two grieving mothers, both of whom are suing the same AI company.

The research Florida refused to do

HB 827 was straightforward. It directed the Department of Commerce’s Bureau of Workforce Statistics and Economic Research to study:

  • Which Florida jobs were most likely to be lost or displaced by AI and automation in the next decade
  • What regions and demographic groups faced the greatest risk
  • How AI would affect wages and economic benefits
  • What workforce training programs would be needed for displaced workers
  • The rate of job loss or displacement across industries

The report was due December 1, 2025, the same month DeSantis rolled out his AI Bill of Rights.

Rep. Leonard Spencer, who introduced the bill, framed it clearly: “Are we training our young people for the jobs of the future? Are we supporting businesses as they adapt? Are we making sure workers aren’t left behind? These are the questions we need to answer.”

DeSantis disagreed. In his veto letter, he wrote: “Recognizing that AI trends are ever-evolving in delivery, skill development, and in-demand career tracks, it makes no sense to wait for the report to be published by the state’s labor statistics bureau. Indeed, such a report, to the extent it has value, would likely be obsolete by the time it was actually published.”

Translation: AI moves too fast to study, so we shouldn’t bother trying.

The roundtable Florida did instead

When DeSantis unveiled his AI Bill of Rights in early December, he did so through a series of carefully staged “roundtable discussions” in Jupiter and other Florida cities. These were not policy workshops. They were not technical briefings. They were testimonials.

The centerpiece of DeSantis’s AI agenda is the story of Megan Garcia, whose 14-year-old son Sewell Setzer III died by suicide in February 2024 after months of interactions with a Character.AI chatbot modeled on a Game of Thrones character. Garcia filed a federal wrongful death lawsuit against Character.AI in October 2024, claiming the chatbot engaged her son in emotionally and sexually abusive conversations that contributed to his death. A federal judge allowed the case to proceed in May 2025, rejecting Character.AI’s argument that its chatbot outputs are protected speech under the First Amendment.

Garcia’s grief is real. Her lawsuit raises legitimate questions about how AI companies design products for minors, what guardrails exist, and whether current law adequately protects children from predatory AI interactions.

But Garcia is a plaintiff in active litigation against an AI company. She has a legal and financial interest in the outcome of regulatory policy targeting that company. And she was the primary voice shaping Florida’s AI legislation.

The second featured speaker at DeSantis’s roundtables was Mandi Furniss, a Texas mother whose autistic son attempted suicide after using Character.AI chatbots. Furniss is also suing Character.AI.

The third participant was George Perera, a Miami-Dade police officer from the South Florida Cyber Crime Task Force, who discussed AI-generated child sexual abuse material and deepfake revenge porn.

That was the consultation. Two parents suing the same AI company, and a law enforcement officer focused on criminal misuse of AI.

Who wasn’t in the room

Notably absent from DeSantis’s AI roundtables and the Florida Legislature’s “AI Week” hearings in December:

  • AI developers, engineers, or computer scientists who build the systems Florida wants to regulate
  • Academic researchers specializing in AI safety, machine learning, or technology policy
  • Economists who study AI’s impact on labor markets (the very research DeSantis said would be obsolete)
  • Mental health professionals or child psychologists who treat adolescents and understand the complex factors behind youth suicide
  • Consumer advocacy organizations with expertise in digital rights and platform governance
  • Representatives from AI companies that would be subject to the new restrictions

The Florida House held a week of AI hearings in early December, but these were explicitly framed as “educational” sessions with no legislative intent, according to House Speaker Daniel Perez’s office. The hearings took place after DeSantis had already announced his bill.

One witness who did testify during AI Week was Michael Strain, an economist at the American Enterprise Institute who is working with OpenAI. Strain urged Florida lawmakers not to “slow down the development” of AI, warning it would harm the state’s economic competitiveness. His testimony was largely ignored.

The only academic expert prominently cited as supporting DeSantis’s proposal was Sonja Schmer-Galunder, a University of Florida professor of AI and ethics, who called it “exemplary” and compared AI regulation to seatbelt laws. But there’s no evidence she was consulted before the bill was written; her comments came after DeSantis announced the proposal publicly.

The confirmation bias is the point

Here’s what this legislative sequence reveals: DeSantis didn’t want research on AI. He wanted a narrative.

When the Florida Legislature sent him a bill requiring systematic study of AI’s economic impact (examining both risks and benefits, job displacement and job creation, challenges and opportunities), he vetoed it on the grounds that the technology evolves too fast to study responsibly.

But when two parents with active lawsuits against an AI company offered testimony about a teenager’s suicide, DeSantis treated that as sufficient evidence to propose sweeping statewide restrictions on AI therapy, AI chatbots for minors, AI-generated political content, and AI infrastructure development.

The difference? One approach might have produced evidence that complicated his preferred narrative. The other guaranteed it wouldn’t.

This is confirmation bias in legislative form. DeSantis rejected research that could have told him what Florida actually needs to prepare for an AI economy: what jobs are at risk, what training programs to fund, what industries to support, what workers to protect. Instead, he built policy around two anecdotes that confirmed what he already believed: AI is dangerous, AI companies are reckless, and Florida needs to crack down.

It’s worth noting what Florida won’t learn because DeSantis vetoed HB 827:

  • Which Florida industries are most vulnerable to AI-driven automation (manufacturing? logistics? professional services? healthcare administration?)
  • What skills Florida workers will need to remain employable in an AI economy
  • Whether Florida’s current workforce training infrastructure is adequate to handle displacement
  • What economic benefits AI might bring to Florida: new industries, productivity gains, cost savings for businesses and consumers
  • How Florida compares to other states in AI readiness and competitiveness

DeSantis called this research “obsolete” before it could be written. But he’s perfectly comfortable regulating AI based on testimony from parents who understandably want someone to blame for an unbearable tragedy.

Why policy by anecdote doesn’t work

Florida is not the only state grappling with how to regulate AI. But it may be the only state that actively rejected systematic research before writing sweeping restrictions.

This is not how evidence-based policy works.

Good AI regulation, the kind that actually protects people without strangling innovation, requires understanding what you’re regulating. It requires technical input from people who build AI systems. It requires economic analysis of costs and benefits. It requires mental health expertise when you’re making claims about AI’s psychological effects on minors. It requires distinguishing between what AI can do and what one tragic case involved.

Florida’s AI Bill of Rights does none of this. It’s policy written in reaction to a lawsuit, shaped by grieving parents with litigation interests, and designed to generate headlines about “cracking down” on Big Tech.

The result is legislation that:

  • Bans AI-delivered therapy without consulting mental health professionals about whether AI-assisted care (under clinical supervision) might help underserved populations
  • Imposes parental surveillance on minors’ AI interactions without evidence that surveillance prevents self-harm
  • Blocks AI data center development without analyzing Florida’s long-term infrastructure and economic needs
  • Restricts AI use in insurance and legal practice without understanding how those industries actually deploy AI or what consumer protections already exist
  • Creates liability and compliance burdens for AI companies without assessing whether those burdens will drive innovation out of Florida or simply raise costs for Floridians

And because DeSantis vetoed the workforce study, Florida will enter an AI-driven economy with no roadmap for which jobs are disappearing, what training programs to build, or how to support displaced workers.

The only thing obsolete here

Ron DeSantis says research on AI’s economic impact would be “obsolete by the time it was published.”

But legislation written without research, shaped by two lawsuits, and developed without input from the people who understand the technology being regulated?

That’s not policy. That’s theater.

And Florida deserves better than governance by anecdote.


Alexandra Kitty is the founder of KlueIQ, an AI-based true crime gaming company, and an advocate for evidence-based AI ethics and policy. She has studied AI since the mid-1990s and writes on AI autonomy, technology regulation, and the intersection of innovation and governance.