It’s important to avoid these regulation mistakes now so patients of the future don’t miss out on life-saving innovations made possible by AI.  Credit: Getty Images/SDI Productions

This guest essay reflects the views of Kevin Frazier, who is Texas Law's inaugural AI innovation and law fellow.

Imagine it is 2028. A St. Louis startup trains an artificial intelligence model that can spot pancreatic cancer six months earlier than the best radiologists, buying patients precious time that medicine has never been able to give them.

But the model never leaves the lab.

Why? Because a well-intentioned, technology-neutral state statute drafted in 2025 forces every "automated decision system" to undergo a one-size-fits-all bias audit, to be repeated annually, and to be performed only by outside experts who — three years in — still do not exist in sufficient numbers. As regulators scramble, the company’s venture funding dries up, the founders decamp to Singapore, and thousands of Americans are deprived of an innovation that would have saved their lives.

That grim vignette is fictional — so far. But it is the predictable destination of the seven "deadly sins" that already haunt our AI policy debates. Reactive politicians are at risk of passing laws that fly in the face of good policy for emerging technologies. Faced with fast-moving AI, the temptation is to act first and reflect later. Yet history tells us that bad tech laws ossify, spread, and strangle progress long after their drafters leave office. If we don't want bad laws to cost us that better future, lawmakers must steer clear of these sins.

1. Mistaking ‘tech-neutral’ for ‘future-proof.’ Imagine a statute that lumps diagnostic AIs in with chatbot toys. So broad a definition invites litigation and paralyzes AI development. Antidote: regulate by context, not by buzzword.

2. Legislating without an expiration date. The first draft of a law regulating emerging tech should never be the last word. Antidote: bake in sunset clauses that force lawmakers to reassess once data rolls in.

3. Skipping retrospective review. Passing a law is easy; measuring whether it works is hard. Antidote: mandate evidence audits — independent studies delivered to the legislature on a fixed schedule, coupled with automatic triggers for amendment when objectives are missed.

4. Exporting one state’s preferences to the nation. When a single market as large as California or New York sets rules for all AI training data, the other states lose their voice. Antidote: respect constitutional lanes. States should focus on local deployment and leave interstate questions to Congress.

5. Building regulatory castles on sand — no capacity, no credibility. Agencies cannot police AI with a dozen lawyers and programmers on the verge of retirement. Antidote: appropriate real money and real talent before — or at least alongside — new mandates.

6. Letting usual suspects dominate the microphone. If the only people in the room are professors, Beltway lobbyists and Bay Area founders, policy will skew toward their priors. Antidote: institutionalize broader participation through citizen advisory panels and notice-and-comment processes that actively seek out nonelite voices.

7. Confusing speed with progress. The greatest danger is not underregulation but freezing innovation before we understand its upside. Antidote: adopt a research-first posture. Fund test beds, pilots and more.

Together, these antidotes form a simple governing philosophy: regulate like a scientist, not like a fortuneteller. Start narrow. Measure relentlessly. Revise or repeal when evidence demands it. And always weigh the cost of forgone breakthroughs — lives unsaved, problems unsolved — against the speculative harms that dominate headlines.

The payoff? A legal environment where responsible innovators can move fast and fix things, where regulators are nimble rather than reactive, and where the public enjoys both the fruits of AI and meaningful protection from its risks. By exorcising the seven deadly sins of AI policy now, we can safeguard the public — and the next generation of world-changing ideas.

Kevin Frazier is Texas Law’s inaugural AI innovation and law fellow.
