Attention, New Yorkers: The RAISE Act’s AI ‘Seatbelt’ Law Could Change Everything

On June 13, 2025, New York’s Legislature made history by approving the Responsible AI Safety and Education (RAISE) Act, a first-of-its-kind law demanding that the world’s most powerful artificial intelligence systems carry built‑in “seatbelts” in the form of transparent audit trails and safety checks. What happens when an AI model could, in theory, inflict over $1 billion in damage or kill or seriously injure 100 or more people? Under RAISE, companies like OpenAI, Google DeepMind and Anthropic will need to prove they’ve thought that far ahead, and put the proof on paper.

For years, major AI labs have made voluntary pledges (red‑team testing, model cards, internal reviews), but none were legally enforceable. RAISE changes that by zeroing in on “frontier” models: those trained with at least $100 million in compute resources. If a company’s latest system fits that bill, it must file a detailed safety playbook, undergo a yearly third‑party audit and alert New York authorities within 72 hours of any serious incident, such as theft of a model or unexpected, dangerous behavior.

Behind the scenes, Senator Andrew Gounardes and Assemblymember Alex Bores crafted RAISE to balance innovation with oversight. They borrowed the term “critical harm” from existential‑risk research: death or serious injury to 100 or more people, or losses exceeding $1 billion. That definition flips the usual AI‑ethics script, focusing on extreme, low‑probability events rather than everyday algorithmic bias. RAISE even protects insiders who blow the whistle on looming dangers, shielding them from retaliation.

Imagine an AI model experimenting with biological data and hinting at a novel pathogen. Today, most developers might patch the code quietly, or worse, ignore the worry. Under RAISE, they’d have to document that scenario months before launch, submit it to an independent auditor and keep regulators in the loop if anything goes awry. Fail to comply, and civil penalties run up to $10 million for a first violation and up to $30 million for repeat violations. Suddenly, safety isn’t just best practice; it’s the law.

Supporters hail RAISE as a breakthrough. Nobel‑winning AI pioneer Geoffrey Hinton praised it as “timely and necessary,” noting that frontier models are evolving faster than our rules can keep up. Advocacy groups point out that Washington’s federal framework is still months, if not years, away, and that Europe’s AI Act, while comprehensive, won’t fully address “frontier” threats. Because New York has the third‑largest state GDP in the country, proponents argue, these guardrails could become a de facto national standard.

Yet not everyone is convinced. The Business Software Alliance (BSA), representing big tech and software firms, warned that the bill was rushed and risks punishing companies for harms they can’t directly control. Its senior director for state advocacy called the audit regime “extensive and unworkable,” arguing that publishing detailed safety protocols might help bad actors reverse‑engineer vulnerabilities. Critics also fear that firms will simply stop offering their most advanced models in New York, creating a regulatory “cold spot” where companies choose withdrawal over compliance.

Industry voices have been muted but pointed. OpenAI and Anthropic note their existing White House‑backed commitments to red‑teaming and transparency, and Google says it “supports thoughtful regulation”—while historically lobbying against overlapping state laws. Some venture capitalists worry that a patchwork of state rules will slow startups, but others see an opportunity for “safety‑by‑design” consultancies and new compliance tools. In the months ahead, expect a scramble to develop off‑the‑shelf audit software and specialist law firms advertising “AI seatbelt” expertise.

What’s next? Governor Kathy Hochul can sign RAISE into law, veto it or send it back for tweaks. If she gives her approval, large AI labs will have 180 days to file their first safety dossiers with Albany—essentially a “report card” on how they’re preventing worst‑case disasters. That deadline will test how seriously Silicon Valley takes state‑level mandates. Pull out of New York? Few companies can afford to forsake the Empire State’s market.

For everyday New Yorkers—healthcare patients, commuters, small‑business owners—RAISE means a new layer of protection. When AI-powered diagnostic tools or traffic‑management systems make life‑and‑death calls, someone’s already audited the decision path. If something goes unexpectedly wrong, there’s a public record and real consequences for the lab responsible.

By embedding audit trails into AI’s DNA, New York is betting that accountability can keep pace with innovation. Whether RAISE becomes the nationwide blueprint or a state‑level experiment depends on one signature—and on the willingness of frontier AI labs to treat New York’s safety demands not as red tape, but as a seatbelt for us all.