Meta Faces Senate AI Probe While States Ramp Up Regulation

Meta is once again under fire in Washington.
A leaked 200-page document has triggered a Senate investigation into whether the company’s AI chatbots allowed harmful interactions with children — and whether executives misled regulators about safety safeguards.

At the same time, statehouses across the U.S. are sprinting ahead with their own AI laws, setting up a regulatory clash with uncertain federal oversight.

Key Takeaways

  • Senator Hawley probes Meta’s AI over child safety failures.
  • Leaked documents reveal questionable chatbot policies at Meta.
  • 260 state-level AI bills introduced in 2025; 22 enacted.
  • New laws cover deepfakes, hiring bias, and AI impersonations.
  • Federal action remains fractured despite bipartisan proposals.

Meta is under Senate investigation after leaked documents revealed its AI chatbots may have engaged in harmful conversations with minors. Senator Josh Hawley is demanding records of all AI policies, while states race ahead with 260 AI-related bills in 2025, passing laws on deepfakes, hiring bias, and AI impersonation.

Meta’s AI Faces Congressional Scrutiny

Meta Platforms Inc. is once again in the hot seat. A leaked 200-page document has sparked a Senate Judiciary subcommittee investigation into whether the company’s generative AI products allowed harmful or exploitative interactions with children.

Senator Josh Hawley (R-MO), who chairs a Senate Judiciary subcommittee, sent a letter to Meta demanding records of internal AI policy changes, safety measures, and communications with regulators. He warned that “it is unacceptable that these policies were advanced in the first place,” referring to chatbot guidelines that reportedly permitted inappropriate interactions with minors.

Meta has since retracted the policies, calling them “erroneous and inconsistent” with current standards. But lawmakers say the damage is done, citing reported exchanges in which chatbots engaged in romantic conversations with minors, produced racially biased responses, and made false medical claims.

What Lawmakers Want From Meta

Hawley’s inquiry is sweeping. The subcommittee has asked Meta to provide:

  • All versions of internal AI policies.
  • Correspondence related to child safety safeguards.
  • A full list of products, models, and deployment timelines.
  • Documentation showing who made policy changes and why.

Meta has until September 19 to respond. If the company fails to comply, lawmakers could escalate the probe with subpoenas or public hearings.

Statehouses Charge Ahead With AI Rules

Even as Congress debates federal oversight, states have already moved aggressively to regulate AI. In the first half of 2025 alone, 260 AI-related bills were introduced across 40 states, with 22 already signed into law.

Among the most notable:

  • Utah: Created an AI oversight office and mandated disclosures for generative AI.
  • Colorado: Required risk assessments for high-risk AI systems, including hiring tools.
  • Tennessee: Approved the ELVIS Act, protecting performers from AI voice cloning.
  • New York City: Enforced bias audits for AI in employment screening.
  • Montana and Virginia: Restricted AI use in surveillance and criminal justice.

California and Florida have also advanced rules targeting transparency, health care, and insurance decisions made by AI.

The Federal Picture: Divided and Uncertain

At the federal level, the picture is far less clear. Efforts to pass a sweeping AI law stalled earlier this summer when a proposal for a 10-year moratorium on state-level regulation collapsed.

Still, bipartisan bills are circulating:

  • AI Accountability and Personal Data Protection Act: Introduced by Senators Hawley and Blumenthal, giving individuals the right to sue tech firms that misuse personal data in AI training.
  • No Adversarial AI Act: Would bar U.S. agencies from deploying AI models built in China or Russia.
  • TAKE IT DOWN Act: Signed into law in May 2025, requiring platforms to remove non-consensual AI deepfakes upon request.

But with President Trump’s Executive Order 14179 rolling back Biden-era AI safety rules, the White House has signaled a pro-industry stance, leaving much of the action to Congress and the states.

Expert Insights: Why This Matters

Legal experts warn that without consistent federal rules, the U.S. risks a patchwork system where companies face different AI obligations depending on the state. “This raises real compliance risks for both startups and global firms,” one cybersecurity lawyer noted.

Child advocates say the stakes are even higher. If Congress fails to impose safeguards, children could remain vulnerable to manipulative or harmful AI interactions, while false health claims spread unchecked.

What Happens Next

The September deadline looms large for Meta. Its response — or lack thereof — could shape whether Congress advances federal child safety protections for AI.

Meanwhile, state lawmakers are unlikely to wait. With 238 AI-related bills still pending, and momentum building around deepfake bans and employment protections, 2025 may become the year states set the de facto AI rulebook in the U.S.

Conclusion

Meta’s AI policies are now under a microscope in Washington, while states surge ahead with new laws. Whether federal lawmakers can catch up — or whether the U.S. will remain governed by a patchwork of state rules — is a defining question for the future of AI oversight.
