Elon Musk’s xAI Sues Ex-Engineer Xuechen Li for Stealing Grok Code

The AI world just got hit with one of its biggest scandals yet. Elon Musk’s xAI has filed a lawsuit against a former engineer accused of stealing Grok’s code and taking it straight to rival OpenAI. Add in new reports that ChatGPT could now call the police on users, and you’ve got a perfect storm of ethics, power, and paranoia shaping the AI industry.

Key Takeaways

  • Ex-xAI engineer accused of stealing trade secrets and handing them to OpenAI.
  • Musk confirmed the lawsuit, fueling fears of CCP-linked tech theft.
  • OpenAI says ChatGPT chats may be reviewed and sent to police in extreme cases.
  • Rising concerns about AI “psychosis” and public safety.
  • Meta’s AI lab sees high-profile exits despite billion-dollar bets.

The Big Story: xAI vs. OpenAI

Elon Musk’s AI startup xAI has gone into legal battle mode. According to court filings, former engineer Xuechen Li allegedly stole confidential materials, described as Grok’s codebase, before selling roughly $7 million worth of xAI stock and joining rival OpenAI.

Court documents claim Li admitted—both in writing and verbally—that he misappropriated trade secrets, even trying to cover his tracks by hiding files on personal devices.

Musk himself confirmed the lawsuit, sparking outrage across social media. Many accounts labeled the scandal as yet another case of “CCP-linked tech theft.”

And Right Angle News Network went even further:

“If proven true, this would be one of the largest AI trade secret thefts in history—potentially changing the balance of power between Musk’s xAI and OpenAI.”

Meanwhile: ChatGPT Could Call the Police on You

While Musk battles alleged code theft, OpenAI itself is under fire for something else entirely. Reports confirm that ChatGPT conversations may be reviewed by human moderators and, in extreme cases involving imminent threats of harm, flagged to law enforcement.

This comes after researchers coined the term “AI psychosis,” arguing that when chatbots hallucinate, they can drag vulnerable users into delusional thinking along with them.

Imagine pouring your heart out to ChatGPT… only to have your chat escalated to a police report. That’s the nightmare scenario critics warn about.

DeepSeek Hype—or Disinfo?

Adding more intrigue to the week’s chaos: DeepSeek’s meteoric rise is under scrutiny. A recent investigation claims thousands of fake social media accounts amplified DeepSeek’s hype, recycling avatars and posting in lockstep.

Whether this was an intentional campaign by DeepSeek—or a third party boosting the buzz—is still unclear. But it raises hard questions about manipulated AI hype cycles and their impact on stock markets, which famously plunged during the “DeepSeek moment.”

Meta’s AI Lab Trouble

Meta, meanwhile, is struggling to keep its ambitious superintelligence lab stable. After showering researchers with massive signing bonuses, several have already left—some returning to OpenAI, others launching their own ventures.

The turbulence highlights a growing problem in AI’s talent war: money can buy star researchers, but not necessarily loyalty.

Why This Matters

The AI industry is moving faster than regulators, and these stories show why the stakes are so high:

  • Trade secret theft could shift global AI leadership.
  • ChatGPT’s “police pipeline” raises privacy and free speech concerns.
  • AI hype cycles risk destabilizing markets.
  • Even Big Tech can’t buy harmony in the race to superintelligence.

Conclusion

From Musk’s courtroom drama to OpenAI’s controversial police-monitoring policy, this week showed us the fragile, high-stakes world of AI power plays.

Whether you cheer for transparency, privacy, or innovation, one thing is certain: AI isn’t just changing technology—it’s changing trust.
