GPT-5 Under Fire: AI Critic Says “Same Flaws, Bigger Hype”

OpenAI’s GPT-5 just dropped with promises of near-expert intelligence. But not everyone’s buying the hype—especially veteran AI skeptic Gary Marcus. Hours after launch, he accused OpenAI of rushing an incremental update while ignoring deep-rooted flaws in large language models.

Key Takeaways:

  • Gary Marcus calls GPT-5 “overdue, overhyped and underwhelming.”
  • Points to early hallucinations, flawed reasoning, and basic errors.
  • Says AI industry prioritizes hype and marketing over real progress.
  • Calls for neurosymbolic AI instead of endless scaling.

OpenAI’s latest flagship model, GPT-5, arrived with bold claims from CEO Sam Altman—promising conversations like chatting with “a legitimate PhD-level expert in anything.” But for Gary Marcus, one of the AI industry’s most persistent critics, the launch was more smoke than fire.

In a blunt blog post on his Substack, Marcus dismissed GPT-5 as “the latest incremental advance. And it felt rushed at that.” Instead of a breakthrough moment, he argued, the update delivered small refinements while leaving long-standing problems untouched.

Marcus didn’t just rely on opinion—he cited concrete examples. Within hours of release, GPT-5 was caught giving wrong answers to basic chess puzzles, offering flawed physics explanations during the launch livestream, and botching image analysis.

“A system that could have gone a week without the community finding boatloads of ridiculous errors and hallucinations would have genuinely impressed me,” Marcus wrote.

The Bigger Problem: It’s Not Just GPT-5

Marcus pointed to a recent study from Arizona State University that backs up his concerns. Researchers found that “chain-of-thought” reasoning—a highly marketed feature—collapses when faced with problems outside a model’s training data.

This “distribution shift” failure, Marcus says, explains why rival models like Grok and Gemini also stumble when pushed beyond their comfort zone. “It’s not an accident. That failing is principled,” he noted, stressing that scaling up existing architectures won’t solve it.

Taking Aim at AI Hype

Beyond technical shortcomings, Marcus slammed what he sees as a marketing-driven culture in AI. From cherry-picked demos to secrecy around training data, he accused the industry of selling a glossy narrative instead of building trustworthy systems.

“We have been fed a steady diet of bullshit for the last several years,” Marcus wrote, urging the community to focus on real research that addresses known limitations.

A Different Path Forward

Marcus’s solution? Neurosymbolic AI—models that blend statistical learning with explicit, human-like world models. He believes this hybrid approach is a clearer path to reliable reasoning, rather than endlessly scaling parameters in hopes of emergent intelligence.

While GPT-5 has earned praise for speed and adaptability, Marcus’s critique resonates with a growing audience skeptical of AI’s grand promises. For them, the launch isn’t proof of approaching AGI—it’s a reminder that the industry’s toughest problems remain unsolved.
