Anthropic Launches Claude Opus 4.5 With a Price Shock and Skills That Outrun Engineers

Anthropic just took a big swing at the AI leaderboard. The company has launched Claude Opus 4.5, a major upgrade that slashes costs, boosts reasoning, and posts performance numbers that are already rattling its competitors.

And the timing couldn’t be more pointed. OpenAI and Google have spent the past month unleashing their own model upgrades. Anthropic’s answer? A model that’s faster, smarter, and aggressively priced to undercut the market.

A Price Cut That Changes the Game

The headline move is simple: Opus 4.5 is dramatically cheaper.
Input tokens now cost $5 per million, while output hits $25 per million — a huge drop from the previous Opus price tiers.

This isn’t a marketing gimmick. Lowering costs at this magnitude shifts the economics of building AI products. Developers can ship more features. Enterprises can run bigger workloads. Startups suddenly get room to experiment instead of rationing tokens.
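
To put the arithmetic in concrete terms, here is a minimal back-of-the-envelope cost sketch in Python. The monthly workload is hypothetical, and the prior-tier prices of $15 and $75 per million tokens are an assumption used for comparison, not figures from the announcement.

```python
# Back-of-the-envelope cost comparison at the new Opus 4.5 rates ($5 in / $25 out
# per million tokens). The prior-tier prices and the workload size below are
# illustrative assumptions, not announced figures.

def monthly_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Dollar cost of a workload, given per-million-token prices."""
    return (input_tokens / 1e6) * input_price + (output_tokens / 1e6) * output_price

# Hypothetical workload: 200M input tokens and 50M output tokens per month.
workload = dict(input_tokens=200_000_000, output_tokens=50_000_000)

new_tier = monthly_cost(**workload, input_price=5.0, output_price=25.0)
old_tier = monthly_cost(**workload, input_price=15.0, output_price=75.0)  # assumed prior tier

print(f"Opus 4.5:   ${new_tier:,.0f}/month")        # $2,250
print(f"Prior tier: ${old_tier:,.0f}/month")        # $6,750
print(f"Savings:    {1 - new_tier / old_tier:.0%}")  # 67%
```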

A Model That “Just Gets It”

But performance is the real hook.

Anthropic says Opus 4.5 shows a “qualitative jump” in reasoning. Internal testers describe it as a model that understands priorities instead of just following instructions. It filters noise, makes cleaner decisions, and handles multi-step tasks with more mature judgment.

On SWE-bench Verified, one of the toughest real-world engineering benchmarks, the model scored 80.9% — higher than OpenAI’s new Codex Max and Google’s Gemini 3 Pro. For a sector where every percentage point is hard-won, that gap matters.

Outperforming the Humans Who Built It

The most provocative claim is tied to Anthropic’s internal engineering exam — the same task given to real candidates applying to work there.

With a parallel thinking technique that lets the model try multiple approaches, Opus 4.5 outscored every human engineer who ever took the test.
Without the time limit, it tied the company’s highest-performing human candidate.

This doesn’t replace engineers. But it does raise a new question: What happens when AI begins surpassing the professionals meant to supervise it?

Efficiency Gains Are Quietly the Biggest Upgrade

Opus 4.5 doesn’t just reason better; it reasons more economically.

According to Anthropic, the model uses far fewer tokens to reach the same (or better) conclusions. In some tests, output dropped by more than 70% while maintaining performance.

For companies running heavy workloads, this isn’t a small optimization — it’s a budget-shifting improvement. As one early partner put it, the efficiency “compounds fast.”
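
As a rough illustration of how the two effects compound, the snippet below multiplies the cheaper output price by the smaller output volume. The threefold price drop is an assumption based on the prior Opus tier, and the 70% figure is the reduction cited above.

```python
# Rough compounding of the two effects on output spend. Both factors are
# approximations: the price ratio assumes a prior output rate of $75 per
# million tokens, and "more than 70%" fewer tokens is taken as exactly 70%.
price_factor = 25 / 75   # new output price relative to the assumed prior tier
token_factor = 1 - 0.70  # fraction of output tokens still emitted
print(f"Output spend falls to roughly {price_factor * token_factor:.0%} of before")  # ~10%
```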

Self-Improving Agents Make a Debut

Early customers noticed something new: Opus 4.5 can refine its own tools and approaches over multiple attempts. Anthropic calls these “self-improving agents.”

They don’t retrain themselves, but they do upgrade their tactics. In testing, agents reached peak performance within just a few iterations, even on tasks that previously required heavy oversight.

This behavior shows up in coding, documentation, analysis, presentations — anything with repeatable workflows.
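
The general pattern is easy to sketch, even though Anthropic has not published its implementation. The loop below is a hypothetical illustration only: the `run`, `score`, and `revise` callables stand in for whatever generates an attempt, grades it, and folds the feedback into the next try.

```python
# Minimal sketch of an iterate-and-refine loop. This is not Anthropic's
# implementation; `run`, `score`, and `revise` are hypothetical placeholders
# supplied by the caller purely for illustration.
from typing import Any, Callable, Tuple

def refine_loop(approach: Any,
                run: Callable[[Any], Any],
                score: Callable[[Any], Tuple[float, str]],
                revise: Callable[[Any, str], Any],
                max_iterations: int = 5,
                target: float = 0.95) -> Tuple[Any, float]:
    """Retry a task, revising the approach after each attempt based on feedback."""
    best_result, best_score = None, 0.0
    for _ in range(max_iterations):
        result = run(approach)                  # attempt the task with the current approach
        s, feedback = score(result)             # e.g. tests passed, rubric score, review notes
        if s > best_score:
            best_result, best_score = result, s
        if s >= target:
            break                               # good enough: stop early
        approach = revise(approach, feedback)   # incorporate feedback into the next attempt
    return best_result, best_score
```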

New Productivity Features Land Too

Anthropic paired the model launch with new product updates:

  • Infinite chats that auto-compress conversation history.
  • Claude for Excel, now with pivot tables and charts.
  • A new Chrome extension for Max users.
  • Better tool-calling and an upgraded Claude Code interface.

The AI Race Tightens

With Opus 4.5, Anthropic signals that top-tier AI capability is getting cheaper and more accessible — fast. The company’s revenue is climbing, competitors are sprinting, and the market is now watching more than just raw intelligence.

It’s watching who can deliver power at the right price.

Because in 2025, performance matters — but efficiency decides the winner.
