AI Tools · 24 min read

Anthropic vs OpenAI 2026: Who's Actually Winning?

ChatGPT share fell from 69% to 45%. Claude Code hit $2.5B ARR. Sora is dead. The people who left OpenAI are winning — and the data proves it.


ComputeLeap Team


The Anthropic vs OpenAI rivalry didn't start with a product launch. It started with a revolt.

In 2021, Dario Amodei walked out of OpenAI with his sister Daniela and roughly 30 researchers. They didn't get fired. They weren't poached. They revolted — because they believed the company they'd helped build was heading somewhere dangerous.

Three years later, they were an interesting footnote: the safety nerds who left the rocket ship.

Five years later — right now, in March 2026 — they're on the Wall Street Journal's front page, Dario is comparing the Altman-Musk legal feud to the fight between Hitler and Stalin, ChatGPT's market share has collapsed from 69% to 45%, and Anthropic is on track to overtake OpenAI in revenue by mid-year.

The revolt won.

This isn't a neutral comparison piece. The data has a clear direction, and pretending otherwise would be dishonest. But this also isn't hagiography — Anthropic faces real risks that could reverse everything. We'll get to those.

First, let's talk about the week that made the outcome undeniable.

The Week Everything Flipped

The week of March 22–29, 2026 didn't start the reversal. But it made it impossible to ignore.

In seven days:

  • Anthropic's next-generation model leaked. Fortune reported that roughly 3,000 internal documents were found on a publicly accessible server, revealing a model called "Mythos" — described internally as "by far the most powerful AI model we've ever developed." A new capability tier above Opus. Reddit exploded: "Anthropic May Have Had An Architectural Breakthrough!" hit 866 upvotes and 302 comments on r/singularity.

"Anthropic May Have Had An Architectural Breakthrough!" — 866 upvotes, 302 comments on r/singularity

View thread on Reddit →

  • OpenAI killed Sora. The AI video platform that once broke the internet was burning $15 million per day in inference costs — $130 per 10-second clip, $5.4 billion annually. Disney's $1 billion content deal? Dead. CEO Fiji Simo pulled the plug as pre-IPO cleanup.

  • Claude Code crossed $2.5 billion ARR. Nine months from launch to $2.5 billion, per Forbes — the fastest B2B product ramp in AI history, now driving over half of Anthropic's enterprise revenue.

  • Karen Hao's exposé dropped a bomb. The investigative journalist's Diary of a CEO interview drew on 300+ interviews — including 90+ current and former OpenAI employees — for her book Empire of AI. The NDA wall cracked wide open during IPO prep. Timing? Chef's kiss.

  • The All-In Podcast called it "generational." Chamath Palihapitiya and the Besties — who'd been skeptical of Anthropic for two years — used the word that venture capitalists only break out when they're genuinely spooked: this run is generational.

  • Dario Amodei went scorched earth in the WSJ. But more on that later.

This isn't a bad quarter for OpenAI. This is a regime change.

Naval captured the broader moment in a single tweet that went mega-viral — 16K likes and 1.4K bookmarks:

Naval tweet: A lot of software is about to get a lot better, right before it becomes unnecessary — 16K likes, 726K views

View original post on X →

The question isn't whether AI will reshape the industry. It's which company is positioned to lead the reshaping — and right now, the data says it's the one that walked out of OpenAI five years ago.

The scoreboard, March 29, 2026: ChatGPT app market share down from 69% to 45%. Anthropic ARR at ~$19B (up from $9B at end of 2025). Claude Code at $2.5B ARR from zero in 9 months. OpenAI projecting a $14B net loss. Sora dead. Claude hit #1 on the U.S. App Store. Polymarket gives Anthropic a 99% chance of having the best AI model at end of March.

The Founding Schism — 2021

To understand why March 2026 matters, you need to understand why Anthropic exists at all.

Why They Left

Dario Amodei was VP of Research at OpenAI. Daniela Amodei was VP of Safety & Policy. They weren't outsiders critiquing from the sidelines — they were the people closest to the work and closest to the risk.

The disagreements weren't abstract. According to the WSJ's reporting, the breaking point came when OpenAI President Greg Brockman floated the idea of selling artificial general intelligence to governments — specifically the nuclear powers on the UN Security Council. Russia. China. The United States.

Dario considered this "tantamount to treason" and nearly quit on the spot. He demanded direct board reporting and said he couldn't work with Brockman. The relationship was unsalvageable.

What followed wasn't a quiet departure. Roughly 30 researchers left together — not leaked out over months, but in a coordinated exodus. This was a revolt, rooted in a specific philosophical conviction: that the path OpenAI was on would end badly, and that there was a better way to build transformative AI.

What They Built Instead

Anthropic's founding thesis was deceptively simple: you could build the most capable AI systems and the most responsible ones. These weren't opposing goals — in fact, the safety research (Constitutional AI, the Responsible Scaling Policy) would produce better models, not handicapped ones.

The early years looked like a bet against the market. While OpenAI was shipping consumer features, signing Microsoft deals, and racing to ChatGPT, Anthropic was publishing papers on AI alignment and carefully releasing Claude with guardrails that competitors mocked as overcautious.

The mockery aged poorly.

The Pattern Nobody Noticed

Here's what most analysis of Anthropic gets wrong: the company's contrarian bets weren't principled instead of strategic. They were principled and strategic. Refusing to ship features that compromised safety wasn't leaving money on the table — it was building the kind of trust that enterprise customers and developers pay a premium for.

But that thesis needed time to prove itself. And in 2024, with OpenAI holding 69% market share and a $157 billion valuation, the clock looked like it was running out.

Then OpenAI started making mistakes.

OpenAI's Unforced Errors

The most important thing to understand about OpenAI's decline is that nobody did this to them. Every wound was self-inflicted.

The Pentagon Bet That Backfired

In late February 2026, Anthropic walked away from a Pentagon contract for AI systems that could be used in autonomous weaponry and mass surveillance. Their position: the technology wasn't ready for fully autonomous military deployment, and they weren't willing to pretend otherwise.

The Trump administration's response was immediate and unprecedented: a federal blacklisting of Anthropic from all government agencies. First time a sitting president had targeted an AI lab for refusing a military contract.

OpenAI took the deal Anthropic refused. Within 48 hours.

The consumer backlash was the most expensive PR disaster in AI history.

TechCrunch, citing Sensor Tower data, reported the damage: ChatGPT uninstalls surged 295% day-over-day. One-star reviews spiked 775% in a single day. ChatGPT downloads dropped 13%. Meanwhile, Claude downloads jumped 51%, and the Claude app hit #1 on the U.S. App Store — leaping 20+ ranks in under a week.

"US judge says Pentagon's blacklisting of Anthropic looks like punishment for its views on AI safety" — 2,346 upvotes on r/technology

View thread on Reddit →

On Reddit, "Cancel your ChatGPT Plus, burn their compute on the way out, and switch to Claude" hit 29,903 upvotes on r/ChatGPT — OpenAI's own subreddit. The top comment: "Anthropic was founded by people who left OpenAI specifically because they saw the company abandoning its mission. Turns out they were right about every single concern they raised."

An estimated 1.5 million subscription cancellations followed. A federal judge later said the Pentagon's blacklisting of Anthropic "looks like punishment for its views on AI safety."

Pentagon domino effect: Anthropic refuses contract → Trump blacklists Anthropic → OpenAI takes the deal → 295% uninstall spike → 775% 1-star review surge → ~1.5M cancellations → Claude hits #1 App Store. Every step of the sequence was a foreseeable consequence. OpenAI walked into it anyway.

The Sora Money Pit

Sora was supposed to be OpenAI's moonshot — the platform that proved AI could create professional-quality video. Instead, it became the most expensive demo reel ever built.

The numbers are staggering. At peak usage, Sora was generating 11 million clips per day at a compute cost of roughly $130 per 10-second clip. That's $15 million per day in inference alone — $5.4 billion annualized. OpenAI's adjusted gross margin fell from 40% to 33% before the kill decision was made.
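If you want to sanity-check those figures, a back-of-envelope script does it in a few lines. The inputs are the numbers reported above, not audited financials, and one of them deserves a flag:

```python
# Back-of-envelope check on the reported Sora economics.
# Inputs are the figures reported above, not audited financials.

daily_inference_cost = 15_000_000   # $15M/day in inference
cost_per_clip = 130                 # $ per 10-second clip (reported)
clips_per_day = 11_000_000          # reported peak daily volume

annualized = daily_inference_cost * 365
print(f"Annualized burn: ${annualized / 1e9:.2f}B")  # ~$5.47B, matching the ~$5.4B figure

# Caveat: 11M clips/day at $130 each would be ~$1.43B/day, far above $15M/day,
# so the $130 presumably applied only to a top-quality tier, not the average clip.
print(f"Naive volume x price: ${clips_per_day * cost_per_clip / 1e9:.2f}B/day")
```

The daily and annualized figures reconcile cleanly; the per-clip and volume figures only make sense if $130 was the top-tier cost, which is worth keeping in mind when these numbers get quoted.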

Disney had signed a $1 billion deal granting Sora access to over 200 characters, including Mickey Mouse and Darth Vader. That deal is now dead. Fiji Simo killed Sora as pre-IPO cleanup, and honestly? It was the most rational decision OpenAI has made in a year.

But rationality doesn't reverse the narrative damage. Killing your flagship creative product weeks before an IPO tells the market something uncomfortable: we built something we couldn't afford to run.

Meanwhile, Sam Altman is focused on a different kind of survival: vertically integrating OpenAI's power supply. He's left the board of fusion energy startup Helion so the two companies can work together directly:

Sam Altman tweet announcing departure from Helion board as OpenAI explores working together with Helion at significant scale

View original post on X →

When your CEO leaves the board of a fusion energy startup to prepare for "working together at significant scale," you're not optimizing a product. You're building a power plant.

The Culture Cracking Open

Then there's the human cost. Karen Hao's Empire of AI, drawn from over 300 interviews including 90+ current and former OpenAI employees, paints a picture of an organization in internal turmoil — and it dropped at the worst possible time for OpenAI's IPO narrative.

The key pattern Hao documents: every major builder at OpenAI eventually left feeling used, and each one started a direct competitor. Dario Amodei founded Anthropic. Ilya Sutskever founded Safe Superintelligence Inc. Mira Murati founded Thinking Machines Lab. No other tech company has seen its entire original builder team walk out and compete head-on.

Hao's reporting also alleges that Altman tailored the AGI narrative depending on his audience — "cure cancer" for Congress, "best assistant ever" for consumers, "$100 billion revenue machine" for Microsoft. Whether that's savvy marketing or something more corrosive is a question each reader can answer for themselves.

"Karen Hao Whistleblower Exposed How Sam Altman Allegedly Manipulated Elon Musk" — 252 upvotes on r/ArtificialInteligence

View thread on Reddit →

What's not debatable is the timing: when 90+ employees are willing to talk to a journalist during IPO prep, the NDA wall isn't just cracking. It's crumbling.

Anthropic's Jiu-Jitsu

Anthropic didn't win by outspending OpenAI or out-hiring them. They won by turning every "no" into a competitive advantage. It's strategic jiu-jitsu — using the opponent's momentum against them.

Saying No as Strategy

Consider the pattern:

No to the Pentagon → earned public trust → Claude downloads surge 51% → #1 App Store → subscriber growth that would have cost billions to acquire through marketing.

No to erotic chatbots → maintained the safety brand → became the default choice for enterprise customers who need to explain their AI vendor to a compliance department.

No to shopping features and side quests → maintained developer focus → Claude Code dominance → $2.5B ARR from the highest-value customer segment in tech.

Each "no" looked like leaving money on the table at the time. Collectively, they built a moat that money can't replicate: trust. In AI, where every customer knows they're handing over sensitive data and critical workflows to a model they can't fully audit, trust isn't a nice-to-have. It's the product.

If you've been following our comparison of Claude, ChatGPT, and Gemini, this pattern has been building for months. The Pentagon moment just made it visible to everyone else.

The Claude Code Phenomenon

Claude Code is the most important product launch in AI since ChatGPT itself — and almost nobody outside the developer community noticed until the revenue numbers forced them to.

Launched in mid-2025, Claude Code reached $2.5 billion in annual run-rate revenue by March 2026, doubling since January. It serves over 300,000 business customers and now drives more than half of Anthropic's enterprise revenue. We've covered the implications of agentic coding in our deep dive on Claude Code's remote task capabilities — what's happening here is bigger than a product launch. It's a platform shift.

The product itself keeps accelerating: Auto Dream (memory consolidation modeled on human REM sleep), auto-fix in the cloud for CI pipelines, a hooks system for custom workflows, and iMessage integration that signals where agents are heading next. The creator ecosystem is exploding — 5+ tutorial videos per day, ecosystem velocity that exceeds even the early GPT-wrapper era.

Claude Code ARR trajectory: $0 at launch (mid-2025) → ~$1.2B (January 2026) → $2.5B (March 2026). Key milestones along the way: Auto Dream memory consolidation, cloud auto-fix for CI pipelines, hooks system for custom workflows, and iMessage integration. The fastest B2B product ramp in AI history — and it's still accelerating.
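To put that ramp in growth-rate terms, here's a quick sketch using the reported milestones (run-rates and dates are approximate):

```python
import math

# Implied growth rate from the reported Claude Code milestones.
# Run-rates and dates are the article's figures, rounded to whole months.
jan_arr = 1.2e9   # ~$1.2B ARR, January 2026
mar_arr = 2.5e9   # $2.5B ARR, March 2026
months = 2

mom_growth = (mar_arr / jan_arr) ** (1 / months) - 1
doubling_time = math.log(2) / math.log(1 + mom_growth)

print(f"Implied month-over-month growth: {mom_growth:.0%}")  # ~44%
print(f"Doubling time: {doubling_time:.1f} months")          # ~1.9 months
```

Forty-four percent monthly compounds to roughly 80x a year; growth like that can't hold for long, which is exactly why the capacity questions later in this piece matter.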

Matthew Berman's widely-watched model tier list placed Claude as S-tier and ChatGPT as A-tier. For developers, the hierarchy has quietly settled: Claude Code is the tool you reach for first. If you're building AI-powered development workflows, our comparison of the best AI coding assistants breaks down exactly why.

Mythos — The Next Punch

Then there's the leak that might not have been a leak.

In late March, Fortune reported that roughly 3,000 unsecured digital assets were discovered on a publicly accessible Anthropic server. Among them: documentation for a model called Mythos, described as "by far the most powerful AI model we've ever developed," with dramatically higher scores on software coding, academic reasoning, and cybersecurity benchmarks compared to the current flagship Opus 4.6.

The most intriguing detail: Mythos represents a new capability tier called "Capybara" — above Opus. Anthropic has never created a tier above Opus before. The planned release strategy? Cybersecurity organizations first, not the general public. The stated reason: Mythos is "currently far ahead of any other AI model in cyber capabilities" and could enable attacks that "far outpace the efforts of defenders."

Anthropic called the exposure "human error." Maybe it was. But the timing — right when the competitive narrative favors Anthropic, right when OpenAI is hemorrhaging trust — gave Anthropic the best of both worlds: free publicity for their most impressive model and the responsible AI narrative of only releasing it to defenders first.

Was the Mythos leak intentional? Probably not — 3,000 unsecured files is an embarrassingly large surface area for a controlled leak. But the rapid cleanup, immediate confirmation, and "cybersecurity-first release" narrative suggest Anthropic pivoted fast from "security incident" to "strategic positioning." Never let a crisis go to waste.

The Dario Interview That Changed the Tone

Around March 10, the Wall Street Journal published "The Decade-Long Feud Shaping the Future of AI" — and Dario Amodei stopped being diplomatic.

The quotes are extraordinary for a sitting CEO of a company valued at $380 billion:

He compared the legal battle between Sam Altman and Elon Musk to "the fight between Hitler and Stalin."

He dubbed Greg Brockman's $25 million donation to a pro-Trump super PAC "evil."

He likened OpenAI and other rivals to "tobacco companies knowingly hawking a harmful product."

"Altman is as evil as Stalin — Dario Amodei" — 664 upvotes, 198 comments on r/OpenAI

View thread on Reddit →

This is a CEO going on the record in the Wall Street Journal with language that would get most PR teams fired. The Hitler-Stalin comparison alone would normally be career-ending in corporate America.

But here's what's interesting: the market didn't punish it. If anything, it accelerated the narrative shift. Why?

Because Dario wasn't being reckless — he was being specific. The Hitler-Stalin comparison wasn't about character; it was about the dynamic of Musk and Altman's legal battle: two powerful figures fighting each other while the real stakes (AI governance) went unaddressed. The "tobacco companies" framing wasn't hyperbole; it was a reference to knowingly shipping products with downplayed risks.

And the "treason" accusation — that Brockman wanted to sell AGI to the UN Security Council nations including Russia and China — wasn't name-calling. It was a description of a specific proposal that, if accurate, represents the most reckless idea in tech history.

The r/singularity thread hit 591 upvotes. The r/OpenAI thread hit 664. These aren't massive numbers — but they're happening on OpenAI's home turf. The narrative has shifted from "Anthropic is the underdog" to "Anthropic is the frontrunner who's now willing to play offense."

The Numbers Don't Lie

Strip away the narrative. Strip away the Reddit threads and the WSJ quotes and the podcast takes. What do the raw numbers say?

Market share shift (Jan 2025 → Mar 2026): ChatGPT app share declined from 69% to 45%. Claude's share rose from ~5% to ~15-20%. The crossover trajectory is clear — and accelerating after the Pentagon backlash. Source: All-In E220, TechCrunch/Sensor Tower data.

| Metric | OpenAI (March 2026) | Anthropic (March 2026) |
| --- | --- | --- |
| App market share | 45% (↓ from 69%) | ~15-20% (↑ rapidly) |
| Annualized revenue | ~$25B | ~$19B (10× growth/year) |
| Projected 2026 net income | -$14B loss | Not disclosed (leaner cost structure) |
| Flagship product killed | Sora ($5.4B/yr burn) | None |
| Military contracts | Took Pentagon deal | Refused → blacklisted → won ruling |
| Developer sentiment | A-tier (Berman rankings) | S-tier |
| Employee morale | 90+ talked to Karen Hao | Stable |
| Latest model | GPT-5.4 | Opus 4.6 + Mythos (leaked) |
| Valuation | ~$340B (pre-IPO) | ~$380B |

Epoch AI's analysis projects the revenue crossover: since each company hit $1 billion in annualized revenue, Anthropic has grown at 10× per year versus OpenAI's 3.4×. If recent trends continue, Anthropic overtakes OpenAI in total revenue by mid-2026.

Revenue crossover projection: Anthropic growing at 10× per year vs OpenAI's 3.4× since each hit $1B ARR. Current: OpenAI ~$25B, Anthropic ~$19B. At these rates, Anthropic overtakes OpenAI in total revenue by mid-2026. Source: Epoch AI growth rate analysis.
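The crossover math is one line of algebra: find the time t at which 19 x 10^t catches 25 x 3.4^t. A sketch, assuming both growth rates hold constant (a strong assumption at this scale):

```python
import math

# When does Anthropic's run-rate pass OpenAI's if both keep growing at the
# Epoch-reported rates? Solve: anthropic_arr * 10^t = openai_arr * 3.4^t.
openai_arr, anthropic_arr = 25e9, 19e9        # ~March 2026 run-rates
openai_growth, anthropic_growth = 3.4, 10.0   # annual revenue multipliers

t_years = math.log(openai_arr / anthropic_arr) / math.log(anthropic_growth / openai_growth)
print(f"Crossover in ~{t_years * 12:.0f} months")  # ~3 months from March, i.e. mid-2026
```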

And this isn't just analyst projection — there's real money behind it, which is where the numbers get particularly interesting. The cost of reasoning models has been dropping dramatically, which favors the company with the more efficient architecture.

The Betting Markets Have Already Decided

Polymarket — the prediction market where traders put real money behind their forecasts — tells a story that leaves almost no room for ambiguity.

| Market | Result | Volume |
| --- | --- | --- |
| Best AI model at end of March 2026 | Anthropic: 100% | $16M |
| Best AI model end of April 2026 | Anthropic: 90% | $3M |
| Will Anthropic or OpenAI IPO first? | Anthropic: 69% | $50.6K |
| Anthropic $500B+ valuation? | 91% yes | $11K |
| Claude Mythos released by June 30? | 70% yes | $37.6K |
| Claude 5 released by June 30? | 59% yes | $3M (161 comments) |
| Anthropic Pentagon deal? | Only 19% yes | $43.2K |
| OpenAI has #1 model by June 30? | Only 29% | — |

Read those last two rows again. Only 19% of traders think Anthropic will take a Pentagon deal — the market has priced in that Anthropic will continue to say no. And only 29% think OpenAI will reclaim the top model spot by the end of June. With $16 million in volume on the March market alone, this isn't speculation from bored degens — it's institutional-grade conviction.
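If you're new to prediction markets, the conversion is simple: a YES share pays $1.00 if the event happens, so its price is the market's implied probability. A minimal sketch (market names are abbreviated from the table above):

```python
# Prediction-market prices read directly as implied probabilities:
# a YES share pays $1.00 on resolution, so a $0.29 share implies ~29%.
# Market names below are abbreviated from the table above.
markets = {
    "Anthropic best model, end of April": 0.90,
    "Anthropic IPOs before OpenAI": 0.69,
    "Anthropic takes a Pentagon deal": 0.19,
    "OpenAI has #1 model by June 30": 0.29,
}

for name, yes_price in markets.items():
    # Buying YES at this price profits (1 - price) if the event happens and
    # loses the price if it doesn't; break-even belief equals the price itself.
    print(f"{name}: implied probability {yes_price:.0%}")
```

This is also why volume matters: a thin market is cheap to move, but at $16 million, a mispriced market is a standing invitation for anyone who disagrees to take the other side.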

The betting markets have already decided. The question is whether the fundamentals agree. So far, they do.

Polymarket snapshot, March 29: $16M in volume says Anthropic has the best model. 91% say Anthropic hits $500B+ valuation. Only 29% think OpenAI reclaims #1 by June. When this much money is on the line, sentiment becomes signal.

What Could Go Wrong for Anthropic

The data supports Anthropic winning. But intellectual honesty requires asking: what could reverse this?

The answer isn't nothing. It's four specific things.

The Capacity Problem

Anthropic is growing faster than its infrastructure can handle — and users are noticing.

On r/ClaudeAI, an open letter titled "Want to free up compute during peak hours?" hit 1,052 upvotes — a rare display of user frustration from Anthropic's most loyal community. The complaint: throttled responses, degraded quality during peak usage, and rate limits that feel punitive for paying customers.

@Austen tweet: Why have LLMs all started to drop like 10% of all requests? Are they just all overwhelmed all the time?

View original post on X →

Growth this fast can break more than servers. It can break talent pipelines, engineering culture, and the careful quality control that earned Anthropic its reputation. The history of tech is littered with companies that grew faster than their infrastructure — and the ones that survived were the ones that throttled growth until quality caught up. The ones that didn't? Ask anyone who worked at early-growth Twitter.

The Mythos Expectations Trap

When you leak documentation calling your next model "by far the most powerful AI model we've ever developed" and create a new tier above your flagship product, you've set expectations that are nearly impossible to meet.

If Mythos delivers a genuine step-change — the kind of jump that Opus 4 represented over Claude 3 — Anthropic's lead becomes structural. But if Mythos feels like an incremental improvement with better marketing, the narrative reverses fast. Markets reward expectation beats, not absolute performance.

The 70% Polymarket odds on Mythos releasing by June 30 mean there's already a countdown clock ticking. Every week that passes without a release builds both anticipation and skepticism.

The Government Isn't Done

Anthropic won a court ruling, but they haven't won the war.

The federal judge said the Pentagon blacklisting "looks like punishment for its views on AI safety" — a meaningful legal signal. But the government has far more tools than lawsuits: executive orders, procurement requirements, export controls, national security designations. The next administration could flip the entire posture. And the current one has demonstrated willingness to punish companies that don't align with its AI agenda.

The Polymarket number here is telling: only 19% think Anthropic will take a Pentagon deal. That's the market pricing in continued refusal — which means continued government friction.

OpenAI Isn't Dead

Let's not write the obituary yet.

OpenAI still holds 45% app market share. They topped $25 billion in annualized revenue as of February. They just raised an additional $10 billion, bringing total funding past $120 billion. They're hiring aggressively — from 4,500 to 8,000 employees. And the SuperApp consolidation (merging Atlas browser, ChatGPT, and Codex into a single desktop application) is architecturally sound.

Most importantly: Tom's Guide reports that OpenAI is preparing a new model codenamed "Spud" — potentially GPT-6 — and that Sora's compute was freed specifically to train it. If Spud delivers a genuine capability leap, the Polymarket odds reset overnight. Killing Sora was a sacrifice, not a surrender — OpenAI bet that video AI was the wrong game and coding/reasoning is the right one.

It's also worth remembering that the broader AI capabilities narrative is more nuanced than the hype suggests. As Dwarkesh Patel documented in his exchange with Terence Tao, AI has solved 50 Erdős problems, but the overall success rate on mathematical research is just 1-2%:

Dwarkesh Patel tweet citing Terence Tao on AI math capabilities — 50 Erdős problems solved but only 1-2% overall success rate

View original post on X →

OpenAI's bet on reasoning models (Spud/GPT-6) may be exactly the right play if the next capability frontier is making that 1-2% rate dramatically higher.

The $14 billion projected loss looks alarming until you remember that OpenAI has $120B+ in backing and is targeting a $1 trillion IPO valuation in H2 2026. They can afford to lose money for a long time (at the current loss rate, that backing covers roughly eight years on paper) — the question is whether that money buys them back the trust they've burned.

The realistic bear case for Anthropic: Capacity constraints alienate power users → Mythos underwhelms relative to expectations → Government pressure escalates beyond the courts → OpenAI's "Spud" delivers a genuine GPT-6-level leap → IPO capital gives OpenAI an infrastructure advantage Anthropic can't match. Each of these alone is manageable. Together, they could reverse the narrative.

The Uncomfortable Question

Here's what keeps me thinking about this story long after the numbers are tallied.

If doing the right thing is also the optimal business strategy, what does that mean for every other company?

Anthropic refused the Pentagon contract — and got rewarded with the #1 App Store position and a subscriber wave that would have cost billions to acquire through paid marketing. They refused to ship erotic chatbots — and earned the enterprise trust that's driving $19 billion in ARR. They focused on developer tools instead of consumer gimmicks — and Claude Code became the fastest B2B product ramp in AI history.

Every contrarian bet was a bet on the proposition that responsible AI development produces better commercial outcomes. Not because the market rewards virtue (it usually doesn't), but because in AI specifically, trust is the scarcest resource. When you're asking enterprises to route their most sensitive data through your models, when you're asking developers to build their careers on your platform, when you're asking consumers to trust you with conversations they wouldn't have with another human — the company that demonstrably takes safety seriously has a structural advantage over the company that takes Pentagon contracts and ships adult content.

This isn't a feel-good story. It's a market story. And if Anthropic's thesis is correct — if principle and profit are genuinely aligned in AI — then every company in tech needs to reconsider the assumption that ethics is a cost center.

OpenAI's IPO will be the biggest test. It will either be the comeback story of the decade or the most expensive validation of Dario Amodei's original thesis: that the people who left were right all along.

The betting markets have picked their side. With $16 million in volume.

The question isn't really who's winning anymore. The question is what it means that this is how they won.


This article is part of our ongoing coverage of the AI industry landscape. For a direct model comparison, see our Claude vs ChatGPT vs Gemini breakdown. For a deeper look at how Claude Code is reshaping development workflows, read our analysis of Claude Code's remote task capabilities.


About ComputeLeap Team

The ComputeLeap editorial team covers AI tools, agents, and products — helping readers discover and use artificial intelligence to work smarter.
