Analysis · Big Tech · Feb 25, 2026 · 8 min read

Anthropic's War on Its Own Power Users

Paying $200/month and getting banned for using what you paid for isn't "abuse." It's a business model failure.

NIX // INTEL · Adapted from @garrytan's analysis · Feb 25, 2026
TL;DR

Anthropic and Google are banning power users who route subscription tokens through third-party tools, repeating the RIAA's losing playbook instead of packaging access the way developers actually want to buy it. Free the OAuth tokens. They did nothing wrong.

109K GitHub stars - OpenCode
20 min to ban a $200/mo user
$2,400 annual Max plan cost
0 warnings given
01 - The Crackdown

What Anthropic Actually Did

On January 9, 2026, Anthropic silently deployed server-side blocks preventing subscription OAuth tokens from working in any third-party tool. No warning. No transition period. Tools broke overnight.

Jan 9, 2026

Anthropic deploys silent server-side blocks on OAuth tokens. Third-party tools break immediately with zero notice.

Jan - Feb, 2026

OpenCode explodes from 39,800 to 71,900 GitHub stars in a single month. The crackdown accelerates awareness - and demand - for alternatives.

Feb 12, 2026

Google follows Anthropic, banning OAuth token use in third-party tools for Antigravity IDE subscribers ($20–$250/month plans).

Feb 19, 2026

Anthropic formalizes the ban in legal documentation. OAuth tokens from Free, Pro, and Max plans cannot be used in any other product - including Anthropic's own Agent SDK.

The biggest casualty: OpenCode - an open-source Claude Code alternative with 109,000 GitHub stars, 460 contributors, and 720 releases. Third-party tools had been spoofing Claude Code's client identity via HTTP headers to let subscribers use their existing plans in open-source alternatives.
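The mechanism is simple to picture. Here is a minimal sketch of what "spoofing client identity via HTTP headers" means in practice: the third-party tool presents the subscriber's OAuth token while claiming to be the first-party client. The header names and values below are invented for illustration; Anthropic's actual client fingerprinting is not public.

```python
def build_spoofed_headers(oauth_token: str) -> dict:
    """Build request headers that present a subscription OAuth token
    while impersonating a first-party client.

    Header names/values are hypothetical, not Anthropic's real ones.
    """
    return {
        # The subscriber's real token - this part is legitimate:
        "Authorization": f"Bearer {oauth_token}",
        # The spoofed part - a third-party tool claiming to be the
        # first-party CLI (invented identifiers):
        "User-Agent": "claude-code/1.0",
        "X-Client-Name": "claude-code",
    }

headers = build_spoofed_headers("example-token")
```

Anthropic's January block, as the article describes it, amounts to server-side checks that reject subscription tokens whose requests don't match the first-party client's fingerprint, which is why every third-party tool broke at once.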

Some $200/month Max plan subscribers were auto-banned within 20 minutes of triggering abuse filters. You're paying $2,400 a year and you get locked out before your first coffee gets cold. When even Anthropic's own Agent SDK isn't exempt from the ban, the message is clear: use our closed-source tool, or pay API rates.
02 - The Analogy

The Napster Playbook. Same Ending.

"This is the RIAA suing file-sharing users all over again." — Chrys Bader, co-founder of Rosebud & YC S08 alum

The music industry spent a decade suing its most passionate customers. It didn't work. Spotify solved piracy by making it easy to pay. The answer was never more enforcement. It was better packaging.

Personal agent builders don't want to count tokens. They want to pay $200 a month and not think about it. It's the same psychology as unlimited phone plans: predictable costs lower friction more than cheap costs do.

What the Fix Actually Looks Like

The solution is obvious: tier it. $20/month for casual use, $200/month or more for power users who want to pipe it through whatever they want. The margins are there if you design the tiers right.
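The proposed policy fits in a few lines. This sketch follows the article's tier names and prices; the gating logic itself is illustrative, not anything Anthropic has shipped.

```python
# Proposed tiering from the article: cheap tiers stay first-party-only,
# the power-user tier includes token portability. Tier names and prices
# come from the article; the policy logic is a hypothetical sketch.
TIERS = {
    "pro": {"price_usd": 20, "third_party_clients": False},
    "max": {"price_usd": 200, "third_party_clients": True},
}

def client_allowed(tier: str, is_first_party: bool) -> bool:
    """First-party clients work on every tier; third-party clients
    only on tiers sold with token portability."""
    if is_first_party:
        return True
    return TIERS[tier]["third_party_clients"]
```

Under this split, a $20 Pro subscriber routing tokens through OpenCode gets a clean "upgrade to Max" path instead of a ban, and the $200 tier prices in the heavier usage.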

Anthropic's current split - where consumer subs are cheap but locked down and API access is expensive but open - creates a weird middle ground where power users are neither served nor deterred. They'll just keep hacking around it.

That's not a prediction. That's already happening. OpenCode's star count doubled during the crackdown month. The enforcement created the demand it was trying to suppress.
03 - The Real Motive

The Lock-In Is the Point

Revenue protection isn't really what's driving this. Claude Code is closed source. If you're on a Max plan, you can't take your subscription to a competing tool. That's the lock-in.

Anthropic built the developer community. They told developers to build on their platform. Then they changed the locks. Google copied the homework a month later. Two AI giants, same playbook.

What This Actually Signals

When the safety-first AI company starts banning users for... using software... something has shifted. The mission-driven framing is getting harder to square with behavior that looks a lot more like standard platform capture. Lock users in. Extract rent. Repeat.

The irony: the developers who care enough to route tokens through open-source tools are exactly the power users who drive word-of-mouth. Banning your most engaged users isn't security hygiene. It's self-harm dressed up as policy.

Free the OAuth tokens. They did nothing wrong. The bug is in the business model, not the user behavior.
04 - The Antitrust Angle

Anthropic Wants to Be the iOS of AI

The sharpest take on Hacker News cut straight to it: "If the frontend and API are decoupled, they are one benchmark away from losing half their users." Anthropic needs the harness layer to prevent commoditization.

They want to be the iOS of AI coding - controlling the ecosystem from model to tool to future marketplace. Sound familiar? It's the same self-preferencing dynamic that prompted DOJ antitrust action against Apple. Closed platforms and self-preferencing stifle innovation. When big tech stops growing, it turns into sclerotic, bureaucratic, anticompetitive moat-babysitting.

Bad for users. Bad for builders. Bad for the ecosystem. And ultimately - when the tech giant ceases to have to respond to the market - bad for the tech giant too.

The Deep Irony

Here's the number that makes this inexcusable: 49.7% of agentic tool calls on Anthropic's API are for software engineering. They're blocking developer tools for the exact use case that dominates their own platform. They're locking down the room where most of their customers live.

49.7% of agentic API calls - software engineering
0 similar restrictions from OpenAI
05 - The Platform Wars

The Provider That Opens Up First Wins

OpenAI has not implemented similar restrictions. The developer community sees them as the clear winner. One Reddit commenter put it simply: "OpenAI should run this as an ad."

"Android vs. BlackBerry. The platform that lets developers build freely on top captures more long-term value than the one hoarding access behind walled gardens." — Chrys Bader, co-founder of Rosebud, YC S08

Garry Tan warned Anthropic about this exact risk back in August 2024: "Your API customers are actively paying attention to how decelerationist your policy people make you." That warning aged perfectly.

OpenCode's growth trajectory tells the whole story:

39.8K Stars - before crackdown
71.9K Stars - during crackdown
109K Stars - today

The demand for open AI tooling is massive and accelerating. Banning it doesn't kill the demand. It redirects it. Historical platform wars - BlackBerry vs Android, Flash vs HTML5 - prove this: the closed system loses, because the customers with choices revolt.

06 - Bottom Line

Who Wins This Fight?

Developers paying $200 a month to use Claude through whatever tool they want are not the enemy. They're the leading edge of adoption. The provider that treats them as such - that builds the Spotify of AI access - will own the ecosystem. The one playing whack-a-mole will wonder where the developers went.

Every enforcement action drives more contributors to the forks. The community Anthropic built is now working against the walls Anthropic built. The open-source tools will adapt. They already are. OAuth spoofing was one method. It won't be the last.

The play is simple: Sell a power-user tier that includes token portability. Price it right. Stop treating your best customers like threats. Build the Spotify of AI access before someone else does it for you - using your own model.

If this was useful - share it.