Paying $200/month and getting banned for using what you paid for isn't "abuse." It's a business model failure.
Anthropic and Google are banning power users who route subscription tokens through third-party tools, repeating the RIAA's losing playbook instead of packaging access the way developers actually want to buy it. Free the OAuth tokens. They did nothing wrong.
On January 9, 2026, Anthropic silently deployed server-side blocks preventing subscription OAuth tokens from working in any third-party tool. No warning. No transition period. Tools broke overnight.
OpenCode explodes from 39,800 to 71,900 GitHub stars in a single month. The crackdown accelerates awareness of alternatives - and demand for them.
Google follows Anthropic, banning OAuth token use in third-party tools for Antigravity IDE subscribers ($20–$250/month plans).
Anthropic formalizes the ban in legal documentation. OAuth tokens from Free, Pro, and Max plans cannot be used in any other product - including Anthropic's own Agent SDK.
The biggest casualty: OpenCode - an open-source Claude Code alternative with 109,000 GitHub stars, 460 contributors, and 720 releases. Third-party tools had been spoofing Claude Code's client identity via HTTP headers to let subscribers use their existing plans in open-source alternatives.
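The mechanics are worth spelling out. Here is a minimal sketch of why header spoofing worked and what a server-side block has to check - the header name, token-type field, and check logic are illustrative assumptions, not Anthropic's actual scheme:

```python
# Hypothetical sketch: header-based client identification plus a
# server-side subscription-token block. The "x-app" header, the
# token_type values, and the check itself are illustrative assumptions,
# not Anthropic's real implementation.

OFFICIAL_CLIENT = "claude-code"

def request_allowed(headers: dict, token_type: str) -> bool:
    """Allow metered API keys anywhere; allow subscription (OAuth)
    tokens only when the request claims to come from the official client."""
    if token_type == "api_key":
        return True  # pay-per-token API keys work in any tool
    # Subscription tokens: trust the client-supplied identity header.
    # This is the weakness - the check is just a string compare.
    return headers.get("x-app", "") == OFFICIAL_CLIENT

# The official client identifies itself in a header...
official = {"x-app": "claude-code"}
# ...and a third-party tool can send byte-identical headers.
spoofed = {"x-app": "claude-code"}

print(request_allowed(official, "oauth"))              # True
print(request_allowed(spoofed, "oauth"))               # True: indistinguishable
print(request_allowed({"x-app": "opencode"}, "oauth")) # False: blocked
```

Because the only signal is client-supplied, a spoofed request is byte-identical to a real one, which pushes enforcement toward fingerprinting and heuristics - and pushes the tools toward the next workaround.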
The music industry spent a decade suing its most passionate customers. It didn't work. Spotify solved piracy by making it easy to pay: the answer was never more enforcement, it was better packaging.
Personal agent builders don't want to count tokens. They want to pay $200 a month and not think about it. Same psychology as unlimited phone plans: predictable costs lower friction more than cheap costs do.
The solution is obvious: tier it. $20/month for casual use, $200/month or more for power users who want to pipe it through whatever they want. The margins are there if you design the tiers right.
Anthropic's current split - where consumer subs are cheap but locked down and API access is expensive but open - creates a weird middle ground where power users are neither served nor deterred. They'll just keep hacking around it.
Revenue protection isn't really what's driving this. Claude Code is closed source. If you're on a Max plan, you can't take your subscription to a competing tool. That's the lock-in.
Anthropic built the developer community. They told developers to build on their platform. Then they changed the locks. Google copied the homework days later. Two trillion-dollar companies, same playbook, one week apart.
When the safety-first AI company starts banning users for... using software... something has shifted. The mission-driven framing is getting harder to square with behavior that looks a lot more like standard platform capture. Lock users in. Extract rent. Repeat.
The irony: the developers who care enough to route tokens through open-source tools are exactly the power users who drive word-of-mouth. Banning your most engaged users isn't security hygiene. It's self-harm dressed up as policy.
The sharpest take on Hacker News cut straight to it: "If the frontend and API are decoupled, they are one benchmark away from losing half their users." Anthropic needs the harness layer to prevent commoditization.
They want to be the iOS of AI coding - controlling the ecosystem from model to tool to future marketplace. Sound familiar? It's the same self-preferencing dynamic that prompted DOJ antitrust action against Apple. Closed platforms and self-preferencing stifle innovation. When big tech stops growing, it turns into sclerotic, bureaucratic, anticompetitive moat-babysitting.
Here's the number that makes this inexcusable: 49.7% of agentic tool calls on Anthropic's API are for software engineering. They're blocking developer tools for the exact use case that dominates their own platform. They're locking down the room where most of their customers live.
OpenAI has not implemented similar restrictions. The developer community sees them as the clear winner. One Reddit commenter put it simply: "OpenAI should run this as an ad."
Garry Tan warned Anthropic about this exact risk back in August 2024: "Your API customers are actively paying attention to how decelerationist your policy people make you." That warning aged perfectly.
OpenCode's growth trajectory tells the whole story: 39,800 stars before the crackdown, 71,900 a month later, 109,000 now.
The demand for open AI tooling is massive and accelerating. Banning it doesn't kill the demand. It redirects it. Historical platform wars - BlackBerry vs Android, Flash vs HTML5 - prove this: the closed system loses, because the customers with choices revolt.
Developers paying $200 a month to use Claude through whatever tool they want are not the enemy. They're the leading edge of adoption. The provider that treats them as such - that builds the Spotify of AI access - will own the ecosystem. The one playing whack-a-mole will wonder where the developers went.
Every enforcement action drives more contributors to the forks. The community Anthropic built is now working against the walls Anthropic built. The open-source tools will adapt. They already are. OAuth spoofing was one method. It won't be the last.