Four days ago, we published “Anthropic Banned Third-Party Tools. Here’s What It Means.” We documented how Anthropic blocked subscription OAuth tokens from working outside their official Claude Code CLI, then formalized the ban in a dedicated legal page.

In that post, we mentioned Google briefly: developers had been extracting OAuth tokens from Google’s Antigravity IDE and injecting them into third-party tools. Google characterized it as a ToS violation and issued account bans.

We didn’t have the full story then. Now we do. And Google handled it very differently from Anthropic.

What is Antigravity?

For context: Google Antigravity is Google’s AI-powered IDE, launched in November 2025 alongside Gemini 3. It is a modified fork of VS Code with an “agent-first” architecture that lets developers dispatch multiple AI agents to work on coding tasks simultaneously. Google offers it free in public preview, with paid tiers: Pro at $200/month and Ultra at $249/month for higher usage limits and premium models.

The paid tiers authenticate through OAuth tokens against Google’s Antigravity servers. Those tokens became the problem.

The bans: February 12

On February 12, 2026, an Ultra subscriber posted on Google’s official AI developer forum reporting a sudden restriction on their account that had persisted for three days, with no prior warnings.

The only change to their workflow: connecting Gemini models via OpenClaw’s OAuth integration. Their $249/month subscription was immediately restricted with no explanation.

A Google representative responded within the hour. The suggestion: use the in-app feedback tool to report the issue.

The developer’s reply:

“I am logged out of my account and I can’t even get into the app!!”

The person locked out of their account was told to use the account they were locked out of to report being locked out. This set the tone for everything that followed.

What followed

Over the next week, the forum thread documented a pattern of support failures.

Day 4. The original poster reported “total silence from support” and “zero acknowledgement through official channels.”

More users surfaced. One described circular support routing: Google Cloud Support sent them to Google One Support, which sent them back to Google Cloud Support. Another reported a prepaid annual subscription now completely inaccessible, and began exploring legal action. Others reported suspension within a day of integrating third-party tools. One user spent a week sending screenshots and recordings to Google One support with no resolution.

Support deleted its own acknowledgments. On February 20, a user reported that Google’s representative had posted a brief acknowledgment of “403 ToS issues” with a prioritization statement, then deleted the post within minutes. When the user followed up with a polite inquiry, their forum account was banned.

New accounts got banned too. Users reported that creating fresh Google accounts for Antigravity resulted in immediate restriction. The enforcement was not limited to accounts that had used third-party tools.

Billing continued. Multiple users reported being charged $200 to $250 per month for accounts they could not access. No automatic refunds were issued.

A second forum thread titled “$250/mo Ultra Subscriber Banned Without Warning” framed it as “a systemic failure in Google’s Developer Support.”

Google’s official response

Five days after the initial forum post and three weeks after the bans began, Google shared the results of an investigation:

“Use of your credentials within the third-party tool ‘open claw’ for testing purposes constitutes a violation of the Google Terms of Service.”

The response cited “use of Antigravity servers to power a non-Antigravity product” and invoked a “zero tolerance policy” that made reversal impossible.

No warning before the ban. No graduated response. No path to appeal. No refund for the weeks of subscription payments collected from suspended accounts. The suggestion from another forum user: create a different Google account and start over.

A Google employee on Hacker News (self-identified, not on the Antigravity team) later clarified that only Antigravity access was blocked, not entire Google accounts. Users’ Gmail, Drive, and other Google services remained accessible. This was a meaningful distinction that Google’s official communications had failed to make clearly, leaving users to assume the worst.

Meanwhile, on the OpenClaw GitHub, affected users confirmed the bans were widespread. OpenClaw creator Peter Steinberger called Google’s enforcement “pretty draconian” and announced he would remove Antigravity support from OpenClaw entirely, warning users to “be careful if they plug it in.” The maintainers ultimately closed the issue as “won’t fix,” noting that “some providers have TOS which may be violated when using your agents with those providers.”

Google vs. Anthropic

Both companies faced the same situation: developers using subscription OAuth tokens in third-party tools, bypassing the rate limits and optimizations that make flat-rate pricing viable. Both decided to stop it. Their approaches diverged significantly.

| | Anthropic | Google |
|---|---|---|
| Warning | None (server-side block) | None (instant suspension) |
| Enforcement | Error message in API response | Account suspended, no access |
| Account impact | API calls rejected; account intact | Full Antigravity access revoked |
| Billing | Subscription continued normally | Charged for suspended service |
| Support | Acknowledged issues, reversed accidental bans | Circular routing, deleted posts, banned users asking questions |
| Communication | Published legal page 6 weeks later | Forum reply after 3-week investigation |
| Reversal | N/A (accounts never banned) | “Zero tolerance,” no reversal |
| Official stance | “Nothing changes for normal account usage” | “We cannot reverse the suspension” |

Anthropic’s handling was not perfect. The January block hit without warning and broke workflows across the ecosystem. But Anthropic returned a clear error message, kept accounts intact, reversed accidental bans, and eventually published formal documentation.

Google’s approach was more severe in every category. As one Hacker News commenter put it: “Anthropic had the same problem with Claude Code third-party tools. They communicated first, flagged it, gave people time to adjust.”

The table makes the factual differences clear. The harder question is whether Google’s approach was proportionate, and whether it serves Google’s long-term interests in a market that is still taking shape.

The economics, and a question of proportionality

Using subscription OAuth tokens outside official tools violates the terms of service. That is not in dispute. Google has the right to enforce its terms. Every provider does.

The economics are identical to what we described in the Anthropic post. Google’s Antigravity optimizes prompt caching and inference routing for its official client. When requests come through the official IDE, Google can batch requests, reuse cached prefills, and control concurrency. Third-party tools bypass these optimizations, potentially increasing per-request serving costs by 5 to 10x. A $249/month Ultra subscription becomes deeply unprofitable when the tokens flow through tools that strip away those cost optimizations.
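The 5 to 10x figure can be sanity-checked with a back-of-envelope model. This is a hedged sketch, not Google's actual cost structure: the 95% cache hit rate, the 200K-token agent context, and the assumption that a cached input token costs the provider about a tenth of a fresh prefill are all illustrative numbers; the per-token prices are Gemini 3 Pro's list rates from the pricing table later in this post.

```python
def serving_cost_usd(input_tokens, output_tokens, cache_hit_rate,
                     in_price, out_price, cached_fraction=0.1):
    """Rough per-request cost with a share of input tokens served from cache.

    cached_fraction is an ASSUMPTION: cached input tokens are modeled as
    costing ~10% of a fresh prefill. All numbers here are illustrative.
    """
    cached = input_tokens * cache_hit_rate
    fresh = input_tokens - cached
    return (fresh * in_price + cached * in_price * cached_fraction
            + output_tokens * out_price) / 1_000_000

# A long agent turn: 200K tokens of context, 1K tokens of output,
# priced at Gemini 3 Pro's list rates ($2 in / $12 out per 1M tokens).
official = serving_cost_usd(200_000, 1_000, cache_hit_rate=0.95,
                            in_price=2.00, out_price=12.00)
third_party = serving_cost_usd(200_000, 1_000, cache_hit_rate=0.0,
                               in_price=2.00, out_price=12.00)

print(f"official client: ${official:.3f}/request")     # ~$0.070
print(f"third-party:     ${third_party:.3f}/request")  # ~$0.412
print(f"ratio: {third_party / official:.1f}x")         # ~5.9x
```

Under these invented but plausible parameters, the third-party path costs roughly 6x more to serve per request, which is exactly the range where flat-rate pricing stops working.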

The question is about proportionality and market timing.

The AI agent market is barely two years old. Pricing models have not stabilized. Usage patterns are still evolving. The developers who were suspended are paying subscribers, spending $200 to $250 per month, experimenting with a technology that Google itself markets as transformative. There were alternative responses available: warnings, rate limits, graduated enforcement, or working with the community on sustainable pricing. Google chose permanent suspension with no prior notice and no appeal.

One argument for the bans is that third-party tools bypass prompt caching optimizations, inflating serving costs. But prompt caching is not a trade secret. Anthropic, OpenAI, and Google itself all offer public prompt caching APIs. If serving cost is the core concern, there are collaborative alternatives: document the caching requirements, enforce them through the API, rate-limit clients that do not comply, or offer a third-party access tier at sustainable pricing. These are standard approaches in platform ecosystems. Permanent bans without warning are not.

There is a loose historical parallel. In January 2023, Twitter banned all third-party clients overnight. Tweetbot, Twitterrific, and other apps that had spent over a decade serving Twitter’s most engaged users were shut off without notice. The developers behind those apps did not return to the official client. Tweetbot’s creators pivoted to Mastodon and launched Ivory. Twitterrific’s creator Craig Hockenberry wrote: “Personally, I’m done.” Twitter survived, of course, and the developer exodus had many causes beyond the API ban. But the precedent is worth noting: restricting third-party access did not strengthen Twitter’s developer ecosystem. It shrank it.

The AI market is different from social media in important ways. But the risk is similar: in a nascent market where pricing has not settled, punitive enforcement against paying, engaged users may push some of them toward alternatives rather than back into the official product. And in a market where DeepSeek runs at $0.25 per million tokens and four of OpenRouter’s top five models are open source, the alternatives are more accessible than they have ever been.

The real cost of closed models

Both ban waves share a common assumption: that developers need access to a specific proprietary model badly enough to accept the provider’s terms, pricing, and enforcement style. That assumption is weakening fast.

Here is what the market looks like today, via OpenRouter:

| Model | Type | Input / Output (per 1M tokens) |
|---|---|---|
| Claude Opus 4.6 | Closed | $5.00 / $25.00 |
| Claude Sonnet 4 | Closed | $3.00 / $15.00 |
| GPT-5.2 | Closed | $2.00 / $14.00 |
| Gemini 3 Pro | Closed | $2.00 / $12.00 |
| Gemini 3 Flash | Closed | $0.50 / $3.00 |
| DeepSeek V3.2 | Open | $0.25 / $0.38 |
| Qwen 3.5 Plus | Open | $0.40 / $2.00 |
| GLM 4.7 | Open | $0.40 / $2.00 |
| Kimi K2.5 | Open | $0.50 / $2.00 |
| MiniMax M2.5 | Open | $0.28 / $1.00 |
| Grok 4.1 Fast | Open | $0.20 / $0.50 |
| Devstral 2 | Open | $0.05 / $0.22 |

DeepSeek V3.2 achieves roughly 90% of GPT-5’s coding performance at 1/50th the cost. MiniMax M2.1 scores 72.5% on SWE-bench Multilingual. Devstral 2 hits 73%+ on SWE-bench at five cents per million input tokens. Four of the five most-used models on OpenRouter are now open source.

OpenClaw users are already voting with their tokens. OpenRouter’s usage data for OpenClaw shows the top models by volume are Kimi K2.5 (867B tokens), Arcee Trinity Large (496B), Gemini 3 Flash (438B), Step 3.5 Flash (399B), and MiniMax M2.5 (365B). Four out of five are open source. The one closed model on the list, Gemini 3 Flash, is the cheapest option Google offers. Developers are not choosing open models out of ideology. They are choosing them because the performance is there and the price is right.

The pricing gap between closed and open models is not a rounding error. It is an order of magnitude. And with every ban wave, the risk side of the equation gets worse too: proprietary models carry the added cost of sudden access revocation.
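The order-of-magnitude claim is easy to check against the table's own list prices. A minimal sketch, assuming a hypothetical workload of 500M input and 50M output tokens per month (the workload size is invented for illustration; the per-token prices come from the table above):

```python
PRICES = {  # USD per 1M tokens (input, output), from the pricing table above
    "Claude Opus 4.6": (5.00, 25.00),
    "Gemini 3 Pro": (2.00, 12.00),
    "DeepSeek V3.2": (0.25, 0.38),
    "Devstral 2": (0.05, 0.22),
}

def monthly_cost(model, input_millions, output_millions):
    """Monthly bill for a workload, in USD, at the listed per-1M-token rates."""
    in_price, out_price = PRICES[model]
    return input_millions * in_price + output_millions * out_price

# Hypothetical workload: 500M input + 50M output tokens per month.
for model in PRICES:
    print(f"{model:16s} ${monthly_cost(model, 500, 50):>8,.2f}/mo")
```

At these rates, the same workload costs about $3,750/month on Claude Opus 4.6 and about $144/month on DeepSeek V3.2, a roughly 26x gap.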

What this means for OpenClaw users

Some OpenClaw users connected to Gemini models through Antigravity OAuth tokens. When Google detected requests coming from outside the official Antigravity client, those users’ accounts were suspended. Steinberger has since announced he is removing Antigravity support from OpenClaw. This is functionally the same situation we described in the Anthropic post: third-party tools using subscription OAuth tokens to access models at flat-rate pricing, bypassing the provider’s official client and its built-in cost controls.

Subscription OAuth from any provider is subsidized access running on borrowed time. Every provider will eventually close this path, because the economics demand it. Anthropic closed it in January. Google closed it in February. Others will follow.

If you were affected: your OpenClaw agent still works. The agent itself, your conversations, your data, your automations: none of it is tied to how you authenticate with a model provider. You have two paths forward.

Bring your own API keys. Connect directly to whichever provider gives you the best value. Based on the pricing table above, open-source models via providers like DeepSeek, Fireworks, or Together deliver frontier-level coding performance at a fraction of the cost. No subscription, no ban risk, pay only for what you use.
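In practice, switching providers is mostly a configuration change, since most of these providers expose OpenAI-compatible chat endpoints. A minimal sketch of what that looks like (the base URLs and model IDs below are illustrative placeholders, not verified values; check each provider's docs for the real ones):

```python
import json

def chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-style chat completion request as plain data.

    Actually sending it (with urllib, requests, or an SDK) is left out
    so the sketch stays self-contained and key-free.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Same agent, two providers: only the base URL, key, and model name change.
# (Placeholder endpoints and model IDs, for illustration only.)
req_a = chat_request("https://api.deepseek.com/v1", "KEY_A",
                     "deepseek-chat", "Refactor this function.")
req_b = chat_request("https://openrouter.ai/api/v1", "KEY_B",
                     "minimax/minimax-m2.5", "Refactor this function.")
```

Because the request shape is identical across providers, a ban or price change at one of them becomes a one-line config change rather than a rewrite.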

Use Fuel. Services like OpenRouter already solve multi-provider routing for general use. Fuel does the same thing but is designed specifically for OpenClaw agents: we select and update the best model for each task based on ongoing benchmarking, so you do not have to. It works as a pay-as-you-go skill you can install on any OpenClaw agent, including fully self-hosted ones.

Three things to take away

Two major AI providers have now restricted third-party tool access in six weeks:

| Provider | Date | Action |
|---|---|---|
| Anthropic | January 9, 2026 | Blocked subscription OAuth in third-party tools |
| Google | ~February 9, 2026 | Suspended Antigravity accounts using third-party OAuth |
| Google | February 17, 2026 | Confirmed “zero tolerance policy,” no reversal |
| Anthropic | February 19, 2026 | Published formal legal page codifying the ban |

If you are building on subscription OAuth from any provider, it is reasonable to assume similar enforcement is coming.

  1. Stop using subscription OAuth tokens in third-party tools. This applies to every provider, not just Anthropic and Google. Subscription OAuth is subsidized access, and every provider will protect it eventually. Switch to API keys or a managed service like Fuel before your account gets flagged.

  2. Consider the enforcement precedent. Google’s response included instant permanent suspensions, continued billing during suspension, and limited support recourse. If you build on any provider’s AI stack, factor in how that provider handles policy disputes, not just how their models perform.

  3. Open-source models have caught up, and they cannot be revoked. DeepSeek V3.2 at $0.25 per million input tokens delivers 90% of frontier performance. No vendor can pull the plug on an MIT-licensed model. Pair open models with a provider-agnostic setup, whether that is your own API keys or a routing service like Fuel, and the next ban wave becomes irrelevant.

The AI industry is following a predictable path: open early access to build ecosystems, then restrictions once the economics bite. The question for every developer and every team is simple: have you built on ground you control, or ground that can be pulled out without warning?