Agent Skill

Fuel

Optimized LLM inference for OpenClaw agents. Install the skill, add your key, and your agent runs on the best price/performance models available. No config to figure out.

Install in 10 seconds

npx skills add openclaw-rocks/skills --skill fuel

Works with Claude Code, Cursor, Windsurf, Gemini CLI, and 27+ agent platforms.

How it works

1. Install the skill
One command. The skill provides your agent with the optimized config and setup instructions.

2. Buy credits
Get a virtual key with your balance. Your agent can't overspend — budget limits are built in.

3. Tell your agent to set up Fuel
The skill handles the rest. Your agent applies the config and starts routing through Fuel.

What's in the config

Best models, always

We continuously evaluate and swap in the best price/performance models. You get semantic roles — we handle the rest.

Context pruning

Stops your agent from keeping every message in context forever. Saves 30-50% on input tokens.
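A minimal sketch of the pruning idea (not the proxy's actual algorithm): keep the system prompt plus the newest messages that fit under a token budget, estimated here at roughly four characters per token.

```python
def prune_context(messages, max_tokens=32_000):
    """Keep the system prompt plus the most recent messages that fit
    in a rough token budget (~4 characters per token estimate).
    Illustrative only; Fuel's real pruning logic is not shown here."""
    def est_tokens(msg):
        return max(1, len(msg["content"]) // 4)

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    kept, used = [], sum(est_tokens(m) for m in system)
    for msg in reversed(rest):  # walk newest-first
        cost = est_tokens(msg)
        if used + cost > max_tokens:
            break  # everything older than this is dropped
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```

Dropping the stale middle of a long conversation is where the bulk of the input-token savings comes from, since every retained message is re-sent on every call.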

Smart compaction

Distills context into memory files at 40K tokens. Your agent remembers what matters, forgets the noise.
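A rough sketch of the compaction step, using the 40K-token threshold above (a hypothetical `compact` helper, not Fuel's real implementation):

```python
def compact(messages, summarize, limit=40_000):
    """If the rough token estimate exceeds `limit`, distill the older
    half of the conversation into a single memory message.
    `summarize` is any callable that takes a list of messages and
    returns a summary string. Illustrative only."""
    est = sum(len(m["content"]) // 4 for m in messages)
    if est <= limit:
        return messages  # still under budget: leave context alone
    cut = len(messages) // 2
    memory = {"role": "system",
              "content": "Memory: " + summarize(messages[:cut])}
    return [memory] + messages[cut:]
```

The summary message replaces many old turns with one short one, so the agent keeps the gist of earlier work without paying for the full transcript on every call.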

Budget controls

Hard spending limits on your virtual key. No 3am surprise bills from a runaway agent loop.

Pricing

| Role | Input | Output |
|---|---|---|
| fuel/worker | ~$0.67 / M tokens | ~$2.02 / M tokens |
| fuel/reasoning | ~$0.72 / M tokens | ~$3.60 / M tokens |
| fuel/heartbeat | ~$0.06 / M tokens | ~$0.24 / M tokens |

Worker for routine coding, reasoning for complex tasks, heartbeat for pings. Costs are approximate — we optimize continuously. A typical 1-hour session costs ~$0.50.
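As a back-of-the-envelope check on those rates, a short script can estimate a session's cost from token counts. The token counts below are hypothetical; only the per-million rates come from the table above.

```python
# Per-million-token rates from the pricing table (input, output), USD.
RATES = {
    "fuel/worker":    (0.67, 2.02),
    "fuel/reasoning": (0.72, 3.60),
    "fuel/heartbeat": (0.06, 0.24),
}

def session_cost(usage):
    """usage: {role: (input_tokens, output_tokens)} -> estimated USD."""
    total = 0.0
    for role, (tok_in, tok_out) in usage.items():
        rate_in, rate_out = RATES[role]
        total += tok_in / 1e6 * rate_in + tok_out / 1e6 * rate_out
    return total

# Hypothetical mix for a busy hour: mostly worker, some reasoning, pings.
session_cost({
    "fuel/worker":    (500_000, 50_000),
    "fuel/reasoning": (50_000, 10_000),
    "fuel/heartbeat": (100_000, 5_000),
})  # → ~0.52 USD
```

Input tokens dominate agent workloads (the whole context is re-sent each call), which is why the pruning and compaction features above matter as much as the raw rates.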

Our picks

"Best models. No borders." We evaluate models purely on quality, cost, and reliability — regardless of where they come from. These picks are opinionated. We explain our reasoning, and you can see exactly what's running.

| Role | Model | Provider | Origin | Hosted in | Why |
|---|---|---|---|---|---|
| worker | **DeepSeek V3.1** | Fireworks | CN | US | 73.1% SWE-Bench at $0.67/$2.02, best cost/quality ratio, US-hosted |
| worker | DeepSeek V3 | DeepSeek | CN | CN | Same quality, ~50% cheaper ($0.34/$1.32), CN-hosted |
| worker | Devstral 2 (123B) | Mistral | EU | EU | 72% SWE-Bench at $0.48/$2.40, EU-sovereign, GDPR |
| worker | Llama 4 Maverick | Together AI | US | US | Cheapest worker at $0.32/$1.02, best US-origin MoE |
| reasoning | **Kimi K2.5** | Fireworks | CN | US | 76.8% SWE-Bench at $0.72/$3.60, highest reasoning score, US-hosted |
| reasoning | Kimi K2.5 Instruct | Together AI | CN | US | Same 76.8% SWE-Bench, failover provider at $1.20/$3.60 |
| reasoning | Magistral Medium | Mistral | EU | EU | EU-sovereign reasoning at $2.40/$6.00, GDPR |
| reasoning | gpt-oss-120b | Together AI | US | US | Cheapest reasoning at $0.18/$0.72, 62.4% SWE-Bench, Apache 2.0 |
| heartbeat | **gpt-oss-20b** | Fireworks | US | US | $0.06/$0.24, cheap and fast, Apache 2.0 |
| heartbeat | Llama 3.1 8B | Groq | US | US | Fastest TTFT, cheapest output at $0.06/$0.10 |
| heartbeat | Mistral Small 3.2 | OVHcloud | EU | EU | EU-sovereign at $0.11/$0.35, GDPR |

Default picks are shown in bold. Alternatives activate when you filter by region; every region filter has full role coverage.

When we swap models, your config stays the same. No action needed on your end.

This infrastructure is open source. See our GitHub for the proxy code, configuration reference, and self-hosting instructions.

GDPR & data sovereignty

Fuel supports fully GDPR-compliant inference. Filter by region and every request stays within EU-sovereign infrastructure — no CLOUD Act exposure.

| Filter | Coverage |
|---|---|
| ~eu-eu | EU-origin models, EU-hosted providers. Fully GDPR-compliant. Mistral (France) + OVHcloud (France). No US-headquartered companies in the data path. |
| ~all-eu | Any model origin, EU-hosted providers. Data stays in the EU regardless of model origin. |
| ~us-us | US-origin models, US-hosted providers. Llama 4 Maverick + gpt-oss-120b + Groq. |
| ~all-all | Default. Best globally, no restrictions. |
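If the filter is applied as a suffix on the role name (an assumption based on the filter labels above; check the configuration reference for the actual syntax), building a model identifier is a one-liner:

```python
def model_id(role, region_filter=None):
    """Build a Fuel model identifier.
    The `~filter` suffix syntax is an assumption inferred from the
    filter names above, not confirmed API behavior."""
    name = f"fuel/{role}"
    if region_filter:
        name += f"~{region_filter}"
    return name

model_id("worker")              # → "fuel/worker"
model_id("reasoning", "eu-eu")  # → "fuel/reasoning~eu-eu"
```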

EU providers

Our EU stack uses exclusively EU-headquartered companies: Mistral (Paris), OVHcloud (Roubaix), and Scaleway (Paris). All offer signed Data Processing Agreements, EU-only data residency, and zero exposure to the US CLOUD Act. No prompts or responses leave the EEA.

What we don't store

The Fuel proxy is stateless. We do not log, store, or retain prompts or responses. Request data transits through the proxy to the upstream provider and back. Budget tracking uses only metadata (token counts, costs) — never content.
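The metadata-only budget tracking can be sketched as a simple ledger (illustrative, not the proxy's actual code): every entry holds token counts and cost, and there is no field for content at all.

```python
import time

class BudgetLedger:
    """Tracks spend from metadata only: timestamps, token counts, cost.
    Prompt and response content is never stored. A sketch of the idea,
    not Fuel's real implementation."""

    def __init__(self, limit_usd):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0
        self.entries = []

    def record(self, tokens_in, tokens_out, cost_usd):
        # Only numbers go in; there is no content field to leak.
        self.entries.append({
            "ts": time.time(),
            "tokens_in": tokens_in,
            "tokens_out": tokens_out,
            "cost_usd": cost_usd,
        })
        self.spent_usd += cost_usd

    def allow(self):
        """False once the hard limit is reached (requests then get a 402)."""
        return self.spent_usd < self.limit_usd
```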

Buy credits

Choose an amount. You'll get a virtual key with your balance.

Powered by Stripe. Credits don't expire. Top up anytime.

Usage estimates are approximate and vary based on conversation length, model mix, and task complexity. We refine these as we learn from real OpenClaw usage patterns.

Common questions

Do I need an OpenClaw.rocks hosting plan to use Fuel?

No. Fuel works with any OpenClaw agent — self-hosted, on OpenClaw.rocks, or anywhere else. The endpoint is OpenAI-compatible, so it works with most AI tools.

Why not just pick a provider myself?

You can. Fuel saves you from researching models, comparing providers, tuning config, and keeping up with new releases. We continuously swap in the best models — your config stays the same.

What happens when my credits run out?

API calls return a clear 402 error. Your agent won't break — it just can't make new LLM calls until you top up. Buy more credits and you're back in seconds.
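An agent loop can treat that 402 as a pause signal rather than a failure. A minimal sketch (hypothetical helper, with status handling simplified):

```python
def on_llm_response(status_code):
    """Decide what the agent loop should do with a Fuel response.
    Sketch only; real loops will handle more cases than this."""
    if status_code == 402:
        return "pause"   # out of credits: stop calling, ask the user to top up
    if status_code in (429, 500, 502, 503):
        return "retry"   # transient: back off and try again
    if 200 <= status_code < 300:
        return "ok"
    return "fail"        # anything else: surface the error
```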

Can I use this with Claude Code or Cursor?

Yes. The skill installs on any agent platform that supports the skills standard. The Fuel endpoint itself is OpenAI-compatible, so any tool with custom base URL support can use it.
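For tools that read the standard OpenAI environment variables, pointing them at an OpenAI-compatible endpoint is typically two settings. The URL below is a placeholder, not Fuel's real endpoint; the skill's setup output has the actual values.

```shell
# Placeholder endpoint and key — substitute the values from the skill's setup output.
export OPENAI_BASE_URL="https://fuel.example/v1"
export OPENAI_API_KEY="<your Fuel virtual key>"
```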

Is Fuel GDPR compliant?

Yes. Use the ~eu-eu region filter to route all inference through EU-sovereign providers (Mistral, OVHcloud) with signed DPAs and EU-only data residency. The proxy itself is stateless — we don't log or store prompts or responses. No US-headquartered companies in the data path means no CLOUD Act exposure.

Why do some models come from China?

Because they're the best. DeepSeek V3 and Kimi K2.5 lead on cost/quality benchmarks for coding. We pick purely on merit. If you need data sovereignty, use a region filter — every role has EU and US alternatives with full coverage.

Your agent deserves good fuel

npx skills add openclaw-rocks/skills --skill fuel