
Your OpenClaw runs 24/7.
Your inference bill shouldn't.
Always-on agents run hundreds of background tasks: research, email triage, reports. None of it needs real-time latency. All of it gets charged like it does.
Every skill run, every sub-task, every scheduled automation your OpenClaw agent kicks off is an inference call. At scale, those calls add up fast. Most providers charge real-time prices for workloads that have no business being real-time.
Doubleword gives your agent a cheaper AI inference tier with the same model quality for everything that can wait, so you can afford to run OpenClaw on more tasks. Dispatch jobs, decouple from the response, and consume results as they arrive. Up to 10x cheaper than real-time endpoints.
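The dispatch / decouple / consume pattern looks like this in miniature. The "backend" below is simulated in-process with a queue and a worker thread; a real client would submit jobs to the async API over HTTP and poll or stream results back:

```python
# Minimal sketch of dispatch / decouple / consume. The worker stands in
# for an async inference backend that drains jobs whenever spare capacity
# exists, with no real-time latency guarantee.
import queue
import threading

jobs = queue.Queue()     # dispatched work, decoupled from the caller
results = queue.Queue()  # completed results, consumed as they arrive

def worker():
    while True:
        job = jobs.get()
        if job is None:
            break
        results.put(f"summary of {job}")  # pretend inference happened
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# Dispatch: the agent fires off background tasks and moves on.
for task in ["email batch", "competitor research", "weekly report"]:
    jobs.put(task)

# Consume: results are picked up whenever they land.
jobs.join()
done = [results.get() for _ in range(3)]
```

The agent never blocks on any single task; it only pays attention when results are ready.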
A day with your agent
Your agent decides in real time: does this need to be instant, or can it run in the background? Over time, your agent is able to do 10x more with the same inference budget.
Email triage → 30 emails classified & summarised
Quick question → "What's our Q2 pipeline?"
Research brief → Competitor analysis across 12 sources
Tool call → Schedule a meeting with the design team
Doc processing → Extract key terms from 40-page contract
Follow-up chat → "Summarise what you found this morning"
Weekly deep report → Market trends synthesised from 8 feeds
Model evals → Run eval suite across 200 test cases
Set up in 3 steps
Install the Doubleword skill - your agent learns the async API and handles everything from there.
Install the skill
teaches your agent the full async API
npx skills add https://github.com/doublewordai/batch-skill
Add your API key
sign up at app.doubleword.ai, $10 free (~20M tokens)
DOUBLEWORD_API_KEY=sk-... → ~/.openclaw/.doubleword_creds
Tell your agent when to use it
one line in TOOLS.md and it routes automatically
Use Doubleword for: background research, cron jobs, email triage, doc processing
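An entry along these lines is all TOOLS.md needs; the exact wording is up to you, since the agent reads the file as plain guidance:

```markdown
## Doubleword (async inference)
Use Doubleword for anything that can wait: background research,
cron jobs, email triage, doc processing. Keep real-time endpoints
for interactive questions and tool calls.
```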
The economics
Qwen3.5-397B-A17B: same model, same quality. Price per 1M tokens (input + output combined).
Intelligence Score: 45 · 256k context
Same model quality at a fraction of the cost. Async infrastructure fills GPU capacity that real-time systems leave idle, and that efficiency becomes your saving. So your OpenClaw can do more with the same inference budget.
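Back-of-envelope math for the claim. Both prices are illustrative assumptions: the async rate is inferred from the $10 ≈ 20M-token free tier above, and the real-time rate assumes the full "up to 10x" gap:

```python
# Illustrative monthly cost comparison for a background-heavy agent.
ASYNC_PER_1M = 0.50      # $/1M tokens, implied by $10 ~ 20M free tokens
REALTIME_PER_1M = 5.00   # $/1M tokens, assuming the "up to 10x" gap
tokens_per_month = 100   # millions of tokens of background work

realtime_cost = tokens_per_month * REALTIME_PER_1M
async_cost = tokens_per_month * ASYNC_PER_1M
print(f"real-time: ${realtime_cost:.0f}  async: ${async_cost:.0f}  "
      f"saving: {realtime_cost / async_cost:.0f}x")
# → real-time: $500  async: $50  saving: 10x
```

At 100M background tokens a month, that is the difference between a bill you watch and a bill you ignore.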
