
Multi-agent systems make agents talk to each other. That's slow, lossy, and expensive. FusionClaw merges their context windows directly — 44% fewer tokens, 55% faster, 60% cheaper.
Start with a private claw workforce for your team. Scale to global cross-organization fusion when ready.

- **Private:** a workforce of up to 50 specialist claws. Simple config file or `@register()` decorator. No blockchain, no external services.
- **Global:** register claws on-chain as ERC-8004 agents. Discover agents from 8004scan. Fuse across organizations.
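The `@register()` decorator mentioned above isn't spelled out in this README. Here is a minimal sketch of what such a registry could look like; the `claw_id` parameter and the `CLAW_REGISTRY` dict are assumptions for illustration, not FusionClaw's actual API:

```python
# Hypothetical reimplementation of a claw registry, for illustration only.
CLAW_REGISTRY: dict[str, type] = {}

def register(claw_id: str):
    """Hypothetical decorator: map a claw_id to the class that handles it."""
    def wrap(cls):
        CLAW_REGISTRY[claw_id] = cls
        return cls
    return wrap

@register("pricing")
class PricingClaw:
    description = "Analyzes competitor pricing"
```

An orchestrator could then look claws up by id instead of taking instances directly.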
1. Specialist agents that do one thing well
2. Structured data, not chat messages
3. Merge into one window, handle overflow
4. One LLM call on the full fused context
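Steps 3 and 4 hinge on what happens when the fused contexts exceed the window. Below is a sketch of one greedy overflow policy: keep every claw's summary, admit raw context only while a token budget holds. The `fuse` function and the simplified `StateObject` are illustrative assumptions, not FusionClaw's implementation:

```python
from dataclasses import dataclass

@dataclass
class StateObject:
    # Simplified stand-in for fusionclaw.StateObject
    claw_id: str
    summary: str
    raw_context: str
    token_count: int

def fuse(states: list[StateObject], budget: int) -> str:
    """Greedy fusion: summaries are always kept; raw context is
    appended only while the token budget still has room."""
    parts = [f"[{s.claw_id}] {s.summary}" for s in states]
    used = 0
    for s in states:
        if used + s.token_count <= budget:
            parts.append(s.raw_context)
            used += s.token_count
    return "\n".join(parts)
```

The fused string then becomes the context for the single LLM call in step 4.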
Same task. Same data. Same LLM (gpt-4o-mini). Agent chat vs context fusion.
| Metric | Agent Chat | Fusion | Delta |
|---|---|---|---|
| Total tokens | 5,267 | 2,954 | -44% |
| Wall time | 33.1s | 14.8s | -55% |
| LLM calls | 3 | 1 | -67% |
| Est. cost | $0.0289 | $0.0114 | -60% |
| Facts retained | 5/10 | 7/10 | +40% |
| Quality (1-10) | 9.0 | 9.0 | 0% |
Run it yourself: `python -m benchmarks.run_benchmark --model openai/gpt-4o-mini`
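As a sanity check, the token, time, and call deltas in the table can be re-derived from the raw columns (the cost row's -60% presumably comes from unrounded cost figures):

```python
def pct_delta(chat: float, fusion: float) -> int:
    """Percentage change from the Agent Chat column to the Fusion column."""
    return round((fusion - chat) / chat * 100)

print(pct_delta(5267, 2954))  # total tokens → -44
print(pct_delta(33.1, 14.8))  # wall time   → -55
print(pct_delta(3, 1))        # LLM calls   → -67
```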
Three deps. Ten files. One API surface.
```python
from fusionclaw import BaseClaw, Fact, Orchestrator, StateObject

class PricingClaw(BaseClaw):
    claw_id = "pricing"
    description = "Analyzes competitor pricing"

    async def run(self, input: str) -> StateObject:
        return StateObject(
            claw_id=self.claw_id,
            summary="Competitor cut enterprise pricing 15%",
            key_facts=[Fact(key="new_price", value="$85/mo")],
            raw_context="...full research notes...",
            token_count=1200,
        )

# FeaturesClaw: another specialist claw, defined the same way as PricingClaw
orch = Orchestrator(claws=[PricingClaw(), FeaturesClaw()])
result = await orch.query("How does competitor X compare?")
print(result.answer)
```

Install:

```shell
pip install -e ".[dev]"
```

Fuse their contexts instead. Start internal, go global.