We studied the best AI assistants — and their tradeoffs. Then we built one binary that gets it all right.
OpenClaw's integrations without the 100MB. NanoClaw's security without the TypeScript bundle. PicoClaw's size without the bare-bones feature set.
Claude and OpenAI with automatic retry on failures and fallback between providers. Exponential backoff, token budget tracking, and cost estimation built in.
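Under the hood, that retry loop is capped exponential backoff: double the delay on each failure, up to a ceiling. A minimal sketch (function name and defaults are illustrative, not ZeptoClaw's exact API):

```rust
use std::time::Duration;

/// Delay before retry attempt `attempt` (0-based): the base delay
/// doubled each time, capped at `max`. Hypothetical helper; the
/// real retry logic may also add jitter.
fn backoff_delay(base: Duration, attempt: u32, max: Duration) -> Duration {
    let factor = 2u32.checked_pow(attempt).unwrap_or(u32::MAX);
    base.checked_mul(factor).unwrap_or(max).min(max)
}
```

With a 500ms base, attempts wait 500ms, 1s, 2s, 4s, and so on until the cap, so a briefly unavailable provider recovers without hammering it.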
Real-time SSE streaming from both Claude and OpenAI. Watch responses appear token-by-token in CLI, gateway, or batch mode.
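SSE streams arrive as `data:` lines, with OpenAI-style streams ending on a `[DONE]` sentinel. A minimal per-line parser might look like this (helper name is an assumption):

```rust
/// Extract the payload from one Server-Sent Events line.
/// Returns None for comments, blank lines, non-data fields, and the
/// "[DONE]" end-of-stream sentinel used by OpenAI-style streams.
fn sse_data(line: &str) -> Option<&str> {
    let payload = line.strip_prefix("data:")?.trim_start();
    if payload == "[DONE]" { None } else { Some(payload) }
}
```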
Delegate subtasks to specialized sub-agents with role-specific prompts and tool whitelists. Recursion blocking prevents infinite loops.
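Recursion blocking can be as simple as checking the delegation chain before spawning: refuse if the target role already appears upstream, or if the chain is too deep. A sketch with hypothetical names:

```rust
/// Decide whether the current agent may delegate to `role`.
/// Blocks direct recursion (a role already on the call chain) and
/// chains deeper than `max_depth`. Illustrative sketch only.
fn can_delegate(chain: &[&str], role: &str, max_depth: usize) -> bool {
    chain.len() < max_depth && !chain.contains(&role)
}
```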
Extend with JSON manifest plugins — define custom tools with command templates, parameter schemas, and validation. Auto-discovered at startup.
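Rendering a manifest's command template can be plain placeholder substitution after the parameters pass schema validation. A sketch assuming `{param}`-style placeholders (the actual manifest syntax may differ):

```rust
/// Fill a plugin command template such as "convert {input} {output}"
/// with validated parameter values. Placeholder syntax is an
/// assumption for illustration.
fn render_command(template: &str, params: &[(&str, &str)]) -> String {
    params.iter().fold(template.to_string(), |acc, (key, value)| {
        acc.replace(&format!("{{{}}}", key), value)
    })
}
```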
4 built-in templates (coder, researcher, writer, analyst) plus custom JSON templates. Override system prompt, model, tokens, and temperature.
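Conceptually, a custom template merges optional overrides over a built-in default, field by field. A sketch with assumed field and type names:

```rust
/// A resolved agent template. Field names are illustrative
/// assumptions, not ZeptoClaw's actual config schema.
#[derive(Clone, Debug, PartialEq)]
struct Template {
    system_prompt: String,
    model: String,
    max_tokens: u32,
    temperature: f32,
}

/// Per-field overrides from a custom JSON template.
struct Overrides {
    model: Option<String>,
    max_tokens: Option<u32>,
    temperature: Option<f32>,
}

/// Merge overrides over a built-in base template.
fn apply(base: &Template, o: &Overrides) -> Template {
    Template {
        system_prompt: base.system_prompt.clone(),
        model: o.model.clone().unwrap_or_else(|| base.model.clone()),
        max_tokens: o.max_tokens.unwrap_or(base.max_tokens),
        temperature: o.temperature.unwrap_or(base.temperature),
    }
}
```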
Native runtime by default. Opt into Docker or Apple Container isolation with --containerized for full sandboxing.
Telegram, Slack, Discord, WhatsApp, Lark, Email, Webhook, and CLI channels. Channel factory with per-channel auth, allowlists, and unified message bus.
Long-term key-value memory with categories and tags. Conversation history with session discovery, search, and cleanup. Persistent across restarts.
Prometheus and JSON metrics export. Per-model cost estimation for 8 models. Token budget enforcement with atomic lock-free counters.
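The lock-free budget check is a compare-exchange loop on an atomic counter: a reservation succeeds only if the new total stays under the cap, so concurrent requests can never overshoot. An illustrative sketch of the approach:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Lock-free token budget. Illustrative sketch of the
/// atomic-counter technique; not ZeptoClaw's exact internals.
struct TokenBudget {
    used: AtomicU64,
    limit: u64,
}

impl TokenBudget {
    fn new(limit: u64) -> Self {
        Self { used: AtomicU64::new(0), limit }
    }

    /// Try to reserve `n` tokens; returns false (and reserves
    /// nothing) if that would exceed the limit.
    fn try_reserve(&self, n: u64) -> bool {
        let mut current = self.used.load(Ordering::Relaxed);
        loop {
            if current + n > self.limit {
                return false;
            }
            match self.used.compare_exchange_weak(
                current, current + n, Ordering::AcqRel, Ordering::Relaxed,
            ) {
                Ok(_) => return true,
                Err(actual) => current = actual, // lost the race; retry
            }
        }
    }
}
```

`compare_exchange_weak` is the usual choice inside a retry loop: spurious failures just take another spin, and no thread ever blocks.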
Policy-based tool gating — require approval before dangerous tools execute. SSRF prevention, path traversal detection, shell blocklists.
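Path traversal detection, for instance, boils down to rejecting absolute paths and any `..` component before a tool touches the filesystem. A simplified sketch (a real check should also canonicalize symlinks):

```rust
use std::path::{Component, Path};

/// True if `p` could escape the workspace: it is absolute or
/// contains a `..` component. Simplified illustration of the check.
fn is_traversal(p: &str) -> bool {
    let path = Path::new(p);
    path.is_absolute()
        || path.components().any(|c| matches!(c, Component::ParentDir))
}
```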
Process hundreds of prompts from plain-text or JSONL files. Prompt templating, text or JSONL output, and stop-on-error control.
Memory safety without garbage collection. 3,100+ tests. Async-first with Tokio. Minimal resource usage even under heavy load.
We studied what works — and what doesn't. OpenClaw proved an AI assistant can handle 12 channels and 100+ skills. But it weighs 100MB and spans 400K lines of code. NanoClaw proved security-first is possible. But it's still 50MB of TypeScript. PicoClaw proved AI assistants can run on $10 hardware. But it stripped out everything to get there.
ZeptoClaw took notes. The integrations, the security, the size discipline — without the tradeoffs each one made.
The comprehensive one. Every feature, every channel, every integration.
The hackable one. Fork it, read it, make it yours.
The tiny one. Runs on hardware that costs less than lunch.
The one that took notes. Security, integrations, and minimalism — without the tradeoffs.
Built on a MessageBus architecture for loose coupling and easy extensibility.
Telegram, Slack, Discord, WhatsApp, Lark, Email, Webhook, CLI — unified message interface
Approval gate → Token budget → Tool execution → Streaming response
Base provider → Fallback → Retry with cost tracking
Run your own AI assistant on a VPS or home server. Single binary, no runtime dependencies, Docker-ready.
SSRF prevention, path traversal detection, and shell blocklists by default. Optional container isolation for full sandboxing.
Deploy one binary for many users. Isolated workspaces, per-tenant config, health endpoints, and usage metrics.
One click for managed platforms. One command for any VPS.