Ultra-lightweight personal AI assistant framework. Multi-provider support, extensible tools, container runtime isolation — all in a binary smaller than a selfie.
ZeptoClaw proves that size isn't everything: it packs features typically found in frameworks ten times its size.
Seamlessly switch between Claude, GPT-4, Gemini, Groq, and more. Use the best model for each task without changing your workflow.
Built-in tools for file operations, shell commands, and web search. Write custom tools in Rust with the simple Tool trait.
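As a rough illustration of what a custom tool could look like, here is a minimal sketch. The trait shape below (name, description, and an `execute` method over string arguments) is an assumption for illustration; ZeptoClaw's actual `Tool` trait signature may differ.

```rust
use std::collections::HashMap;

/// Assumed shape of a Tool trait: a name and description the LLM can see,
/// plus an execute method over string arguments. Illustrative only.
trait Tool {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn execute(&self, args: &HashMap<String, String>) -> Result<String, String>;
}

/// Example custom tool: uppercases the provided text.
struct ShoutTool;

impl Tool for ShoutTool {
    fn name(&self) -> &str { "shout" }
    fn description(&self) -> &str { "Uppercases the provided text." }
    fn execute(&self, args: &HashMap<String, String>) -> Result<String, String> {
        let text = args.get("text").ok_or("missing 'text' argument")?;
        Ok(text.to_uppercase())
    }
}

fn main() {
    let tool = ShoutTool;
    let mut args = HashMap::new();
    args.insert("text".to_string(), "hello".to_string());
    println!("{}", tool.execute(&args).unwrap()); // prints "HELLO"
}
```

Once a tool implements the trait, the framework can expose it to the model alongside the built-ins without any special-casing.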
Execute shell commands in Docker or Apple Container for security. Falls back to native runtime when containers aren't available.
Chat with your AI via Telegram, Discord, Slack, or CLI. Same assistant, your choice of interface.
Long-running conversations with context. Sessions survive restarts with filesystem or in-memory storage.
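A pluggable storage abstraction is one way to support both backends; the sketch below is hypothetical (the trait and type names are illustrative, not ZeptoClaw's actual API) and shows only the in-memory variant.

```rust
use std::collections::HashMap;

/// Assumed storage abstraction: sessions keyed by id, holding message history.
/// A filesystem backend would implement the same trait with file I/O.
trait SessionStore {
    fn save(&mut self, session_id: &str, messages: Vec<String>);
    fn load(&self, session_id: &str) -> Option<Vec<String>>;
}

/// In-memory backend: fast, but lost on restart.
struct MemoryStore {
    sessions: HashMap<String, Vec<String>>,
}

impl SessionStore for MemoryStore {
    fn save(&mut self, session_id: &str, messages: Vec<String>) {
        self.sessions.insert(session_id.to_string(), messages);
    }
    fn load(&self, session_id: &str) -> Option<Vec<String>> {
        self.sessions.get(session_id).cloned()
    }
}

fn main() {
    let mut store = MemoryStore { sessions: HashMap::new() };
    store.save("chat-1", vec!["user: hi".into(), "assistant: hello!".into()]);
    // Restoring a session is just loading its history by id.
    println!("restored {} messages", store.load("chat-1").unwrap().len());
}
```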
Memory safety without garbage collection. Rust eliminates whole classes of runtime crashes at compile time, and resource usage stays minimal even under heavy load.
Built on a MessageBus architecture for loose coupling and easy extensibility.
Telegram, Discord, Slack, CLI — unified message interface
MessageBus → LLM provider → Tool execution → Response
Pluggable LLM backends with unified interface
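The flow above can be sketched as a trait object behind a common interface. This is a simplified, hypothetical illustration: the real provider trait is likely async and carries far more context (messages, tool results), and `MockProvider` exists only for this example.

```rust
/// Assumed unified provider interface: each backend (Claude, GPT-4, Gemini,
/// Groq, ...) implements the same trait, so callers never change.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// Stand-in backend for demonstration; a real one would call an HTTP API.
struct MockProvider { label: &'static str }

impl Provider for MockProvider {
    fn name(&self) -> &str { self.label }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[{}] reply to: {}", self.label, prompt))
    }
}

fn main() {
    // Swapping models means swapping the trait object, not the workflow.
    let providers: Vec<Box<dyn Provider>> = vec![
        Box::new(MockProvider { label: "claude" }),
        Box::new(MockProvider { label: "gpt-4" }),
    ];
    for p in &providers {
        println!("{}", p.complete("ping").unwrap());
    }
}
```

Because the bus only sees the trait, adding a new backend is one `impl` block, with no changes to channels or tools.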