🦀 ZeroClaw: a 31k-Star Rust AI Personal Assistant, One Binary for Every Platform
Ever feel like installing an AI assistant is harder than your day job? First install Python, then pip install a dozen packages, configure API keys, install plugins, and finally discover it only supports macOS. ZeroClaw's answer: one Rust binary, installed with a single curl, covering Windows / macOS / Linux / Docker.
Repo: https://github.com/zeroclaw-labs/zeroclaw
Stats: 31,372 Stars · Rust · Apache-2.0 / MIT dual-licensed · released February 2026
⚡ Install: One Command
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh | bash
Want to save space? --minimal is just 6.6 MB, smaller than the slimmest Docker image. Prefer building from source? --source goes through the Rust build chain and lets you pick a custom feature set:
./install.sh --minimal # 6.6 MB kernel build
./install.sh --source --features agent-runtime,channel-discord # custom feature set
./install.sh --skip-onboard # install only, configure later
After installing, run zeroclaw onboard: an interactive wizard walks you through picking an LLM provider (Anthropic, OpenAI, Ollama, 20+ in total) and wiring up channels (Discord, Telegram, Matrix, email, webhook, CLI, 30+ adapters). Then:
zeroclaw agent # chat right in the terminal
zeroclaw service install # register as a system service (systemd/launchctl/Windows Service)
zeroclaw service start # always-on background mode
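On Linux, zeroclaw service install registers the binary with the system's service manager. Under systemd that amounts to a unit file roughly like the following; this is an illustrative sketch (the install path and unit contents are assumptions, not ZeroClaw's actual generated output):

```ini
# ~/.config/systemd/user/zeroclaw.service (illustrative sketch)
[Unit]
Description=ZeroClaw agent runtime
After=network-online.target

[Service]
# Assumed install location; adjust to wherever install.sh placed the binary.
ExecStart=%h/.local/bin/zeroclaw agent
Restart=on-failure

[Install]
WantedBy=default.target
```

A user unit like this is what makes zeroclaw service start survive logouts and reboots without root privileges.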
The best part: you own everything. The agent, the data, the machine it runs on.
🔧 Architecture: One Binary = Full Agent Runtime
┌────────────────────────────────────────────┐
│ channels(30+) gateway(REST/WS) ACP │
│ ↓ │
│ ZeroClaw runtime │
│ ┌─────────┬──────────┬──────────┐ │
│ │ agent │ security │ SOP │ │
│ │ loop │ policy │ engine │ │
│ └─────────┴──────────┴──────────┘ │
│ ↓ ↓ ↓ │
│ providers tools memory │
│ (Anthropic, (shell, (SQLite, │
│ OpenAI, browser, embeddings) │
│ Ollama…) HTTP, MC… │
└────────────────────────────────────────────┘
Configuration lives in a single TOML file, ~/.zeroclaw/config.toml. A simple setup needs just two lines:
default_provider = "openai-codex"
default_model = "gpt-5-codex"
What genuinely impressed me:
- Multi-channel: one agent answers you on Discord, on Telegram, and over email alike; adding a channel is just one more config block.
- Security policy: supervised mode by default, so medium-risk operations wait for your approval and high-risk ones are blocked outright. Tool receipts cryptographically sign every action; YOLO mode is reserved for trusted environments.
- Hardware capable: GPIO / I2C / SPI / USB, driving peripherals directly on a Raspberry Pi, STM32, or Arduino. Your agent can reach the physical world.
- SOP engine: MQTT, webhook, cron, or peripheral events trigger standard operating procedures, with approval gates and resumable execution.
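Putting a channel block and a security policy together, ~/.zeroclaw/config.toml might look something like this. This is a sketch only: the table and key names below are illustrative guesses, not confirmed against ZeroClaw's actual schema:

```toml
default_provider = "openai-codex"
default_model = "gpt-5-codex"

# Hypothetical channel block: one table per channel adapter.
[channels.discord]
token = "env:DISCORD_BOT_TOKEN"   # illustrative secret reference
allowed_users = ["you#1234"]

# Hypothetical security policy: supervised is the default mode.
[security]
mode = "supervised"     # medium-risk operations ask for approval
block = ["high-risk"]   # high-risk operations are rejected outright
```

The point is the shape, not the exact keys: one file, one provider default, one block per channel, one policy section.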
🔐 Config Example: Hooking Up OpenAI Codex
default_provider = "openai-codex"
default_model = "gpt-5-codex"
Seeing provider streaming failed, falling back to non-streaming chat? Don't panic: ZeroClaw degrades gracefully and retries without streaming. Run zeroclaw auth status to check your credentials.
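The fallback behavior is easy to picture as a two-step call pattern: attempt the streaming request, and on any error log a warning and retry the buffered path. A self-contained sketch of the general technique (function names and the stub requests are illustrative, not ZeroClaw's internals):

```rust
// Sketch of a streaming-first chat call with automatic fallback.
// The function names here are illustrative, not ZeroClaw's API.

fn try_streaming(provider_up: bool) -> Result<String, String> {
    // Stand-in for a real streaming request to the provider.
    if provider_up {
        Ok("streamed reply".to_string())
    } else {
        Err("connection reset".to_string())
    }
}

fn non_streaming_chat() -> String {
    // Stand-in for a plain, buffered completion request.
    "buffered reply".to_string()
}

/// Try the streaming path first; on failure, warn and retry without streaming.
fn chat(provider_up: bool) -> String {
    match try_streaming(provider_up) {
        Ok(reply) => reply,
        Err(err) => {
            eprintln!("provider streaming failed, falling back to non-streaming chat: {err}");
            non_streaming_chat()
        }
    }
}

fn main() {
    // Healthy provider: the streaming path answers.
    println!("{}", chat(true));
    // Broken stream: the warning is printed and the buffered path answers.
    println!("{}", chat(false));
}
```

Either way the caller gets a reply; the warning is informational, which is why the log line is nothing to panic about.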
📦 How It Compares
| Dimension | ZeroClaw | Goose | OpenClaw |
|---|---|---|---|
| Binary size | ~7 MB | ~30 MB | ~25 MB |
| Platforms | Win/Mac/Linux/+ | Mac/Linux | Mac/Linux |
| Channels | 30+ | 10+ | 15+ |
| Hardware peripherals | GPIO/I2C/SPI | ❌ | ❌ |
| Runtime | Native Rust | Go | TypeScript |
Why is Rust the better fit for an agent runtime? Single-binary deployment, zero dependencies, cross-platform, starting at 6 MB. Anyone who has fought dependency hell knows the pain; ZeroClaw simply fills in the potholes.
Key takeaways:
- One curl command installs it; zeroclaw onboard handles interactive setup; you're running in about 30 minutes
- Full platform coverage, with a 6.6 MB minimal mode that runs on almost anything
- 30+ channels, 20+ LLM providers, plus GPIO hardware control: it really does connect to everything
- Supervised security policy plus YOLO mode, covering development through production
- One TOML config file, zero runtime dependencies