🔥 Reasonix: a 2.9k-Star DeepSeek-Native AI Coding Agent That Saves $50 a Day with a 99.8% Cache Hit Rate
> 🚀 [esengine/DeepSeek-Reasonix](https://github.com/esengine/DeepSeek-Reasonix) | ⭐ 2,972 | 🛠 TypeScript | 📅 2026-04-21
---
Honestly, I've tried Claude Code alternatives on DeepSeek, and the results were underwhelming: either the API calls went around in circles, or token costs took off as soon as a session got long. What makes Reasonix interesting is that **it was written for DeepSeek's prefix cache from the very first line of code**, rather than being hacked out of some general-purpose framework.
The project hit 2.9k stars within two weeks of release, ranking Top 2 in growth in the Agent category and Top 3 in the LLM category on oosmetrics.
⚡ **Core Selling Point: Cache Stability at the Architecture Level**
Reasonix deliberately skips multi-provider support: it is bound to DeepSeek, because byte-level prefix-cache stability requires careful adaptation at the loop layer. Measured numbers: **435M input tokens, a 99.82% cache hit rate, ~$12 spent for the whole day**. Running the same workload through a client without cache optimization on v4-flash would cost at least ~$61. That is $50 saved on five hours of LLM work.
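The savings claim is simple arithmetic once you assume a discounted rate for cached input tokens. A minimal sketch below; the per-token prices are hypothetical placeholders chosen to reproduce the article's figures, not DeepSeek's actual price list:

```python
# Hypothetical prices in $ per million input tokens. Real DeepSeek pricing
# differs; the point is the cache-hit discount, not the absolute rates.
PRICE_CACHE_HIT = 0.027   # assumed discounted rate for cached prefix tokens
PRICE_CACHE_MISS = 0.14   # assumed full rate for uncached tokens

tokens_millions = 435     # one day's input volume, from the article
hit_rate = 0.9982         # reported prefix-cache hit rate

# Blended cost: hit tokens billed at the discount, misses at the full rate.
cached_cost = tokens_millions * (
    hit_rate * PRICE_CACHE_HIT + (1 - hit_rate) * PRICE_CACHE_MISS
)
# Same volume with no cache at all: everything billed at the full rate.
uncached_cost = tokens_millions * PRICE_CACHE_MISS

print(f"cached: ${cached_cost:.2f}, uncached: ${uncached_cost:.2f}")
```

With these placeholder rates the blended cost lands near the article's ~$12 and the uncached cost near ~$61; the ratio between them, not the absolute numbers, is what the 99.8% hit rate buys.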
🛠 **Up and Running in One Command**
```bash
npm install -g reasonix
reasonix code my-project
```
Paste your DeepSeek API key on the first run; the key is persisted, so it keeps working afterwards. If you'd rather not install globally, use npx:
```bash
cd my-project
npx reasonix code
```
Three run modes:
| Command | Use Case |
|---------|----------|
| `reasonix code [dir]` | Coding agent (default; filesystem + shell tools) |
| `reasonix chat` | Chat-only mode (no file write access) |
| `reasonix run "task"` | One-shot task, output to stdout, pipe-friendly |
🔧 **Three Architectural Pillars**
1. **Cache-first reasoning loop**: every tool call reuses the same prefix and never produces a fresh cache miss. The system prompt, skills, and memory are all embedded in the prefix, with no secondary loading
2. **Tool-call repair**: when DeepSeek returns a malformed tool call, Reasonix fixes it internally instead of surfacing the error to the user
3. **Cost controls**: the `/effort` knob adjusts reasoning depth, and `reasonix stats` shows live token usage and the cache hit rate
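The cache-first loop is easy to picture: the conversation is append-only, so every request shares a byte-identical prefix with the previous one, and only the newly appended turns can miss. A toy simulation of that idea (not Reasonix's actual code; the cache model is a deliberate simplification of prefix caching):

```python
def common_prefix_len(a: list[str], b: list[str]) -> int:
    """Number of leading messages that are byte-identical in both lists."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

class PrefixCacheSim:
    """Toy model: a request 'hits' on the messages shared with the last one."""
    def __init__(self) -> None:
        self.last: list[str] = []
        self.hits = 0
        self.total = 0

    def request(self, messages: list[str]) -> None:
        shared = common_prefix_len(self.last, messages)
        self.hits += shared
        self.total += len(messages)
        self.last = list(messages)

# Append-only history (cache-first): each turn strictly extends the last.
sim = PrefixCacheSim()
history = ["system: you are a coding agent"]   # prompt/skills/memory live here
for turn in ["user: fix the bug", "tool: read_file ok", "user: now add tests"]:
    history.append(turn)
    sim.request(history)

# Mutating an earlier message (e.g. re-rendering the system prompt per call)
# would shrink the shared prefix to zero and turn every message into a miss.
hit_rate = sim.hits / sim.total
```

In this tiny three-turn run the hit rate is already over 50%, and it climbs toward 100% as sessions get longer, which is why a long coding session is exactly where byte-stable prefixes pay off.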
💡 **Skill System (No Remote Registry)**
```bash
# Create a project-level skill
/skill new my-skill
# Create a global skill (available across projects)
/skill new my-skill --global
```
Skill files are just Markdown with frontmatter. `runAs: subagent` makes a skill run in its own sub-agent loop so it doesn't interfere with the main session's context.
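A skill file might look like the sketch below. Only `runAs: subagent` is taken from the article; the other frontmatter keys are hypothetical placeholders for whatever metadata Reasonix actually reads:

```markdown
---
name: my-skill            # hypothetical key
description: summarize failing tests   # hypothetical key
runAs: subagent           # run in an isolated sub-agent loop
---

Instructions for the skill go here as plain Markdown.
```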
📡 **QQ Remote Channel**
Reasonix can use QQ as a remote channel for chat/code sessions. Start a session first, then type in the terminal:
```
/qq connect
```
From there, assistant replies and confirmation prompts continue through QQ, so you don't have to stay glued to the terminal.
**Key Takeaways**
- DeepSeek-only coding agent with a 99.8% prefix-cache hit rate; about 1/5 the cost of generic solutions
- One line, `npm install -g reasonix && reasonix code ./project`, and you're up and running
- Built-in MCP, skills, memory, web search, and permission controls; no extra toolchain needed
- A Tauri desktop client is out as a prerelease; the CLI remains the primary interface
- MIT-licensed and community-driven; issues tagged `good first issue` are friendly to new contributors