Xinqi
Published 2026-05-15

🤖 Deep Agents: LangChain's 22.7k-Star Agent Harness, a Working AI Coding Assistant from a Single pip install

The agent framework space has been fiercely competitive these past two years, and every framework has its own opinions: some force you to define a full graph, others lock you into a specific model. LangChain's recently released Deep Agents takes a minimalist route: a ready-to-run agent out of the box, no assembly required.

The project has 22.7k stars, is MIT-licensed, pure Python, and is currently maintained by the LangChain team.


What Is Deep Agents?

Think of it as LangChain's take on Claude Code. The core authors are upfront about it: they followed Claude Code's playbook, tried to figure out what makes a general-purpose coding agent genuinely useful, and then made it more open.

Install it:

pip install deepagents

Or with uv:

uv add deepagents

Then four lines of Python get you running:

from deepagents import create_deep_agent

agent = create_deep_agent()
result = agent.invoke({"messages": [{"role": "user", "content": "Research LangGraph and write a summary"}]})
print(result)

No graph definitions, no state configuration, no tool schemas to write: it's all built in.


What's Included by Default?

create_deep_agent() returns a compiled LangGraph object, so everything LangGraph offers (streaming, checkpointing, Studio visualization) works out of the box. The default toolset already covers most scenarios:

  • write_todos: the agent breaks down tasks and tracks its own progress
  • read_file / write_file / edit_file: file reads and writes
  • ls / glob / grep: filesystem operations
  • execute: run shell commands (sandboxed)
  • task: spawn sub-agents with isolated context that work in parallel

The cleverest part is context management: long conversations are automatically summarized and large outputs are saved to files, so tokens aren't burned on conversation history.
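The strategy described above can be sketched in plain Python. This is a conceptual illustration only, not Deep Agents' actual implementation: the thresholds, the summarize() stub, and the file name are all made up.

```python
# Conceptual sketch of the context-management strategy: when history grows
# past a threshold, older messages are collapsed into one summary message;
# oversized tool outputs are spilled to a file and replaced with a pointer.
from pathlib import Path

MAX_MESSAGES = 6        # compact history once it exceeds this many messages
MAX_OUTPUT_CHARS = 200  # spill tool outputs larger than this to disk

def summarize(messages):
    # Stand-in for an LLM summarization call.
    return {"role": "system", "content": f"[summary of {len(messages)} earlier messages]"}

def compact_history(messages):
    """Collapse all but the most recent messages into a single summary."""
    if len(messages) <= MAX_MESSAGES:
        return messages
    keep = messages[-2:]  # keep the latest exchange verbatim
    return [summarize(messages[:-2])] + keep

def store_large_output(output, workdir=Path(".")):
    """Write an oversized output to a file; return a short pointer instead."""
    if len(output) <= MAX_OUTPUT_CHARS:
        return output
    path = workdir / "tool_output.txt"
    path.write_text(output)
    return f"[output saved to {path} ({len(output)} chars)]"

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
print(len(compact_history(history)))  # 3: one summary plus the last two messages
```

The payoff is that the message list handed to the model stays bounded no matter how long the session runs, while the full data remains recoverable from disk.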


CLI Mode

No Python required; the CLI installs with one line:

curl -LsSf https://langch.in/gh-da-cli | bash

Once installed, you have an AI coding assistant in your terminal. It supports an interactive TUI, web search, and a headless mode (for CI). In short, the same workflow as Claude Code, but built on the LangChain ecosystem.


Bring Your Own Model

It isn't tied to OpenAI; any model that supports tool calling works:

    from langchain.chat_models import init_chat_model
    
    agent = create_deep_agent(
        model=init_chat_model("anthropic:claude-sonnet-4"),
        tools=[my_custom_tool],
        system_prompt="You are a research assistant.",
    )
    

MCP support comes via langchain-mcp-adapters, so connecting to filesystems, databases, or GitHub is no problem.
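As a rough sketch of what that wiring looks like, the adapter layer takes a mapping of named MCP server configs. The server names, command, and URL below are hypothetical, and the exact config keys should be checked against the langchain-mcp-adapters documentation for your installed version.

```python
# Hypothetical MCP server configuration for langchain-mcp-adapters.
# Everything here (paths, URL, server names) is illustrative.
mcp_servers = {
    "filesystem": {
        # A local server spoken to over stdio, exposing file tools
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
        "transport": "stdio",
    },
    "github": {
        # A remote server reached over HTTP
        "url": "https://example.com/mcp",
        "transport": "streamable_http",
    },
}
# The adapter turns these servers' MCP tools into LangChain tools,
# which would then be passed to create_deep_agent(tools=...).
```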


Is It Worth Using?

A few honest observations:

  1. Onboarding is extremely fast: uv add deepagents gives you a working agent, with none of the graph concepts LangGraph itself requires.
  2. 100% compatible with the LangChain ecosystem: if you already use LangChain/LangGraph, this is the most natural entry point into agents.
  3. But the default behavior is generic: vertical use cases (coding-only agents, research-only agents) may need prompt and toolset tuning.
  4. The CLI is still young: feature-wise it trails Claude Code, but model freedom is a real advantage.

Best suited for anyone who wants to get started with agent development quickly, or who needs a reliable agent foundation within the LangChain ecosystem. Honestly, it saves at least half a day versus building from scratch with LangGraph.


Key takeaways:

  • Built by LangChain, 22.7k stars, MIT-licensed, pure Python
  • pip install deepagents or uv add deepagents and it's ready to use
  • Built-in filesystem, shell, planning, and sub-agent toolsets
  • CLI mode: one-line install via curl -LsSf https://langch.in/gh-da-cli | bash
  • Works with any tool-calling model, no vendor lock-in
