欣淇
Published 2026-05-14

🌐 Mirage: a 2.1k-Star Unified Virtual Filesystem That Lets AI Agents Drive Every Backend with bash


Project: github.com/strukto-ai/mirage | ⭐ 2,135 Stars | 🛠 TypeScript/Python | 📅 2026-05-06


Honestly, today's AI agent ecosystem has an awkward problem: every backend service means learning yet another API. One SDK for S3, another for Slack, another for Gmail, plus MCP servers everywhere. The agent's tool-use call chain keeps getting longer, and debugging it is miserable.

Mirage's approach is refreshingly blunt: mount every backend into one virtual filesystem and let the agent operate it with plain bash. cp, grep, cat, jq, the Unix commands LLMs know best, work directly against data in S3, Slack, GitHub, and Gmail. Nothing new to learn.

1. Core Idea: Everything Is a File

Mirage abstracts each service as a "Resource" and mounts them all under a single directory tree:

from mirage import Workspace
from mirage.resource.s3 import S3Config, S3Resource
from mirage.resource.slack import SlackResource
# Assumed module paths for the remaining resources, following the pattern above:
from mirage.resource.ram import RAMResource
from mirage.resource.gdocs import GDocsResource

ws = Workspace({
    "/data":  RAMResource(),
    "/s3":    S3Resource(S3Config(bucket="my-bucket")),
    "/slack": SlackResource(),
    "/docs":  GDocsResource(),
})

With that wired up, your agent can work across services inside a single bash session:

# Cross-service pipeline: pull alerts out of Slack, correlate with data in S3
grep alert /slack/general/*.json | wc -l
cp /s3/report.csv /data/report.csv
cat /docs/mirage/README.md

The neatest part is that cp just works across services: copy /s3/report.csv into /data/ and it becomes a local file in RAM, with the agent never needing to know what happened underneath.
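Under the hood, this kind of unification is essentially longest-prefix path dispatch, the same idea as a FUSE mount table. Here is a toy sketch of the mechanism; the names mirror the snippet above, but this is illustrative code, not Mirage's actual implementation:

```python
# Toy sketch of prefix-based dispatch behind a unified virtual filesystem.
# Not Mirage's real code: just the core routing idea.

class RAMResource:
    """In-memory backend: a dict of path -> bytes."""
    def __init__(self):
        self.files = {}

    def read(self, path):
        return self.files[path]

    def write(self, path, data):
        self.files[path] = data

class Workspace:
    """Routes absolute paths to the resource with the longest matching mount prefix."""
    def __init__(self, mounts):
        self.mounts = mounts

    def _resolve(self, path):
        mount = max((m for m in self.mounts if path.startswith(m)), key=len)
        return self.mounts[mount], path[len(mount):]

    def cp(self, src, dst):
        # Cross-service copy: read from one backend, write to another.
        src_res, src_rel = self._resolve(src)
        dst_res, dst_rel = self._resolve(dst)
        dst_res.write(dst_rel, src_res.read(src_rel))

# Two stand-in backends; a real workspace would mount S3, Slack, etc. here.
ws = Workspace({"/data": RAMResource(), "/s3": RAMResource()})
ws.mounts["/s3"].files["/report.csv"] = b"a,b\n1,2\n"
ws.cp("/s3/report.csv", "/data/report.csv")
```

The point of the sketch: cross-service cp falls out for free once every backend answers the same read/write interface, which is exactly why the agent never has to care what sits behind a path.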

2. Quick Start

Pick whichever install path you need:

# Python
uv add mirage-ai

# CLI
curl -fsSL https://strukto.ai/mirage/install.sh | sh

# TypeScript
npm install @struktoai/mirage-node

Create a workspace and run commands:

mirage workspace create ws.yaml --id demo
mirage execute --workspace_id demo --command "cp /s3/report.csv /data/report.csv"
mirage workspace snapshot demo demo.tar
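The ws.yaml referenced above isn't shown in the excerpt. A plausible shape, mirroring the Python mounts from earlier (all field names here are assumptions, not a confirmed Mirage schema):

```yaml
# Hypothetical ws.yaml: mount layout mirroring the Python example.
# Field names are assumptions, not confirmed Mirage schema.
mounts:
  /data:
    type: ram
  /s3:
    type: s3
    bucket: my-bucket
  /slack:
    type: slack
```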

3. Agent Framework Integration

Mirage has first-party integrations for OpenAI Agents SDK, LangChain, Vercel AI SDK, and other mainstream frameworks. The simplest setup drops the virtual filesystem into an agent as a sandbox layer:

import asyncio

from agents import Runner
from agents.sandbox import SandboxAgent
from mirage.agents.openai_agents import MirageSandboxClient

client = MirageSandboxClient(ws)
agent = SandboxAgent(
    name="Mirage Agent",
    instructions=ws.file_prompt,
    sandbox_client=client,  # parameter name assumed; the client must be wired into the agent
)
result = asyncio.run(Runner.run(agent, "Summarize /s3/data/logs.jsonl"))

It works from TypeScript too:

import { Workspace, S3Resource, SlackResource } from '@struktoai/mirage-browser'

const ws = new Workspace({
  '/s3':    new S3Resource({ bucket: 'my-bucket' }),
  '/slack': new SlackResource({}),
})

await ws.execute('grep alert /slack/general/*.json | wc -l')

4. Mirage vs MCP

Mirage isn't a replacement for MCP; its spirit is closer to a FUSE filesystem. MCP exposes a set of independent tool endpoints to the agent, while Mirage offers one unified filesystem abstraction. The wins:

  • The agent no longer maintains N tool definitions
  • Pipelines compose naturally across services (grep /slack/* | cp /s3/)
  • LLMs are far more fluent in bash than in API-calling patterns
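The tool-definition point is easy to see concretely: with per-service function calling, you register one schema per action across N backends, while the filesystem approach needs only a single bash tool. The schemas below are illustrative, not Mirage's actual definitions:

```python
# Illustrative tool schemas in an OpenAI function-calling style.
# Not Mirage's actual definitions: just a count comparison.

# Per-service approach: one definition per action, multiplied across backends.
per_service_tools = [
    {"name": "s3_get_object", "parameters": {"bucket": "string", "key": "string"}},
    {"name": "slack_list_messages", "parameters": {"channel": "string"}},
    {"name": "gmail_search", "parameters": {"query": "string"}},
    # ...one more entry per action, per backend
]

# Filesystem approach: a single tool, and the backends become paths.
bash_tool = {
    "name": "bash",
    "parameters": {"command": "string"},  # e.g. "grep alert /slack/general/*.json | wc -l"
}

print(f"{len(per_service_tools)}+ tool definitions vs 1")
```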
5. What Can You Mount Today?

The officially supported Resource list: S3/R2/GCS, Gmail/GDrive/GDocs, GitHub/Linear/Notion, Slack/Discord/Telegram, Redis, MongoDB, remote hosts over SSH, plus RAM and local disk.

Summary

  • Mirage mounts backend services as one unified filesystem; the agent drives everything with bash
  • Python and TypeScript SDKs, plus a CLI that works out of the box
  • Out-of-the-box integrations with OpenAI Agents SDK, LangChain, and Vercel AI SDK
  • A thinner abstraction than MCP: the agent doesn't learn new APIs

