🦙 Ollama: 171k-Star Local LLM Engine, One Command to Run Gemma 3 / DeepSeek / Qwen
Ollama is currently the most popular tool for running large language models locally: 171k GitHub stars, written in Go, and a single command takes you from install to inference. Just last week it added support for new models like Kimi-K2.5, GLM-5, and MiniMax; releases land at a rapid clip.
Install
macOS / Linux:
curl -fsSL https://ollama.com/install.sh | sh
Windows (PowerShell):
irm https://ollama.com/install.ps1 | iex
Docker:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
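For longer-lived setups, the Docker one-liner above can be expressed as a Compose file. This is a sketch: the volume and port mapping come straight from the command above, while the service layout is my own choice:

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"        # same API port the CLI and clients expect
    volumes:
      - ollama:/root/.ollama # persists downloaded models across restarts

volumes:
  ollama:
```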
Run a Model
Once installed, start chatting right away:
ollama run gemma3
This command automatically pulls Google's recently released Gemma 3 model (the default tag is the ~4B-parameter variant, small enough for an ordinary laptop) and drops you into an interactive chat. To switch models, just change the name:
ollama run deepseek-r1
ollama run qwen3
ollama run llama3.2
The full model list is at ollama.com/library.
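Model names on the library follow a `name:tag` convention: a bare name like `gemma3` resolves to the default tag, while something like `gemma3:1b` picks a specific variant. A minimal Python sketch of that resolution rule, assuming `latest` as the default tag:

```python
def resolve_model_ref(ref: str, default_tag: str = "latest") -> tuple[str, str]:
    """Split an Ollama-style model reference into (name, tag).

    'gemma3'    -> ('gemma3', 'latest')
    'gemma3:1b' -> ('gemma3', '1b')
    """
    name, sep, tag = ref.partition(":")
    return (name, tag if sep else default_tag)

print(resolve_model_ref("gemma3"))    # ('gemma3', 'latest')
print(resolve_model_ref("qwen3:8b"))  # ('qwen3', '8b')
```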
REST API
Ollama ships with a built-in REST API; call it directly on localhost:11434:
curl http://localhost:11434/api/chat -d '{
"model": "gemma3",
"messages": [{"role": "user", "content": "Explain quantum entanglement"}],
"stream": false
}'
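With `"stream": false` you get a single JSON object back; by default the endpoint streams newline-delimited JSON instead, one chunk per line, ending in an object with `"done": true`. A minimal Python sketch of reassembling such a stream (the sample lines are illustrative, not captured server output):

```python
import json

def collect_stream(lines):
    """Accumulate message content from Ollama-style NDJSON chat chunks."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        if not chunk.get("done"):
            parts.append(chunk["message"]["content"])
    return "".join(parts)

# Illustrative chunks in the shape /api/chat streams them:
sample = [
    '{"message": {"role": "assistant", "content": "Hello"}, "done": false}',
    '{"message": {"role": "assistant", "content": ", world"}, "done": false}',
    '{"done": true}',
]
print(collect_stream(sample))  # Hello, world
```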
Python
pip install ollama
from ollama import chat
response = chat(model='gemma3', messages=[
{'role': 'user', 'content': 'Write a quicksort function in Python'},
])
print(response.message.content)
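The client returns the full assistant message, so multi-turn chat is just a matter of appending that reply back onto the `messages` list before the next call. A sketch of that loop, with a stub standing in for `ollama.chat` so it runs without a server:

```python
def chat_stub(model, messages):
    # Stand-in for ollama.chat: replies with the number of user turns seen.
    turns = sum(m["role"] == "user" for m in messages)
    return {"message": {"role": "assistant", "content": f"reply #{turns}"}}

messages = []
for prompt in ["Hi", "Tell me more"]:
    messages.append({"role": "user", "content": prompt})
    reply = chat_stub("gemma3", messages)["message"]
    messages.append(reply)  # keep history so the model sees prior turns

print(len(messages))            # 4
print(messages[-1]["content"])  # reply #2
```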
Launch AI Coding Agents
Ollama can now launch coding agents such as Claude Code, Codex, and OpenCode with one command:
ollama launch claude
ollama launch codex
It can also pair with OpenClaw to run as a cross-platform AI assistant (WhatsApp / Telegram / Slack):
ollama launch openclaw
Bottom Line
Ollama is the "one-command solution" for local LLMs: install, run a model, call the API, all in three commands or fewer. Those 171k stars are well earned.