by ollama
Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.
Start building with open models.
macOS:

curl -fsSL https://ollama.com/install.sh | sh

Windows (PowerShell):

irm https://ollama.com/install.ps1 | iex

Linux:

curl -fsSL https://ollama.com/install.sh | sh
The official Ollama Docker image ollama/ollama is available on Docker Hub.
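For example, a CPU-only container can be started like this (the named volume keeps downloaded models across container restarts; the container name `ollama` is just a convention):

```shell
# Run the Ollama server in Docker, persisting models in a named volume
# and exposing the default API port 11434 on the host.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Then run a model inside the container:
docker exec -it ollama ollama run gemma3
```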
ollama
You'll be prompted to run a model or connect Ollama to your existing agents or applications, such as claude, codex, openclaw, and more.
To launch a specific integration:
ollama launch claude
Supported integrations include Claude Code, Codex, Droid, and OpenCode.
Use OpenClaw to turn Ollama into a personal AI assistant across WhatsApp, Telegram, Slack, Discord, and more:
ollama launch openclaw
Run and chat with Gemma 3:
ollama run gemma3
See ollama.com/library for the full list.
See the quickstart guide for more details.
Ollama has a REST API for running and managing models.
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Why is the sky blue?"
  }],
  "stream": false
}'
See the API documentation for all endpoints.
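Because the API is plain HTTP and JSON, it can also be called from any language without the official client libraries. A minimal sketch using only the Python standard library (the endpoint and payload follow the curl example above; `build_chat_request` and `chat` are hypothetical helper names, not part of any Ollama client):

```python
import json
import urllib.request

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default local endpoint


def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming /api/chat call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of chunks
    }


def chat(model: str, prompt: str) -> str:
    """POST the request to a local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# With a local server running:
#   chat("gemma3", "Why is the sky blue?")
```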
pip install ollama
from ollama import chat
response = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response.message.content)
npm i ollama
import ollama from "ollama";
const response = await ollama.chat({
  model: "gemma3",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});
console.log(response.message.content);
Want to add your project? Open a pull request.
SwiftChat, Enchanted, Maid, Ollama App, Reins, and ConfiChat also support mobile platforms.
Stable Diffusion web UI