Configure providers
Wire OwlCoda to one or more backends: local runtimes, cloud providers, or both at once.
owlcoda init writes a starter config.json and auto-detects whichever local runtime is listening on the standard ports. Below are the explicit recipes per provider.
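The port-probing idea behind auto-detection can be pictured with a short sketch. The port list and probe logic below are assumptions for illustration, not OwlCoda's actual implementation:

```python
import socket

# Assumed standard ports for the common local runtimes.
CANDIDATE_PORTS = {
    11434: "Ollama",
    1234: "LM Studio",
    8000: "vLLM",
}

def candidate_router_urls(host="127.0.0.1"):
    """List the OpenAI-compatible base URLs that would be probed."""
    return [f"http://{host}:{port}/v1" for port in CANDIDATE_PORTS]

def detect_local_router(host="127.0.0.1", timeout=0.2):
    """Return (base_url, runtime_name) for the first port with a listener, else None."""
    for port, runtime in CANDIDATE_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return f"http://{host}:{port}/v1", runtime
        except OSError:
            continue
    return None

print(candidate_router_urls())
```

If nothing is listening, `detect_local_router` returns None and you would fall back to passing `--router` explicitly, as in the recipes below.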
Local: Ollama
owlcoda init --router http://127.0.0.1:11434/v1
owlcoda
Local: LM Studio
owlcoda init --router http://127.0.0.1:1234/v1
owlcoda
Local: vLLM
owlcoda init --router http://127.0.0.1:8000/v1
owlcoda
Cloud: Kimi (Moonshot)
export KIMI_API_KEY=sk-...
owlcoda init --router https://api.moonshot.ai/v1
Then edit config.json:
{
"routerUrl": "https://api.moonshot.ai/v1",
"models": [
{
"id": "kimi-k2",
"label": "Kimi K2",
"backendModel": "moonshot-v1-128k",
"endpoint": "https://api.moonshot.ai/v1",
"apiKeyEnv": "KIMI_API_KEY",
"aliases": ["default", "kimi"],
"default": true
}
]
}
Cloud: MiniMax (Messages-shaped)
{
"routerUrl": "https://api.minimax.io/anthropic",
"models": [
{
"id": "minimax-m27",
"label": "MiniMax M2.7",
"backendModel": "minimax-m2.7-highspeed",
"endpoint": "https://api.minimax.io/anthropic",
"apiKeyEnv": "MINIMAX_API_KEY",
"localRuntimeProtocol": "anthropic_messages",
"aliases": ["default", "minimax"],
"default": true
}
]
}
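The localRuntimeProtocol field tells OwlCoda which wire format the endpoint speaks. The two request shapes differ roughly as follows; this is a simplified sketch of the public OpenAI-style and Anthropic-style bodies, not OwlCoda internals:

```python
def build_request(protocol, backend_model, prompt, max_tokens=1024):
    """Build a minimal chat request body for the given wire protocol."""
    if protocol == "openai_chat":
        # OpenAI-style chat/completions body.
        return {
            "model": backend_model,
            "messages": [{"role": "user", "content": prompt}],
        }
    if protocol == "anthropic_messages":
        # Anthropic-style messages body; max_tokens is required here.
        return {
            "model": backend_model,
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }
    raise ValueError(f"unknown protocol: {protocol}")

print(build_request("anthropic_messages", "minimax-m2.7-highspeed", "hi"))
```

Most OpenAI-compatible endpoints need no protocol override; set anthropic_messages only for Messages-shaped gateways like the MiniMax one above.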
Cloud: OpenRouter (multi-model gateway)
{
"routerUrl": "https://openrouter.ai/api/v1",
"models": [
{
"id": "openrouter-default",
"label": "OpenRouter selection",
"backendModel": "qwen/qwen3-coder",
"endpoint": "https://openrouter.ai/api/v1",
"apiKeyEnv": "OPENROUTER_API_KEY",
"aliases": ["default"],
"default": true
}
]
}
Cloud: Bailian / DashScope (Alibaba)
{
"routerUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"models": [
{
"id": "qwen-plus",
"label": "Qwen Plus",
"backendModel": "qwen-plus",
"endpoint": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"apiKeyEnv": "BAILIAN_API_KEY",
"aliases": ["default"],
"default": true
}
]
}
Mixed local + cloud (multiple models in one config)
{
"routerUrl": "http://127.0.0.1:11434/v1",
"models": [
{ "id": "qwen-local", "backendModel": "qwen2.5-coder:7b",
"aliases": ["default", "fast"], "default": true },
{ "id": "kimi-cloud", "backendModel": "moonshot-v1-128k",
"endpoint": "https://api.moonshot.ai/v1",
"apiKeyEnv": "KIMI_API_KEY",
"aliases": ["heavy", "kimi"] }
]
}
Run owlcoda --model heavy to route to Kimi; plain owlcoda uses the default, the local Qwen model.
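That alias lookup can be sketched as a match over each model's id and aliases, falling back to the model marked default. The helper below is hypothetical; OwlCoda's real resolver may differ:

```python
# Models from the mixed config above, trimmed to the fields that matter here.
MODELS = [
    {"id": "qwen-local", "aliases": ["default", "fast"], "default": True},
    {"id": "kimi-cloud", "aliases": ["heavy", "kimi"]},
]

def resolve_model(models, name=None):
    """Match --model against id or aliases; no name selects the default model."""
    if name is None:
        return next(m for m in models if m.get("default"))
    for m in models:
        if name == m["id"] or name in m.get("aliases", []):
            return m
    raise KeyError(f"no model matches {name!r}")

print(resolve_model(MODELS, "heavy")["id"])  # kimi-cloud
print(resolve_model(MODELS)["id"])           # qwen-local
```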
Schema reference
See config.example.json for the full schema. Key per-model fields:
| Field | Purpose |
|---|---|
| id | Stable model id used in the API |
| label | Human-readable name shown in UI |
| backendModel | Model id the backend itself expects |
| endpoint | Per-model override of routerUrl |
| apiKey / apiKeyEnv | Cloud credential (literal or env var name) |
| localRuntimeProtocol | auto / openai_chat / anthropic_messages |
| aliases | Alternate names accepted by --model |
| tier | fast / balanced / heavy (UI grouping) |
| default | One model per config should be the default |
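A quick sanity check against these rules (exactly one default, unique names, resolvable credentials) might look like this hypothetical validator:

```python
import os

def validate_config(config):
    """Return a list of problems found in a config dict; empty means OK."""
    problems = []
    models = config.get("models", [])
    defaults = [m["id"] for m in models if m.get("default")]
    if len(defaults) != 1:
        problems.append(f"expected exactly one default model, found {defaults}")
    seen = set()
    for m in models:
        # ids and aliases share one namespace for --model lookup.
        for name in [m["id"], *m.get("aliases", [])]:
            if name in seen:
                problems.append(f"duplicate model name/alias: {name}")
            seen.add(name)
        env = m.get("apiKeyEnv")
        if env and env not in os.environ and "apiKey" not in m:
            problems.append(f"{m['id']}: env var {env} is not set")
    return problems
```

Running it over a config before launch surfaces the common mistakes (two models marked default, a forgotten export of the API key) without starting OwlCoda at all.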