Configure providers

Wire OwlCoda to one or more backends — local runtimes, cloud providers, or both at once.

owlcoda init writes a starter config.json and auto-detects any local runtime listening on its standard port. The recipes below configure each provider explicitly.
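For reference, the runtime-to-port mapping used by the recipes below can be sketched as a quick loop (the mapping is taken from the recipes in this page, not from OwlCoda's source):

```shell
# Standard local runtime endpoints, as used in the recipes below
for entry in "Ollama:11434" "LM Studio:1234" "vLLM:8000"; do
  name="${entry%%:*}"   # runtime name before the colon
  port="${entry##*:}"   # port after the colon
  echo "$name -> http://127.0.0.1:$port/v1"
  # To check whether that runtime is actually up, try e.g.:
  #   curl -s "http://127.0.0.1:$port/v1/models"
done
```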

Local: Ollama

owlcoda init --router http://127.0.0.1:11434/v1
owlcoda

Local: LM Studio

owlcoda init --router http://127.0.0.1:1234/v1
owlcoda

Local: vLLM

owlcoda init --router http://127.0.0.1:8000/v1
owlcoda

Cloud: Kimi (Moonshot)

export KIMI_API_KEY=sk-...
owlcoda init --router https://api.moonshot.ai/v1

Then edit config.json:

{
  "routerUrl": "https://api.moonshot.ai/v1",
  "models": [
    {
      "id": "kimi-k2",
      "label": "Kimi K2",
      "backendModel": "moonshot-v1-128k",
      "endpoint": "https://api.moonshot.ai/v1",
      "apiKeyEnv": "KIMI_API_KEY",
      "aliases": ["default", "kimi"],
      "default": true
    }
  ]
}

Cloud: MiniMax (Anthropic Messages-shaped)

{
  "routerUrl": "https://api.minimax.io/anthropic",
  "models": [
    {
      "id": "minimax-m27",
      "label": "MiniMax M2.7",
      "backendModel": "minimax-m2.7-highspeed",
      "endpoint": "https://api.minimax.io/anthropic",
      "apiKeyEnv": "MINIMAX_API_KEY",
      "localRuntimeProtocol": "anthropic_messages",
      "aliases": ["default", "minimax"],
      "default": true
    }
  ]
}
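For context, localRuntimeProtocol: anthropic_messages tells OwlCoda the backend speaks the Anthropic Messages request shape rather than OpenAI chat completions. A minimal Messages-style request body looks roughly like this (illustrative sketch; the field set is not exhaustive):

```json
{
  "model": "minimax-m2.7-highspeed",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```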

Cloud: OpenRouter (multi-model gateway)

{
  "routerUrl": "https://openrouter.ai/api/v1",
  "models": [
    {
      "id": "openrouter-default",
      "label": "OpenRouter selection",
      "backendModel": "qwen/qwen3-coder",
      "endpoint": "https://openrouter.ai/api/v1",
      "apiKeyEnv": "OPENROUTER_API_KEY",
      "aliases": ["default"],
      "default": true
    }
  ]
}

Cloud: Bailian / DashScope (Alibaba)

{
  "routerUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
  "models": [
    {
      "id": "qwen-plus",
      "label": "Qwen Plus",
      "backendModel": "qwen-plus",
      "endpoint": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "apiKeyEnv": "BAILIAN_API_KEY",
      "aliases": ["default"],
      "default": true
    }
  ]
}

Mixed local + cloud (multiple models in one config)

{
  "routerUrl": "http://127.0.0.1:11434/v1",
  "models": [
    { "id": "qwen-local", "backendModel": "qwen2.5-coder:7b",
      "aliases": ["default", "fast"], "default": true },
    { "id": "kimi-cloud", "backendModel": "moonshot-v1-128k",
      "endpoint": "https://api.moonshot.ai/v1",
      "apiKeyEnv": "KIMI_API_KEY",
      "aliases": ["heavy", "kimi"] }
  ]
}

Run owlcoda --model heavy to route to the Kimi cloud model; running owlcoda with no flag uses the default entry, the local Qwen model.
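A hypothetical sketch of how that alias lookup could work (assumed behavior, not OwlCoda's actual implementation): match the requested name against each entry's id and aliases, and fall back to the entry marked default when no --model is given.

```shell
# Hypothetical --model resolution, mirroring the mixed config above
# (assumed logic, not OwlCoda's actual source).
resolve_model() {
  want="$1"
  # Rows are "id|default?|aliases", copied from the config entries.
  while IFS='|' read -r id is_default aliases; do
    # No --model given: pick the entry flagged as default.
    if [ -z "$want" ] && [ "$is_default" = "yes" ]; then echo "$id"; return; fi
    # Exact id match.
    [ "$want" = "$id" ] && { echo "$id"; return; }
    # Alias match.
    for a in $aliases; do
      [ "$want" = "$a" ] && { echo "$id"; return; }
    done
  done <<'EOF'
qwen-local|yes|default fast
kimi-cloud|no|heavy kimi
EOF
}

resolve_model heavy   # prints kimi-cloud
resolve_model ""      # prints qwen-local (the default entry)
```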

Schema reference

See config.example.json for the full schema. Key per-model fields:

Field                  Purpose
id                     Stable model id used in the API
label                  Human-readable name shown in the UI
backendModel           Model id the backend itself expects
endpoint               Per-model override of routerUrl
apiKey / apiKeyEnv     Cloud credential (literal value or env var name)
localRuntimeProtocol   auto / openai_chat / anthropic_messages
aliases                Alternate names accepted by --model
tier                   fast / balanced / heavy (UI grouping)
default                One model per config should be the default
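Putting the fields together, a single model entry using every field might look like this (illustrative only; the values are examples drawn from the recipes above, and most fields are optional):

```json
{
  "id": "qwen-plus",
  "label": "Qwen Plus",
  "backendModel": "qwen-plus",
  "endpoint": "https://dashscope.aliyuncs.com/compatible-mode/v1",
  "apiKeyEnv": "BAILIAN_API_KEY",
  "localRuntimeProtocol": "openai_chat",
  "aliases": ["default", "qwen"],
  "tier": "balanced",
  "default": true
}
```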