Build with the model fleet you choose.

A coding workbench that wires Ollama, Kimi, MiniMax, OpenRouter, and more — all from one config. Your models. Your tools. Your data.

$ owlcoda init && owlcoda
  • Ollama
  • LM Studio
  • vLLM
  • Kimi
  • MiniMax
  • OpenRouter
Apache-2.0 · Local-first · 42+ tools · No telemetry

Why OwlCoda

Local-first by default

Your sessions stay under ~/.owlcoda. No telemetry, no upload — there is no OwlCoda server.

~/.owlcoda/sessions/<id>.json

Bring your own model fleet

Use Ollama for everyday work and Kimi or MiniMax for the heavy lifting; switch with --model.

owlcoda --model heavy   →   Kimi K2
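
The alias-to-model mapping lives in your config. The exact schema isn't documented here, so the snippet below is a hypothetical sketch (every key name is an assumption) of what a "heavy" alias pointing at Kimi K2 could look like:

```json
{
  "models": {
    "heavy": {
      "provider": "kimi",
      "model": "kimi-k2"
    }
  }
}
```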

Native terminal REPL

42+ tools, 69+ slash commands, sessions, learned skills. Drag-select and copy work as they do in any other terminal app.

/model · /cost · /budget · /skills

Quickstart (30 seconds)

Source install for now; npm / Homebrew / standalone binary planned for 1.0. Requires Node ≥ 18 and any OpenAI-compatible local backend.

  1.

    Clone and build OwlCoda

    git clone https://github.com/yeemio/owlcoda.git
    cd owlcoda && npm install && npm run build
  2.

    Expose owlcoda globally

    npm link  # or: node /path/to/owlcoda/dist/cli.js
  3.

    Point at a backend (Ollama shown)

    owlcoda init --router http://127.0.0.1:11434/v1
  4.

    Start the REPL

    owlcoda
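
Before starting the REPL, you can sanity-check that your backend actually answers. A minimal probe, assuming only that the backend serves the standard OpenAI-compatible GET /v1/models route (true for Ollama, LM Studio, and vLLM):

```python
import json
import urllib.error
import urllib.request


def backend_alive(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an OpenAI-compatible backend answers GET {base}/models."""
    try:
        with urllib.request.urlopen(f"{base_url.rstrip('/')}/models", timeout=timeout) as resp:
            body = json.load(resp)
            # OpenAI-compatible servers answer {"object": "list", "data": [...]}
            return isinstance(body.get("data"), list)
    except (urllib.error.URLError, OSError, ValueError):
        return False


if __name__ == "__main__":
    print(backend_alive("http://127.0.0.1:11434/v1"))  # Ollama's default port
```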

Supported backends

OwlCoda does not ship its own model — you point it at one. Out of the box it speaks to:

Local runtimes (auto-detected by owlcoda init)

  • Ollama · OpenAI-compatible · http://127.0.0.1:11434/v1
  • LM Studio · OpenAI-compatible · http://127.0.0.1:1234/v1
  • vLLM · OpenAI-compatible · http://127.0.0.1:8000/v1
  • Custom · OpenAI-compatible · user-supplied

Cloud providers (user-configured, BYO API key)

  • Kimi (Moonshot) · OpenAI-compatible · https://api.moonshot.ai/v1
  • Kimi Coding · provider-native · https://api.kimi.com/coding
  • MiniMax · Messages-shaped · https://api.minimax.io/anthropic
  • OpenRouter · OpenAI-compatible · https://openrouter.ai/api/v1
  • Bailian / DashScope · OpenAI-compatible · https://dashscope.aliyuncs.com/compatible-mode/v1
  • OpenAI · OpenAI-compatible · https://api.openai.com/v1
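
Every backend listed as OpenAI-compatible accepts the same request shape, which is why a single router URL is enough to swap backends. A minimal sketch of that shared shape (the model name and prompt below are placeholders, not OwlCoda defaults):

```python
import json


def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build the POST target and JSON body shared by OpenAI-compatible backends."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body


# Same payload; only the base URL changes per backend.
url, body = chat_request("http://127.0.0.1:11434/v1", "some-local-model", "hello")
print(url)  # → http://127.0.0.1:11434/v1/chat/completions
```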

Native REPL highlights

OwlCoda runs as a native terminal REPL — not a web wrapper.

  • 42+ tools — Bash, Read, Write, Edit, Glob, Grep, MCP-served tools, agent dispatch, scheduling, plugins.
  • 69+ slash commands — /model, /cost, /budget, /perf, /doctor, /config, /trace, /tokens, /sessions, /skills, and more.
  • Selection-first transcript — Drag-select and copy work the way they do in any other terminal app.
  • Session persistence — Every conversation lands under ~/.owlcoda/sessions/. Resume any of them with --resume <id>.
  • Learned skills (L2) — Repeated workflows get extracted and re-injected on later matching tasks.
  • Training pipeline (L3, opt-in) — Score and export high-quality sessions as JSONL / ShareGPT for local fine-tuning.
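
The ShareGPT target of the L3 export is a widely used fine-tuning format. A minimal sketch of such a conversion, assuming a session stores its turns as a list of {role, content} messages (the internal session schema is an assumption here, not documented behavior):

```python
import json

# ShareGPT role names differ from chat-API role names.
ROLE_MAP = {"user": "human", "assistant": "gpt", "system": "system"}


def to_sharegpt(messages: list[dict]) -> dict:
    """Convert a [{role, content}, ...] transcript into one ShareGPT record."""
    return {
        "conversations": [
            {"from": ROLE_MAP[m["role"]], "value": m["content"]}
            for m in messages
            if m["role"] in ROLE_MAP
        ]
    }


session = [
    {"role": "user", "content": "Write a haiku about owls."},
    {"role": "assistant", "content": "Silent wings at dusk..."},
]
print(json.dumps(to_sharegpt(session)))  # one JSONL line per session
```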

Privacy posture

  • Sessions stay under ~/.owlcoda/ on your machine.
  • Training-data collection is opt-in, and PII is sanitized before anything is written to disk.
  • No telemetry endpoint. No outbound requests beyond the backends in your config.
  • There is no OwlCoda server. There is no OwlCoda account.
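
To make "PII-sanitized" concrete: identifiers get redacted before a transcript ever touches disk. OwlCoda's actual sanitizer isn't shown here; this is an illustrative sketch using simple regex redaction for two common cases, emails and bearer tokens:

```python
import re

# Illustrative patterns only; a real sanitizer covers far more PII classes.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
    (re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"), "Bearer <TOKEN>"),
]


def sanitize(text: str) -> str:
    """Replace matched identifiers with placeholders before writing to disk."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text


print(sanitize("Contact alice@example.com with Bearer abc123"))
# → Contact <EMAIL> with Bearer <TOKEN>
```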