> Source of truth: `CHANGELOG.md` on GitHub.
# Changelog

All notable changes to OwlCoda are documented here.
The authoritative version comes from [package.json](package.json) and is
exposed at runtime through [src/version.ts](src/version.ts) and
`owlcoda --version`.
## [0.1.2] — 2026-04-25
User-facing README rewrite + second-pass legal-positioning polish.
### Changed
- `README.md`/`README.zh.md` rewritten to put OwlCoda first:
  installation, supported backend matrix (local: Ollama / LM Studio /
  vLLM; cloud: Kimi / Moonshot / MiniMax / OpenRouter / Bailian /
  OpenAI / user-configured Messages-shaped providers / custom), and
  concrete config snippets per provider. The previous README read as
  an Ollama tutorial; the new one reads as the OwlCoda manual it
  should be.
- `admin/src/pages/StartPage.tsx` — the local-runtime protocol
  picker option labeled "Anthropic messages" is now `Messages-shaped
  API`; the help text references "Anthropic-compatible providers
  and similar gateways" rather than naming a single vendor.
- `tests/provider-probe.test.ts` — third-party model name fixtures
  replaced with neutral `messages-vendor-*` names.
- `skills/collaboration/{using-git-worktrees, phase-prompting,
  receiving-code-review}/SKILL.md` — generic `AGENTS.md` /
  instruction phrasing replaces the host-app-specific filename
  that the original methodology pack used.
### Added
- `skills/README.md` — explicit positioning of the in-tree pack as
  an OwlCoda curated methodology pack, with a non-list of
  third-party SaaS skills that intentionally do not ship here.
### Removed
- `skills/meta/` — the maintenance / governance scripts under
  `gardening-skills-wiki`, `pulling-updates-from-skills-repository`,
  `sharing-skills`, `testing-skills-with-subagents`, and
  `writing-skills` were tooling for an upstream skill-pack
  ecosystem, not user-facing capability. Several of them carried
  hardcoded host paths that pointed at an external maintainer's
  local checkout.
## [0.1.1] — 2026-04-25
Legal-positioning and provenance polish. No runtime behavior change.
### Removed
- `skills/collaboration/remembering-conversations/` — depended on a
  third-party AI agent SDK at runtime and on a host-app hook
  directory for deployment, neither of which fit OwlCoda's
  independent posture. Users who want conversation-recall workflows
  should install a third-party skill pack rather than ship one
  in-tree.
- `skills/debugging/systematic-debugging/CREATION-LOG.md` —
  extraction log referencing a third-party developer's home
  directory; not user-facing content.
### Changed
- `NOTICE.md` adds a "Protocol Interoperability vs Affiliation"
  section that explicitly disclaims any partnership / endorsement /
  derivative-work claim with respect to third parties whose wire
  formats OwlCoda implements (Messages-shaped API, OpenAI Chat
  Completions). The `@anthropic-ai/sdk` devDependency is
  documented as an interoperability test artifact, not a runtime
  dependency.
- `README.md`/`README.zh.md` architecture diagram says
  "Messages-shaped API" instead of naming a single upstream vendor,
  matching the protocol-not-affiliation posture.
- `skills/collaboration/using-git-worktrees/SKILL.md`,
  `skills/debugging/root-cause-tracing/SKILL.md`, and
  `scripts/smoke-presentation.mjs` had hardcoded third-party
  developer paths and model names replaced with neutral
  `/Users/example/...` and generic model identifiers.
## [0.1.0] — 2026-04-25
Initial public release. OwlCoda enters the public source tree as a
**Developer Preview** — feature-complete enough for daily use, but
the API surface, slash-command set, and config schema may still
evolve before 1.0.
### Native REPL
- Default interactive path: native REPL with 42+ built-in tools
  (Bash, Read, Write, Glob, Grep, MCP-served tools, more) and 69+
  slash commands.
- Selection-first transcript surface: terminal-native drag-select
  and copy stay available on the primary screen.
- Multi-client live REPL with a shared daemon, per-client session
  affinity, and a live `clients list` / `clients detach` control
  plane.
- Session persistence under `~/.owlcoda/sessions/`, including
  `--resume id|last`, `/sessions`, `/tag`, `/branch`, `/history`.
- Headless mode: `owlcoda -p "..."` and `owlcoda run --prompt "..."`
  return end-to-end LLM responses with full tool support.
### Protocol & routing
- Anthropic Messages API ↔ OpenAI Chat Completions translation,
  including streaming + non-streaming + tool-use protocol.
- Multi-backend auto-discovery (Ollama 11434, LM Studio 1234, vLLM
  8000) and intent-aware routing across local + cloud catalogs.
- Production middleware: retry, rate-limit, fallback, circuit
  breaker, response cache (LRU 100 / 5 min TTL), per-model timeout
  override, hot config reload.
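The stated cache parameters (LRU with 100 entries, 5-minute TTL) can be sketched as a small standalone class. This is an illustrative sketch, not OwlCoda's actual implementation; the class and method names are invented here:

```typescript
// Sketch of an LRU response cache with a TTL, matching the stated
// "LRU 100 / 5 min TTL" defaults. All names are hypothetical.
type Entry<V> = { value: V; expiresAt: number };

class ResponseCache<V> {
  // Map preserves insertion order, which we use as recency order.
  private map = new Map<string, Entry<V>>();

  constructor(
    private maxEntries = 100,
    private ttlMs = 5 * 60 * 1000,
  ) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.map.delete(key); // expired: drop and report a miss
      return undefined;
    }
    // Refresh recency: re-insert so this key becomes most recent.
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    // Evict the least recently used entry (first in insertion order).
    if (this.map.size >= this.maxEntries) {
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
    this.map.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

With these defaults a repeated identical request within five minutes is served from memory, and the 101st distinct key evicts the least recently used entry.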
### Skills (L2)
- TF-IDF–matched skill injection from `~/.owlcoda/skills/` and
  the curated skill pack into the system prompt at request time.
- `owlcoda skills` CLI: `info / list / show / synth / delete /
  search / match / stats / cleanup / export / import`.
- Auto-synthesis pipeline that extracts reusable skills from
  complex completed sessions.
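The TF-IDF matching step can be pictured as ranking each skill's text against the incoming request with a simple tf–idf relevance score. This is a minimal sketch with invented names; OwlCoda's real pipeline is not shown here:

```typescript
// Minimal tf-idf ranking of skill descriptions against a request.
// Function and field names are illustrative only.
function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
}

function rankSkills(
  skills: { name: string; text: string }[],
  query: string,
): { name: string; score: number }[] {
  const docs = skills.map((s) => tokenize(s.text));
  const n = docs.length;
  // Document frequency: in how many skills does each term appear?
  const df = new Map<string, number>();
  for (const doc of docs) {
    for (const term of new Set(doc)) df.set(term, (df.get(term) ?? 0) + 1);
  }
  const idf = (term: string) => Math.log((1 + n) / (1 + (df.get(term) ?? 0)));
  // Score each skill as the sum of tf * idf over the query's terms.
  return skills
    .map((skill, i) => {
      const doc = docs[i];
      let score = 0;
      for (const term of tokenize(query)) {
        const tf = doc.filter((t) => t === term).length / (doc.length || 1);
        score += tf * idf(term);
      }
      return { name: skill.name, score };
    })
    .sort((a, b) => b.score - a.score);
}
```

A request mentioning "git worktrees" would then rank a worktree skill above unrelated ones, and the top matches are what gets injected into the system prompt.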
### Training data pipeline (L3, opt-in / off by default)
- Quality-scored session collection (5 weighted dimensions),
  PII sanitization before disk write, JSONL / ShareGPT / insights
  export formats. Disabled unless explicitly opted in via
  `trainingCollection: true` in `config.json` or
  `OWLCODA_TRAINING_COLLECTION=1`.
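The "5 weighted dimensions" gate can be pictured as a weighted average over per-dimension scores. The dimension names, weights, and threshold below are invented for illustration; the changelog only states that there are five weighted dimensions:

```typescript
// Hypothetical weighted quality score over five dimensions.
// Names and weights are illustrative; the source does not name them.
type QualityDimensions = {
  taskCompletion: number; // each dimension scored in [0, 1]
  toolUseQuality: number;
  conversationDepth: number;
  errorRecovery: number;
  codeQuality: number;
};

const WEIGHTS: Record<keyof QualityDimensions, number> = {
  taskCompletion: 0.3,
  toolUseQuality: 0.25,
  conversationDepth: 0.15,
  errorRecovery: 0.15,
  codeQuality: 0.15, // weights sum to 1.0
};

function qualityScore(d: QualityDimensions): number {
  let score = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof QualityDimensions)[]) {
    score += WEIGHTS[key] * d[key];
  }
  return score; // stays in [0, 1] because the weights sum to 1
}

// A session would only be collected above some cutoff, e.g.:
function shouldCollect(d: QualityDimensions, threshold = 0.6): boolean {
  return qualityScore(d) >= threshold; // threshold is hypothetical too
}
```

Whatever the real dimensions are, the shape is the same: score, weight, sum, then compare against a collection threshold before anything touches disk.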
### Browser admin
- `owlcoda ui` / `owlcoda admin` prints a one-shot admin URL;
  `--open-browser` to launch directly; `--route` and `--select`
  for focused handoffs to specific admin views.
- Provider failure diagnostics unified across main agent,
  subagent, `/v1/messages`, `/v1/chat/completions`, admin
  test connection, and `/warmup`.
### Diagnostics & observability
- `owlcoda doctor` — environment, runtime, and model health.
- `owlcoda config / validate / models / health / status / inspect /
  audit / cache / logs / benchmark / export`.
- HTTP API: `/v1/perf`, `/v1/latency`, `/v1/cost`, `/v1/recommend`,
  `/v1/usage`, `/v1/audit`, `/v1/cache`, `/v1/skills`,
  `/v1/insights/:sessionId`, `/v1/training/*`, `/v1/captures`,
  `/v1/search`, `/openapi.json`, `/metrics`.
### Privacy posture
- Sessions stay local under `~/.owlcoda/`. Training data
  collection is opt-in. Nothing is uploaded to any external
  service by OwlCoda itself.
### Known limitations
- Mouse-wheel transcript scrollback is not yet routed through the
  in-tree Ink fork. Use PgUp / PgDn / Ctrl+↓ or `/history`
  for in-app scrollback.
- LSP tools require the user to install the corresponding
  language server (typescript-language-server, pyright,
  rust-analyzer, gopls, etc.) and wire it via a plugin.
- OAuth-style remote MCP servers are not yet supported; stdio MCP
  servers are fully functional.