Your AI, your rules. Prompts, agents, themes, models, and permissions are all files you control.
Engineer: Explore → Plan → Execute
The default agent researches your codebase, creates a structured plan, and waits for your approval before making changes. You review the approach before any code is touched.
Coder: Direct Implementation
For straightforward tasks, the coder agent skips planning and makes changes immediately. Switch with Tab — use the right workflow for the task at hand.
Sandboxed by Default
Every tool call is confined to your project directory. Path escapes, delete commands, and external access require explicit permission. Glob-based allow/ask/deny rules per tool and file pattern.
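A glob-based allow/ask/deny policy like the one above can be evaluated with first-match-wins semantics. The sketch below is illustrative only, assuming a simple rule list; it is not koda's actual permission engine, and the rule names are hypothetical.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Rule maps a glob pattern to a decision: "allow", "ask", or "deny".
// This schema is an assumption for illustration, not koda's internals.
type Rule struct {
	Pattern  string
	Decision string
}

// decide returns the decision of the first rule whose pattern matches
// the path; unmatched paths fall back to "ask".
func decide(rules []Rule, path string) string {
	for _, r := range rules {
		if ok, _ := filepath.Match(r.Pattern, path); ok {
			return r.Decision
		}
	}
	return "ask"
}

func main() {
	rules := []Rule{
		{"*.md", "allow"},
		{"secrets/*", "deny"},
	}
	fmt.Println(decide(rules, "README.md"))   // allow
	fmt.Println(decide(rules, "secrets/key")) // deny
	fmt.Println(decide(rules, "main.go"))     // ask (no rule matched)
}
```

Falling back to "ask" rather than "allow" keeps unmatched paths safe by default.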
Multi-Provider, Your Choice
Anthropic, OpenAI, Google, Groq, Ollama, LM Studio, or any OpenAI-compatible endpoint. Switch mid-session. Mix cloud and local models.
Configurable at Every Layer
Utility models, budget caps, reasoning effort, permissions, LSP servers, diagnostics, agents, system prompts, themes — all controlled via YAML config and markdown files.
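A layered YAML config might look like the fragment below. The key names here are illustrative assumptions for the kinds of settings listed above, not koda's real schema.

```yaml
# Hypothetical config sketch -- key names are illustrative, not koda's schema.
model: primary-model
utility_model: small-model   # cheap model for titles, compaction, research
reasoning_effort: medium
budget:
  per_session_usd: 5.00
permissions:
  - pattern: "*.md"
    action: allow
  - pattern: "secrets/*"
    action: deny
```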
Project Memory
Layer global, project, and personal instructions via KODA.md files. Use /remember to save learnings on the fly. Conventions carry across sessions.
No plugins required for the core workflow.
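A KODA.md file is plain markdown; its contents are whatever conventions you want carried across sessions. The example below is purely illustrative.

```markdown
<!-- Example KODA.md -- contents are illustrative -->
# Project conventions
- Run `make test` before committing.
- Prefer table-driven tests.
- Error messages are lowercase with no trailing punctuation.
```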
Cost Control
Utility model routing sends titles, compaction, and research tasks to a cheap model. Adaptive reasoning budgets scale down during tool loops. Per-session budget caps stop runaway spending.
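The routing and budget-cap ideas can be sketched as follows. Task names, model names, and the cap logic are assumptions for illustration, not koda's implementation.

```go
package main

import "fmt"

// Session tracks spend against a per-session cap.
// A hedged sketch; field names are illustrative, not koda's internals.
type Session struct {
	SpentUSD float64
	CapUSD   float64
}

// pickModel routes cheap housekeeping tasks to a small model and
// everything else to the primary model.
func pickModel(task string) string {
	switch task {
	case "title", "compaction", "research":
		return "small-model"
	default:
		return "primary-model"
	}
}

// charge records cost and reports whether the session is still under budget.
func (s *Session) charge(usd float64) bool {
	s.SpentUSD += usd
	return s.SpentUSD < s.CapUSD
}

func main() {
	s := &Session{CapUSD: 5.00}
	fmt.Println(pickModel("title")) // small-model
	fmt.Println(pickModel("edit"))  // primary-model
	fmt.Println(s.charge(4.50))     // true: still under cap
	fmt.Println(s.charge(1.00))     // false: cap exceeded, stop
}
```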
Context Management
Pruning, compaction, and safety limits keep long sessions coherent. Old tool outputs are replaced with summaries. The model doesn’t lose track of what you’re doing.
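Replacing stale tool outputs with short placeholders can be sketched like this. It is a minimal illustration of the idea, assuming a flat message history; koda's actual pruning and compaction logic is not shown here.

```go
package main

import "fmt"

// Message is a simplified transcript entry.
type Message struct {
	Role    string // "user", "assistant", or "tool"
	Content string
}

// compact replaces large tool outputs older than the last `keep` messages
// with a short placeholder, preserving recent context verbatim.
// A hedged sketch of the idea, not koda's actual algorithm.
func compact(history []Message, keep int) []Message {
	cutoff := len(history) - keep
	out := make([]Message, len(history))
	copy(out, history)
	for i := 0; i < cutoff; i++ {
		if out[i].Role == "tool" && len(out[i].Content) > 20 {
			out[i].Content = fmt.Sprintf("[tool output elided, %d bytes]", len(history[i].Content))
		}
	}
	return out
}

func main() {
	h := []Message{
		{"tool", "very long directory listing 0123456789"},
		{"assistant", "I see the files."},
		{"tool", "recent output"},
	}
	for _, m := range compact(h, 2) { // keep the 2 most recent messages intact
		fmt.Println(m.Role+":", m.Content)
	}
}
```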
LSP Code Intelligence
Go-to-definition, find references, hover info, and diagnostics via language servers. Auto-discovers servers from your project’s tooling. Auto-lints after every edit.
Live Diff Preview
See file changes as the model writes them — before they hit disk. Cancel mid-write if something looks wrong.
Background Tasks & Subagents
Long-running commands execute in the background while the model keeps working. Subagents handle parallel research. Results feed back automatically.
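The fire-and-feed-back pattern behind background tasks can be sketched with a goroutine and a channel. This shows the concurrency idea only, under assumed names; it is not koda's task runner.

```go
package main

import (
	"fmt"
	"time"
)

// runInBackground launches a long-running task in a goroutine and returns a
// channel its result feeds back on while other work continues.
// A sketch of the pattern, not koda's implementation.
func runInBackground(name string, work func() string) <-chan string {
	results := make(chan string, 1)
	go func() {
		results <- name + ": " + work()
	}()
	return results
}

func main() {
	done := runInBackground("build", func() string {
		time.Sleep(10 * time.Millisecond) // stand-in for a slow command
		return "ok"
	})

	fmt.Println("model keeps working...") // foreground work continues
	fmt.Println(<-done)                   // result feeds back when ready
}
```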
MCP Integration
Extend with any Model Context Protocol server — stdio or SSE. Tools merge with built-ins and respect the same permission system.
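Registering MCP servers typically means naming a transport plus either a command (stdio) or a URL (SSE). The fragment below is a hypothetical sketch of such a config; the keys and commands are illustrative, not koda's actual schema.

```yaml
# Hypothetical MCP server entries -- keys and values are illustrative.
mcp_servers:
  filesystem:
    transport: stdio
    command: ["npx", "@modelcontextprotocol/server-filesystem", "."]
  events:
    transport: sse
    url: https://example.com/mcp/sse
```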
Sessions & Snapshots
Persistent sessions with branching and per-turn git snapshots. Resume where you left off or replay the workspace state at any turn.
Loop Detection
Detects when the model gets stuck repeating tool calls. Progressive intervention: warn, nudge, then stop. Prevents wasted tokens on infinite loops.
Auto-Diagnostics
After every file edit, linters and LSP diagnostics run automatically. The model sees errors immediately and fixes them in the same turn — no manual re-run needed.
# macOS / Linux
curl -fsSL https://raw.githubusercontent.com/sageil/koda/main/install.sh | sh

# Homebrew
brew tap sageil/tap && brew install koda

# Go
go install github.com/sageil/kodacode/v1/cmd/koda@latest

Then run koda to start.