Integrations · JetBrains
Run your .kolm inside JetBrains IDEs in 60 seconds.
One command writes the MCP server entry for the AI Assistant. Restart your IDE. Your compiled model shows up as a callable tool in the AI Assistant panel. Same recipe for IntelliJ IDEA, PyCharm, GoLand, WebStorm, RubyMine, RustRover, CLion, PhpStorm, Rider, DataGrip.
Step 01
Install the integration.
One command. The installer detects every JetBrains IDE on disk, writes the MCP server entry to each AI Assistant config, and pins the kolm binary by absolute path. Re-running is idempotent.
kolm install jetbrains --apply
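The "detects every JetBrains IDE on disk" step can be pictured as a scan over the standard JetBrains config roots. A minimal sketch, assuming the default per-OS locations and ignoring custom `idea.config.path` overrides (the logic here is illustrative, not the installer's actual code):

```python
from pathlib import Path

def jetbrains_config_roots(home: Path, platform: str) -> list[Path]:
    """Candidate directories where JetBrains IDEs keep per-version configs.

    Assumption: default locations only; a custom idea.config.path is ignored.
    """
    if platform == "darwin":
        return [home / "Library" / "Application Support" / "JetBrains"]
    if platform == "win32":
        return [home / "AppData" / "Roaming" / "JetBrains"]
    return [home / ".config" / "JetBrains"]  # linux and friends

def detect_ides(root: Path) -> list[str]:
    """Each subdirectory like IntelliJIdea2024.3 or PyCharm2024.3 is one install."""
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```

An installer in this shape would then write the same MCP server entry into each detected IDE's AI Assistant config.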
Step 02
Restart the IDE.
JetBrains IDEs (2024.3 or newer with AI Assistant) re-read the MCP config at startup. After restart your compiled model shows up as an MCP tool inside the AI Assistant chat. No further configuration. No keys to paste.
# File · Invalidate Caches and Restart
# or just close and reopen the IDE
Step 03
Use it. The model runs locally.
Ask AI Assistant to call your kolm tool. The recipe runs in-process. Zero outbound API calls. The same byte-exact .kolm artifact you compiled on the laptop is what the editor invokes.
# in AI Assistant chat
> /kolm.run support-triage classify this ticket
> /kolm.run pr-review score this PR
What lands in the JetBrains MCP config
Open the AI Assistant settings file after running the installer. This is the entire change. No telemetry. No phone-home. The command is the absolute path of the kolm binary the installer detected, so the entry stays valid even if your PATH changes.
JetBrains AI Assistant · MCP servers (per-IDE config)
{
"mcpServers": {
"kolm": {
"type": "stdio",
"command": "/usr/local/bin/kolm",
"args": ["serve", "--mcp"],
"env": {
"KOLM_HOME": "~/.kolm"
}
}
}
}
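With `"type": "stdio"`, the IDE launches the `command` and speaks JSON-RPC 2.0 over the process's stdin/stdout, one JSON message per line. A sketch of the opening message a client sends, so you can see what travels over that pipe (the version string and `clientInfo` values are illustrative; see the MCP spec for the full handshake):

```python
import json

# The MCP handshake opens with a JSON-RPC 2.0 "initialize" request.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative protocol revision
        "capabilities": {},
        "clientInfo": {"name": "ai-assistant", "version": "0.0"},  # hypothetical
    },
}

# The stdio transport frames each message as one line of JSON.
wire = json.dumps(initialize) + "\n"
```

Everything after this exchange (tool listing, tool calls) is more JSON-RPC over the same two pipes, which is why no port, daemon, or key is involved.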
What you get.
Every JetBrains IDE.
IntelliJ IDEA, PyCharm, GoLand, WebStorm, RubyMine, RustRover, CLion, PhpStorm, Rider, DataGrip - same install, same MCP entry. One config per IDE family.
Local execution.
The MCP server runs in-process over stdio. No daemon. No HTTP unless you opt in with --http on port 8765.
Zero outbound calls.
The IDE talks to the kolm process. The kolm process executes the recipe against your local model. Nothing leaves the machine.
Reproducible.
Every tool call ships _kolm metadata (artifact hash, recipe id, K-score). AI Assistant logs it. You can replay any call.
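Byte-exact reproducibility reduces to hashing the artifact. A sketch of how a caller could check that the `.kolm` the editor invoked matches the one compiled on the laptop, assuming the artifact hash is a SHA-256 over the file's raw bytes (the algorithm and field names are assumptions, not the documented `_kolm` schema):

```python
import hashlib
from pathlib import Path

def artifact_hash(path: Path) -> str:
    """SHA-256 over the raw bytes of the .kolm file (assumed algorithm)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches(reported: str, path: Path) -> bool:
    """Compare the hash a tool call reported against the local artifact."""
    return artifact_hash(path) == reported
```

Because the hash is over bytes, any recompile that changes the artifact is immediately visible in the logged metadata.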
Idempotent install.
Re-running kolm install jetbrains --apply updates the entry in place. No duplicate servers. No drift.
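Idempotence here is just an upsert keyed by server name: writing the entry a second time replaces it instead of appending a duplicate. A sketch of that merge against the config shape shown above (illustrative, not the installer's code):

```python
# Mirrors the MCP entry shown earlier on this page.
KOLM_ENTRY = {
    "type": "stdio",
    "command": "/usr/local/bin/kolm",  # absolute path, as pinned by the installer
    "args": ["serve", "--mcp"],
    "env": {"KOLM_HOME": "~/.kolm"},
}

def upsert_kolm(config: dict) -> dict:
    """Insert or replace the 'kolm' server entry; other servers are untouched."""
    servers = config.setdefault("mcpServers", {})
    servers["kolm"] = dict(KOLM_ENTRY)  # keyed overwrite -> no duplicates
    return config
```

Running the upsert any number of times yields the same config, which is why re-running the installer cannot drift.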
Dry-run first.
Drop the --apply and the installer prints what it would write to each detected IDE. Audit it. Then re-run with --apply.
Ready to compile.
If you don't have the CLI yet, the quickstart is one page. If you want the MCP surface itself, the spec page lists every tool the kolm server exposes.