Legal AI without the privilege risk.
Most law firms have banned cloud LLMs on client matters. The usual line is "every prompt is potentially discoverable." kolm flips the architecture: you compile the firm's redline policy and document-review patterns into a single signed file, then the firm runs that file locally. No client document touches a third-party API at run time. Privilege intact, audit defensible, model behavior frozen.
What you'd compile first.
Each recipe ships as a standalone .kolm. Compile once on the firm's gold examples, distribute to partners and associates via MDM. The artifact has no network requirement at run time.
| recipe | what it does |
| --- | --- |
| contract redline | Take a counterparty draft, return tracked changes that match the firm's playbook on indemnity, IP assignment, MFN, term and termination. Compile from 80–200 historical redlines. |
| discovery summarization | Take a custodian's mailbox export, produce per-thread summaries with privilege/relevance/responsiveness flags. Compile from a labelled training corpus reviewed by senior associates. |
| cite-checker | Take a memo or brief, surface every citation, flag anything that doesn't match current Bluebook form or has been overruled. Compile from the firm's citation manual plus a hold-out test set. |
Redline recipe in five commands.
$ kolm init redline-policy
$ kolm examples add ./examples/redlines/*.json # 142 historical pairs
$ kolm spec set ./schemas/markup.json # JSON Schema for tracked-changes
$ kolm compile --base qwen2.5-7b-instruct --recall ./playbook/
# K-score 0.81 (ship floor 0.74)
$ kolm sign --org="Firm LLP" redline-policy.kolm
# signed sha256:7a4c1f2e… receipt anchored
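Once the digest is pinned, anyone in the firm can re-check it before deployment with nothing but a standard library. A minimal sketch, assuming a plain sha256 over the artifact file; kolm's own verify path may differ:

```python
import hashlib


def artifact_digest(path: str) -> str:
    """Compute the sha256 of a .kolm artifact, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def check_pinned(path: str, pinned_digest: str) -> bool:
    """Compare against the digest pinned in the matter management system."""
    return artifact_digest(path) == pinned_digest
```

If the comparison fails, the artifact on the workstation is not the one the firm signed, and it should not run.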
Inside the firm, offline, signed.
$ kolm run redline-policy.kolm --in counterparty-draft.docx --out redlined.docx
input    counterparty-draft.docx (12.4 KB)
recall   hits 7 (firm playbook v2026.02)
output   redlined.docx (14.1 KB, 23 tracked changes)
receipt  receipt-9b110d56.json (chain ok, anchor ok)
runtime  14.2s on M3 Pro, 0 network calls
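A spot check of that receipt needs no kolm binary at all: recompute the output file's digest and compare it to the hash the receipt recorded. The field name below (`output_sha256`) is an assumption about the receipt layout, not kolm's documented schema:

```python
import hashlib
import json


def output_matches_receipt(receipt_path: str, output_path: str) -> bool:
    """Recompute the output digest and compare to the receipt's record.

    Assumes the receipt JSON carries an `output_sha256` field (hypothetical
    name), holding the hex sha256 of the produced file.
    """
    with open(receipt_path) as f:
        receipt = json.load(f)
    h = hashlib.sha256()
    with open(output_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == receipt["output_sha256"]
```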
Why the receipt matters at deposition.
A signed receipt is the firm's record that this artifact was the one that produced this output for this input on this date. It binds the model state, the recall sources, and the input/output hashes into a single chain. If opposing counsel asks how the AI was used, the firm produces the artifact and the receipt. Reproducibility is the audit defense.
| receipt field | what it guarantees |
| --- | --- |
| artifact hash | sha256 of the .kolm zip, pinned in the firm's matter management system at deployment. Cannot drift. |
| recall provenance | Every recall hit is content-addressed. The receipt names which playbook section was retrieved. |
| input/output hashes | HMAC-chained. Tampering with the output after the fact breaks the chain on verify. |
| retention policy | Receipts are local. Firm retention rules apply. No third-party archives to subpoena. |
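The HMAC chain in the table above can be sketched in a few lines. This is an illustrative reconstruction, not kolm's wire format: the record fields, the genesis value, and the per-firm `key` are all assumptions. The property it demonstrates is the one the receipt relies on, that editing any earlier input/output hash invalidates every later link:

```python
import hashlib
import hmac
import json

GENESIS = b"\x00" * 32  # assumed starting value for the chain


def chain_mac(prev_mac: bytes, key: bytes, record: dict) -> bytes:
    """MAC over the previous link plus a canonical JSON record."""
    payload = prev_mac + json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()


def verify_chain(records: list, macs: list, key: bytes) -> bool:
    """Walk the chain, recomputing each MAC with constant-time compares."""
    prev = GENESIS
    for record, mac in zip(records, macs):
        if not hmac.compare_digest(chain_mac(prev, key, record), mac):
            return False
        prev = mac
    return True
```

Because each link's MAC covers the previous link, an after-the-fact edit to one run's output hash cannot be patched locally; every subsequent receipt would have to be re-forged, and the forger would need the firm's key.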
What's shipped. What needs review.
Scope: the architecture removes the runtime cloud egress that most cloud-AI bans cite as the disqualifier. Privilege protection itself remains the firm's call with the firm's GC.
| concern | status |
| --- | --- |
| runtime cloud egress | Removed by design. The .kolm runs locally on the firm's hardware. |
| compile-time data flow | Compile happens against your gold examples. Use de-identified or synthetic examples for the first pass; switch to real ones once policy is reviewed. |
| frontier teacher used at compile | Yes; the LoRA distillation step calls out via your frontier-model API key. The teacher sees only your training set, never production matter documents. |
| privilege determination | The firm's GC decides. We provide architecture and receipts, not a legal opinion. |