
kolm × NIST AI RMF 1.0

The NIST AI Risk Management Framework defines four functions: GOVERN, MAP, MEASURE, MANAGE. Below: the specific controls that kolm artifacts and receipts cover, with pointers to the evidence inside a .kolm.

This page maps NIST AI RMF subcategories to the fields in the .kolm artifact or receipt that supply the evidence. It is not the framework itself; the framework is a living document at nist.gov/itl/ai-risk-management-framework. We track 1.0 (Jan 2023) with the GenAI Profile updates from Jul 2024.

The framework has 72 subcategories. We don't claim coverage of all 72 — many are organizational policy controls that no tool can produce. We do cover the artifact-and-evidence subset: 31 subcategories below.

function 01 . govern

Risk-management culture and processes

The artifact-level controls below produce the documentation a board / risk committee asks for.

id · control → evidence in .kolm

GOVERN 1.4 · Legal and regulatory requirements are documented per AI system.
  → manifest.compliance.frameworks[] declares HIPAA / SR-11-7 / SOX / FedRAMP. Verified at compile time.
GOVERN 1.6 · Mechanisms exist to inventory AI systems.
  → kolm registry list returns CID + manifest digest + builder for every artifact issued by the org.
GOVERN 2.1 · Roles and responsibilities are documented for AI development.
  → manifest.provenance.builders[] records signing identity for compile + sign + deploy events.
GOVERN 3.1 · Diverse perspectives are leveraged during the AI lifecycle.
  → process control, not artifact-derivable. We surface review-stage signatures in provenance.reviewers[] when configured.
GOVERN 4.1 · Organizational practices are in place to address risks of AI systems.
  → manifest.policies.kscore_gate + policies.airgap + policies.allowed_egress[] declare the artifact's runtime constraints.
GOVERN 5.1 · Policies and procedures are in place for verifying AI system claims.
  → Every receipt independently verifiable via /verify-prod or kolm verify. Months later. Offline.
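The GOVERN-side fields above could sit in a manifest roughly like this. A hedged sketch: the field paths mirror the table, but the value shapes and identity strings are illustrative assumptions, not the authoritative .kolm schema.

```python
# Hypothetical sketch of the GOVERN-side manifest fields named in the table
# above. Paths follow the table; value shapes are illustrative only.
manifest = {
    "compliance": {
        # GOVERN 1.4: declared regulatory frameworks, checked at compile time
        "frameworks": ["HIPAA", "SR-11-7"],
    },
    "provenance": {
        # GOVERN 2.1: signing identity per lifecycle event (names are made up)
        "builders": [
            {"identity": "ci-builder", "event": "compile"},
            {"identity": "release-signer", "event": "sign"},
        ],
        # GOVERN 3.1: review-stage signatures land here when configured
        "reviewers": [],
    },
    "policies": {
        # GOVERN 4.1: runtime constraints the verifier enforces
        "kscore_gate": 0.92,
        "airgap": True,
        "allowed_egress": [],
    },
}
```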

function 02 . map

Context and risks identified per system

The compile-time spec defines the artifact's intended use; the verifier defines its guardrails.

id · control → evidence in .kolm

MAP 1.1 · Intended purpose, use cases, and expected benefits are documented.
  → manifest.purpose + manifest.spec.task_description are mandatory fields; rejected at compile time if absent.
MAP 1.2 · Categorization is performed (general vs. specialized purpose).
  → manifest.kind enum: specialist / router / verifier. kolm artifacts are always specialists.
MAP 2.3 · Scientific integrity and TEVV considerations are documented.
  → manifest.eval records benchmark suite, gold set size, K-score formula version, random seed.
MAP 3.4 · Processes for human oversight are defined.
  → manifest.policies.human_in_the_loop declares review gates; bypass attempts emit a receipt with verifier_invoked: false.
MAP 4.1 · Approaches for mapping AI technology and risk are followed.
  → manifest.spec.failure_modes[] enumerates declared failure modes; verifier checks each.
MAP 5.1 · Likelihood and magnitude of harms are identified and documented.
  → operator-declared; we record the declaration in manifest.risk.harm_categories[] and the K-score gate above which the operator authorizes deployment.
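The MAP-side declarations could be sketched the same way. Again hedged: field paths come from the table, but every concrete value (purpose string, failure modes, harm categories) is a hypothetical example, not kolm's schema.

```python
# Hypothetical sketch of the MAP-side manifest fields from the table above.
# Names mirror the table's field paths; all values are illustrative.
manifest = {
    "purpose": "Summarize discharge notes for clinician review",  # MAP 1.1, mandatory
    "kind": "specialist",                                         # MAP 1.2 enum value
    "spec": {
        "task_description": "Given a note, emit a structured summary",  # MAP 1.1
        "failure_modes": ["dosage hallucination", "PHI leak"],          # MAP 4.1
    },
    "eval": {
        # MAP 2.3: the TEVV record — suite, gold set size, formula version, seed
        "benchmark_suite": "gold-v3",
        "gold_set_size": 500,
        "kscore_formula_version": "2",
        "seed": 42,
    },
    "policies": {"human_in_the_loop": True},      # MAP 3.4 review gate
    "risk": {"harm_categories": ["privacy"]},     # MAP 5.1 operator declaration
}
```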

function 03 . measure

Risks are quantified and monitored

K-score is the artifact's measured quality. Receipts are the running measurement.

id · control → evidence in .kolm

MEASURE 1.1 · Approaches and metrics for measurement are identified and applied.
  → K-score formula in manifest.eval.kscore_definition. Formula version pinned per artifact so historic receipts replay identically.
MEASURE 2.1 · Test sets are representative.
  → manifest.eval.gold_set_cid anchors the gold set itself as a content-addressed artifact. Re-runnable.
MEASURE 2.3 · AI system performance is evaluated regularly.
  → kolm eval + kolm score on cron + receipt emitted each run. Bridge feed exposes the time series.
MEASURE 2.6 · Safety risks are evaluated and documented.
  → manifest.verifiers[] declares dangerous-output checks (PHI leak, prompt-injection, credential-exfil patterns).
MEASURE 2.8 · Risks of confabulation are evaluated.
  → Receipt records whether constrained decoding was active. Hallucination patterns blocked at decode time, not post-hoc filtered.
MEASURE 2.10 · Privacy risk of the AI system is examined.
  → manifest.privacy.pii_classes[] + redactor verifier output recorded in receipt per inference.
MEASURE 2.11 · Fairness and bias are evaluated.
  → manifest.eval.subgroup_breakdown records K-score per declared subgroup. Reviewer sees disparity at a glance.
MEASURE 3.1 · Mechanisms for tracking risks over time are identified.
  → HMAC-chained receipt log. Each receipt links to the previous; tamper-evident at the chain level.
MEASURE 4.1 · Approaches for tracking risk-related feedback are followed.
  → kolm capture records every input/output pair flagged by ops and promotes it into the next training batch.
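The tamper evidence behind MEASURE 3.1 can be illustrated with a minimal sketch of an HMAC-chained log: each receipt's MAC covers the previous link's MAC plus the canonical receipt body, so editing any entry invalidates every entry after it. The function names and receipt shape here are assumptions for illustration, not kolm's actual implementation.

```python
import hashlib
import hmac
import json


def chain_mac(key: bytes, prev_mac: str, body: dict) -> str:
    """MAC over the previous link's MAC plus the canonical receipt body."""
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, prev_mac.encode() + canonical, hashlib.sha256).hexdigest()


def append_receipt(key: bytes, log: list, body: dict) -> None:
    """Append a receipt whose MAC chains to the current head of the log."""
    prev = log[-1]["mac"] if log else "genesis"
    log.append({"body": body, "mac": chain_mac(key, prev, body)})


def verify_chain(key: bytes, log: list) -> bool:
    """Recompute every link; any edited body or MAC breaks the chain."""
    prev = "genesis"
    for entry in log:
        expected = chain_mac(key, prev, entry["body"])
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True
```

Appending two receipts and then editing the first body makes verify_chain return False for the whole log, which is the chain-level tamper evidence the table describes.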

function 04 . manage

Risks are prioritized, responded to, and monitored

Gates, kill-switches, and rollback are first-class operations.

id · control → evidence in .kolm

MANAGE 1.1 · A determination is made as to whether the AI system achieves its intended purpose.
  → K-score gate decision is the determination. Receipt records gate value + actual + accept/reject.
MANAGE 1.3 · Responses to identified AI risks are based on assessed priority.
  → kolm tune rollback restores prior head atomically when a regression is detected.
MANAGE 1.4 · Negative impacts are decommissioned or remediated.
  → kolm registry revoke publishes a revocation receipt referencing the offending CID. Verifiers refuse to load.
MANAGE 2.1 · Resources are allocated to relevant AI risks.
  → operator-managed; we surface compile + run cost per artifact in receipt.compute so risk owners can audit spend.
MANAGE 2.2 · Mechanisms are in place and applied to sustain AI risk management.
  → Scheduled kolm verify via GitHub Actions cron. Quarterly re-verification is the default.
MANAGE 2.3 · Procedures are in place to respond to and recover from incidents.
  → Receipt + kolm registry revoke + signed deprecation notice. Public registry surfaces the revocation.
MANAGE 2.4 · Mechanisms are in place to communicate AI risks externally.
  → /verify-prod + /sbom + /spec/rs-1 are the public communication surfaces.
MANAGE 3.1 · Third-party risk is regularly monitored.
  → CycloneDX SBOM refreshed weekly; cosign verify on every dependency.
MANAGE 3.2 · Pre-trained models from third parties are evaluated.
  → manifest.model.base_digest pins the upstream model hash. K-score gate re-applied on every recompile.
MANAGE 4.1 · Post-deployment monitoring plans are implemented.
  → Receipt log + bridge feed JSONL. Operators ship to their SIEM. /audit-log is the in-product view.
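The MANAGE 1.1 determination reduces to a single comparison the receipt can record verbatim. A minimal sketch, assuming receipt keys named gate, actual, and decision; the real receipt schema may differ.

```python
def gate_decision(kscore_gate: float, actual: float) -> dict:
    """Record the MANAGE 1.1 determination: does the measured K-score
    meet the gate the operator declared in the manifest?
    Keys (gate / actual / decision) are assumed names, not the spec."""
    return {
        "gate": kscore_gate,
        "actual": actual,
        "decision": "accept" if actual >= kscore_gate else "reject",
    }
```

An artifact measuring 0.95 against a 0.92 gate is accepted; 0.90 against the same gate is rejected, and either way the three values land in the receipt for later replay.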

Get the mapping doc

The PDF version of this mapping (with cross-references to the AI RMF Playbook and EU AI Act Article 13 requirements) is available on request. Useful for a procurement bundle or a regulator submission.

compliance@kolm.ai