
Enterprise Governance

This guide describes a lightweight convention for keeping a documented AI system inventory — the thing every modern AI-governance framework asks for — without adopting a governance platform.

You should be able to read this in under ten minutes and have something running by the end.

Every modern AI-governance framework expects a documented inventory of AI systems:

  • NIST AI RMF GOVERN-1.3 — documented AI system inventory.
  • ISO/IEC 42001:2023 Clause 7 — AI system documentation.
  • EU AI Act Annex IV — technical documentation per high-risk system.

Large enterprises typically answer this with governance platforms (Credo AI, OneTrust AI Governance, ServiceNow AI Control Tower, IBM watsonx.governance). Smaller teams, open-source projects, or orgs that haven’t invested in a platform need a lighter pattern that still satisfies an auditor.

A Git-native manifest per repo, aggregated on a schedule via a GitHub Action, gets you an audit-grade inventory at zero infra cost. If you later adopt a governance platform, the same manifests become its import source — nothing has to be re-keyed.

In the repo root of each AI system, commit a .ai-register.yaml:

system:
  id: example-support-agent
  name: Example Customer Support Agent
  owner: support-platform-team
  risk_tier: high                 # EU AI Act vocabulary
  deployment: production
  data_classification: restricted
  description: Answers customer-support questions over chat.
  models:
    - provider: anthropic
      model: claude-opus-4-7
  evals:
    path: evals/
    runs_in_ci: true
  controls:                       # <FRAMEWORK>-<VERSION>:<ID>
    - NIST-AI-RMF-1.0:GOVERN-1.3
    - ISO-42001-2023:Clause-7
    - EU-AI-ACT-2024:Art.55
    - INTERNAL-AI-POLICY-1.0:CTRL-CUSTOMER-ISOLATION
  last_reviewed: 2026-04-24

The full example, including comments, is in the agentv repo at examples/governance/ai-register/.ai-register.yaml.

  • risk_tier — EU AI Act vocabulary (prohibited | high | limited | minimal). Other vocabularies (e.g. NIST 800-30) work too; pick one and stick with it.
  • controls — same string format as the eval-level governance schema documented in governance metadata. That overlap is intentional: a control declared on a system can be cross-referenced against the controls exercised by its evals.
  • last_reviewed — a date. Aggregators flag entries older than whatever cadence your governance team works to.
  • evals.path — a pointer to the agentv evals that exercise this system. The aggregator does not run them; it just records that they exist.
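A repo can lint its own manifest before the aggregator ever sees it. The sketch below is a hypothetical CI check, not part of agentv; field names follow the example above, and it assumes the manifest has already been parsed (e.g. with PyYAML's yaml.safe_load, which reads last_reviewed as a datetime.date):

```python
import datetime

# Hypothetical validator sketch. `system` is the parsed "system" mapping,
# e.g. yaml.safe_load(open(".ai-register.yaml"))["system"].
REQUIRED = {"id", "name", "owner", "risk_tier", "deployment", "last_reviewed"}
RISK_TIERS = {"prohibited", "high", "limited", "minimal"}  # EU AI Act vocabulary

def validate_system(system: dict, max_age_days: int = 90) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    problems = [f"missing field: {k}" for k in sorted(REQUIRED - system.keys())]
    if "risk_tier" in system and system["risk_tier"] not in RISK_TIERS:
        problems.append(f"unknown risk_tier: {system['risk_tier']!r}")
    reviewed = system.get("last_reviewed")
    if isinstance(reviewed, datetime.date):  # PyYAML parses YYYY-MM-DD as a date
        age = (datetime.date.today() - reviewed).days
        if age > max_age_days:
            problems.append(f"stale: last reviewed {age} days ago")
    return problems
```

Wiring this into the repo's existing CI keeps bad entries from ever reaching the aggregated register.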

In a dedicated ai-register repo (or your existing governance repo), drop .github/workflows/aggregate.yml from examples/governance/ai-register/. The workflow:

  1. Searches the org via gh api search/code for every .ai-register.yaml.
  2. Fetches each one via gh api repos/.../contents.
  3. Aggregates them with a small Python script into register.csv and a self-contained register.html table.
  4. Surfaces stale entries (last_reviewed more than 90 days old) on the workflow summary and uploads the CSV and HTML as workflow artifacts.
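The heart of step 3 fits in a few lines of Python. This is a sketch, not the actual script from examples/governance/ai-register/; it assumes steps 1 and 2 have already fetched and parsed each manifest into a mapping from "org/repo" to its "system" section:

```python
import csv
import datetime
import io

# Sketch of the aggregation step. `systems` maps "org/repo" to the parsed
# "system" mapping of that repo's .ai-register.yaml.
def aggregate(systems: dict[str, dict], max_age_days: int = 90) -> tuple[str, list[str]]:
    """Return (register.csv contents, list of repos with stale entries)."""
    rows, stale = [], []
    today = datetime.date.today()
    for repo, system in sorted(systems.items()):
        reviewed = system.get("last_reviewed")
        age = (today - reviewed).days if isinstance(reviewed, datetime.date) else None
        if age is None or age > max_age_days:
            stale.append(repo)  # missing dates count as stale too
        rows.append({
            "repo": repo,
            "id": system.get("id", ""),
            "owner": system.get("owner", ""),
            "risk_tier": system.get("risk_tier", ""),
            "deployment": system.get("deployment", ""),
            "last_reviewed": reviewed.isoformat() if isinstance(reviewed, datetime.date) else "",
        })
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]) if rows else ["repo"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue(), stale
```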

Required secret: GH_AGGREGATE_TOKEN with repo (or read:org) scope, scoped to the org you want to enumerate. For public repos the default GITHUB_TOKEN is sufficient.

The workflow is fewer than 150 lines of YAML, runs in a single job, and has no third-party dependencies beyond gh (preinstalled on ubuntu-latest) and PyYAML.
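In outline, such a workflow might look like the trimmed sketch below. YOUR_ORG is a placeholder, and the aggregate.py step stands in for the real script; the full version, including HTML generation and stale-entry reporting, lives in examples/governance/ai-register/:

```yaml
name: aggregate-ai-register
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly, Monday 06:00 UTC
  workflow_dispatch:
jobs:
  aggregate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Find manifests across the org
        env:
          GH_TOKEN: ${{ secrets.GH_AGGREGATE_TOKEN }}
        run: |
          gh api --paginate "search/code?q=org:YOUR_ORG+filename:.ai-register.yaml" \
            --jq '.items[].repository.full_name' > repos.txt
      - name: Aggregate
        run: python aggregate.py   # writes register.csv and register.html
      - uses: actions/upload-artifact@v4
        with:
          name: ai-register
          path: |
            register.csv
            register.html
```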

A useful starting cadence:

  • Engineers update .ai-register.yaml whenever a system enters or leaves production, or its model / scope changes materially.
  • The aggregator runs weekly via cron.
  • The workflow summary is the source of truth for stale entries; if your team prefers a Slack ping, add one extra step that posts to a webhook.
  • Quarterly, the governance team walks the CSV and updates last_reviewed on the systems they signed off on.
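That optional Slack step might look like the following sketch. The SLACK_WEBHOOK_URL secret and the STALE_COUNT environment variable (exported by an earlier step) are assumptions, not part of the example workflow:

```yaml
# Hypothetical extra step for aggregate.yml; assumes an earlier step exported
# STALE_COUNT and that a SLACK_WEBHOOK_URL secret exists.
- name: Notify Slack of stale entries
  if: env.STALE_COUNT != '0'
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
  run: |
    curl -sf -X POST -H 'Content-Type: application/json' \
      -d "{\"text\": \"AI register: $STALE_COUNT entries overdue for review\"}" \
      "$SLACK_WEBHOOK_URL"
```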

That’s the whole loop.

agentv does not parse .ai-register.yaml. The convention is orthogonal:

  • The manifest documents which AI systems exist, who owns them, and which controls they are accountable for.
  • The eval YAML documents which behaviour a given system was tested against.

Both files use the same <FRAMEWORK>-<VERSION>:<ID> control format, so a script can intersect “manifest claims this system is covered by NIST-AI-RMF-1.0:MEASURE-2.7” with “eval results show 14 cases tagged NIST-AI-RMF-1.0:MEASURE-2.7 ran this quarter.”
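That intersection fits in a few lines. The sketch below is illustrative, not an agentv API; it assumes you have extracted the manifest's controls list and a flat list of control tags from eval results:

```python
# Cross-reference sketch: control IDs use the shared
# <FRAMEWORK>-<VERSION>:<ID> format from both files.
def coverage_gaps(manifest_controls: set[str],
                  eval_case_tags: list[str]) -> dict[str, int]:
    """Map each control claimed in the manifest to how many eval cases
    exercised it; a count of zero flags a claimed-but-untested control."""
    counts = {c: 0 for c in manifest_controls}
    for tag in eval_case_tags:
        if tag in counts:
            counts[tag] += 1
    return counts
```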

If and when your org adopts Credo AI, OneTrust AI Governance, ServiceNow AI Control Tower, or IBM watsonx.governance:

  • Each platform accepts CSV / JSON imports keyed on system identifiers.
  • Your register.csv artifact already has the per-system row each importer expects.
  • The controls column maps directly onto the framework-control fields the platform exposes — there is nothing to re-key.

You don’t have to rip out the manifest convention either. Most teams keep the Git-native artifact as the canonical source and the platform as the operations surface, syncing one direction.