Your autonomous systems are making decisions.
Can you name who owns them?

Not the project sponsor. Not the steering committee.
The accountable owner — documented, auditable, defensible.

UK-focused. Built for regulated environments. Designed to survive audit, litigation, and underwriting scrutiny.

The 7 Questions Your Board Will Ask

These aren't AI questions. They're accountability questions. If you can't answer them, you have an exposure.

1

Who authorised the system's objectives?

Not "who approved the project" — who approved what the system is allowed to pursue.

2

Who owns behaviour over time?

Name a single accountable role for the system's actions after go-live — not a committee.

3

What delegation level are we operating at?

Is this D0, D1, D2, or D3 — and is that classification documented and reviewed?

4

What can it do without human approval?

Tools, permissions, data access, external communications, transactions, changes in production.

5

Can we reconstruct decisions after an incident?

Do we have behaviour logging, traceability, and retention that would survive audit and discovery?

6

Who can intervene — and how fast?

Is there a tested kill/rollback path, escalation authority, and a defined threshold for intervention?

7

What happens when it acts within policy but causes harm?

Where does liability land — Legal, Risk, Technology, the business — and is that agreed in advance?

If your answers are partial, inconsistent across functions, or depend on "we'll investigate if needed" — that's the gap.

The Problem

Organisations are deploying agentic systems faster than governance is adapting. The problem isn't model performance. It's accountability over time.

When an autonomous system causes harm while acting within policy, most organisations can't demonstrate who authorised its objectives, who owned its behaviour after go-live, or what it was permitted to do without human approval.

That's a litigation problem. An audit problem. An insurance problem. Not a technology debate.

The question isn't whether your systems work. It's whether you can prove who was accountable when they did.

The Diagnostic

4–6 weeks. Board-ready output.

What we do

  • Identify autonomous and agentic systems in scope
  • Classify delegation level (D0–D3)
  • Map accountability ownership and escalation gaps
  • Define minimum defensible controls for D2/D3 systems

What you get

  • Board memo (2–3 pages) naming accountability gaps
  • Risk register entry in adoptable wording
  • Delegation map of in-scope systems
  • Minimum defensible controls baseline

We don't implement tooling. Output is designed to be defensible across Risk, Legal, Audit, and Technology.

The Standard

D0-D3:2026 — Classification of delegated authority in autonomous decision systems

D0-D3 is a trigger-based classification that answers one question: when does autonomous system delegation become an underwriting, audit, or liability issue?

Classification Levels

  • D0 — Assistive only. Human acts.
  • D1 — System proposes. Human approves each action.
  • D2 — System acts within limits. No per-action approval. ← inflection point
  • D3 — Ongoing authority. System can change its own behaviour.

Underwriting implications: D0/D1 = standard operational risk. D2 = requires documented limits and controls as placement conditions. D3 = treated as autonomous authority — coverage priced as if the insured signed the outcomes.
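The trigger-based logic of the levels above can be sketched in a few lines of code. This is an illustration only: the function and flag names are ours, not part of the D0-D3:2026 standard, and real classification would rest on documented evidence rather than booleans.

```python
from enum import IntEnum

class DelegationLevel(IntEnum):
    D0 = 0  # Assistive only: human acts
    D1 = 1  # System proposes; human approves each action
    D2 = 2  # System acts within limits; no per-action approval
    D3 = 3  # Ongoing authority; system can change its own behaviour

def classify(acts_without_per_action_approval: bool,
             can_modify_own_behaviour: bool,
             proposes_actions: bool) -> DelegationLevel:
    """Trigger-based: the highest trigger that fires determines the level."""
    if can_modify_own_behaviour:
        return DelegationLevel.D3
    if acts_without_per_action_approval:
        return DelegationLevel.D2
    if proposes_actions:
        return DelegationLevel.D1
    return DelegationLevel.D0

# A system that executes within limits but cannot alter its own
# behaviour lands at D2 -- the underwriting inflection point.
level = classify(acts_without_per_action_approval=True,
                 can_modify_own_behaviour=False,
                 proposes_actions=True)
```

The ordering matters: D3 is checked first because self-modification dominates every other trigger, which mirrors why underwriters treat it as autonomous authority regardless of any per-action limits.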

Underwriting Conditions (D2/D3)

  • Objective authorisation documented
  • Scope and boundary definition reviewed annually
  • Behaviour logging admissible in claims review
  • Named accountable owner (senior manager level)
  • Escalation and kill/rollback capability tested
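The conditions above are a checklist, and checking a D2/D3 system against it is simple set arithmetic. A minimal sketch, with condition keys we invented for illustration (they are not identifiers from the standard):

```python
# Illustrative condition keys; the D0-D3:2026 standard defines the
# authoritative wording, not these identifiers.
PLACEMENT_CONDITIONS = {
    "objective_authorisation_documented",
    "scope_reviewed_annually",
    "behaviour_logging_admissible",
    "named_accountable_owner",
    "kill_rollback_tested",
}

def placement_gaps(evidence: set) -> set:
    """Return the conditions a D2/D3 system still fails to evidence."""
    return PLACEMENT_CONDITIONS - evidence

# A system with only an accountable owner named still has four gaps.
gaps = placement_gaps({"named_accountable_owner"})
```

Anything returned by `placement_gaps` is, in the language of this page, an exposure: a condition an underwriter, auditor, or claims reviewer could ask you to evidence.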

The Quick Reference is openly published. The full D0-D3:2026 standard is available on request.

Who This Is For

UK and EU regulated environments.

Request a Briefing

If you can't name who owns autonomous system behaviour, you've found the gap.

contact@graventure.com

Include your industry, jurisdiction (UK/EU/US), and whether you have D2/D3 systems in scope.