Executive Summary
Most AI contracts in the market today were drafted for 2022 software. It is 2026, and that gap is a board-level liability. MIT NANDA reports that 95% of GenAI pilots return zero measurable value, and Gartner forecasts more than 2,000 "death by AI" legal claims by year-end 2026 (MIT NANDA, 2025; Gartner, 2025c). The failure is not technology selection — it is governance architecture missing before the contract is signed. The GOVERN Before You Buy framework provides the six-question pre-procurement diagnostic that boards and C-suites use to verify governance readiness across Governance accountability, Organisational data readiness, Verified regulatory posture, Established ROI baseline, Risk-managed exit strategy, and Non-negotiable contract clauses. Run it before signature, not after incident.
Each letter of GOVERN names a specific accountability gap. Answer all six with documented evidence before procurement begins. Hesitation on any answer is the diagnostic — the gap is yours, not the vendor's.
Named Accountable Exec
A specific senior official accountable for every decision the system affects, including edge cases and adverse outcomes — before procurement begins, not after.
CIO presents three names — cannot pick one → defer procurement until accountability is assigned, in writing.
Autonomy Threshold
For agentic systems: the financial value or decision consequence at which the agent pauses for human approval. Documented with delegated-authority rigour.
Agent authorised to approve invoices below $10,000 → finance VP signs the threshold, audit committee receives quarterly variance report.
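The autonomy-threshold check above can be sketched in code. This is a minimal illustration, not a vendor API: the $10,000 limit mirrors the example, and the function names are hypothetical.

```python
# Illustrative sketch: enforcing a documented autonomy threshold.
# The $10,000 limit mirrors the example above; names are hypothetical.

AUTONOMY_THRESHOLD = 10_000  # signed off by the finance VP, in writing

def agent_may_approve(invoice_amount: float) -> bool:
    """Return True only when the agent may act without human review."""
    return invoice_amount < AUTONOMY_THRESHOLD

def route(invoice_amount: float) -> str:
    """Route an invoice to auto-approval or to the human approval queue."""
    if agent_may_approve(invoice_amount):
        return "auto-approve"          # logged for the quarterly variance report
    return "pause-for-human-approval"  # escalate per delegated authority
```

The boundary case matters: an invoice at exactly the threshold pauses for human approval, so the documented limit is the agent's ceiling, not its target.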
Escalation Chain
The named owner who handles exceptions the agent cannot resolve. Routed, time-bound, and integrated with existing incident response.
Exception → ops manager (15 min) → CIO (1 hr) → Risk Committee (24 hr) — logged in change-management system.
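The time-bound routing above can be expressed as a simple lookup — a sketch only, with the tiers and time limits taken from the example chain and the data structure assumed for illustration.

```python
# Illustrative sketch: time-bound escalation routing for unresolved exceptions.
# Tiers and limits mirror the example chain; the structure is hypothetical.

ESCALATION_CHAIN = [
    (15,      "ops manager"),     # owns the exception for the first 15 minutes
    (60,      "CIO"),             # owns it up to 1 hour
    (24 * 60, "Risk Committee"),  # owns it up to 24 hours
]

def current_owner(minutes_open: int) -> str:
    """Return who owns the exception after it has been open this long."""
    for limit, owner in ESCALATION_CHAIN:
        if minutes_open <= limit:
            return owner
    return "Risk Committee"  # terminal owner; log to change management
```

Encoding the chain as data rather than prose makes the routing auditable: the same table that drives the code can be logged in the change-management system.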
Configuration Authority
The named owner of the agent's operational parameters. Modification rights documented with the same rigour as any delegated authority instrument.
Configuration changes require dual approval (CAIO + system owner) → logged with rationale → reviewed quarterly by audit committee.
Completeness
Whether all required data fields are populated for the use case the AI system will execute. Missing fields produce confident wrong answers.
Customer file 92% complete on tier-1 attributes → remediate to 99% before vendor evaluation, or scope vendor to handle missingness explicitly.
Consistency
Whether the same data point reads the same way across systems of record. Inconsistency multiplies model error rates without warning.
Customer ID exists in CRM and ERP but field formats differ → harmonise before procurement — do not pay vendor to discover this.
Accuracy
Whether the data correctly reflects the real-world entity it describes. Inaccurate data trains inaccurate models.
Address fields validated against postal authority → 7% mismatch rate → remediate before training begins.
Validity
Whether the data conforms to defined business rules and formats. Validity failures cascade silently into model output.
Date fields validated to ISO 8601 → 12% non-conforming → rule-based remediation before model ingest.
Uniqueness
Whether duplicate records inflate or distort training data. Duplicates bias every downstream output.
Master data management identifies 4% duplicate customer records → resolve before training, or vendor inherits the bias.
Timeliness
Whether the data is fresh enough for the decision the AI will make. Stale data produces decisions about a world that no longer exists.
Pricing model trained on 18-month-old data → commodity prices have shifted 30% → refresh data architecture before procurement.
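Three of the six data-readiness dimensions above — completeness, validity, and uniqueness — reduce to simple percentage checks that can run before any vendor conversation. The sketch below uses a toy record set with hypothetical field names; a real assessment would run against the systems of record.

```python
from datetime import date

# Illustrative sketch: scoring completeness, validity, and uniqueness
# on a toy record set. Field names and data are hypothetical.

records = [
    {"id": "C1", "postal": "K1A0B1", "signup": "2025-03-14"},
    {"id": "C2", "postal": None,     "signup": "2025-06-01"},
    {"id": "C1", "postal": "K1A0B1", "signup": "2025-03-14"},  # duplicate ID
    {"id": "C3", "postal": "M5V2T6", "signup": "14/03/2025"},  # not ISO 8601
]

def completeness(field: str) -> float:
    """Share of records with the field populated."""
    return sum(r[field] is not None for r in records) / len(records)

def validity_iso8601(field: str) -> float:
    """Share of records whose date field parses as ISO 8601 (YYYY-MM-DD)."""
    ok = 0
    for r in records:
        try:
            date.fromisoformat(r[field])
            ok += 1
        except (TypeError, ValueError):
            pass
    return ok / len(records)

def uniqueness(field: str) -> float:
    """Share of distinct values in the field (1.0 means no duplicates)."""
    values = [r[field] for r in records]
    return len(set(values)) / len(values)
```

On this toy set, each check returns 0.75 — the kind of number that, per the examples above, is remediated before procurement rather than discovered at the vendor's billing rate.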
TBS Directive Class
Government of Canada: the system's risk classification under the TBS Directive on Automated Decision-Making. Compliance deadline for existing systems is 24 June 2026.
System classified Level 3 → AIA required → ADM-level approval and bilingual publication on Open Government Portal before production.
EU AI Act Tier
Cross-border: the system's risk tier under the EU AI Act (Unacceptable, High, Limited, Minimal). High-risk systems trigger registration, conformity assessment, and post-market monitoring.
Recruitment screening AI sold into EU → High-risk → vendor must show CE marking and conformity assessment before contract proceeds.
California EO N-5-26
North American: vendor certification under California Executive Order N-5-26 (signed 30 March 2026). Now functioning as a de facto continental procurement benchmark.
Vendor cannot produce N-5-26 certification documentation → that is not a data gap, it is a governance red flag — defer.
AIA / Risk Documentation
Vendor capability to provide written risk classification and conformity documentation across applicable jurisdictions, on request.
RFP question: "Provide risk classification under TBS, EU AI Act, and N-5-26" → vendor must respond within five business days, with documentation.
CPI Threshold
Cost Performance Index: the predefined threshold below which the AI investment triggers escalation. CPI below 0.90 is systemic failure, not a scheduling issue.
Threshold set at CPI 0.95 (amber) and 0.90 (red) → PMO escalates RED to Governance Board within 5 business days.
SPI Threshold
Schedule Performance Index: the predefined threshold below which delivery slippage triggers replanning. Pre-set thresholds remove emotion from reporting.
SPI below 0.85 → mandatory replan within 10 business days → documented variance analysis presented to Risk Committee.
BAC vs EAC
Budget at Completion vs Estimate at Completion: forecast variance that signals systemic capital exposure. Predictive, not historical.
Q3 EAC forecasts 18% over BAC → CFO presents three-option rebalance to Board → decision documented before next investment tranche.
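The three earned-value signals above follow from standard EVM definitions: CPI = EV / AC, SPI = EV / PV, and EAC = BAC / CPI (assuming current cost efficiency persists). A minimal sketch, with hypothetical figures and thresholds mirroring the amber/red examples:

```python
# Illustrative sketch: the three earned-value signals named above.
# Figures are hypothetical; thresholds mirror the amber/red examples.

def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost Performance Index: value earned per dollar actually spent."""
    return earned_value / actual_cost

def spi(earned_value: float, planned_value: float) -> float:
    """Schedule Performance Index: value earned vs. value planned to date."""
    return earned_value / planned_value

def eac(bac: float, cpi_value: float) -> float:
    """Estimate at Completion, assuming current cost efficiency persists."""
    return bac / cpi_value

def status(cpi_value: float) -> str:
    """Map CPI to the pre-set escalation band."""
    if cpi_value < 0.90:
        return "RED — escalate to Governance Board"
    if cpi_value < 0.95:
        return "AMBER — PMO review"
    return "GREEN"
```

For example, $1.8M earned against $2.0M spent gives CPI 0.90 (amber), and a $10M BAC at CPI 0.80 forecasts a $12.5M EAC — the predictive variance the Board sees before the next tranche, not after.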
Counterfactual Baseline
The cost and outcome that would have occurred without the AI system. Mandatory. The vendor's projected ROI is not your baseline — your current process performance is.
Manual process baseline: $2.1M/yr, 14-day cycle → AI alternative measured against this, not vendor's "industry average" benchmark.
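The counterfactual discipline above is simple arithmetic once the baseline is documented. A sketch, with all figures hypothetical and taken from the example:

```python
# Illustrative sketch: measuring the AI option against the documented
# counterfactual baseline, not the vendor's projection. Figures hypothetical.

BASELINE_ANNUAL_COST = 2_100_000  # manual process, documented pre-procurement

def annual_savings(ai_annual_cost: float) -> float:
    """Savings versus the counterfactual, not a vendor 'industry average'."""
    return BASELINE_ANNUAL_COST - ai_annual_cost

def first_year_roi(ai_annual_cost: float, ai_investment: float) -> float:
    """Simple first-year ROI against the counterfactual baseline."""
    return annual_savings(ai_annual_cost) / ai_investment
```

An AI option costing $1.4M per year against a $2.0M investment yields $700K in annual savings — a 35% first-year ROI against *your* baseline, whatever the vendor's deck claims.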
Data Portability
Format, timeframe, and verification method for retrieving organisational data on contract termination. Specified in the contract, not negotiated at renewal.
Contract clause: "Data exported in vendor-neutral format within 30 days of termination notice, verified by independent third party."
Certified Data Destruction
Deletion obligations across primary, backup, and derivative data stores — including model fine-tunes that contain organisational data.
Contract clause: "Vendor provides destruction certificate for primary, DR, backup, and derivative artefacts within 90 days, signed by named officer."
Interoperability
Whether data, model fine-tunes, and configuration artefacts are portable to an alternative vendor without penalty or technical lock-in.
Contract clause: "Configuration and fine-tune artefacts exported in open formats → no proprietary encoding, no migration penalty."
Supervision & Audit Rights
What the vendor logs, for how long, and whether you can audit the reasoning chain behind any AI decision. Contracts must address accountability failures, not only uptime failures.
Contract clause: "Vendor logs full reasoning chain for 7 years → customer audit rights with 10 business days' notice, on-site or remote."
IP Indemnification
Vendor indemnifies against intellectual property claims on AI-generated outputs. Google, Microsoft, and IBM provide this. Many vendors do not. Non-negotiable for customer-facing or regulated workflows.
Contract clause: "Vendor indemnifies customer against third-party IP claims arising from outputs of the model in the agreed scope of use."
Autonomy Liability
For agentic systems: vendor liability when the agent takes an autonomous action with financial consequences. The contract must specify who is liable, up to what amount, and what the remedy is.
Contract clause: "Vendor liable up to $5M per incident for autonomous-action errors above defined threshold → cure period 30 days → reimbursement schedule defined."
Figure 1: The Six GOVERN Questions — Pre-Procurement Reference Matrix
The six questions sequence from organisational readiness through contract architecture to board authorisation. Skipping any link in the chain creates a downstream gap that costs ten times more to close after signature than before.
Step 1 · Pre-Procurement Readiness
G-accountable→O-data-quality
Confirm a named accountable executive and verify data quality across the six dimensions before vendor conversations begin.
Step 2 · Regulatory Risk Screen
V-jurisdiction→risk-classification
Establish jurisdictional exposure (GC, EU, California) and require vendor risk classification documentation in the RFP.
Step 3 · Value-Capture Baseline
E-counterfactual→CPI/SPI thresholds
Document current process performance as the counterfactual. Pre-set CPI and SPI escalation thresholds before signature.
Step 4 · Vendor Dependency Mitigation
R-portability→R-destruction→R-interoperability
Architect the exit before the entry. Data portability, certified destruction, and interoperability are contract clauses, not aspirations.
Step 5 · Contract Architecture
N-audit-rights→N-IP-indemnity→N-autonomy-liability
Replace 2022 SaaS templates with 2026 AI-specific clauses. Indemnification, audit rights, and autonomous-action liability are non-negotiable.
Step 6 · Board Authorisation
three-question-test→approve or defer
Run the three-question board test (Section 5). If management cannot answer clearly, the procurement does not proceed.
Warning
Skipping any chain step creates a cascading gap. The most common failure pattern is jumping from Step 1 directly to Step 5 — signing the contract without establishing the regulatory and ROI architecture. The contract then carries risk it was never designed to carry.
Figure 2: The GOVERN Diagnostic Chain — Sequential Question Architecture
The same framework applies to every AI procurement — only the emphasis shifts by situation. Use this matrix to identify which GOVERN dimension demands the most attention for the deal in front of you.
| Procurement Situation | GOVERN Focus | Why It Matters |
| --- | --- | --- |
| Standalone AI tool (single use case) | G · E · R | Accountability and exit clarity dominate — tool-level lock-in is the primary risk if value-capture stalls. |
| Embedded AI feature added to existing platform | V · N | The vendor relationship pre-exists — new contract clauses and re-verified regulatory classification are the leverage points. |
| Generative AI enterprise service (LLM-based) | O · N | Output quality is data-quality-bounded; IP indemnification and audit rights are the contract-architecture priorities. |
| Agentic system (autonomous action inside workflows) | G · N | Autonomy threshold, configuration authority, and autonomous-action liability are the contract architecture — standard SaaS terms cannot carry this weight. |
| Vendor consolidation (replacing three tools with one) | R · E | Dependency concentration risk increases; exit architecture and ROI baseline must reflect the consolidated single point of failure. |
| Renewal at price increase | E · R | Renewal price increases are forecast at 15.2% for 2026 — the counterfactual baseline and exit credibility determine your negotiating position. |
| Government of Canada Protected B deployment | G · V | TBS Directive on ADM, AIA, ADM-level approval, and Open Government Portal publication are non-negotiable before production. |
| Cross-border US/Canada deal | V · R | California EO N-5-26 certification, EU AI Act tier (if applicable), and data residency dictate contract architecture. |
| M&A-acquired AI platform (governance debt) | G · O | Inherited models often lack documented accountability and data lineage — treat as new procurement, not in-place asset. |
| Proof-of-concept transitioning to production | E · N | POC contracts are written to be cheap, not durable — production-grade ROI baseline and contract clauses replace POC terms before scaling. |
Figure 3: Procurement Situation Matrix — Ten Executive Scenarios Mapped to GOVERN Focus
The framework applies across all AI vendor archetypes, but the difficulty of executing each question varies. Agentic systems demand the most rigorous governance — standard SaaS contracts cannot carry the autonomous-action risk.
| GOVERN Dimension | Standalone AI | Embedded Feature | GenAI Service | Agentic System |
| --- | --- | --- | --- | --- |
| G — Governance | Good | Fair | Good | Critical |
| O — Data Readiness | Excellent | Fair | Excellent | Excellent |
| V — Regulatory Posture | Excellent | Good | Excellent | Excellent |
| E — ROI Baseline | Excellent | Good | Excellent | Excellent |
| R — Exit Strategy | Excellent | Fair | Good | Good |
| N — Contract Clauses | Good | Fair | Good | Critical |
Pro Tip
Agentic systems require an upgrade to every dimension — particularly Governance and Contract Clauses. Standard governance asks "who is accountable when the system provides incorrect advice?" Agentic governance asks "who approved the configuration that authorised the agent to act, and at what autonomy threshold?" Accountability shifts from the operator to the executive who approved the operational parameters. Standard 2022 SaaS contracts cannot carry that weight (Morgan Lewis, 2026).
Figure 4: AI Vendor Archetype Compatibility — Six Questions Across Four Vendor Types
Three questions separate rubber-stamp approval from architected governance. Run them in any board or executive committee meeting before approving any AI procurement — standalone tool, embedded feature, GenAI service, or agentic system.
The Three-Question Board Test
1. Name the accountable executive. Who is the named senior official accountable for this system's governance, compliance, and outcomes? For agentic systems, who approved the autonomy thresholds, and at what financial value does the agent pause for human review? (G + autonomy threshold)
2. Score governance readiness. What is our readiness score across the six GOVERN dimensions, and what is the documented gap? Where the score is incomplete, what is the closure plan and the named owner? (G + O + V + E + R + N)
3. Verify the contract. Does our contract include IP indemnification, supervision and audit rights, and vendor liability for autonomous errors — or does it still read like a 2022 SaaS agreement? (N — non-negotiable)
If management cannot answer clearly, the procurement does not proceed. The diagnostic is not an obstacle to AI adoption — it is the architecture that makes AI adoption survivable.
Six factors govern how the framework is interpreted and applied. The Executive Insight at the close is the single meta-takeaway of the entire QRC.
| Factor | Detail |
| --- | --- |
| Scope boundary | GOVERN is a pre-procurement governance diagnostic, not a vendor technical evaluation. Pair it with a Gartner, McKinsey, or PMI vendor evaluation for the buy-side technical scoring. |
| Agentic-specific upgrade | Standard 2022 SaaS contracts do not address autonomy thresholds, configuration authority, or autonomous-action liability. For any agentic deployment, treat the N (Non-negotiable Contract Clauses) dimension as load-bearing — not a checklist. |
| Government of Canada context | The TBS Directive on Automated Decision-Making compliance deadline for existing systems is 24 June 2026. Algorithmic Impact Assessments must be ADM-approved and published bilingually on the Open Government Portal before production deployment (Treasury Board of Canada Secretariat, 2025). |
| Counterfactual baseline requirement | The vendor's projected ROI is not your baseline. Your current process performance is. Establish the baseline before vendor conversations begin — otherwise the vendor's number becomes the anchor by default. |
| Honest limitation | EVM (Earned Value Management) for AI investment requires three organisational prerequisites: integrated data architecture connecting scope, schedule, and cost in near-real time; executive sponsorship that treats variance as signal not blame; and a PMO with enforcement mandate. Without these, EVM produces the illusion of control, not the reality. |
| Review cadence | The framework is the minimum bar, not a ceiling. Reassess annually or on any material regulatory change — including amendments to the TBS Directive, the EU AI Act delegated acts, or California EO N-5-26 implementation guidance. |
Executive Insight
The procurement contract is the last point where governance can be architected at low cost. After signature, every correction costs ten times more — and for agentic systems, the correction may arrive only after the financial or reputational loss has already been booked. The six GOVERN questions are not obstacles to AI adoption. They are the architecture that makes AI adoption survivable. Run the diagnostic before the procurement, not during the incident response.
Verifiable References (APA 7)
- Gartner. (2025a, October 22). Gartner forecasts worldwide IT spending to grow 9.8% in 2026, exceeding $6 trillion for the first time. https://www.gartner.com/en/newsroom/press-releases/2025-10-22-gartner-forecasts-worldwide-it-spending-to-grow-9-point-8-percent-in-2026-exceeding-6-trillion-dollars-for-the-first-time
- Gartner. (2025b, November). Predicts 2026: AI sourcing excellence for cost control and enhanced value. https://www.gartner.com/en/documents/7206530
- Gartner. (2025c, October 21). Strategic predictions for 2026: How AI's underestimated influence is reshaping business. https://www.gartner.com/en/articles/strategic-predictions-for-2026
- McKinsey & Company. (2025, June). The state of AI: How organisations are rewiring to capture value. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- MIT NANDA. (2025, July). The GenAI divide: State of AI in business 2025. Massachusetts Institute of Technology, Project NANDA. https://nanda.mit.edu/state-of-ai-in-business-2025
- Morgan Lewis. (2026, April). From assistant to actor: What the rise of agentic AI means for your business. https://www.morganlewis.com/blogs/sourcingatmorganlewis/2026/04/from-assistant-to-actor-what-the-rise-of-agentic-ai-means-for-your-business
- Newsom, G. (2026, March 30). Executive Order N-5-26: AI vendor certification and procurement framework. State of California. https://www.gov.ca.gov/wp-content/uploads/2026/03/3.30-FINAL-Trusted-AI-Procurement-EO-N-5-26.pdf
- RAND Corporation. (2025). The root causes of failure in artificial intelligence projects and how to succeed. https://www.rand.org/pubs/research_reports/RRA2680-1.html
- Treasury Board of Canada Secretariat. (2025). Directive on Automated Decision-Making. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592
Prepared by Patrick van Abbema, MBA, CMC, PMP, CBAP · AltNexus AI Core · AI Strategy. Architected.™
Methodology: Six-question pre-procurement diagnostic synthesised from Government of Canada AI governance practice (TBS Directive on Automated Decision-Making), enterprise AI procurement caselaw (Morgan Lewis, 2026), MIT NANDA failure-mode research, and 30+ years leading $1M–$100M+ transformation programmes across the Government of Canada and Fortune 500. Fractional CDAO/CAIO engagements begin with a 60-minute procurement readiness diagnostic.