The Foundation
Strategy, data, sustainability, and the adoption landscape
Five interconnected topics that form the base layer for any AI capability programme.
The decisions your organisation needs to make — AI strategy, governance frameworks, vendor selection, and policy.
In 30 Seconds
AI Strategy answers: Where are we going with AI, and why? AI Governance answers: How do we get there safely and responsibly?
Most organisations need both – but many have one without the other. A strategy without governance creates risk. Governance without strategy creates bureaucracy.
Where we help: Connecting strategy to execution. Many organisations have AI strategies that never translate into capability. We bridge that gap through practical implementation – making decisions actionable while keeping governance embedded in workflows.
Two Distinct Disciplines
AI Strategy
“What are we trying to achieve with AI?”
- Vision: How AI fits your business direction
- Priorities: Which use cases matter most
- Investment: Where to allocate resources
- Roadmap: Sequencing and dependencies
- Measurement: How you'll know it's working
- Efficiency vs Opportunity: Are you using AI to cut costs or to expand capacity? Jensen Huang: “companies with imagination will do more with more.” Organisations using AI to expand what's possible outperform those focused purely on headcount reduction.
AI Governance
“How do we use AI safely and responsibly?”
- Policy: What's allowed, what's not
- Risk: Identifying and managing AI-specific risks
- Compliance: Regulatory requirements (EU AI Act, etc.)
- Accountability: Who decides what, and who's responsible
- Controls: Technical and procedural safeguards
What Organisations Need to Implement
Effective AI strategy and governance require concrete components – not just documents.
Strategy Components
AI Vision Statement
Clear articulation of how AI supports business objectives – shared across leadership.
Use Case Portfolio
Prioritised list of AI applications with business cases and success criteria.
Investment Framework
Budget allocation, build vs buy decisions, and ROI measurement approach.
Capability Roadmap
Sequenced plan for building AI capability – technology, people, and process.
Governance Components
AI Policy
Clear expectations for AI use – what's permitted, what requires approval, what's prohibited.
Roles & Responsibilities
Who owns AI decisions across legal, tech, data, risk, and business functions.
Risk Framework
AI-specific risk assessment – layered by use case risk level (internal vs customer-facing).
Technical Controls
Approved tools, data classification, access controls, and monitoring.
Decision Trace Logging
Record exceptions, approvals, and rationale so decisions are explainable and auditable.
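To make this concrete, here is a minimal sketch of what a decision trace entry might look like in practice. The field names and the `append_trace` helper are illustrative assumptions, not a prescribed schema; the point is that each record captures who decided, what, and why, in a machine-readable form an auditor can query.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One auditable record of an AI governance decision."""
    decision: str    # what was decided
    kind: str        # "approval", "exception", or "rejection"
    decided_by: str  # accountable role or person
    rationale: str   # plain-language reasoning
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_trace(log: list, trace: DecisionTrace) -> str:
    """Append a trace to the log and return its JSON form for audit export."""
    log.append(trace)
    return json.dumps(asdict(trace))

audit_log = []
record = append_trace(audit_log, DecisionTrace(
    decision="Approve vendor summarisation tool for internal documents",
    kind="approval",
    decided_by="AI working group",
    rationale="Internal-only data; tool is on the approved list",
))
```

Even a structure this simple is enough to answer the audit questions that policy documents alone cannot: when was the exception granted, by whom, and on what grounds.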
Canonical Truth Contracts
Clear ownership of which systems are authoritative for critical metrics and records.
The Connecting Tissue
Working Groups
Cross-functional forums connecting strategy direction with governance requirements.
Leadership Fluency
Board and executive capability to set direction and govern effectively.
Privacy by Architecture
Confidentiality is not just a legal concern. It is a trust and adoption barrier. If teams cannot prove how sensitive data is protected, AI use stalls or moves underground.
Technical Guarantees
- Data minimisation and selective retrieval
- Encryption in transit and at rest, with clear key ownership
- Auditable access controls and approved tool lists
- Clear data retention and deletion paths
Workflow Safeguards
- Informed consent for sensitive use cases
- Minimal identifiers and redaction by default
- Human review and accountability for outputs
- Usage logging and periodic audits
Policy promises are not enough. The bar is technical proof and repeatable safeguards that stand up to audit.
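“Redaction by default” can be sketched as a small pre-processing step that strips likely identifiers before text reaches an AI tool. The two patterns below are illustrative assumptions and far from exhaustive; a production pipeline needs vetted PII detection, not a pair of regexes.

```python
import re

# Illustrative patterns only: real redaction needs a vetted PII library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely identifiers before text reaches an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

redacted = redact("Contact jane.doe@example.com or +44 20 7946 0958")
# "Contact [EMAIL] or [PHONE]"
```

The value is less in the patterns themselves than in where the step sits: applied automatically in the workflow, it is a repeatable safeguard that stands up to audit; applied manually, it is a policy promise.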
Governance as Enabler
The most effective organisations treat governance as a way to move faster, not slower.
| Blocking Governance | Enabling Governance |
|---|---|
| Rules without clarity | Clear expectations everyone understands |
| Block external tools | Provide approved alternatives |
| Fear-based compliance | Education-based empowerment |
| Policies on shelves | Governance embedded in tools and workflows |
When governance is done well, employees know exactly what's expected. They have approved tools that work. They feel safe to experiment within clear boundaries. Auditability comes from decision traces, not just policy documents. The result: more innovation, not less.
Trust Maturity: From Approval to Monitoring
Research into how people actually use AI agents reveals a clear maturity pattern. The shift from new to experienced AI use isn't “hands off” – it's “hands different.”
Early-Stage Teams
- Approve most AI actions manually
- Rarely interrupt or redirect
- Treat AI as a subordinate needing oversight
- Default to short, safe, familiar tasks
Mature Teams
- Auto-approve 40% of routine actions
- Interrupt nearly twice as often on what matters
- Treat AI as a colleague they trust but actively steer
- Delegate complex, multi-step, hours-long work
What This Means for Governance
Governance frameworks need to evolve with trust maturity. A team using AI for the first time needs different guardrails than one that's been working with it for six months. Static, one-size-fits-all policies either restrict experienced teams or give too much latitude to new ones.
What people actually worry about: Anthropic's 81,000-person study (March 2026) found that unreliability (26.7%) is the #1 concern — ahead of job loss (22.3%), autonomy loss (21.9%), and cognitive atrophy (16.3%). This reframes governance priorities: trust infrastructure and quality assurance matter more than workforce protection theatre.
The most effective approach: tiered governance that matches the team's trust maturity. Low-risk tasks with clear boundaries can move to monitoring faster. High-stakes decisions keep human approval regardless of maturity.
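A tiered policy like this can be expressed as a simple lookup from risk tier and trust maturity to an approval mode. The tier and maturity labels below are illustrative assumptions; the one invariant from the text is that high-stakes work keeps human approval at every maturity level, with manual approval as the safe default for anything unmapped.

```python
# Illustrative tier and maturity labels; adapt to your own framework.
APPROVAL_POLICY = {
    ("low-risk", "early"): "manual approval",
    ("low-risk", "mature"): "monitor only",      # trusted teams move faster
    ("high-stakes", "early"): "manual approval",
    ("high-stakes", "mature"): "manual approval",  # human approval regardless
}

def approval_mode(risk_tier: str, maturity: str) -> str:
    """Return how an AI action should be governed, defaulting to the safest mode."""
    return APPROVAL_POLICY.get((risk_tier, maturity), "manual approval")
```

Encoding the policy as data rather than prose also makes it auditable: the table itself is the single source of truth for who approves what.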
A reassuring finding: AI agents stop themselves to ask for clarification twice as often as humans interrupt them. The risk isn't that AI will “run away” with a task – it's that teams won't push AI far enough. Good governance enables that push.
Measuring AI ROI
Understanding what value you're pursuing – and how to measure it – is a strategic decision.
Efficiency AI vs Opportunity AI
Efficiency AI
Doing existing work faster, cheaper, or with fewer errors. Automation, summarisation, data processing. Measurable, often the starting point. ROI is relatively straightforward.
Opportunity AI
Doing things that weren't previously possible. New capabilities, new insights, new products. Harder to measure, often more valuable. ROI requires new frameworks because there's no baseline to compare against.
The strategic question: Most organisations start with Efficiency AI because it's easier to justify. But Opportunity AI is where the competitive advantage lives. The best strategies invest in both – using efficiency gains to fund opportunity exploration.
Types of AI Value
Efficiency Benefits
Easier to quantify, often the starting point:
- Time savings: Hours saved on routine tasks
- Cost reduction: Lower cost per output
- Increased output: More work with same resources
- Quality improvement: Fewer errors, better consistency
Strategic Benefits
Harder to measure, often more valuable:
- Better decisions: Right information at right time
- New capabilities: Doing what wasn't possible before
- Risk reduction: Early warnings, error catching
- Revenue growth: New streams or enhanced offerings
Practical Guidance
Start simple: Pick one or two use cases with clear baselines. Measure before and after. Learn what works before scaling.
Include all costs: Tools, training, integration time, ongoing maintenance. Many ROI calculations fail by underestimating total cost of ownership.
Be patient: Most AI value comes from compounding gains over time, not overnight transformation.
Track strategic value: Don't just measure hours saved. Document the decisions improved, capabilities gained, and risks avoided.
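The “include all costs” point can be made concrete with a basic ROI calculation. The figures below are illustrative assumptions, not benchmarks; what matters is that the cost side carries tools, training, and maintenance together, so the calculation fails loudly if any of them are omitted.

```python
def monthly_roi(hours_saved: float, hourly_cost: float,
                monthly_costs: dict) -> float:
    """ROI = (benefit - total cost) / total cost.
    monthly_costs must include tools, training amortisation,
    integration time, and maintenance, not just licence fees."""
    benefit = hours_saved * hourly_cost
    total_cost = sum(monthly_costs.values())
    return (benefit - total_cost) / total_cost

# Illustrative numbers only.
example = monthly_roi(
    hours_saved=120,
    hourly_cost=60,
    monthly_costs={"tools": 1500, "training": 800, "maintenance": 700},
)
# benefit 7200, total cost 3000, ROI = 1.4
```

The same structure works for the before/after measurement above: establish the baseline, run the same calculation each month, and watch whether the gains compound.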
Workforce Planning: Automation vs New Task Creation
MIT research introduces a critical filter for workforce planning: are we automating existing tasks, or creating entirely new ones? Most AI strategy focuses on automation (doing existing work faster). But the transformative value often comes from new tasks that weren't possible before — roles, capabilities, and workflows that only exist because AI enables them.
Task Automation
Doing existing work faster or cheaper. Easier to measure, familiar territory.
New Task Creation
Work that didn't exist before AI. Harder to forecast, often more valuable.
Use this as a planning filter: for each AI initiative, ask whether you're automating or creating. Both matter, but the balance shapes your workforce strategy.
AI Economics: The Subsidised Era Is Ending
Inference costs are becoming a strategic planning input, not just a technical detail.
Token Budget Planning
AI usage has real, variable costs. Organisations need to budget for tokens the way they budget for cloud compute – with visibility, limits, and cost-per-outcome tracking.
Cost-Per-Outcome Frameworks
Not “how much per token” but “how much per code review, per report, per analysis.” The $15-25/PR backlash against Anthropic's code review pricing (March 2026) previews the conversations every organisation will have.
Multi-Model as Economics
Using the right model for each task isn't just technical preference – it's economic imperative. Model right-sizing will become a standard practice.
Strategic implication: AI usage costs scale with usage – more like cloud compute than software licences. Budget accordingly.
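A cost-per-outcome calculation is straightforward once token usage is visible. The prices and volumes below are illustrative assumptions (per-million-token pricing is a common vendor convention, but your contract will differ); the shift in framing is from cost per token to cost per review, report, or analysis.

```python
def cost_per_outcome(input_tokens: float, output_tokens: float,
                     in_price_per_m: float, out_price_per_m: float,
                     outcomes: int) -> float:
    """Cost per business outcome (per review, per report), not per token.
    Prices are per million tokens; all figures are illustrative."""
    token_cost = (input_tokens / 1e6) * in_price_per_m \
        + (output_tokens / 1e6) * out_price_per_m
    return token_cost / outcomes

# e.g. 40M input and 8M output tokens across 500 code reviews
per_review = cost_per_outcome(40e6, 8e6, 3.0, 15.0, 500)
# (40 * 3.0 + 8 * 15.0) / 500 = 0.48 per review
```

Tracked over time, this figure is also the natural input to model right-sizing: if a cheaper model delivers the same outcome quality, the cost per outcome falls without any workflow change.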
Emerging Governance Challenges
March 2026 surfaced governance challenges that most frameworks don't yet address.
AI Is Officially Political
Vendor selection is now policy risk. Government contracts have been revoked overnight based on CEO statements (the Dario Amodei memo incident, March 2026). Blue Rose Research data shows AI ranked 29th of 39 tracked issues but rising faster than any other. 72% of voters fear wage depression, 77% fear industry elimination. Even Trump voters choose “protect jobs” over “keep innovating” by 2:1. The Pentagon vs Anthropic dispute is now in federal court — Judge Rita Lynn called Pentagon conduct “troubling.”
Governance response: Vendor neutrality as a principle. Multi-vendor capability as risk management. Political risk is now a vendor selection criterion, not just a policy footnote.
White House AI Legislative Framework
NEW — MARCH 2026
The White House released a 6-point AI legislative framework. Key positions: no new regulatory body (sector-specific approach using existing agencies), strong state preemption push (federal floor for AI rules), IP and copyright deferred to courts, and a workforce section that observers called “hand-wavy.” Dean Ball described it as “an opening move in a multidimensional public negotiation.” Meanwhile, states are acting independently — NY chatbot restrictions, CA AI bills, a 291-page federal bill from Blackburn. No resolution expected before midterms; 12-18 months of compliance uncertainty.
Governance response: Track state-level AI regulation where you operate. Federal preemption is aspirational, not enacted. Prepare for a patchwork compliance landscape through at least 2027.
Agent Compliance Precedent
The Amazon vs Perplexity dispute is setting legal precedent for how AI agents access third-party services. Key distinction: first-party agents vs third-party agents.
Governance response: Audit agent access patterns. Ensure agents operate within ToS boundaries.
Memory Portability
As AI agents accumulate context and memory about your organisation, that data becomes strategically significant. Data portability regulations may extend to AI memory and context.
Governance response: Vendor-agnostic context architecture. Own your knowledge layer.
Security Governance: The McKinsey Lilli Lesson
NEW — MARCH 2026
McKinsey's internal AI tool, Lilli, suffered a security breach exposing confidential client data — including work from Amazon, Pfizer, and government clients. The root cause was not sophisticated: basic API security was missing. Even the world's most prominent advisory firm got this wrong.
Governance response: AI security isn't optional or “phase 2.” Basic API security, data classification, and access controls must be in place before internal AI tools go live. If McKinsey can miss this, so can you.
The “Tilly Tax”: AI Displacement Compensation
NEW — MARCH 2026
Hollywood unions are negotiating a fee for studios that use AI-generated actors instead of human performers. Named after Tilly Norwood — an AI actress created by Particle 6 Productions — this is the first formal AI displacement compensation mechanism to move from concept to the negotiation table.
Governance response: Every sector will face this conversation. Whether it's called a “Tilly Tax,” an automation levy, or a transition fund — organisations using AI to replace roles need a position on workforce impact before unions or regulators define one for them.
UK AI Copyright Task Force
NEW — MARCH 2026
The UK government has established a task force on AI-generated content, working on labelling best practices and transparency standards. An interim report is expected by autumn 2026. This sits alongside existing debates on training data rights and IP ownership.
Governance response: UK organisations should track this actively. Labelling and provenance requirements are likely to become compliance obligations. Build transparency into AI-generated content workflows now, rather than retrofitting later.
The Governance Enforcement Gap
In creative industries, writers are signing declarations that they haven't used AI — while privately using it extensively. The same pattern is emerging across professional services, journalism, and consulting. Policies exist, but enforcement is performative.
Governance response: Realistic governance beats theatrical governance. Policies that acknowledge AI use and set quality standards work better than blanket bans that everyone quietly ignores. The question isn't “did you use AI?” — it's “is the output good enough?”
How We Help
We partner with strategy and governance specialists. Our focus is making decisions actionable.
Strategy to Execution
AI strategies often stall because they don't translate into practical capability. We help bridge the gap – taking strategic priorities and building the context systems, skills architecture, and team fluency to deliver on them.
Governance in Practice
Good governance isn't just policies – it's embedded in how AI is actually used. Our context engineering and skills-based approach builds governance into workflows, not documents.
Leadership Fluency
Directors and senior managers need AI fluency to govern effectively. We help build this capability through practical understanding, not technical training.
Explore Other Tiers
The Foundation connects to every other layer of the AI capability framework.
Building on Solid Foundations
Strategy, data, sustainability, adoption, and the right tools – these foundations determine whether your AI investments deliver. If any of these feel uncertain, we can help you get them right.