ALTITUDE
AI Signal – February 2026
Monthly signals mapped to a four-tier framework. February's central finding: the deployment overhang — the growing gap between what AI tools can do and what organisations actually ask them to do. Plus: the time savings era ends, always-on agents go mainstream, AI governance meets geopolitics, and public trust in AI hits its lowest recorded level.

February's big finding: most organisations are using a fraction of the AI capability they've already paid for. Research published this month shows that when given clearly scoped tasks — building features, processing data, running test suites — AI agents can work autonomously for up to five hours. But the median real-world use is forty-five seconds. That gap is the deployment overhang, and it's this month's deep dive.
Elsewhere: AI governance got real, always-on AI agents went from one platform to an industry-wide pattern, computer use approached human-level accuracy, and the cost of AI capability collapsed again.
At a Glance
Key takeaway: the deployment overhang. Most organisations are using a fraction of the AI capability they've already paid for. The gap between what the tools can do and what people actually ask them to do is this month's central finding — and the six cards below show where it's widest.
February 2026 — six signals across capability, cost, adoption, skills, governance, and tools
CAPABILITY
5 hrs vs 45 sec
- AI agents can handle 5-hour tasks
- Median real-world use: 45 seconds
- Computer use accuracy: under 15% (2024) → 72.5% (Feb 2026)
- Vibe coding now used by 69% of surveyed AI users — most from outside engineering
COST
150x cheaper
- Than 21 months ago
- 5x drop in February alone
- Budget is no longer the barrier
ADOPTION
Time savings is no longer #1
- Experienced users report capability expansion as primary value
- 37.6% now using agentic AI workflows
- Accenture (700K employees) ties AI usage to career progression
SKILLS
AI fluency is measurable
- Iteration — building on AI responses rather than accepting the first answer — is the strongest predictor of effective use, more than doubling the rate of other skilled behaviours
- Polished AI output is a risk — when results look finished, users are measurably less likely to check facts, question reasoning, or spot what's missing
- Only 30% of users tell AI how to work with them — simple instructions like 'challenge my assumptions' or 'flag what you're uncertain about' change the entire dynamic
GOVERNANCE
Governance gets real
- Responsible Scaling Policy reaches v3.0 with external review
- 58% of Americans now distrust AI (YouGov) — highest on record
- Tensions between AI companies and governments over military AI usage escalated, with no legislative framework yet in place
- New AI Fluency Index tracks how people develop AI skills
TOOLS
Always-on agents go mainstream
- OpenClaw hit 1.5M agents in days; dedicated hardware sold out
- In a single late-February week, Anthropic, Perplexity, Notion, Airtable, and Microsoft all shipped persistent agent or scheduled task features
- Claude Code now authors 4% of all public code on GitHub
- Nvidia posted $68.1B quarterly revenue (up 73%) — AI infrastructure demand is sustained and accelerating
Model Releases
Eight major releases across five providers — one of the most concentrated months of frontier AI development on record.
ANTHROPIC
Feb 5
Claude Opus 4.6
New flagship. Tops major coding and reasoning benchmarks. First Opus-class model with a 1M token context window. Can coordinate autonomous agent teams.
OPENAI
Feb 5
GPT-5.3 Codex
Built for agentic coding tasks running hours to weeks with minimal human input. OpenAI flagged elevated cybersecurity risk in its own safety card.
OPENAI
Feb 12
GPT-5.3 Codex Spark
Speed-focused variant on Cerebras chips. Over 1,000 tokens per second — fast enough for real-time pair programming.
ANTHROPIC
Feb 17
Claude Sonnet 4.6
Users preferred it over last year's flagship 59% of the time — at a fraction of the cost. Computer use accuracy jumped from under 15% (late 2024) to 72.5%.
XAI
Feb 18
Grok 4.20
Uses four internal agents that debate each query before answering. Pulls real-time data from X (Twitter) for live analysis.
GOOGLE
Feb 19
Gemini 3.1 Pro
Google's strongest reasoning model. Introduces 'Agentic Vision' — the model can iteratively zoom, crop, and analyse images. 1M token context.
ALIBABA
Pre-Feb 16
Qwen 3.5
Latest open-weight release. Part of a broader pattern: Chinese labs shipping competitive models at accelerating pace.
GOOGLE
Feb 27
NanoBanana 2
Production-ready image generation — half the cost, seconds-fast. Signals AI image creation moving from novelty to everyday infrastructure.
The gap between the best model and the second-best is now measured in days, not months. Stop trying to pick the 'right' model. Build team capability and workflows that work regardless of which model is on top.
Signal Map
Each month we map the key signals to the four-tier framework from our AI capability page. Same structure, same tiers — so you always know where to place what you're reading. This month's deep dive follows below.
The Landscape
The territory before the strategy — what's available and what changed
Cost keeps falling
150x cheaper than 21 months ago. 5x drop in February alone. Budget is no longer a valid reason for limited AI use.
Revenue is real
Anthropic's coding tool now writes 4% of all public code on GitHub — doubled in a single month. The market has made its decision.
Always-on agents
OpenClaw exploded to 1.5M agents. By month's end, Anthropic, Perplexity, and Notion all shipped persistent agent features. AI that works while you sleep is now an industry pattern.
Computer use leap
Benchmark accuracy jumped from under 15% (late 2024) to 72.5%. Anthropic acquired Vercept, a specialist in AI perception. The acquisition race for computer-use talent has begun.
Model security
Large-scale distillation attacks are targeting frontier AI capabilities — reasoning, coding, autonomous tool use. Protecting models is becoming as important as protecting data.
The Foundation
What needs to be in place — governance, data readiness, and sustainable deployment
Governance got real
Anthropic's Responsible Scaling Policy reached v3.0 — with external review, a public safety roadmap, and an honest admission that some challenges need industry-wide solutions. AI governance is moving from principle to practice.
Adoption is mandatory
Accenture (700,000 employees) now ties AI usage to career progression. In large organisations, 'I don't use AI' is becoming a career risk.
Trust as differentiator
As capability converges, providers differentiate on values, transparency, and safety commitments. Which AI companies you trust with your data is becoming a real business decision, not just a technical one.
Public trust is eroding
58% of Americans now distrust AI — the highest figure on record. The US ranks last globally in excitement-to-concern ratio. Nine distinct categories of concern exist, from job displacement to environmental impact. Most are legitimate and addressable, but the window for constructive engagement is narrowing.
The Practice
How to work with AI effectively — context, coordination, and capability
AI fluency is measurable
A study of nearly 10,000 AI conversations identified specific, trainable behaviours that distinguish effective users. Iteration — building on previous exchanges — is the strongest signal, more than doubling the rate of other effective behaviours.
Vibe coding goes mainstream
69% of surveyed AI users now build software through conversational coding. Most aren't engineers. The boundary between 'technical' and 'non-technical' work is dissolving in practice.
Experts monitor differently
Research shows expert AI users auto-approve 40% of actions but interrupt nearly twice as often as beginners. The shift isn't less oversight — it's smarter oversight. A learnable skill, not an innate talent.
Agents self-organise
16 AI agents wrote a working C compiler — 100,000 lines of code — with no human orchestrator. They coordinated via shared text files, inventing their own project management.
The Application
Where AI meets your world — applied work, integration, and delivery
500 vulnerabilities found
Claude Code Security found 500+ vulnerabilities in production open-source software — bugs missed by years of expert review. Every finding goes through multi-stage verification before reaching an analyst.
Agents beyond code
Over half of agent actions in a major study were non-engineering, including back office (9%), marketing (4.4%), sales and CRM (4.3%), and finance (4%). Agents aren't just for developers.
Hard ROI numbers
Walmart's AI assistant increased basket size by 35%. Spotify's senior engineers haven't written a line of code since December — all coding through AI, humans reviewing and directing.
Framework Check
DID ANYTHING CHANGE HOW WE THINK ABOUT AI CAPABILITY?
The four-tier framework holds. But February sharpened it in two ways.
First, the deployment overhang is now backed by three independent data sources, not one. It's moved from an interesting finding to a central concept for any organisation thinking about AI. The overhang isn't just about usage volume — it's about usage quality. Time savings was the entry point. Capability expansion is the destination.
Second, the governance tier (Foundation) got real this month. Responsible scaling frameworks reached version 3.0 with external review, tensions between AI companies and governments over military usage escalated to the point of direct confrontation, and public trust in AI dropped to its lowest recorded level. AI governance is no longer a checkbox. It's where the hardest — and most consequential — decisions are being made.
Deep Dive: The Deployment Overhang
YOU'RE USING A FRACTION OF WHAT YOU'VE ALREADY GOT — AND USING IT FOR THE WRONG THINGS
Three separate data sources published in February converge on the same conclusion. The biggest barrier to AI value isn't the technology — it's how people use it.
The gap is enormous
Industry research measured how people actually use AI agents in practice. When given well-defined, testable tasks — implement a feature against a spec, process a dataset, debug a failing test suite — AI agents can work autonomously for up to five hours, managing their own state through files rather than relying on a single conversation window. The most demanding real-world usage — the 99.9th percentile — peaks at 42 minutes. The median task? Forty-five seconds. Draft an email reply. Summarise a document. Explain an error message. That's how most people use AI most of the time — and it barely scratches the surface.
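To make that concrete, here is a minimal sketch, in Python, of the file-based state pattern the research describes. Everything in it is illustrative: run_agent_step is a hypothetical stand-in for a real model or tool call, and the task list is invented. The move that matters is that the agent checkpoints progress to disk after every step, so the file, not the conversation window, holds the memory.
```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")

def load_state() -> dict:
    # Resume from disk if a previous run left state behind
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"completed": [], "pending": ["parse spec", "write code", "run tests"]}

def save_state(state: dict) -> None:
    # Persist after every step so progress survives restarts and context resets
    STATE_FILE.write_text(json.dumps(state, indent=2))

def run_agent_step(task: str) -> str:
    # Hypothetical stand-in for a real model or tool call
    return f"result of {task}"

def main() -> None:
    state = load_state()
    while state["pending"]:
        task = state["pending"].pop(0)
        state["completed"].append({"task": task, "result": run_agent_step(task)})
        save_state(state)  # the checkpoint, not the chat window, is the memory

if __name__ == "__main__":
    main()
```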
That gap exists in every organisation using AI today. And it's not because the tools aren't ready — it's because people haven't changed what they ask for.
Time savings is no longer the point
A February survey of active AI users (97.6% daily users, 43% spending 10+ hours per week) found that time savings — the universal entry point just months ago — is no longer the number one benefit. The shift is toward capability expansion: people using AI to do things they couldn't do before, not just things they were already doing.
The same survey found that 69% now use "vibe coding" tools to build software — and most of them come from outside engineering. Over a third (37.6%) report using agentic AI workflows. These are people who have moved past the "draft an email faster" stage into genuinely new territory.
The implication: if your organisation's AI usage is still concentrated on time-saving tasks — summarising documents, drafting messages, reformatting data — you're optimising for AI's least valuable capability.
The skills to close the gap are identifiable
A large-scale study of nearly 10,000 AI conversations, published by Anthropic in February, measured specific behaviours that indicate effective AI use. Three findings stand out:
Iteration is the strongest signal. 85.7% of the conversations studied showed users building on previous exchanges rather than accepting the first response. These conversations exhibited more than double the rate of other effective behaviours — including being 5.6 times more likely to question the AI's reasoning.
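As an illustration of what iteration looks like outside a chat window, here is a minimal sketch using the Anthropic Python SDK; the model id is a placeholder and the prompts are invented. The shape is what matters: the first response goes back into the context, and the follow-up turn challenges it instead of accepting it.
```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-6"     # placeholder model id

history = [{"role": "user", "content": "Draft a rollout plan for our AI pilot."}]
first = client.messages.create(model=MODEL, max_tokens=1024, messages=history)

# The iteration step: keep the first draft in context and push back on it,
# rather than treating it as finished.
history += [
    {"role": "assistant", "content": first.content[0].text},
    {"role": "user", "content": "What's the weakest assumption in this plan? Revise it."},
]
revised = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
print(revised.content[0].text)
```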
Polished outputs reduce critical thinking. When AI produced finished-looking work — apps, documents, code — users were less likely to identify missing context (-5.2 percentage points), check facts (-3.7pp), or question the reasoning (-3.1pp).
The better the output looks, the less people scrutinise it.
Most people don't set the terms. Only 30% of users tell AI how they'd like it to interact with them. Instructions like "push back if my assumptions are wrong" or "tell me what you're uncertain about" are simple to add and change the dynamic of the entire conversation.
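In practice, the standing version of those instructions is a system prompt. A minimal sketch, again with the Anthropic Python SDK and a placeholder model id; the wording of the instructions is the part to adapt, not the plumbing.
```python
import anthropic

client = anthropic.Anthropic()

# The "set the terms" move as standing instructions that shape every exchange
SYSTEM = (
    "Push back if my assumptions are wrong. "
    "Tell me what you're uncertain about instead of glossing over it. "
    "Explain your reasoning before making recommendations."
)

response = client.messages.create(
    model="claude-sonnet-4-6",  # placeholder model id
    max_tokens=1024,
    system=SYSTEM,
    messages=[{"role": "user", "content": "Should we expand the pilot to all teams?"}],
)
print(response.content[0].text)
```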
Expert users don't approve more — they monitor differently
The same research on AI agent usage revealed a clear pattern:
New Users
Approve most actions manually. Rarely interrupt. Treat AI like a subordinate that needs constant oversight.
Expert Users
Auto-approve 40% of actions. Interrupt nearly twice as often. Treat AI like a colleague they trust but actively oversee.
The shift isn't "hands off." It's "hands different." Expert users give more trust upfront but intervene more assertively when something matters. This is a learnable skill, not an innate talent.
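One way to operationalise that posture is an explicit action gate: auto-approve the low-stakes, reversible actions, and route everything consequential to a human. The sketch below is entirely hypothetical; the action names and categories are invented, and a real deployment would use whatever permission hooks its agent platform provides. What it captures is the expert default: trust upfront, intervene where it matters, review anything unclassified.
```python
# Hypothetical policy: which agent actions run unattended, which need a human
AUTO_APPROVE = {"read_file", "run_tests", "search_docs"}
ALWAYS_REVIEW = {"delete_data", "send_email", "deploy", "spend_money"}

def gate(action: str) -> str:
    if action in AUTO_APPROVE:
        return "approved"      # trust upfront: no human in the loop
    if action in ALWAYS_REVIEW:
        return "needs_human"   # assertive intervention where it matters
    return "needs_human"       # default to review for anything unclassified

for action in ["run_tests", "deploy", "refactor_module"]:
    print(f"{action} -> {gate(action)}")
```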
Not all tasks want the same leash
The five-hour figure is real — but it describes a specific kind of work. AI agents that run for hours are typically executing well-defined, testable tasks: implement a feature against a spec, process a dataset, run a migration. The output is verifiable. Tests pass or they don't.
Most knowledge work isn't like that. Editorial judgment, client positioning, strategic framing, tone — these have no test suite. "Correct" is a human call. In these contexts, the expert pattern above isn't just preferable, it's the only approach that works. The check-in is the value.
The deployment overhang isn't closed by giving AI more rope across the board. It's closed by matching the supervision level to the task: autonomy for the mechanical, active oversight for the judgment-heavy. Most organisations default to one mode for everything — either micromanaging code generation or rubber-stamping strategy documents.
Autonomy for the mechanical. Active oversight for the judgment-heavy. The skill is knowing which mode a task needs.
Closing the overhang
Four moves to close the deployment overhang
1. Audit scope, not just usage. It's not enough to know whether your team uses AI. Ask what they use it for. If the answer is short, routine tasks, you have a deployment overhang.
2. Move past time savings. Ask your team: what could you do with AI that you can't do at all today? That's where the real value lives — not in doing the same things 20% faster.
3. Train for iteration, not prompting. The data is clear: the single most important AI skill is staying in the conversation — pushing back, refining, building on previous exchanges. A one-shot prompt wastes most of the capability.
4. Set the terms upfront. Tell AI how to interact with you: challenge assumptions, flag uncertainty, explain reasoning. Only 30% of users do this. It's the simplest change with the biggest impact.
Emerging Signals
EARLY PATTERNS — WORTH WATCHING
Not every signal has an obvious action attached. These are trends from February that don't have immediate business implications but are worth tracking.
The "SaaSpocalypse." Major enterprise software stocks dropped 20–37% in early 2026. Investors are pricing in broad disruption — AI-native tools replacing entire software categories. The practical takeaway: if you're renewing expensive software contracts, ask whether an AI-native alternative now exists. But don't expect overnight change in large organisations.
Public AI scepticism is at its highest recorded level. 58% of Americans now distrust AI (YouGov). 63% expect AI to reduce jobs, versus just 7% who expect it to create them. The US ranks last globally in excitement-to-concern ratio (Pew). The concerns aren't monolithic — researchers have identified at least nine distinct categories, from job displacement to data centre environmental impact to artist intellectual property. Most are legitimate and addressable. But the window for constructive engagement is narrowing, and organisations deploying AI tools now face growing stakeholder scrutiny.
AI's physical limits. Training a single frontier model will soon require gigawatts of electricity — equivalent to powering a small city. If sustainability is part of your organisation's mandate, the environmental footprint of AI tools is a question that's getting harder to ignore.
"AI laundering" enters the vocabulary. Block (formerly Square) cut 40% of its workforce — 4,000 employees — citing AI-enabled restructuring, and the market rewarded it with a 25% stock surge. Economist Alex Imas coined the term "AI laundering" to describe the risk of attributing layoffs to AI regardless of whether AI is the genuine driver. An insider account from Block noted that the employees being cut were already proficient AI users — suggesting that AI tool mastery alone may not protect jobs during restructuring. Whether other companies follow the same playbook remains to be seen, but the market incentive is now visible.
A tightening knowledge-work market. Professional services job openings hit an 11-year low in February. Whether AI-driven, cyclical, or both, the practical implication is the same: the value of staff who can work effectively with AI is going up, but domain expertise and strategic positioning matter as much as tool proficiency.
AI governance meets geopolitics. Tensions between AI companies and governments over how AI should be used in military and sensitive contexts escalated significantly in late February. The core question — whether companies, governments, or a jointly developed framework should set the boundaries for AI deployment — remains unresolved, with no legislative framework in place. Regardless of where one stands on the specifics, the practical takeaway for organisations is clear: AI governance is no longer an internal policy exercise. It now includes vendor relationship management, supply chain considerations, and an awareness of the regulatory environment in which your AI tools operate.
AI as infrastructure, not a tool. One of the month's most thought-provoking arguments: AI may be better for plumbers than programmers. The logic is that reduced software costs make previously uneconomic niche markets viable — scheduling tools for trades, inventory systems for small workshops, custom apps for local businesses. If this plays out, AI's biggest impact may not be in the industries that talk about it most.
What To Do This Month
Three actions for February
1. Audit the deployment overhang. Ask your team — or yourself — what you actually use AI for. If it's mostly short, routine tasks, you're leaving most of the value on the table. The capability for longer, more complex, genuinely new work is already there. Time savings is no longer the point.
2. Revisit your AI budget assumptions. Costs dropped 5x in a single month. Whatever you decided about AI spend three months ago is probably wrong. The same investment now covers dramatically more ground.
3. Pick one non-technical workflow and test an AI agent on it. Over half of agent usage is now in operations, marketing, sales, and finance. The assumption that agents are only for developers is already wrong.
AI Signal is published monthly by Pandion. We help organisations build real AI capability — the foundations, the practice, and the fluency that turn tools into results.
Have a question about something in this guide? Get in touch.
FAQs
What is the AI deployment overhang?
The deployment overhang is the gap between what AI tools can technically do and what organisations actually ask them to do. Research shows AI agents can handle tasks lasting five hours, but in practice most people use them for tasks under a minute. Separate survey data shows that time savings — the original entry point for most AI use — is no longer the primary benefit reported by experienced users. The gap isn't just about how much you use AI. It's about what you use it for.
What does 'the time savings era is over' mean?
A February 2026 survey of active AI users found that time savings is no longer the number one benefit people get from AI. For experienced users, the value has shifted from doing the same things faster to doing things they couldn't do before — building software without engineering backgrounds, analysing data without analyst training, creating tools for problems too niche to have off-the-shelf solutions. If your team still thinks of AI primarily as a time-saver, you're using it for its least valuable purpose.
Should my organisation worry about AI replacing its software?
Not panic, but pay attention. Major enterprise software stocks dropped 20–37% in early 2026 as investors price in AI disruption. The practical takeaway: review your software subscriptions with fresh eyes. Some categories — particularly tools that aggregate, summarise, or route information — may be replaced by AI-native alternatives. But large organisations move slowly, and most existing software won't disappear overnight.
What is 'AI laundering'?
A term coined by economist Alex Imas to describe the risk of companies attributing workforce reductions to AI capability gains regardless of whether AI is the genuine driver. In February 2026, Block cut 40% of its workforce citing AI-enabled restructuring and was rewarded with a 25% stock surge — creating a visible market incentive for others to follow. The practical concern: an insider account from Block noted that the employees being cut were already proficient AI users, suggesting that AI tool mastery alone may not protect jobs during restructuring waves.