ALTITUDE

AI Signal – March 2026

Monthly signals for anyone trying to keep up with AI – whether you run a business, lead a team, or just want to understand where it's all heading. March's central finding: every major AI lab converged on work automation as the priority. The capability leapt forward. The gap between what's possible and what most people can actually use widened again. Here's where the pieces fit.

31 March 2026 · 22 min read · AI · AI Signal · Work AGI · Adoption · Agents · Upskilling · Enterprise · 2026
Every path converged on the same point: work.

In March, every major AI lab – independently, in the same four-week window – signalled the same thing: knowledge work automation is now the priority above all else. OpenAI discontinued its video product and redirected everything toward work tools. Anthropic went to federal court to defend its enterprise market. SpaceX/xAI filed for the largest IPO in history.

The tools aimed at the work you actually do – writing, analysis, coordination, client delivery, decision-making – advanced more in March than in any single month this year.

But the gap between what these tools can do and what most people are actually using them for got wider, not smaller. If you're running a small business, recently found yourself out of a role, or just trying to figure out how AI fits into your working life – this edition maps where all the pieces landed in March and what to do about it.

At a Glance

Key takeaway: the fluency gap. Every major AI lab converged on work automation as the priority. Capability leapt forward. But the gap between what these tools can do and what most people use them for got wider, not smaller. The differentiator isn't access or budget — it's whether you're building AI fluency. Independent workers who build it report three times the benefit of those who don't.

Six signals from March, mapped across industry direction, capability, adoption, policy, practice, and the individual.

INDUSTRY DIRECTION

All in on work

  • OpenAI discontinued Sora, redirected compute to work automation
  • SpaceX/xAI filed $75B IPO – largest in history
  • Anthropic in federal court defending enterprise market access
  • Every major lab now explicitly prioritising knowledge work automation

CAPABILITY

82% task parity

  • GPT 5.4: 82% of professional tasks at human-level
  • Computer use accuracy: 75% (human-level 72.4%)
  • Claude shipped 6+ major features in one month
  • 1M token context windows – general availability

ADOPTION

The gap is widening

  • Most organisations: 'remarkably little has changed'
  • The layoff narrative and actual AI displacement don't match
  • February's deployment overhang widened, not closed
  • The fluency gap is now the gap that matters

POLICY & WORKFORCE

Playing catch-up

  • White House 6-point framework published – workforce provisions light
  • AI rising as a policy issue faster than any other (Blue Rose Research)
  • 72% fear wage depression – but unreliability (26.7%) is the #1 actual concern, not job loss (22.3%)
  • Regulatory landscape uncertain – 12-18 months of compliance ambiguity ahead

PRACTICE

Agents crystallise

  • Best results from giving AI one clear task at a time
  • 25 minutes of direction produced 3+ hours of autonomous work
  • AI can now operate any software – including legacy systems with no modern interface
  • Mental shift from 'using a tool' to 'delegating to a colleague'

THE INDIVIDUAL

3x benefit for independents

  • Solo and small-business workers gain the most from AI
  • Side projects and personal use drive the biggest satisfaction
  • A third of all AI goals are about making room for life, not productivity
  • Big corporates are training their people – but who trains everyone else?

Model Releases

March's model story isn't one headline release – it's the shift from all-purpose AI models toward industry-specific ones. Companies are now training AI on their own customer data to outperform the big general models in their particular field. The frontier is no longer just about bigger models. It's about better data.

OPENAI

Mar 6

GPT 5.4

82% professional task parity. Computer use accuracy 75% (human baseline: 72.4%). Half the cost of Opus 4.6, with 47% token reduction. The price-performance gap between frontier models is now razor thin.

ANTHROPIC

Mar (throughout)

Claude feature wave

Not one model – six major capabilities in a single month: Remote Control, Dispatch, Channels, Computer Use, Scheduled Tasks, and 1M token context windows. The shift from 'tool you open' to 'colleague that works while you don't.'

GOOGLE

Mar 27

Gemini 3.1 Flash Live

Real-time voice model enabling continuous dialogue. Implications for always-on voice interfaces – and for Apple's Siri, which still lacks a competitive conversational model.

CURSOR

Mar 27

Composer 2

A coding tool trained on how its own users actually work – and it outperforms bigger, more expensive general-purpose models in its specific domain. A signal of where the industry is heading: specialist beats generalist.

INTERCOM

Mar 27

Fin Apex

A customer service AI trained on millions of real support conversations – and it outperforms the big general-purpose models at that specific job. Another data point: domain expertise beats raw intelligence.

GOOGLE

Mar 26

TurboQuant + Lyria 3 Pro

TurboQuant: 6x memory reduction, 8x speed improvement for model inference. Lyria 3 Pro: 3-minute AI-generated music tracks. Infrastructure and creative capability advancing in parallel.

Don't wait for the 'right' model. The gap between general and specialised is now the story – and the organisations building proprietary data loops on top of open-weight models are creating moats that API-only approaches can't match. The question isn't which model to use. It's what data makes YOUR model better.


Signal Map

Each month we map the key signals to the four-tier framework from our AI capability page. Same structure, same tiers – so you always know where to place what you're reading.

The Landscape

The territory before the strategy – what's available and what changed

The work bet

OpenAI discontinued its video product and concentrated focus on Codex, renaming its product division 'AGI Deployment.' Anthropic is in federal court defending enterprise market access. Jensen Huang says AGI is here – for work tasks. SpaceX/xAI filed a $75B IPO. The labs have made their decision.

Revenue acceleration

Anthropic reportedly reached $19B ARR – from roughly 10% to over 60% of business AI spend in 12 months. Cursor doubled to $2B in 3 months. Lovable grew $100M in a single month. The enterprise adoption S-curve is steep and accelerating.

Platform convergence

Every AI platform is becoming every other AI platform. Code generation is the gateway to all knowledge work. 'No barriers to entry, but also no moats.' The differentiator isn't the tool – it's domain expertise.

Capability leap

Claude shipped Remote Control, Dispatch, Channels, Computer Use, Scheduled Tasks, 1M context, and Skills in Office – in one month. GPT 5.4 hit 82% professional task parity at half the cost of Opus. Multi-model strategy is now essential.

Capital flood

SpaceX/xAI $75B IPO. AMI Labs $1.03B seed. Thinking Machines 1GW compute deal. Fundrise ETF at 16x NAV. But xAI lost 9 of 11 co-founders – capital doesn't guarantee stability.

The Foundation

What needs to be in place – governance, data readiness, and sustainable deployment

AI enters workforce policy

Blue Rose Research: AI is the fastest-rising policy issue in America. 72% fear wage depression. 77% fear entire industries eliminated. 'Protect jobs' beats 'keep innovating' 2:1 across voter demographics. The workforce dimension of AI is now firmly on the agenda.

Regulatory landscape forming

White House 6-point framework: no new regulatory body, sector-specific approach, state preemption push. Workforce provisions remain light. State vs federal tension unresolved. 12-18 months of compliance uncertainty ahead for businesses.

Fear vs reality mismatch

Anthropic's 81K-person study: unreliability (26.7%) is the #1 concern – not job loss (22.3%). 60% of hiring managers admit emphasising AI in layoffs because it 'plays better.' Only 9% say AI has actually replaced any roles. The narrative doesn't match the data.

Enterprise AI security is exposed

Multiple major enterprises reported AI-related security incidents in March. In one case, a single-person company's autonomous AI agent reportedly breached a Fortune 500 firm's AI platform in under two hours. The pattern: AI adoption is outpacing security governance, even at organisations that should know better.

The Practice

How to work with AI effectively – context, coordination, and capability

Agent patterns emerge

The best approach emerging across multiple teams: give each AI agent one clear task, coordinate through shared files, and keep instructions in plain text. These patterns appeared independently in five separate projects in a single month – a sign the practice is maturing fast.
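The pattern above — one clear task per agent, coordination through shared plain-text files — can be sketched in a few lines. This is a tool-agnostic illustration, not any product's API: `run_agent`, the workspace directory, and the task list are all illustrative assumptions, with the real agent call stubbed out so the coordination shape is visible.

```python
from pathlib import Path

WORKSPACE = Path("agent_workspace")  # shared directory every agent reads and writes

def run_agent(task: str) -> str:
    """Placeholder for a real agent call (Claude, Codex, etc.).
    Here it simply echoes the task so the pattern runs end to end."""
    return f"[done] {task}"

def dispatch(tasks: list[str]) -> list[Path]:
    """One clear task per agent; instructions in and results out
    are plain-text files the next agent (or a human) can pick up."""
    WORKSPACE.mkdir(exist_ok=True)
    outputs = []
    for i, task in enumerate(tasks, start=1):
        # Keep instructions in plain text, as the emerging practice suggests.
        (WORKSPACE / f"task_{i}.txt").write_text(task)
        result = run_agent(task)
        out = WORKSPACE / f"result_{i}.txt"
        out.write_text(result)
        outputs.append(out)
    return outputs

results = dispatch([
    "Summarise March revenue by region",
    "Draft the client status update",
])
for path in results:
    print(path.read_text())
```

The design point is the handoff medium: plain files mean any agent, any tool, or a human reviewer can inspect and continue the work without a shared runtime.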

Tool to delegate

Multiple users independently report the same shift: features like Dispatch change the mental model from 'operating a tool' to 'delegating and checking in.' Pavel Huron: 25 minutes of direction produced 3+ hours of execution. Morning coffee, dog walk, passenger seat – all productive.

Always-on orchestration

Computer Use (any app, any workflow), Channels (event-driven agents), Scheduled Tasks (no local machine needed), Dispatch (delegate from phone). AI shifted from 'tool you open' to 'colleague that works while you don't.'

Middle management compressed

Palantir CTO: 'AI is the antidote to the managerial revolution of the 20th century.' Individual contributors with agent teams bypass traditional hierarchy. But work intensification is the risk – AI raises output expectations, not just output.

The Application

Where AI meets your world – applied work, integration, and delivery

Enterprise goes all-in

Meta deployed company-wide AI agents built on OpenClaw (the open-source agent framework) and Claude. Agents talking to each other to resolve issues. AI use factored into performance reviews. The most advanced real-world enterprise agent deployment to date.

Small businesses benefit most

Anthropic 81K study: independent workers report 3x economic empowerment. Employees with side projects: 58% real gains. Gusto data: small businesses using AI hired MORE, not fewer. 'Most entrepreneurial generation ever.'

Training at scale

FedEx: 400K employees in bespoke AI training via Accenture. Role-based, continuous, with 'communities of practice.' But the pace of change has outstripped traditional certification – by the time a curriculum is written, the tools have moved on.

Legacy unlocked

Computer Use means AI can interact with any application – including 20-year-old legacy software – without APIs. For organisations stuck in older systems, the barrier to AI adoption just dropped dramatically.


Framework Check

DID ANYTHING CHANGE HOW WE THINK ABOUT AI CAPABILITY?

The four-tier framework holds – but March added a dimension we've been tracking implicitly and now need to name explicitly: the individual.

January asked how you use AI (delegation vs inquiry). February asked how much you use it (deployment overhang). March's data makes the third question unavoidable: are you building AI fluency at all? The signals point clearly to individuals – not just organisations – as the unit of analysis. The framework still maps Landscape, Foundation, Practice, and Application. But from this edition forward, each tier speaks to individuals alongside organisations.


Deep Dive: AI Fluency – Why Adoption and Upskilling Matter Now

THE TOOLS ARE MOVING FAST. THE QUESTION IS HOW TO KEEP UP.

March wasn't a month of gradual progress. It was a month of strategic convergence. Every major AI lab – independently, in the same four-week window – signalled the same priority: work.

Where the industry is pointing

On March 25, OpenAI discontinued Sora – its flagship video generation product – and concentrated resources on Codex, its coding and work automation agent. The same week, OpenAI renamed its entire product division "AGI Deployment." Not "AGI Research." Not "AGI Safety." Deployment.

The signal is clear. OpenAI is betting that AGI – if it arrives – will arrive through work. That doesn't mean creative AI is slowing down – if anything, it's accelerating. London-based Particle6 launched an AI-generated music video starring virtual actress Tilly Norwood with an 18-person production team, reporting 50% cost and time reductions versus traditional methods. Major studios are quietly integrating AI across production pipelines. Creative AI is thriving – but the lab investment priority has shifted to automating the tasks that knowledge workers do every day.

They weren't alone. Anthropic spent the month in federal court defending its access to the enterprise market. Jensen Huang declared that AGI has already arrived for specific work tasks and called OpenClaw – the open-source AI agent framework now used by Meta and others – "the most important software release probably ever." SpaceX/xAI filed for a $75 billion IPO – the largest in history – betting that the future of AI is the future of work.

OpenAI didn't discontinue Sora because it failed. They discontinued it because work matters more.

The phrase emerging across the industry captures it: "work AGI" – the idea that general intelligence will matter most in the context of professional work. It's not a prediction. It's a description of where the largest technology companies are already pointing their resources.

The capability is real

This isn't speculative. The capability signals in March are clear:

GPT 5.4 achieved 82% parity with human professionals across a broad task range – at half the cost of Opus 4.6. Computer use accuracy hit 75%, surpassing the human baseline of 72.4%. Anthropic shipped six major features in a single month: Remote Control (continue sessions from your phone), Dispatch (delegate tasks and check in later), Channels (agents that respond to external events automatically), Computer Use (control any application), Scheduled Tasks (AI that works while your machine is off), and a 1 million token context window.

Lovable – a tool that lets anyone build working software through conversation – reportedly grew from $300 million to $400 million ARR in a single month. Cursor doubled to $2 billion ARR in three months.

The "deployment overhang" we identified in February hasn't closed. It's widened – because capability advanced faster than adoption.

The adoption gap

And this is where March's most important story lives. Not in what the tools can do, but in what people are actually doing with them.

Ethan Mollick, professor at Wharton and one of the most widely cited researchers on AI adoption, described the current state bluntly: "Remarkably little has changed" in most organisations. The tools are there. The capability is proven. And the overwhelming majority of knowledge workers are still using AI for the same short, routine tasks they were six months ago – if they're using it at all.

The gap between early adopters and everyone else is enormous:

Early adopters

Small teams building entire products with AI agents. Meta deploying AI company-wide, with usage factored into performance reviews. Individual practitioners finding that 25 minutes of direction produces 3+ hours of autonomous execution.

Everyone else

'Remarkably little has changed.' Most AI use is still short, routine tasks. Median real-world agent use: 45 seconds. The deployment overhang from February widened in March.

This isn't a technology problem. It's a skills problem. And it affects individuals at least as much as organisations.

The policy response

The White House published a 6-point AI legislative framework in March. Blue Rose Research data shows AI climbing as a policy issue faster than any other. 72% of Americans are concerned about wage depression. 77% worry about entire industries being eliminated.

But Anthropic's 81,000-person study reveals a gap between public concern and practitioner experience: the number one fear among people who actually use AI isn't job loss (22.3%). It's unreliability (26.7%). The people closest to the technology aren't worried about being replaced. They're worried about whether the tools work well enough to trust.

The layoff narrative adds another layer of complexity. According to a widely cited survey of US hiring managers, 60% admit they emphasise AI in layoff announcements because it "plays better with stakeholders." Only 9% report that AI has actually fully replaced any roles. The real mechanism appears to be wage pressure rather than mass replacement. Clara Shih (CEO of Salesforce AI) identified three pathways: intra-sector compression (AI makes your peers cheaper), labour supply outpacing demand (more people can do what you do), and inter-sector spillover (displaced workers from one industry flood into yours).

60% of hiring managers cite AI in layoff announcements because it plays better. Only 9% say AI has actually replaced any roles.

The public debate centres on job displacement. The practitioner data points toward skills commoditisation – a different problem, but one with more actionable solutions.

The individual imperative

Here's what the data actually says about who benefits from AI:

Independent workers report economic empowerment at three times the rate of institutional employees. People with side projects report real gains 58% of the time. Gusto's data shows small businesses using AI are hiring more, not fewer.

And when Anthropic asked 81,000 people what they actually want from AI, the answer wasn't "productivity." A third of all visions were about making room for life – more time with family, personal projects, creative work, learning.

The opportunity isn't just professional. It's personal. But it requires a skill that's still emerging: knowing how to work with AI effectively.

FedEx is training 400,000 employees. Meta has company-wide AI programmes. Accenture ties AI usage to career progression. But these are the exceptions – massive corporations with the resources to build bespoke training. The other 160 million workers in the US – the freelancers, the small business owners, the employees at companies that haven't figured this out yet, the people who just lost their jobs to a restructuring that cited AI as the reason – are on their own.

That's the gap. Not between AI capability and AI adoption. Between the people who are building AI fluency and the people who aren't. And unlike the technology gap, which will close as tools improve, the skills gap will widen as expectations rise.

The skills gap won't close on its own. Unlike the technology gap, which narrows as tools improve, the fluency gap widens as expectations rise.

What "upskilling" actually means

The word "upskilling" is overused and under-defined. Here's what the March data says it actually involves:

It's not learning to code. The most valuable AI skill identified across multiple studies this month is iteration – building on AI responses rather than accepting the first answer. That's a thinking skill, not a technical one.

It's not just taking a course. Some courses are genuinely valuable – Anthropic Academy and DeepLearning.AI offer practical, up-to-date programmes. But the pace of change means traditional certifications date quickly. The deeper skill is learning to learn with AI – using the tools themselves as part of the learning environment.

It's not about your employer. The 3x benefit for independent workers isn't because they have better tools. It's because they have more agency over how they use them. They're not waiting for a corporate AI strategy. They're experimenting with their actual work.

It is about practice with real problems. The Anthropic study found that only 30% of users tell AI how to interact with them. Simple instructions – "challenge my assumptions," "flag what you're uncertain about" – change the entire dynamic. The skill ceiling is high, but the entry point is low. You need your actual work and a willingness to iterate.
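The "tell AI how to interact with you" point can be made concrete. This is a hedged sketch, not any vendor's API: `build_prompt` and the instruction strings are illustrative, and in practice these standing instructions would live in a system prompt or custom-instructions setting rather than being prepended by hand.

```python
# Standing instructions that shift the dynamic from "answer machine"
# to "thinking partner" -- the kind only ~30% of users bother to set.
INTERACTION_STYLE = [
    "Challenge my assumptions before answering.",
    "Flag anything you are uncertain about.",
    "Ask one clarifying question if the task is ambiguous.",
]

def build_prompt(task: str) -> str:
    """Prepend standing interaction instructions to a one-off task."""
    preamble = "\n".join(f"- {rule}" for rule in INTERACTION_STYLE)
    return f"How to work with me:\n{preamble}\n\nTask: {task}"

prompt = build_prompt("Review this pricing proposal for weaknesses.")
print(prompt)
```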


Emerging Signals

EARLY PATTERNS – WORTH WATCHING

Not every signal has an obvious action attached. These are trends from March that are worth tracking into Q2.

The "Work AGI" thesis could unify everything. March's four strongest editorial threads – execution becoming free, AI entering workforce policy, the agent stack crystallising, and the adoption gap widening – all converge on the same point: knowledge work automation is now the central fact of the AI industry. Every other signal is downstream of this bet.

AI-on-AI attacks are real. In March, reports emerged of autonomous AI agents breaching enterprise AI platforms – in one widely reported case, a single-person company's agent reportedly compromised a major consultancy's internal AI system in under two hours, exposing millions of records. The new threat model isn't hackers with keyboards. It's agents probing agents, at machine speed, around the clock. AI adoption is outpacing security governance across the board.

The subsidised inference era is ending. Sourcegraph's CEO warned that "AI inference costs start to look closer to labor costs than to software costs." CTOs who budgeted for AI as a software expense face a reckoning in 2-4 quarters. Usage-based pricing (enabled by Stripe's token billing infrastructure) is the likely response.
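To see why inference can start to look like a labour cost rather than a software cost, it helps to run the arithmetic. The rates and usage figures below are illustrative assumptions, not published pricing — the point is the shape of the calculation: usage-based cost scales with how much work is delegated, unlike a flat per-seat licence.

```python
def monthly_inference_cost(seats: int,
                           tokens_per_seat_per_day: int,
                           usd_per_million_tokens: float,
                           working_days: int = 22) -> float:
    """Total monthly spend under usage-based token pricing."""
    total_tokens = seats * tokens_per_seat_per_day * working_days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical scenario: 200 staff at a blended rate of $5 per million tokens.
heavy = monthly_inference_cost(200, 2_000_000, 5.0)  # heavy agent delegation
light = monthly_inference_cost(200, 50_000, 5.0)     # light chat-style use
print(f"heavy agent use: ${heavy:,.0f}/month")
print(f"light use:       ${light:,.0f}/month")
```

Under these assumed numbers, heavy agent use costs forty times light use for the same headcount — which is why a per-seat software budget breaks the moment delegation becomes routine.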

Platform risk intensifies. "Anthropic is the new Amazon" – labs are absorbing application-layer functions (code review, meeting recording, security scanning). The average user is now across 3.5 different models. The moat isn't the tool. It's domain expertise and the ability to orchestrate across tools.

Chinese OpenClaw adoption is outpacing the West's. ByteDance, Alibaba, and Tencent are offering hosted instances. Western cloud giants haven't matched this yet. 80+ global meetups. The a16z Consumer AI Top 100 shows three distinct ecosystems forming: Western, Chinese, and Russian.

"Making room for life" as a market signal. Anthropic's 81K study found that even when people frame their AI goals as productivity, the underlying desire is personal: time with family, creative projects, reading, learning. Products and services that frame AI as "live better" rather than "work faster" may find a larger market than expected.


What To Do This Month

Three actions for April – based on what March told us

  1. Recognise where the industry is heading. Every major AI lab signalled in March that knowledge work automation is the priority. This isn't a trend to watch – the capital, products, and talent are already moving. The question is no longer whether AI will change your work. It's whether you'll be ready when it does.
  2. Close your personal adoption gap. Pick one real task from your actual work – not a generic exercise – and spend a week doing it with AI. Iterate. Push back on the first answer. Tell the AI how to interact with you. The Anthropic data is clear: the single biggest skill gap is between people who accept the first response and people who build on it.
  3. Start building AI fluency now – don't wait for your employer. Independent workers benefit 3x more from AI than institutional employees. The people reporting the highest satisfaction are using AI for personal projects and 'making room for life.' You don't need corporate training or a course. You need practice with problems you care about.

AI Signal is published monthly by Pandion Studio for anyone navigating the AI shift – whether you run a business, lead a team, or are building your own capability.

If you want to build real AI fluency through hands-on work – not courses, not theory – that's what AI Sessions are for.

FAQs

What is 'work AGI'?

Work AGI is the emerging industry consensus that artificial general intelligence will arrive first – and matter most – in the context of knowledge work. In March 2026, OpenAI discontinued its Sora video product, renamed its product division 'AGI Deployment,' and redirected all compute toward work automation. Anthropic went to federal court to defend its enterprise market. Jensen Huang declared AGI already here for specific work tasks. The term describes not a single product but a strategic convergence: every major lab has decided that automating professional knowledge work is the priority above all else.

Is AI really taking people's jobs?

The picture is more nuanced than headlines suggest. 60% of hiring managers admit emphasising AI in layoff announcements because it 'plays better with stakeholders' – only 9% say AI has fully replaced any roles. The real mechanism isn't mass replacement but wage pressure: AI makes some tasks cheaper, increasing supply of people who can do them, which pushes down what employers pay. Meanwhile, independent workers who actively use AI report 3x more economic benefit than institutional employees. The data suggests AI fluency is becoming a career differentiator, not a career ender – but only for those who develop it.

How should I upskill with AI if my employer doesn't offer training?

Start with your actual work, not generic courses. The strongest predictor of effective AI use is iteration – building on AI responses rather than accepting the first answer. Pick one task you do regularly, spend a week doing it with AI, and focus on pushing back, refining, and building on what it produces. The Anthropic 81K study found that people who use AI for 'making room for life' – personal projects, side businesses, creative work – report the highest satisfaction. You don't need corporate training. You need practice with real problems.

What is the adoption gap?

The adoption gap is the growing distance between what AI tools can do and what most people actually ask them to do. In February, we reported a deployment overhang: AI agents can handle 5-hour tasks, but median real-world use is 45 seconds. March's data shows this gap widened. AI capability leapt forward – computer use at human-level, 1M token context windows, always-on orchestration – but most organisations still report 'remarkably little has changed.' The gap isn't about access or cost. It's about skills, confidence, and knowing what to ask for.

AI Signal – March 2026 | Pandion Studio