ALTITUDE
AI Signal – February 2026
Monthly signals mapped to a four-tier framework: February's central finding is the deployment overhang – the growing gap between what AI tools can do and what organisations actually ask them to do.

February's big finding: most organisations are using a fraction of the AI capability they've already paid for. Research published this month shows AI agents can handle tasks lasting five hours — but the median real-world use is forty-five seconds. That gap is what we're calling the deployment overhang, and it's this month's deep dive.
Elsewhere: AI costs collapsed again, always-on AI agents went from a hacker experiment to over a million users in days, revenue numbers settled the "is AI real?" question, and agent teams built working software with no human orchestrator.
TLDR
IF YOU READ NOTHING ELSE
- You're probably using 10% of what your AI tools can do. The gap between AI capability and actual usage is enormous — and it's the main thing holding organisations back.
- AI costs dropped 5x in a single month. Budget is no longer a valid reason for limited AI use.
- AI agents work well beyond technical teams. Over half of agent actions are now in operations, marketing, sales, and finance.
- Always-on AI agents went mainstream overnight. OpenClaw's agent platform hit 1.5 million agents in days. Dedicated hardware sold out. Running AI agents 24/7 on cheap hardware is now practical for anyone.
- Autonomy isn't something you buy — it's something you build. Giving people access isn't a strategy. You have to design the workflows.
- If you do one thing this month: audit what your team actually uses AI for. If the answer is short, routine tasks, you have a deployment overhang.
At a Glance
The deployment overhang — the gap between AI capability and actual usage
MODELS
Opus 4.6 + Sonnet 4.6
- Opus 4.6: state-of-the-art on agentic coding, search, and knowledge work
- Sonnet 4.6: approaches Opus-level intelligence at Sonnet pricing
- Both feature 1M token context windows — a first for these model classes
COST
150x cheaper
- Than 21 months ago
- 5x drop in February alone
- Budget is no longer the barrier
CAPABILITY
5 hrs vs 45 sec
- AI agents can handle 5-hour tasks
- Median real-world use: 45 seconds
- That gap is the story
TOOLS
Always-on AI goes mainstream
- OpenClaw hit 1.5M agents in days
- Dedicated agent hardware sold out within hours
- Claude Code now authors 4% of all public code on GitHub
ADOPTION
AI usage tied to career progression
- Accenture (700K employees) mandates AI adoption
- 51% of agent actions now in non-technical functions
- Spotify engineers haven't written code since December
TRENDING
The SaaSpocalypse
- Enterprise software stocks down 20–37%
- White-collar job openings at 11-year low
- AI as infrastructure: better for plumbers than programmers?
Model Releases This Month
Seven major releases across five providers — one of the most concentrated months of frontier AI development on record.
ANTHROPIC
Feb 5
Claude Opus 4.6
Anthropic's new flagship. Tops major coding and reasoning benchmarks. First Opus-class model with a 1M token context window. Can now spin up agent teams that coordinate autonomously.
OPENAI
Feb 5
GPT-5.3 Codex
OpenAI's answer to agentic coding. Built for tasks that run hours to weeks with minimal human input. So capable at code generation that OpenAI flagged elevated cybersecurity risk in its own safety card.
OPENAI
Feb 12
GPT-5.3 Codex Spark
A speed-focused variant running on dedicated Cerebras chips. Outputs over 1,000 tokens per second — fast enough for real-time pair programming. First in a planned family of speed-optimised models.
ANTHROPIC
Feb 17
Claude Sonnet 4.6
In blind testing, users preferred it over last year's flagship (Opus 4.5) 59% of the time — at a fraction of the cost. Significant jump in computer use ability. 1M token context window.
XAI
Feb 18
Grok 4.20
Uses four internal agents that debate each query before answering — a novel approach to reducing errors. Pulls real-time data from X (Twitter) for live sentiment analysis.
GOOGLE
Feb 19
Gemini 3.1 Pro
Google's strongest reasoning model yet. Tops several academic benchmarks. Introduces 'Agentic Vision' — the model can iteratively zoom, crop, and analyse images. 1M token input context.
ALIBABA
Pre-Feb 16
Qwen 3.5
Latest open-weight release from Alibaba. Part of a broader pattern: Chinese labs are shipping competitive models at an accelerating pace.
The gap between the best model and the second-best is now measured in days, not months. Stop trying to pick the 'right' model. Build the team capability and workflows that work regardless of which model is on top.
Key Signals This Month
SIX THINGS FROM FEBRUARY 2026 THAT MATTER
- Your AI tools can do far more than you're asking them to do. Research published in February shows AI agents can handle tasks lasting up to five hours. In practice, the vast majority of usage peaks at under a minute. Most organisations are sitting on capability they've already paid for.
- The same AI capability now costs 5x less than it did a month ago. A new model released in February outperforms the previous best option — at one-fifth the price. The cost barrier to running AI workflows at scale has effectively collapsed.
- Always-on AI agents went from experiment to mainstream in a week. OpenClaw — a platform that turns AI models into persistent, always-on assistants — exploded to 1.5 million agents in days. Dedicated hardware for running agents 24/7 sold out. People are now running teams of 10+ AI agents on a small computer in their home, handling research, project management, and monitoring around the clock. The concept of AI that works while you sleep just became practical for anyone willing to set it up.
- AI is the fastest-growing technology in history. Anthropic (the company behind Claude) hit a $14 billion annual revenue run-rate, growing 10x per year for three consecutive years. Their coding tool now authors 4% of all public code on GitHub. This isn't hype — it's the largest revenue ramp in the history of software.
- Sixteen AI agents built a working piece of software — with no human orchestrator. A team of AI agents wrote 100,000 lines of code, producing software that actually runs. No one coordinated them. They organised themselves using shared text files.
- AI agents are expanding well beyond technical work. New data shows over half of AI agent actions are now in non-technical functions: back office, marketing, sales, and finance. The assumption that agents are only for developers is already wrong.
Emerging Signals
EARLY PATTERNS – WORTH WATCHING
Not every signal has an obvious action attached. These are trends from February that don't have immediate business implications but are worth tracking. If any of them accelerate, they could reshape how AI affects your organisation.
The "SaaSpocalypse." Investors are pricing in broad disruption of traditional software companies. Some major enterprise software stocks dropped 20–37% in early 2026. The concern: AI-native tools will replace entire software categories, not just augment them. The reality is more nuanced — large organisations move slowly, and most existing tools won't vanish overnight — but if you're renewing expensive software contracts, it's worth asking whether an AI-native alternative now exists.
AI's physical limits. Training a single frontier AI model will soon require gigawatts of electricity — equivalent to powering a small city. The energy and infrastructure demands of AI are becoming a real constraint. If sustainability is part of your organisation's mandate, the environmental footprint of AI tools is a question that's going to get harder to ignore.
A tightening knowledge-work market. Professional services job openings hit an 11-year low in February, according to industry data. Hiring rates in knowledge-work sectors have dropped significantly. Whether this is AI-driven, cyclical, or both is debated — but the practical implication is the same: the value of existing staff who can work effectively with AI is going up, not down.
A generational shift in career attitudes. Nearly three-quarters of Gen Z parents now say they'd prefer their children to become trade entrepreneurs rather than tech employees. Only 16% believe a university degree guarantees job security. The future workforce may be less concentrated in the knowledge-work roles AI is disrupting most.
AI as infrastructure, not just a tool. One of the month's most thought-provoking arguments: AI may be better for plumbers than programmers. The logic is that reduced software costs make previously uneconomic niche markets viable — scheduling tools for trades, inventory systems for small workshops, custom apps for local businesses. If this plays out, AI's biggest impact may not be in the industries that talk about it most.
The Framework: February Deep Dive
Each month we map signals to four tiers that cover everything from the tools themselves to how you actually use them. This is the structured analysis behind the headlines above.
Tier 1: The Landscape
What's available now? Cost collapses, revenue records, and why the 'best model' changes every week.
Tier 2: The Foundation
What does my organisation need in place? Trust, governance, and why AI adoption is now mandatory at some firms.
Tier 3: The Practice
How should my team actually work with AI? This month's deep dive: the deployment overhang — why you're using a fraction of what you've already got.
Tier 4: The Application
Where is this being applied? Self-organising agent teams, AI beyond code, and hard ROI numbers.
Tier 1: The Landscape
What's available – and what changed
The cost of intelligence keeps falling. The same level of AI capability costs 150 times less than it did 21 months ago. That's not a typo. And it's not slowing down — in February alone, one major provider released a model that users prefer over its previous top-of-the-line option, at one-fifth the price. Google matched it with a model that doubled its own benchmark scores at no increase in price.
Revenue numbers have settled the "is AI real?" question. AI companies are generating revenue at a pace with no precedent in software. The fastest any traditional software company reached $10 billion was over 20 years. AI labs are doing it in about one year. Anthropic's coding tool — used by developers to write and edit code — now accounts for 4% of all public code on GitHub. That figure doubled in a single month. Whether or not your organisation uses AI today, the market has made its decision.
Always-on agents arrived — practically overnight. OpenClaw, a platform that turns AI models into persistent assistants running 24/7, went from a small community to 1.5 million agents in days. Dedicated hardware sold out. Early adopters are running teams of AI agents on small computers at home — handling continuous research, monitoring, and project management while they sleep. The highest-value use case so far isn't autonomous coding (which requires too much human feedback) but persistent research: agents that continuously scan, catalogue, and surface relevant information. This is a new category of AI use that didn't exist at scale a month ago.
The "best" model still changes every week. The leading AI companies — OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini) — are trading benchmark leadership on a near-weekly basis. The practical takeaway from January's guide still holds: stop trying to pick the right model. Build the team capability and workflows that work regardless of which model is on top.
WHAT THIS MEANS FOR YOU
The economics of AI changed again this month — in your favour. If budget was your reason for limited AI use, revisit that assumption. The same investment now goes dramatically further. The bottleneck has moved from cost to capability.
Tier 2: The Foundation
What your organisation needs in place
AI adoption is becoming mandatory. Accenture — with over 700,000 employees — now ties AI usage to career progression. Three other major consultancies confirmed that senior managers are the hardest group to convert. The signal is clear: in large organisations, AI isn't optional anymore. If this pattern spreads (and it will), "I don't use AI" stops being a personal preference and starts being a career risk.
Trust is becoming a competitive differentiator. In a landscape where capability is converging — most AI tools can do roughly the same things — the companies behind those tools are differentiating on values and trust. Anthropic committed to keeping their consumer product permanently ad-free. As AI tools become more capable, the question of which providers you trust with your data and workflows is becoming a real business decision, not just a technical one.
Autonomy isn't something you buy — it's something you build. A detailed study on how people use AI agents found that the degree of autonomy an agent exercises isn't determined by the technology alone. It's co-constructed — shaped by the model's capability, the product design, and the user's skill level. You can't just purchase an "autonomous AI." You have to design the workflows, train the people, and calibrate the trust level. There's no shortcut.
WHAT THIS MEANS FOR YOU
If your AI strategy is "give people access and see what happens," February's data says that won't work. Autonomy requires design — deliberate choices about what AI does automatically, what it checks with a human, and when it stops to ask. That's governance work, not technology work.
Tier 3: The Practice
How your team should actually work with AI – this month's deep dive
THE DEPLOYMENT OVERHANG: YOU'RE USING A FRACTION OF WHAT YOU'VE ALREADY GOT
Industry research published in February measured how people actually use AI agents in practice. The gap between what AI can do and what people ask it to do is enormous. According to the data, AI agents can handle tasks lasting up to five hours. The most demanding real-world usage — the 99.9th percentile — peaks at 42 minutes. The median task? Forty-five seconds.
That gap is the deployment overhang. And it exists in every organisation using AI today.
This isn't about needing better tools. It's about using what you already have.
The January edition showed that how people use AI matters more than which AI they use — the delegation vs inquiry finding. February extends that insight with hard numbers. The gap isn't just about interaction style. It's about scope. Most people are asking AI to do small, safe, familiar tasks when it's capable of handling complex, multi-step, hours-long work.
Expert users don't approve more — they monitor differently.
The same data revealed a clear pattern in how experienced AI users differ from new ones:
New Users
Approve most actions manually. Rarely interrupt the AI. Treat it like a subordinate that needs constant oversight.
Expert Users
Auto-approve 40% of actions. Interrupt the AI nearly twice as often as new users. Treat it like a colleague they trust but actively oversee.
The shift from new to expert isn't "hands off." It's "hands different." Expert users give more trust upfront but intervene more assertively when something matters. They monitor rather than approve. This is a learnable skill, not an innate talent — and it's the skill that closes the deployment overhang.
The AI is already better at knowing when to stop than most people expect.
One surprising finding from the same dataset: AI agents stopped themselves to ask for clarification twice as often as humans interrupted them — 16% of the time on complex tasks, compared to 7% human interruption. The safety concern that AI will "run away" with a task is less supported by the evidence than the concern that humans won't push AI far enough.
Closing the deployment overhang in your team
1. Audit scope, not just usage. It's not enough to know whether your team uses AI. Ask what they use it for. If the answer is short, routine tasks, you have a deployment overhang.
2. Shift from approval to monitoring. Train your team to give AI more autonomy on low-risk tasks and focus their attention on high-stakes moments. Approving every action is a bottleneck that limits value.
3. Start with time, not complexity. The easiest way to close the gap: take a task your team currently spends 15 minutes on with AI, and ask whether it could handle a 2-hour version. The capability is probably already there.
4. Name the overhang. Make it visible. 'We're using 10% of what our tools can do' is a more powerful motivator than any training programme.
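The approval-to-monitoring shift can be pictured as a simple policy: auto-approve low-risk actions so humans can monitor rather than gatekeep, and stop for the high-stakes ones. The sketch below is purely illustrative — the action names and risk categories are hypothetical, not taken from any specific agent platform:

```python
# Illustrative sketch of an approval policy for agent actions.
# Low-risk actions are auto-approved (humans monitor the log);
# high-stakes or unknown actions require explicit human sign-off.
# All action names and categories here are hypothetical examples.

LOW_RISK = {"read_file", "search", "summarise", "draft"}        # monitor only
HIGH_RISK = {"send_email", "make_payment", "delete", "deploy"}  # ask a human

def review_action(action: str) -> str:
    """Return how a proposed agent action should be handled."""
    if action in HIGH_RISK:
        return "require_approval"   # stop and ask a human first
    if action in LOW_RISK:
        return "auto_approve"       # log it; a human monitors, not approves
    return "require_approval"       # default to caution for unknown actions

assert review_action("summarise") == "auto_approve"
assert review_action("make_payment") == "require_approval"
```

The design point is the default: anything not explicitly classified falls back to human approval, so expanding the auto-approve list is a deliberate governance decision rather than an accident.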
Tier 4: The Application
Where AI is being applied – and what's emerging
Agent teams can self-organise. In one of February's most striking experiments, a research team tasked sixteen AI agents with writing a working C compiler — 100,000 lines of code capable of compiling an operating system that boots. No human orchestrator. The agents coordinated using shared text files, essentially inventing their own project management. The reported cost: approximately $20,000. The lesson isn't about compilers — it's that well-designed AI teams can handle genuinely complex, multi-week projects. The key design decisions: write very thorough tests, structure output for AI to read (not humans), and give each agent a clearly defined speciality.
AI agents work beyond code — and the data proves it. Over half of all AI agent actions in a major study were in non-engineering functions. Back office operations (9%), marketing (4.4%), sales and CRM (4.3%), and finance (4%) are all active agent deployment areas. If your organisation has been thinking of AI agents as a developer tool, it's time to widen the lens.
Hard ROI numbers are appearing. Walmart's AI shopping assistant increased average basket size by 35%, with half of online customers actively using it. A senior engineering team at Spotify reported they haven't written a single line of code since December — all coding happens through AI, with engineers reviewing and directing rather than typing. These aren't pilot programmes. They're production deployments at scale.
WHAT THIS MEANS FOR YOU
AI agents aren't a future capability — they're a current one, and they work well beyond technical teams. If you're in operations, marketing, finance, or customer service, the question isn't "could AI agents help?" It's "what are we waiting for?"
The framing matters: think "opportunity AI" (what new things can we do?) rather than "efficiency AI" (how do we cut costs?). The organisations getting the most value from AI are using it to do things they couldn't do before, not just to do existing things cheaper.
Framework Check
DID ANYTHING CHANGE HOW WE THINK ABOUT AI CAPABILITY?
The four-tier framework holds, but February introduced a concept that may become central: the deployment overhang.
January told us that how you use AI matters (delegation vs inquiry). February tells us that how much you use it matters too — and most organisations are barely scratching the surface. Together, these two findings form a complete picture: use AI in inquiry mode, and use it for bigger things.
One signal to watch: the argument that AI is evolving from a distinct capability to something more like infrastructure — embedded everywhere, like electricity. If that trajectory continues, the framework eventually shifts from "how to adopt AI" to "how to run a modern business." We're not there yet. But hold it lightly.
What To Do This Month
Three actions for February
1. Audit the deployment overhang. Ask your team (or yourself): what do you actually use AI for? If it's mostly short, routine tasks — drafting emails, summarising notes — you're leaving most of the value on the table. The capability for longer, more complex work is already there.
2. Revisit your AI budget assumptions. Costs dropped 5x in a single month. Whatever you decided about AI spend three months ago is probably wrong. The same investment now covers dramatically more ground.
3. Look beyond code. If you've been thinking of AI agents as a developer tool, check the data: over half of agent usage is now in operations, marketing, sales, and finance. Pick one non-technical workflow and test it.
AI Signal is published monthly by Pandion. We help organisations build real AI capability – the foundations, the practice, and the fluency that turn tools into results.
Have a question about something in this guide? Get in touch.
FAQs
What is the AI deployment overhang?
The deployment overhang is the gap between what AI tools can technically do and what organisations actually ask them to do. Research shows AI agents can handle tasks lasting five hours, but in practice most people use them for tasks under a minute. Most organisations are using a fraction of their existing AI capability.
Are AI agents only useful for technical teams?
No. Recent data shows that over half of AI agent actions are now in non-technical functions — back office, marketing, sales, and finance. The expansion of agents beyond coding and engineering is one of February's clearest signals.
Should my organisation worry about the 'SaaSpocalypse'?
Not panic, but pay attention. Major software companies have seen significant stock drops as investors price in AI disruption. The practical takeaway: review your software subscriptions with fresh eyes. Some categories may be replaced by AI-native alternatives. But large organisations move slowly, and most existing software won't disappear overnight.