ALTITUDE
AI Signal – January 2026
Monthly signals mapped to a four-tier framework: what changed in the AI landscape, what your organisation needs in place, how teams should actually work with AI, and where it's being applied.

Key Signals This Month
FIVE THINGS FROM JANUARY 2026 THAT MATTER
- AI costs dropped over 100x in a year. Jensen Huang put a number on it at CES. Every major provider cut prices in January. The barrier to AI access is collapsing – the question is whether your organisation is ready to absorb it.
- Using AI the wrong way erodes skills. Anthropic published a rigorous study: developers using AI assistants scored 17% lower on skills assessments. But the how matters – delegation erodes skills, inquiry preserves them. Same tool, completely different outcomes.
- Half of all jobs are now AI-touched. Up from 36% a year ago. And it's not just routine work – AI disproportionately covers tasks that previously required specialist knowledge.
- AI governance shifted from checklists to reasoning. Anthropic rewrote Claude's constitution from rules to narrative. If your AI policy is still a list of do's and don'ts, it's already outdated.
- Industry-specific AI is the dominant direction. Generic AI gets you to 90%. Domain-specific solutions get you to 99.999%. That gap is where expertise creates value.
Emerging Signals
EARLY PATTERNS – WORTH WATCHING
Not every signal has an obvious action attached. These are trends from January that don't have immediate business implications but are worth tracking. If any of them accelerate, they could reshape how AI affects your organisation.
AI chip geopolitics. The physical infrastructure behind AI – who makes the chips, who controls the supply – is becoming a geopolitical issue. If export controls tighten or supply chains shift, the cost and availability of AI tools could change quickly. Not directly about how your organisation uses AI, but it sets the ceiling on what's possible.
AI-native tools replacing, not just augmenting. There's a growing conversation about whether AI-powered tools will replace legacy business software entirely, rather than sitting on top of it. If that shift accelerates, the "right tools" question changes from "which AI assistant?" to "do we still need this entire software category?"
"Human-made" as a market signal. Some markets are seeing early demand for "human-verified" or "human-made" labels – the AI equivalent of organic certification. If this trend grows, it could affect how you position AI-assisted work to clients and stakeholders.
AI agents organising themselves. One experiment in January involved thousands of AI agents self-organising into something resembling an economy. It's too early to mean anything practical, but it's worth noting: most current thinking about AI assumes humans directing it. That assumption may not hold forever.
Enterprise slowness is structural, not just cultural. Jensen Huang's observation that enterprise is always the slowest adopter reframes expectations. If you're in a mid-market or enterprise environment and feel behind, that's normal. The question isn't how to move faster – it's how to move well.
The Framework: January Deep Dive
Each month we map signals to four tiers that cover everything from the tools themselves to how you actually use them. This is the structured analysis behind the headlines above.
Tier 1: The Landscape
What's available now? New models, price shifts, capability changes – a quick orientation on what moved.
Tier 2: The Foundation
What does my organisation need in place? Governance, readiness, data foundations – the groundwork.
Tier 3: The Practice
How should my team actually work with AI? Interaction patterns, skills, team fluency – the monthly deep dive.
Tier 4: The Application
Where is this being applied? Industry-specific use cases, emerging patterns, and what's working.
Tier 1: The Landscape
What's available – and what changed
The "best" AI tool changes every few weeks. The leading models – from companies like OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini) – now take turns outperforming each other. Six months ago, picking the right model mattered. Today, what matters more is how your team uses whichever tools you have.
Costs are in freefall. Jensen Huang (NVIDIA CEO) put a number on it at CES in January: the same AI capability that cost thousands a year ago is now over 100x cheaper. Every major provider dropped prices significantly in January alone. The direction is clear and accelerating.
The infrastructure is maturing. The connections between AI tools and business systems are becoming standardised. What was experimental a year ago – AI assistants that can read your documents, search your data, draft from your templates – is now production-ready tooling. The question is no longer "does this work?" but "are we set up to use it?"
WHAT THIS MEANS FOR YOU
Don't agonise over which AI tool to pick. The landscape shifts too fast for that to be your strategy. Instead, invest in the things that compound – your team's capability, your data foundations, and your workflows. Those don't depreciate when the next model drops.
Tier 2: The Foundation
What your organisation needs in place
AI governance is growing up. Anthropic rewrote Claude's entire operating constitution in January – and the big shift was from a checklist of rules to a narrative document that teaches reasoning. The old version was "don't do X." The new version is "here's how to think about trade-offs." They released it as public domain, so anyone can use it as a starting template. If your AI policy is still a list of do's and don'ts, it's already outdated.
Adoption is messier than headlines suggest. CJ Desai, President of MongoDB, talks to ten-plus large customers a week about AI. His read in January: AI writing assistants (like Microsoft Copilot) are delivering "not great" value in most deployments. AI coding tools are a genuine breakthrough. Customer support chatbots are "still tinkering." The pattern isn't that AI doesn't work – it's that most organisations haven't done the groundwork to absorb it. The gap between capability and readiness is where most companies are stuck.
Half of all jobs are now AI-touched. Anthropic (the company behind Claude) published research putting it at 49% – up from 36% just a year ago. And it's not just routine work. AI disproportionately covers tasks that previously required specialist knowledge. This isn't a future trend. It's the current reality, and most organisations haven't adjusted their hiring, training, or team structures to account for it.
WHAT THIS MEANS FOR YOU
The technology isn't your bottleneck. Readiness is. If your governance is a checklist, if your teams don't have clear guidance on how to use AI, if you haven't thought about which skills need protecting – that's the foundation work.
Tier 3: The Practice
How your team should actually work with AI – this month's deep dive
THE FINDING THAT SHOULD CHANGE HOW YOU THINK ABOUT AI TRAINING
In late January, Anthropic – one of the leading AI companies – published a study most AI companies would never run: a rigorous, controlled study of their own product with software developers. The result: those using AI assistants scored 17% lower on skills assessments than those working without AI.
That's not a marginal difference. And the skills hit hardest – finding and fixing errors, navigating unfamiliar systems, learning new technologies – were exactly the ones that matter most for independent problem-solving.
Before you conclude "AI is bad for learning," here's the part that matters more.
It's not about using AI less. It's about using it differently.
The study found two distinct patterns of AI use:
Delegation
Hand the problem to the AI. Accept the answer. Move on. This is what most people do by default – it's fast and it's what produced the 17% gap.
Inquiry
Ask the AI to explain the problem. Explore approaches together. Challenge its reasoning. Then do the work yourself. This pattern preserved learning and captured most of the speed benefit.
Same tool. Same people. Completely different outcomes based on the interaction pattern.
Why this matters beyond coding
This isn't a story about developers. It's a story about every knowledge worker using AI right now. When you ask ChatGPT to draft an email and hit send without thinking, that's delegation. When you ask it to outline three approaches to a problem and then choose between them, that's inquiry.
The split Jensen Huang described at CES applies here too. He argued that AI automates tasks but humans expand into purpose. His example: despite 100% of radiology images being AI-processed, the number of radiologists has grown – because the purpose of the role expanded. But that expansion only happens if people develop the judgment to direct AI, not just accept its output.
What to do about it
1. Name the patterns. Make "delegation vs inquiry" part of your team's vocabulary. Awareness alone shifts behaviour.
2. Protect learning moments. When someone is new to a skill, encourage the inquiry pattern even though delegation is faster. The speed difference is small. The learning difference is enormous.
3. Evaluate outputs, not just speed. If AI is making your team faster but your junior staff aren't developing judgment, you're borrowing from the future.
Tier 4: The Application
Where AI is being applied – and what's emerging
Industry-specific AI is the dominant direction. Jensen Huang's prediction: the next five years are about depth in specific industries, not general-purpose tools. Generic AI gets you to 90% accuracy. Industry-specific solutions get you to 99.999%. The gap between those two numbers is where domain experts – and specialist advisors – create value. If you're in sustainability, finance, legal, or any regulated field, this is directly relevant.
Non-technical teams are building software. AI coding tools have matured to the point where people without engineering backgrounds are creating working applications. The bottleneck has shifted from building to knowing what to build and verifying it works. Good judgment about what your business needs – not technical skill – is becoming the scarce resource.
AI "agents" are getting practical – but context is everything. An agent is an AI that can take actions, not just answer questions – booking meetings, processing data, drafting and sending documents. One experiment in January staffed an entire company with AI agents. The finding: their success depended almost entirely on how they were set up – what information they were given, what boundaries they had, and when they should stop. Setup determines outcomes, not raw capability.
WHAT THIS MEANS FOR YOU
If you're in a specialised field, your domain knowledge just became more valuable, not less. AI handles the general work. Your expertise handles the last mile that actually matters.
Framework Check
DID ANYTHING CHANGE HOW WE THINK ABOUT AI CAPABILITY?
The four-tier framework holds. But January sharpened one area significantly:
Tier 3 (Practice) now has hard evidence for something we've long suspected – that how people interact with AI matters more than which AI they use. The delegation vs inquiry pattern should become a standard part of how you introduce AI tools to any team. It's the closest thing we have to a universal rule for getting real value from AI.
AI Signal is published monthly by Pandion. We help organisations build real AI capability – the foundations, the practice, and the fluency that turn tools into results.
Have a question about something in this guide? Get in touch.
FAQs
What is the difference between delegation and inquiry when using AI?
Delegation means handing a problem to the AI and accepting the output. Inquiry means using the AI to explore, explain, and challenge – then doing the work yourself. Research shows delegation erodes skills while inquiry preserves learning and still captures most of the speed benefit.
Is there a growing AI skills gap in organisations?
Yes. Research from Anthropic shows AI assistants can reduce skills assessment scores by 17% when used in delegation mode. The gap isn't about AI capability – it's about how teams interact with the tools and whether learning moments are protected.
How often does the 'best' AI model change?
Roughly every few weeks. The leading models from OpenAI, Anthropic, and Google now take turns outperforming each other. This is why investing in team capability and workflows matters more than picking the 'right' model.