
AI Signal – January 2026

Monthly signals mapped to a four-tier framework. January's central finding: the delegation gap — AI assistants erode skills by 17%, but only when used the wrong way. Plus: costs collapse 100x, half of all jobs are AI-touched, and governance shifts from checklists to reasoning.

30 January 2026 · 11 min read · AI · AI Signal · Delegation Gap · Skills · Enterprise · AI Governance · 2026
Delegation vs inquiry — two paths from the same starting point. One follows a single straight line; the other branches, explores, and learns.

January's big finding: AI assistants can erode the very skills they're supposed to enhance — but only when used in delegation mode. A rigorous Anthropic study found developers using AI scored 17% lower on skills assessments. The twist: those who used AI to explore and challenge rather than delegate lost nothing. Same tool, opposite outcomes. That delegation gap is this month's deep dive.

Elsewhere: AI costs collapsed over 100x in a year, half of all jobs are now AI-touched, governance shifted from rule checklists to reasoning narratives, and industry-specific AI emerged as the dominant direction.

At a Glance

Key takeaway: the delegation gap. The way you use AI matters more than which AI you use. January produced hard evidence that delegation — handing problems to AI and accepting the output — erodes skills, while inquiry — using AI to explore and challenge — preserves learning and captures most of the speed benefit.

January 2026 — six signals across cost, skills, adoption, governance, application, and tools

COST

100x cheaper

  • Jensen Huang put a number on it at CES
  • Every major provider cut prices in January
  • The barrier to AI access is collapsing

SKILLS

17% skills gap

  • Developers using AI assistants scored 17% lower on skills assessments
  • Delegation erodes skills — inquiry preserves them
  • Same tool, completely different outcomes based on interaction pattern

ADOPTION

49% of jobs AI-touched

  • Up from 36% a year ago
  • AI disproportionately covers specialist knowledge tasks
  • Most organisations haven't adjusted hiring or training

GOVERNANCE

Rules → reasoning

  • Anthropic rewrote Claude's constitution from rules to narrative
  • Released as public domain — anyone can use as template
  • Checklist-based AI policies are already outdated

APPLICATION

Domain > generic

  • Generic AI gets you to 90%
  • Industry-specific solutions get you to 99.999%
  • The gap between those numbers is where expertise creates value

TOOLS

Models leapfrog weekly

  • Leading models take turns outperforming each other
  • Picking the 'right' model is no longer a viable strategy
  • Infrastructure connecting AI to business systems is now production-ready

Signal Map

Each month we map the key signals to the four-tier framework from our AI capability page. Same structure, same tiers — so you always know where to place what you're reading. This month's deep dive follows below.

The Landscape

The territory before the strategy — what's available and what changed

Models leapfrog weekly

The leading models from OpenAI, Anthropic, and Google now take turns outperforming each other. Six months ago, picking the right model mattered. Today, what matters more is how your team uses whichever tools you have.

100x cost collapse

Jensen Huang put a number on it at CES: the same AI capability that cost thousands a year ago is now over 100x cheaper. Every major provider dropped prices in January. The direction is clear and accelerating.

Infrastructure maturing

The connections between AI tools and business systems are becoming standardised. AI assistants that read documents, search data, and draft from templates are now production-ready. The question is no longer 'does this work?' but 'are we set up to use it?'

The Foundation

What needs to be in place — governance, data readiness, and sustainable deployment

Governance grows up

Anthropic rewrote Claude's operating constitution from a checklist of rules to a narrative that teaches reasoning, and released it as public domain. If your AI policy is still a list of dos and don'ts, it's already outdated.

Adoption is messier than headlines

CJ Desai (MongoDB) talks to 10+ large customers weekly. His read: AI writing tools deliver 'not great' value, coding tools are a breakthrough, chatbots are 'still tinkering'. Most organisations haven't done the groundwork to absorb AI.

49% of jobs AI-touched

Up from 36% just a year ago. AI disproportionately covers tasks that previously required specialist knowledge. Most organisations haven't adjusted their hiring, training, or team structures to account for it.

The Practice

How to work with AI effectively — context, coordination, and capability

Delegation erodes skills

Developers using AI assistants scored 17% lower on skills assessments. The skills hit hardest — finding errors, navigating unfamiliar systems, learning new technologies — are exactly the ones that matter most for independent problem-solving.

Inquiry preserves learning

Same tool, same people, completely different outcomes. Using AI to explore approaches, challenge reasoning, and explain problems preserved learning and captured most of the speed benefit. The interaction pattern is everything.

Purpose expands with judgment

Jensen Huang's CES argument: AI automates tasks but humans expand into purpose. Despite 100% of radiology images being AI-processed, the number of radiologists has grown. But that expansion happens only if people develop the judgment to direct AI, not just accept its output.

The Application

Where AI meets your world — applied work, integration, and delivery

Domain expertise = value

Generic AI gets you to 90% accuracy. Industry-specific solutions get you to 99.999%. That gap is where domain experts and specialist advisors create value. If you're in sustainability, finance, legal, or any regulated field, this is directly relevant.

Non-technical teams building

AI coding tools have matured to the point where people without engineering backgrounds are creating working applications. The bottleneck has shifted from building to knowing what to build and verifying it works.

Agents need context, not power

An experiment staffed an entire company with AI agents. Their success depended almost entirely on setup — what information they were given, what boundaries they had, when they should stop. Setup determines outcomes, not raw capability.


Framework Check

DID ANYTHING CHANGE HOW WE THINK ABOUT AI CAPABILITY?

The four-tier framework holds. But January sharpened one area significantly:

Tier 3 (Practice) now has hard evidence for something we've long suspected – that how people interact with AI matters more than which AI they use. The delegation vs inquiry pattern should become a standard part of how you introduce AI tools to any team. It's the closest thing we have to a universal rule for getting real value from AI.


Deep Dive: The Delegation Gap

THE FINDING THAT SHOULD CHANGE HOW YOU THINK ABOUT AI TRAINING

In late January, Anthropic – one of the leading AI companies – published a study on their own product that most AI companies would never run. A rigorous, controlled study with software developers. The result: those using AI assistants scored 17% lower on skills assessments than those working without AI.

That's not a marginal difference. And the skills hit hardest – finding and fixing errors, navigating unfamiliar systems, learning new technologies – were exactly the ones that matter most for independent problem-solving.

Before you conclude "AI is bad for learning," here's the part that matters more.

It's not about using AI less. It's about using it differently.

The study found two distinct patterns of AI use:

Delegation

Hand the problem to the AI. Accept the answer. Move on. This is what most people do by default – it's fast and it's what produced the 17% gap.

Inquiry

Ask the AI to explain the problem. Explore approaches together. Challenge its reasoning. Then do the work yourself. This pattern preserved learning and captured most of the speed benefit.

Same tool. Same people. Completely different outcomes based on the interaction pattern.

Why this matters beyond coding

This isn't a story about developers. It's a story about every knowledge worker using AI right now. When you ask ChatGPT to draft an email and hit send without thinking, that's delegation. When you ask it to outline three approaches to a problem and then choose between them, that's inquiry.

The split Jensen Huang described at CES applies here too. He argued that AI automates tasks but humans expand into purpose. His example: despite 100% of radiology images being AI-processed, the number of radiologists has grown – because the purpose of the role expanded. But that expansion only happens if people develop the judgment to direct AI, not just accept its output.

Closing the delegation gap

Three moves to close the delegation gap

  1. Name the patterns. Make "delegation vs inquiry" part of your team's vocabulary. Awareness alone shifts behaviour.
  2. Protect learning moments. When someone is new to a skill, encourage the inquiry pattern even though delegation is faster. The speed difference is small. The learning difference is enormous.
  3. Evaluate outputs, not just speed. If AI is making your team faster but your junior staff aren't developing judgment, you're borrowing from the future.

Emerging Signals

EARLY PATTERNS — WORTH WATCHING

Not every signal has an obvious action attached. These are trends from January that don't have immediate business implications but are worth tracking. If any of them accelerate, they could reshape how AI affects your organisation.

AI chip geopolitics. The physical infrastructure behind AI – who makes the chips, who controls the supply – is becoming a geopolitical issue. If export controls tighten or supply chains shift, the cost and availability of AI tools could change quickly. Not directly about how your organisation uses AI, but it sets the ceiling on what's possible.

AI-native tools replacing, not just augmenting. There's a growing conversation about whether AI-powered tools will replace legacy business software entirely, rather than sitting on top of it. If that shift accelerates, the "right tools" question changes from "which AI assistant?" to "do we still need this entire software category?"

"Human-made" as a market signal. Some markets are seeing early demand for "human-verified" or "human-made" labels – the AI equivalent of organic certification. If this trend grows, it could affect how you position AI-assisted work to clients and stakeholders.

AI agents organising themselves. One experiment in January involved thousands of AI agents self-organising into something resembling an economy. It's too early to mean anything practical, but it's worth noting: most current thinking about AI assumes humans directing it. That assumption may not hold forever.

Enterprise slowness is structural, not just cultural. Jensen Huang's observation that enterprise is always the slowest adopter reframes expectations. If you're in a mid-market or enterprise environment and feel behind, that's normal. The question isn't how to move faster – it's how to move well.


What To Do This Month

Three actions for January

  1. Introduce "delegation vs inquiry" to your team. Name the two patterns. Run a single meeting where everyone tries the inquiry approach on a real task. Awareness alone shifts behaviour — and this is the closest thing we have to a universal rule for effective AI use.
  2. Revisit your AI cost assumptions. Whatever you budgeted for AI six months ago is wrong — costs dropped over 100x. The barrier is no longer price. It's whether your team knows how to use what you've already got.
  3. Audit your AI governance. If your AI policy is a checklist, it's outdated. Anthropic released their reasoning-based constitution as public domain — use it as a starting template for something that actually helps people make decisions, not just tick boxes.

AI Signal is published monthly by Pandion. We help organisations build real AI capability – the foundations, the practice, and the fluency that turn tools into results.

Have a question about something in this guide? Get in touch.

FAQs

What is the difference between delegation and inquiry when using AI?

Delegation means handing a problem to the AI and accepting the output. Inquiry means using the AI to explore, explain, and challenge – then doing the work yourself. Research shows delegation erodes skills while inquiry preserves learning and still captures most of the speed benefit.

Is there a growing AI skills gap in organisations?

Yes. Research from Anthropic shows AI assistants can reduce skills assessment scores by 17% when used in delegation mode. The gap isn't about AI capability – it's about how teams interact with the tools and whether learning moments are protected.

How often does the 'best' AI model change?

Roughly every few weeks. The leading models from OpenAI, Anthropic, and Google now take turns outperforming each other. This is why investing in team capability and workflows matters more than picking the 'right' model.
