AI CAPABILITY • UNDERSTANDING

From Chat to Orchestration

Where are you on the AI adoption journey?

In 30 Seconds

Not everyone experiences AI the same way. A sustainability analyst at a large asset manager might have access to a single, locked-down chat interface. A consultant might use five different tools daily, choosing the right model for each task.

Understanding where you sit on this spectrum matters – it shapes what's possible, what frustrates you, and what you might be missing.

This page maps the different levels of AI access and capability – from “no AI allowed” to multi-model orchestration. Find where you are, and see what's possible beyond it.

The Enterprise AI Reality

When we discuss AI with professionals across financial services, sustainability, and consulting, we notice a pattern: people often don't know what they don't know.

Someone using a white-labeled enterprise AI tool might not realise they're using Claude Sonnet under the hood. They might not know that guardrails are blocking their queries, or that the same question would work fine with different settings.

They experience “AI” as a single thing – not as an ecosystem of models, tools, and configurations with wildly different capabilities.

Common observations from industry forums:

  • “Prompt libraries” discussed as cutting-edge practice
  • Frustration with guardrails blocking legitimate ESG queries
  • No awareness of model differences (Opus vs Sonnet vs Haiku)
  • “Never AI + AI, always human + AI” as a defensive posture

The Five Tiers of AI Access

Where does your organisation sit?

Tier 1: Locked Out

“We're not allowed to use AI”

What they experience:

  • No officially sanctioned AI tools
  • Maybe using ChatGPT on a personal phone
  • Curious but frustrated by restrictions

Common in:

  • Conservative financial services
  • Regulated sectors (legal, healthcare)
  • Organisations “waiting for regulations”
Tier 2: Copilot Only

“AI helps me write emails”

What they experience:

  • Microsoft 365 Copilot in Word, Excel, Teams
  • AI as a feature, not a tool
  • Often don't identify as “AI users”

Key insight:

They use AI daily without realising it. “Copilot summarised my meeting” feels like using Outlook, not using AI.

Tier 3: Controlled Chat (most common)

“The AI tool my firm gave me”

What they experience:

  • Internal portal branded as “Research Assistant”
  • One chat interface, no settings
  • Guardrails blocking queries without explanation
  • No model selection or configuration

What they don't see:

  • Which model powers it (Claude? GPT-4?)
  • Why certain queries get blocked
  • That other models exist with different strengths
  • That settings could be adjusted

The ESG irony: Enterprise guardrails often flag “controversies”, “violations”, and “negative impacts” as harmful content – exactly the topics sustainability teams need to research.

Tier 4: Power User

“I choose the right model for each task”

What they experience:

  • Multiple subscriptions (Claude, ChatGPT, Perplexity)
  • Model selection based on task type
  • Multi-tool workflows chained together
  • Understanding of context windows, tokens, prompting

Key difference:

They see “AI” as an ecosystem, not a single tool. They know that Opus reasons better than Sonnet, that Perplexity has live web access, and that NotebookLM excels at document synthesis.

Tier 5: AI Engineer / Builder

“I'm building AI into products”

What they do:

  • API access, development environments
  • Building AI into applications
  • Fine-tuning, vector databases, custom agents
  • Technical model evaluation

Different skillset:

Engineering capability, not just application skill. Most sustainability professionals don't need this level – but understanding it exists helps frame what's possible.

The Spectrum

  • Tier 1 – Locked Out (no AI): ~40%
  • Tier 2 – Copilot (AI as a feature): ~25%
  • Tier 3 – Controlled Chat (AI as a tool): ~15%
  • Tier 4 – Power User (AI as a toolkit): ~2-3%
  • Tier 5 – Builder (building with AI): <1%

Many professionals in financial services and sustainability sit at Tier 3 or below. They experience AI through a single, controlled interface – and often assume that's all there is.

Inside the Enterprise AI Setup

Understanding why Tier 3 looks the way it does helps explain the frustrations.

The typical enterprise deployment:

1. IT chooses a platform – usually AWS Bedrock or Azure OpenAI, enterprise AI services run on the “hyperscaler” clouds, with security, logging, and compliance features built in.
2. IT selects a model – typically Claude Sonnet or GPT-4o. Usually just one. End users don't get to choose.
3. Compliance sets guardrails – safety filters that block certain content types. Usually set to “high” (safer for the firm).
4. Users get a branded interface – “Insights AI” or “Research Assistant”. No settings, no model selection, no visibility into what's happening.

The guardrails problem

Guardrails are safety filters designed to prevent AI from generating harmful content. They scan queries and responses for sensitive topics and block accordingly.

The problem: guardrails designed for general enterprise use don't understand context. An ESG analyst researching “supply chain human rights violations” might get blocked – the guardrail sees “violations” and flags it as potentially harmful.
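The failure mode is easy to see in miniature. Below is a toy sketch of a keyword-based filter, purely illustrative – the blocklist and function names are invented for this example, not any vendor's actual rules – showing how context-blind matching blocks a legitimate ESG query:

```python
# Illustrative sketch of a context-blind, keyword-based guardrail.
# The blocklist here is invented for demonstration only.
BLOCKED_TERMS = {"violations", "weapons", "exploitation", "abuse"}

def guardrail_check(query: str) -> bool:
    """Return True if the query passes, False if it is blocked."""
    words = {w.strip(".,?!").lower() for w in query.split()}
    return words.isdisjoint(BLOCKED_TERMS)

# A legitimate ESG research query trips the filter...
print(guardrail_check("Summarise supply chain human rights violations"))  # False

# ...while an anodyne query sails through.
print(guardrail_check("Summarise our supplier engagement policy"))  # True
```

Real guardrails are more sophisticated than a keyword set, but the underlying issue is the same: the filter sees the word, not the analyst's intent.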

Stakeholder priorities:

  • Legal / Compliance: “Lock it down – we can't have AI saying something wrong”
  • IT Security: “Keep data inside our perimeter, log everything”
  • Sustainability Team: “I need AI to help me research negative externalities”

Beyond White-Label: Private AI Models

Some organisations go further than white-labeling a foundation model. They build or configure AI systems trained on their own data – creating something truly internal.

You may have heard of:

  • Enterprise consultancy AI tools – trained on firm methodologies and client patterns
  • Law firm document systems – built on precedent libraries and case analysis
  • Healthcare / therapy AI – requiring strict confidentiality and regulatory compliance
  • Financial services models – trained on proprietary trading strategies or risk assessments
  • Domain-expert AI – built by specialists who've accumulated decades of knowledge in a niche field (e.g. FieldLark in regenerative agriculture, drawing on 20+ years of research, reports, and practitioner experience)

Why go private?

Confidentiality

Client data, therapy notes, legal privilege, trading strategies. Some information should never leave the organisation – not even to a trusted cloud provider.

Regulatory compliance

HIPAA for healthcare, legal professional privilege, GDPR data residency. Some regulations require data to stay within controlled environments.

Domain expertise

Foundation models know general information. A private model trained on 20 years of case files understands your firm's specific patterns and precedents.

Competitive advantage

Your methodologies, frameworks, and accumulated knowledge become embedded in a tool only your organisation can use.

The spectrum of “private”

“Private AI” means different things. Here's what the options actually involve:

1. White-labeled foundation model

What we described earlier. Claude or GPT-4 running on enterprise infrastructure, branded with your logo. The model itself is unchanged – it's just accessed privately.

Investment: Low | Control: Medium

2. RAG (Retrieval-Augmented Generation)

A foundation model that can “look up” your private documents. When you ask a question, it searches your knowledge base first, then uses that context to answer. Your data stays internal; the model just references it.

Investment: Medium | Control: High
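The retrieve-then-answer flow can be sketched in a few lines. This is a deliberately toy version: production RAG uses vector embeddings and a vector database, whereas here naive word overlap stands in for similarity, and the documents and names are invented for illustration:

```python
# Toy RAG sketch: retrieve relevant internal documents, then build a
# prompt that grounds the model's answer in them. Word overlap stands
# in for real embedding-based similarity search.
KNOWLEDGE_BASE = {
    "policy.md": "Our engagement policy covers board votes and ESG dialogue.",
    "risk.md": "Climate risk is assessed under TCFD-aligned scenarios.",
    "travel.md": "Business travel must be booked through the internal portal.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda item: len(q & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved internal context."""
    context = "\n".join(retrieve(query))
    return f"Answer using this internal context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How is climate risk assessed"))
```

The key property is visible even in the toy: the foundation model never ingests your knowledge base during training – it only sees the handful of passages retrieved for each question.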

3. Fine-tuned model

Taking a foundation model and training it further on your specific data. The model itself changes – it learns your terminology, patterns, and style. Large consultancies often use this approach.

Investment: High | Control: High

4. Fully private model

An AI model trained from scratch on your data, running entirely on your infrastructure. No external dependencies. Extremely rare – requires significant ML engineering capability and compute resources. Only feasible for the largest organisations.

Investment: Very high | Control: Complete

Reality check: Most “private AI” you hear about is option 2 or 3 – RAG systems or fine-tuned models. Fully private models built from scratch (option 4) remain rare outside of tech giants and well-funded AI labs.

When data can't leave your control

Some use cases require that data never touches external servers – or only touches infrastructure with specific privacy guarantees. Two approaches are emerging:

Local LLMs

Open-source models like Llama, Mistral, or Phi running on your own hardware. Data never leaves your machine or server.

Advantages:

  • Complete data sovereignty – nothing leaves your infrastructure
  • No per-query costs after initial setup
  • Full control over model and configuration

Trade-offs:

  • Requires technical setup and maintenance
  • Hardware investment (GPU requirements)
  • Smaller models = less capable than frontier models
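In practice, "data never leaves your machine" often means talking to a model server on localhost. The sketch below assumes a local runner such as Ollama listening on its default port, and follows the shape of Ollama's /api/generate endpoint – check your runner's documentation, as endpoints and payloads vary:

```python
# Sketch of querying a locally hosted open-source model. Assumes a
# local runner (e.g. Ollama) on its default port; nothing here touches
# external infrastructure. Model name is illustrative.
import json
import urllib.request

def build_local_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a request to a local model server; the data stays on-box."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_local_request("Summarise our supplier risk policy.")
# With a local server running, the call would be:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The trade-off named above shows up immediately: this only works after you have installed the runner, downloaded model weights, and provisioned hardware capable of serving them.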

Privacy-First Cloud AI

Services like Maple AI offer encrypted, privacy-preserving access to AI models: your data isn't used for training, and queries are encrypted.

Advantages:

  • Access to capable models without data exposure
  • No hardware or maintenance burden
  • Easier setup than local deployment

Trade-offs:

  • Ongoing subscription costs
  • Trust in provider's privacy guarantees
  • Still involves external infrastructure

When to consider these options: Client-sensitive consulting work, legal matter analysis, healthcare data, proprietary research, or any context where data governance policies prohibit external AI services. The right choice depends on technical capability, budget, and specific compliance requirements.

The SME & Solo Reality

Not every organisation has an IT department deploying AI. For smaller organisations, the dynamic is completely different.

The Solo/SME Difference

Enterprise (Tier 3):

  • IT decides what tools you get
  • Guardrails set by compliance
  • White-labeled interface
  • No model selection
  • Constraint: access

Solo/SME (often Tier 4):

  • You decide what tools to use
  • You configure your own setup
  • Direct access to models
  • Full model selection
  • Constraint: time

When you ARE the IT department

For solo consultants, freelancers, and small firms, there's no IT team making decisions. The founder or team leads choose the tools, pay for the subscriptions, and figure out how to use them.

The Advantage

Speed and flexibility. You can try Claude today, switch to Gemini tomorrow, add Perplexity when you need it. No procurement process, no approval committees.

The Challenge

Time, not access. Knowing which tools to invest in learning. Avoiding subscription sprawl. Building workflows without dedicated support.

The Opportunity

You can operate at Tier 4 (power user) by necessity. Multi-model workflows, choosing tools by task. Competitive advantage through AI fluency.

Related: See Right-Sized AI Stack for specific tool recommendations by organisation type – from solo operators to enterprise.

Terms Worth Knowing

Hyperscalers

Microsoft Azure, Amazon AWS, Google Cloud. The massive cloud providers that host AI infrastructure. When your firm uses “enterprise AI”, it likely runs on one of these platforms.

Guardrails

Safety filters between users and AI models. They block content based on rules – PII, harmful topics, regulated advice. AWS Bedrock Guardrails is one specific implementation.

White-label AI

AI tools branded with company identity. You see “Insights AI”, not “Claude Sonnet”. The underlying model is hidden from end users.

Model selection

Choosing which AI model to use for a task. Opus for complex reasoning, Sonnet for speed, Perplexity for live research. Tier 3 users don't have this choice.
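Model selection is ultimately a routing decision, and power users apply it by habit. A minimal sketch of the idea – the task-to-model mapping below is illustrative, since model names and relative strengths shift with each vendor release:

```python
# Illustrative task-based model routing: the choice Tier 3 users
# don't get to make. The mapping is an example, not a recommendation.
ROUTING = {
    "complex_reasoning": "claude-opus",    # deep analysis, long reports
    "quick_drafting": "claude-sonnet",     # speed over depth
    "live_research": "perplexity",         # needs current web data
}

def pick_model(task_type: str) -> str:
    """Route a task to a model, falling back to a fast general model."""
    return ROUTING.get(task_type, "claude-sonnet")

print(pick_model("live_research"))  # perplexity
print(pick_model("email_reply"))    # claude-sonnet (fallback)
```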

RAG

Retrieval-Augmented Generation. A system where AI can “look up” your documents before answering. The model stays general-purpose; your data stays private; answers draw on both.

Fine-tuning

Training an existing AI model further on your specific data. The model itself learns your terminology, patterns, and style – becoming specialised for your use case.

Data sovereignty

The principle that data is subject to the laws and governance of the location where it's stored or processed. In AI context: ensuring your data stays within your control and doesn't flow to external providers or jurisdictions.

Stress testing

From finance: testing how portfolios or systems perform under adverse scenarios. In AI context: testing model outputs under edge cases, challenging inputs, and adversarial conditions – critical for “investment grade” assurance.

Engagement (investor)

Not general client engagement. In investment: active dialogue between shareholders and companies on ESG issues, governance, or strategy. A core stewardship practice for asset managers seeking to influence corporate behaviour. AI example: investors engaging with portfolio companies on responsible AI governance, algorithmic bias, or workforce transition planning.

Stewardship

The broader framework for responsible ownership. Asset managers exercising stewardship vote on company decisions, engage on ESG issues, and hold boards accountable. Engagement is one tool within stewardship practice.

Foundation model

The large, general-purpose AI models that everything else builds on – Claude, GPT-4, Gemini, Llama. Trained on vast datasets to handle diverse tasks. RAG, fine-tuning, and white-labeling all start with a foundation model as the base.

Context window

How much text an AI model can “see” at once – measured in tokens (a token is roughly three-quarters of an English word). Larger context windows allow longer documents, more conversation history, or bigger codebases. A key differentiator between models.
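A back-of-envelope check makes the concept concrete. The sketch below uses the common four-characters-per-token rule of thumb for English text – an approximation, since real tokenizers vary by model – and the 200,000-token window is just an example figure:

```python
# Rough sketch of context-window arithmetic. The chars/4 heuristic is
# an approximation for English text; real tokenizers vary by model.
def estimate_tokens(text: str) -> int:
    """Rough token count: about four characters per token."""
    return max(1, len(text) // 4)

def fits_context(text: str, window_tokens: int = 200_000) -> bool:
    """Check whether a document roughly fits a model's context window."""
    return estimate_tokens(text) <= window_tokens

report = "x" * 1_000_000          # stand-in for a ~1 MB document
print(estimate_tokens(report))    # 250000
print(fits_context(report))       # False: exceeds the example 200k window
```

The point for practitioners: whether a full ESG report fits in one prompt, or must be chunked and summarised in stages, depends on which model you are allowed to use.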

What This Means for You

The barrier has shifted. A year ago, cost and access were the main obstacles to AI adoption. In 2026, the cost of a given level of capability has fallen more than 100-fold. The real barriers are now governance design, trust calibration, and organisational readiness – not whether you can afford the tools.

If you're at Tier 1-2

You're experiencing AI through a narrow window. The capabilities you've seen don't represent what's possible. The cost of AI tools has collapsed – the barrier keeping you here is likely governance and policy, not budget. Consider what tasks would benefit from more capable AI, and whether your organisation's restrictions still reflect the current risk landscape.

If you're at Tier 3

You have AI access, but with significant constraints. When queries get blocked or outputs disappoint, understand it may be configuration, not capability. The model behind your tool might be excellent – but guardrails and settings shape what you experience.

If you're at Tier 4+

You see AI as an ecosystem. You likely know more than most of your peers about what's possible. The gap between your experience and theirs is larger than you might realise – they're generalising from a much more limited view.

Where Pandion operates

We work at Tier 4 daily – using multiple AI tools, choosing models by task, building multi-step workflows for sustainability and ESG work. This gives us perspective on what's genuinely possible versus what most organisations currently experience.

When we advise clients, we can translate between the AI capabilities that exist and the constrained access many teams have. We help organisations understand what to ask IT for, what's worth building internally, and where external expertise adds value.