AI CAPABILITY • FOUNDATION

Right-Sized AI Stack

What makes sense for your organisation type

In 30 Seconds

A solo consultant and a global bank have completely different AI needs. The solo consultant chooses their own tools and is their own IT department. The bank has committees, hyperscalers, guardrails, and governance frameworks.

This page maps AI stacks to organisation types – helping you see what's typical for organisations like yours, and what “right-sized” actually looks like.

How this differs from Implementation: The Implementation & Integration page covers how to deploy different technologies. This page covers what combination makes sense for your organisation type.

The Key Insight

Organisation size determines who makes AI decisions – and that shapes everything else.

Solo/Micro

Founder decides. Speed matters. You are the IT department.

SME

Leadership decides with some IT input. Balance agility and governance.

Enterprise

IT/Committees decide. Governance first. Users get what's deployed.

Model portfolios, not model dependencies. The leading AI model now changes every few weeks. Single-model strategies are increasingly risky. The right approach: match models to use cases – one provider for research, another for analysis, another for creative work. At every organisation size, this is replacing “pick the best model.”
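The portfolio idea above can be sketched as a simple use-case-to-model map. This is an illustrative sketch only: the tool and model names are placeholders, not recommendations, and the routing logic is a minimal assumption about how a solo practitioner or team might encode "match models to use cases".

```python
# Hypothetical use-case-to-model portfolio map.
# Tool/model names are illustrative placeholders, not endorsements.
MODEL_PORTFOLIO = {
    "research": "perplexity-pro",   # web-grounded search
    "analysis": "claude-sonnet",    # long-document reasoning
    "creative": "gpt-4o",           # drafting and ideation
    "synthesis": "notebooklm",      # multi-document summaries
}

def pick_model(use_case: str) -> str:
    """Return the portfolio model for a use case, or raise if unmapped."""
    try:
        return MODEL_PORTFOLIO[use_case]
    except KeyError:
        raise ValueError(f"No model assigned for use case: {use_case!r}")

print(pick_model("research"))
```

The point of the explicit map is that swapping a provider for one use case (say, when the leading research model changes) is a one-line edit, with no dependency on any single vendor.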

AI Stack by Organisation Type

Find your type and see what “right-sized” looks like

Solo / Freelance

1 person

Examples: Independent consultants, freelancers, solo advisors

Who decides:

You. No approval needed. Speed is your advantage.

Primary constraint:

Time, not access. Knowing which tools to invest in learning.

Typical stack:

  • Claude Pro/Teams
  • Perplexity Pro
  • NotebookLM
  • ChatGPT Plus
  • Gemini Advanced

Power user additions:

  • Claude Code (CLI)
  • API access
  • MCP servers
  • Firecrawl

What good looks like: A model portfolio – different tools for different tasks. Perplexity for research, Claude for analysis, NotebookLM for document synthesis. You choose models by use case, not by default. No single-model dependency.

Micro

2-10 people

Examples: Small consultancies, boutique agencies, specialist firms

Who decides:

Founder/CEO, usually with team input. Fast decisions, light governance.

Primary constraint:

Coordination. Ensuring team uses tools consistently. Subscription sprawl.

Typical stack:

  • Claude Teams
  • ChatGPT Team
  • Notion AI
  • Shared prompt libraries

Common additions:

  • Perplexity Team
  • Shared workspaces
  • Light usage policies

What good looks like: Shared team subscriptions with usage guidelines. Clear decisions on which tools for which purposes. Someone (often founder) stays current on AI developments and brings learnings to the team.

SME

10-50 people

Examples: Growing consultancies, regional firms, specialist agencies

Who decides:

Leadership team, often with IT/Ops input. Balancing agility with emerging governance needs.

Primary constraint:

Formalising without over-bureaucratising. Data handling policies become real.

Typical stack:

  • M365 + Copilot
  • Claude/ChatGPT Enterprise
  • Approved tool list
  • Usage policies

Governance layer:

  • Data classification
  • Client data policies
  • Training requirements

What good looks like: A model portfolio, not a single-vendor lock-in. Clear approved tools list matched to use cases. Data handling guidelines that people actually follow. Regular review: are we using the right model for each type of work?

Mid-Market

50-500 people

Examples: Regional asset managers, mid-size consultancies, established firms

Who decides:

IT department with business input. Procurement processes. Security reviews.

Primary constraint:

Speed vs governance tension. IT capacity. Integration with existing systems.

Typical stack:

  • M365 Copilot (org-wide)
  • Azure OpenAI / AWS Bedrock
  • Centrally managed

Enterprise infrastructure:

  • Hyperscaler platforms
  • Basic guardrails
  • Usage logging
  • SSO integration

What good looks like: IT-managed platforms with multiple models available by use case. Guardrails configured appropriately (not just “high” by default). Clear escalation paths when tools block legitimate work. Power users as champions, helping teams match the right model to the task.

Enterprise

500+ people

Examples: Global banks, asset managers, multinational corporations

Who decides:

IT governance committees. Procurement. Legal. Risk. Multi-stakeholder approval.

Primary constraint:

Governance complexity. Regulatory requirements. Change management at scale.

Typical stack:

  • AWS Bedrock
  • Azure OpenAI Service
  • Google Vertex AI
  • White-labeled interfaces

Enterprise governance:

  • Hyperscaler infrastructure
  • Configurable guardrails
  • Full audit trails
  • Data residency controls
  • Model selection (limited)

What good looks like: Governance that enables rather than just restricts. Guardrails tuned to actual use cases (ESG teams can research controversies). Model portfolio strategy: multiple models available, matched to department needs. Internal AI champions. Regular review of what's blocked and why.

Enterprise Infrastructure: Hyperscalers & Guardrails

Understanding what mid-market and enterprise organisations actually deploy

What are Hyperscalers?

The three dominant cloud providers that host enterprise AI:

  • AWS Bedrock – Amazon's managed AI service (Claude, Llama, Titan)
  • Azure OpenAI – Microsoft's managed AI service (GPT-4, etc.)
  • Google Vertex AI – Google's managed AI service (Gemini, PaLM)

“Hyperscaler” refers to their ability to scale infrastructure to levels no other provider can match. Enterprise AI typically runs on one of these platforms.
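What "running on a hyperscaler" means in practice: instead of calling a model vendor directly, the organisation sends requests through the cloud provider's managed service. A minimal sketch of the request shape for AWS Bedrock's Converse API follows; the model ID is illustrative, and actually executing the commented-out call requires AWS credentials plus model access enabled in the account.

```python
# Sketch: assembling a request in the shape of AWS Bedrock's Converse API.
# The model ID below is illustrative; no network call is made here.
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Assemble a Converse-style request payload for a hyperscaler-hosted model."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

req = build_converse_request(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "Summarise this fund's controversy exposure.",
)

# With credentials and model access in place, the call would look like:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="eu-west-1")
# response = client.converse(**req)
```

The practical consequence for users: authentication, logging, guardrails, and data residency are all handled at this platform layer, which is why the same underlying model feels so different inside an enterprise deployment.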

What are Guardrails?

Safety filters that sit between users and AI models:

  • Block harmful/toxic content
  • Filter personally identifiable information (PII)
  • Prevent regulated topics (financial advice, medical)
  • Block competitor mentions or sensitive topics

AWS Bedrock Guardrails is one specific implementation. Compliance teams typically set these to “high” by default.

The ESG Guardrails Problem

Guardrails designed for general enterprise use often block legitimate ESG queries. Research into “controversies”, “violations”, or “negative impacts” gets flagged as potentially harmful content. The sustainability team needs exactly what the guardrail is designed to prevent.

The fix: Context-aware guardrail configuration. Not “high for everyone” but appropriate settings for each team's actual needs.
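One way to picture context-aware configuration is per-team filter tiers. The sketch below uses the shape of Bedrock Guardrails' content filters (the filter types and strength values are real Bedrock options), but the team-to-tier mapping and function names are hypothetical illustrations of the "appropriate settings per team" idea, not a production policy.

```python
# Sketch of context-aware guardrail tiers, shaped like Bedrock Guardrails'
# content filter config. Filter types and strengths (NONE/LOW/MEDIUM/HIGH)
# are real Bedrock values; the team mapping is illustrative.
DEFAULT_STRENGTH = "HIGH"

TEAM_OVERRIDES = {
    # ESG research legitimately discusses violations, violence, misconduct.
    "esg-research": {"VIOLENCE": "LOW", "MISCONDUCT": "LOW"},
}

FILTER_TYPES = ["SEXUAL", "VIOLENCE", "HATE", "INSULTS", "MISCONDUCT", "PROMPT_ATTACK"]

def guardrail_filters(team: str) -> list[dict]:
    """Build a per-team filters list, defaulting every filter to HIGH."""
    overrides = TEAM_OVERRIDES.get(team, {})
    return [
        {
            "type": f,
            "inputStrength": overrides.get(f, DEFAULT_STRENGTH),
            "outputStrength": overrides.get(f, DEFAULT_STRENGTH),
        }
        for f in FILTER_TYPES
    ]
```

The design point: the default stays strict, and each relaxation is an explicit, reviewable entry tied to a team's documented need, rather than a blanket "high for everyone".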

What Users Actually Experience

The same AI model feels completely different depending on how it's deployed.

Org Type   | User sees                                | Model awareness                 | Control level
Solo/Micro | claude.ai, ChatGPT, Perplexity           | High – choosing models by task  | Full control
SME        | Mix of direct tools + team subscriptions | Medium – knows what's approved  | Guided choice
Mid-Market | “Research Assistant” portal              | Low – knows it's “AI”           | Limited
Enterprise | White-labeled internal tool              | None – doesn't know which model | Use what's given

Related: See From Chat to Orchestration for how these deployment patterns create different user tiers.

Common Mistakes by Org Type

Solo/Micro: Tool hoarding

Subscribing to everything without learning any tool deeply. Better: master 2-3 tools that cover your actual workflows.

SME: No clear ownership

Everyone uses different tools differently. No shared knowledge. Better: designate an AI champion who coordinates and shares learnings.

Mid-Market: Over-restricting

Guardrails so tight the tools aren't useful. Shadow AI proliferates. Better: right-sized governance that enables legitimate use.

Enterprise: One-size-fits-all deployment

Same settings for every team. ESG blocked from researching controversies. Better: context-aware configuration. Different teams have different needs.

Where Pandion fits

We operate at the solo/micro level ourselves – direct subscriptions, multi-model workflows, choosing tools by task. This gives us practical, daily experience with what AI can actually do.

When we work with larger organisations, we bridge the gap: translating between what's possible (which we experience daily) and what their deployment allows. We help configure guardrails appropriately, identify which tools to request from IT, and show what “good” looks like at each scale.