
Sustainable AI

AI has its own environmental footprint

Energy consumption, water usage, hardware lifecycle, and Scope 3 implications of compute at scale

In 30 Seconds

AI is increasingly positioned as a solution to environmental challenges - optimising energy grids, monitoring deforestation, predicting climate risks. But AI itself has a significant and growing environmental footprint that most organisations deploying it are not accounting for.

If you use AI extensively, your Scope 3 emissions now include the compute footprint of your AI providers. Are they disclosing that clearly? Are you?

The question: How do you use AI responsibly while acknowledging its environmental costs? This page explores what those costs are, how to think about them, and what practical steps organisations can take.

Looking for the other side? This page is about AI's footprint. For how AI can help with sustainability challenges, see AI in Sustainability →

The Environmental Footprint

AI's environmental impact spans four interconnected areas. Understanding each is essential for making informed decisions about AI deployment.

Energy Consumption

Data centres powering AI require enormous amounts of electricity - for compute, cooling, and infrastructure.

  • Training a large model can emit as much carbon as five cars do over their lifetimes
  • Inference (using trained models) accounts for ~90% of AI energy use
  • Data centre electricity demand is projected to double by 2026
  • AI workloads are growing faster than efficiency gains

Water Usage

Data centres use water for cooling systems - often in water-stressed regions.

  • A single ChatGPT conversation can use 500ml of water
  • Google's data centres used 5.6 billion gallons of water in 2022
  • Microsoft's water consumption increased 34% year-over-year
  • Many data centres are located in water-stressed areas

Hardware Lifecycle

GPUs and specialised AI chips have significant embodied carbon and create e-waste.

  • Manufacturing a GPU produces ~150-200 kg CO2
  • AI hardware is often replaced every 2-3 years for performance gains
  • Supply chain emissions (rare earth mining, chip fabrication)
  • End-of-life disposal and recycling challenges

Scope 3 Implications

For organisations using AI, these impacts land in your Scope 3 (supply chain) emissions.

  • Cloud AI services = Scope 3 Category 1 (purchased goods/services)
  • Most AI providers don't provide granular emissions data
  • Difficult to attribute impact to specific workloads
  • Regulatory pressure increasing (CSRD, SEC climate rules)
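Until providers publish granular data, one common first approximation is a spend-based estimate: multiply your cloud AI spend by an emission factor. A minimal sketch, assuming a purely illustrative factor (real reporting should use an EEIO-derived factor or provider-supplied data):

```python
# Spend-based Scope 3 Category 1 estimate for cloud AI services.
# The factor below is purely illustrative, not a real coefficient.

ILLUSTRATIVE_KGCO2E_PER_USD = 0.05  # hypothetical spend-based emission factor

def spend_based_estimate(ai_spend_usd: float,
                         factor: float = ILLUSTRATIVE_KGCO2E_PER_USD) -> float:
    """Rough Scope 3 estimate: spend multiplied by an emission factor."""
    return ai_spend_usd * factor

# Example: $250,000 of annual cloud AI spend
print(spend_based_estimate(250_000))  # kgCO2e under the illustrative factor
```

Spend-based estimates are crude - they cannot distinguish an efficient workload from a wasteful one - but they give a defensible starting figure while better data is unavailable.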

The Scale Challenge

  • 1,000x - Energy increase from GPT-2 to GPT-4 scale models
  • ~2% - Share of global electricity consumption attributed to data centres (and growing)
  • 48% - Google's emissions increase from 2019 to 2023 (much of it attributed to AI)

The uncomfortable truth: Big Tech companies have made net-zero commitments while simultaneously scaling AI infrastructure that is making those commitments harder to achieve. Google, Microsoft, and Amazon have all seen emissions increase despite efficiency improvements.

This creates a disclosure gap: if you rely on AI services from these providers, how do you account for your share of their growing AI-driven emissions?

The Accounting Question: Scope 2 vs Scope 3

Where do AI emissions belong in your carbon accounting? This is an active debate with implications for how organisations report and manage their AI footprint.

The Official Answer: Scope 3

Under GHG Protocol, AI services are purchased services - making them Scope 3 Category 1 (Purchased Goods & Services).

  • You're buying a service, not electricity directly
  • The AI provider reports the data centre electricity as their Scope 2
  • Your use of their service sits in your Scope 3
  • Same logic as any outsourced service

The Practical Reality: Feels Like Scope 2

Many practitioners argue AI functionally behaves more like Scope 2 than traditional purchased services.

  • Dependency pattern: AI is becoming as essential as electricity
  • Direct scaling: More AI use = more emissions, in near real-time
  • Operational link: Your Tuesday workload directly drives compute
  • Limited substitution: Can't easily switch or reduce without impact

Why This Matters

The Scope framework was designed when “purchased services” meant cleaning contracts and consultants - not infrastructure that scales with every API call.

Classic Scope 3 supply chain emissions feel distant - your supplier's factory doesn't scale with your Tuesday afternoon workload. AI does.

The practical outcome: Scope 3 Category 1 is where emissions go to be invisible. Most companies either don't report it, or report it so aggregated that AI impact can't be distinguished from office supplies.

What Would Help

  • Dedicated sub-category for compute/AI services in Scope 3 reporting
  • Provider disclosure of granular per-customer emissions data
  • Materiality threshold - if AI spend exceeds X%, require separate line item
  • Real-time tracking tools that link AI usage to emissions estimates

Why AI, Why Now?

Banking systems, payment networks, cloud computing, streaming video, gaming - all have significant environmental footprints. Why is AI getting scrutiny that these industries largely avoided?

Growth Rate

AI data centre capacity is growing faster than anything before. Not just big, but accelerating big. Banking infrastructure was built over decades; AI hyperscale is being built in years.

The Irony Narrative

AI is being sold as a climate solution - grid optimisation, monitoring, prediction. Its own footprint creates cognitive dissonance that makes headlines. Banking never claimed to save the planet.

Concentrated Visibility

A handful of companies (OpenAI/Microsoft, Google, Anthropic, Amazon) building massive dedicated AI facilities. Easier to point at than distributed banking infrastructure built over decades.

Quotable Numbers

“500ml of water per ChatGPT conversation” is a headline. No one ever said “a Visa transaction uses X watts” - existing infrastructure is just infrastructure.

Net-Zero Collision

Big Tech made public climate commitments, and AI is visibly breaking them. Google's emissions up 48% since 2019. Microsoft's water consumption up 34%. That's a story - and it landed at exactly the moment when ESG scrutiny is highest.

The Honest Assessment

The scrutiny is partly legitimate (AI's footprint is real and growing fast), partly inconsistent (other compute-heavy industries got normalised without this attention), and partly timing (AI arrived post-Paris Agreement, mid-ESG boom, with net-zero pledges on the line).

The global financial system, streaming video, gaming, crypto - all have huge footprints that got normalised. The accounting frameworks didn't suddenly improve; AI just became visible at the wrong moment.

The Net Impact Debate

Is using AI for sustainability hypocritical? You're creating emissions to reduce emissions. This is a real debate with valid points on both sides.

The Critical View

  • Using AI for sustainability is hypocritical - creating problems to solve problems
  • It's greenwashing with extra steps
  • Big Tech uses “AI for good” as cover for scaling infrastructure
  • Efficiency gains get eaten by increased usage (Jevons Paradox)
  • We should question whether AI is necessary at all

The Proponent View

  • All tools have footprints - the question is net impact
  • AI enables things impossible manually (monitoring at scale, real-time optimisation)
  • The alternative isn't zero emissions - it's different emissions
  • Social benefits (healthcare, education, accessibility) also count
  • Honest accounting can justify responsible use

The “Compared to What” Question

This is often missing from the debate. Manual alternatives also have costs:

  • Flying auditors to verify claims vs satellite monitoring
  • Armies of analysts processing disclosures vs NLP at scale
  • The status quo (no monitoring, no verification, no synthesis)

Our position: The debate is legitimate but often poorly framed. The question isn't “does AI have a footprint?” (yes) - it's “is the outcome worth the cost, and are we being honest about both sides?” Honest accounting, not denial or guilt, is the path forward.

AI and Social Sustainability

Sustainable AI isn't only about environmental footprint. The social dimension - human rights, just transition, equity, labour - is equally important and often overlooked in the carbon-focused debate.

Workforce & Just Transition

AI is transforming work - creating new roles while displacing others. A just transition means managing this change equitably, with retraining, support, and inclusive benefit-sharing. Who gains from AI productivity, and who bears the costs?

Bias & Human Rights

AI systems can encode and amplify existing biases - in hiring, lending, healthcare, criminal justice. Responsible AI requires active work on fairness, transparency, and accountability. Bias isn't just a technical problem; it's a human rights issue.

Access & Equity

AI capabilities are not evenly distributed. Large organisations and wealthy nations have more access to AI tools and their benefits. This risks widening existing inequalities. Who gets to use AI, and for what purposes?

Community & Infrastructure

Data centres don't exist in a vacuum. They're built in communities, use local water and power, and affect local environments. Responsible siting, community engagement, and benefit-sharing matter.

The Positive Potential

AI also has significant potential for social good - and this shouldn't be dismissed:

  • Healthcare: Diagnostics, drug discovery, personalised medicine, accessibility
  • Education: Personalised learning, translation, access to knowledge
  • Accessibility: Assistive technologies, communication aids, navigation
  • Crisis response: Disaster prediction, resource allocation, coordination

The connection: Social sustainability isn't separate from environmental sustainability - they're interlinked. You cannot achieve one without the other. The transition must be just, or it won't happen at all.

Explore Social Sustainability →

The S.A.F.E. Framework

Research in sustainable finance has proposed the S.A.F.E. framework for evaluating AI systems holistically - considering environmental, accuracy, fairness, and explainability dimensions together.

S - Sustainable

What is the environmental footprint of this AI system? Energy consumption, water usage, hardware lifecycle, and supply chain impacts.

A - Accurate

How reliable are the AI outputs? Prediction quality, validation processes, and confidence calibration.

F - Fair

Does the AI system treat different groups equitably? Bias detection, mitigation strategies, and outcome monitoring.

E - Explainable

Can we understand how the AI reaches its conclusions? Interpretability of decisions, audit trails, and human oversight.

Why this matters: Sustainability is one dimension of responsible AI, not a separate concern. An AI system that is highly accurate but environmentally devastating is not truly “good” AI. These dimensions need to be considered together.

What Can Organisations Do

Practical steps for managing the environmental impact of AI adoption.

1. Measure What You Can

  • Track AI usage: API calls, compute hours, model sizes used
  • Request provider data: Ask cloud providers for emissions estimates
  • Use estimation tools: ML CO2 Impact, CodeCarbon, and similar calculators
  • Include in Scope 3: Document AI as part of purchased services emissions
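The track-and-estimate steps above can be sketched roughly as follows. Every constant here (energy per 1,000 tokens, PUE, grid carbon intensity) is an illustrative assumption, not a measured value - real figures vary widely by model, hardware, and region:

```python
# Rough per-workload estimate: tokens -> energy -> emissions.
# All three constants are illustrative assumptions only.

KWH_PER_1K_TOKENS = 0.0003   # hypothetical inference energy per 1,000 tokens
PUE = 1.2                    # assumed data-centre power usage effectiveness
GRID_KGCO2E_PER_KWH = 0.4    # assumed grid carbon intensity

def estimate_inference_emissions(tokens: int) -> float:
    """Estimated kgCO2e for an inference workload of `tokens` tokens."""
    energy_kwh = (tokens / 1_000) * KWH_PER_1K_TOKENS * PUE
    return energy_kwh * GRID_KGCO2E_PER_KWH

# Example: a month of logged API usage
monthly_tokens = 50_000_000
print(f"{estimate_inference_emissions(monthly_tokens):.2f} kgCO2e")
```

Even with uncertain factors, a model like this makes relative comparisons possible - which workloads, teams, or use cases drive the most estimated emissions.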

2. Optimise Usage

  • Right-size models: Use smaller models when they're sufficient
  • Efficient prompting: Reduce token usage through better prompt design
  • Caching: Store and reuse common queries rather than regenerating
  • Batch processing: Run intensive workloads during low-carbon grid periods
  • Monitor waste: Identify and eliminate unnecessary AI calls

3. Choose Thoughtfully

  • Provider selection: Consider sustainability commitments when choosing AI providers
  • Region selection: Deploy in regions with cleaner energy grids where possible
  • Necessity test: Not everything needs AI - use it where it adds genuine value
  • Open models: Self-hosted models can offer more control (but also more responsibility)

4. Advocate for Transparency

  • Demand disclosure: Push providers to share per-customer emissions data
  • Support standards: Back initiatives for AI carbon accounting standards
  • Report honestly: Include AI impacts in your own sustainability reporting
  • Engage stewardship: If you're an investor, engage with Big Tech on AI emissions

For Investors: The Engagement Question

If you're an asset manager or investor engaging with companies on sustainability, AI creates both a new engagement topic and a new portfolio risk to understand.

Questions for Investee Companies

  • How are you accounting for AI-related emissions?
  • What proportion of your cloud spend goes to AI workloads?
  • How do you measure efficiency vs environmental cost of AI?
  • What due diligence do you do on AI provider sustainability?

Questions for Big Tech (AI Providers)

  • Why have emissions increased despite efficiency gains?
  • Can you provide per-customer AI emissions allocation?
  • How does AI growth affect your net-zero pathway?
  • What water stress assessments do you do for data centres?

The stewardship opportunity: Responsible AI governance is emerging as an engagement topic - but most stewardship teams are asking about AI ethics and bias, not AI environmental impact. Both matter.

How We Think About This

We use AI extensively in our work - it's core to how we operate. We're not purists who think AI should be avoided. But we think the environmental footprint should be acknowledged, measured where possible, and factored into decisions.

Efficiency First

We design AI workflows for efficiency - right-sized models, good prompt engineering, caching where sensible. Better for the environment and for our costs.

Necessity Test

Not every task needs AI. We use it where it genuinely adds value - synthesis, analysis, automation - not as a substitute for thinking.

Honest Accounting

We acknowledge that AI-intensive work has a carbon cost. We don't pretend otherwise. The tools don't exist yet for precise measurement, but the directional impact is clear.

Our view: AI's benefits - including its applications for sustainability - can outweigh its environmental costs. But that calculation requires honest accounting on both sides. The worst outcome is AI being positioned as an environmental solution while its own footprint is ignored.

Resources & Further Reading

Measurement Tools

  • CodeCarbon - Track emissions from ML experiments
  • ML CO2 Impact - Calculator for training runs
  • Cloud Carbon Footprint - Multi-cloud emissions tracking

Research & Reports

  • IEA reports on data centre energy demand
  • Big Tech sustainability reports (Google, Microsoft, Amazon)
  • Academic research on AI carbon footprints

Building AI Capability Responsibly

Sustainable AI is one dimension of responsible AI deployment. We help organisations build AI capabilities that consider the full picture - effectiveness, governance, and environmental impact.