AI CAPABILITY • PRACTICE
Context Engineering
The right information, at the right time
In 30 Seconds
Most AI failures aren't model problems. They're context problems. Context engineering is the discipline of designing what AI knows, when it knows it, and how that knowledge is structured.
This is where strategy becomes capability. Without the right context architecture, even the best AI strategy remains a document on a shelf.
Our core expertise: This is Practice tier work—designing and implementing the context systems that make AI useful in your specific environment. Not one-off prompts, but persistent capability.
What We've Learned in Practice
Through implementing context systems across sustainability consulting, trading operations, and client delivery, we've validated several patterns:
Compression Compounds
Each context reduction makes the next easier. Start aggressive, refine based on what breaks.
Memory Health Is Maintenance
Context architecture isn't a one-time setup. Weekly maintenance protocols prevent gradual degradation.
Navigation Beats Structure
Folder hierarchies organise files. Topic-based navigation paths tell AI what to load and when. Both are needed.
Session Continuity Is Hard
The handoff between work sessions is the least discussed problem in AI—and often the most impactful to solve.
Results from our implementations: 3× faster session startup, 40% reduction in token costs, near-zero context drift.
DEEP DIVE
Memory Health Protocol: Why AI Forgets and How to Fix It
Comprehensive guide to AI memory architecture – understanding the forgetting problem, the four memory strategies, and patterns that make knowledge compound.
DEEP DIVE
Passive Context Architecture
Vercel's research found passive context achieved 100% accuracy vs 53% for on-demand retrieval. Why this matters and how to implement tiered context systems.
The Shift from Prompts to Systems
Prompt engineering asks: “How do I phrase this question?”
Context engineering asks: “What does AI need to know to answer well?”
Anthropic defines it as “designing dynamic systems that provide AI models with the right information at the right time.” It's the evolution from crafting individual queries to architecting information environments.
| | Prompt Engineering | Context Engineering |
|---|---|---|
| Focus | The question | The knowledge |
| Scope | Single query | Entire system |
| Approach | Craft better prompts | Design information flow |
| Result | Better answers | Consistent capability |
The Context Problem
More context doesn't mean better performance.
Research shows input length alone can reduce AI accuracy by 14-85% – even when all information is relevant.
Lost in the Middle
Models favour information at the start and end of context, missing what's in between.
Context Rot
Quality degrades gradually as context accumulates and ages without maintenance.
Signal Dilution
Important information drowns in noise when everything is loaded indiscriminately.
Many teams dump everything into context. That's like answering every question by reading the encyclopaedia aloud.
The issue is not just volume; it's missing decision lineage. Context graphs help by recording approvals and exceptions so AI can retrieve the relevant rationale without loading everything.
THE INDUSTRY'S UNSOLVED PROBLEM
Context Drift
Academic research (2025-2026) identifies context drift—the gradual degradation of context quality across sessions and time—as the central unsolved challenge in AI memory systems. Most tools handle single-session context well. Multi-session, multi-day continuity remains hard. This is where operational rhythm systems (session handoffs, weekly coordination) become critical.
Four Strategies for Managing Context
Based on Anthropic's framework for effective AI systems
1. Write
Persist externally
Store information outside the context window for later retrieval. Files, databases, knowledge bases – anything that persists beyond the session.
2. Select
Load only what's relevant
Retrieve context based on the task at hand, not everything available. Dynamic retrieval, semantic search, just-in-time loading.
3. Compress
Summarise, don't accumulate
Keep context lean through intelligent summarisation. Archive old content, preserve decisions, trim the unnecessary.
4. Isolate
Separate contexts for separate concerns
Don't let different workstreams pollute each other. Multi-agent architectures, session boundaries, role-specific loading.
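As a rough illustration, the four strategies can be sketched in a few lines of Python. This is a stand-in, not a real implementation: the in-memory store, the key naming, and truncation standing in for model-generated summarisation are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ContextStore:
    """Toy illustration of the four strategies; all names are illustrative."""
    memory: dict = field(default_factory=dict)  # Write: persist outside the context window

    def write(self, key: str, content: str) -> None:
        self.memory[key] = content

    def select(self, task_keywords: set) -> dict:
        # Select: load only entries relevant to the task, not everything stored.
        return {k: v for k, v in self.memory.items()
                if task_keywords & set(k.lower().split("/"))}

    def compress(self, key: str, max_chars: int = 200) -> None:
        # Compress: keep a lean summary (naive truncation stands in here
        # for model-generated summarisation).
        if len(self.memory.get(key, "")) > max_chars:
            self.memory[key] = self.memory[key][:max_chars] + " …[summarised]"

def isolated_context(store: ContextStore, workstream: str) -> dict:
    # Isolate: each workstream sees only keys under its own prefix.
    return {k: v for k, v in store.memory.items() if k.startswith(workstream + "/")}

store = ContextStore()
store.write("trading/risk-limits", "Max position size is 5% of book. " * 20)
store.write("consulting/client-brief", "Client wants quarterly reporting.")
store.compress("trading/risk-limits", max_chars=60)
print(isolated_context(store, "trading"))
```

The point of the sketch is the separation of concerns: persistence, retrieval, summarisation, and isolation are distinct operations, not one "load everything" step.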
The Temporal Dimension
The solution to context drift
Most context engineering focuses on structure—how information is organised. We've found the rhythm matters just as much. This is how you solve context drift: not just better architecture, but better cadence.
Session Handoffs
How does context pass between work sessions? What gets carried forward, what gets compressed, what gets archived? Explicit handoff protocols prevent the “starting from scratch” problem.
Weekly Coordination
How do strategic priorities flow into daily work? How does work roll up into weekly synthesis? Coordination bridges connect strategy to execution without context overload.
Memory Maintenance
Context degrades over time. Scheduled compression, archiving cadences, and health checks keep context fresh. Without maintenance, even good architecture accumulates noise.
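The session-handoff idea above can be sketched as an explicit record written at the end of each session. This is a minimal sketch assuming a JSON file as the persistence layer; the field names and the compressed-notes summary are illustrative, not a prescribed schema.

```python
import json
from datetime import datetime

def write_handoff(session_notes: list, decisions: list,
                  path: str = "handoff.json") -> dict:
    """Sketch of an explicit session-handoff record: decisions are carried
    forward verbatim, working notes are compressed to a summary line, and
    the raw notes would be archived elsewhere. Field names are illustrative."""
    handoff = {
        "written_at": datetime.now().isoformat(timespec="seconds"),
        "carry_forward": decisions,           # survives into the next session
        "compressed": f"{len(session_notes)} notes archived",
        "next_session_should": "load carry_forward before anything else",
    }
    with open(path, "w") as f:
        json.dump(handoff, f, indent=2)
    return handoff

record = write_handoff(
    session_notes=["tried approach A", "approach A too slow", "switched to B"],
    decisions=["Use approach B for the ingestion pipeline"],
)
print(record["compressed"])  # "3 notes archived"
```

The design choice worth noting: decisions survive handoffs intact while working notes are compressed, which is what prevents both the "starting from scratch" problem and unbounded context growth.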
Why this matters: Academic research identifies context drift as the central unsolved challenge in AI memory systems. Most tools handle single sessions well. Multi-session, multi-day continuity is where operational rhythm becomes critical.
The architecture is the skeleton. The rhythm is the heartbeat.
Tiered Context Architecture
We use a budget-based approach to context layers. Each tier has a token budget and update frequency—this prevents context bloat while ensuring AI has what it needs.
| Tier | Purpose | Token Budget | Load Frequency |
|---|---|---|---|
| Tier 0 | Compressed state | ~300 tokens | Always |
| Tier 1 | Active context + navigation | ~1,000 tokens | Session start |
| Tier 2 | Domain knowledge | On-demand | When needed |
| Tier 3 | Archive | Rarely | Historical only |
Key insight: Most teams overload Tiers 0–1 and underuse Tiers 2–3. The result is context bloat, slower reasoning, and higher costs.
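A hedged sketch of enforcing those budgets in code: the tier names mirror the table above, and the 4-characters-per-token estimate is a common rough heuristic, not a real tokenizer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tier:
    name: str
    budget_tokens: Optional[int]  # None = on-demand, no fixed budget
    load_when: str

# Budgets mirror the tier table; names are illustrative.
TIERS = [
    Tier("tier0_compressed_state", 300, "always"),
    Tier("tier1_active_context", 1000, "session_start"),
    Tier("tier2_domain_knowledge", None, "on_demand"),
    Tier("tier3_archive", None, "historical_only"),
]

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def within_budget(tier: Tier, text: str) -> bool:
    """Flag context bloat before it reaches the model's context window."""
    return tier.budget_tokens is None or estimate_tokens(text) <= tier.budget_tokens

state = "project: alpha | phase: build | blockers: none"
print(within_budget(TIERS[0], state))  # True
```

Checking budgets at load time, rather than discovering bloat through degraded answers, is the practical value of making the tiers explicit.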
Context Graphs: Decision Lineage
Systems of record capture what happened. Context graphs capture why.
When AI needs to make a decision, it shouldn't just know the rule—it should know the precedents, exceptions, and reasoning that shaped it.
Approvals & Exceptions
Why was this approved? What precedent does it set? Context graphs make the reasoning retrievable.
Policy Evolution
How did we get here? What changed and why? Decision traces show the path, not just the destination.
Audit Trails
What informed this decision? Who signed off? Context graphs support governance and compliance.
We design context graphs that turn scattered decisions into searchable precedent—making institutional knowledge available to AI without loading everything.
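One way to sketch decision lineage as data: each decision node carries its rationale and links to the precedents that shaped it, so the reasoning chain is retrievable. The `DecisionNode` fields and the discounting scenario are hypothetical illustrations, not a production schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionNode:
    """One decision with its rationale; a context graph links these so AI
    can retrieve the 'why' behind a rule. Fields are illustrative."""
    decision: str
    rationale: str
    approved_by: str
    precedents: list = field(default_factory=list)  # earlier DecisionNodes

def trace_lineage(node: DecisionNode) -> list:
    # Walk back through precedents to recover the full reasoning chain.
    chain = [f"{node.decision} (approved by {node.approved_by}): {node.rationale}"]
    for precedent in node.precedents:
        chain.extend(trace_lineage(precedent))
    return chain

original = DecisionNode("Standard discount cap is 10%",
                        "Protects margin targets", "CFO")
exception = DecisionNode(
    "Allowed 15% discount on a renewal",
    "Multi-year commitment outweighs margin dilution",
    "Sales VP",
    precedents=[original],
)
for line in trace_lineage(exception):
    print(line)
```

Tracing the exception back to the original policy is exactly the "path, not just the destination" property: the system of record would show only the 15% figure, while the graph retains who approved the cap, why it exists, and why this case was allowed to differ.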
Who Benefits from Context Engineering?
Individuals
- Consistent AI results across sessions
- Build on previous work, not from scratch
- Reduce time re-explaining context
Teams
- Reduce hallucinations through better knowledge
- Enable handoffs between human and AI
- Shared context across team members
Organisations
- Multi-agent coordination without pollution
- Governance and compliance controls
- Scalable knowledge management
Our Approach Is Informed By
Anthropic's context engineering guidance
Karpathy's “RAM management” framing
Vercel's AGENTS.md evaluation research
Mei & Yao survey (1,400+ academic papers)
MemAgents architecture research
Validated through our own implementations
Context Engineering in Practice
We design and implement context systems for organisations building serious AI capability. From individual productivity to enterprise-scale agent orchestration.
Typical engagements include:
- Decision-trace capture pilots for high-risk workflows (discounting, approvals, renewals)
- Context graph design that turns exceptions into searchable precedent
- Knowledge architecture that keeps agents aligned to canonical truth