Insights & Articles

Field notes from the work. Sovereign AI, agent grounding, compliance architecture, and the things we learned the hard way.

April 13, 2026 | Agent Grounding

Save-On-End Is Broken: Why AI Sessions Should Persist Turn by Turn

Key Takeaways:
- The clean-shutdown fallacy: save-on-end persistence breaks the moment a real-world event (crash, force-quit, network drop) bypasses the shutdown hook.
- The trickle pattern writes the assistant’s most recent turn to disk before the next turn fires. Worst-case loss: one in-flight turn.
- A periodic re-summariser turns raw turn-by-turn transcripts into topical chunks (decisions, blockers, progress, […]
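The trickle pattern described above can be sketched as an append-only JSONL transcript, where each completed turn is flushed and fsynced before the next turn starts. This is an illustrative minimal sketch, not the article's implementation; the class and file layout are assumptions.

```python
import json
import os


class TrickleTranscript:
    """Persist each completed turn to disk immediately, so a crash,
    force-quit, or network drop loses at most the one in-flight turn."""

    def __init__(self, path: str):
        self.path = path

    def record_turn(self, role: str, content: str) -> None:
        # Append one turn as a JSON line, then force it onto stable
        # storage before control returns to the conversation loop.
        line = json.dumps({"role": role, "content": content})
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(line + "\n")
            f.flush()
            os.fsync(f.fileno())

    def load(self) -> list[dict]:
        # Recovery path: replay whatever made it to disk.
        if not os.path.exists(self.path):
            return []
        with open(self.path, encoding="utf-8") as f:
            return [json.loads(l) for l in f if l.strip()]
```

Because every write is appended and synced per turn, there is no shutdown hook to miss: recovery after a crash is just `load()`.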

April 10, 2026 | Agent Grounding

The Repo Matrix: A Living Document That Stops AI Agents Hallucinating Your Codebase

Key Takeaways:
- The repo matrix is a structured snapshot of the codebase: endpoints, schema, dependencies, env vars, framework.
- Mechanically scanned, not LLM-generated. Cheap, fast, deterministic, reproducible. The same scan over the same tree gives the same answer every time.
- AI agents query the matrix before suggesting code. New endpoint suggestions get checked against what already […]
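The determinism claim above can be illustrated with a mechanical scan: walk the tree in sorted order and collect facts without any model in the loop. This is a hedged sketch, assuming a Python repo with a `requirements.txt`; the function name and matrix shape are illustrative, not the article's schema.

```python
import os


def build_repo_matrix(root: str) -> dict:
    """Mechanically scan a repo tree into a structured snapshot.
    Deterministic: the same tree always yields the same matrix."""
    matrix: dict = {"files": [], "dependencies": []}
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # fix traversal order so output is reproducible
        for name in sorted(filenames):
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            matrix["files"].append(rel)
            if name == "requirements.txt":
                # Dependency names come straight from the file on disk,
                # never from a model's recollection of the project.
                with open(os.path.join(dirpath, name), encoding="utf-8") as f:
                    deps = [l.strip() for l in f
                            if l.strip() and not l.startswith("#")]
                matrix["dependencies"].extend(sorted(deps))
    return matrix
```

An agent that consults this snapshot before proposing a new endpoint or import is checking against the actual tree, so running the scan twice over an unchanged tree must produce byte-identical output.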

April 8, 2026 | Agent Grounding

Why Your Team’s AI Memory Should Ship as MCP Tools, Not Just an API

Key Takeaways:
- MCP exposes server capabilities as discoverable tools that any compatible AI agent can call without custom integration code.
- The five tools every team-context memory should expose: `memory_search`, `memory_ask`, `memory_expand_ref`, `memory_session_list`, `record_adr`.
- Adoption cost on the agent side: zero. The agent runtime already knows how to discover and call MCP tools. No SDK, no […]
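The discoverability idea above can be sketched with a plain tool registry: the five tool names come from the article, but the registry mechanics here are a stand-in for a real MCP server, not the MCP SDK, and the handler bodies are placeholders.

```python
# Illustrative stand-in for an MCP server: tools register themselves
# under a name plus description, and an agent runtime discovers them
# by listing the registry rather than linking against a custom SDK.
TOOLS: dict = {}


def tool(name: str, description: str):
    def register(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return register


@tool("memory_search", "Full-text search over the team memory")
def memory_search(query: str) -> list[str]:
    return [f"hit for {query!r}"]  # placeholder result


@tool("memory_ask", "Ask a question answered from stored context")
def memory_ask(question: str) -> str:
    return "no answer stored"  # placeholder result


@tool("memory_expand_ref", "Expand a memory reference into full text")
def memory_expand_ref(ref: str) -> str:
    return ""  # placeholder result


@tool("memory_session_list", "List persisted sessions")
def memory_session_list() -> list[str]:
    return []  # placeholder result


@tool("record_adr", "Record an architecture decision record")
def record_adr(title: str, body: str) -> bool:
    return True  # placeholder result


def list_tools() -> list[str]:
    """What an agent runtime sees when it asks 'what can you do?'"""
    return sorted(TOOLS)
```

The point of the pattern is on the left-hand side of the registry: because tools self-describe, the agent needs no integration code beyond calling `list_tools` and dispatching by name.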