Most AI workflows break after a few sessions. Context drifts. Decisions get lost. Trust erodes. Living Framework provides the governance infrastructure to make long-horizon collaboration actually work.
Everyone who works seriously with AI hits the same wall. The failures aren't about capability — they're about governance.
The AI "forgets" what you agreed on. Decisions made last week disappear. You're constantly re-explaining the basics.
Multiple files with different versions of the truth. You reference one document, the AI references another.
Numbers drift silently. The AI "reconstructs" values instead of referencing them. The same calculation gives different results.
Work from one domain bleeds into another. Finance decisions start affecting research. Risk levels get confused.
Small failures accumulate. You start second-guessing everything. The partnership that felt powerful now feels fragile.
When things break, there's no systematic way to fix them. You patch and hope. Same failures keep recurring.
A lightweight governance framework that makes long-horizon AI collaboration reliable. Not about restricting the AI — about giving it structure to be trustworthy.
Core insight: Reliability comes from governance, not capability. A well-structured collaboration with a standard model outperforms an unstructured one with a frontier model.
A1: Prioritise verification over throughput
A2: Enforce canonical lookup for all numeric values (see the sketch after this list)
A3: Guarantee one live file and version traceability
A4: Forbid speculative or incomplete content in finals
A5: Validate numerics and logic before publication
A6: Keep Strategy ↔ Canonical consistency verified
A7: Detect anomalies early, revert, and annotate causes
A8: Gate material changes through explicit consent
A9: Summarise long histories into concise digests
A10: Freeze, log, and archive artefacts for traceability
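To make the canonical-lookup control (A2) concrete, here is a minimal Python sketch. The file path, JSON schema, and function name are illustrative assumptions, not part of LC-OS; the point is that every number is read from one authoritative file rather than reconstructed from conversation memory.

```python
import json

# Hypothetical canonical store: one JSON file per domain holding the
# authoritative numeric values (path and schema are assumptions).
CANONICAL_PATH = "canonical/finance.json"

def canonical_value(key: str) -> float:
    """Look a number up in the canonical file instead of letting the
    AI reconstruct it from memory (control A2)."""
    with open(CANONICAL_PATH) as f:
        values = json.load(f)
    if key not in values:
        # In the spirit of A4 and A7: a missing value is an explicit
        # failure, never a silently invented number.
        raise KeyError(f"No canonical value for {key!r}; refusing to guess.")
    return values[key]

# Usage: every downstream calculation references the same source.
# monthly_burn = canonical_value("monthly_burn_eur")
```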
Six practical protocols that turn the controls into daily practice.
A continuously updated file preserving decisions, rules, and corrections across sessions. Read at every session start.
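A running document can be as simple as an append-only log with typed entries. The schema below is a hypothetical Python sketch; LC-OS prescribes the practice (record decisions, rules, and corrections; read a digest at session start), not this particular structure.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    day: date
    kind: str   # "decision" | "rule" | "correction" (assumed taxonomy)
    text: str

def session_digest(entries: list[Entry], limit: int = 10) -> str:
    """Condense the running document into a short brief that is loaded
    at the start of every session, so agreements survive the context
    window instead of being re-explained."""
    recent = sorted(entries, key=lambda e: e.day)[-limit:]
    return "\n".join(f"[{e.day}] {e.kind.upper()}: {e.text}" for e in recent)
```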
Break complex tasks into numbered steps. Execute one at a time. Pause for confirmation before proceeding.
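In code form, Step Mode is just a loop with a hard pause. A minimal sketch, assuming steps are plain callables; the explicit confirmation between steps is the substance, the plumbing is illustrative.

```python
def run_step_mode(steps):
    """Execute numbered steps one at a time, pausing for explicit
    confirmation before each next step."""
    for i, (label, action) in enumerate(steps, start=1):
        print(f"Step {i}: {label}")
        action()
        answer = input("Proceed to next step? [y/N] ").strip().lower()
        if answer != "y":
            print(f"Paused after step {i}.")  # stop here; revisit or abort
            break

# Hypothetical usage, assuming draft() and check() are defined:
# run_step_mode([("Draft outline", draft), ("Check numbers", check)])
```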
When something feels wrong: Stop → Question → AI explains → Decide to proceed, modify, or abort.
When things break: Stop immediately → Diagnose → Roll back to stable state → Note the failure.
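As a sketch of the rollback step, assuming a hypothetical layout where a `frozen/` directory holds the last known-good copy of each artefact (control A10) and failures are appended to a plain log:

```python
import shutil
from datetime import datetime, timezone

def rollback(live_path: str, frozen_path: str, reason: str) -> None:
    """Error-recovery step: restore the stable version, then annotate
    the failure so the same break does not recur silently."""
    shutil.copyfile(frozen_path, live_path)  # revert to stable state
    stamp = datetime.now(timezone.utc).isoformat()
    with open("failures.log", "a") as log:
        log.write(f"{stamp}\t{live_path}\t{reason}\n")  # note the failure
```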
After milestones: Are we still aligned? Has drift crept in? One improvement before continuing?
One canonical file per domain, controlled updates, and no parallel drafts. Prevents file fragmentation that causes contradictions and unreliable decisions.
Five papers documenting 18+ months of empirical human-AI collaboration. Everything emerged from real work, not theory.
The foundation. Introduces the Control Stack (A1-A10) and canonical information pipeline.
The manual. Running Documents, Step Mode, Challenge Protocol — day-to-day protocols.
The diagnostic. Taxonomy of how AI collaborations break — and repair patterns to fix them.
The philosophy. What it means to live with an AI under governance — relational dynamics and ethics.
The proof. Written entirely by the AI system — a first-person account of constraint, drift, and repair.
Answer 10 questions to discover where your AI workflow is vulnerable. Takes 3 minutes.
Everything you need to implement LC-OS. No email required.
The core of external memory. Track decisions, rules, and corrections across sessions. View Template →
Single source of numeric truth. Reference, don't reconstruct. View Template →
Regular check-ins on collaboration health. Detect drift before it breaks. View Template →
Governance templates optimised for Claude Cowork filesystem access. View Templates →
I'm exploring ways to help others implement what I've learned. If you're working on AI reliability, let's talk.
Hands-on help setting up LC-OS in your workflows. We work through the frameworks together and adapt them to your specific needs.
A walkthrough of the framework and how I use it: what worked, what didn't, and lessons from 18 months of iteration.
I'm still learning — happy to exchange insights. Your use cases help refine the framework for everyone.
This work exists because I ran into the same problem everyone does — AI collaboration that breaks after the first week.
Instead of accepting it as a limitation, I spent 18 months systematically developing solutions. Working with a frontier language model across finance, research, writing, and planning, I documented every failure mode and every repair pattern.
What emerged wasn't theory — it was a practical operating system for making AI collaboration actually reliable. The Control Stack. Running Documents. Step Mode. Error Recovery. All of it came from real breakdowns and real fixes.
Core belief: Stability is not the absence of failure. It's the capacity for visible, structured repair.
Whether you're struggling with AI reliability, interested in implementing LC-OS, or want to discuss the research — I'd love to hear from you.