Most AI workflows break after a few sessions. Context drifts. Decisions get lost. Trust erodes. Living Framework provides the governance infrastructure to make long-horizon AI collaboration actually work.
Everyone who works seriously with AI hits the same wall. The failures aren't about capability — they're about governance.
The AI "forgets" what you agreed on. Decisions made last week disappear. You're constantly re-explaining the basics.
Multiple files with different versions of the truth. You reference one document, the AI references another.
Numbers drift silently. The AI "reconstructs" instead of referencing. The same calculation gives different results.
Work from one domain bleeds into another. Finance decisions start affecting research. Risk levels get confused.
Small failures accumulate. You start second-guessing everything. The partnership that felt powerful now feels fragile.
When things break, there's no systematic way to fix them. You patch and hope. Same failures keep recurring.
A lightweight governance framework that makes long-horizon AI collaboration reliable. Not about restricting the AI — about giving it structure to be trustworthy.
Core insight: Reliability comes from governance, not capability. A well-structured collaboration with a standard model outperforms an unstructured one with a frontier model.
Prioritize verification over quick responses
One canonical location for each data type
Strict rules for file creation and versioning
Clear boundaries on what can be changed
Log all significant decisions
Structured disagreement process
Stop → Diagnose → Rollback → Note
Regular check-ins on collaboration health
Separate domains with different risk levels
Final decisions rest with the human (a brief configuration sketch of these controls follows the list)
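One way to make these controls concrete is to write them down explicitly rather than hold them in the conversation. Below is a minimal Python sketch of such a record; the field names, file paths, and domains are illustrative assumptions for the example, not part of the Control Stack itself.

```python
# Illustrative only: a governance record written down as plain data, so
# canonical locations, change boundaries, and domain risk levels are
# stated explicitly instead of living in chat history.
GOVERNANCE = {
    "canonical_sources": {                 # one canonical location per data type
        "financial_figures": "canonical_numbers.csv",
        "decisions_and_rules": "running_document.md",
    },
    "domains": {                           # separate domains, different risk levels
        "finance":  {"risk": "high",   "may_change": ["forecasts"]},
        "research": {"risk": "medium", "may_change": ["notes", "drafts"]},
    },
    "error_recovery": ["stop", "diagnose", "rollback", "note"],
    "final_authority": "human",            # final decisions rest with the human
}

if __name__ == "__main__":
    # A session could begin by printing this record back, so both parties
    # start from the same explicit boundaries.
    import json
    print(json.dumps(GOVERNANCE, indent=2))
```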
Six practical protocols that turn the controls into daily practice.
A continuously updated file preserving decisions, rules, and corrections across sessions. Read at every session start (a minimal sketch follows this list).
Break complex tasks into numbered steps. Execute one at a time. Pause for confirmation before proceeding.
When something feels wrong: Stop → Question → AI explains → Decide to proceed, modify, or abort.
When things break: Stop immediately → Diagnose → Rollback to stable state → Note the failure.
After milestones: Are we still aligned? Has drift crept in? One improvement before continuing?
All numbers in one authoritative sheet. Reference, don't reconstruct. Non-canonical = can't drive decisions.
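As one illustration of the Running Documents protocol, here is a minimal Python sketch, assuming a hypothetical running_document.md and a simple dated-entry format (the framework prescribes the practice, not a schema): decisions, rules, and corrections are appended as they happen, and the whole file is re-read at the start of every session.

```python
from datetime import date
from pathlib import Path

# Hypothetical path and format; any plain-text file the collaboration
# treats as its external memory works the same way.
RUNNING_DOC = Path("running_document.md")

def read_at_session_start() -> str:
    """Return the full Running Document so it can be loaded into the
    opening context of a session before any new work begins."""
    return RUNNING_DOC.read_text(encoding="utf-8") if RUNNING_DOC.exists() else ""

def log_entry(kind: str, text: str) -> None:
    """Append a dated entry ('decision', 'rule', or 'correction') so it
    survives the session instead of living only in the chat."""
    with RUNNING_DOC.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()} [{kind}] {text}\n")

# Example: record a decision mid-session, then confirm it is preserved.
log_entry("decision", "Forecast figures come from the canonical sheet only.")
print(read_at_session_start())
```

The mechanism matters less than the habit: nothing counts as agreed until it is written in the document the next session will read.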
Five papers documenting 18+ months of empirical human-AI collaboration. Everything emerged from real work, not theory.
The foundation. Introduces the Control Stack (A1-A10) and canonical information pipeline.
The manual. Running Documents, Step Mode, Challenge Protocol — day-to-day protocols.
The diagnostic. Taxonomy of how AI collaborations break — and repair patterns to fix them.
The philosophy. What it means to live with an AI under governance — relational dynamics and ethics.
The proof. Written entirely by the AI system — a first-person account of constraint, drift, and repair.
Answer 10 questions to discover where your AI workflow is vulnerable. Takes 3 minutes.
Everything you need to implement LC-OS. No email required.
The core of external memory. Track decisions, rules, corrections across sessions.
Single source of numeric truth. Reference, don't reconstruct (a short sketch follows these templates).
Regular check-ins on collaboration health. Detect drift before it breaks.
Governance templates optimized for Claude Cowork filesystem access.
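To make "reference, don't reconstruct" concrete, here is a minimal Python sketch, assuming a hypothetical canonical_numbers.csv with key, value, and as_of columns (the template does not mandate a particular format): every figure used in a decision is looked up from the one sheet, and anything missing is refused rather than recomputed.

```python
import csv

# Hypothetical file and columns; the point is that there is exactly one
# authoritative sheet, and numbers are looked up, never re-derived ad hoc.
CANONICAL_SHEET = "canonical_numbers.csv"   # columns: key,value,as_of

def load_canonical(path: str = CANONICAL_SHEET) -> dict[str, float]:
    """Load the single authoritative sheet into a lookup table."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["key"]: float(row["value"]) for row in csv.DictReader(f)}

def canonical(numbers: dict[str, float], key: str) -> float:
    """Reference a number by key; anything not in the sheet is non-canonical,
    cannot drive a decision, and is refused rather than reconstructed."""
    if key not in numbers:
        raise KeyError(f"'{key}' is not canonical; add it to the sheet first.")
    return numbers[key]

# Example usage, assuming the sheet defines cash_on_hand and monthly_burn.
numbers = load_canonical()
runway_months = canonical(numbers, "cash_on_hand") / canonical(numbers, "monthly_burn")
```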
If you're serious about making AI collaboration reliable, I can help implement what I've learned.
4-6 week engagement
Comprehensive assessment of your AI workflows using the failure taxonomy. Identify vulnerabilities and get specific repair protocols.
1-2 day program
Intensive training on implementing LC-OS. Hands-on work with Running Documents, the Control Stack, and error recovery.
Retainer basis
Fractional governance support for organizations deploying long-horizon AI systems. Expert guidance as needs evolve.
This work exists because I ran into the same problem everyone does — AI collaboration that breaks after the first week.
Instead of accepting it as a limitation, I spent 18 months systematically developing solutions. Working with a frontier language model across finance, research, writing, and planning, I documented every failure mode and every repair pattern.
What emerged wasn't theory — it was a practical operating system for making AI collaboration actually reliable. The Control Stack. Running Documents. Step Mode. Error Recovery. All of it came from real breakdowns and real fixes.
Core belief: Stability is not the absence of failure. It's the capacity for visible, structured repair.
Whether you're struggling with AI reliability, interested in implementing LC-OS, or want to discuss the research — I'd love to hear from you.