AI Governance Research

Make AI collaboration reliable over weeks, not minutes

Most AI workflows break after a few sessions. Context drifts. Decisions get lost. Trust erodes. Living Framework provides the governance infrastructure to make long-horizon collaboration actually work.

5 Research Papers
18+ Months Tested
3,500+ Downloads

Week one feels like magic. Week three, you're untangling a mess.

Everyone who works seriously with AI hits the same wall. The failures aren't about capability — they're about governance.

Context Drift

The AI "forgets" what you agreed on. Decisions made last week disappear. You're constantly re-explaining the basics.

Version Chaos

Multiple files with different versions of the truth. You reference one document, the AI references another.

Numeric Errors

Numbers drift silently. The AI "reconstructs" instead of references. Same calculation gives different results.

Boundary Blur

Work from one domain bleeds into another. Finance decisions start affecting research. Risk levels get confused.

Trust Erosion

Small failures accumulate. You start second-guessing everything. The partnership that felt powerful now feels fragile.

No Repair Path

When things break, there's no systematic way to fix them. You patch and hope. Same failures keep recurring.

LC-OS: The Lean Collaboration Operating System

A lightweight governance framework that makes long-horizon AI collaboration reliable. Not about restricting the AI — about giving it structure to be trustworthy.

Core insight: Reliability comes from governance, not capability. A well-structured collaboration with a standard model outperforms an unstructured one with a frontier model.

The 10 Controls

A1

Accuracy Over Speed

Prioritize verification over quick responses

A2

Single Source of Truth

One canonical location for each data type

A3

File Governance

Strict rules for file creation and versioning

A4

Explicit Permissions

Clear boundaries on what can be changed

A5

Audit Trail

Log all significant decisions

A6

Challenge Protocol

Structured disagreement process

A7

Error Recovery

Stop → Diagnose → Rollback → Note

A8

Stability Ping

Regular check-ins on collaboration health

A9

Pillar Boundaries

Separate domains with different risk levels

A10

Human Authority

Final decisions rest with the human

Core Protocols

Six practical protocols that turn the controls into daily practice.

Running Documents

External Memory

A continuously updated file preserving decisions, rules, and corrections across sessions. Read at every session start.
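The Running Document itself is just a plain file with dated entries. As a minimal sketch (the helper name, section labels, and file name are illustrative, not part of the published template):

```python
from datetime import date
from pathlib import Path

# Hypothetical helper: append a dated, labeled entry to a running
# document so decisions and corrections survive across sessions.
def log_entry(doc: Path, section: str, text: str) -> None:
    entry = f"- {date.today().isoformat()} [{section}] {text}\n"
    with doc.open("a", encoding="utf-8") as f:
        f.write(entry)

doc = Path("running_document.md")
doc.write_text("# Running Document\n\n", encoding="utf-8")
log_entry(doc, "Decision", "Adopt Canonical Numbers sheet v2 for budget figures")
log_entry(doc, "Correction", "Runway figure fixed; see canonical sheet")
print(doc.read_text(encoding="utf-8"))
```

The point is not the tooling: any append-only file the AI reads at session start serves as external memory.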

Step Mode

Paced Reasoning

Break complex tasks into numbered steps. Execute one at a time. Pause for confirmation before proceeding.
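The pacing above can be sketched as a simple loop with an explicit confirmation gate. This is an illustration, not the published protocol; the `confirm` callback stands in for the human and auto-approves here for demonstration:

```python
# Step Mode sketch: numbered steps, executed one at a time,
# each gated on explicit confirmation before proceeding.
def run_steps(steps, execute, confirm=lambda i, s: True):
    completed = []
    for i, step in enumerate(steps, start=1):
        if not confirm(i, step):          # pause point: human may abort
            break
        completed.append(execute(step))   # execute exactly one step
    return completed

steps = ["load data", "check totals", "draft summary"]
done = run_steps(steps, execute=lambda s: f"done: {s}")
print(done)
```

In practice the "loop" runs in conversation, not code: the AI proposes step N, the human confirms, and only then does step N+1 begin.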

Challenge Protocol

Structured Disagreement

When something feels wrong: Stop → Question → AI explains → Decide to proceed, modify, or abort.

Error Recovery

Systematic Repair

When things break: Stop immediately → Diagnose → Rollback to stable state → Note the failure.
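The Stop → Diagnose → Rollback → Note sequence can be sketched as a guarded state change, where "rollback" means restoring the last known-good snapshot. Names and the dict-as-state representation are illustrative assumptions, not the published protocol files:

```python
import copy

# Error Recovery sketch: snapshot before a change, stop on failure,
# note what broke, and roll back to the stable state.
def guarded_apply(state, change, failure_log):
    snapshot = copy.deepcopy(state)       # last stable state
    try:
        change(state)
    except Exception as exc:              # Stop immediately
        failure_log.append(str(exc))      # Note the failure
        return snapshot                   # Rollback
    return state

log = []
state = {"budget": 1000}
state = guarded_apply(state, lambda s: s.update(budget=1200), log)

def bad(s):
    raise ValueError("non-canonical number")

state = guarded_apply(state, bad, log)
print(state, log)   # budget stays 1200; failure recorded
```

The failure log is what feeds repair: the same entry goes into the Running Document so the breakdown is visible, not silently patched over.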

Stability Ping

Drift Detection

After milestones: Are we still aligned? Has drift crept in? One improvement before continuing?

Canonical Numbers

Numeric Truth

All numbers live in one authoritative sheet. Reference, don't reconstruct. A non-canonical number can't drive a decision.
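"Reference, don't reconstruct" can be enforced mechanically: every number used in a decision must resolve against the one canonical sheet, and anything missing is refused. The CSV contents and key names below are made up for illustration:

```python
import csv, io

# Canonical Numbers sketch: one sheet is the only source of numeric
# truth; lookups that miss are errors, never silently recomputed.
SHEET = "name,value\nmonthly_budget,4200\nrunway_months,11\n"

canon = {row["name"]: float(row["value"])
         for row in csv.DictReader(io.StringIO(SHEET))}

def canonical(name: str) -> float:
    if name not in canon:                 # non-canonical: refuse
        raise KeyError(f"{name} is not in the canonical sheet")
    return canon[name]

print(canonical("monthly_budget"))   # 4200.0
```

Failing loudly on a missing key is the design choice: a refused lookup surfaces drift immediately, whereas a reconstructed number hides it until results diverge.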

The Research

Five papers documenting 18+ months of empirical human-AI collaboration. Everything emerged from real work, not theory.

01

Context-Engineered Human-AI Collaboration

The foundation. Introduces the Control Stack (A1-A10) and canonical information pipeline.

View Paper →
02

The Lean Collaboration Operating System

The manual. Running Documents, Step Mode, Challenge Protocol — day-to-day protocols.

View Paper →
03

Failure and Repair in Long-Horizon Collaboration

The diagnostic. Taxonomy of how AI collaborations break — and repair patterns to fix them.

View Paper →
04

The Living Framework

The philosophy. What it means to live with an AI under governance — relational dynamics and ethics.

View Paper →
05

The Mahdi Ledger

The proof. Written entirely by the AI system — a first-person account of constraint, drift, and repair.

View Paper →

AI Collaboration Readiness

Answer 10 questions to discover where your AI workflow is vulnerable. Takes 3 minutes.


Free Templates

Everything you need to implement LC-OS. No email required.

Running Document

The core of external memory. Track decisions, rules, corrections across sessions.

View Template →

Canonical Numbers Sheet

Single source of numeric truth. Reference, don't reconstruct.

View Template →

Failure Log Template

Track what breaks, how you fixed it, what changed.

View Template →

Stability Ping

Regular check-ins on collaboration health. Detect drift before it breaks.

View Template →

Claude Cowork Templates

Governance templates optimized for Claude Cowork filesystem access.

View Templates →

Full LC-OS Project

Complete toolkit with worked examples and detailed guides.

Explore Toolkit →

Work With Me

If you're serious about making AI collaboration reliable, I can help implement what I've learned.

Reliability Audit

4-6 week engagement

Comprehensive assessment of your AI workflows using the failure taxonomy. Identify vulnerabilities and get specific repair protocols.

LC-OS Workshop

1-2 day program

Intensive training on implementing LC-OS. Hands-on work with Running Documents, the Control Stack, and error recovery.

Ongoing Advisory

Retainer basis

Fractional governance support for organizations deploying long-horizon AI systems. Expert guidance as needs evolve.

Living Framework

This work exists because I ran into the same problem everyone does — AI collaboration that breaks after the first week.

Instead of accepting it as a limitation, I spent 18 months systematically developing solutions. Working with a frontier language model across finance, research, writing, and planning, I documented every failure mode and every repair pattern.

What emerged wasn't theory — it was a practical operating system for making AI collaboration actually reliable. The Control Stack. Running Documents. Step Mode. Error Recovery. All of it came from real breakdowns and real fixes.

Core belief: Stability is not the absence of failure. It's the capacity for visible, structured repair.

Get in Touch

Whether you're struggling with AI reliability, interested in implementing LC-OS, or want to discuss the research — I'd love to hear from you.