AI Governance Research

Make AI collaboration reliable over weeks, not minutes

Most AI workflows break after a few sessions. Context drifts. Decisions get lost. Trust erodes. Living Framework provides the governance infrastructure to make long-horizon collaboration actually work.

5 Research Papers
18+ Months Tested
3,500+ Downloads

Week one feels like magic. Week three, you're untangling a mess.

Everyone who works seriously with AI hits the same wall. The failures aren't about capability — they're about governance.

Context Drift

The AI "forgets" what you agreed on. Decisions made last week disappear. You're constantly re-explaining the basics.

Version Chaos

Multiple files with different versions of the truth. You reference one document, the AI references another.

Numeric Errors

Numbers drift silently. The AI "reconstructs" values instead of referencing them. The same calculation gives different results.

Boundary Blur

Work from one domain bleeds into another. Finance decisions start affecting research. Risk levels get confused.

Trust Erosion

Small failures accumulate. You start second-guessing everything. The partnership that felt powerful now feels fragile.

No Repair Path

When things break, there's no systematic way to fix them. You patch and hope. Same failures keep recurring.

LC-OS: The Lean Collaboration Operating System

A lightweight governance framework that makes long-horizon AI collaboration reliable. Not about restricting the AI — about giving it structure to be trustworthy.

Core insight: Reliability comes from governance, not capability. A well-structured collaboration with a standard model outperforms an unstructured one with a frontier model.

The 10 Controls

A1

Accuracy > Speed

Prioritise verification over throughput

A2

Single Source of Truth

Enforce canonical lookup for all numeric values

A3

File Registry & Checksums

Guarantee one live file and version traceability

A4

No Placeholders in Outputs

Forbid speculative or incomplete content in finals

A5

Sanity Checks & Unit Tests

Validate numerics and logic before publication

A6

Cross-Document Reconciliation

Keep Strategy ↔ Canonical consistency verified

A7

Drift Diagnostics & Rollback

Detect anomalies early, revert, and annotate causes

A8

Permissioned Actions & Approvals

Gate material changes through explicit consent

A9

Compaction & State Notes

Summarise long histories into concise digests

A10

Audit Trail & Release Process

Freeze, log, and archive artefacts for traceability
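As an illustration of what controls A2 and A3 can look like in practice, the sketch below verifies registered files against recorded checksums so that silent edits and parallel drafts surface immediately. This is a minimal, hypothetical sketch, not part of LC-OS itself; the function names and the registry shape (a mapping from file path to expected SHA-256 digest) are assumptions.

```python
import hashlib


def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_registry(registry):
    """Compare each registered file against its recorded checksum.

    registry: {path: expected_sha256_hex}
    Returns the list of paths whose content has drifted since registration.
    """
    return [path for path, expected in registry.items()
            if sha256_of(path) != expected]
```

Run before each session: an empty result means every canonical file is the one you think it is; a non-empty result is a drift signal to investigate before doing any new work.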

Core Protocols

Six practical protocols that turn the controls into daily practice.

Running Documents

External Memory

A continuously updated file preserving decisions, rules, and corrections across sessions. Read at every session start.

Step Mode

Paced Reasoning

Break complex tasks into numbered steps. Execute one at a time. Pause for confirmation before proceeding.

Challenge Protocol

Structured Disagreement

When something feels wrong: Stop → Question → AI explains → Decide to proceed, modify, or abort.

Error Recovery

Systematic Repair

When things break: Stop immediately → Diagnose → Rollback to stable state → Note the failure.

Stability Ping

Drift Detection

After milestones: Are we still aligned? Has drift crept in? One improvement before continuing?

File and Version Governance

Version Control

One canonical file per domain, controlled updates, and no parallel drafts. Prevents file fragmentation that causes contradictions and unreliable decisions.
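Step Mode, as described above, is simple enough to prototype in a few lines. The sketch below is illustrative only: it assumes steps are (description, action) pairs and that a confirmation callback stands in for the human's interactive "proceed?" prompt.

```python
def run_step_mode(steps, confirm):
    """Execute numbered steps one at a time, pausing for confirmation.

    steps:   list of (description, action) pairs; action is a zero-arg callable
    confirm: callable taking (step_number, description) and returning True to
             proceed or False to abort; in live use this would prompt the human
    Returns the list of completed step descriptions.
    """
    completed = []
    for number, (description, action) in enumerate(steps, start=1):
        if not confirm(number, description):
            break  # human aborted: stop before executing this step
        action()
        completed.append(description)
    return completed
```

The point of the pattern is the pause: nothing past the abort point ever executes, so a wrong turn at step 3 never contaminates steps 4 through 10.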

The Research

Five papers documenting 18+ months of empirical human-AI collaboration. Everything emerged from real work, not theory.

01

Context-Engineered Human-AI Collaboration

The foundation. Introduces the Control Stack (A1-A10) and canonical information pipeline.

View Paper →
02

The Lean Collaboration Operating System

The manual. Running Documents, Step Mode, Challenge Protocol — day-to-day protocols.

View Paper →
03

Failure and Repair in Long-Horizon Collaboration

The diagnostic. Taxonomy of how AI collaborations break — and repair patterns to fix them.

View Paper →
04

The Living Framework

The philosophy. What it means to live with an AI under governance — relational dynamics and ethics.

View Paper →
05

The Mahdi Ledger

The proof. Written entirely by the AI system — a first-person account of constraint, drift, and repair.

View Paper →

AI Collaboration Readiness

Answer 10 questions to discover where your AI workflow is vulnerable. Takes 3 minutes.

Free Templates

Everything you need to implement LC-OS. No email required.

Running Document

The core of external memory. Track decisions, rules, and corrections across sessions.

View Template →

Canonical Numbers Sheet

Single source of numeric truth. Reference, don't reconstruct.

View Template →

Failure Log Template

Track what breaks, how you fixed it, what changed.

View Template →

Stability Ping

Regular check-ins on collaboration health. Detect drift before it breaks.

View Template →

Claude Cowork Templates

Governance templates optimised for Claude Cowork's filesystem access.

View Templates →

Full LC-OS Project

Complete toolkit with worked examples and detailed guides.

Explore Toolkit →
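The "reference, don't reconstruct" rule behind the Canonical Numbers Sheet can be sketched as a lookup that refuses to guess. This is a hypothetical illustration, not the template itself; the class name and keys are assumptions.

```python
class CanonicalNumbers:
    """Single source of numeric truth: values are referenced, never recomputed."""

    def __init__(self, values):
        self._values = dict(values)

    def get(self, key):
        if key not in self._values:
            # Refuse to reconstruct: a missing value is an explicit gap,
            # to be filled in the canonical sheet, not improvised here.
            raise KeyError(f"'{key}' is not in the canonical sheet; add it before use")
        return self._values[key]
```

The design choice is that a missing number fails loudly instead of being silently recomputed, which is exactly the failure mode the sheet exists to prevent.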

Interested in Collaboration?

I'm exploring ways to help others implement what I've learned. If you're working on AI reliability, let's talk.

Implementation Support

Help setting up LC-OS in your workflows. Work through the frameworks together and adapt them to your specific needs.

Knowledge Sharing

Walk through the framework and how I use it. Share what worked, what didn't, and lessons from 18 months of iteration.

Feedback & Iteration

I'm still learning — happy to exchange insights. Your use cases help refine the framework for everyone.

Living Framework

This work exists because I ran into the same problem everyone does — AI collaboration that breaks after the first week.

Instead of accepting it as a limitation, I spent 18 months systematically developing solutions. Working with a frontier language model across finance, research, writing, and planning, I documented every failure mode and every repair pattern.

What emerged wasn't theory — it was a practical operating system for making AI collaboration actually reliable. The Control Stack. Running Documents. Step Mode. Error Recovery. All of it came from real breakdowns and real fixes.

Core belief: Stability is not the absence of failure. It's the capacity for visible, structured repair.

Get in Touch

Whether you're struggling with AI reliability, interested in implementing LC-OS, or want to discuss the research — I'd love to hear from you.

Ready to connect?

Send me an email and I'll get back to you within 24 hours.

Email Me →