Log every mistake for 30 days with date, event, and conditions — no analysis, just raw data for pattern detection
Maintain an error log for 30 days that records date, what happened, and conditions present for every mistake, without analysis or interpretation, to create raw data for pattern detection.
Why This Is a Rule
The "without analysis or interpretation" constraint is the most counterintuitive and most important part of this rule. When you make a mistake and immediately analyze it, you produce a single-instance explanation shaped by the emotional context of the error. "I missed the deadline because the client changed requirements" — reasonable, but shaped by frustration with the client. After 30 days of logging errors without analyzing them, you have raw data that reveals patterns invisible to single-instance analysis.
Maybe 8 of your 15 logged errors occurred on Mondays. Maybe 12 of 15 occurred when you'd had less than 7 hours of sleep. Maybe every deadline miss shared the same condition: the project had no written scope document. These patterns emerge from aggregate data but are invisible in the moment because each error feels like a unique event with unique causes.
The 30-day duration is the minimum for meaningful pattern emergence: shorter periods don't accumulate enough data points, and longer periods risk data-collection fatigue. Three recording fields (date, what happened, conditions) capture the minimum viable dataset for pattern detection while staying quick enough to maintain for a full month (Accountability reporting must be near-zero effort: a checkbox or emoji, not a paragraph, because reporting friction kills the whole system).
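The three-field entry can be sketched as a data structure. This is a minimal sketch, not part of the rule itself: the `ErrorEntry` class and its field names are hypothetical, chosen to mirror the date / what-happened / conditions schema. The key design choice is structural: there is deliberately no "cause" or "reason" field, so premature analysis has nowhere to go.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical minimal log entry: exactly the three fields the rule names.
# No "cause" or "reason" field exists, so analysis cannot creep into the log.
@dataclass
class ErrorEntry:
    when: datetime              # date/time the mistake occurred
    what: str                   # factual, non-judgmental description
    conditions: dict[str, str]  # e.g. {"sleep": "<7h", "time": "morning"}

entry = ErrorEntry(
    when=datetime(2024, 3, 4, 9, 30),
    what="Sent the email to the wrong recipient",
    conditions={"sleep": "6h", "time": "morning", "workload": "high"},
)
```

Any format works (notebook, spreadsheet, text file) as long as it enforces the same constraint: facts and conditions only.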
When This Fires
- When you want to understand your personal error patterns but don't yet have data
- When the same types of errors seem to recur but you can't articulate the common factor
- Before designing error-correction systems (Classify errors as execution, knowledge, or judgment failures before correcting; each type needs a fundamentally different fix): you need pattern data first
- As the raw data source for aggregate review analysis ("After 10+ post-action reviews, analyze in aggregate: patterns across unrelated tasks reveal systemic tendencies, not isolated errors")
Common Failure Mode
Analyzing each error as it's logged: "I made this mistake because..." This defeats the purpose. The analysis is premature (you have one data point, not a pattern), biased (the emotional context shapes the explanation), and prevents future pattern detection (you've already "explained" each error individually, making it harder to see the aggregate pattern). Log the facts — date, event, conditions — and analyze only after 30 days of accumulated data.
The Protocol
1. For 30 consecutive days, log every mistake, error, or failure. Set a daily reminder to ensure completeness.
2. For each entry, record three things only:
   - Date/time: when it occurred.
   - What happened: a factual, non-judgmental description. "Sent the email to the wrong recipient," not "I was careless again."
   - Conditions: what was true at the time? Energy level, sleep quality, time of day, workload, emotional state, environment.
3. Do NOT analyze. Do not write "because" or "the reason was." Just facts and conditions.
4. After 30 days, read the entire log in one sitting. Look for patterns across entries: recurring conditions, clustering by time/day/energy, common situational factors.
5. The patterns that emerge from 30 days of raw data will be more accurate and more actionable than any single-instance analysis could produce.
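The day-30 aggregate pass can also be done mechanically. The sketch below is one hedged way to do it, assuming entries were stored as (datetime, description, conditions) tuples; the `find_patterns` function and the "at least half the errors" threshold are illustrative choices, not part of the rule. Note that it only counts frequencies across the whole log; no individual entry is ever "explained."

```python
from collections import Counter
from datetime import datetime

# Hypothetical end-of-month analysis: count recurring conditions and
# weekdays across the whole 30-day log, then flag anything present in
# at least half of the logged errors. Frequencies only, no causes.
def find_patterns(entries):
    condition_counts = Counter()
    weekday_counts = Counter()
    for when, _what, conditions in entries:
        weekday_counts[when.strftime("%A")] += 1
        condition_counts.update(f"{k}={v}" for k, v in conditions.items())
    threshold = len(entries) / 2
    return {
        "total_errors": len(entries),
        "frequent_conditions": [c for c, n in condition_counts.most_common() if n >= threshold],
        "frequent_weekdays": [d for d, n in weekday_counts.most_common() if n >= threshold],
    }

# Toy log: three errors, all on Mondays, all on projects with no scope document.
log = [
    (datetime(2024, 3, 4), "Missed deadline", {"scope_doc": "none", "sleep": "<7h"}),
    (datetime(2024, 3, 11), "Wrong recipient", {"scope_doc": "none", "sleep": "7h"}),
    (datetime(2024, 3, 18), "Double-booked meeting", {"scope_doc": "none", "sleep": "<7h"}),
]
print(find_patterns(log))
```

On this toy log the pass surfaces exactly the kind of signal the rule is after: "no scope document" and "Monday" recur across every error, while any single entry read alone would suggest a unique one-off cause.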