Before optimizing a positive pattern, test causation by removing the suspected factor
Before optimizing around a perceived positive pattern, verify through deliberate removal tests whether the pattern persists when suspected causal factors are absent.
Why This Is a Rule
Positive patterns are the most dangerous to optimize without verification because they feel too good to question. "Morning exercise makes me more productive" — so you restructure your entire schedule around morning exercise. But was it the exercise, the early wake time, the break from screens, or the coincidence that your exercise weeks also had fewer meetings? Without a removal test, you're optimizing around a correlation that may not be causal.
The removal test is simple: deliberately remove the suspected causal factor for one cycle and observe whether the positive outcome persists. If productivity remains high during a week without morning exercise, the exercise wasn't the cause — and optimizing around it would be wasted effort. If productivity drops measurably, the causal link is supported (though not proven — it could be expectation effects).
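To make "drops measurably" concrete, one rough screen is to ask whether the removal-week outcome falls more than one standard deviation below the baseline weeks. A minimal sketch, with illustrative numbers and names that are assumptions, not from the source:

```python
import statistics

# Hypothetical weekly productivity scores (e.g., tasks completed).
exercise_weeks = [42, 39, 44, 41]   # cycles WITH morning exercise
removal_week = 31                   # one cycle WITHOUT it

baseline_mean = statistics.mean(exercise_weeks)
baseline_sd = statistics.stdev(exercise_weeks)

# "Drops measurably" here means: the removal-week score falls more than
# one baseline standard deviation below the baseline mean. This is a
# crude screen against ordinary week-to-week noise, not a significance test.
drop = baseline_mean - removal_week
measurable = drop > baseline_sd
```

The one-standard-deviation threshold is an arbitrary choice for illustration; the point is only that a drop should be judged against normal cycle-to-cycle variation, not eyeballed.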
People skip verification for positive patterns because it feels like sabotaging something that works. "Why would I stop doing something that's working?" Because you might be doing something unnecessary while the actual cause is something else — and the unnecessary thing is consuming time and energy.
When This Fires
- Before restructuring your schedule around a perceived productivity pattern
- When investing significant resources (time, money, effort) into maintaining a pattern
- When someone says "I always do X and it always leads to Y" as justification for X
- Before recommending a personal pattern to others as a best practice
Common Failure Mode
Refusing to test because "it's obviously working." The refusal is itself a signal: if you're unwilling to remove the factor for even one cycle, you might be attached to the ritual rather than to the outcome. Genuine causal claims survive removal tests; superstitious ones don't — which is why superstitions resist being tested in the first place.
The Protocol
Before investing in optimizing a positive pattern:
1. Identify the suspected causal factor: "I think [X] causes [positive outcome]."
2. Remove X for one full cycle (one week, one sprint — whatever the natural period is).
3. Measure the outcome: does [positive outcome] persist without X?
4. If the outcome persists → X was not the cause. Investigate what actually drives the outcome.
5. If the outcome drops → X is likely causal. Now you can optimize with confidence.
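The protocol's decision step can be sketched as a small function. The function name, the data, and the one-standard-deviation threshold are all illustrative assumptions, not a prescribed implementation:

```python
import statistics

def removal_test(baseline_cycles, removal_cycle, min_drop_sd=1.0):
    """Classify a suspected causal factor after one removal cycle.

    baseline_cycles: outcome measurements from cycles WITH the factor.
    removal_cycle:   the outcome measurement from the one cycle WITHOUT it.
    Returns "likely causal" if the outcome dropped by more than
    min_drop_sd baseline standard deviations, else "not supported".
    """
    mean = statistics.mean(baseline_cycles)
    sd = statistics.stdev(baseline_cycles)
    if mean - removal_cycle > min_drop_sd * sd:
        # Step 5: outcome dropped -> optimize with some confidence
        # (expectation effects still aren't ruled out).
        return "likely causal"
    # Step 4: outcome persisted -> X wasn't the cause; look elsewhere.
    return "not supported"

# Usage: productivity stayed high in the week without morning exercise.
verdict = removal_test([42, 39, 44, 41], 40)
```

A single removal cycle is a deliberately cheap test: it can falsify the causal claim outright, but a drop only supports the claim, as the source notes.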