Test success principles for survivorship bias: luck masquerades as skill.
Before treating a success principle as a robust pattern, ask: "If I did exactly this again under slightly different conditions, would I expect the same result?"
Why This Is a Rule
Success is a biased teacher. When something works, you naturally extract a "principle": "I succeeded because I started early / took risks / followed my gut." But the principle might be wrong — it might capture a coincidental feature of a specific success rather than a causal mechanism. The early start might have coincided with favorable market conditions. The risk-taking might have coincided with a lucky break. The gut feeling might have been right this time but wrong on the four occasions you don't remember because they didn't produce memorable successes.
This is survivorship bias applied to personal experience: you learn from your successes (which survived) but not from your failures with the same approach (which didn't survive to teach you). "Taking risks leads to success" is derived from the risks that paid off; the risks that didn't are forgotten or rationalized away.
The counterfactual test — "If I did exactly this again under slightly different conditions, would I expect the same result?" — probes whether the principle is robust or context-dependent. A robust principle works across varying conditions. A context-dependent principle works only under the specific conditions of the original success. If the answer is "probably not" or "it depends on factors I can't control," the principle isn't a principle — it's a circumstance report.
When This Fires
- After extracting a "lesson" or "principle" from a successful outcome
- When building personal operating rules based on what has worked in the past
- When tempted to generalize from a single success to a universal approach
- Complements three-pass pattern spotting (mark recurrences without interpretation, cluster into pattern types, check against counterexamples before naming) by adding a success-specific survivorship filter
Common Failure Mode
Over-generalizing from one success: "My presentation went great because I didn't prepare slides — so I should never use slides." The presentation succeeded because the topic was conversational, the audience was informal, and you knew the material deeply. Under different conditions (technical audience, formal setting, unfamiliar material), the no-slides approach would fail. The principle ("no slides = great presentation") was extracted from one data point without testing its robustness.
The Protocol
(1) When you extract a principle from a success, write it as a testable claim: "Starting projects on Mondays leads to better outcomes."
(2) Apply the counterfactual: "If I started a project on Monday but the project scope was different / the team was different / the deadline was different, would I still expect a better outcome?"
(3) If yes across multiple condition variations → the principle is robust. It's a genuine causal mechanism worth preserving.
(4) If no / uncertain → the principle is context-dependent. It worked this time because of specific conditions, not because of the principle itself. Note the conditions under which it worked rather than extracting a universal rule.
(5) For every success principle you keep, also identify the conditions under which it would fail. This prevents blind application to contexts where the principle doesn't hold.
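The protocol can be sketched as a small data structure. This is an illustrative sketch only, not part of the rule itself; the class name, fields, and example claim are hypothetical. Each condition variation maps to the answer to the counterfactual question, and a principle counts as robust only if the answer is yes across all variations.

```python
from dataclasses import dataclass, field

@dataclass
class Principle:
    # Step 1: the extracted lesson, written as a testable claim
    claim: str
    # Step 2: condition variation -> "would I still expect the same result?"
    counterfactuals: dict = field(default_factory=dict)
    # Step 5: conditions under which the principle would fail
    failure_conditions: list = field(default_factory=list)

    def classify(self) -> str:
        """Steps 3-4: robust only if the answer is yes across all variations."""
        if not self.counterfactuals:
            return "untested"
        if all(self.counterfactuals.values()):
            return "robust"              # genuine causal mechanism worth keeping
        return "context-dependent"       # a circumstance report, not a principle

# Hypothetical example from the protocol text
p = Principle(claim="Starting projects on Mondays leads to better outcomes")
p.counterfactuals = {
    "different scope": False,
    "different team": False,
    "different deadline": True,
}
p.failure_conditions = ["unfamiliar team", "tight deadline"]
print(p.classify())  # context-dependent
```

The point of the structure is Step 5: keeping `failure_conditions` alongside the claim forces you to record where the principle breaks, instead of carrying a bare universal rule forward.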