Evaluate decision process separately from outcome — 'was the reasoning sound given what was knowable?' not 'did it work out?'
When reviewing decision outcomes, evaluate process quality independently of result quality by asking "given what was knowable at decision time, was the reasoning sound?" rather than "did it work out?"
Why This Is a Rule
Annie Duke's "resulting" — judging decision quality by outcome quality — is the most common and most destructive error in decision review. A good decision can produce a bad outcome (you invested wisely based on available evidence, but an unpredictable event caused losses). A bad decision can produce a good outcome (you drove drunk and arrived safely). Conflating process quality with outcome quality means you learn the wrong lessons: abandoning sound processes after unlucky outcomes and reinforcing reckless ones after lucky outcomes.
The separation is straightforward but requires discipline. Process quality answers: "Given what was knowable at decision time, did you gather appropriate information, consider relevant alternatives, weight criteria thoughtfully, and account for key risks?" Outcome quality answers: "Did the result match your expectations?" These are independent variables. A 2×2 matrix emerges: good process/good outcome (deserved success), good process/bad outcome (bad luck), bad process/good outcome (good luck), bad process/bad outcome (deserved failure). Only the process dimension is improvable.
If you reinforce good outcomes regardless of process (resulting), you're training yourself on noise. If you reinforce good process regardless of outcome, you're training yourself on the only variable you control — and over time, good process produces more good outcomes than bad process does.
When This Fires
- During any decision journal review (Review decisions in three steps: re-read reasoning blind, predict outcome, then compare — this sequence defeats hindsight bias)
- When a decision produces a surprising outcome — good or bad — and you want to learn from it
- When someone says "that was a good/bad decision" based purely on the outcome
- When evaluating your own or others' judgment for hiring, promotion, or accountability purposes
Common Failure Mode
Abandoning a sound strategy after one bad outcome: "Our hiring process produced a bad hire, so we need to overhaul the process." Maybe — but maybe the process was sound and this hire was in the inevitable error band. The question isn't "did this hire work out?" but "given everything we knew during the interview process, was our evaluation methodology sound?" If the answer is yes, the process doesn't need overhaul — it needs continued application.
The Protocol
1. When reviewing a decision outcome, deliberately separate the two evaluations.
2. Process evaluation: "Given only what was knowable at decision time: did I gather sufficient information? Did I consider the right alternatives? Did I weight criteria appropriately? Did I account for major risks?" Score process quality 1-5.
3. Outcome evaluation: "Did the result match expectations? Was the outcome good, neutral, or bad?" Score outcome quality 1-5.
4. Plot on the 2×2: good process/good outcome → reinforce the method. Good process/bad outcome → bad luck; don't change the process. Bad process/good outcome → good luck; fix the process despite the outcome. Bad process/bad outcome → fix the process.
5. Only improve process. Outcomes are informative but not controllable.
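The protocol's scoring and 2×2 classification can be sketched in code. This is a minimal illustration, not part of the rule itself; in particular, the cutoff treating scores of 3 or above as "good" is an assumption, since the rule specifies 1-5 scales but no threshold.

```python
def classify(process_score: int, outcome_score: int, good_threshold: int = 3) -> str:
    """Map 1-5 process and outcome scores onto the 2x2 matrix.

    The good_threshold cutoff is an assumed convention, not given by the rule.
    """
    if not (1 <= process_score <= 5 and 1 <= outcome_score <= 5):
        raise ValueError("scores must be in the 1-5 range")

    good_process = process_score >= good_threshold
    good_outcome = outcome_score >= good_threshold

    # The four quadrants, with the prescribed response for each.
    if good_process and good_outcome:
        return "deserved success: reinforce the method"
    if good_process and not good_outcome:
        return "bad luck: don't change the process"
    if not good_process and good_outcome:
        return "good luck: fix the process despite the outcome"
    return "deserved failure: fix the process"
```

For example, a well-reasoned decision that went poorly (`classify(4, 2)`) lands in the "bad luck" quadrant, so the process is left intact; a reckless decision that happened to pay off (`classify(2, 5)`) lands in "good luck" and still triggers a process fix. Only the process dimension ever drives a change.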