Add 'Did I use the right framework?' to every decision review — track mismatches to build your personal routing table
In post-decision review, explicitly add the meta-question 'Did I use the right framework for this decision?' and note framework-decision mismatches (comprehensive analysis on trivial reversible choices, satisficing on irreversible high-stakes decisions) to build your personal routing table.
Why This Is a Rule
Most decision reviews focus on content quality: "Was the reasoning sound? Did we consider the right factors?" But framework selection errors are a distinct failure class that content review doesn't catch. You can have excellent reasoning within a framework that was wrong for the decision — like producing a beautifully crafted weighted matrix to decide where to eat lunch. The reasoning was impeccable; the framework was absurdly over-powered for the decision's stakes.
Two common mismatches produce opposite errors. Over-analysis: comprehensive frameworks applied to trivial reversible decisions. This wastes time and creates decision fatigue. Under-analysis: quick heuristics applied to irreversible high-stakes decisions. This produces insufficiently examined commitments. Both mismatches have characteristic signatures in review: over-analysis shows up as "I spent 3 hours on a decision I could have made in 5 minutes." Under-analysis shows up as "I committed to something irreversible without considering key factors."
Tracking these mismatches over time builds a personal routing table — a learned mapping between decision types and appropriate frameworks. After reviewing 20 decisions, you develop intuition for which decisions deserve which process, calibrated to your specific decision patterns rather than generic advice.
When This Fires
- During every post-decision review (Review decisions in three steps: re-read reasoning blind, predict outcome, then compare — this sequence defeats hindsight bias, Score process and outcome independently on a 2x2 — deserved success, bad luck, dumb luck, and deserved failure are four different things)
- When you notice that your decision reviews focus only on content, never on process selection
- When building your personal decision-making practice over months and years
- When a decision took far too long or was made far too quickly relative to its stakes
Common Failure Mode
Reviewing framework fit only after bad outcomes: "That went badly — maybe I should have used a different process." But you should also review after good outcomes to catch dumb luck (Score process and outcome independently on a 2x2 — deserved success, bad luck, dumb luck, and deserved failure are four different things) produced by an inappropriate framework. A good outcome from an under-analyzed decision doesn't mean the framework was right — it means you got lucky despite using the wrong one.
The Protocol
(1) Add one question to every decision review: "Did I use the right framework for this decision?"
(2) Check for over-analysis: "Was the framework more thorough than the decision warranted? Did I spend hours on something reversible and low-stakes?" If yes → note the mismatch. For similar future decisions, route to a faster framework.
(3) Check for under-analysis: "Was the framework less thorough than the decision warranted? Did I use gut instinct for something irreversible?" If yes → note the mismatch. For similar future decisions, route to a more structured framework.
(4) Record the mismatch pattern: "Decision type X → I tend to over/under-analyze."
(5) Over time, these records become your personal routing table: a calibrated mapping between decision types and the framework class that actually fits.
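The protocol above amounts to a small tally over reviewed decisions. Here is a minimal sketch of what that routing table might look like in code, assuming you log one record per review; the `Review` class, field names, and mismatch labels are illustrative, not a prescribed schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Review:
    decision_type: str   # e.g. "lunch spot", "job offer" (hypothetical categories)
    framework: str       # e.g. "weighted matrix", "gut instinct"
    mismatch: str        # "over", "under", or "ok" — your answer to the meta-question

def routing_table(reviews):
    """Tally mismatch directions per decision type, so a dominant
    'over' or 'under' pattern surfaces after ~20 reviews."""
    table = {}
    for r in reviews:
        table.setdefault(r.decision_type, Counter())[r.mismatch] += 1
    return table

reviews = [
    Review("lunch spot", "weighted matrix", "over"),
    Review("lunch spot", "weighted matrix", "over"),
    Review("job offer", "gut instinct", "under"),
]

for dtype, counts in routing_table(reviews).items():
    dominant = counts.most_common(1)[0][0]
    if dominant == "ok":
        print(f"{dtype}: framework fit OK")
    else:
        print(f"{dtype}: tend to {dominant}-analyze")  # → route differently next time
```

The point of the sketch is the shape of the data, not the tooling: a plain notebook column with the same three labels serves the same purpose.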