After 30+ journal entries, calculate your calibration — do your 70% predictions come true 70% of the time?
Across 30+ decision journal entries, calculate your calibration by grouping decisions by stated confidence level (e.g., all 70% predictions) and checking what fraction of each group actually came true; use that ratio to adjust future confidence statements.
Why This Is a Rule
Calibration — the match between your stated confidence and actual accuracy — is the meta-skill of decision-making. A perfectly calibrated person's 70% predictions come true 70% of the time and their 90% predictions come true 90% of the time. Most people are systematically overconfident: their 90% predictions come true only 70% of the time. This overconfidence produces underestimation of risk, insufficient contingency planning, and surprised-Pikachu responses to predictable failures.
The calibration calculation requires accumulated data — individual decisions tell you about individual outcomes, but calibration is a statistical property of your prediction system as a whole. Thirty entries is the minimum for meaningful grouping: with fewer, each confidence bucket has too few entries for the percentage to stabilize.
The corrective is simple once the data exists: if your 90% predictions come true only 70% of the time, you know to treat your future "90% confident" feeling as 70% confidence. This recalibration transforms overconfident certainty into appropriately uncertain estimation — which, paradoxically, produces better decisions because you prepare for the 30% failure rate your feelings told you was only 10%.
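As a minimal sketch of that correction, assuming Python and purely illustrative numbers (the table below is hypothetical, not measured data), the adjustment is just a lookup from felt confidence to your observed hit rate:

```python
# Hypothetical calibration table: stated confidence -> observed hit rate
# from your own journal review. The numbers are illustrative only.
OBSERVED_RATE = {0.90: 0.70, 0.80: 0.65, 0.70: 0.68}

def recalibrated(stated_confidence: float) -> float:
    """Map a felt confidence to the rate it has historically corresponded to.
    Falls back to the stated value when no history exists for that level."""
    return OBSERVED_RATE.get(stated_confidence, stated_confidence)

print(recalibrated(0.90))  # 0.7: plan for a ~30% failure rate, not 10%
```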
When This Fires
- After accumulating 30+ decision journal entries with confidence percentages (each entry captures six fields before the outcome: date, decision, reasoning, prediction, confidence %, and your current state)
- During quarterly calibration reviews when you have enough data to group meaningfully
- When you suspect you're systematically overconfident or underconfident
- When building a personal calibration curve that improves over years of decision-tracking
Common Failure Mode
Skipping the confidence percentage during journaling because "I don't know what number to put" (decision journal entries need six fields captured before outcomes: date, decision, reasoning, prediction, confidence %, and your current state). Any number is better than none. The accuracy of the initial number doesn't matter — what matters is that the number exists so it can be calibrated over time. Your first 30 entries will have poorly calibrated percentages; your next 30 will be better because you've seen your calibration curve.
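To make the six fields concrete, here is a minimal sketch of one entry as a record, assuming Python; the field names and sample values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JournalEntry:
    date: str             # e.g. "2024-03-01"
    decision: str         # what you decided
    reasoning: str        # why, at the time
    prediction: str       # what you expect to happen
    confidence: float     # stated confidence, 0.0-1.0; any number beats none
    state: str            # your current state (tired, rushed, calm, ...)
    came_true: Optional[bool] = None  # filled in later, once the outcome is known

entry = JournalEntry(
    date="2024-03-01",
    decision="Ship the migration this sprint",
    reasoning="Staging tests passed; rollback path exists",
    prediction="No production incidents in the first week",
    confidence=0.8,
    state="calm, well-rested",
)
```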
The Protocol
1. After 30+ journal entries, group entries by stated confidence level. Typical buckets: 50-59%, 60-69%, 70-79%, 80-89%, 90-100%.
2. For each bucket, calculate: what percentage of predictions in this bucket actually came true?
3. Compare each bucket's actual rate to its stated confidence. Perfect calibration: 70% bucket → 70% actual.
4. Identify your calibration pattern: overconfident (actual < stated) is most common; underconfident (actual > stated) is rarer but possible.
5. Apply the correction: if your 80% predictions come true 65% of the time, your personal "80% confident" maps to ~65% real probability. Adjust future decisions accordingly — build more contingency, gather more information, or reduce commitment size for the actual confidence level.
6. Re-calculate every 30 entries. Calibration improves with practice — watching your calibration curve tighten over time is itself motivating.
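A sketch of steps 1-4, assuming Python and entries shaped like the JournalEntry record above (a stated confidence plus a recorded outcome); the bucket edges follow step 1, everything else is illustrative:

```python
from collections import defaultdict

# Bucket edges from step 1: 50-59%, 60-69%, 70-79%, 80-89%, 90-100%.
BUCKETS = [(0.50, 0.59), (0.60, 0.69), (0.70, 0.79), (0.80, 0.89), (0.90, 1.00)]

def calibration_report(entries):
    """Group resolved entries by stated-confidence bucket and compare each
    bucket's actual hit rate to the average confidence stated within it."""
    grouped = defaultdict(list)
    for e in entries:
        if e.came_true is None:          # skip entries whose outcome isn't known yet
            continue
        for lo, hi in BUCKETS:
            if lo <= e.confidence <= hi:
                grouped[(lo, hi)].append(e)
                break

    for (lo, hi), bucket in sorted(grouped.items()):
        stated = sum(e.confidence for e in bucket) / len(bucket)
        actual = sum(e.came_true for e in bucket) / len(bucket)
        label = "overconfident" if actual < stated else "calibrated or underconfident"
        print(f"{lo:.0%}-{hi:.0%}: {len(bucket)} entries, "
              f"stated ~{stated:.0%}, actual {actual:.0%} ({label})")
```

Re-running the same report every 30 entries (step 6) produces the calibration curve you watch tighten over time.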