When the leading indicator improves but the outcome does not follow — the metric is broken
When a leading indicator improves but its paired lagging outcome does not follow within the expected timeframe, treat the leading indicator as broken (gamed, confounded, or non-predictive) and replace it.
Why This Is a Rule
Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." A leading indicator that was once predictive can break in three ways: gamed (you've optimized the metric without improving the underlying reality — writing more words per day without improving writing quality), confounded (the leading indicator correlates with the outcome for a reason that no longer holds), or non-predictive (the original link was coincidental and never actually causal).
The diagnostic is divergence: the leading indicator improves but the lagging outcome doesn't follow within the expected timeframe. If you're tracking "deep work hours" (leading) and "projects completed" (lagging), and deep work hours increase for 6 weeks while project completion doesn't improve, the leading indicator is broken — you're measuring something that doesn't actually predict the outcome anymore.
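The divergence check above can be sketched in code. This is a minimal illustration with hypothetical data and thresholds (the `min_weeks` and `flat_tol` values are assumptions, not prescriptions from the text): the leading indicator trends up while the lagging outcome stays flat.

```python
def slope(series):
    """Least-squares slope of evenly spaced weekly observations."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def diverging(leading, lagging, min_weeks=6, flat_tol=0.05):
    """True when the leading indicator rises but the outcome is flat.

    min_weeks: don't judge before the expected lag has elapsed.
    flat_tol: how close to zero the outcome's trend must be to count as flat.
    Both are illustrative defaults.
    """
    if len(leading) < min_weeks or len(lagging) < min_weeks:
        return False  # expected timeframe hasn't elapsed yet
    return slope(leading) > 0 and abs(slope(lagging)) <= flat_tol

# Deep work hours climb for six weeks; projects completed never moves.
deep_work_hours = [10, 12, 14, 16, 18, 20]
projects_done   = [1, 1, 1, 1, 1, 1]
print(diverging(deep_work_hours, projects_done))  # → True
```

The same check returns False while the series are shorter than the expected lag, which matches step (1) of the protocol below: wait out the lag before declaring the indicator broken.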
The response is replacement, not repair. A broken leading indicator can't be fixed by measuring it harder. You need to find a new upstream behavior that actually predicts the outcome, then pair and validate the new indicator.
When This Fires
- When a tracked leading indicator trends upward but the paired outcome is flat
- During monthly metric reviews when validating indicator pairs
- When you feel productive (leading indicators look great) but outcomes aren't materializing
- Any measurement system where activity metrics and outcome metrics diverge
Common Failure Mode
Doubling down on the leading indicator: "I just need to push deep work hours even higher." But the divergence means more of the leading indicator won't produce more of the outcome — the link is broken. Increasing a broken indicator produces more activity without more results, compounding the waste.
The Protocol
When leading and lagging indicators diverge:
1. Confirm the expected timeframe has elapsed (some outcomes lag by weeks or months).
2. If the divergence persists past the expected lag, diagnose the cause: gamed (you're optimizing the metric, not the behavior it was supposed to measure), confounded (conditions changed), or non-predictive (the link was never causal).
3. Replace the leading indicator with a new upstream behavior you believe is more directly causal.
4. Re-validate the new pair over 4-6 weeks: pair every important outcome with 1-2 leading indicators and track both to confirm the link.
Indicator pairs need periodic validation. They're hypotheses about causation, not permanent truths.
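The protocol's decision logic can be sketched as a single function. This is a hypothetical sketch, not a prescribed implementation; the inputs (weeks elapsed, expected lag, whether each metric is improving) are assumed to come from whatever tracking the reader already does.

```python
def next_action(weeks_elapsed, expected_lag_weeks,
                leading_improving, outcome_improving):
    """Return the protocol's next step for one indicator pair."""
    # Step 1: the expected timeframe must elapse before judging divergence.
    if weeks_elapsed < expected_lag_weeks:
        return "wait out the expected lag"
    # No divergence: either the outcome is following, or the leading
    # indicator isn't moving. Keep the pair and re-check at the next review.
    if outcome_improving or not leading_improving:
        return "keep pair; re-validate at next monthly review"
    # Steps 2-4: persistent divergence means diagnose the cause (gamed,
    # confounded, or non-predictive), then replace and re-validate.
    return "diagnose cause; replace leading indicator; re-validate over 4-6 weeks"

print(next_action(3, 6, True, False))  # still inside the expected lag
print(next_action(8, 6, True, False))  # persistent divergence
```

The ordering matters: the "wait" branch comes first so that a pair is never declared broken before its expected lag has elapsed, which is the common false alarm the protocol guards against.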