Build measurement systems when you design the strategy, not after problems appear — crisis instrumentation measures symptoms, not causes
Build measurement dashboards and leading indicators at the same time you design the strategy they measure, not after problems appear, because instrumentation designed during crisis measures symptoms rather than causes.
Why This Is a Rule
Measurement systems designed during strategy creation capture what the strategy intended to achieve — the causal levers that the strategy is pulling and the outcomes those levers should produce. Measurement systems designed during crisis capture what went wrong — symptoms visible after failure, which may or may not point to actual causes.
The asymmetry exists because strategy design and crisis response engage different cognitive modes. Strategy design is deliberative: you're thinking about mechanisms, causal chains, and leading indicators. Crisis response is reactive: you're looking at whatever metrics are immediately visible and alarming. The metrics you grab during a crisis are the ones that are loudest, not the ones that are most diagnostic.
This is the measurement equivalent of the rule Design pre-commitments when calm to constrain behavior when stressed — never make rules in hot states. Your measurement system is designed best when you're thinking clearly about what matters, not when you're scrambling to understand what went wrong. Concurrent design means your metrics reflect your strategic model. Retroactive design means your metrics reflect your panic model.
When This Fires
- When launching any new initiative, project, or strategy
- When building a new process or workflow — add measurement at the design stage
- When you realize a current initiative has no measurement system — retrofit is better than nothing, but note the limitation
- During project kickoffs when scope and success criteria are being defined
Common Failure Mode
"We'll figure out how to measure it later." Later arrives when something goes wrong and you have no baseline data, no leading indicators, and no causal metrics — only lagging symptoms. You spend weeks building instrumentation retroactively, and the metrics you choose are biased by the specific failure that prompted the crisis rather than the strategic model that should govern measurement.
The Protocol
1. When designing any strategy or initiative, simultaneously design its measurement system. Treat measurement as a deliverable of the design phase, not a follow-up task.
2. Define: what are the causal levers this strategy pulls? What leading indicators will show whether those levers are working? What lagging indicators will confirm overall success or failure?
3. Set baselines before the strategy launches — you need "before" data to evaluate "after."
4. Build the dashboard or tracking system and verify it produces data before the strategy begins executing.
5. If you're joining an initiative already in progress with no measurement, retrofit. But acknowledge that the metrics you choose now are influenced by current conditions rather than original strategic intent. Review them against the original strategy document to check alignment.