Your decisions deserve an autopsy
You make hundreds of decisions every month. Some are small -- what to eat, which email to answer first. Some are significant -- who to hire, which market to enter, whether to end a relationship, how to allocate your time for the next quarter. Of all these decisions, how many do you formally review after the outcome is known?
For most people, the answer is zero.
This is a structural failure in your cognitive infrastructure. You have a decision-making system -- everyone does, whether they have named it or not. But you almost certainly have no systematic way to evaluate whether that system is working. You ship decisions into the world and never check whether the manufacturing process was sound. You are running a factory with no quality control.
Post-decision review is the practice of examining your decisions after outcomes are known -- not to judge yourself, but to judge your process. It is the mechanism by which your decision frameworks actually improve over time. Without it, you are stuck repeating the same reasoning patterns forever, mistaking luck for skill and bad luck for bad judgment.
The military invented this: after-action reviews
The most rigorous post-decision review system in widespread use was developed by the U.S. Army. The After-Action Review (AAR) was introduced in the mid-1970s, originating from work by military historian S.L.A. Marshall, who pioneered techniques for capturing lessons from combat during World War II. By the 1980s, AARs were embedded in major training exercises like REFORGER (Return of Forces to Germany), and by the 1990s they had become formal Army doctrine.
The Army defines an AAR as "a professional discussion of an event, focused on performance standards, that enables Soldiers to improve on weaknesses." The structure is deliberately simple. Four questions:
- What was supposed to happen? (The plan, the intent, the expected outcome.)
- What actually happened? (The facts, as objectively as possible.)
- Why was there a difference? (Root cause analysis of the gap.)
- What will we do differently next time? (Concrete adjustments.)
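The four questions translate directly into a reviewable record. A minimal sketch in Python -- the field names and example content are illustrative, not Army doctrine:

```python
from dataclasses import dataclass, field

@dataclass
class AfterActionReview:
    """Minimal AAR record mirroring the four questions."""
    intended: str                                         # What was supposed to happen?
    actual: str                                           # What actually happened?
    gap_causes: list[str] = field(default_factory=list)   # Why was there a difference?
    adjustments: list[str] = field(default_factory=list)  # What will we do differently?

    def is_actionable(self) -> bool:
        # A review that produces no concrete adjustment has not finished.
        return len(self.adjustments) > 0

aar = AfterActionReview(
    intended="Ship the feature by Friday with zero critical bugs",
    actual="Shipped Monday; one critical bug found in staging",
    gap_causes=["Underestimated integration testing time"],
    adjustments=["Add a 20% buffer to estimates involving third-party APIs"],
)
```

The `is_actionable` check encodes the doctrine's emphasis on output: the review exists to produce adjustments, not narrative.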
What makes AARs powerful is not the questions -- those are obvious. It is the culture surrounding them. The Army's AAR doctrine explicitly flattens hierarchy during the review. A Corporal's observation carries the same weight as a Captain's. The review happens as soon as possible after the event, while memory is fresh and before narrative smoothing has a chance to rewrite what actually occurred. And the output is not blame -- it is adjustment. The AAR produces specific changes to doctrine, training, or standard operating procedures.
This last point matters most. A review that produces emotional catharsis but no procedural change is theater. The purpose of post-decision review is not to feel better or worse about what happened. It is to update your operating system.
The trap you will fall into: resulting
Here is where most people go wrong when they try to review their own decisions. They look at the outcome and work backward to judge the decision. If the outcome was good, the decision was good. If the outcome was bad, the decision was bad.
Annie Duke, a former professional poker player and decision strategist, calls this resulting -- the tendency to evaluate the quality of a decision based on the quality of the outcome. In Thinking in Bets (2018), Duke argues that this is one of the most pervasive and damaging cognitive errors in human reasoning.
Consider: you can make an excellent decision that produces a terrible outcome. You research a stock thoroughly, the fundamentals are strong, the valuation is reasonable, you size the position appropriately -- and then a global pandemic crashes the market three weeks later. Your decision process was sound. The outcome was bad. If you evaluate the decision by the outcome, you learn the wrong lesson: "I should not trust my analysis." The correct lesson might be: "My analysis was fine; I need a better model for tail risks."
The reverse is equally dangerous and far more insidious. You can make a terrible decision that produces a wonderful outcome. You skip due diligence on an investment, you go with your gut, you ignore contradicting evidence -- and it works out anyway because the market was in a bubble that lifted everything. If you judge the decision by the outcome, you learn a catastrophically wrong lesson: "My instincts are reliable." Some of your worst decision habits are hiding behind lucky outcomes that never got examined.
Duke's core principle is direct: "What makes a decision great is not that it has a great outcome. A great decision is the result of a good process, and that process must include an attempt to accurately represent our own state of knowledge."
This means your post-decision review must evaluate two things separately:
- Decision quality: Was your process sound? Did you use the right framework? Did you consider the relevant information? Did you account for uncertainty? Were your assumptions explicit?
- Outcome quality: Did things turn out well or poorly?
These two dimensions create a 2x2 matrix:
| | Good outcome | Bad outcome |
| ---------------- | -------------------------------------------------------- | --------------------------------------------------------- |
| Good process | Deserved success -- reinforce this process | Bad luck -- maintain the process, update your risk models |
| Bad process | Dumb luck -- do not reinforce this, you got away with it | Deserved failure -- fix the process |
The most dangerous quadrant is bad process / good outcome. That is where you build false confidence in broken reasoning. And you will never discover it unless you review decisions that worked out, not just the ones that failed.
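That placement can be mechanized. A toy sketch of the quadrant logic, with the lessons phrased as in the matrix above:

```python
def review_quadrant(good_process: bool, good_outcome: bool) -> str:
    """Place a decision in the process/outcome 2x2 and return the lesson."""
    if good_process and good_outcome:
        return "deserved success: reinforce this process"
    if good_process and not good_outcome:
        return "bad luck: keep the process, update your risk models"
    if not good_process and good_outcome:
        return "dumb luck: do not reinforce this, you got away with it"
    return "deserved failure: fix the process"

# The dangerous quadrant: a win produced by a broken process.
print(review_quadrant(good_process=False, good_outcome=True))
# → dumb luck: do not reinforce this, you got away with it
```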
Hindsight bias will sabotage your reviews
Even when you commit to reviewing decisions, your own memory will conspire against you. Research on hindsight bias -- the "I knew it all along" effect, pioneered by Baruch Fischhoff and popularized by Daniel Kahneman -- demonstrates that once you know an outcome, your brain retroactively adjusts your memory of what you believed before the outcome was known. You genuinely remember being more confident in the result that actually occurred than you were at the time.
Phil Rosenzweig, in The Halo Effect (2007), documented how this plays out in business analysis at scale. When a company is performing well, observers rate its strategy, leadership, culture, and execution as excellent. When the same company's performance declines, they retrospectively judge those same attributes as flawed -- even when nothing actually changed except the financial results. The outcome creates a halo (or horn) that distorts evaluation of everything that preceded it.
This is why decision journals -- which you encountered in L-0449 -- are not just a nice-to-have. They are essential infrastructure for honest post-decision review. When you write down your reasoning, your confidence level, your expected outcomes, and your key assumptions before you know the result, you create a record that your hindsight-biased brain cannot rewrite. Your future self can then compare what you actually thought against what happened, rather than comparing a reconstructed memory against what happened.
Kahneman and his co-authors Olivier Sibony and Cass Sunstein argue in Noise (2021) that organizations should conduct "decision audits" -- systematic reviews of past judgments to identify both bias (systematic errors in one direction) and noise (unwanted variability in decisions that should be consistent). This is post-decision review at institutional scale: not just asking "was this one decision good?" but asking "is our decision-making system producing reliable outputs?"
Blameless postmortems: the engineering parallel
Software engineering has developed its own version of post-decision review that addresses one of the hardest problems in the practice: psychological safety.
Google's Site Reliability Engineering team popularized the concept of the blameless postmortem -- a structured review of an incident that explicitly focuses on systems and processes rather than individual fault. The core principle, documented in Google's SRE handbook, is that the postmortem must "focus on identifying the contributing causes of the incident without indicting any individual or team for bad or inappropriate behavior."
This is not soft. It is strategic. Amy Edmondson's research on psychological safety at Harvard, further validated by Google's Project Aristotle study, demonstrated that teams where members feel safe to admit mistakes and surface problems consistently outperform teams where fear of blame suppresses information. When people are afraid of being punished for bad outcomes, they stop reporting problems, stop volunteering observations, and start hiding errors. The organization's ability to learn from its decisions collapses.
The blameless postmortem asks the same structural questions as the military AAR: What happened? What did we expect? Why the gap? What changes do we make? But it adds a critical cultural layer: we assume that everyone involved had good intentions and acted reasonably given the information available to them. The failure is in the system, not the person.
Apply this to your personal post-decision reviews. When you review a decision that went badly, your instinct will be self-blame: "I should have known better." That framing kills learning. The productive framing is: "Given what I knew at the time, my process produced this decision. What about the process needs to change?" You are debugging your decision system, not prosecuting yourself.
The Deming cycle: review as continuous improvement
W. Edwards Deming formalized the idea that review is not an event but a phase in a continuous cycle. His Plan-Do-Check-Act (PDCA) framework, built on Walter Shewhart's earlier improvement cycle and popularized in the 1950s, treats every action as an experiment and every review as input to the next iteration. You plan an action, you execute it, you check the results against your expectations, and you adjust your approach based on what you learned.
Agile software development adopted this directly. Each Sprint in Scrum is a PDCA cycle: Sprint Planning (Plan), Sprint Execution (Do), Sprint Review and Retrospective (Check), and incorporating lessons into the next Sprint (Act). The retrospective -- "what went well, what did not, what do we change?" -- is not optional. It is architecturally embedded in every cycle.
The lesson for your personal decision-making is structural. Post-decision review should not be something you do occasionally when a decision goes spectacularly wrong. It should be a recurring phase in your decision cycle, as automatic as the decision itself. Every significant decision you make should have a review date built into it from the start. When you decide, you simultaneously decide when and how you will evaluate that decision.
The AI parallel: model evaluation never stops
If the PDCA cycle and blameless postmortems feel abstract, consider how machine learning systems handle the same problem. No serious ML deployment ends at launch. Post-deployment monitoring is a core discipline, and it mirrors post-decision review almost exactly.
When a machine learning model goes into production, engineers track performance metrics continuously. They watch for data drift -- shifts in the distribution of inputs that may cause a model trained on historical data to degrade on new data. They run A/B tests -- deploying competing models to subsets of users and comparing real-world performance, not just theoretical accuracy. They monitor for concept drift -- when the relationship between inputs and outputs changes because the world itself has changed.
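As an illustration of what such monitoring looks like, here is a toy drift check using the population stability index (PSI), one common drift metric. The bin count, the synthetic data, and the 0.25 rule of thumb are illustrative choices, not fixed standards:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index: a measure of distribution shift
    between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(data: list[float], i: int) -> float:
        left, right = lo + i * width, lo + (i + 1) * width
        count = sum(left <= x < right or (i == bins - 1 and x == hi) for x in data)
        return max(count / len(data), 1e-6)  # floor avoids log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1 * i for i in range(100)]       # inputs the model was trained on
shifted = [0.1 * i + 4.0 for i in range(100)]  # live inputs, drifted upward
drift = psi(baseline, shifted)
# Rule of thumb (illustrative): PSI above ~0.25 signals significant drift.
```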
The insight is that even a well-trained model with strong offline metrics can fail in production. The gap between how a model performs on historical test data and how it performs on live data can be large and is hard to predict in advance. The only way to close that gap is to monitor, evaluate, and retrain. Continuously.
Your decision frameworks are models. They were trained on your past experience. They perform reasonably well on the kinds of problems you have encountered before. But the world drifts. New contexts arise. The relationships between your inputs (information, constraints, values) and optimal outputs (decisions) change over time. If you are not running post-decision reviews, you are deploying a model you trained years ago and assuming it still works. You are a machine learning system with no monitoring pipeline.
The A/B testing parallel is particularly instructive. In ML deployment, you do not simply replace an old model with a new one and hope for the best. You run both simultaneously, measure which performs better on real data, and gradually shift traffic to the winner. When you adjust your decision frameworks based on a post-decision review, you can do the same thing: try the updated framework on low-stakes decisions first, compare results against your old approach, and adopt the change only when evidence supports it.
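A sketch of that adoption rule, assuming you score each trial decision's review on a simple numeric scale. The scores, the scale, and the margin are made up for illustration:

```python
from statistics import mean

def compare_frameworks(old_scores: list[float], new_scores: list[float],
                       margin: float = 0.5) -> str:
    """Adopt the updated framework only when low-stakes trial results
    beat the old approach by a clear margin (illustrative threshold)."""
    if mean(new_scores) > mean(old_scores) + margin:
        return "adopt new framework"
    return "keep old framework, gather more evidence"

# Review scores (1-10) from low-stakes trial decisions -- made-up data.
old = [6, 5, 7, 6, 5]
new = [8, 7, 9, 8, 7]
print(compare_frameworks(old, new))  # → adopt new framework
```

The margin matters: like gradual traffic shifting in ML deployment, it keeps you from switching frameworks on noise.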
How to run a post-decision review
Here is a practical protocol you can use immediately. It combines elements from military AARs, Annie Duke's decision quality framework, and blameless postmortem culture.
Step 1: Gather the record. Pull out your decision journal entry from when you made the decision. If you did not journal the decision, reconstruct as best you can -- but recognize that your reconstruction is contaminated by hindsight bias. This is why decision journals exist.
Step 2: Restate what you decided and why. Write down the decision, the framework you used, the alternatives you considered, your key assumptions, and your confidence level. Do this from the journal, not from memory.
Step 3: Document what actually happened. Be specific and factual. Separate observations from interpretations. "Revenue declined 15% in Q3" is an observation. "The strategy failed" is an interpretation.
Step 4: Separate process from outcome. Score your decision quality and your outcome quality independently. Was the process good or bad? Was the outcome good or bad? Place the decision in the 2x2 matrix above.
Step 5: Identify the gap. If there is a gap between what you expected and what happened, ask: Was this gap due to a process failure (wrong framework, missing information, flawed reasoning) or an outcome factor (bad luck, unforeseeable events, new information that emerged after the decision)?
Step 6: Extract framework updates. What specific changes to your decision process does this review suggest? Not vague intentions ("be more careful") but concrete adjustments ("add a pre-mortem step for decisions involving more than $10,000" or "extend my research period from 2 days to 5 days for hiring decisions").
Step 7: Schedule the next review. For ongoing decisions, set a date for your next check-in. Decision review is not a one-time event -- it is a monitoring system.
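The seven steps can be collected into a single review record. A sketch with illustrative fields and a hypothetical example -- the 90-day default review interval is an assumption, not part of the protocol:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DecisionReview:
    """One post-decision review, following the seven-step protocol."""
    decision: str        # Step 2: what was decided and why (from the journal, Step 1)
    expected: str        # Step 2: the outcome you predicted
    observed: str        # Step 3: facts, not interpretations
    good_process: bool   # Step 4: process quality, scored independently
    good_outcome: bool   # Step 4: outcome quality, scored independently
    gap_cause: str       # Step 5: process failure vs. luck
    framework_updates: list[str] = field(default_factory=list)  # Step 6: concrete changes
    next_review: date = field(                                  # Step 7: assumed 90-day default
        default_factory=lambda: date.today() + timedelta(days=90))

    def quadrant(self) -> str:
        # Step 4: placement in the process/outcome 2x2.
        return {
            (True, True): "deserved success",
            (True, False): "bad luck",
            (False, True): "dumb luck",
            (False, False): "deserved failure",
        }[(self.good_process, self.good_outcome)]

review = DecisionReview(
    decision="Hired candidate A after a 2-day search",
    expected="Fully ramped within one month",
    observed="Ramp took three months; two projects slipped",
    good_process=False,
    good_outcome=False,
    gap_cause="Process failure: research period too short",
    framework_updates=["Extend research period to 5 days for hiring decisions"],
)
```

Note that the framework update is concrete ("5 days for hiring decisions"), not a vague intention -- the record forces Step 6 to produce something executable.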
The meta-lesson
Post-decision review is the mechanism by which your decision frameworks evolve. Without it, you are a static system in a dynamic environment. Your frameworks were built on past experience and past assumptions. The world changes. Your values change. Your information changes. Review is how your system stays calibrated.
Gary Klein, the psychologist who developed the pre-mortem method, understood both sides of this temporal equation. The pre-mortem asks "what could go wrong?" before you commit. The post-decision review asks "what did go wrong, and why?" after results are in. Together, they form a complete feedback loop: anticipate failure, act, observe, adjust, anticipate better next time.
The decision speed lesson (L-0457) taught you that sometimes speed matters more than optimality. This lesson adds the necessary counterbalance: speed without review is just recklessness. Decide fast when the situation demands it -- but always review afterward. The review is where the learning lives. Without it, fast decisions stay permanently uncalibrated.
And as you will see in the next lesson (L-0459), this review process itself raises a meta-question: how do you choose which framework to apply to a given decision? That choice -- framework selection -- is itself a decision that deserves its own review. The recursion is the point. Your decision system improves only when every layer of it is subject to examination.