Core Primitive
Understanding what you did right is as valuable as understanding what went wrong.
The team that never studied its wins
In 2003, a software company in Seattle shipped a product that became the fastest-adopted tool in the company's history. Within six weeks, customer engagement metrics exceeded projections by four hundred percent. The product manager received an award. The engineering team was praised in the all-hands meeting. And then — nothing. No one studied why it worked.
The team moved on to the next project, which underperformed. Then the next, which also underperformed. In the post-mortems that followed each disappointing launch, the team meticulously dissected their mistakes: the spec was ambiguous, the timeline was too aggressive, the user research was insufficient. They produced action items, tracked follow-ups, implemented process changes. They became experts at understanding failure.
But when someone asked "Why did that first product succeed?" the answers were vague. "We had good chemistry." "The market was ready." "We got lucky." Three years of post-mortems had produced a detailed taxonomy of what goes wrong. Zero success reviews had produced nothing about what goes right. The team had a failure library and a success mystery.
This pattern is not unusual. It is the default. Most individuals, teams, and organizations treat success as something to celebrate and failure as something to study. The assumption is that failure contains the lessons and success is simply the absence of failure. Get this assumption lodged in your review practice and you will spend years becoming an expert on your weaknesses while remaining a stranger to your strengths.
The asymmetry you have probably never noticed
The previous lesson established that honest reflection requires psychological safety — the ability to look at your failures without self-punishment. That is half the picture. The other half is equally important and far less discussed: you must also be able to look at your successes without dismissal.
Understanding what you did right is as valuable as understanding what went wrong. In many cases, it is more valuable, because knowledge of what works gives you a template for future action, while knowledge of what failed only tells you what to avoid. Knowing that a particular bridge design collapsed under load does not tell you how to build a bridge that stands. Knowing what made a successful bridge stand — the specific engineering choices, materials, load distributions, and environmental factors — gives you a blueprint.
Yet the bias runs deep. Psychological research consistently shows that negative events receive more cognitive processing than positive ones. Roy Baumeister and colleagues documented this in their influential 2001 paper "Bad Is Stronger Than Good," demonstrating that across domains — emotions, relationships, learning, memory — negative experiences command more attention, more analysis, and more behavioral response than positive ones. This negativity bias served our ancestors well: the cost of ignoring a threat was death, while the cost of ignoring an opportunity was merely a missed meal. But in the context of deliberate reflection, this bias creates a systematic distortion. You automatically study your failures and automatically skip your successes. The result is a review practice that is half-blind by design.
What the research says about studying success
Several independent research traditions converge on the same conclusion: studying what works is at least as productive as studying what breaks.
Appreciative Inquiry, developed by David Cooperrider and Suresh Srivastva at Case Western Reserve University in the 1980s, is an organizational development methodology built entirely on this insight. Traditional organizational change models begin with a problem diagnosis: what is broken, why is it broken, how do we fix it? Appreciative Inquiry inverts this. It begins by identifying what is already working — the "positive core" of the organization — and then asks how to amplify those strengths. The method moves through four phases: Discovery (what gives life to the organization at its best), Dream (what the world is calling the organization to become), Design (what the ideal organization would look like), and Destiny (how to sustain the change).
Cooperrider's foundational insight was that organizations — and individuals — already contain the seeds of their best performance. Those seeds are visible in the moments when things go exceptionally well. But because those moments are celebrated rather than studied, the seeds remain invisible. Appreciative Inquiry makes them visible by asking: "Tell me about a time when this organization was at its absolute best. What was happening? What conditions were present? What did people do differently?" The answers to these questions contain more actionable information than a hundred problem-focused post-mortems.
Positive psychology, pioneered by Martin Seligman at the University of Pennsylvania, makes a parallel argument at the individual level. Traditional clinical psychology studied pathology: depression, anxiety, personality disorders, cognitive dysfunction. Seligman argued that this produced a discipline that understood illness but not health — that could describe why people suffer but not why people flourish. His research program shifted the lens to strengths, virtues, and the conditions that produce optimal functioning. The finding that emerged repeatedly across studies was that people who deliberately identify and leverage their strengths outperform people who focus primarily on remediating their weaknesses. Not because weaknesses do not matter, but because strength-based development has a higher ceiling. You can remediate a weakness from terrible to adequate. You can develop a strength from good to exceptional.
Marcus Buckingham, drawing on Gallup's massive database of workplace performance data, made this operational in "First, Break All the Rules" and "Now, Discover Your Strengths." His central finding: the world's best managers do not try to fix their employees' weaknesses. They identify what each person does naturally well and then design the role around those strengths. This is not a motivational platitude but an empirical finding, derived from interviews with over eighty thousand managers across industries. The highest-performing teams are not the ones with the fewest weaknesses. They are the ones that have identified and amplified their strengths.
The U.S. Army's After Action Review (AAR) — one of the most effective organizational learning tools ever developed — explicitly addresses both sides. The AAR protocol asks four questions: What was supposed to happen? What actually happened? Why was there a difference? What can we do about it? Crucially, the third question applies to positive differences as well as negative ones. If the mission went better than planned, the AAR asks why — not to celebrate, but to understand. The Army learned through decades of operational experience that understanding why a mission succeeded was as tactically valuable as understanding why one failed. Both types of understanding feed the same goal: improving future performance.
Why success is harder to analyze than failure
If studying success is so valuable, why does almost everyone default to studying failure instead?
Part of the answer is the negativity bias already mentioned. But there is a deeper structural reason: failure announces itself, and success stays quiet.
When something goes wrong, the evidence is unmistakable. The project missed its deadline. The customer complained. The revenue fell short. The system crashed. Failure produces clear signals that demand explanation. Your brain automatically asks "Why?" because the negative outcome violates your expectations and triggers the cognitive machinery of causal attribution.
When something goes right, there is no such trigger. The project shipped on time. The customer was satisfied. The revenue met targets. These outcomes match your expectations, so your brain does not flag them for analysis. Success feels like the natural order being maintained. Failure feels like an anomaly that needs explanation. This asymmetry is not rational — success is no more "natural" than failure — but it is deeply ingrained.
There is also an attribution problem. The psychologist Fritz Heider, and later Bernard Weiner, documented systematic patterns in how people explain their successes and failures. The tendencies vary by person and context, but two common distortions are relevant here.
The first is self-serving bias in reverse: some people — particularly those prone to impostor syndrome — attribute their successes to external factors (luck, timing, other people's help) and their failures to internal factors (lack of ability, poor preparation). This pattern makes failure seem analyzable ("I need to work harder") and success seem unanalyzable ("I just got lucky"). If you believe your success was luck, there is nothing to study.
The second is the opposite pattern: attributing success to fixed internal traits ("I'm talented") rather than to specific behaviors and decisions ("I prepared thoroughly, structured the presentation around the audience's concerns, and rehearsed the difficult sections three times"). The trait attribution feels good but produces no actionable insight. "Be more talented" is not a strategy. "Rehearse difficult sections three times" is.
A rigorous success review cuts through both distortions. It insists on behavioral specificity: not "I got lucky" or "I'm good at this," but "Here are the specific decisions, preparations, and actions that contributed to this outcome, and here is why they mattered."
The survivorship bias caution
There is one important caveat to the study of success, and it comes from the statistician Abraham Wald.
During World War II, the U.S. military asked Wald to analyze the bullet hole patterns on bombers returning from missions. The goal was to determine where to add armor. The intuitive answer was to armor the areas with the most bullet holes. Wald's answer was the opposite: armor the areas with no bullet holes. The planes returning with holes in the wings and fuselage had survived despite those hits. The planes that were hit in the engines and cockpit never came back. The data from returning planes was biased by survivorship — it only represented the cases that succeeded (in the sense of making it home).
The lesson for success reviews is this: when you study your successes, you must also consider the near-misses and the failures that never made it into your sample. A strategy that worked once does not automatically become a good strategy. It might be a risky strategy that happened to pay off this time. A thorough success review asks not only "Why did this work?" but also "Could this have easily failed? What had to go right that was not under my control? Would this approach succeed again under slightly different conditions?"
Dean Simonton's equal-odds rule reinforces this caution. Simonton, studying creative output across fields from science to music, found that the probability of producing a hit is essentially constant across a creator's career. Prolific creators do not have a higher hit rate — they just produce more total work, which generates more total hits. The implication for success analysis is that you need to study your hits and your misses together. Understanding why a particular project succeeded requires understanding it in the context of similar projects that did not. Otherwise, you risk building a "success theory" that is actually a survivorship artifact — a just-so story that explains one outcome but predicts nothing.
How to conduct a success review
With the research and caveats in place, here is a practical framework for reviewing your successes with the same rigor you bring to reviewing your failures.
Step 1: Select a clear success. Not "things are going pretty well" — a specific outcome that exceeded your expectations or met a meaningful goal. The more concrete the success, the more analyzable it is. "I got promoted" is less useful than "My presentation to the executive team led to the project being funded."
Step 2: Separate what happened from why it happened. Write down the observable facts of the success first, before generating any explanations. What specifically occurred? What were the measurable results? What did other people say or do in response? This step prevents premature narrative construction — the tendency to skip straight to a convenient explanation without first establishing what actually needs to be explained.
Step 3: Identify contributing factors at multiple levels. This is where most success reviews fail. They stop at the first plausible explanation. Push past it. Use these prompts:
- Decisions: What specific choices did you make that contributed? What did you decide not to do that turned out to matter?
- Preparation: What groundwork, research, or practice preceded the success? How far back does the causal chain extend?
- Timing: Were there temporal factors — sequences, deadlines, windows of opportunity — that mattered?
- Relationships: Whose involvement, support, feedback, or introduction was critical?
- Environment: What contextual conditions — market, organizational, personal — enabled this outcome?
- Habits and systems: Which of your ongoing practices contributed without you consciously deploying them?
Step 4: Classify each factor as replicable or contingent. Some factors are within your control and can be deliberately reproduced: preparation practices, decision frameworks, relationship investments, habitual behaviors. Others are contingent: market conditions, other people's decisions, timing coincidences. The replicable factors are the gold of the success review — they are the raw material for improved future performance. The contingent factors are still worth noting because you can sometimes engineer the conditions that make them more likely.
Step 5: Extract the principle. From the replicable factors, distill a single principle or pattern that captures the success's core logic. "When I give myself two weeks of preparation time instead of three days, the quality of my presentations increases dramatically because I have time to rehearse and incorporate feedback." This principle is now a design constraint for future work: protect preparation time for high-stakes presentations. It is specific, actionable, and derived from evidence rather than theory.
Step 6: Test against the survivorship bias. Ask: "If I did exactly the same thing again under slightly different conditions, would I expect the same result?" If the answer is "not necessarily," identify what else would need to be true. This is the difference between a robust success pattern and a one-time fluke.
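To make the framework concrete, here is a minimal sketch of how one review could be captured as a structured record, written in Python. Everything in it is illustrative: the `Factor` and `SuccessReview` classes, the field names, and the example entries are hypothetical, not part of any established tool.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """One contributing factor, classified per Step 4 of the framework."""
    description: str   # behaviorally specific, e.g. "rehearsed difficult sections three times"
    level: str         # "decision", "preparation", "timing", "relationship", "environment", "habit"
    replicable: bool   # True if within your control and deliberately reproducible

@dataclass
class SuccessReview:
    success: str                   # Step 1: one concrete, specific outcome
    observed_facts: list[str]      # Step 2: what happened, recorded before any explanation
    factors: list[Factor] = field(default_factory=list)            # Step 3
    principle: str = ""                                            # Step 5
    survivorship_notes: list[str] = field(default_factory=list)    # Step 6

    def replicable_factors(self) -> list[Factor]:
        """Step 4's output: the factors you can deliberately reproduce next time."""
        return [f for f in self.factors if f.replicable]

    def contingent_factors(self) -> list[Factor]:
        """Factors outside your control; worth noting, not worth relying on."""
        return [f for f in self.factors if not f.replicable]

review = SuccessReview(
    success="Executive presentation led to the project being funded",
    observed_facts=["Funding approved in the same meeting",
                    "Two executives asked for the deck afterward"],
)
review.factors += [
    Factor("Two weeks of preparation instead of three days", "preparation", True),
    Factor("Rehearsed the difficult sections three times", "preparation", True),
    Factor("Budget cycle happened to open that week", "timing", False),
]
review.principle = "Protect long preparation time for high-stakes presentations."
review.survivorship_notes = ["A hostile question on cost projections was never asked."]

print([f.description for f in review.replicable_factors()])
```

The payoff of the structure is the Step 4 split: `replicable_factors()` yields the raw material for future planning, while `contingent_factors()` and the survivorship notes stay visible as reminders of what was not under your control.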
Building a success pattern library
One success review produces an anecdote. Ten success reviews produce a pattern library.
Over time, as you conduct success reviews alongside your failure post-mortems, you will begin to see recurring themes — the conditions, behaviors, and decisions that appear in your wins again and again. These patterns are your personal operating principles, derived empirically from your own experience rather than borrowed from someone else's advice.
A pattern library might contain entries like:
- "Projects where I define the success criteria before starting outperform projects where I figure out what success looks like along the way."
- "Creative work produced in morning blocks before email is consistently stronger than creative work produced in afternoon slots after meetings."
- "Decisions I make after sleeping on them are better than decisions I make under time pressure, even when the time pressure is self-imposed."
- "Collaborations where I explicitly negotiate roles in the first meeting produce less friction than collaborations where roles emerge organically."
None of these patterns are universal truths. They are your truths — empirical findings about how you, specifically, operate at your best. They represent the kind of self-knowledge that no book or course can provide because it emerges only from sustained, honest observation of your own performance.
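As a rough sketch of how individual reviews aggregate into a library, the snippet below tallies replicable factors across reviews and keeps only the ones that recur. The input format, the factor tags, and the recurrence threshold of two are all arbitrary assumptions for illustration.

```python
from collections import Counter

# Each review reduced to its replicable factors as short normalized tags.
# In practice these might come from the SuccessReview records sketched earlier.
reviews = [
    {"success": "funded presentation", "replicable": ["defined success criteria before starting",
                                                      "protected preparation time"]},
    {"success": "product launch",      "replicable": ["defined success criteria before starting",
                                                      "morning creative block before email"]},
    {"success": "joint whitepaper",    "replicable": ["negotiated roles in the first meeting",
                                                      "morning creative block before email"]},
    {"success": "conference talk",     "replicable": ["protected preparation time",
                                                      "slept on the key decision"]},
]

# A pattern-library entry is any replicable factor that recurs across reviews.
counts = Counter(tag for r in reviews for tag in r["replicable"])
library = {tag: n for tag, n in counts.items() if n >= 2}

for tag, n in sorted(library.items(), key=lambda kv: -kv[1]):
    print(f"{tag}: appeared in {n} of {len(reviews)} wins")
```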
The Third Brain: AI for success pattern analysis
AI is particularly well-suited to success analysis because it can process volumes of data that would be impractical for manual review.
Pattern extraction from journals and reviews. If you maintain a journal, daily review notes, or after-action reviews, you can periodically feed a batch of entries to an AI assistant with the prompt: "Analyze these entries and identify recurring themes in the ones that describe positive outcomes. What conditions, behaviors, or decisions appear repeatedly when things go well?" The AI can surface patterns across months of data that you would never notice in any single entry because the pattern only emerges at scale.
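A minimal sketch of that batching step follows, assuming a generic `ask_assistant` placeholder rather than any specific vendor API; `build_prompt` and the example entries are likewise hypothetical.

```python
PROMPT = (
    "Analyze these entries and identify recurring themes in the ones that "
    "describe positive outcomes. What conditions, behaviors, or decisions "
    "appear repeatedly when things go well?"
)

def build_prompt(entries: list[str]) -> str:
    # Separate entries clearly so the model can treat them as distinct records.
    joined = "\n\n---\n\n".join(entries)
    return f"{PROMPT}\n\nEntries:\n\n{joined}"

def ask_assistant(prompt: str) -> str:
    # Placeholder: wire this to whatever AI interface you actually use
    # (a chat window, an API client, a local model).
    raise NotImplementedError("Connect to your AI assistant of choice.")

entries = [
    "2025-01-06: Draft finished before lunch. Worked the morning block before opening email.",
    "2025-01-13: Demo went well. Rehearsed the hard section twice the day before.",
]
print(build_prompt(entries))
```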
Attribution analysis. After writing your initial success review, share it with an AI and ask: "What am I attributing to luck that might actually be a pattern? What am I attributing to personal effort that might actually be situational? Where is my causal reasoning weakest?" The AI serves as an attribution auditor — challenging the explanations that feel most comfortable and probing the ones you might be avoiding.
Cross-domain pattern matching. If you have conducted success reviews across different areas of your life — work projects, personal goals, relationship milestones, health achievements — an AI can identify structural similarities that span domains. "Your most successful work projects and your most effective fitness periods share three common features: a clear single metric, a weekly check-in rhythm, and a built-in accountability partner." This kind of cross-domain synthesis is extraordinarily difficult for humans because we tend to keep our mental models domain-specific. AI does not have that constraint.
Counterfactual generation. Ask the AI: "Given these success factors, what is the most likely way this could have failed even with the same preparation?" This generates the survivorship-bias check automatically — surfacing the contingent factors that had to go right and the failure modes that were narrowly avoided.
Success principle refinement. Share your draft success principle with an AI and ask it to stress-test the logic: "Under what conditions would this principle not apply? What are the boundary conditions? When would following this principle actually produce a worse outcome?" This prevents over-generalization — the tendency to take a principle that worked in one context and apply it universally without considering the conditions that made it effective.
The balance between success and failure analysis
This lesson is not arguing that you should stop analyzing failures. Failure analysis is essential. The argument is that you should do both — and that most people are dramatically underweight on the success side.
A useful target ratio: for every failure you analyze, analyze at least one success with equal rigor. This does not mean equal time — a catastrophic failure may warrant deeper investigation than a routine success. But it means equal systematic attention. Every review session should include the question "What went well and why?" with the same seriousness as "What went wrong and why?"
The two types of analysis serve different functions. Failure analysis is defensive: it identifies what to stop doing, what to avoid, what guardrails to install. Success analysis is generative: it identifies what to start doing, what to amplify, what patterns to deliberately reproduce. A review practice that includes both is simultaneously closing failure modes and opening success pathways. Either one alone is incomplete.
Seligman's research on learned helplessness is instructive here. Organisms subjected to repeated negative outcomes they cannot control develop a global pessimism: the belief that nothing they do affects what happens to them. The antidote is not avoiding failure but understanding success: recognizing the specific conditions under which your actions produce positive outcomes. Success analysis is not naive optimism. It is the empirical basis for agency, the evidence-grounded belief that what you do matters and that you can, through deliberate choices, influence your results.
The connection forward
You have now established that honest reflection requires safety, the subject of the previous lesson, and that reflection must encompass successes as well as failures. But there is a dimension of experience that most review practices ignore entirely: the data carried by your energy levels and emotional states during the events you are reviewing.
The next lesson — energy and emotion in reviews — addresses this gap. Your energy was high during the successful project and depleted during the failed one. Your emotional state was confident in one meeting and anxious in another. These are not background noise. They are data — information about what conditions allow you to perform at your best and what conditions degrade your performance. A review practice that ignores energy and emotion is like an engineering analysis that ignores temperature: you have the structural data but not the environmental data, and both are necessary to understand the system's behavior.
Success analysis gives you the what. Energy and emotion analysis gives you the when, where, and under what internal conditions. Together, they produce a complete picture of your performance landscape.
Sources:
- Cooperrider, D. L., & Srivastva, S. (1987). "Appreciative Inquiry in Organizational Life." In R. Woodman & W. Pasmore (Eds.), Research in Organizational Change and Development, Vol. 1. JAI Press.
- Seligman, M. E. P. (2011). Flourish: A Visionary New Understanding of Happiness and Well-Being. Free Press.
- Buckingham, M., & Coffman, C. (1999). First, Break All the Rules: What the World's Greatest Managers Do Differently. Simon & Schuster.
- Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). "Bad Is Stronger Than Good." Review of General Psychology, 5(4), 323-370.
- U.S. Army. (1993). A Leader's Guide to After-Action Reviews. Training Circular 25-20. Department of the Army.
- Wald, A. (1943). "A Method of Estimating Plane Vulnerability Based on Damage of Survivors." Statistical Research Group, Columbia University. Reprinted in 1980 by the Center for Naval Analyses.
- Simonton, D. K. (1997). "Creative Productivity: A Predictive and Explanatory Model of Career Trajectories and Landmarks." Psychological Review, 104(1), 66-89.
- Weiner, B. (1985). "An Attributional Theory of Achievement Motivation and Emotion." Psychological Review, 92(4), 548-573.
- Heider, F. (1958). The Psychology of Interpersonal Relations. Wiley.
- Seligman, M. E. P. (1972). "Learned Helplessness." Annual Review of Medicine, 23(1), 407-412.