Core Primitive
Most organizational outcomes — both successes and failures — are products of system design, not individual effort or individual failure. When an organization consistently produces a particular outcome (delayed projects, quality defects, innovation, customer satisfaction), the outcome is a system property, not a personnel property. Blaming individuals for systemic outcomes is not only unfair — it is ineffective, because replacing the individual without changing the system produces the same outcome with a different person. Understanding this shifts the change question from "Who is responsible?" to "What system is producing this outcome?"
The attribution error
When a project fails, the first question is almost always: "Who was responsible?" When a team underperforms, the instinct is to find the underperformer. When quality degrades, the search begins for the person who let standards slip. This instinct — the drive to locate individual causes for organizational outcomes — is deeply human and profoundly misleading.
Social psychologists call it the fundamental attribution error: the tendency to attribute others' behavior to their character rather than their situation. Lee Ross demonstrated that observers consistently overestimate the role of personal disposition and underestimate the role of situational factors in explaining behavior — even when the situational factors are obvious and powerful (Ross, 1977). In organizations, this error produces a persistent and costly misdiagnosis: outcomes that are products of system design are attributed to individual performance, and the resulting interventions (coaching, replacement, punishment) fail because they address the wrong cause.
W. Edwards Deming, the quality management pioneer who transformed Japanese manufacturing, estimated that 94% of organizational problems are attributable to the system, not to the individual workers within it. He called the attribution of systemic variation to individuals "tampering": adjusting individual components of a system in response to variation that the system itself produces. Tampering does not improve the system; it destabilizes it, introducing new variation without addressing the structural causes of the original variation (Deming, 1986).
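Deming illustrated tampering with his funnel experiment. The following sketch simulates a simplified version of that logic (all parameters are hypothetical): a stable process aimed at a target, compared with the same process where an operator compensates for each deviation. In this simplified model, compensating for what is actually system noise roughly doubles the output variance.

```python
import random

random.seed(42)

def run(tamper: bool, n: int = 100_000, sigma: float = 1.0) -> float:
    """Run a process aimed at target 0 and return the output variance."""
    setpoint = 0.0
    outputs = []
    for _ in range(n):
        x = setpoint + random.gauss(0, sigma)
        outputs.append(x)
        if tamper:
            # Tampering: compensate for the last deviation, even though
            # the deviation was pure system noise, not an assignable cause.
            setpoint -= x
    mean = sum(outputs) / n
    return sum((v - mean) ** 2 for v in outputs) / n

stable = run(tamper=False)   # variance close to sigma^2 = 1.0
tampered = run(tamper=True)  # variance close to 2 * sigma^2 = 2.0
```

Left alone, the process varies as much as its noise; "helped" by per-observation adjustment, it varies twice as much. The intervention adds variation instead of removing it.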
The system is the cause
A system, in organizational terms, is the collection of structures, processes, incentives, information flows, and constraints that shape how work gets done. Every organizational outcome is produced by the interaction of people operating within a system. The system determines the range of outcomes that are likely — individual effort determines where within that range a particular outcome falls.
Consider a manufacturing line that produces defects at a 5% rate. Replacing the workers on the line might shift the rate to 4.5% or 5.5%, but it will not produce a 0.5% defect rate — because the 5% rate is a system property produced by the machinery, the materials, the process design, the quality control procedures, and the training protocols. To achieve a 0.5% rate, you must change the system. The workers are not the cause of the defects; the system is.
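The arithmetic of this example can be made concrete with a small simulation (the rates and worker effects below are hypothetical illustrations): worker skill moves the defect rate within a narrow band around the system's 5%, while only a change to the system rate itself reaches 0.5%.

```python
import random

random.seed(7)

SYSTEM_RATE = 0.05  # set by machinery, materials, and process design

def defect_rate(worker_skill: float, system_rate: float = SYSTEM_RATE,
                units: int = 100_000) -> float:
    """Each unit is defective with probability system_rate plus a small
    worker effect; worker_skill in [-1, 1] shifts the rate by at most
    +/- 0.5 percentage points (a hypothetical magnitude)."""
    p = system_rate - 0.005 * worker_skill
    defects = sum(random.random() < p for _ in range(units))
    return defects / units

worst_worker = defect_rate(worker_skill=-1.0)   # ~5.5%
best_worker = defect_rate(worker_skill=+1.0)    # ~4.5%
redesigned = defect_rate(worker_skill=0.0,
                         system_rate=0.005)     # ~0.5%
```

Swapping the worst worker for the best moves the rate by a percentage point; changing the system moves it by an order of magnitude.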
This principle extends far beyond manufacturing. A software team that consistently delivers late is probably not staffed by lazy engineers — it is operating within a system where requirements change mid-sprint, estimation is done without historical data, dependencies are not tracked, and scope is negotiated by salespeople who are incentivized by deal size rather than delivery feasibility. An organization with high turnover probably does not have a recruitment problem — it has a system that makes staying less attractive than leaving: unclear career paths, inconsistent management, compensation that does not match the market, or workloads that produce burnout.
James Reason's research on organizational accidents formalized this insight in the "Swiss cheese model": accidents occur not because of a single individual failure but because multiple system defenses fail simultaneously. Each defense has holes (like a slice of Swiss cheese), and when the holes align across multiple layers, the failure propagates through the system. The solution is not to blame the person at the final layer — it is to strengthen the system's defenses at every layer (Reason, 1990).
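The Swiss cheese model can be expressed as simple arithmetic. Assuming, as a simplification, that layers fail independently, the accident probability is the product of the per-layer failure probabilities, which is why strengthening every layer beats strengthening only the final one. The figures below are hypothetical:

```python
from math import prod

# Hypothetical defenses: each layer independently misses a hazard
# 10% of the time. An accident requires every layer to miss.
layers = [0.10] * 4

baseline = prod(layers)                      # 0.1^4 = 1e-4
fix_last = prod([0.10, 0.10, 0.10, 0.05])   # halving only the final layer
fix_all = prod([0.05] * 4)                  # halving every layer
```

Halving the final layer's failure rate halves the accident probability; halving every layer's failure rate cuts it sixteenfold. The leverage is in the system's layered defenses, not in the last person to touch the failure.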
Reading the system
If systems create outcomes, then the ability to read the system — to see the structures, processes, and incentives that are producing the current outcomes — is the most important diagnostic skill a leader can develop.
Follow the incentives. People respond to incentives — not the incentives the organization says it values, but the incentives the system actually delivers. If the organization says it values collaboration but promotes individuals based on individual metrics, the system incentivizes competition regardless of the stated values. If the organization says it values quality but measures teams on delivery speed, the system incentivizes cutting corners. The incentive structure is the most powerful predictor of behavior — more powerful than culture talks, values posters, or leadership speeches.
Map the information flows. Decisions are only as good as the information available to the decision-maker. If critical information is siloed — if the people making decisions do not have access to the information they need — the system is designed to produce poor decisions. If feedback loops are slow — if the consequences of a decision are not visible to the decision-maker until months later — the system is designed to produce uncorrected errors.
Identify the constraints. Every system has bottlenecks — points where the flow of work narrows and slows. Eliyahu Goldratt's Theory of Constraints demonstrated that the performance of any system is determined by its tightest bottleneck: improving any part of the system other than the bottleneck produces no improvement in overall performance (Goldratt, 1984). The constraint is not a person — it is a structural limitation that must be addressed structurally.
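Goldratt's claim can be seen in a toy pipeline model (the stage names and capacities are hypothetical): throughput is the minimum stage capacity, so improving any stage other than the constraint changes nothing.

```python
# Hypothetical delivery pipeline, capacities in units per week.
stages = {"design": 12, "build": 5, "test": 9, "deploy": 20}

def throughput(capacities: dict) -> int:
    # The system moves no faster than its tightest stage.
    return min(capacities.values())

base = throughput(stages)                          # 5: "build" is the constraint
faster_test = throughput({**stages, "test": 18})   # still 5: no effect
faster_build = throughput({**stages, "build": 8})  # 8: only the constraint matters
```

Doubling the capacity of a non-constraint stage leaves overall throughput untouched; a modest improvement at the constraint lifts the whole system.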
Trace the feedback loops. Systems are maintained by feedback loops — reinforcing loops that amplify certain behaviors and balancing loops that constrain others. A system where success is rewarded and failure is punished has a reinforcing loop that produces conservative behavior (because the downside of failure is larger than the upside of success). A system where experimentation is rewarded regardless of outcome has a reinforcing loop that produces innovative behavior (because the upside of trying is positive regardless of result).
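The conservative-behavior loop follows from simple expected-value arithmetic. In this hypothetical sketch, an experiment succeeds 40% of the time; when failure is punished more heavily than success is rewarded, the rational response is not to try.

```python
def expected_payoff(p_success: float, reward: float, penalty: float) -> float:
    """Expected value of attempting an experiment: the upside weighted by
    its probability, minus the downside weighted by its probability."""
    return p_success * reward - (1 - p_success) * penalty

# Hypothetical payoffs in arbitrary "career points":
blame_culture = expected_payoff(0.4, reward=1.0, penalty=3.0)   # -1.4
experiment_culture = expected_payoff(0.4, reward=1.0, penalty=0.0)  # +0.4
```

In the blame culture the expected value of trying is negative, so rational people stop trying; remove the penalty for good-faith failure and the same experiment becomes worth attempting. The behavior changes without any change in the people.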
The personnel trap
The most common and most costly organizational mistake is the personnel response to a systemic problem: replacing the person instead of fixing the system. This response is seductive for three reasons.
First, it is visible. Firing a manager, restructuring a team, or hiring a new leader produces a visible change that signals action. Changing an incentive structure, redesigning an information flow, or adjusting a process constraint is invisible to most stakeholders. Leaders under pressure to "do something" gravitate toward the visible response even when the invisible response would be more effective.
Second, it is fast. Personnel changes can be made in days. System changes take months to design, implement, and verify. Organizations that are impatient for results choose the fast response even when the slow response would be more durable.
Third, it satisfies the attribution error. Replacing the "responsible" individual confirms the intuitive diagnosis that the problem was a people problem. The confirmation feels right even when it is wrong — and the rightness is only challenged when the replacement produces the same outcome, by which time the organization has moved on to blaming the replacement.
The personnel trap is self-perpetuating. Each replacement produces a brief honeymoon period in which the new person's energy and attention temporarily improve outcomes, confirming the diagnosis. But the improvement fades as the system reasserts its structural influence, and the cycle begins again: blame, replace, brief improvement, reversion, blame.
Systems thinking in practice
Shifting from individual attribution to systems thinking requires a disciplined change in diagnostic habits.
When an outcome is bad, ask "What system produced this?" before asking "Who caused this?" This is not about eliminating accountability — it is about locating accountability correctly. If the system produced the outcome, the accountability belongs to whoever designed, maintains, or tolerates the system.
When an outcome is good, ask "What system produced this?" before asking "Who delivered this?" This is equally important but less intuitive. If a team delivers exceptional results, understand the system that enabled those results before attributing the success to individual heroism. Individual heroism is not scalable; system design is.
When you want to change an outcome, ask "What system change would make the desired outcome the default?" rather than "Who should I motivate or replace?" The most powerful system changes make the desired behavior easier and the undesired behavior harder — without relying on individual motivation, which is variable and exhaustible.
The Third Brain
Your AI system can help you see systems that are invisible to the people operating within them. Describe an organizational outcome you want to understand — a pattern of behavior, a recurring problem, a persistent gap between intention and result — and ask: "Map the system that is producing this outcome. What are the structural elements (roles, reporting lines, resource allocation), process elements (workflows, approval chains, handoff points), incentive elements (metrics, rewards, consequences), and information elements (who knows what, when, and how) that combine to produce this result? Which elements are the strongest drivers of the outcome?"
The AI can also help you test system-change hypotheses: "If we changed [specific system element], what would be the likely effect on the outcome? What unintended consequences should we anticipate? What other system elements might need to change simultaneously to support the intended effect?"
From attribution to design
The shift from "Who caused this?" to "What system produced this?" is not merely a diagnostic improvement. It is a fundamental change in how the organization relates to its own outcomes. Individual attribution produces a culture of blame, defensive behavior, and risk aversion. Systems thinking produces a culture of curiosity, structural improvement, and adaptive capacity.
The next lesson, Change the system to change the outcomes, takes the practical step: if systems create outcomes, then changing the system is how you change the outcomes. Not by asking people to try harder, not by replacing people, not by adding layers of oversight — but by redesigning the system so that the desired outcome is the natural product of the system's operation.
Sources:
- Ross, L. (1977). "The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process." Advances in Experimental Social Psychology, 10, 173-220.
- Deming, W. E. (1986). Out of the Crisis. MIT Press.
- Reason, J. (1990). Human Error. Cambridge University Press.
- Goldratt, E. M. (1984). The Goal: A Process of Ongoing Improvement. North River Press.
Frequently Asked Questions