The same event, four incompatible stories
In 1950, Akira Kurosawa released Rashomon, a film in which a samurai's violent death is recounted by four narrators. Each account is internally consistent, vividly detailed, and flatly incompatible with the others. The woodcutter, the bandit, the wife, and the dead man (through a medium) all describe the same event — yet the stories cannot be reconciled into a single narrative.
The film shook audiences because it violated a deep assumption: that if you gather enough testimony, a single truth will emerge. It doesn't. Not because anyone is lying (though some may be), but because each witness occupied a different position — physical, emotional, social — and their position determined what they could see.
Social scientists later named this the Rashomon effect: the phenomenon where observers of the same event produce substantially different but equally plausible accounts, with no external evidence sufficient to elevate one above the others. The term has since migrated into epistemology, anthropology, psychology, legal studies, and communication theory — anywhere that multiple valid reports of the same reality coexist without resolution.
The previous lessons in this phase introduced three ways to dissolve apparent contradictions: scope disambiguation (the claims operate in different contexts), level disambiguation (the claims operate at different levels of abstraction), and time disambiguation (the claims were true at different moments). This lesson adds the fourth and often the most consequential: perspective disambiguation — the claims are both accurate, observed from different vantage points.
Perspectives are not opinions
There's an important distinction to make immediately. A perspective is not a preference or an opinion. It's a structural position from which observation occurs. Where you stand determines what you can see, what remains occluded, and which features of reality are salient.
The parable of the blind men and the elephant — originating in ancient Indian philosophy and appearing in Buddhist, Hindu, and Jain texts — illustrates this precisely. One man touches the trunk and declares the elephant is like a snake. Another touches the leg and says it's like a tree. A third feels the side and says it's like a wall. Each report is accurate. Each is also radically incomplete. The contradiction between "snake," "tree," and "wall" is not a disagreement about the elephant. It's a consequence of partial access.
The Jain philosophical tradition formalized this insight as anekantavada — the doctrine of "many-sidedness." Reality is too complex for any single perspective to capture entirely. What looks like contradiction is often the predictable result of different observers having access to different facets of the same object.
This is not relativism. The elephant is real. Its shape is determinate. But no single touch-point reveals the whole. Perspective disambiguation is the practice of identifying which part of the elephant each observer is touching before deciding whether they actually disagree.
The philosophical foundation: there is no view from nowhere
Friedrich Nietzsche articulated the strongest version of this principle. In his posthumous notes, he wrote: "There are no facts, only interpretations." This is often read as nihilism — a denial that truth exists. But Nietzsche's actual position is more precise and more useful.
Nietzsche's perspectivism doesn't deny the existence of truth. It denies the existence of a perspective-free vantage point from which truth could be observed in its entirety. Every observation is made from somewhere, by someone, shaped by particular drives, needs, and contextual constraints. There is no "god's-eye view" — no position outside all positions from which the raw, uninterpreted facts present themselves.
The practical implication is not despair but rigor. Nietzsche writes in On the Genealogy of Morals: "The more affects we allow to speak about a matter, the more eyes, different eyes, we know how to bring to bear on one and the same matter, that much more complete will our 'concept' of this matter, our 'objectivity' be." Objectivity, for Nietzsche, isn't the elimination of perspective. It's the multiplication of perspectives — gathering more vantage points, not pretending you have none.
Donna Haraway reached a compatible conclusion from a completely different direction. In her 1988 essay "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," she attacked what she called the god trick — "seeing everything from nowhere." The pretense of a neutral, disembodied observer who surveys reality without position or interest. Haraway argued that this pretense doesn't produce objectivity; it conceals the perspective that's actually operating. Her alternative: "Feminist objectivity means quite simply situated knowledges" — knowledge that is honest about where it comes from.
The combined insight from Nietzsche and Haraway is operationally identical: you get closer to truth by acknowledging and multiplying perspectives, not by pretending you've transcended them.
How social position shapes what you can know
Standpoint epistemology, developed by Sandra Harding, Patricia Hill Collins, and others, extends perspective disambiguation into the social domain. The core claim: knowledge is socially situated, and your position in social structures determines what you can observe and what remains invisible to you.
Patricia Hill Collins, in Black Feminist Thought (1990), described Black women as "outsiders within" — individuals who have experienced enough of dominant institutions to understand their inner workings, while maintaining enough distance to see patterns that insiders cannot. This dual position generates knowledge that is unavailable to someone fully embedded in the dominant perspective.
Harding formalized this as strong objectivity — the argument that starting inquiry from the experiences of marginalized groups doesn't introduce bias; it corrects for the invisible biases baked into the dominant perspective. The conventional view ("we're being objective") is itself a perspective — it just doesn't recognize itself as one.
For epistemic infrastructure, the lesson is concrete: when you encounter contradictory claims about how a system works, how a team functions, or how a decision landed, the people closest to the impact often see things that the people who made the decision cannot. Not because one group is smarter, but because they occupy a different structural position. The contradiction between "this policy works well" and "this policy is causing harm" may dissolve when you ask: works well according to whom?
Theory of mind: the cognitive prerequisite
Perspective disambiguation requires a cognitive capacity that humans develop in childhood and refine (or fail to refine) throughout life: theory of mind — the ability to understand that other people have mental states, beliefs, and observations that differ from your own.
Wimmer and Perner's 1983 false belief task demonstrated when this capacity emerges. A child watches a puppet named Maxi place chocolate in a green cupboard. Maxi leaves. His mother moves the chocolate to a blue cupboard. Maxi returns. The question: where will Maxi look for the chocolate?
Children under four typically answer "the blue cupboard" — where the chocolate actually is. They cannot separate their own knowledge from Maxi's. They know the chocolate moved, so they assume Maxi knows too. Around age four, children begin to pass the task: Maxi will look in the green cupboard, because that's where he believes the chocolate is. The child can now model someone else's perspective as distinct from their own.
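The logic of the false belief task can be sketched as a toy belief-tracking model. This is an illustrative sketch, not anything from the developmental literature: the dictionaries and function names are invented here to show the difference between tracking the world and tracking Maxi's last observation of it.

```python
# Toy model of the Wimmer & Perner false-belief setup.
# A reasoner who tracks each agent's last observation passes the task;
# a reasoner who consults only the current world state fails it.

world = {"chocolate": "green cupboard"}   # Maxi puts the chocolate here
maxi_belief = dict(world)                 # snapshot of what Maxi saw

world["chocolate"] = "blue cupboard"      # mother moves it while Maxi is away

def predict_search(use_theory_of_mind: bool) -> str:
    """Where will Maxi look for the chocolate?"""
    if use_theory_of_mind:
        return maxi_belief["chocolate"]   # model HIS perspective
    return world["chocolate"]             # project your own knowledge onto him

print(predict_search(True))    # "green cupboard" -- the four-year-old's answer
print(predict_search(False))   # "blue cupboard"  -- the younger child's answer
```

The entire developmental shift is the move from the second branch to the first: maintaining a separate record of what each observer has actually seen.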
This is the cognitive foundation for perspective disambiguation. If you can't represent another person's vantage point as different from yours, you can't recognize that their contradictory observation might be valid from where they stand. You'll default to "they're wrong" because you can only see what you see.
The developmental research reveals something uncomfortable: the basic capacity emerges around age four, but its skilled application remains fragile for the rest of life, degrading under cognitive load, emotional stress, and status differentials. Adam Galinsky's research on power and perspective-taking (2006) found that power systematically reduces perspective-taking. Across four experiments, individuals primed with power were less likely to consider what others see, think, and feel. The more authority you have, the harder it becomes to model perspectives other than your own.
This creates a predictable failure pattern in organizations: the people with the most decision-making power are the least likely to recognize when a contradiction is a perspective artifact. They don't need to resolve the contradiction — they can simply override the perspective they don't share.
Perspective disambiguation in practice: the four questions
When you encounter two claims that appear to contradict each other, run them through the perspective disambiguation protocol before deciding who's right:
1. Who is observing? Identify the specific person or group making each claim. Not "people say" or "some believe" — who, specifically, is reporting this?
2. Where are they standing? What is their structural position? What role do they occupy? What's their proximity to the thing being described? What's their history with it?
3. What can they see from there? Given their position, what features of reality are salient to them? What data do they have direct access to?
4. What can't they see from there? Every position has blind spots. The frontend developer can't see the API response time. The backend developer can't see the render latency. Neither is wrong. Both are incomplete.
If both vantage points are real — if both observers are positioned to see what they report — then the contradiction is not between truth and falsehood. It's between partial truths. And the productive response is not to pick a winner but to integrate: what does the world look like when both observations are placed into the same map?
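The four questions can be rendered as a small data structure: each observer is recorded with a vantage point, what they can see, and what they cannot, and "integration" is simply overlaying the partial observations onto one shared map. Everything here (the `Observer` class, the field names, the frontend/backend example from the list above) is an illustrative sketch, not an established tool.

```python
from dataclasses import dataclass, field

@dataclass
class Observer:
    name: str                 # 1. Who is observing?
    vantage: str              # 2. Where are they standing?
    can_see: dict             # 3. What can they see from there?
    blind_spots: list = field(default_factory=list)  # 4. What can't they see?

def integrate(observers):
    """Overlay each observer's partial view into one shared map,
    keeping track of who reported each observation."""
    shared_map = {}
    for obs in observers:
        for key, value in obs.can_see.items():
            shared_map.setdefault(key, {})[obs.name] = value
    return shared_map

frontend = Observer("frontend dev", "browser",
                    {"render_latency_ms": 900}, ["API response time"])
backend = Observer("backend dev", "server",
                   {"api_response_ms": 40}, ["render latency"])

print(integrate([frontend, backend]))
```

Neither observer's report is discarded; the contradiction ("the system is slow" vs. "the system is fast") dissolves once both measurements sit in the same map with their sources attached.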
The multi-agent AI parallel
This pattern is now being replicated in artificial intelligence. Multi-agent AI systems — architectures where multiple independent language models deliberate on the same problem — have demonstrated that productive disagreement between agents improves accuracy and robustness.
Research from MIT (2023) on multi-AI collaboration showed that when multiple AI agents debate and cross-examine each other's reasoning, the resulting answers are more accurate than any single agent's output. The mechanism mirrors perspective disambiguation: each agent has been trained on (or prompted with) different assumptions, emphases, or reasoning strategies. They "see" different aspects of the problem. Their disagreement isn't noise — it's signal about the problem's multi-faceted structure.
The design insight is striking: even when engineers could train a single, maximally capable model, they sometimes get better results from multiple models that disagree and reconcile. Diversity of vantage point isn't a bug to be eliminated. It's an architectural feature that produces more robust conclusions.
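The debate-and-reconcile pattern can be sketched in miniature. The "agents" below are trivial stand-ins for independently prompted language models, and the revision and voting rules are assumptions made for illustration, not the mechanism from the MIT work.

```python
from collections import Counter

def debate(agents, question, rounds=2):
    """Each agent answers, sees the others' answers, and may revise.
    The final answer is the majority position after the last round."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds):
        answers = [agent(question, answers) for agent in agents]
    return Counter(answers).most_common(1)[0][0]

# Stub agents with different "vantage points": one anchors on its own
# prior answer; the others defer when a clear peer majority emerges.
def stubborn(question, peers):
    return "A"

def conformist(question, peers):
    if peers:
        majority, count = Counter(peers).most_common(1)[0]
        if count > len(peers) / 2:
            return majority
    return "B"

print(debate([stubborn, conformist, conformist], "Q"))  # prints "B"
```

Even this toy version shows the structure: no agent is overridden by fiat, disagreement persists through the rounds, and the final answer emerges from reconciling positions rather than from a single privileged view.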
The parallel to human perspective disambiguation is direct. When your team disagrees, you have two options: flatten the disagreement by declaring a winner, or treat it as multi-agent deliberation where each person's perspective contains information that the others lack. The second approach is harder. It's also how you find solutions that no single perspective could generate alone.
The failure mode: collapsing into relativism
Perspective disambiguation has a well-known failure mode, and it needs to be named directly: the slide from "all perspectives are partial" to "all perspectives are equal" to "nothing can be evaluated."
This is the trap that critics of both Nietzsche and standpoint epistemology regularly identify. If every perspective is situated, if there's no view from nowhere, then how do you ever say that one perspective is better than another?
The answer: partial doesn't mean arbitrary. The blind man touching the elephant's trunk is giving you real data about a real part of the elephant. The blind man who hasn't touched the elephant at all is giving you nothing. The person standing in a room claims it's warm — that's their perspectival observation. The person outside the room who guesses it's cold based on no evidence has no perspective to offer.
Perspective disambiguation doesn't eliminate judgment. It changes what you're judging. Instead of asking "who's right?" you ask "what is each person positioned to see, and how do their observations combine?" Some perspectives are better positioned than others. Some have more relevant access. Some are more careful in their observation. The point is not to suspend evaluation — it's to evaluate accurately, which requires understanding what each observer actually has access to.
The connection to steel-manning
This lesson sets up the next one directly. Once you've identified that a contradiction may arise from different perspectives rather than from one side being wrong, the next move is steel-manning — constructing the strongest possible version of each side's case.
Perspective disambiguation tells you why both sides might be valid. Steel-manning (L-0371) gives you the method for testing that hypothesis: build the best version of each claim, from each vantage point, and see whether the contradiction survives in its strongest form.
Together, they form the core of productive contradiction resolution: don't pick a winner prematurely. Understand the positions, build the best cases, and then — and only then — look for integration, synthesis, or the rare genuine contradiction that demands a choice.
The person who disambiguates and the person who doesn't
They encounter the same disagreements. One hears two contradictory claims and asks "who's right?" — then picks the claim that matches their existing beliefs, their authority position, or the louder voice. The other hears two contradictory claims and asks "what is each person positioned to see?" — then integrates. Over time, the first person's model of reality gets narrower. They surround themselves with people who agree, dismiss perspectives that challenge theirs, and grow more confident in an increasingly partial view.
The second person's model gets richer. They see more of the elephant. Not because they're smarter, but because they've learned to treat contradiction as an invitation to multiply perspectives rather than eliminate them.
The question isn't whether you encounter contradictions. You will — daily. The question is whether you treat them as problems to solve by picking a winner, or as data about the multi-perspectival structure of reality. The disambiguation question is always the same: from where is each person observing?