You assume everyone wants what you want. They don't.
You value honesty. Radical, uncomfortable, unfiltered honesty. So when a colleague softens critical feedback to protect a relationship, you read it as cowardice or political maneuvering. You don't see what's actually happening: they value harmony. Not because they're weak or dishonest, but because their hierarchy of values genuinely places relational care above blunt truth-telling. They're not failing at your value system. They're succeeding at theirs.
This misread happens constantly — in teams, in families, in organizations, in entire political systems. And it persists not because people are stupid or selfish, but because of a specific cognitive distortion: the assumption that your values are the default, and anyone who deviates from them is either uninformed, irrational, or acting in bad faith.
The previous lessons in this phase established that values can be identified through reflection, tested through trade-offs, and distinguished from each other by priority. But there's a critical piece that all of that self-knowledge can miss: the values you've identified in yourself are not universal. Other people have done the same work — consciously or not — and arrived at different answers. Understanding this isn't a nicety. It is the difference between collaboration and chronic misunderstanding.
The false consensus effect: you think you're normal
In 1977, Lee Ross, David Greene, and Pamela House published a study that introduced one of the most replicated findings in social psychology: the false consensus effect. Across four experiments, they demonstrated that people consistently overestimate how many others share their choices, attitudes, and values — and perceive those who disagree as more extreme, more unusual, and more revealing of some underlying character flaw.
In one experiment, participants were asked whether they'd be willing to walk around campus wearing a large sandwich board sign reading "EAT AT JOE'S." Those who agreed estimated that 62% of their peers would also agree. Those who refused estimated that only 33% would agree. Same population. Same question. Completely different assumptions about what's "normal" — driven entirely by what the individual would do themselves.
The mechanism is straightforward: your own values and preferences are your most accessible reference point. When you reason about what others would do, you anchor on yourself and adjust insufficiently. The result is a systematic illusion that the world shares your priorities more than it does.
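The anchor-and-adjust mechanism can be sketched as a toy model. Nothing here comes from Ross's paper — the formula, the weight, and the 48% base rate are illustrative assumptions chosen so the outputs land near the sandwich-board figures:

```python
def estimate_consensus(own_choice: bool, true_rate: float,
                       anchor_weight: float = 0.3) -> float:
    """Toy anchor-and-adjust model: start from your own choice (the anchor)
    and adjust toward the population base rate -- but only partially."""
    anchor = 1.0 if own_choice else 0.0
    return anchor_weight * anchor + (1 - anchor_weight) * true_rate

# Suppose 48% of the population would actually agree to wear the sign.
# Agreers and refusers anchor on themselves and land far apart:
print(estimate_consensus(True, 0.48))   # agreers estimate ~64% agreement
print(estimate_consensus(False, 0.48))  # refusers estimate ~34% agreement
```

The point of the sketch: both estimators see the same world, and the gap between their estimates is produced entirely by which side of the choice they anchor on.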
Ross and colleagues found something darker beneath the overestimation: people judged those who made the opposite choice as having more extreme personality traits. You don't just assume others share your values — you pathologize those who don't.
Naive realism: why the disagreement feels like their problem
The false consensus effect is a statistical error — you overestimate agreement. Lee Ross identified a deeper mechanism beneath it: naive realism, which he characterized as "a dangerous but unavoidable conviction about perception and reality."
Ross and Andrew Ward formalized naive realism into three tenets:
- I see the world as it is. My perceptions reflect objective reality, not a subjective construction shaped by my particular history, culture, and values.
- Others will agree with me if they're rational. Anyone exposed to the same information who processes it reasonably should reach the same conclusions I have.
- Those who disagree must be deficient. If someone sees the same situation and reaches a different conclusion, they are either uninformed (they don't have the facts), irrational (they can't process the facts), or biased (they're distorted by ideology, self-interest, or emotion).
Notice the structure: naive realism doesn't just predict that you'll expect agreement. It provides a ready-made explanation for every disagreement that absolves you of needing to update your own position. The other person is ignorant, stupid, or corrupted. The possibility that they are reasoning from a different but equally legitimate set of values never enters the model.
This is why values disagreements feel so different from factual disagreements. If someone says the Earth is flat, you can point to evidence. But if someone says loyalty to their team matters more than publishing an uncomfortable truth, there's no fact that resolves the dispute. You're in value-space, not fact-space. And naive realism keeps you from recognizing that you've crossed the boundary.
The Schwartz map: values are universal in kind, different in priority
If people's values varied randomly, collaboration across differences would be nearly impossible. But the empirical picture is more structured than that — and therefore more navigable.
Shalom Schwartz, working across more than 80 countries with over 70,000 participants, identified ten basic value types that appear across every culture studied: Self-Direction, Stimulation, Hedonism, Achievement, Power, Security, Conformity, Tradition, Benevolence, and Universalism. These ten values are organized in a circular structure where adjacent values are compatible (security and conformity tend to go together) and opposing values are in tension (self-direction opposes conformity; benevolence opposes power).
The critical finding is not the list itself but the structure. The same tensions show up everywhere. A person who prioritizes self-direction (independence, creativity, freedom) will predictably find friction with someone who prioritizes conformity (obedience, self-discipline, politeness) — not because either is wrong, but because these values genuinely pull in opposite motivational directions.
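The circular structure can be sketched in a few lines. This is a simplification under stated assumptions: the ten types are evenly spaced around a circle (Schwartz's fitted coordinates are not evenly spaced), and compatibility is approximated as the cosine of the angular distance:

```python
import math

# The ten value types in their circular order (evenly spaced here
# for illustration, not Schwartz's empirical coordinates).
CIRCLE = ["Self-Direction", "Stimulation", "Hedonism", "Achievement", "Power",
          "Security", "Conformity", "Tradition", "Benevolence", "Universalism"]

def compatibility(a: str, b: str) -> float:
    """Cosine of the angular distance between two value types:
    near +1 for adjacent values, negative for opposing ones."""
    step = 2 * math.pi / len(CIRCLE)
    return math.cos((CIRCLE.index(a) - CIRCLE.index(b)) * step)

print(compatibility("Security", "Conformity"))        # adjacent: ~0.81
print(compatibility("Self-Direction", "Conformity"))  # opposing: ~-0.81
print(compatibility("Benevolence", "Power"))          # opposing: ~-0.81
```

Crude as it is, the sketch captures the practical claim: the friction between two people is roughly predictable from how far apart their dominant values sit on the circle.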
Schwartz's data showed that while the structure of values is universal — every culture recognizes the same basic motivational types and the same conflicts between them — the priorities vary enormously between individuals and between cultures. In some cultures, security and conformity rank at the top. In others, self-direction and stimulation dominate. Neither ranking is objectively correct. They reflect different but coherent answers to the same fundamental human questions: What matters? What should I optimize for? What am I willing to sacrifice?
This is the practical tool. When someone's behavior baffles you, you're probably looking at a different position on the Schwartz circle, not a moral failure. The engineer who insists on following established procedures isn't "bureaucratic" — they're high on security and conformity. The designer who keeps pushing unconventional solutions isn't "difficult" — they're high on self-direction and stimulation. Mapping the difference doesn't require you to adopt their priorities. It requires you to see that their priorities are priorities, not defects.
Moral foundations: why good people disagree about what "good" means
Jonathan Haidt and Jesse Graham's moral foundations theory extends the Schwartz framework into the domain of moral judgment specifically. Across four studies published with Brian Nosek in the Journal of Personality and Social Psychology (2009), they demonstrated that people don't just differ in what they value — they differ in what they consider morally relevant in the first place.
Haidt identified five moral foundations (a sixth, Liberty/Oppression, was added later):
- Care/Harm — sensitivity to suffering, compassion
- Fairness/Cheating — reciprocity, justice, proportionality
- Loyalty/Betrayal — devotion to in-group, self-sacrifice for the team
- Authority/Subversion — respect for hierarchy, social order, tradition
- Sanctity/Degradation — purity, disgust, the sacred
The key finding: liberals consistently prioritize Care and Fairness above the other three foundations, while conservatives weight all five more equally. This isn't a difference in intelligence, education, or information. It's a difference in which moral inputs register as morally relevant. When a conservative says that disrespecting a flag is wrong, they're not failing to be logical — they're responding to a Sanctity/Authority signal that a liberal's moral system doesn't amplify. When a liberal says that any amount of inequality is a moral emergency, they're not being naive — they're responding to a Care/Fairness signal that a conservative's system counterbalances with Loyalty and Authority concerns.
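The "same inputs, different weights" idea can be made concrete with a toy model. The weight profiles and signal values below are illustrative assumptions, not Haidt's measured coefficients — the only claim the sketch inherits from the research is the shape: one profile concentrates weight on Care and Fairness, the other spreads it across all five:

```python
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]

# Illustrative weight profiles (not empirical values):
liberal      = {"care": 1.0, "fairness": 1.0, "loyalty": 0.2,
                "authority": 0.2, "sanctity": 0.2}
conservative = {"care": 0.7, "fairness": 0.7, "loyalty": 0.7,
                "authority": 0.7, "sanctity": 0.7}

def moral_reaction(weights: dict, signals: dict) -> float:
    """How strongly an act registers as wrong: a weighted sum of the
    foundation-level signals the act emits."""
    return sum(weights[f] * signals.get(f, 0.0) for f in FOUNDATIONS)

# A flag-desecration scenario emits mostly sanctity/authority signals:
flag = {"sanctity": 0.9, "authority": 0.6, "care": 0.1}
print(moral_reaction(liberal, flag))       # weak reaction
print(moral_reaction(conservative, flag))  # strong reaction
```

The disagreement in the output isn't about the facts of the act — both profiles receive identical signals. It's about which channels are amplified, which is exactly the structure of the moral-foundations finding.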
Haidt's research reveals the deepest version of the projection problem. It's not just that other people rank the same values differently. It's that other people have additional moral channels that you may not even perceive as moral. You can't understand why someone cares about loyalty to their institution until you recognize that loyalty is, for them, a genuine moral concern — not a rationalization for something else.
The cost of projection: what breaks when you assume shared values
The practical damage of value projection is not abstract. It shows up in specific, predictable failure modes.
In teams: A leader who values autonomy gives their reports maximum freedom — and interprets the team's requests for guidance as weakness or dependency. A leader who values security creates detailed processes and checklists — and interprets pushback from autonomous team members as insubordination. Both leaders are projecting their own hierarchy onto people who didn't sign up for it.
In feedback: You tell a colleague that their presentation lacked "intellectual rigor," expecting this to motivate improvement. For you, intellectual challenge is energizing — you value achievement and stimulation. For them, the feedback lands as a personal attack because they value benevolence and harmony. The same words mean different things in different value systems.
In organizations: A startup founder who values speed and risk-taking hires a senior leader who values thoroughness and reliability. Both are competent. Both are acting with integrity. The founder calls the new hire "slow." The new hire calls the founder "reckless." Neither is wrong — they're operating from genuinely different value hierarchies, and neither has made those hierarchies explicit.
In relationships: You value growth and constant self-improvement. Your partner values stability and contentment. You interpret their contentment as complacency. They interpret your restlessness as dissatisfaction with the relationship. You're both reading the other's behavior through your own value system and finding a flaw that doesn't exist.
Perspective-taking: the skill that makes differences navigable
Recognizing that values differ is necessary but not sufficient. The operational skill is perspective-taking — the cognitive capacity to construct a model of another person's viewpoint, including the values that generate it.
Galinsky, Maddux, Gilin, and White (2008) published a study in Psychological Science that distinguishes perspective-taking from empathy and demonstrates why the distinction matters practically. Across three experiments using negotiation scenarios, they found that perspective-taking — actively considering what the other party thinks and wants — increased negotiators' ability to discover hidden agreements and to create value for both sides. Empathy alone — connecting emotionally with the other person — did not produce the same advantage and sometimes produced worse outcomes.
The mechanism: empathy makes you feel what the other person feels, which can lead to excessive concession or emotional overwhelm. Perspective-taking makes you think about what the other person thinks, which gives you a cognitive map of their priorities without surrendering your own. You can understand that your counterpart values job security above compensation without deciding that you should value the same thing. The model of their values is a tool you hold, not an identity you adopt.
This maps directly to value differences. When you encounter someone whose behavior seems irrational, the perspective-taking move is: "What would this behavior look like if it were rational — just driven by a different set of priorities than mine?" Almost always, you can construct a coherent answer. They aren't irrational. They're optimizing for something you aren't.
From tolerance to genuine collaboration
There is a qualitative difference between tolerating value differences and leveraging them.
Isaiah Berlin's concept of value pluralism provides the philosophical foundation: human values are genuinely plural, sometimes incommensurable, and cannot be reduced to a single ultimate value that resolves all conflicts. There is no meta-value that tells you whether freedom is objectively more important than equality, or whether individual achievement matters more than collective harmony. These are legitimate tensions, not problems with known solutions.
Berlin's pluralism is not relativism. Relativism says no values are better than any others. Pluralism says multiple values are genuinely good, they sometimes conflict, and navigating those conflicts requires judgment rather than formula. You can acknowledge that someone else's values are legitimate without concluding that all value systems are equally valid in all contexts. A team needs both the person who values thoroughness and the person who values speed — not because all values are equal, but because the task itself requires both capacities.
The practical shift is moving from "they're wrong" through "they're different" to "what can their difference contribute that my values alone can't?" A team where everyone values the same things has no internal corrective mechanism. It will optimize brilliantly for its shared values and be blind to everything those values don't measure. A team with genuine value diversity — where the autonomy-oriented people and the security-oriented people can name their differences and negotiate them explicitly — has access to a wider range of solutions.
But this only works if the values are surfaced. The previous lessons in this phase gave you the tools to identify, test, and articulate your own values. This lesson adds the recognition that those same tools need to account for the fact that other people's outputs will look different — and that the difference is the feature, not the bug.
AI as a values-difference simulator
One of the most practical applications of understanding value differences is using AI as a perspective-taking tool. When you're stuck in a disagreement or can't understand someone's position, you can explicitly ask an AI to model the value system you're struggling to comprehend.
"I value radical honesty and can't understand why my colleague softens critical feedback. Model their perspective assuming they place relational harmony above directness. What do they see when they look at my behavior?"
The AI doesn't have values of its own, but it can simulate coherent reasoning from a specified value hierarchy. This makes it a low-stakes practice environment for perspective-taking: you can explore a value system that's foreign to you without the emotional charge of doing it in a live conflict.
The deeper application connects to the epistemic infrastructure this curriculum is building. If you've externalized your own values (Lessons 0621-0633), you can feed them to an AI alongside your model of another person's values and ask it to identify the specific points of tension. The AI becomes a values-conflict map generator — showing you where your priorities and someone else's genuinely diverge, so you can negotiate the real disagreement instead of the phantom one created by naive realism.
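The core of such a values-conflict map doesn't even require an AI. A minimal sketch — the value names and rank-gap heuristic are hypothetical, not from any published instrument — compares two priority orderings and surfaces where they diverge most:

```python
def conflict_map(mine: list, theirs: list) -> list:
    """Rank-difference map between two value orderings (highest priority
    first). Large gaps flag where a disagreement is likely to be about
    values rather than facts."""
    gaps = {v: abs(mine.index(v) - theirs.index(v))
            for v in mine if v in theirs}
    return sorted(gaps.items(), key=lambda kv: -kv[1])

mine   = ["honesty", "achievement", "autonomy", "harmony", "security"]
theirs = ["harmony", "security", "honesty", "autonomy", "achievement"]
print(conflict_map(mine, theirs))
# achievement, harmony, and security show the largest rank gaps (3 each)
```

What an AI adds on top of this arithmetic is the reasoning layer: given the divergent rankings, it can articulate how each side would narrate the other's behavior — the part that's hard to do from inside your own hierarchy.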
This doesn't replace the human skill of perspective-taking. It augments it. The AI gives you a first draft of the other person's value system that you can then test against their actual behavior. Over time, you get better at generating these models yourself. The tool trains the skill it eventually makes less necessary.
The bridge to values-based decisions
Recognizing that others' values differ from yours is not an endpoint — it's a prerequisite. Until you can distinguish your own values from the values you've been projecting onto others, your decision-making is contaminated by a false model of the world.
L-0633 taught you to test your own values through hypothetical trade-offs. This lesson adds a second test: can you describe a situation where someone else's values would produce a better outcome than yours? Not a situation where your values are wrong — a situation where a different priority ordering would navigate the specific constraints more effectively. If you can't generate even one such scenario, you haven't yet separated your values from your identity. You're still in the naive realist position where your values are reality and all alternatives are errors.
L-0635 introduces values-based decision-making — using your clarified values as a compass for difficult choices. But that compass only works if you've accounted for the magnetic interference created by assuming everyone shares your north. When you make a values-based decision that affects other people, you need to know which values are yours and which you've unconsciously attributed to everyone in the room. The difference between a sovereign decision and a projected one is precisely this distinction.
Other people's values are not wrong. They are different. The sooner you build that into your operating model, the sooner your decisions — and your relationships — stop being distorted by a consensus that never existed.