Ask 'why do these experts disagree' not 'who is right' — the structural source of disagreement is more informative than either conclusion
When experts disagree, ask 'why do they disagree?' rather than 'who is right?' to surface the structural sources of the disagreement: different methodologies, populations, or outcome measures.
Why This Is a Rule
"Who is right?" frames expert disagreement as a jury trial where you must pick a winner. "Why do they disagree?" frames it as a diagnostic investigation where the disagreement structure itself is the most valuable output. The question shift is small but the information yield is dramatically different.
Expert disagreements almost always have structural sources: different methodologies (randomized trials vs. observational studies), different populations (Western undergraduates vs. global samples), different outcome measures (short-term symptom relief vs. long-term wellbeing), different theoretical priors (Bayesian vs. frequentist), or different time horizons. Identifying which structural factor drives the disagreement tells you more than either expert's conclusion alone — it tells you the conditions under which each conclusion holds.
This is the complement to the rule "When credentialed experts contradict each other, treat it as a map of genuine uncertainty — not a problem requiring you to pick a winner" (treating disagreement as uncertainty mapping). That rule says "don't pick a winner." This rule says "instead, diagnose the disagreement's structure," giving you a constructive action rather than just a restraint.
When This Fires
- When encountering contradictory expert opinions on the same question (extends When credentialed experts contradict each other, treat it as a map of genuine uncertainty — not a problem requiring you to pick a winner)
- When reading competing research findings and wanting to understand rather than judge
- When someone asks "which expert should I believe?" — reframe the question
- During evidence review when two credible sources reach opposite conclusions
Common Failure Mode
Diagnosing the disagreement as "one of them is biased" and stopping there. Bias attribution feels like structural analysis but usually functions as a way to dismiss the expert you disagree with. Genuine structural investigation asks: "Even if both experts are acting in good faith with competent methodology, why would they reach different conclusions?" The answer is almost always in the methodology, population, or measures — not in character flaws.
The Protocol
1. When encountering expert disagreement, resist the urge to evaluate who's right.
2. Instead, ask: "What structural factors could produce this disagreement between competent, good-faith experts?"
3. Check the common sources: different methodologies? Different populations studied? Different outcome measures? Different time horizons? Different theoretical frameworks?
4. For each candidate source, ask: "If this is the driver, when would Expert A be right and when would Expert B be right?"
5. You now have a conditional understanding rather than a binary judgment: "Expert A's conclusion holds when [conditions], Expert B's holds when [different conditions]." This is almost always more useful than picking a winner.
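The protocol's output can be represented concretely as a small data structure. This is a minimal sketch (all factor names and example strings are hypothetical, not from the source): each identified structural source maps to the conditions under which each expert's conclusion holds, and non-structural attributions like "bias" are filtered out per the failure mode above.

```python
# Hypothetical sketch: the protocol's output as conditional conclusions,
# not a verdict about who is right.

# The structural sources the rule names as the usual drivers of disagreement.
STRUCTURAL_SOURCES = {
    "methodology",
    "population",
    "outcome_measure",
    "time_horizon",
    "theoretical_framework",
}

def diagnose(candidates):
    """Turn candidate drivers into conditional conclusions.

    `candidates` maps a proposed source of disagreement to a pair:
    (conditions under which Expert A is right,
     conditions under which Expert B is right).
    Non-structural attributions (e.g. "bias") are dropped, since they
    dismiss an expert rather than explain the disagreement.
    """
    return {
        source: {"a_holds_when": a, "b_holds_when": b}
        for source, (a, b) in candidates.items()
        if source in STRUCTURAL_SOURCES
    }

# Hypothetical example: two experts disagree about a therapy's effectiveness.
result = diagnose({
    "outcome_measure": (
        "the question is short-term symptom relief",
        "the question is long-term wellbeing",
    ),
    "bias": ("n/a", "n/a"),  # character attribution: filtered out
})
```

The design choice here is that the function returns a map of conditions rather than a single name, mirroring step 5: the useful artifact is "A holds when X, B holds when Y," not a winner.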