If a belief flips three times without converging, you are reacting to noise
When a belief revises three or more times in a short period without converging, treat this as a diagnostic signal that you are reacting to surface events rather than updating a deeper model.
Why This Is a Rule
Genuine Bayesian updating converges: each new piece of evidence moves your belief closer to the truth, and the movements get smaller as your model improves. If your belief about a topic flips back and forth — "the project will succeed" → "it's going to fail" → "actually it'll be fine" → "no, it's doomed" — the oscillation signals that you're not updating a deep model at all. You're reacting to whichever surface-level event happened most recently.
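The convergence claim can be made concrete with a minimal sketch. All numbers here are illustrative assumptions: a start at 50% confidence and a stream of consistent reports, each twice as likely if the belief is true. Each update moves the posterior less than the one before.

```python
# Illustrative sketch: genuine Bayesian updating on consistent evidence
# converges -- each successive update moves the belief by less.

def bayes_update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio (odds form of Bayes)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

belief = 0.5                                # start uncertain: P(success)
steps = []                                  # size of each belief movement
for _ in range(5):
    updated = bayes_update(belief, 2.0)     # assumed: each report is 2x
    steps.append(abs(updated - belief))     # likelier if the belief is true
    belief = updated

print([round(s, 3) for s in steps])         # movements shrink monotonically
```

The shrinking step sizes are the signature of a model absorbing evidence; a flat or growing step size is the oscillation this rule flags.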
Three revisions without convergence is the diagnostic threshold. One revision is updating. Two might be legitimate evidence-driven shifts. But three in a short period, with the belief not settling closer to either pole, means the "evidence" you're responding to is noise — individual events that feel significant but don't actually contain enough information to justify the magnitude of belief change they're producing.
The underlying problem is usually base rate neglect: each new event gets treated as highly diagnostic when it's actually low-diagnostic (you'd see this event whether the belief is true or false). The fix isn't to stop updating — it's to notice that your updates aren't converging and investigate why.
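Diagnosticity has a precise form: an event's likelihood ratio, P(event | belief true) / P(event | belief false). A sketch with assumed, illustrative numbers shows why a low-diagnostic event (ratio near 1) cannot justify a large swing:

```python
# Sketch: low-diagnostic evidence (likelihood ratio near 1) barely moves
# a belief; high-diagnostic evidence does. Numbers are illustrative.

def bayes_update(prior, likelihood_ratio):
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.30
low = bayes_update(prior, 1.1)    # you'd see this event either way
high = bayes_update(prior, 5.0)   # event is 5x likelier if belief is true

print(f"low-diagnostic:  {prior:.2f} -> {low:.2f}")   # tiny, justified move
print(f"high-diagnostic: {prior:.2f} -> {high:.2f}")  # a real update
```

If your belief swung from 0.30 to 0.70 on a likelihood-ratio-1.1 event, the swing came from you, not from the evidence.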
When This Fires
- Your assessment of a project, person, or strategy keeps flipping between positive and negative
- You notice yourself saying "wait, actually..." for the third time about the same topic
- A belief about a high-stakes situation changes dramatically after each meeting or report
- Your confidence oscillates rather than gradually settling
Common Failure Mode
Treating each flip as a thoughtful revision rather than an oscillation. "I'm just being responsive to new information." Genuine responsiveness converges. If each piece of new information reverses your position rather than refining it, you're being jerked around by individual events rather than building an increasingly accurate model.
The Protocol
When a belief has revised 3+ times in a short period:

1. Stop updating temporarily.
2. Write down the belief trajectory: what you believed at each point and what triggered each change.
3. For each trigger, ask: "Was this genuinely diagnostic? Would I have seen this event regardless of whether my belief is true or false?"
4. If the triggers are low-diagnostic (you'd see them either way), your oscillation is base-rate neglect. Return to the base rate and update only on genuinely diagnostic evidence.
5. If the triggers are high-diagnostic, you may have a genuinely uncertain situation — in which case, hold the uncertainty explicitly rather than oscillating between false certainty in both directions.