Principle v1
When large language models express 90%+ linguistic confidence, independently verify claims in consequential domains, because the gap between expressed confidence and actual accuracy represents systematic miscalibration that human trust heuristics cannot detect.
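A minimal sketch of how this principle might be operationalized. All names here are hypothetical: real calibration work would use model log-probabilities or a learned classifier, not keyword matching, and the marker list and domain set are illustrative assumptions, not a vetted taxonomy.

```python
import re

# Hypothetical markers of high linguistic confidence (assumption:
# keyword matching stands in for a proper confidence classifier).
HIGH_CONFIDENCE_MARKERS = [
    r"\bdefinitely\b",
    r"\bcertainly\b",
    r"\bwithout a doubt\b",
    r"\b(?:9\d|100)%\s+(?:sure|certain|confident)\b",
]

# Illustrative list of consequential domains; a real system would
# define these per deployment context.
CONSEQUENTIAL_DOMAINS = {"medical", "legal", "financial"}

def needs_verification(text: str, domain: str) -> bool:
    """Flag a model claim for independent verification when it combines
    high-confidence language with a consequential domain."""
    confident = any(
        re.search(pattern, text, re.IGNORECASE)
        for pattern in HIGH_CONFIDENCE_MARKERS
    )
    return confident and domain in CONSEQUENTIAL_DOMAINS

print(needs_verification("This drug is definitely safe at that dose.", "medical"))  # → True
print(needs_verification("It might help in some cases.", "medical"))  # → False
```

The point of the sketch is the decision rule, not the detector: expressed confidence alone never waives verification in a consequential domain.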
Why This Is a Principle
Derives from Systematic Overconfidence Taxonomy, Knowledge of Cognitive Biases Does Not Reduce Susceptibility, and Bias Blind Spot Asymmetry. Prescribes verification behavior for AI outputs. Specific enough to be actionable, general enough to apply across AI systems.