When AI outputs contradict your examined analysis, evaluate them by asking whether they present unconsidered evidence or identify verifiable reasoning errors, treating model fluency as orthogonal to epistemic authority.
Why This Is a Principle
Derives from three prior principles: Meaning as Receiver Construction (meaning is constructed by receivers), The Performance of an Agent Is Bounded by the Accuracy of Its World Model, and No External Entity Has More Right to Direct Your Thinking. The principle prescribes specific criteria for evaluating AI outputs: unconsidered evidence or identified reasoning errors, not fluency. This is the modern test of calibrated self-trust.
Source Lessons
Self-authority requires self-trust
You cannot exercise authority over your thinking if you do not trust your own cognitive processes. Self-trust is the emotional foundation of self-authority.
Sovereign thinking is the foundation of a self-directed life
Everything that follows in this curriculum — values, boundaries, commitments, priorities, purpose — depends on the foundational claim that you have the right and responsibility to direct your own mind. Sovereign thinking is not the end. It is the beginning of self-directed living.