Verify AI-detected patterns with the same rigor as your own: counterexamples, 3+ instances, and narrative-imposition testing
When AI identifies patterns in your reflective archive, apply the same verification rigor you would to your own pattern claims: check for counterexamples, demand 3+ genuinely independent occurrences, and test whether a narrative has been imposed on random variation.
Why This Is a Rule
AI language models are pattern-completion machines that will confidently present narratives even when the underlying data is noise. "I notice a pattern of self-sabotage when you're close to achieving goals" sounds insightful — but it might be a narrative the AI imposed on two unrelated entries that mentioned setbacks near milestones. The AI's pattern is as likely to be projection as detection, and its confident tone makes false patterns feel more credible than they deserve.
The equal-rigor requirement prevents a double standard where AI-detected patterns bypass the verification you'd apply to your own pattern claims. The three-pass pattern-spotting method ((1) mark recurrences without interpretation, (2) cluster into pattern types, (3) check against counterexamples before naming) demands counterexample checking and 3+ independent instances for human-detected patterns; AI-detected patterns deserve exactly the same scrutiny. The AI's advantage is search breadth (it can scan your entire archive faster); its disadvantage is that it will pattern-match on anything, including noise.
The "narrative imposition" test is AI-specific: language models are trained on human narratives and will naturally construct story arcs (rising action, conflict, resolution) even from random journal entries. Ask: "If I scrambled the dates on these entries, would this pattern still appear?" If the pattern depends on temporal ordering that the AI assumed but wasn't in the data, it's narrative imposition.
When This Fires
- After AI has analyzed your reflective data and reported patterns, e.g. via the companion practices of using AI to analyze routine execution logs for deviation-failure correlations, or sharing completed reflective writing with AI and asking "What assumptions am I making that I haven't examined?"
- When an AI-identified pattern feels compelling and you're tempted to act on it immediately
- When AI analysis of personal data produces "insights" you hadn't considered
- Complements three-pass pattern analysis ((1) mark recurrences without interpretation, (2) cluster into pattern types, (3) check against counterexamples before naming) by adding an AI-specific verification layer
Common Failure Mode
Uncritical AI acceptance: "The AI found a pattern of conflict avoidance — it must be true since it analyzed 200 entries." The AI found a narrative it could construct from the data. Whether it's a real pattern or a well-told story requires the same verification you'd apply to your own initial impressions.
The Protocol
1. When AI reports a pattern, write it down as a hypothesis, not a finding.
2. Count instances: how many independent entries support this pattern? Demand 3+ genuinely separate occurrences (not the same event described in multiple entries).
3. Check counterexamples: search for entries that contradict the pattern. If counterexamples exist, refine or discard the pattern.
4. Test for narrative imposition: "Is the AI constructing a story arc that my data doesn't actually contain?" Check whether the pattern depends on temporal ordering or causal connections the AI assumed.
5. Only promote to "validated pattern" if it survives all three checks (steps 2-4). An AI-detected pattern that survives verification is valuable; one that doesn't was a compelling-sounding false positive.
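A minimal sketch of the protocol as a checklist data structure, assuming you track hypotheses programmatically; the `PatternHypothesis` name and its fields are illustrative, not part of the rule.

```python
from dataclasses import dataclass, field

@dataclass
class PatternHypothesis:
    """An AI-reported pattern recorded as a hypothesis (step 1), not a finding."""
    claim: str
    supporting_entries: list = field(default_factory=list)  # step 2: genuinely separate occurrences
    counterexamples: list = field(default_factory=list)     # step 3: entries contradicting the claim
    survives_date_scramble: bool = False                    # step 4: narrative-imposition test result

    def is_validated(self) -> bool:
        """Step 5: promote only if the hypothesis survives all three checks."""
        return (
            len(self.supporting_entries) >= 3   # 3+ independent instances
            and not self.counterexamples        # no unresolved counterexamples
            and self.survives_date_scramble     # not a story arc imposed on the data
        )
```

Anything that fails `is_validated()` goes back for refinement or gets discarded, matching the refine-or-discard branch in step 3.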