Keep a separate anomaly log with three fields: expected, actual, and which schema predicted wrong
Create a dedicated anomaly log, separate from regular notes, where each entry records what you expected, what happened, and which schema generated the expectation.
Why This Is a Rule
Anomalies mixed into regular notes become invisible — they're scattered across journal entries, meeting notes, and project documents with no way to analyze them as a set. A dedicated anomaly log concentrates prediction errors in one place, enabling the cross-entry pattern detection (Analyze validation records as a set — recurring failure patterns reveal systematic blind spots) that produces the highest-leverage insights.
The three required fields make each entry diagnostic: Expected (what your schema predicted — forces you to articulate the prediction rather than vaguely noting surprise), Actual (what happened — the reality that contradicted the prediction), and Schema (which model generated the expectation — traces the failure to a specific belief rather than leaving it free-floating).
The schema field is the most important and most commonly omitted. Without it, the anomaly log is a list of surprises with no traceability to the belief that was wrong. With it, you can count anomalies per schema — and the schemas accumulating the most anomalies are the ones most in need of review.
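The three-field entry and the per-schema count can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: the entries, schema names, and `Anomaly` class are all hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Anomaly:
    expected: str  # what the schema predicted
    actual: str    # what actually happened
    schema: str    # the specific model that generated the prediction

# Hypothetical log entries for illustration.
log = [
    Anomaly("client approves scope in one call", "client requested two more rounds", "client-X model"),
    Anomaly("migration finishes overnight", "migration ran three days", "deploy-effort model"),
    Anomaly("client signs off on budget", "budget pushed to next quarter", "client-X model"),
]

# Count anomalies per schema: the schemas accumulating the most
# entries are the ones most in need of review.
per_schema = Counter(entry.schema for entry in log)
print(per_schema.most_common(1))  # → [('client-X model', 2)]
```

Any medium works (a spreadsheet column per field does the same job); the point is that the schema field makes the log countable, not just readable.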
When This Fires
- Setting up an epistemic practice for tracking prediction accuracy
- When anomalies are currently scattered across various notes and journals
- After establishing schemas that generate regular predictions
- Complements the rule Accumulate anomalies on a running list — trigger schema review at the count threshold, not at each one (anomaly accumulation with threshold) by providing its logging structure
Common Failure Mode
Logging anomalies without the schema field: "Expected the meeting to go well, it didn't." Which schema predicted "meeting goes well"? Was it your model of this particular client? Your model of meeting structure? Your model of the topic's difficulty? Without tracing to a schema, the anomaly improves nothing specific.
The Protocol
Create a dedicated anomaly log (separate document, note, or database):
1. For each anomaly, record three fields:
   - Expected: "Schema [X] predicted [specific prediction]."
   - Actual: "What actually happened was [specific outcome]."
   - Schema: "The failing schema is [name or statement of the specific model that generated the prediction]."
2. Review the log periodically (monthly) to detect per-schema accumulation (Accumulate anomalies on a running list — trigger schema review at the count threshold, not at each one) and cross-schema patterns (Analyze validation records as a set — recurring failure patterns reveal systematic blind spots).

The log's value compounds over time as the entry count grows.
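The periodic review in step 2 reduces to a simple count-and-compare. A sketch, assuming plain-dict log entries and an arbitrary threshold of 3 (the threshold itself is a choice the companion rule leaves to you):

```python
from collections import Counter

REVIEW_THRESHOLD = 3  # assumed value; tune to your own practice

def schemas_needing_review(log, threshold=REVIEW_THRESHOLD):
    """Return schemas whose anomaly count has reached the review threshold."""
    counts = Counter(entry["schema"] for entry in log)
    return [schema for schema, n in counts.items() if n >= threshold]

# Hypothetical entries for illustration.
log = [
    {"expected": "draft done in a day", "actual": "took a week", "schema": "writing-speed model"},
    {"expected": "edits are minor", "actual": "full rewrite needed", "schema": "writing-speed model"},
    {"expected": "outline survives review", "actual": "outline scrapped", "schema": "writing-speed model"},
    {"expected": "meeting ends on time", "actual": "ran 40 min over", "schema": "meeting-length model"},
]

print(schemas_needing_review(log))  # → ['writing-speed model']
```

Run monthly, this surfaces the schemas to re-examine while leaving low-count ones alone, which is exactly the threshold behavior the companion rule calls for.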