The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Track classification decisions as hypotheses with version history rather than as immutable facts, enabling systematic improvement of category definitions over time.
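A minimal sketch of versioned classification: each assignment is appended to a history with its rationale rather than overwriting the previous one. The `ClassificationRecord` class and its field names are hypothetical, not from any existing library.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ClassificationRecord:
    """A classification treated as a revisable hypothesis, not an immutable fact."""
    item: str
    history: list = field(default_factory=list)  # (category, rationale, timestamp)

    def classify(self, category: str, rationale: str) -> None:
        # Record a new hypothesis without erasing the prior ones.
        self.history.append((category, rationale, datetime.now(timezone.utc)))

    @property
    def current(self) -> str:
        return self.history[-1][0]

record = ClassificationRecord("weekly metrics email")
record.classify("reference", "read-only information")
record.classify("action", "contains a task buried in paragraph 3")
# record.current is now "action"; the earlier hypothesis survives in record.history
```

Because the rationale travels with each revision, reviewing the history of changed minds is how the category definitions themselves get improved.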
Layer facets, tags, or cross-references on top of hierarchical taxonomies to handle items that legitimately span multiple branches, using hierarchy for primary navigation and facets for multi-dimensional access.
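One way to sketch this layering: each item keeps a single hierarchical path for primary navigation plus a set of facet tags for cross-cutting access. The `Item` fields and query helpers below are illustrative names, not an established API.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    path: tuple                               # primary hierarchical location
    facets: set = field(default_factory=set)  # cross-cutting tags

items = [
    Item("redesign brief", ("projects", "website"), {"design", "q3"}),
    Item("invoice template", ("admin", "finance"), {"q3"}),
]

def by_branch(items, prefix):
    """Primary navigation: walk down one branch of the hierarchy."""
    return [i for i in items if i.path[:len(prefix)] == prefix]

def by_facet(items, tag):
    """Multi-dimensional access: cut across branches by shared facet."""
    return [i for i in items if tag in i.facets]
```

A facet query like `by_facet(items, "q3")` crosses branches that the hierarchy keeps apart, without forcing either item out of its single primary home.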
Assign types to fields and objects to restrict valid inputs and operations before execution, making entire categories of error structurally impossible rather than detectable after the fact.
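A small example of the principle in Python, assuming a hypothetical `Money` type: by typing the currency as an enum and checking it in addition, mixing currencies becomes a structural error rather than a silent arithmetic bug.

```python
from enum import Enum
from dataclasses import dataclass

class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass(frozen=True)
class Money:
    amount_cents: int      # integer cents: fractional-cent drift is unrepresentable
    currency: Currency

    def __post_init__(self):
        if not isinstance(self.currency, Currency):
            raise TypeError("currency must be a Currency member, not a raw string")

    def __add__(self, other: "Money") -> "Money":
        if self.currency is not other.currency:
            raise TypeError("cannot add amounts in different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

total = Money(500, Currency.USD) + Money(250, Currency.USD)   # fine
# Money(500, Currency.USD) + Money(250, Currency.EUR) raises TypeError
```

The error category "added dollars to euros" is caught at the operation boundary, before any downstream computation can propagate it.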
Track status as a state within a defined lifecycle rather than as a categorical property, defining valid state transitions and triggers to convert status tracking into executable workflow.
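A lifecycle can be encoded directly as a transition table; anything not in the table is rejected at the moment of the attempted change. The document statuses below are hypothetical stand-ins for whatever lifecycle applies.

```python
# Valid transitions define the lifecycle; anything else is rejected outright.
TRANSITIONS = {
    "draft":     {"in_review"},
    "in_review": {"draft", "approved"},
    "approved":  {"published"},
    "published": set(),                 # terminal state
}

class Document:
    def __init__(self):
        self.status = "draft"

    def move_to(self, new_status: str) -> None:
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition: {self.status} -> {new_status}")
        self.status = new_status

doc = Document()
doc.move_to("in_review")
doc.move_to("approved")
# doc.move_to("draft") would raise ValueError: approved -> draft is undefined
```

The table, not the caller, is the single source of truth for the workflow, so adding a trigger or a new state is a one-line change in one place.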
Type items by priority during batch classification sessions rather than at the moment of action, converting continuous judgment into categorical lookup that preserves cognitive resources for execution.
Define priority types with one-sentence decision rules that specify membership criteria, not descriptions, ensuring that classification can occur without deliberation about tier assignment.
Work through priority types sequentially rather than scanning the full list, consulting only the highest-priority tier that contains items and never reviewing lower tiers until higher tiers are empty.
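The three atoms above can be sketched together: items were typed into numbered tiers during a batch session, and at execution time only the highest non-empty tier is consulted. The tier contents are made-up examples.

```python
from typing import Optional

# Typed during a batch classification session, not at the moment of action.
tiers = {
    1: ["file tax extension"],
    2: ["draft project plan", "reply to vendor"],
    3: ["reorganize bookmarks"],
}

def next_item(tiers: dict) -> Optional[str]:
    """Return the next item from the highest-priority non-empty tier."""
    for level in sorted(tiers):       # highest priority = lowest number
        if tiers[level]:
            return tiers[level][0]    # lower tiers are never even scanned
    return None
```

Execution becomes a categorical lookup: `next_item` never compares items across tiers, which is exactly the deliberation the batch session already paid for.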
Type along both urgency and importance dimensions simultaneously to counteract the mere urgency effect, creating explicit categories for important-but-not-urgent items that would otherwise be chronically deprioritized.
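Typing along two dimensions at once can be as simple as a quadrant function; the quadrant labels below are one common convention, chosen here for illustration.

```python
def quadrant(urgent: bool, important: bool) -> str:
    """Two explicit dimensions, so important-but-not-urgent gets its own bucket."""
    if urgent and important:
        return "do now"
    if important:            # not urgent: the chronically deprioritized quadrant
        return "schedule"
    if urgent:
        return "delegate"
    return "drop"
```

The point of the explicit second dimension is the `"schedule"` branch: a one-dimensional urgency sort would never surface those items at all.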
Balance role type distribution across teams rather than optimizing for individual capability, because team performance depends on covering necessary functions, not on maximizing any single dimension.
Audit classification systems for duplicates, dead categories, overstuffed catch-alls, and ambiguous labels on a regular schedule to prevent compound interest on classification debt.
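Two of the four smells can be detected mechanically from item counts alone; the `audit` function below is a hypothetical sketch (duplicate and ambiguous labels still need human judgment).

```python
def audit(assignments: dict, total: int, catchall_share: float = 0.3):
    """Flag dead categories and overstuffed catch-alls from item counts.

    assignments maps category -> number of items; catchall_share is the
    fraction of all items above which a category looks like a dumping ground.
    """
    findings = []
    for category, count in assignments.items():
        if count == 0:
            findings.append((category, "dead category"))
        elif count / total > catchall_share:
            findings.append((category, "possible catch-all"))
    return findings

counts = {"misc": 40, "projects": 10, "archive": 0}
findings = audit(counts, total=50)
# flags "misc" as a possible catch-all and "archive" as a dead category
```

Running this on a schedule turns classification debt into a visible list instead of compound interest.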
Audit your classification system by identifying what's prominent (operational values), what's absent (neglected values), and what's miscellaneous (blind spots), because category structure reveals value structure.
For high-stakes classifications, explicitly name adjacent categories and assess asymmetric costs of each error direction before committing to action protocols.
Set review triggers for important classifications—time-based, evidence-based, or outcome-based—because categories that persist without verification inherit compounding error from initial misclassification.
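Time-based and evidence-based triggers can both hang off the classification record itself; the field names and thresholds below are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Classification:
    label: str
    classified_on: date
    review_after: timedelta          # time-based trigger
    contradicting_evidence: int = 0  # evidence-based trigger: counter of anomalies

    def needs_review(self, today: date, evidence_limit: int = 3) -> bool:
        expired = today - self.classified_on >= self.review_after
        contradicted = self.contradicting_evidence >= evidence_limit
        return expired or contradicted
```

Either trigger alone forces re-verification, so an early misclassification cannot quietly compound past its review window or past a pile of contradicting observations.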
Calculate your actual prediction accuracy across documented tests rather than relying on felt sense of how often you're right, because subjective confidence systematically exceeds objective accuracy.
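The calculation itself is trivial once predictions are documented as (predicted, actual) pairs; the sample log below is invented data for illustration.

```python
def measured_accuracy(tests: list) -> float:
    """Accuracy over documented (prediction, outcome) pairs.

    The gap between this number and your felt confidence is your miscalibration.
    """
    if not tests:
        raise ValueError("no documented tests: felt sense is all you have")
    hits = sum(1 for predicted, actual in tests if predicted == actual)
    return hits / len(tests)

log = [("rain", "rain"), ("rain", "dry"), ("dry", "dry"), ("rain", "dry")]
measured_accuracy(log)  # 0.5, however confident each call felt at the time
```

The empty-log error is deliberate: without documentation there is no objective number to check subjective confidence against.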
Reclassify when boundary cases become the majority, when two categories collapse into functional synonyms, or when a single category becomes an undifferentiated catchall, because these signal that current categories no longer carve reality at its joints.
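The first signal is measurable if boundary cases are flagged at classification time; a minimal sketch, with the flag format assumed:

```python
def boundary_majority(items: list, threshold: float = 0.5) -> bool:
    """Reclassification signal: boundary cases outnumber clean fits.

    items is a list of (name, is_boundary_case) pairs recorded when each
    item was classified.
    """
    flagged = sum(1 for _, is_boundary in items if is_boundary)
    return flagged / len(items) > threshold
```

Category collapse and catch-all growth are the other two signals; counts of cross-filed and catch-all items support the same kind of threshold check.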
Add categories for values you hold but haven't operationalized, and remove categories that exist by inertia, because classification systems calcify around past values and make un-encoded values invisible.
When using AI tools that classify on your behalf, explicitly examine whose values are embedded in the AI's classification categories, because automated classification at scale makes embedded values operationally invisible.
Design categories around prototypical central examples rather than exhaustive definitional rules, as prototype-based classification matches cognitive processing speed and flexibility better than rule-checking.
Test category systems with boundary cases that resist clean classification, as items at category edges reveal missing dimensions, vague boundaries, and incorrect framing more reliably than central members.
Log items that resist classification rather than forcing them into nearest categories, as accumulation of misfits reveals systematic problems in category structure that individual cases obscure.
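A misfit log can be a plain list plus a frequency check over the recorded reasons; the function names and the threshold of three are assumptions for the sketch.

```python
from collections import Counter
from typing import Optional

misfit_log = []  # items that resisted classification, with the reason they resisted

def classify_or_log(item: str, category: Optional[str], reason: str = ""):
    """Record the item as a misfit instead of forcing a nearest-fit category."""
    if category is None:
        misfit_log.append((item, reason))
        return None
    return category

def misfit_patterns(log, threshold: int = 3):
    """A recurring reason suggests a structural gap, not a string of odd cases."""
    reasons = Counter(reason for _, reason in log)
    return [r for r, n in reasons.items() if n >= threshold]
```

One misfit is noise; three misfits sharing a reason is a missing category or a missing dimension, which is exactly what individual cases obscure.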
Decompose classification into independent facets when items genuinely belong to multiple categories simultaneously, rather than forcing single-parent assignment.
Test each candidate classification dimension by asking whether it enables a question you actually ask and cannot currently answer; dimensions that don't resolve real queries add maintenance cost without retrieval benefit.
Audit what your classification system discards by explicitly listing the information lost in each category compression, then check whether any discarded dimensions matter for decisions the system supports.
Adjust classification granularity when categories either collapse distinctions needed for decisions (over-compression) or preserve distinctions never used for decisions (under-compression).