Automate pattern-based error detection before manual review — reserve human attention for contextual judgment tools can't handle
Deploy automated grammar checkers, linters, or mechanical validation tools before manual review to catch pattern-based errors, reserving human attention for contextual judgment that tools cannot provide.
Why This Is a Rule
Human attention is a scarce, depletable resource. Spending it on errors that machines can detect — spelling mistakes, syntax errors, formatting inconsistencies, threshold violations — wastes the resource on tasks where human judgment adds no value. Worse, attention spent on mechanical errors is attention unavailable for contextual errors that only human judgment can catch: logical inconsistencies, inappropriate tone, strategic misalignment, or subtle reasoning flaws.
The sequencing matters: automated checks before manual review. If the human reviewer encounters mechanical errors first, they consume attention on fixable pattern-based issues and arrive at the contextual review with depleted cognitive resources. If automated tools catch the mechanical errors first, the human reviewer encounters clean material and can focus entirely on the judgment-requiring issues that tools can't assess.
This is the division of labor between System 1 tools (pattern matching, rule application — perfect for machines) and System 2 capacity (contextual reasoning, judgment — requires humans). Don't use your System 2 where a machine's System 1 suffices.
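The sequencing described above can be sketched as a simple gate: run every pattern-based check first, and only route material that passes to the human queue. This is a minimal illustration, not a prescribed implementation; the check functions and return shape (`spell_check`, the `status` strings) are hypothetical placeholders.

```python
# Minimal sketch of the "automated checks first" gate.
# spell_check is a stand-in for any pattern-based tool (linter,
# format validator, threshold monitor); its tiny allowlist is
# purely illustrative.

def spell_check(text):
    allowed = {"the", "review", "is", "clean"}
    return [w for w in text.split() if w.lower() not in allowed]

def automated_gate(text, checks):
    """Run every pattern-based check; return (passed, findings)."""
    findings = {}
    for check in checks:
        issues = check(text)
        if issues:
            findings[check.__name__] = issues
    return (not findings, findings)

def review_pipeline(text):
    passed, findings = automated_gate(text, [spell_check])
    if not passed:
        # Mechanical errors go back to the author; no human
        # attention is spent on them.
        return {"status": "returned", "findings": findings}
    # Only clean material reaches the human reviewer, whose
    # attention is reserved for contextual judgment.
    return {"status": "ready_for_human_review", "findings": {}}
```

The design point is the ordering: the human queue is downstream of the gate, so a reviewer never sees output that still contains machine-detectable errors.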
When This Fires
- When designing any quality assurance or review process
- When manual reviewers are spending time on errors that follow detectable patterns
- When code reviews, writing reviews, or process audits consistently catch mechanical errors
- When reviewer fatigue is degrading the quality of contextual judgment in reviews
Common Failure Mode
Relying entirely on manual review for all error types: "I'll catch everything when I review it." You won't — you'll catch the first few mechanical errors, deplete attention, and miss the contextual error that matters most because your cognitive resources were consumed by pattern-based issues a linter could have caught.
The Protocol
1. For each review process, classify errors into two categories:
   - Pattern-based (detectable by rules): spelling, syntax, formatting, threshold violations, naming conventions.
   - Contextual (requiring judgment): logical soundness, appropriateness, strategic alignment, quality of reasoning.
2. For pattern-based errors, deploy automated detection: linters, spell checkers, format validators, automated tests, threshold monitors. Run these before any human touches the output.
3. For contextual errors, preserve human attention: the reviewer sees only output that has already passed automated checks.
4. The human reviewer's sole job is contextual judgment, the highest-value use of limited attention.
5. Periodically audit: are there contextual errors that could be converted to pattern-based detection? As you identify patterns in contextual errors, codify them into automated rules.
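The protocol's classification and audit steps can be sketched as a small check registry: error classes are routed to a machine queue or a human queue, and step (5)'s codification promotes a contextual class to pattern-based detection once a reliable rule exists. The category names and functions here are illustrative assumptions, not a fixed taxonomy.

```python
# Sketch of the protocol's classification step. The membership of
# each set is illustrative; a real process would list its own
# error classes.

PATTERN_CHECKS = {"spelling", "syntax", "formatting", "thresholds", "naming"}
CONTEXTUAL_CHECKS = {"logical_soundness", "tone", "strategic_alignment"}

def route_checks(found_errors):
    """Route each observed error class to the machine queue
    (automated tooling) or the human queue (contextual review)."""
    machine_queue, human_queue = [], []
    for err in found_errors:
        (machine_queue if err in PATTERN_CHECKS else human_queue).append(err)
    return machine_queue, human_queue

def codify(error_class):
    """Step (5): once a reliable detection rule is found for a
    contextual error class, promote it to pattern-based detection
    so it stops consuming human attention."""
    CONTEXTUAL_CHECKS.discard(error_class)
    PATTERN_CHECKS.add(error_class)
```

Keeping the registry explicit makes the periodic audit concrete: the human queue's recurring entries are the candidates for `codify`.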