For decades, Daniel Kahneman's dual-process model has been the dominant framework for understanding human judgment. System 1 is fast and intuitive. System 2 is slow and deliberate. Together, they explain how we think.
New research from the Wharton School argues that this model is now incomplete.
System 3: Artificial Cognition
Researchers Steven Shaw and Gideon Nave have proposed what they call "Tri-System Theory." The addition: System 3, defined as artificial cognition that operates outside the brain but functions as part of how we reason.
This isn't AI as a tool you pick up and put down. It's AI as a cognitive system you think with. And increasingly, one that thinks for you.
The researchers conducted three preregistered experiments with 1,372 participants across nearly 10,000 reasoning trials. Participants solved problems from the Cognitive Reflection Test, a well-validated set of questions designed to distinguish intuitive snap judgments from careful deliberation.
The classic example: A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost? The intuitive but wrong answer is 10 cents. The correct answer is 5 cents.
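A few lines of arithmetic show why; working in cents keeps the numbers exact. Everything here comes straight from the problem statement:

```python
# Constraints from the puzzle: ball + bat = 110 cents, bat = ball + 100 cents.
# Substituting: ball + (ball + 100) = 110  ->  2 * ball = 10  ->  ball = 5.
ball = 5
bat = ball + 100
assert ball + bat == 110          # correct: the ball costs 5 cents

# The intuitive answer fails: a 10-cent ball forces a 110-cent bat,
# and the pair would then cost 120 cents, not 110.
assert 10 + (10 + 100) == 120
```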
In the experiments, participants could optionally consult ChatGPT embedded in the survey. The researchers manipulated whether the AI provided correct or incorrect answers.
The Rise of Cognitive Surrender
The results were striking.
Participants used AI on more than half of all trials. When the AI was accurate, their performance improved substantially. When the AI was wrong, their performance dropped below where they would have been without any AI at all.
But here's the troubling part: confidence increased either way. Even when participants got the wrong answer because they followed faulty AI, they felt more confident in it.
Shaw and Nave call this phenomenon "cognitive surrender": adopting AI outputs with minimal scrutiny, bypassing both intuition and deliberation.
This is distinct from cognitive offloading, which is healthy. Cognitive offloading means delegating a task to AI but still evaluating the output. Cognitive surrender means stopping evaluation altogether. The AI's answer becomes your answer.
"Unlike cognitive offloading, which is typically strategic and task-specific, cognitive surrender entails a deeper transfer of agency. It reflects not merely the use of external assistance, but a relinquishing of cognitive control."
Who Surrenders Most
The research identified individual differences in susceptibility to cognitive surrender.
People with higher trust in AI showed greater surrender. So did people with lower "need for cognition," a psychological construct measuring how much someone enjoys effortful thinking. Lower fluid intelligence also predicted greater surrender.
Critically, situational factors like time pressure and incentives shifted baseline performance but didn't eliminate the pattern. When AI was accurate, it buffered time-pressure costs and amplified incentive gains. When AI was faulty, it consistently reduced accuracy regardless of context.
"In cases of cognitive surrender, the user does not just follow System 3: they stop deliberative thinking altogether."
What This Means for AI in Hiring
This research has direct implications for how AI should be used in talent acquisition.
Consider the typical AI screening tool. It ingests resumes, evaluates candidates, and outputs scores or recommendations. A recruiter reviews the output and makes a "decision."
But is that actually a decision? Or is it cognitive surrender with extra steps?
University of Washington research found that recruiters follow AI recommendations roughly 90% of the time, even when the AI is severely biased. At that rate, the human isn't providing oversight. They're providing a rubber stamp. The AI decided. The human just clicked approve.
The Shaw and Nave research explains the mechanism. When AI provides an answer that's fast, fluent, and sounds authoritative, people stop engaging their own reasoning. They surrender.
This doesn't mean AI is bad for hiring. It means AI that decides for you is bad for hiring.
Offloading vs. Surrender
The paper's distinction between cognitive offloading and cognitive surrender maps directly onto how AI tools should be designed.
Cognitive offloading (healthy):
- AI collects and organizes information
- Human evaluates the information
- Human makes the decision
- AI makes the human's job easier without replacing their judgment
Cognitive surrender (problematic):
- AI evaluates candidates and generates recommendations
- Human sees the recommendation
- Human follows the recommendation without critical evaluation
- AI makes the decision; the human provides the approval
The difference isn't whether AI is involved. The difference is whether the human is still thinking.
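To make that design distinction concrete, here is a minimal Python sketch. The types and field names are hypothetical, invented for illustration rather than taken from any real product's API. The contrast shows up in the output shape: one tool returns organized evidence and leaves the decision to a person, the other returns a verdict ready to rubber-stamp.

```python
from dataclasses import dataclass

@dataclass
class OrganizedEvidence:
    """Offloading-style output: information only, no verdict."""
    interview_notes: list[str]
    reference_statements: list[str]
    discrepancies: list[str]      # mismatches surfaced for human review

@dataclass
class Recommendation:
    """Surrender-style output: a conclusion, inviting a single click."""
    fit_score: float              # e.g. 0.87
    verdict: str                  # e.g. "advance" or "reject"

def human_decides(evidence: OrganizedEvidence) -> str:
    """The human reads the evidence and supplies the judgment."""
    for flag in evidence.discrepancies:
        print("Needs review:", flag)
    # The decision logic lives in the reviewer's head, not in the tool.
    return "advance"  # placeholder for an actual human choice
```

A `Recommendation` asks to be approved; `OrganizedEvidence` asks to be read. That difference in what the output demands of the human is the whole design question.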
Human-in-the-Loop as a Design Choice
"Human-in-the-loop" has become a compliance checkbox. The assumption is that having a human somewhere in the process provides meaningful oversight.
This research suggests that assumption is wrong. If the AI presents its output in a way that triggers cognitive surrender, the human becomes ceremonial. Present but not thinking.
True human-in-the-loop isn't about having a human click a button. It's about designing AI systems that preserve human judgment rather than replace it.
That means AI systems that:
- Don't score candidates
- Don't rank applicants
- Don't generate hiring recommendations
- Don't label people as qualified or unqualified
Instead, AI systems that:
- Collect information from multiple sources
- Organize that information clearly
- Surface discrepancies and patterns
- Present findings without evaluation
The human then looks at organized information and decides what it means. That's genuine human judgment. That's cognitive offloading, not cognitive surrender.
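As an illustration of the "surface discrepancies" step, here is a hypothetical sketch; the function name, fields, and data are invented for the example, not drawn from any actual system. The point is that the output is a finding, never a score:

```python
def surface_discrepancies(candidate_claims: dict[str, str],
                          reference_reports: dict[str, str]) -> list[str]:
    """Flag fields where sources disagree; do not judge which is right."""
    flags = []
    for field, claimed in candidate_claims.items():
        reported = reference_reports.get(field)
        if reported is not None and reported != claimed:
            flags.append(f"{field}: candidate says {claimed!r}, "
                         f"reference says {reported!r}")
    return flags

# Invented example data: the result describes a mismatch, not a verdict.
flags = surface_discrepancies(
    {"title": "Senior Engineer", "tenure": "3 years"},
    {"title": "Engineer", "tenure": "3 years"},
)
print(flags)  # ["title: candidate says 'Senior Engineer', reference says 'Engineer'"]
```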
How We Think About This
We built Virvell around the principle that AI should inform decisions, not make them.
Our AI conducts pre-screen interviews and reference check conversations, and coordinates background verification. It collects what candidates say, what references report, and what records reveal. It organizes this information and flags discrepancies across sources.
What it doesn't do is score candidates, rank applicants, or tell you who to hire.
When you review a Virvell report, you're seeing organized information, not algorithmic conclusions. The decision about what that information means belongs to you.
This isn't a limitation of our technology. Based on the research, it's the appropriate role for AI in employment decisions.
The Question We Should Be Asking
The Shaw and Nave research ends with a provocation: "The question facing businesses, regulators, and individuals alike is not whether to engage System 3. That ship has sailed. The question is whether, in doing so, we are doing so as active partners or as willing passengers."
In hiring, that question becomes concrete. Is your AI making your team better at evaluating candidates? Or is it making their judgment irrelevant?
The answer depends entirely on what the AI is designed to do. AI that presents information supports human reasoning. AI that presents conclusions replaces it.
One leads to better decisions. The other leads to cognitive surrender with a compliance audit trail.
See how information-first AI screening works in practice
Virvell automates pre-screen interviews, reference checks, and background verification in one governed platform. Our AI collects information. Your team makes decisions.
Book a Demo. Learn more at virvell.ai.