
March 2026

AI Deployment

John Koblinsky

Read on Substack

The Cognitive Transformation Gap

By John Koblinsky

Two Gartner predictions. One blind spot nobody's talking about.

Gartner forecasts that 70% of finance functions will use AI for real-time decision-making by 2028, and that 30% of enterprises will face declining decision-making quality from AI overreliance by 2030. John Koblinsky at Marsh Island Group names the gap between those predictions the Cognitive Transformation Gap: the distance between the pace of AI deployment and the pace of human cognitive readiness to operate within it.

Why the AI Governance Playbook Leaves the Human Problem Unsolved

In August 2025, Gartner forecast that 70% of finance functions would use AI for real-time decision-making on operational costs and cash flow management by 2028. Five months later, a separate Gartner team forecast that 30% of enterprises would face declining decision-making quality due to AI overreliance by 2030. The gap between those two predictions is roughly two years. The gap between their underlying assumptions is enormous.

The first prediction describes a technological deployment. The second describes a human consequence. What neither prediction addresses is the bridge between them: what has to change inside the humans who will operate these AI systems, and at what pace, to avoid the trajectory the second forecast describes.

The dominant AI adoption narrative treats that bridge as though it doesn't exist. Industry roadmaps specify technologies to implement, processes to redesign, and governance structures to build. They mandate human-in-the-loop oversight and call for reskilling. They almost never specify what those humans need to become — cognitively and behaviorally — to actually fulfill the roles these frameworks assign them.

The governance models that do address the human element are accountability structures, not cognitive ones. "Human-in-the-loop" defines where a person sits in a workflow. It does not address whether that person can detect a subtle AI error under time pressure, maintain vigilance across hours of monitoring, or override a confident-sounding recommendation that contradicts their own judgment. Research documents that all three of those capabilities degrade without deliberate training and support. They do not come standard with the software license.

What Research Shows About AI's Effect on Human Cognitive Performance

AI Intensifies Cognitive Work Rather Than Reducing It

The organizational assumption underlying most AI deployments is that the technology reduces workload, freeing human attention for higher-order judgment. A February 2026 field study published in Harvard Business Review, conducted by Ranganathan and Ye over eight months inside a technology company, found the opposite. "Workers expanded their scope and worked longer hours rather than reducing effort," Ranganathan and Ye wrote, documenting a pattern in which efficiency gains were converted into more work, not more breathing room.

The cognitive residue that remained was harder, not lighter. AI automation of routine tasks concentrated the judgment calls, exception handling, and oversight decisions that require the deepest engagement. The humans left in the loop were making more complex decisions per hour, with less recovery time, using skills they were exercising less frequently. Organizations registered this as productivity. The research identified it as accumulating cognitive strain.

Automation Bias Degrades Oversight Even Among Experienced Professionals

Research on automation bias documents that even experienced professionals defer to incorrect AI recommendations roughly half the time — not because of carelessness, but because of how the brain processes information from systems it perceives as expert. Confident, well-formatted output from an authoritative-seeming source triggers deference that bypasses critical evaluation. The cognitive shortcut is a feature of human cognition, not a flaw, but it becomes a liability when the authoritative-seeming source is systematically wrong.

Horowitz and Kahn (2024), writing in International Studies Quarterly, found that automation bias follows an inverted-U curve across AI experience levels: the risk peaks in the middle of the experience curve, not at the beginning or the end. Novices maintain healthy skepticism. Deep experts can identify failure modes. The vast majority of organizational AI users currently occupy the moderate-exposure zone, where they trust the tool more than a skeptic would, without the specific knowledge of its failures that a builder would have.

The Skills That Matter Most Are the First to Atrophy

The capabilities AI-augmented work requires most — sustained vigilance, calibrated trust, and the confidence to override a machine — are precisely the skills that degrade when they go unpracticed. Macnamara and colleagues (2024), writing in Cognitive Research: Principles and Implications, found that "AI assistance accelerates skill decay in ways performers don't perceive," because the disuse operates at the level of cognitive skill engagement rather than task engagement. Workers remain busy and feel capable while the underlying verification capacity quietly erodes.

The people designated to catch AI errors may feel fully equipped to do so while losing the specific capability that would make that review meaningful. The organization doesn't know what it's lost. The reviewer doesn't know what they've lost. The work continues to flow through a check that is no longer performing as a check. This is the mechanism that makes Cognitive Transformation Gap failure invisible from the inside, and why the breakdown of the oversight layer tends to surface in practice as a performance incident rather than a systemic warning.

What This Means for Leaders Deploying AI at Scale

Gartner's two predictions aren't contradictory. They describe the same organizational trajectory from different vantage points — one team seeing the technology rolling forward, another seeing the human consequences arriving behind it. John Koblinsky's analysis at Marsh Island Group identifies the work between those two forecasts as the missing strategic investment: the deliberate, structured preparation of human cognition for a working environment that is changing faster than organizations are equipped to track.

The governance playbook for AI deployment is mature. Risk frameworks, audit structures, ethical guidelines, and compliance requirements are well-developed and widely available. What doesn't exist at the organizational level is the equivalent playbook for cognitive transformation: what skills to build, how to measure their degradation under AI use, how to preserve verification capacity as AI handles more of the baseline work, and how to structure AI workflows so that human oversight remains substantive rather than ceremonial.

The immediate practical question is not "how much AI should we deploy?" The immediate practical question is: of the people whose judgment we are counting on to evaluate AI output, how many can currently tell us specifically where that AI is most likely to fail — and when did they last catch it doing so? Most organizations cannot answer that cleanly. The absence of an answer is itself the risk indicator.

Organizations that deploy AI at scale without a corresponding investment in cognitive transformation are not managing risk. They are manufacturing it — building systems that require human cognitive capabilities they are simultaneously eroding. The technology transformation has a playbook. The cognitive transformation is what Marsh Island Group is building.


Frequently Asked Questions

What is the Cognitive Transformation Gap in AI deployment?

The Cognitive Transformation Gap is the growing distance between the pace at which organizations deploy AI into decision-critical work and the pace at which human cognitive readiness develops to operate meaningfully within those systems. John Koblinsky at Marsh Island Group named the concept in response to two Gartner forecasts: one projecting mass AI deployment into high-stakes decisions by 2028, the other projecting declining decision quality in 30% of enterprises by 2030.

Why does deploying AI at scale cause declining organizational decision quality?

AI deployment concentrates the judgment calls and exception handling that automation can't absorb, requiring humans to make harder decisions per hour with less practice time for the skills those decisions require. Research on automation bias documents that experienced professionals defer to incorrect AI recommendations roughly half the time. And the oversight capabilities most needed — sustained vigilance, calibrated trust, willingness to override — are precisely the skills that atrophy under sustained AI use.

How does automation bias affect experienced professionals in AI-augmented environments?

Horowitz and Kahn (2024) found automation bias follows an inverted-U curve: novices maintain skepticism, deep experts can identify failure modes, but moderate AI users — where most organizational professionals currently sit — experience peak overconfidence. The people most likely to be put in the AI oversight chain are, right now, statistically the most likely to trust AI output without questioning it.

What did the HBR field study find about AI's effect on cognitive workload?

A February 2026 field study by Ranganathan and Ye, conducted over eight months inside a technology company, found that AI access led workers to expand scope and work longer hours rather than reducing effort. The efficiency gains were converted into more work, not recovery time. The cognitive residue — the harder judgment calls remaining after AI automation — intensified rather than diminished.

Why do human-in-the-loop mandates fail to prevent AI-driven decision quality decline?

"Human-in-the-loop" defines where a person sits in a workflow. It does not address whether that person can detect a subtle AI error under time pressure, maintain vigilance across hours of monitoring, or override a confident recommendation that contradicts their own judgment. Research by Macnamara and colleagues (2024) documents that all three capabilities degrade under sustained AI use without deliberate support — making governance that looks structural actually ceremonial.

What is the difference between AI reskilling and cognitive transformation?

Reskilling programs teach workers what AI tools do and how to use them. Cognitive transformation addresses what happens to human judgment under sustained AI use — the atrophying of verification skills, the drift into automation bias, and the specific capability shifts required to provide meaningful oversight rather than rubber-stamp review. The two are not interchangeable. Reskilling is a tool adoption program. Cognitive transformation is a judgment-preservation program.

How can executives measure whether their teams are cognitively ready for AI deployment?

Useful starting indicators: how often does the designated AI oversight layer catch errors versus approve output without modification? Can reviewers explain specifically where the AI being reviewed is most likely to fail, with examples? And how many errors have reviewers documented catching in the last 90 days? An oversight process that cannot answer those questions is functioning as a formality, not a safeguard.
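The indicators above lend themselves to a simple summary over whatever review records an organization already keeps. The sketch below is purely illustrative: the `Review` record, its field names, and the 90-day window are assumptions for the example, not a format the article prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Review:
    """One human review of an AI output. Fields are assumed for illustration."""
    reviewed_on: date
    modified: bool       # did the reviewer change the AI output at all?
    error_caught: bool   # did the reviewer document catching an AI error?

def oversight_indicators(reviews, window_days=90, today=date(2026, 3, 1)):
    """Summarize the three starting indicators from the text:
    pass-through rate, overall catch rate, and documented catches
    inside the recent window."""
    total = len(reviews)
    passed_through = sum(1 for r in reviews if not r.modified)
    cutoff = today - timedelta(days=window_days)
    recent_catches = sum(
        1 for r in reviews if r.error_caught and r.reviewed_on >= cutoff
    )
    return {
        "pass_through_rate": passed_through / total if total else 0.0,
        "catch_rate": sum(r.error_caught for r in reviews) / total if total else 0.0,
        "catches_last_window": recent_catches,
    }
```

A team whose pass-through rate sits near 1.0 with zero documented catches in the window is exhibiting the "formality, not a safeguard" pattern the answer describes.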

What should organizations invest in alongside AI deployment to close the cognitive gap?

Alongside technology deployment, organizations need structured investment in the capabilities AI erodes: deliberate practice of verification and judgment tasks that AI now handles automatically, structured challenge protocols that require reviewers to identify problems rather than confirm outputs, and measurement of decision quality over time rather than output volume. The technology transformation has a mature playbook. Building the cognitive transformation equivalent is the work Marsh Island Group is doing.
