There is a pattern playing out right now in boardrooms across the world, and it deserves to be named clearly.
A new Gartner study of 350 global executives, each running a company worth at least $1 billion, revealed something that should reframe the entire conversation about AI and work. Eighty percent of those executives admitted they cut staff to fund AI investments. And then Gartner measured the results.
The executives who replaced their people with AI saw the same financial returns as the executives who kept their teams intact.
No advantage. No edge. The same numbers.
That is not a technology problem. That is a strategy problem. And it points directly to something we have been tracking in enterprise AI for years: the assumption that removing human judgment from the equation is how you extract value from AI. That assumption is not just wrong; the data now shows it is costly.
The Tool-First Trap
Here is what we believe happened in most of those organizations.
Executives saw AI as a replacement lever rather than a performance lever. They asked: "What can AI do instead of this person?" rather than "What could this person do if AI made them significantly more capable?"
That framing produces a predictable outcome. You remove the people who carried context, relationships, institutional knowledge, and judgment. You replace them with a tool that was trained on general information and given no understanding of your specific business, your customers, your processes, or your goals. Then you measure the returns and wonder why they are flat.
This is what the Kendall Framework calls tool-first thinking. It is the most common and most expensive approach to AI adoption in enterprise today.
What the Research Is Actually Telling Workers
If you are a professional who has felt anxious watching the headlines about AI layoffs, we want to offer something more useful than reassurance: evidence.
The Remote Labor Index, produced by researchers at the Center for AI Safety and Scale AI, found that current AI agents complete only 2.5 percent of real-world remote work projects at an acceptable quality level. They produce corrupted files, incomplete deliverables, and inconsistent outputs.
Forrester Research found that 55 percent of employers already regret their AI-driven layoffs. Half of those roles, they predict, will be quietly refilled, often offshore and at lower wages, because the AI did not deliver on its promise.
The companies that fared best were not the ones that replaced their people. They were the ones that equipped their people.
That is the finding that deserves the headline.
Why AI Performs Better as a Teammate Than as a Replacement
This should not be surprising to anyone who understands how AI systems actually work.
AI performs at the level of the context it is given. When a skilled professional works with an AI tool, they bring something the AI cannot generate on its own: deep knowledge of the specific situation, the ability to recognize when an output is wrong, the judgment to adjust course, and the relationships that make the work matter.
That combination, a capable person and a well-configured AI system working together, is where the performance gains live.
We have seen this across Context 360 engagements. The moment an organization stops asking AI to replace human judgment and starts asking it to extend human capability, everything shifts. Output quality improves. Speed increases. And the people involved feel more effective, not less relevant.
But that shift does not happen by accident. It requires a deliberate architecture: the right context built around the AI, the right roles connected to it, and a clear understanding of where human judgment is load-bearing and where AI can carry more of the weight. Without that architecture, even the best AI tools produce the same flat returns the Gartner data revealed.
What This Moment Actually Means for Workers
The data from this period of AI adoption is delivering a message that workers deserve to hear directly.
Your expertise is not replaceable by a general-purpose AI tool. The knowledge you carry about how your organization actually works, what good looks like in your domain, where the edge cases live, and how to serve the people you serve well is exactly what AI needs to perform at its best.
The question is not whether AI will be part of your work. It will be. The question is whether the organizations you work for understand that your knowledge is not a cost to be eliminated. It is the raw material that makes their AI investment work at all.
The executives in the Gartner study who kept their teams and invested in AI together are seeing the same returns as those who cut. That finding will feel discouraging until you understand what it is actually measuring: the difference between buying AI and building the conditions for AI to perform. Most organizations have done the former. Almost none have done the latter.
The returns are real. They are just waiting on the infrastructure that unlocks them. And that infrastructure cannot be built without the people who understand the work.
That is not optimism. That is what the evidence shows.