Insights from training 1,000+ professionals on AI literacy, context, and problem-solving
Over the past year, AI has moved from experimentation to expectation. Leaders are under pressure to “do something with AI,” teams are being trained on new tools, and vendors are promising step-change improvements in productivity.
At The Kendall Project, we’ve had a front-row seat to how this is actually playing out. By delivering workshops and training sessions with more than 1,000 professionals across industries and roles, we’ve spent time inside organizations as they wrestle with AI in a very practical way: not in demos, but in the context of their real work and real problems.
What we observed was both surprising and, in hindsight, entirely predictable. The biggest challenges weren’t technical. They weren’t about models, vendors, or tools. They showed up much earlier. They were about whether teams understood AI well enough to use it, whether AI was given enough context to understand the team and its challenges, and whether organizations knew how to train AI to solve their specific problems and deliver real value.
One realization came up repeatedly: AI does not behave like traditional enterprise technology. It can’t be implemented, governed, or delegated the same way. The organizations making real progress understood that AI success depends on teams, shared understanding, and collaboration. In short, AI is a team sport.
Across industries, functions, and levels of AI maturity, the same patterns kept emerging. Below are five of the most important insights we’ve drawn from this work, and why they matter for organizations entering the next phase of AI adoption.
1. Fewer Than 5% of Professionals Know What “Context” Is
At the beginning of every Kendall Project workshop, we ask a simple question: “Who here can clearly explain what context is when it comes to AI?”
Consistently, fewer than 5% of participants raise their hands.
In practice, context means more than background information. It includes how problems are defined, what constraints matter, what tradeoffs exist, what success looks like, and what should not be optimized. In most businesses, a shocking share of this information lives only in people’s heads or in informal conversations. When that context never reaches the AI, it is forced to guess. And when AI guesses, results become inconsistent, untrustworthy, or misleading.
What surprised us wasn’t that an understanding of context was missing. It was how rarely team members and organizations realized that this was the missing piece. Many assumed high AI failure rates were a tooling issue, a data issue, or a model issue, when in reality the foundation simply hadn’t been built.
Without shared, explicit context, even the most powerful AI systems struggle to contribute meaningfully to problem-solving.
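One way to close that gap is to write the context down in a form AI can actually consume. Below is a minimal Python sketch of that idea; the field names and the invoice-processing example are hypothetical illustrations, not a prescribed schema:

```python
# Hypothetical sketch: turning implicit team knowledge into an explicit
# context block that can travel with every AI request. All field names and
# example values are invented for illustration.

def build_context_block(problem, constraints, success_criteria, do_not_optimize):
    """Assemble the context AI would otherwise have to guess at."""
    lines = [
        "## Problem",
        problem,
        "## Constraints",
        *[f"- {c}" for c in constraints],
        "## Success looks like",
        *[f"- {s}" for s in success_criteria],
        "## Do not optimize for",
        *[f"- {d}" for d in do_not_optimize],
    ]
    return "\n".join(lines)

context = build_context_block(
    problem="Reduce the invoice-processing backlog without adding headcount.",
    constraints=[
        "Approvals over $10k must stay with finance leads",
        "EU invoices follow a separate VAT workflow",
    ],
    success_criteria=["Median processing time under 2 business days"],
    do_not_optimize=["Raw throughput at the expense of audit accuracy"],
)
print(context)
```

The format matters far less than the habit: once constraints and non-goals are written down, they can be reviewed by the team and reused across prompts instead of being rediscovered in every conversation.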
2. There Is a Massive Gap in AI Literacy, and Prompt Engineering Isn’t It
When organizations recognize that AI isn’t delivering value, the most common response is to invest in prompt engineering. Better prompts, smarter prompts, longer prompts.
Prompting matters. But what we’ve observed in speaking with business executives and IT leaders is that prompt engineering is often treated as a substitute for AI literacy, and that substitution doesn’t work.
AI literacy is not about knowing which words to type. It’s about understanding how AI works, how it interprets information, where it guesses, where it lacks grounding, and how it uses context to reason toward an answer. Without that understanding, even well-crafted prompts become fragile. They work once, in one situation, and then quietly fail in the next.
This is where AI diverges from traditional enterprise technology. You can’t treat it as a tool that lives in IT or a skill owned by a small group of specialists. AI success depends on shared understanding across teams: of the problem, the constraints, and the intent behind the work.
In that sense, AI literacy is an organizational mandate, not an individual trick. And without it, no amount of prompting can compensate.
3. AI Doesn’t Understand Your Spreadsheets
A common failure point we’ve seen over and over is the assumption that AI can “understand” existing artifacts of work, especially spreadsheets.
In reality, spreadsheets are containers of implicit human knowledge. Column names, formulas, color coding, exceptions, and workarounds make sense to the people who created and use them, but not to AI. Without explanation, AI has no way of knowing what matters, which assumptions are baked in, or where the edge cases live.
We often point out a simple truth in our enterprise AI workshops: if only one person truly understands a spreadsheet, then AI doesn’t understand it at all. At best, it can infer patterns. At worst, it produces confident answers based on incorrect assumptions.
This is why context matters more than access. AI doesn’t just need data, it needs explanation. What does this represent? Why does it exist? What decisions depend on it?
When teams skip this step, AI doesn’t fail loudly. It fails quietly, producing outputs that look plausible but aren’t grounded in how the business actually works, and that is a huge problem.
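To make that concrete, here is a minimal Python sketch of pairing spreadsheet columns with the explanations that normally live only in the owner’s head; every column name and note below is an invented example:

```python
# Hypothetical sketch: documenting what each spreadsheet column actually
# means before handing the data to an AI system. Column names and notes
# are invented examples, not a real schema.

column_notes = {
    "Amt_Net": "Invoice amount after discounts; EUR rows already converted to USD.",
    "Flag_Y": "Manual override by a finance lead; takes precedence over Status.",
    "Status": "Derived by a lookup formula; blank means the source row was missing.",
}

def describe_sheet(columns, notes):
    """Emit an explanation block so AI sees the assumptions, not just the data."""
    out = []
    for col in columns:
        note = notes.get(col, "NO EXPLANATION ON FILE - AI will have to guess.")
        out.append(f"{col}: {note}")
    return "\n".join(out)

sheet_desc = describe_sheet(["Amt_Net", "Flag_Y", "Status", "Region"], column_notes)
print(sheet_desc)
```

The point is not the code but the discipline: any column with no explanation on file is exactly where AI will fill the gap with a confident guess.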
4. You Can’t Outsource AI Success
As AI frustrations grow, many organizations respond by bringing in outside help (consultants, vendors, or system integrators) to “fix” the problem.
External expertise can be valuable. But what we consistently observed is that AI success cannot be outsourced in the way traditional enterprise initiatives often are. When internal teams lack shared understanding of their problems, workflows, and constraints, no outside party can supply that context on their behalf.
AI depends on translation: turning how work actually happens into something explicit, structured, and observable. That translation requires deep, day-to-day knowledge of the business. When it lives only in individuals’ heads or informal practices, outside partners are forced to infer, generalize, or fill in gaps. The result is usually fragile solutions that work briefly, or not at all.
This is where many AI efforts stall. Organizations buy tools, commission pilots, and produce impressive demos, but fail to build internal capability. When the engagement ends, progress stalls with it.
The teams that made real progress understood a simple truth: external partners can accelerate learning, but they can’t replace it. AI only delivers sustained value when the people closest to the work are actively involved in shaping how it’s applied.
That’s why AI isn’t a vendor problem. It’s a team problem.
5. 2025 Is Not the Year of AI Agents
AI agents are getting significant attention. The promise of autonomous systems that can plan, act, and coordinate work is compelling, especially to technologists and CIOs.
What we observed in practice tells a different story.
Fewer than 1% of Kendall Project training participants indicated that their organizations were actively executing on AI agents in any meaningful way. Most were still working through more basic challenges: clarifying problems, aligning teams, and building AI literacy.
This gap is critical. AI agents don’t reduce the need for context; they amplify it. When goals, constraints, or workflows are poorly defined, autonomy increases risk rather than value.
The organizations making progress weren’t chasing agents. They were focused on fundamentals: shared understanding, governance, and team capability. They recognized that automation without readiness leads to frustration, not leverage.
AI agents will matter. But for most organizations, that moment hasn’t arrived. The next phase of AI adoption will be led by teams that have done the foundational work first.
Why Enterprise AI Success Depends on Teams, Not Tools
After working with more than 1,000 professionals across industries, one thing is clear: AI success is not a tooling problem.
The organizations making progress aren’t the ones chasing the latest capabilities. They’re the ones investing in shared language, clearer problem definition, and the ability to translate human knowledge into context AI can actually use. They recognize that AI doesn’t replace teams; it depends on them.
This is why AI can’t be treated like traditional enterprise technology, and why it can’t be delegated to a single function or vendor. AI only delivers value when teams are actively involved in shaping how it’s applied.
As organizations move into the next phase of AI adoption, success won’t be defined by automation speed, but by the ability to solve meaningful problems together, consistently and at scale.