Everybody got the same idea at the same time.

The moment AI tools became accessible, many businesses did the same three things: built a prompt cheatsheet, assembled a “curated” prompt library, and scheduled an all-hands demo where someone screen-shared ChatGPT and said see, it can write emails for you.

I know because I was one of them. I built an AI prompt library for our performance cycle—feedback templates, self-reflection scaffolds, manager rating guides—and deployed it to 4,000+ employees. It worked. People used it. Engagement was high.

But we taught people to use a tool. We didn’t teach them to think differently about the work the tool was doing.

That’s the gap. And it’s widening.


The Cheatsheet Trap

The current corporate AI learning playbook looks like this:

  1. Pick a tool (usually whatever the company licensed)
  2. Build a prompt library or cheatsheet
  3. Run a workshop: “10 Ways to Use AI at Work”
  4. Measure adoption (logins, usage, maybe a survey)
  5. Declare victory

This is the equivalent of teaching someone Microsoft Word by handing them a list of keyboard shortcuts. Technically useful. Fundamentally incomplete.

The problem isn’t that cheatsheets are bad. It’s that they’re the ceiling, not the floor. Most AI enablement programs stop at tool proficiency—here’s how to get the thing to do the thing—and never get to the questions that actually matter: When should I trust the output? How do I verify what it just gave me? Which parts of this work still need a human?

We’ve been so focused on teaching people to use AI that we forgot to teach them to use AI well. And “well” is where L&D either levels up or becomes irrelevant.

Where the Work Actually Is

After 13 years building learning programs—and the last year building an AI system that literally learns about me—the shift comes down to three things. None of them are about prompts.

1. AI Fluency (Understanding What’s Underneath)

AI fluency isn’t knowing which tool to open. It’s understanding what’s happening underneath.

The most common usage pattern I see: people type a question, get an answer, and move on. They don’t understand why the same prompt gives different results on Tuesday than it did on Monday. They don’t know what a system prompt is, why context length matters, or that the model is pattern-matching, not reasoning.

You don’t need everyone to understand transformer architecture. But you need them to understand enough to make good decisions about when to trust the output, when to push back, and when to throw it away and do the work themselves.

The learning challenge here isn’t technical—it’s behavioral. People need to develop an intuition for what AI is good at and what it’s bad at. That intuition doesn’t come from a cheatsheet. It comes from structured practice with feedback loops. The same way you learn to drive—not by reading the manual, but by getting behind the wheel with someone who tells you when you’re drifting.

Right now, most organizations are handing people the keys and saying good luck. The ones that build the practice infrastructure—guided exploration, exercises with actual stakes, and someone coaching in the moment—will be the ones whose people actually get better. Everyone else will plateau at “it writes my emails.”

2. Critical Thinking (Fact-Checking the Machine)

AI generates fluent, confident, well-structured text that is sometimes completely wrong. Not obviously wrong—subtly wrong. Wrong in ways that require domain knowledge to catch. Wrong in ways that sound more convincing than the truth because the model optimizes for coherence, not accuracy.

I’ve watched it happen in my own system. An agent confidently cited a date that was off by four years. Another synthesized two unrelated data points into a claim that sounded great in a cover letter but had no factual basis. These weren’t hallucinations in the cartoon sense—they were plausible, well-written, and would have survived a casual review.

The skill that matters now isn’t how do I get AI to generate content. It’s how do I evaluate what AI just gave me. That’s a different skill entirely—and most corporate training has never touched it.

We’ve spent decades in L&D teaching people to follow processes, apply frameworks, and use tools. We’ve spent very little time teaching people to question the output of the thing they just used. But that’s the new core competency: the ability to read something that sounds right and ask wait—is this actually true?

This doesn’t mean everyone needs to become a fact-checker. It means learning programs need to build the reflex. When AI gives you an answer, your next thought shouldn’t be great, done. It should be how would I verify this?

The organizations that build this reflex into their culture will catch errors before they ship. The ones that don’t will discover the cost of AI-generated confidence the hard way—like the cover letter with a fabricated claim that sounded better than the truth.

3. Rebuilding the Human Layer

Here’s what I worry about most.

AI is genuinely good at a lot of the things we used to need humans for. It can draft a performance review. It can summarize a meeting. It can generate a training module. It can answer employee questions at 2 AM. The efficiency gain is real and it’s significant.

But the things AI can’t do are the things that make work actually work. Reading the room. Building trust. Giving feedback that lands because you earned the relationship. Navigating conflict where the answer isn’t a framework—it’s a conversation.

When I built Conversations that Count—a feedback skills program for people managers—the curriculum wasn’t the hard part. The hard part was getting humans in a room together practicing difficult conversations. The framework (SBI-D) was the scaffold. The learning happened in the awkwardness of trying it live with a real colleague and a real facilitator who could say here’s what I noticed you doing just now.

No AI can do that. Not because the technology isn’t there, but because the trust isn’t. Feedback from a human who knows your work, your context, and your growth edge lands differently than feedback from a model that’s pattern-matching on “constructive criticism” training data.

The risk isn’t that AI replaces human learning experiences. The risk is that organizations, under cost pressure, quietly let the human layer atrophy. I’ve already seen the early signs: a team replacing a live coaching session with a chatbot “for efficiency,” a manager onboarding program reduced to a self-paced AI module. Not maliciously—just under budget pressure, the human part is always the most expensive line item to justify.

The L&D teams that matter in the next five years will be the ones who can articulate—clearly, with evidence—which parts of learning need to be human, which parts can be AI-assisted, and where the boundary should be drawn. Not because AI is bad. Because learning that changes behavior requires something AI can’t provide: a relationship.

The Actual Job Now

If you work in learning, your job just changed. Not in the way the popular advice says—“upskill in AI” and “become a prompt engineer.” That’s the cheatsheet answer.

The real shift is this: your job used to be designing content and delivering programs. Now it’s figuring out where the human matters and where the machine does—in learning specifically.

That means:

  1. Building practice infrastructure, not just prompt libraries: guided exploration, exercises with actual stakes, coaching in the moment
  2. Teaching the verification reflex, so the thought after every AI answer is how would I verify this?
  3. Drawing the boundary: articulating, with evidence, which parts of learning need to be human and which can be AI-assisted

The organizations that get this right won’t be the ones with the best prompt libraries. They’ll be the ones where people trust AI enough to use it and trust themselves enough to question it.

That’s the reset.

Josh Stoner spent 13 years building learning systems at scale—4,000+ employees across the full stack: curriculum design, LMS infrastructure, automation, analytics, and AI enablement. He currently builds stonerOS, a persistent AI memory system, and writes about what happens when you give AI actual context.