The gap between building AI tools and teaching people to use them is where I’ve been living for the past several months.
There are two conversations happening about AI in organizations right now, and they’re not talking to each other.
The first conversation is technical: model selection, agent architectures, context engineering, tool-use patterns, token economics. This conversation happens in engineering teams, AI labs, and developer communities. The people in this room can build the thing. What they can’t always do is explain it, teach it, or design for adoption.
The second conversation is organizational: change management, upskilling, enablement programs, “AI readiness.” This conversation happens in L&D departments, HR, and consulting decks. The people in this room understand people. What they often can’t do is build the thing they’re supposed to teach.
I’ve spent the last decade in the second room and the past several months building my way into the first. The view from the middle is clarifying.
The Gap Most Enablement Programs Miss
Most corporate AI enablement follows the same playbook: pick a tool, build a prompt library, run a workshop, measure adoption, declare victory. At Lyft, I helped build a prompt library for our performance cycle—tool-agnostic, designed to set a baseline for how teams could use AI to reduce the burden of recurring workflows. It landed well with the teams that got it.
That experience taught me something, though. We'd nailed the "how to use it" part. The harder part, knowing when the output is wrong, was a skill I didn't fully understand until I started building with AI at the system level.
The gap isn’t awareness. Everyone knows AI exists. The gap is judgment—knowing when the output is wrong, when to push back, when to throw it away and do the work yourself. That’s a different skill entirely, and most enablement programs don’t touch it.
The people designing enablement programs often haven’t built with AI at the system level. They haven’t watched an agent confidently fabricate a fact—something I caught in my own system when it surfaced incorrect details about my previous employer. They haven’t debugged a prompt that hallucinates just often enough to be dangerous. They haven’t needed to build verification layers because they’ve never shipped AI output that mattered.
You can’t teach judgment about a tool you haven’t broken.
What Builders Get Wrong About People
On the other side: the teams building AI tools routinely underestimate what it takes for someone to actually use them well. They ship features and assume adoption. They write documentation and assume understanding. They build agent systems and assume trust.
I built a 20+ agent AI memory system. Building the architecture was hard. Trusting it took longer.
Trust isn’t a feature. It’s a relationship built through repeated experience—watching the system be right, catching it being wrong, developing an intuition for when to lean on it and when to double-check. That process is learning. And it needs to be designed, not assumed.
The best technical systems fail when the human layer is treated as an afterthought. An AI assistant that can do 50 things is useless to someone who doesn’t know which 3 they need. A powerful agent is dangerous if its user can’t tell when it’s hallucinating.
What the Middle Looks Like
The best enablement I’ve built came from breaking the tool first.
When I built feedback programs at Lyft, the best curriculum came from watching what actually went wrong: real conversations, the moments managers struggled, the edge cases the design needed to account for.
Same thing happened with AI. My system has a corrections file that exists solely because the model made confident factual errors about my own career. That experience—not a course, not a whitepaper—shaped how I think about teaching AI judgment.
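The pattern is simple enough to sketch. This is a hypothetical, minimal version of what a corrections file can look like, not my actual implementation: a file of verified fixes that gets prepended to the model's context so it stops repeating errors it has already made. The file path and JSON shape here are illustrative assumptions.

```python
import json
from pathlib import Path

# Hypothetical corrections file: claims the model got wrong, each paired
# with the human-verified fact. Path and schema are illustrative.
CORRECTIONS_PATH = Path("memory/corrections.json")

def load_corrections() -> list[dict]:
    """Load accumulated corrections, if any exist yet."""
    if not CORRECTIONS_PATH.exists():
        return []
    return json.loads(CORRECTIONS_PATH.read_text())

def build_system_prompt(base_prompt: str) -> str:
    """Prepend known corrections so the model is steered away from
    repeating errors a human has already caught."""
    corrections = load_corrections()
    if not corrections:
        return base_prompt
    lines = ["Known past errors. Do not repeat these claims:"]
    for c in corrections:
        lines.append(f"- WRONG: {c['claim']} -> CORRECT: {c['fact']}")
    return base_prompt + "\n\n" + "\n".join(lines)
```

The design point is that the correction loop is a human judgment act made durable: catching the error once is useless unless the catch is written somewhere the system reads every time.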
Adoption design turned out to be a technical skill.
Getting people to change behavior isn’t soft work—it’s systems design. At Lyft, I built a team effectiveness assessment as an automated pipeline—the learning was embedded in the system, not stapled on afterward.
AI enablement works the same way: learning embedded in workflows, not added as a training module. That requires someone who can design the learning and build the pipeline.
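As a concrete sketch of "learning embedded in the workflow": instead of routing people to a course, the pipeline that delivers an AI-generated artifact can attach the judgment prompts at the point of use. Everything below is illustrative, not a description of any system I shipped.

```python
from dataclasses import dataclass

@dataclass
class Deliverable:
    content: str        # the AI-generated artifact
    checks: list[str]   # judgment prompts attached at point of use

# Hypothetical review prompts keyed by artifact type. The point is that
# the "training" ships inside the workflow, not as a separate module.
JUDGMENT_PROMPTS = {
    "performance_summary": [
        "Does every specific claim trace to an example you witnessed?",
        "Would you say this sentence to the person directly?",
    ],
}

def package(content: str, kind: str) -> Deliverable:
    """Attach the relevant judgment checklist to the artifact itself,
    so the learning arrives exactly when the decision is being made."""
    return Deliverable(content, JUDGMENT_PROMPTS.get(kind, []))
```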
The verification layer was the whole game.
The most important AI skill isn’t prompting. It’s evaluating. Can you tell when the output is wrong? Do you know which claims to check?
I built a QA validator agent that checks other agents’ outputs before they reach me. Not paranoia—engineering. That same reflex is a learning design problem, and it requires someone who understands both the failure modes and the pedagogy.
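A minimal sketch of the gating pattern, under loud assumptions: my actual validator is an agent that audits other agents, but the shape of the check can be shown with rules. Here, specific checkable claims (years, percentages) are flagged unless they appear in a human-verified fact set; all names are hypothetical.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Verdict:
    passed: bool
    flags: list[str] = field(default_factory=list)

def qa_validate(output: str, verified_facts: set[str]) -> Verdict:
    """Gate an agent's output before it reaches the user.

    Flags specific, checkable claims that are not in the verified set.
    A production validator would ask a second model to audit the claims;
    this rule-based version just illustrates the gate."""
    flags = []
    for claim in re.findall(r"\b(?:19|20)\d{2}\b|\b\d+(?:\.\d+)?%", output):
        if claim not in verified_facts:
            flags.append(f"unverified specific: {claim}")
    return Verdict(passed=not flags, flags=flags)

# The draft only surfaces if it clears the gate.
draft = "Revenue grew 40% in 2021."
verdict = qa_validate(draft, verified_facts={"2021"})
```

Teaching someone to run this check in their head, before they forward the output, is the pedagogy half of the same problem.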
The Role That Doesn’t Exist Yet
What I’m describing is a role most org charts don’t have yet. Someone who’s been in both rooms and knows what each one is missing. Not an AI engineer. Not an L&D generalist.
The job title might be AI Enablement Lead, or Learning Architect, or something nobody’s named yet. The work is the same regardless: design the learning experience, build the tools that support it, measure whether behavior actually changed, iterate.
The organizations where AI actually works—beyond the engineering team—seem to be the ones figuring out this intersection. The ones that keep these two conversations separate keep wondering why adoption stalls at “it writes my emails.”
Josh Stoner spent 10+ years as a Learning Architect—most recently at Lyft, working across curriculum design, automation, analytics, and AI enablement. He now builds AI memory systems and native macOS apps, and writes about the gap between what AI can do and what people actually do with it. Portfolio at josh-stoner.github.io.