Play Before You Work
Most AI adoption programs skip play entirely because it looks unproductive. That's exactly why they stall.
One of our engineers spent the better part of an afternoon asking an AI model to explain Kubernetes networking in a thick Minnesota accent. "You betcha, that's a lotta pods talking to each other, doncha know." He sent screenshots to the team chat. People laughed. Then he went back and spent another hour having it roast his bash scripts.
He wasn't doing anything useful. That was the point.
Three weeks later, when we started integrating AI into actual sprint work, he was the one helping everyone else. Not because he'd read documentation or attended a training session. Because he'd already burned through enough weird edge cases to have a real mental map of what the tool could do, and what it couldn't.
That's the thing most AI adoption programs never give people the chance to build.
The yardstick gets set wrong
Most adoption programs start with the use cases that matter: code review, test generation, documentation. They measure adoption through productivity metrics. And they set that measurement baseline during the period when engineers are most skeptical, most self-conscious, and most likely to confirm their suspicion that AI doesn't work reliably.
The first impression sticks. An engineer who approaches the tool with obligation, not curiosity, tests every output against a skeptic's standard. The AI keeps failing that test, not because it's incapable, but because the person using it never learned how to actually work with it.
Play resets the baseline before the stakes arrive.
It's not just your team
The research on this is uncomfortable. Studies have found developers using AI tools to be measurably slower than those working without them. The predictable response was to blame the study design, blame the tools, blame the developers for not knowing how to prompt correctly.
The more honest read is simpler: most of those developers never built genuine intuition for the tool. Every interaction was a test the AI had to pass. And it kept failing the test because the human had no baseline for what "passing" even looked like.
That's not a capability problem. It's a trust problem.
The failure mode isn't obvious at first. Teams see low adoption and assume the use cases aren't compelling, or the tooling needs improvement. They respond with better prompting guides, more structured onboarding, dedicated enablement sessions. The adoption still stalls.
What they skipped was letting engineers be genuinely curious about the tool without any professional stakes attached. No one watching their output. No one measuring their velocity. Fear of replacement doesn't operate when the task is asking AI to write a limerick about your pull request queue.
Play isn't a detour around AI adoption. It's the activation energy that makes structured adoption possible. Teams that skip it are measuring a tool against a skeptic's baseline, then wondering why the numbers don't add up.
AI is a skill. Most teams treat it like a feature.
When you treat it like a feature, adoption looks like: read the docs, watch the demo, apply it to your work. When you treat it like a skill, adoption looks like: mess around, build intuition, then apply it to your work.
You'd never expect someone to learn to ski by starting on a black diamond run. You'd put them on a bunny slope. Low stakes, high repetition, lots of small failures that don't matter. The same logic applies here, and almost no one applies it.
The difference only shows up at scale. Teams that did the play phase first have engineers who course-correct quickly when AI produces something wrong. They've seen enough edge cases during low-stakes experimentation that bad output doesn't throw them. Teams that skipped it have engineers who either over-trust outputs or abandon the tool entirely when it fails once. Both of those outcomes are expensive.
What we actually did
Before any organized adoption effort, we gave engineers two weeks of unstructured time with the tools we were evaluating. The only rule: don't use it for work you're accountable for yet. Use it for anything else.
Some people used it for personal projects. One engineer automated something in his home lab. Another spent time drafting emails in the voice of historical figures, which produced zero business value and apparently considerable amusement. Nobody was optimizing for outcomes.
When we started the structured phase afterward, the conversation changed. Instead of asking "does this tool work?", people were saying "here's where I've found it breaks, here's where it's genuinely useful." They arrived with intuition already built. The onboarding wasn't starting from scratch. It was channeling something that already existed.
A few things we learned:
- The play period needs to be explicitly sanctioned, not just tolerated. If engineers think it's technically allowed but professionally frowned upon, they'll find something productive-looking to do with it. That defeats the purpose.
- Two weeks is roughly right for a team with limited prior AI exposure. Less than a week isn't enough time to move past initial skepticism. More than three and people lose the thread.
- Give people a few absurd prompts to start with. "Ask it to explain your last bug fix as if it were a 1920s radio drama." These aren't jokes. They're on-ramps. They lower the barrier to that first real interaction and make the tool feel approachable rather than evaluative.
One decision, not a checklist
If you're planning a structured AI adoption effort, do this first: give your engineers one week to use AI for anything except their assigned work. No metrics. No reporting. No showcasing. Just time.
At the end of the week, run a fifteen-minute conversation about what surprised them. Not what they think the team should do with AI. Not a productivity pitch. Just what surprised them.
That conversation will tell you more about your team's actual readiness than any survey or pilot program. And the people who come into your structured adoption phase afterward will arrive with something training alone can't create: a mental model built from genuine exploration, not obligation.
When your engineers first used AI, were they trying to prove something, or trying to discover something? And which of those modes do you think produced better intuition?