Copilot Catalyst Lesson 2: Sprint Beats Marathon

Many trace the turning point in modern aviation to 1939, when the U.S. Army Air Corps realized America had fewer than 5,000 trained pilots and a global war on the horizon. There was no time for the traditional peacetime programs that had produced a trickle of aviators. By the spring of 1941, the Air Corps had expanded from training 4,500 pilots over two years to targeting 30,000 annually. By 1943, that number had climbed to almost 100,000. To meet the demand, the military built a network of rapid, immersive training fields. Runways were carved into farmland. Simulators were assembled overnight. Instructors improvised lessons as they went.

It was an audacious experiment that ultimately worked. Within five years, more than 250,000 pilots had been trained. The speed forced a redesign of everything: shorter sessions, immediate feedback, peer teaching, and an early expectation that cadets would self-solve in the air. The process was messy, often dangerous, but it built confidence and it built it fast.

Today, a new technology is reshaping the business landscape with similar urgency. Much like flight training in 1939, which had to scale overnight, organizations now face a race of their own: building AI fluency across the workforce. What once felt optional is now a matter of survival. The scramble to scale strategic AI adaptability within organizations has become the defining challenge of this next era of work.


Building Momentum through Speed

When we first rolled out our Copilot Catalyst program, we took a hard look at the AI training options already on the market. Many were effective at teaching functionality, but we realized something important very early on: they were training people well, yet they weren't preparing those companies for the future they were heading into. Our goal from the start was never just to help employees learn how to use Copilot. It was to help them understand how to think with it. That meant knowing where and when to apply AI, when to trust it, and when to challenge it. So we chose not to focus on mechanics. We taught adaptability and judgment.

And as I discussed in my first article in this series, we had to do it at breakneck speed. In some cases, that meant sprints as short as two or three weeks.

There are moments when a slower, more thorough approach makes sense. A pilot preparing for a career in commercial aviation needs that kind of depth. They need precision, repetition, and time to master every control. But a bomber pilot heading out on a wartime run over occupied Europe needs something different. They need readiness and confidence under pressure. They need enough fluency with their tools to make fast decisions in unpredictable conditions. The two missions require different kinds of training because the stakes are different.

That same distinction applies to AI adoption. Before designing any learning program, organizations have to ask a harder question: why are we training? If the goal is to create expert Copilot operators, a long, feature-rich curriculum may be the right approach. But if the goal is to build a workforce that can adapt quickly, experiment with new tools, and respond when technology shifts again, then mastery is not the point. Momentum is.

Our shift to a sprint model came from that realization. We wanted people to get their hands dirty fast. To learn through doing, to build confidence, and to start figuring out how to solve real problems with AI from day one. That choice to move fast was deliberate. It answered a deeper question about what kind of capability we were trying to build.


Making Learning and Adoption Self-Perpetuating

But no one can run at a sprint forever. The pivot to sustained momentum came when we learned that lasting engagement didn't come from more feature training. It came from conversation. We created forums where employees could surface wins, trade prompts, and showcase how they were applying AI in their daily work. The tone was peer-to-peer, not top-down. When people saw colleagues experimenting, sharing real examples, and even laughing at their early missteps, it kept them curious and moving forward.

That sustained momentum also became the opportunity to develop the next wave of internal champions. The early adopters who emerged during the sprint phase became the organization's mentors and facilitators. They helped others navigate new use cases, refine prompts, and apply AI to their own work. Over time, those champions became the backbone of adoption for the organization.

The Air Corps learned that you cannot train for every scenario before takeoff. You prepare people to fly, then trust that experience will do the rest. The same idea applies here. We do not need everyone to master every feature. We need them confident enough to experiment, curious enough to keep learning, and connected enough to help each other improve.

The sprint creates motion. The sustained engagement turns that motion into conversation. And that conversation makes learning and adoption self-perpetuating. When people have the space to share wins, challenge assumptions, and coach the next wave, adoption no longer depends on a program or a training plan. It becomes part of how the organization thinks and works.

This is what happens when you answer the harder question with clarity: why are we training? If the goal is organizational adaptability, not individual expertise, then the measures of success shift entirely. We are not counting certifications or tracking course completions. We are building a culture where experimentation leads, learning spreads, and capability compounds over time.


The Takeaway

If you are thinking about rolling out Copilot to your organization, start with a sprint rather than a marathon of feature training. Generating excitement and momentum early accelerates adoption and creates the kind of shared environment in which your future thought leaders and AI champions can emerge.


Up Next

Next we'll explore Lesson 3: Teach People to Fish, Don't Give Them Cool Tips.

We'll dive a little deeper into teaching adaptability and judgment, and cover some funny but embarrassing real-world examples of what happens when you over-index on feature training.

For those navigating this right now: take what fits, ignore what doesn't. These frameworks are meant for action, not just reading. Need help applying them? Our team at FlexPoint Consulting is available. Feel free to reach out.

And if something here sparked a different insight, or your own experience challenges this, comment below. That's where we all learn.

Engage with the original LinkedIn post here.
