How to build an AI champion network that actually drives adoption
Top-down AI mandates generate compliance, not commitment. The companies seeing real adoption are building three-tier champion networks where trusted peers test, validate, and spread AI use cases from the middle of the organization outward.

The short version
AI adoption works when trusted peers carry it forward, not when executives mandate it. Build a three-tier structure with a steering committee setting direction, a working group managing execution, and a distributed champion network testing use cases in real workflows before wider rollout.
- Top-down AI mandates fail 82% of the time because they create compliance without commitment
- Champions are not just the tech-savvy people; they are the trusted influencers whose colleagues actually listen to them
- Two-week champion sprints prevent half-baked AI rollouts by validating use cases in real conditions first
- The biggest killer of champion programs is treating them as unpaid extra work with no executive air cover
Every AI steering committee I’ve observed follows the same arc. Smart people meet monthly. They discuss strategy. They approve pilots. Then the pilots die somewhere between the meeting room and the actual teams doing the work.
The gap between executive intent and frontline reality is where most AI programs go to die. Research from AI4SP across 600,000 individuals and organizations found that tools mandated from the top down fail 82% of the time. Meanwhile, grassroots AI adoption succeeds roughly 70% of the time. The math is brutally clear.
But pure bottom-up adoption has its own problems. It fragments. People pick random tools. Shadow AI proliferates. Nobody learns from each other. You get pockets of excellence surrounded by organizational chaos.
The answer is a structured middle path. A champion network.
What a three-tier structure looks like
The organizations getting this right use three distinct layers, each with different responsibilities and cadences.
At the top sits the steering committee. Five to nine senior leaders who set direction, allocate resources, and provide political cover. They meet biweekly. Their job is not to approve every use case. Their job is to remove obstacles that champions can’t remove on their own. If you’ve already built a governance framework, your steering committee probably exists. You just need to connect it to the layers below.
In the middle sits the working group. This is a cross-functional team of six to ten people who translate strategy into practical guidance. They maintain the approved tools list, create templates, document successful use cases, and coordinate champion activities. They meet weekly. Think of them as the operating system that keeps the champion network functioning.
At the base sits the champion network itself. These are the people embedded in departments across the organization. They test AI tools in their actual workflows, train peers through demonstration rather than lectures, surface use cases nobody in the steering committee would think of, and report back on what works and what doesn’t.
Citi built exactly this structure and reached over 70% adoption of firm-approved AI tools across 182,000 employees. They started with 25 to 30 AI Champions coordinating a broader network of more than 4,000 AI Accelerators embedded across teams in 84 countries. The scale is impressive. The principle applies at any size.
Finding the right champions
This is where most programs go wrong first.
The instinct is to pick the most technical people. The developer who’s already built three internal tools. The analyst who automates everything. The person who won’t stop talking about large language models in the break room.
These people matter. But they’re not your best champions.
Your best champions are the people others already trust and listen to. The operations manager who everyone goes to when they’re stuck. The sales lead whose opinion carries weight in team meetings. The HR coordinator who somehow knows everyone in the building by name.
GitHub’s AI champions playbook describes these people as part coach, part translator, and part feedback loop. That’s exactly right. A champion’s value isn’t technical skill. It’s peer trust. When this person says “I tried this and it saved me two hours,” people believe them.
A useful benchmark from OpenAI’s champion program guidance is to aim for 5 to 10% of your initial AI user base as champions, with one champion lead for every 10 to 20 champions. For a 300-person company, that’s 15 to 30 champions with two or three leads coordinating them.
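If it helps to see those ratios as plain arithmetic, here is a back-of-envelope sizing sketch. It is my illustration, not something from OpenAI's guidance beyond the percentages cited above, and the function name and rounding choices are hypothetical:

```python
def size_champion_network(headcount: int) -> dict:
    """Rough sizing using the ratios cited above:
    champions = 5-10% of the initial AI user base,
    one champion lead per 10-20 champions."""
    champions_low = round(headcount * 0.05)
    champions_high = round(headcount * 0.10)
    # Fewest leads: the low champion count grouped into larger cohorts of 20.
    leads_low = max(1, round(champions_low / 20))
    # Most leads: the high champion count split into smaller cohorts of 10.
    leads_high = max(1, round(champions_high / 10))
    return {
        "champions": (champions_low, champions_high),
        "champion_leads": (leads_low, leads_high),
    }

# For a 300-person company: 15-30 champions, roughly one to three leads.
print(size_champion_network(300))
```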
Here’s what surprised me when I started paying close attention to successful programs: the best champions often come from non-technical functions. Finance. Operations. Marketing. Customer support. They bring the perspective of normal users who need AI to solve real problems, not technology enthusiasts looking for interesting puzzles.
What champions actually do day to day
The job description for a champion is deceptively simple. But the specifics matter.
Peer training through doing, not presenting. Champions don’t run workshops. They sit next to someone, watch them struggle with a task, and show how AI handles it. The peer learning approach works because it’s contextual. A champion in accounts receivable knows exactly which invoicing headaches AI can fix, because they deal with the same headaches.
Use case discovery. Champions find applications nobody at the executive level would ever think of. The legal team using AI to compare contract versions. The facilities manager using it to draft maintenance schedules. The recruiter using it to write more honest job descriptions. These micro-use-cases add up faster than any top-down initiative.
Feedback collection. Champions are the early warning system. They hear the complaints, the confusion, the workarounds. They know which tools people actually use versus which ones they open once and abandon. This intelligence is worth more than any survey.
Resistance handling. When a team member pushes back on AI, they’re not going to be convinced by an email from the CEO. But when a peer they respect says “I felt the same way, and here’s what changed my mind,” that lands differently. Champions don’t overcome resistance through authority. They dissolve it through credibility. This is the core of good change management; people trust peers more than policies.
Running champion sprints
Here’s the practical mechanism that separates champion networks that produce results from ones that produce meetings.
Champion sprints are two-week cycles where a small group of champions tests and validates a specific use case before it gets rolled out more broadly. The structure is straightforward.
Week one: explore and test. Three to five champions in a relevant department pick a use case. They try it in their actual work. Not in a sandbox. Not in a demo environment. In the messy, real conditions where things break and edge cases surface. They document what works, what doesn’t, what’s confusing, and what’s missing.
Week two: validate and package. Champions refine the approach based on week one results. They create a simple one-page guide (not a 40-page playbook) that any colleague could follow. They present findings to the working group with a clear recommendation: roll it out, modify it, or kill it.
This sprint approach does two things. First, it prevents the nightmare of rolling out an AI tool to 200 people only to discover it doesn’t work for the most common use case. Second, it gives champions tangible wins on a regular cadence. They’re not volunteering indefinitely. They’re committing to two weeks at a time.
The working group maintains a backlog of use cases to test. Champions pull from this backlog based on their department and expertise. Over six months, a 20-person champion network can validate 30 to 40 use cases. That’s a library of proven applications, not theoretical possibilities.
One critical detail: rotate champions through sprints. Don’t let the same three people carry every sprint for six months straight. Rotation keeps the workload distributed, brings fresh perspectives, and prevents the creeping resentment that comes from feeling permanently “voluntold” for extra duties.
How champion programs fail
The failure modes are predictable. Which means they’re preventable.
Treating it as extra work. This is the number one killer. Champions already have full-time jobs. If championing AI is just added to their plate with no corresponding reduction in other responsibilities, they burn out. UC Berkeley research tracked early AI adopters and found that productivity gains from AI tools often morphed into expanded workloads rather than freed-up time. The same pattern hits champions hard. They take on more because the tools make more feel possible, and eventually the whole thing collapses.
Allocate 10 to 20% of their time explicitly. Make it part of their performance goals. If you can’t do that, you’re signaling that this work doesn’t actually matter.
No executive air cover. Champions need protection. When a department head pushes back on someone spending time on AI experiments instead of “real work,” the steering committee needs to step in. Without that top cover, champions retreat to their day jobs and the network goes quiet. HBR’s research on organizational barriers confirms that executive sponsorship isn’t optional; it’s the difference between programs that survive and ones that dissolve.
No metrics. You need to measure what champions produce. Number of use cases validated. Adoption rates in champion-supported teams versus others. Time saved on validated workflows. Peer feedback scores. Without numbers, the steering committee loses interest and budget follows attention.
Letting enthusiasm replace structure. Early on, champions are excited. They volunteer. They stay late. They evangelize. This feels great. It’s also unsustainable. The programs that last five years look nothing like the ones that seem exciting at month three. Structure, cadence, rotation, and explicit boundaries prevent the burnout that kills AI’s most enthusiastic adopters first.
Picking only technologists. If your champion network is entirely composed of people who already love technology, you’ve built an echo chamber. You need the skeptics, the practical operators, the people who will say “this doesn’t work for my workflow” and force you to make it better. Diversity of perspective beats depth of technical knowledge every time.
I’ll be honest about something. The first time I saw a well-run champion program in action, I was frustrated. Not at the program itself. At how many organizations skip this step entirely and then blame the technology when adoption stalls. The infrastructure for adoption is not complicated. It’s just work that nobody wants to fund because it doesn’t look like innovation. It looks like coordination. And coordination is unglamorous but essential.
The companies that build these networks don’t just adopt AI faster. They adopt it better. They find use cases nobody predicted. They catch problems before they scale. They build organizational muscle that transfers to the next technology shift, whatever that turns out to be.
Start with five champions. Give them a sprint. See what they find. That’s the whole playbook.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.