AI team structure: the optimal setup
Most organizations build AI teams backward, hiring specialists before defining what they actually need to build. The most effective university AI lab setup starts with three core functions, cloud infrastructure, and a hybrid collaboration model that scales with real problems.

Key takeaways
- Start with three core roles, not ten specialists - Research engineers, ML engineers, and infrastructure specialists form the foundation that scales efficiently
- Cloud infrastructure beats on-premise for education - Universities running AI labs on cloud platforms reduce setup costs while providing students hands-on experience with production environments
- Hybrid organizational models win - Centralized expertise with decentralized execution delivers both consistent standards and rapid innovation across departments
- Build skills internally before hiring externally - With AI talent shortages hitting 50%, upskilling existing team members often delivers better results than competing for scarce specialists
- Need help implementing these strategies? [Let's discuss your specific challenges](/).
The typical university AI lab setup starts with a hiring spree. Ten specialists, twelve roles, infrastructure decisions postponed until “we have the team in place.”
This fails. Always.
I’ve watched institutions spend months assembling dream teams that never ship because nobody defined what they’re supposed to build. Gartner research shows that AI teams need data scientists, ML engineers, and AI architects working with business domain experts - but most organizations confuse roles with functions and end up with expensive overlap.
Why most AI teams fail before they start
The problem isn’t talent. It’s structure.
McKinsey’s organizational research found that small outcome-focused teams of 2-5 people can supervise 50-100 specialized AI agents running end-to-end processes. But universities keep building teams like it’s 2018 - massive, centralized, disconnected from actual use cases.
When Princeton built their AI Lab, they didn’t start with dozens of researchers. They created shared infrastructure first - 300 H100 GPUs, administrative support, research software engineers - then let specific projects attract the right specialists.
The University of Tokyo took it further. Their Matsuo-Iwasawa Laboratory built out real hardware environments - robot arms, mobile manipulators, simulators, and VR devices - and grew from a core faculty group to 50 members through a research community model that attracted talent to problems, not positions.
Start with infrastructure and clear functions. Talent follows.
The three roles that actually matter
Forget the ten-specialist fantasy. A functioning university AI lab setup needs three core roles that map to actual work:
Research engineers who experiment and prototype. These are the people testing hypotheses, exploring new approaches, and figuring out what’s actually possible with current technology. Not pure theorists, not production engineers. Researchers who code.
ML engineers who move prototypes to production. Research shows ML engineers focus on transitioning models from research to production, building scalable systems that operate in real environments. They work closely with research engineers but solve completely different problems.
Infrastructure specialists who keep systems running. Scrum.org’s analysis of AI team scaling shows that data engineers build and maintain the data pipelines critical for AI development. Without solid infrastructure, research and production both grind to a halt.
Everything else - data scientists, ethicists, NLP specialists, security officers - either maps to these three functions or gets added when specific projects demand it. Deloitte found only 22,000 AI specialists existed globally in 2022. You won’t hire your way into ten distinct roles. You’ll burn budget trying.
Build the three core functions. Let specialists emerge from project needs.
Cloud versus on-premise for university labs
Here’s where universities make expensive mistakes.
On-premise infrastructure requires massive upfront investment - hardware, cooling, power, maintenance staff, physical security - plus ongoing costs for electricity, space, and upkeep. Cost analysis puts the breakeven point at around 12-18 months, but only for organizations running AI workloads continuously.
Universities don’t run AI workloads continuously. Classes happen in bursts. Research projects ramp up and wind down. Student projects spike during semesters then disappear.
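To make the utilization argument concrete, here is a rough breakeven sketch. Every number in it is an illustrative assumption, not a vendor quote - swap in your institution’s actual rates before drawing conclusions:

```python
# Rough cloud-vs-on-premise breakeven sketch.
# All prices below are illustrative assumptions, not real vendor quotes.

CLOUD_RATE_PER_GPU_HOUR = 2.50      # assumed on-demand price for one GPU
ONPREM_CAPEX_PER_GPU = 30_000       # assumed purchase price per GPU
ONPREM_OPEX_PER_GPU_YEAR = 4_000    # assumed annual power, cooling, and support

def annual_cloud_cost(gpu_hours_per_year: float) -> float:
    """Cloud cost scales with the hours you actually use."""
    return gpu_hours_per_year * CLOUD_RATE_PER_GPU_HOUR

def annual_onprem_cost(useful_life_years: int = 3) -> float:
    """On-premise cost is the same whether the GPU is busy or idle."""
    return ONPREM_CAPEX_PER_GPU / useful_life_years + ONPREM_OPEX_PER_GPU_YEAR

# A course using a GPU ~20 hours/week across two 15-week semesters:
bursty_hours = 20 * 15 * 2          # 600 GPU-hours per year
# A research group running the same GPU around the clock:
continuous_hours = 24 * 365         # 8,760 GPU-hours per year

print(f"Bursty teaching load: cloud ${annual_cloud_cost(bursty_hours):,.0f} "
      f"vs on-prem ${annual_onprem_cost():,.0f} per year")
print(f"Continuous research:  cloud ${annual_cloud_cost(continuous_hours):,.0f} "
      f"vs on-prem ${annual_onprem_cost():,.0f} per year")
```

With these assumed numbers, the bursty teaching load costs a fraction of the on-premise bill, and only the around-the-clock workload flips the comparison. The exact prices matter less than the shape of the curve: the decision hinges on utilization.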
CloudLabs and similar platforms transform university AI lab setup by providing cloud-based, customizable hands-on learning environments. Students get dedicated access to Big Data Analytics, Deep Learning, and NLP labs hosted on AWS, Azure, and GCP. When class ends, you’re not paying for idle GPUs gathering dust.
The Minnesota Supercomputing Institute figured this out. They deployed high-performance computing resources powered by cloud infrastructure, enabling researchers to conduct large-scale experiments concurrently without capital expenditure on hardware that becomes obsolete in three years.
For teaching and research that varies by semester, cloud infrastructure wins on economics and student experience. Students learn on the same platforms they’ll use professionally. Universities avoid hardware refresh cycles and maintenance overhead.
Reserve on-premise for the rare cases where you have sustained, predictable workloads that justify capital investment.
Hybrid models beat pure centralization
The debate shouldn’t be centralized versus decentralized AI teams. It should be which elements to centralize and which to distribute.
AWS research on generative AI operating models recommends centralizing foundations - infrastructure, data governance, security standards - while decentralizing innovation across business domains. This hybrid approach balances robust AI governance with agile delivery.
Pure centralization creates bottlenecks. Every department waits for the central AI team to get around to their project. Studies of data team structures show that centralized Centers of Excellence offer control and shared expertise, which is particularly valuable in large organizations, but they sacrifice speed and domain alignment.
Pure decentralization fragments everything. Each department builds its own solutions, none of which talk to each other, all reinventing the same infrastructure and governance patterns. You end up with the fragmentation problem that undermines ROI.
The hybrid or federated model - what some call hub-and-spoke - centralizes infrastructure, security, and standards (the hub) while embedding AI specialists in department teams (the spokes). A university AI lab setup built this way keeps data quality and security consistent while letting departments move fast on domain-specific problems.
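As a rough sketch of what “centralize the standards, decentralize the execution” can look like in practice, here is a minimal configuration pattern. The field names and departments are hypothetical and not tied to any specific cloud platform or governance tool:

```python
# Minimal hub-and-spoke configuration sketch.
# The hub owns shared standards; each spoke (department) adds only
# domain-specific settings. All keys and values here are hypothetical.

HUB_STANDARDS = {
    "data_classification_required": True,   # every dataset gets a sensitivity label
    "allowed_regions": ["us-central1"],     # where workloads may run
    "audit_logging": True,                  # central logging stays on everywhere
}

def build_spoke_config(department: str, overrides: dict) -> dict:
    """Spokes inherit hub standards; they may add settings but not override them."""
    clashes = set(overrides) & set(HUB_STANDARDS)
    if clashes:
        raise ValueError(f"{department} may not override hub standards: {sorted(clashes)}")
    return {**HUB_STANDARDS, "department": department, **overrides}

# Each department moves fast on its own settings within the shared guardrails.
biology = build_spoke_config("biology", {"gpu_quota": 4, "datasets": ["genomics"]})
business = build_spoke_config("business", {"gpu_quota": 2})
print(biology)
print(business)
```

The design point is the guardrail, not the dictionary: spokes extend the hub’s standards but cannot weaken them, which is the property a federated lab needs regardless of the tooling used to enforce it.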
Airbnb learned this the hard way. They transitioned from fully centralized data science to a hybrid model as they scaled, maintaining the data science team as a single unit for career development and standards while dividing it into sub-teams aligned with specific product areas.
Build your hub first. Then grow spokes as departments demonstrate readiness.
Building skills instead of buying talent
The math doesn’t work on hiring.
IBM reports a 50% AI talent gap in 2024, with AI spending expected to exceed USD 550 billion. One-third of tech leaders rate finding employees skilled in AI as their main hiring challenge. Industry surveys show that 60% of IT decision makers think AI constitutes their largest skills shortage.
You can’t compete with tech companies offering equity and unlimited budgets for the 22,000 global AI specialists. But you can develop internal talent.
The most successful organizations focus on upskilling. Research shows 63% of companies now provide in-house data analytics training rather than competing in impossible hiring markets. This works because AI expertise builds on existing domain knowledge - your biology faculty who understand the research problems just need the technical tools.
The key skills aren’t mysterious. Research on AI role requirements identifies working knowledge of security, privacy, data science, statistics, software development, coding, and understanding of models and algorithms as the core competencies. These are teachable.
Universities have built-in advantages here. Google Cloud offers 200 Google Skills credits for students and 5,000 credits for faculty at no cost. AWS Academy provides approximately 40 hours of content delivered through lectures, hands-on labs, and project work. Oracle University offers digital training for cloud architecture, operations, security, and AI.
The infrastructure exists to train your own talent. Use it before burning budget on hiring battles you’ll lose.
Start with three people who want to learn. Give them cloud credits and real problems. Grow from there.
What this means for building your AI team
Stop planning the perfect team structure and start with the minimal viable team.
Three roles. Cloud infrastructure. Hybrid organizational model. Internal skills development. Add specialists only when specific projects prove the need.
The organizations that succeed with AI don’t have the biggest teams or the most PhDs. They have clear functions, appropriate infrastructure, and people learning by shipping real projects.
Your university AI lab setup should look more like a startup than an enterprise. Small, focused, building something students and researchers can actually use. Everything else is planning theater that burns budget without delivering value.
About the Author
Amit Kothari is an experienced consultant, advisor, and educator specializing in AI and operations. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.