Cross-disciplinary AI programs that work

Hiring AI experts does not fix your AI problem. The programs that actually scale combine deep domain expertise with technical capability in integrated teams. Most companies build the wrong team structure first: pure technologists who cannot understand business context. Success requires cross-disciplinary squads where domain experts, engineers, and process designers work together daily from day one.

Key takeaways

  • Domain experts outperform AI specialists - Teams combining business knowledge with technical skills achieve 30% higher success rates than pure technical teams
  • Cross-disciplinary AI programs require structural change - Success demands transformation squads with business domain experts, AI engineers, and process designers working together daily
  • The ROI gap is closing fast - 74% of organizations now report positive ROI from AI, with 82% of decision-makers using AI at least weekly
  • Different metrics reveal different problems - Traditional ROI measurements miss the point when leading organizations achieve 57% productivity increases while others struggle to measure value

Your AI team is failing.

Not because they lack technical skills. Not because you chose the wrong vendor. Your team fails because you hired AI people to solve business problems they do not understand.

The Wharton 2025 AI Adoption Report surveyed 800 enterprise leaders and found 74% already see positive ROI from AI. But here’s the catch: those returns concentrate in organizations with the right team structures. The ones still struggling? They built teams the wrong way from the start.

The programs that work? They built cross-disciplinary AI programs from day one.

Why your technical team cannot solve this alone

Here’s what happens when you staff an AI initiative with pure technical talent. Smart people. Strong credentials. They build impressive models. Then those models sit unused because nobody in the business understands them or trusts them.

I’ve watched this pattern repeat. The AI team speaks in embeddings and inference latency. The business team speaks in customer churn and revenue per account. They schedule meetings. They create Slack channels. But they’re having different conversations in the same language.

PwC’s 2025 Global AI Jobs Barometer found something striking: productivity growth has nearly quadrupled in industries most exposed to AI. But that growth concentrates in organizations where domain experts work alongside technical teams. Workers with AI skills command a 56% wage premium, up from 25% the year before.

The gap isn’t communication. It’s fundamental understanding.

Your ML engineer can build a brilliant recommendation engine. But they cannot tell you which customer behaviors actually predict churn in your specific market. That requires someone who has lived in your business, understood your customers, made decisions with incomplete information.

Gartner’s CIO talent planning survey shows 87% of enterprises have implemented or plan to implement AI engineering roles. But here’s the pattern: the successful ones aren’t hiring pure technologists. They’re building roles that combine domain expertise with technical capability. Because organizations finally figured out that technical capability without business context is just expensive science projects.

The structure that actually works

The successful cross-disciplinary AI programs I’ve seen share a specific structure. Not a matrix organization where people report to two bosses. Not a center of excellence that advises from the sidelines.

Transformation squads.

These teams combine business domain experts, process designers, AI and MLOps engineers, IT architects, software engineers, and data engineers. Same squad. Same daily standups. Same success metrics.

Bowling Green State University is launching the nation’s first “AI + X” bachelor’s degree, combining AI with any other discipline. This isn’t just an elective track. It’s a fundamental restructuring that recognizes AI implementation requires deep domain expertise alongside technical capability. The University of Wisconsin-Madison created an entire College of Computing and AI, their first new college since 1983. Because they learned that separating technical training from domain application produces graduates who cannot actually implement AI in real organizations.

The critical difference: these are not coordination meetings between separate teams. These are integrated teams where the domain expert sits next to the ML engineer while they design the model. Where the process designer and the data engineer map workflows together. Where decisions get made with full context.

Three things happen when you structure teams this way. Domain experts learn enough about AI to ask the right technical questions. AI specialists learn enough about the business to build actually useful models. And everyone stops talking past each other.

But here’s where most organizations fail: they try to retrofit this structure onto existing teams. You cannot. You need to build new teams with cross-disciplinary AI programs as the foundation, not an addition.

What to measure when everything is new

Traditional metrics lie in AI programs.

Wharton’s research shows 74% of organizations now report positive ROI from AI. But dig deeper and you find 72% formally measure ROI, focusing on productivity and incremental profit. The top performers? They achieve 57% productivity increases and generate 300% ROI on AI training investments. The gap between leaders and laggards is widening.

The problem isn’t that AI delivers no value. The problem is that value accrues to organizations measuring the right things.

ROI assumes you know the input costs and output value. But in early AI programs, you’re learning what’s even possible. You’re discovering which processes can be automated. You’re finding unexpected use cases. Standard financial metrics miss all of that.

Better metrics focus on different questions:

  • How many business processes have we successfully augmented with AI?
  • What percentage of domain experts are actively using AI tools in their daily work?
  • How quickly can we deploy a new AI capability from concept to production?
  • Are our AI-augmented processes outperforming manual processes on quality, not just cost?

The emerging Role-Based ROI Framework aligns AI training to organizational roles rather than abstract skill sets. Corporate training research shows the metrics that matter: AI tool adoption rates, reduced processing times, lower error rates, and new AI-driven projects. But the critical insight is sequencing. Time savings come first. Cost reduction comes second. Quality improvements come third.

Most organizations measure cost reduction first because it’s easiest to quantify. That’s exactly backward.

The teams that scale AI programs track learning velocity: how fast are we getting better at identifying AI opportunities and deploying solutions? That’s the leading indicator. Revenue and cost benefits are trailing indicators that show up later.
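For teams that want to make learning velocity concrete, here is a minimal sketch of one way to track it: record the concept and production dates for each deployed capability and check whether the concept-to-production lead time is shrinking. The record fields and capability names are hypothetical, not drawn from any specific program.

```python
from datetime import date

# Hypothetical deployment records: concept date and production date per AI capability
deployments = [
    {"name": "churn-model", "concept": date(2025, 1, 6), "production": date(2025, 4, 14)},
    {"name": "doc-triage", "concept": date(2025, 3, 3), "production": date(2025, 5, 12)},
    {"name": "quote-assist", "concept": date(2025, 5, 1), "production": date(2025, 6, 9)},
]

def cycle_days(d):
    """Concept-to-production lead time in days for one capability."""
    return (d["production"] - d["concept"]).days

# Learning velocity: is each successive deployment faster than the last?
lead_times = [cycle_days(d) for d in deployments]
improving = all(later < earlier for earlier, later in zip(lead_times, lead_times[1:]))

print(lead_times)              # lead time per deployment, in days
print("improving:", improving)
```

A falling lead-time series is the leading indicator described above; cost and revenue effects show up in the trailing quarters.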

The collaboration problem nobody solves

Here’s what kills cross-disciplinary AI programs: terminology.

Your clinical experts use one vocabulary to describe patient outcomes. Your technical team uses a different vocabulary to describe model performance. Those vocabularies map to different data collection approaches, different analytical frameworks, different success criteria.

Medical school research shows this pattern clearly. While 77% of medical schools now cover AI in their curricula, only 12% of medical faculty report being “very familiar” with the technology. Clinical experts and technical experts describe the same concepts differently. That leads to different data collection, different management approaches, different analyses. The teams think they’re aligned because they nod in meetings. They’re not aligned.

The solution isn’t translation. Translation assumes one group speaks the real language and the other needs interpretation. That’s wrong.

The solution is building a shared language from scratch. Your cross-disciplinary AI team needs to create its own vocabulary that combines domain precision with technical precision. This takes time. Weeks, usually. But without it, you’re building on sand.

Practical approach: take one high-value use case. Spend a week with the domain expert and technical lead mapping every term, every concept, every metric. Write it down. Get specific. “Customer satisfaction” means what exactly? “Model accuracy” measures what specifically?

Do this before building anything. The teams that skip this step build technically correct solutions to the wrong problems.

Where smart companies start

An August 2025 study of over 1,000 executives and hiring managers found 93% rate written and oral communications, critical thinking, and ethical judgment as important skills. Tech companies like Apple, Microsoft, and Google actively recruit liberal arts talent because designing products requires empathy and cultural awareness. But a diverse mix of backgrounds alone is not enough. You need structural support to make it work.

Start with one problem that matters. Not the biggest problem. Not the flashiest AI opportunity. Find a problem where domain expertise clearly matters and technical capability clearly matters and solving it creates measurable value.

Build a squad: one deep domain expert, one ML engineer, one process designer, one data engineer. Four people. Co-located if possible, daily video calls if not.

Give them three months. Not to build production systems. To build shared understanding and one working prototype that proves the concept.

OpenAI and GitHub both report that internal AI champion networks are among the most effective approaches to driving real AI adoption, more effective than centralized training programs alone. The most successful programs position champion responsibilities as 30-60 minutes per week, designed to fit within existing work. Without executive cover and internal advocates, cross-disciplinary teams drown in organizational politics.

Your job as the executive: remove obstacles. When the domain expert says they need different data, get them different data. When the ML engineer says they need better compute, get them better compute. When the whole squad says the existing process is broken, let them fix it.

Three months proves whether the model works. It proves whether the team works. It proves whether the organization can actually support cross-disciplinary AI programs or whether the immune system will reject the transplant.

Most AI initiatives fail because companies skip this proving phase. They jump straight to enterprise-wide rollout with teams that have never built shared understanding. Then they wonder why adoption is low and results are disappointing.

Small squad. Real problem. Three months. Prove the model before you scale the model.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.