Cross-disciplinary AI programs that work

Hiring AI experts does not fix your AI problem. The programs that actually scale combine deep domain expertise with technical capability in integrated teams. Most companies build the wrong team structure first—pure technologists who cannot understand business context. Success requires cross-disciplinary squads where domain experts, engineers, and process designers work together daily from day one.

Key takeaways

  • Dual expertise beats pure technical talent - Teams combining business knowledge with technical skills achieve 30% higher project success rates than purely technical teams
  • Cross-disciplinary AI programs require structural change - Success demands transformation squads with business domain experts, AI engineers, and process designers working together daily
  • Most organizations are not scaling - Only one-third of companies scale their AI programs, with most attributing less than 5% of earnings to AI use
  • Different metrics reveal different problems - Traditional ROI measurements miss the point when 95% of organizations see zero return despite massive investment

Your AI team is failing.

Not because they lack technical skills. Not because you chose the wrong vendor. Your team fails because you hired AI people to solve business problems they do not understand.

McKinsey found only one-third of organizations are actually scaling their AI programs. Most report less than 5% of their earnings come from AI use. That’s after years of investment and hype.

The programs that work? They built cross-disciplinary AI programs from day one.

Why your technical team cannot solve this alone

Here’s what happens when you staff an AI initiative with pure technical talent. Smart people. Strong credentials. They build impressive models. Then those models sit unused because nobody in the business understands them or trusts them.

I’ve watched this pattern repeat. The AI team speaks in embeddings and inference latency. The business team speaks in customer churn and revenue per account. They schedule meetings. They create Slack channels. But they’re having two different conversations in the same meeting.

Research shows professionals with expertise in both technical and domain-specific areas have 30% higher project success rates. Not 3%. Thirty percent.

The gap isn’t communication. It’s fundamental understanding.

Your ML engineer can build a brilliant recommendation engine. But they cannot tell you which customer behaviors actually predict churn in your specific market. That requires someone who has lived in your business, understood your customers, made decisions with incomplete information.
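
To make that concrete, here is a minimal sketch of what that collaboration produces. Every column name and threshold below is a hypothetical stand-in for what a domain expert would actually specify:

```python
import pandas as pd

def churn_features(accounts: pd.DataFrame) -> pd.DataFrame:
    """Feature set shaped by domain input, not just whatever columns exist.

    Hypothetical example: the domain expert flags two behaviors that
    generic models miss in this market -- shrinking order sizes and
    support tickets escalating into billing disputes.
    """
    feats = pd.DataFrame(index=accounts.index)
    # Generic signal any engineer would try first
    feats["days_since_last_order"] = accounts["days_since_last_order"]
    # Domain signal: a 20%+ quarter-over-quarter drop in average order
    # value predicts churn here, per the domain expert (assumed threshold)
    feats["order_value_declining"] = (
        accounts["avg_order_value_q"] < 0.8 * accounts["avg_order_value_prev_q"]
    )
    # Domain signal: billing-dispute tickets specifically, not ticket volume
    feats["billing_disputes_90d"] = accounts["billing_dispute_tickets_90d"]
    return feats
```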

Gartner predicts the domain-specific AI market will hit $11.3 billion by 2028. Not generic models. Domain-specific. Because organizations finally figured out that technical capability without business context is just expensive science projects.

The structure that actually works

The successful cross-disciplinary AI programs I’ve seen share a specific structure. Not a matrix organization where people report to two bosses. Not a center of excellence that advises from the sidelines.

Transformation squads.

These teams combine business domain experts, process designers, AI and MLOps engineers, IT architects, software engineers, and data engineers. Same squad. Same daily standups. Same success metrics.

Stanford’s programs bring together faculty from the business school, engineering, law, medicine, and humanities. Not for a single lecture series. For integrated program design. Because they learned that separating technical training from domain application produces graduates who cannot actually implement AI in real organizations.

The critical difference: these are not coordination meetings between separate teams. These are integrated teams where the domain expert sits next to the ML engineer while they design the model. Where the process designer and the data engineer map workflows together. Where decisions get made with full context.

Three things happen when you structure teams this way. Domain experts learn enough about AI to ask the right technical questions. AI specialists learn enough about the business to build actually useful models. And everyone stops talking past each other.

But here’s where most organizations fail: they try to retrofit this structure onto existing teams. You cannot. You need to build new teams with cross-disciplinary AI programs as the foundation, not an addition.

What to measure when everything is new

Traditional metrics lie in AI programs.

UC Berkeley found something remarkable: despite $30-40 billion in enterprise investment, 95% of organizations studied see zero return on their AI initiatives. Zero.

The problem isn’t that AI delivers no value. The problem is measuring value with the wrong instrument.

ROI assumes you know the input costs and output value. But in early AI programs, you’re learning what’s even possible. You’re discovering which processes can be automated. You’re finding unexpected use cases. Standard financial metrics miss all of that.

Better metrics focus on different questions:

  • How many business processes have we successfully augmented with AI?
  • What percentage of domain experts are actively using AI tools in their daily work?
  • How quickly can we deploy a new AI capability from concept to production?
  • Are our AI-augmented processes outperforming manual processes on quality, not just cost?
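
One way a squad might track these questions in practice, sketched below; the field names and thresholds are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ProgramMetrics:
    """Illustrative program-level metrics; names and thresholds are assumptions."""
    processes_augmented: int   # business processes successfully augmented with AI
    experts_using_ai: float    # fraction (0-1) of domain experts using AI daily
    concept_to_prod_days: float  # median days from concept to production
    quality_vs_manual: float   # AI-augmented quality relative to manual (1.0 = parity)

    def healthy(self) -> bool:
        # Placeholder thresholds; set real baselines per program
        return (
            self.experts_using_ai >= 0.5
            and self.concept_to_prod_days <= 90
            and self.quality_vs_manual >= 1.0
        )
```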

Microsoft recommends establishing rigorous impact tracking with clear metrics for value delivered: time savings, cost reduction, and quality improvements. But the critical insight is sequencing. Time savings come first. Cost reduction comes second. Quality improvements come third.

Most organizations measure cost reduction first because it’s easiest to quantify. That’s exactly backward.

The teams that scale AI programs track learning velocity: how fast are we getting better at identifying AI opportunities and deploying solutions? That’s the leading indicator. Revenue and cost benefits are trailing indicators that show up later.
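
If you log the concept-to-production lead time for each deployment, learning velocity can be approximated as the trend in those lead times. A minimal sketch, with an illustrative window size and sample data:

```python
from statistics import median

def learning_velocity(lead_times_days: list[float], window: int = 3) -> float:
    """Ratio of the earliest window's median lead time to the latest window's.

    A value above 1.0 means deployments are getting faster, i.e. the team
    is learning. The window size and the ratio itself are illustrative choices.
    """
    if len(lead_times_days) < 2 * window:
        raise ValueError("Need at least two full windows of deployments")
    first = median(lead_times_days[:window])
    last = median(lead_times_days[-window:])
    return first / last

# Example: lead times in days for six deployments, oldest first
print(learning_velocity([120, 95, 100, 60, 45, 40]))  # ~2.2: getting faster
```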

The collaboration problem nobody solves

Here’s what kills cross-disciplinary AI programs: terminology.

Your clinical experts use one vocabulary to describe patient outcomes. Your technical team uses a different vocabulary to describe model performance. Those vocabularies map to different data collection approaches, different analytical frameworks, different success criteria.

Research on clinical AI teams found this exact pattern. Clinical experts and technical experts described the same concepts differently. That led to different data collection, different management approaches, different analyses. The teams thought they were aligned because they nodded in meetings. They were not aligned.

The solution isn’t translation. Translation assumes one group speaks the real language and the other needs interpretation. That’s wrong.

The solution is building a shared language from scratch. Your cross-disciplinary AI team needs to create its own vocabulary that combines domain precision with technical precision. This takes time. Weeks, usually. But without it, you’re building on sand.

Practical approach: take one high-value use case. Spend a week with the domain expert and technical lead mapping every term, every concept, every metric. Write it down. Get specific. “Customer satisfaction” means what exactly? “Model accuracy” measures what specifically?
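
One lightweight way to record that mapping is a shared glossary that both the domain expert and the technical lead sign off on. The entries below are hypothetical, showing the level of precision to aim for:

```python
# Shared-vocabulary glossary: one entry per term, both sides sign off.
# Entries are hypothetical examples of the required level of precision.
GLOSSARY = {
    "customer satisfaction": {
        "definition": "Post-resolution CSAT score, 1-5 scale, surveyed within 48h",
        "data_source": "support_surveys.csat_score",
        "owner": "domain expert",
    },
    "model accuracy": {
        "definition": "Recall on churned accounts at a fixed 10% alert budget",
        "data_source": "eval pipeline, monthly holdout set",
        "owner": "ML engineer",
    },
}
```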

Do this before building anything. The teams that skip this step build technically correct solutions to the wrong problems.

Where smart companies start

Data from Stanford and MIT shows teams composed of members with different academic backgrounds produce 20% more innovative ideas than homogeneous groups. But diversity alone is not enough. You need structural support.

Start with one problem that matters. Not the biggest problem. Not the flashiest AI opportunity. Find a problem where domain expertise clearly matters and technical capability clearly matters and solving it creates measurable value.

Build a squad: one deep domain expert, one ML engineer, one process designer, one data engineer. Four people. Co-located if possible, daily video calls if not.

Give them three months. Not to build production systems. To build shared understanding and one working prototype that proves the concept.

Research shows 77% of ML implementation leaders had C-level leadership driving their projects. That’s no coincidence. Without executive cover, cross-disciplinary teams drown in organizational politics.

Your job as the executive: remove obstacles. When the domain expert says they need different data, get them different data. When the ML engineer says they need better compute, get them better compute. When the whole squad says the existing process is broken, let them fix it.

Three months proves whether the model works. It proves whether the team works. It proves whether the organization can actually support cross-disciplinary AI programs or whether the immune system will reject the transplant.

Most AI initiatives fail because companies skip this proving phase. They jump straight to enterprise-wide rollout with teams that have never built shared understanding. Then they wonder why adoption is low and results are disappointing.

Small squad. Real problem. Three months. Prove the model before you scale the model.

About the Author

Amit Kothari is an experienced consultant, advisor, and educator specializing in AI and operations. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.