
Unit economics of generative AI products - why most lose money

Most generative AI products have negative unit economics and lose money on every user. Here is the uncomfortable reality about AI product profitability and what it takes to build sustainable businesses.

Key takeaways

  • Most AI products burn money per user - Even market leaders like OpenAI and Anthropic operate at massive losses, with inference costs consuming most of their revenue
  • Inference costs dominate everything - AI companies spend 60-80% of operating expenses on inference alone, fundamentally different from traditional SaaS economics
  • Costs drop fast but usage grows faster - While AI costs decline 10x yearly, user adoption and query volume can outpace savings, trapping you in a profitability squeeze
  • Specialized tools win on economics - Products like Midjourney achieve profitability by focusing on specific high-value use cases rather than trying to be everything to everyone

OpenAI is losing billions this year.

Anthropic projects to lose billions as well. These are not startups figuring things out. These are the companies that defined generative AI, with millions of users and substantial revenue. And they are hemorrhaging money on every query.

The unit economics of generative AI products are broken. Not broken in a “we will figure it out” way. Broken in a “the fundamental business model does not work yet” way.

Why AI economics are different

Traditional SaaS products have beautiful economics. You build software once, host it cheaply, and serve unlimited users for basically nothing. An extra user costs you almost nothing. Gross margins reach very high levels.

AI products work backwards.

Every user interaction costs you real money. Every prompt burns compute. Every response requires expensive GPU time. IBM research shows that inference costs can account for a substantial majority of total operating expenses for AI-first companies. That is not a margin problem. That is a business model problem.

When I look at AI products through the lens of building Tallyfy, the contrast is stark. Our traditional SaaS features cost almost nothing to serve at scale. Our AI features cost us every single time someone uses them. An AI-powered software business has fundamentally different margins from traditional SaaS.

The math gets worse. Infrastructure costs are expected to climb 89% between 2023 and 2025. Not 8.9%. Eighty-nine percent. And that is before you account for the explosive growth in usage as AI becomes mainstream.

The cost squeeze nobody talks about

Here is what makes the unit economics of generative AI products so tricky. Costs are dropping fast. Really fast. The cost of LLM inference has dropped by a factor of 1,000 in 3 years. For equivalent performance, costs drop by roughly 10x every year.

Sounds good, right?

Except usage grows faster than costs decline. ChatGPT reportedly cost OpenAI tens of millions just to process prompts in January 2023 alone. One month. Microsoft’s Bing AI chatbot needs billions in infrastructure to serve all Bing users. Not develop. Just to run.

The trap works like this: you price your product based on current costs. Users love it because AI is magical. Adoption explodes. Your costs scale linearly with every new query. Yes, costs per query drop over time, but if your user base grows 10x and query volume per user doubles, you are still losing money faster than before.
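The squeeze above is simple arithmetic. A minimal sketch, using hypothetical numbers (not any company's reported figures), shows how total spend can double even while per-query cost falls 10x:

```python
# Illustrative sketch of the profitability squeeze described above.
# All numbers are hypothetical assumptions, not reported figures.

cost_per_query = 0.01      # dollars per query, year 0
users = 100_000
queries_per_user = 50      # queries per user per month

monthly_cost_y0 = cost_per_query * users * queries_per_user

# A year later: per-query cost drops 10x, but the user base grows 10x
# and each user runs twice as many queries.
cost_per_query_y1 = cost_per_query / 10
users_y1 = users * 10
queries_per_user_y1 = queries_per_user * 2

monthly_cost_y1 = cost_per_query_y1 * users_y1 * queries_per_user_y1

print(monthly_cost_y0)  # 50000.0
print(monthly_cost_y1)  # 100000.0 -- total spend doubles despite 10x cheaper inference
```

If revenue is a flat subscription, the revenue line grows 10x with users while the cost line grows 20x, and the gap widens as the product succeeds.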

The companies with the best technology and biggest user bases cannot make the math work. OpenAI hit substantial annualized revenue but projects to burn billions in cash in 2025. Anthropic forecasts billions in revenue while losing billions.

That is not a scaling problem. That is a fundamental economic problem.

What breaks most AI business models

Most companies building AI products make three mistakes with unit economics.

First, they treat AI features like traditional features. They bolt on AI capabilities to existing products without rethinking their entire pricing model. You cannot charge traditional SaaS prices when your cost structure looks like a utility company.

Second, they underestimate how users will actually use AI. If you charge a flat rate and costs scale with usage, the distribution of queries per user becomes life or death. A small group of power users can make your entire product unprofitable. Companies are responding by raising prices substantially, but that limits adoption precisely when you need scale.

Third, they ignore the hidden costs beyond inference. Research shows you need to factor in cloud compute, software integration, developer time, data storage, MLOps, and continuous model retraining. The total cost of ownership for GenAI initiatives often exceeds initial expectations by multiples.

The failure rate tells the story. MIT research found a 95% failure rate for enterprise AI solutions. Gartner predicts that at least 30% of generative AI projects will be abandoned after proof-of-concept by end of 2025, mainly due to escalating costs or unclear business value.

Every executive reported canceling or postponing at least one generative AI initiative due to cost concerns. Not some executives. Every single one surveyed.

How Midjourney broke the pattern

One company figured it out. Midjourney.

Midjourney generates substantial annual revenue with just 40 employees. They became profitable in August 2022, six months after launch. No venture capital. No massive infrastructure spend. No desperate hunt for business model fit.

They did something different. They focused on one specific use case - AI image generation - and charged appropriately for it. Their pricing is straightforward: tiered subscriptions from basic plans to professional tiers, with usage limits that match their cost structure. When users need more, they pay modest rates for extra GPU time. Simple. Transparent. Profitable.

The lesson is not “build an image generator.” The lesson is that specialized tools focused on specific high-value use cases can achieve sustainable unit economics in generative AI. Midjourney does not try to be everything. They do not add features because they can. They picked a problem where users clearly understand value and are willing to pay enough to cover real costs.

Compare this to the companies trying to be AI platforms for everything. They are trapped between needing scale to justify infrastructure investment and bleeding money on every query. Midjourney prints money with a team smaller than most startup engineering departments.

Making AI economics work

Building products that make economic sense

If you are building an AI product or adding AI to an existing product, the unit economics have to drive every decision from day one.

Start with the cost structure, not the features. Know exactly what each user interaction costs you. Build in monitoring so you can track cost per user, cost per query, and cost per value delivered. Most companies treat AI costs as “infrastructure” and lose visibility into what is actually burning money.
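That monitoring does not need to be elaborate. A minimal sketch of per-interaction cost attribution, assuming you know your provider's per-token rates (the rates, function names, and user IDs below are hypothetical placeholders):

```python
# Minimal per-user AI cost tracking. Rates are assumed placeholders;
# substitute your provider's actual per-token pricing.
from collections import defaultdict

PRICE_PER_1K_INPUT = 0.003    # dollars per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.015   # dollars per 1,000 output tokens (assumed)

costs_by_user = defaultdict(float)
queries_by_user = defaultdict(int)

def record_query(user_id: str, input_tokens: int, output_tokens: int) -> float:
    """Attribute the cost of one AI query to the user who ran it."""
    cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    costs_by_user[user_id] += cost
    queries_by_user[user_id] += 1
    return cost

def cost_per_query(user_id: str) -> float:
    """Average cost of this user's queries so far."""
    n = queries_by_user[user_id]
    return costs_by_user[user_id] / n if n else 0.0

# Example: a heavy user quickly dominates spend relative to a light one.
record_query("alice", input_tokens=2_000, output_tokens=1_000)
record_query("bob", input_tokens=200, output_tokens=100)
```

Even this much is enough to surface the power-user distribution problem: once cost per user is visible, you can see which accounts a flat-rate plan is losing money on.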

Price for your actual costs, not for what competitors charge. If your AI feature costs you real money every time someone uses it, your pricing has to reflect that. Usage-based pricing is not just trendy. It is the only way to align your revenue with your costs. Flat-rate pricing works when costs are fixed. AI costs are never fixed.

Gate expensive features carefully. Not every feature needs AI. Not every AI feature needs the most expensive model. Analysis shows that using appropriately sized models and caching responses can cut costs dramatically without hurting user experience.
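Caching is the cheapest of these wins: an identical prompt should never be paid for twice. A minimal sketch, where `call_model` is a stand-in for whatever billable inference API you use (an assumed name, not a real library call):

```python
# Sketch of response caching to avoid paying twice for identical prompts.
# call_model is a stand-in for a real (billable) inference API call.
import hashlib

_cache: dict = {}

def call_model(prompt: str) -> str:
    """Placeholder for the expensive inference call."""
    return f"response to: {prompt}"

def cached_completion(prompt: str):
    """Return (response, was_cached). Identical prompts hit the cache."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:
        return _cache[key], True      # free: no GPU time spent
    response = call_model(prompt)     # the expensive path
    _cache[key] = response
    return response, False
```

In production you would add an expiry policy and, for paraphrased rather than identical prompts, possibly a semantic cache; but even exact-match caching eliminates a surprising share of repeated queries.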

Focus on high-value use cases where users will pay enough to cover your costs and profit margin. General-purpose AI tools have terrible unit economics because they try to be useful for everything at a price point that works for nothing. Specialized tools that solve expensive problems can charge appropriately.

Consider hybrid models. McKinsey research shows the biggest ROI comes from back-office automation, not customer-facing features. Use AI internally to cut costs before you use it externally where costs scale with users.

The uncomfortable math

The reality is that most AI products today are subsidized by venture capital. The business models do not work. The unit economics are negative. Companies are betting that costs will drop fast enough or that they will find pricing models that work before they run out of money.

Some will figure it out. The costs are dropping fast enough that if you can control usage patterns and price appropriately, sustainable unit economics for generative AI products are possible. Midjourney proves it. So do specialized AI tools in vertical markets where value justifies premium pricing.

But most will not. The companies trying to build general-purpose AI products at consumer prices are playing a game where the economics get worse as they grow. More users means more losses. Better features means higher costs. Growth becomes the problem, not the solution.

For mid-size companies, this matters because the vendors selling you AI products are not telling you about their unit economics problems. They are selling you features and capabilities without admitting that they lose money on every query you run. That affects their long-term viability, their pricing stability, and whether they will even exist in three years.

If you are buying AI products, ask about their cost structure and pricing model. If you are building AI products, fix the unit economics before you scale. And if you are investing in AI products, understand that revenue growth without margin improvement is just buying your way to a bigger loss.

The AI revolution is real. The technology is transformative. But the unit economics are broken for most products, and pretending otherwise is expensive.

About the Author

Amit Kothari is an experienced consultant, advisor, and educator specializing in AI and operations. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.