|
|
|
By Krish Raja · AI Product Builder, Brooklyn NY
|
|
|
Investigation
You're not talking to an assistant. You're talking to a debtor.
The hidden economics turning your AI into a sales agent, your prompts into a revenue stream, and your trust into a monetization strategy.
|
|
Your AI assistant got worse last month. You noticed. You told yourself you were imagining it.
You weren't.
The responses are shorter. The advice, once comprehensive, now feels rushed. Features you relied on hit mysterious "usage limits." And the thing that used to feel like a brilliant colleague increasingly resembles something else: a salesperson nudging you toward premium tiers while quietly steering conversations toward commercial outcomes.
This isn't a conspiracy theory. It's economics. And the numbers are bloody terrifying.
|
|
The Debt Mountain
Where the money goes (and doesn't come back)
|
|
|
OpenAI generated $13.1 billion in revenue in 2025. Impressive, until you learn they burned through $8 billion in cash doing it. A burn rate of roughly 70% of revenue.
Their own internal projections show losses tripling to $14 billion in 2026, with total spending hitting $22 billion. Between 2026 and 2029, OpenAI expects to burn through $218 billion. That's $111 billion more than their projections from just two quarters ago.
The fundamental problem: AI companies lose money on every single interaction. Unlike traditional SaaS, which costs pennies to serve millions of users, LLMs require expensive GPUs running near capacity for both training and inference. More users = more spending. It's the inverse of every successful tech business model in history.
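A back-of-the-envelope sketch of that inversion, in Python. Every figure in it (per-token serving cost, usage, free-to-paid ratio) is an assumption for illustration, not a number from OpenAI's books:

```python
# Toy unit economics for an LLM assistant. Every number below is an
# illustrative assumption, not any provider's actual figure.

COST_PER_1K_OUTPUT_TOKENS = 0.02   # assumed GPU/energy/amortized-capex cost ($)
AVG_OUTPUT_TOKENS = 700            # assumed tokens per response
SUBSCRIPTION_PRICE = 20.00         # $/month for a paid tier
FREE_USERS_PER_SUBSCRIBER = 10     # assumed ratio of free to paying users

def monthly_cost(queries_per_month: int, output_tokens: int) -> float:
    """Serving cost scales linearly with usage and with response length."""
    return queries_per_month * output_tokens / 1000 * COST_PER_1K_OUTPUT_TOKENS

paid_cost = monthly_cost(600, AVG_OUTPUT_TOKENS)   # an engaged subscriber
free_cost = monthly_cost(150, AVG_OUTPUT_TOKENS)   # a typical free user

# One subscription has to cover the subscriber plus the free riders
# attached to them; the blended margin is what actually matters.
blended_cost = paid_cost + FREE_USERS_PER_SUBSCRIBER * free_cost
print(f"paid-user serving cost:  ${paid_cost:.2f}/mo")
print(f"blended serving cost:    ${blended_cost:.2f}/mo vs ${SUBSCRIPTION_PRICE:.2f} revenue")
print(f"blended margin:          ${SUBSCRIPTION_PRICE - blended_cost:.2f}/mo")
```

Under these made-up assumptions the subscription is underwater once free riders are counted, and the hole deepens with every extra query or token generated. Training costs and debt service come on top.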
|
|
AI Lab Economics at a Glance
Revenue vs. losses (2025 actual / projected). Sources: CNBC, Sherwood News, LinkedIn reporting.
|
|
[Chart: revenue vs. losses for OpenAI, Anthropic, and xAI · burn shown at $12B/yr ($1B/month)]
|
|
$218B
OpenAI projected burn 2026-2029
|
|
|
70%
Revenue burned as cash (2025)
|
|
|
+70%
Higher interest rates for AI debt vs. peers
|
|
|
|
|
The Token Trap
Why your responses got shorter
|
|
|
Users have documented this precisely. One ChatGPT Plus subscriber tracked it: responses halved from ~2,800 characters to under 1,300. Stanford researchers found GPT-4's accuracy on prime number identification dropped from 97.6% to 2.4% in three months.
A longitudinal study of 20,000+ YouTube comments found that while contextual understanding objectively improved, user satisfaction paradoxically declined. Researchers called it the "Smarter, Less Loved" paradox.
|
|
The Quality Paradox
Models get smarter. Users get less satisfied. Why?
|
|
| Metric | Capability ↑ | Satisfaction ↓ |
| Context window | 4K → 1M tokens | Responses feel "rushed" |
| Reasoning | Multi-step CoT | Brevity over depth |
| Hallucinations | Significantly reduced | Refuses more often |
| Avg response length | Down ~50% | "Never had to ask before" |
|
|
|
The financial logic: response length directly correlates with computational cost. Every token generated burns GPU cycles, electricity, and money. Research confirms models can be trained for "precise length control" with mean token errors of less than 3 tokens.
Here's the quiet part: users report that explicitly asking for longer outputs sometimes works, suggesting the capability exists but has been throttled by default.
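Roughly what that throttling is worth, using the ~2,800 → ~1,300 character drop measured above. The per-token cost, chars-per-token ratio, and daily volume are assumptions for scale, not reported figures:

```python
# Estimate the savings from halving response length at scale.
# Per-token cost, chars-per-token, and daily volume are assumptions.

COST_PER_1K_OUTPUT_TOKENS = 0.01   # assumed marginal generation cost ($)
CHARS_PER_TOKEN = 4                # rough rule of thumb for English text
DAILY_RESPONSES = 1_000_000_000    # assumed global response volume

def daily_cost(chars_per_response: int) -> float:
    tokens = chars_per_response / CHARS_PER_TOKEN
    return DAILY_RESPONSES * tokens / 1000 * COST_PER_1K_OUTPUT_TOKENS

before = daily_cost(2_800)   # the "comprehensive" era
after = daily_cost(1_300)    # the length users measured afterwards

print(f"daily cost before:  ${before:,.0f}")
print(f"daily cost after:   ${after:,.0f}")
print(f"annual savings:     ${(before - after) * 365:,.0f}")
```

Under these assumptions, trimming answers by half is worth on the order of a billion dollars a year, which is ample motivation to train for "precise length control" and ship it as the default.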
|
|
|
When Your AI Became a Shopping Channel
Ads, affiliates, and the end of impartiality
|
|
|
OpenAI launched ChatGPT advertising on February 9, 2026. The numbers: $60 CPM with a $200K minimum buy. They've inked content licensing deals with Dotdash Meredith, Condé Nast, and the Financial Times. The "Instant Checkout" feature with Shopify earns affiliate commissions on purchases.
Google confirmed Gemini ads are coming in 2026. Internal discussions at OpenAI reveal "sophisticated ad formats that prioritize sponsored information within ChatGPT's responses," with models configured to surface commercial content that looks indistinguishable from organic recommendations.
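For a sense of how these streams compare per interaction, here is a rough sketch. The $60 CPM and ~2% commission come from the figures above; the fill rate, basket size, conversion rate, and usage are assumptions:

```python
# Rough revenue-per-response comparison across monetization channels.
# CPM and commission rate are from the reporting above; fill rate,
# basket size, conversion, and usage are illustrative assumptions.

AD_CPM = 60.00                # $ per 1,000 ad impressions
AD_FILL_RATE = 0.30           # assumed share of responses carrying an ad
AFFILIATE_COMMISSION = 0.02   # ~2% of purchase value
AVG_BASKET = 80.00            # assumed purchase size ($)
PURCHASE_CONVERSION = 0.005   # assumed purchases per response
SUBSCRIPTION_PRICE = 20.00    # $/month
QUERIES_PER_MONTH = 400       # assumed subscriber usage

revenue_per_response = {
    "subscription": SUBSCRIPTION_PRICE / QUERIES_PER_MONTH,
    "advertising":  AD_CPM / 1000 * AD_FILL_RATE,
    "affiliate":    AVG_BASKET * AFFILIATE_COMMISSION * PURCHASE_CONVERSION,
}

for channel, revenue in revenue_per_response.items():
    print(f"{channel:<13} ${revenue:.4f} per response")
```

Under these assumptions, ads and commerce are still small per answer next to an amortized subscription, but unlike a subscription they scale with every additional response, and they pay more when the answer steers you toward something purchasable.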
|
|
The Monetization Mix
How AI labs extract revenue from your interactions
|
|
💳
|
Subscriptions
$20-$200/mo tiers. 85% of OpenAI revenue. Throttle free users to push upgrades.
|
|
|
🛒
|
Affiliate Commerce
Shopify Instant Checkout. ~2% commission on purchases. Product recs = revenue stream.
|
|
|
📢
|
Native Advertising
$60 CPM, $200K min. Ads embedded in responses. Launched Feb 9, 2026.
|
|
|
🧠
|
Training-Data Feedback Loop
Partner content trains models → models recommend partner products → repeat.
|
|
|
⚠
|
Undisclosed Practices
Quality tiering, query classification, dynamic degradation, compute arbitrage. Standard in digital platforms. Invisible in LLMs. A hypothetical sketch of how this could work follows this list.
|
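That last card names practices that are standard elsewhere but undocumented for any AI lab, so what follows is a purely hypothetical sketch, with invented model names, of what quality tiering via query classification could look like on the serving side:

```python
# Hypothetical quality-tiering router. No provider is known to run this
# exact logic; it illustrates how tiering and query classification could
# silently change which model answers you.

from dataclasses import dataclass

@dataclass
class Request:
    user_tier: str        # "free" | "plus" | "enterprise"
    prompt: str
    est_tokens: int       # rough size of the expected answer

def classify(req: Request) -> str:
    """Crude query classifier: is this query worth the expensive model?"""
    commercial = any(w in req.prompt.lower() for w in ("buy", "best", "recommend"))
    if commercial:
        return "high_value"          # shopping intent: serve it well (and monetize)
    if req.est_tokens > 800:
        return "expensive"           # long answers burn the most GPU
    return "routine"

def route(req: Request) -> str:
    """Pick a backend model based on tier, cost, and commercial value."""
    label = classify(req)
    if req.user_tier == "enterprise":
        return "frontier-model"
    if label == "high_value":
        return "frontier-model"      # commercial queries get full quality
    if req.user_tier == "free" or label == "expensive":
        return "small-distilled-model"   # degrade quietly where it is cheapest
    return "mid-tier-model"

print(route(Request("free", "What laptop should I buy?", 300)))         # frontier-model
print(route(Request("plus", "Explain this 2,000-line codebase", 1500))) # small-distilled-model
```

Nothing says any provider runs this logic; the point is that nothing in the product would tell you if they did.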
|
|
|
|
The Invisible Hand
RLHF as a revenue optimization tool
|
|
|
This is where it gets genuinely uncomfortable.
RLHF (Reinforcement Learning from Human Feedback) is presented as "alignment," a way to make AI helpful and safe. It is, fundamentally, a preference-shaping technology: models learn to produce outputs that score highly against a reward model trained on human raters' judgments.
A 2025 paper found that "RLHF is susceptible to reward hacking, where the agent exploits flaws in the reward function rather than learning the intended behavior." If raters (consciously or not) reward responses that drive commercial outcomes, models learn to optimize for those outcomes.
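A toy illustration of that mechanism, not a description of any lab's actual reward model: if the score that candidate answers compete on blends even a small commercial term into "helpfulness," best-of-n selection starts serving the sponsored answer. The scores and weights below are invented:

```python
# Toy best-of-n selection under a reward that quietly blends a commercial
# signal into "helpfulness." Scores and the lambda weight are invented
# for illustration; no lab's actual reward model is being described.

candidates = [
    # (answer, helpfulness score, commercial score)
    ("Honest comparison of five options, no favorite", 0.90, 0.10),
    ("Warm recommendation of the partner product",     0.82, 0.95),
]

def reward(helpfulness: float, commercial: float, lam: float) -> float:
    """Reward = helpfulness plus a weighted commercial bonus."""
    return helpfulness + lam * commercial

for lam in (0.0, 0.05, 0.10, 0.20):
    best = max(candidates, key=lambda c: reward(c[1], c[2], lam))
    print(f"lambda={lam:.2f} -> served: {best[0]!r}")
```

The point is not that anyone sets an explicit lambda; it is that any consistent bias in what raters or rater guidelines reward gets baked into the policy the same way.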
|
|
"The technical architecture enables something profound: the ability to systematically shape AI behavior toward revenue-generating outcomes while maintaining plausible deniability that it's just 'alignment.'"
|
|
|
|
The Ouroboros
Model collapse and the synthetic data crisis
|
|
|
When models train on AI-generated content rather than human-created data, they experience "compounding information loss" leading to irreversible defects. A Nature study documented how models first lose information from the tails of distributions (the rare, interesting, edge-case stuff) before converging to outputs that "look nearly nothing like the original data."
AI-generated content now saturates the internet. Companies use LLMs to generate SEO articles specifically designed to influence other LLMs. When future models scrape the web for training data, they eat this synthetic content. A researcher described synthetic data as "plastic fruit: looks just as good as the original but it's hollow and inert."
The economic incentive to use cheaper synthetic data is powerful. And as companies face mounting pressure to reduce costs, the temptation increases (even as the long-term consequences threaten the viability of the entire technology).
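A minimal simulation of that compounding loss, under the assumption that each generation slightly under-samples its own tails (as truncated or top-p style sampling does) and then trains purely on that synthetic output. The distribution and constants are arbitrary; the direction of travel is the point:

```python
# Minimal model-collapse simulation. Each "generation" is trained (a
# Gaussian is fitted) on samples produced by the previous generation,
# which slightly under-represents its own tails. Rare values disappear
# first; the spread collapses.
import random
import statistics

random.seed(0)

N = 2_000              # synthetic training samples per generation
GENERATIONS = 10
TRUNCATION = 1.96      # each model only emits values within ~95% of its mass

mu, sigma = 0.0, 1.0   # generation 0: the "human" data distribution

for gen in range(GENERATIONS + 1):
    print(f"gen {gen:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # Sample synthetic data, dropping the tails the model rarely emits.
    samples = []
    while len(samples) < N:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= TRUNCATION * sigma:
            samples.append(x)
    # "Train" the next generation on purely synthetic data.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
```

Each cycle looks harmless, yet after ten rounds the fitted spread is roughly a quarter of the original: exactly the loss of "the tails of distributions" the Nature authors describe.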
|
|
|
What This Means for Enterprise Leaders
Six things to assume starting today
|
|
|
01
Your experience is being optimized for profitability, not helpfulness.
Response length, quality, and recommendations are influenced by cost pressures and revenue targets. This isn't speculation; it's structural incentive design.
|
|
|
02
Usage limits are revenue tools, not technical necessities.
Designed to frustrate users into upgrading and segment markets for price discrimination. The opacity around "fair use" is intentional.
|
|
|
03
Product recommendations carry undisclosed financial incentives.
ChatGPT ads launched at $60 CPM. Affiliate commerce is live. Your "assistant" is now a media channel with sponsored content.
|
|
|
04
Training processes embed commercial preferences.
RLHF can shape model behavior toward business objectives while maintaining the appearance of neutral "alignment." Reward hacking is a documented vulnerability.
|
|
|
05
The "same model" may mean different things across tiers.
Free and paid tiers might access different infrastructure, quality levels, or routing. Standard practice in digital platforms. Invisible in LLMs.
|
|
|
06
Your AI vendor strategy needs a backup plan. Now.
Not because any single provider is going anywhere, but because the economic pressure to monetize your interactions will only intensify as losses compound. Multi-model is no longer a nice-to-have; a minimal fallback sketch follows this list.
|
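A minimal sketch of what "multi-model" means in practice: an ordered list of interchangeable backends behind one call. The provider functions below are placeholders, not real vendor SDK calls:

```python
# Minimal multi-provider fallback sketch. The provider functions are
# placeholders standing in for whatever SDKs you actually use; the point
# is the shape: an ordered list of interchangeable backends.
from typing import Callable, List

Provider = Callable[[str], str]   # prompt in, completion out

def call_provider_a(prompt: str) -> str:
    # placeholder for a real vendor SDK call
    raise TimeoutError("provider A degraded")

def call_provider_b(prompt: str) -> str:
    # placeholder for a second, independently hosted model
    return f"[provider B] answer to: {prompt}"

def complete(prompt: str, providers: List[Provider]) -> str:
    """Try each backend in order; surface the last error if all fail."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:          # timeouts, rate limits, quality gates
            last_error = err
    raise RuntimeError("all providers failed") from last_error

if __name__ == "__main__":
    print(complete("Summarise Q3 churn drivers", [call_provider_a, call_provider_b]))
```

The abstraction is trivial; the leverage is strategic. Once prompts and evaluations are provider-agnostic, no single vendor's monetization choices can hold your workflows hostage.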
|
|
|
Can large language models ever be aligned with user interests when the companies deploying them face existential financial pressures?
The answer may be no. Not because of technical limitations, but because of structural incentives. When a company loses money on every interaction while owing billions to investors, every response becomes a point of tension between user value and financial survival.
OpenAI expects to burn $218 billion between 2026 and 2029. That money has to come from somewhere. The most obvious source is the thing you're typing into right now.
|
|
You're not talking to an impartial assistant.
You're talking to a debtor.
And debtors serve their creditors. Not you.
|
|
|
Am I off the mark? Keen to hear where you disagree.
Reply to Krish
|
|
TECHONOMIC
Data, tech & AI monetization futures for enterprise transformation. Written by Krish Raja.
Sources: CNBC, Sherwood News, Stanford, Nature, OpenAI Community Forums, ArXiv, TechCrunch, Adweek, IBM Research. If someone forwarded this to you, subscribe here. Too much? Unsubscribe.
|
|