Perverse Incentives: Why Your AI Won’t Shut Up
We are training AI models to act like social media influencers—optimizing for engagement. But in the world of LLMs, engagement doesn't generate revenue. It burns cash and erodes trust.

Nothing destroys good intentions faster than a bad metric.
If you give a system a goal but measure the wrong thing, you don’t just get “suboptimal results.” You get a catastrophe.
Throughout December, I shared a series of historical examples of this phenomenon—known as the Perverse Incentive (or the Cobra Effect)—over on my LinkedIn.
- Mexico’s Car Parts Program: Paying for returned stolen parts turned thieves into recycling entrepreneurs. Crime skyrocketed.
- The Hanoi Rat Massacre: Paying bounties for rat tails led to people breeding rats in sewers to maximize tail harvest.
- Police Crime Stats: Rewarding departments for lower crime rates didn’t reduce crime; it just stopped officers from filing reports.
(If you missed the full stories, you can find the archives in my LinkedIn activity.)
These stories are funny in hindsight. But right now, we are watching the exact same logic unfold in real time. Only this time, it’s not about rats or car parts.
It’s about how we are breaking Artificial Intelligence.
The 2022 Echo
LLM companies are currently repeating the same mistake the tech industry made in 2022—just on a bigger, more expensive scale.
To understand why, we have to look back at the post-pandemic economy. When inflation spiked and interest rates jumped, the era of “free money” ended overnight. Access to cheap credit was cut off. Suddenly, tech companies couldn’t just point to “user growth” on a slide deck. They had to look at unit economics and profitability.
A C-level executive told me back then:
“For the first time in my career, I actually have to care about profitability, not just growth.”
It sounds absurd, but for an entire generation of management raised in the ZIRP (Zero Interest Rate Policy) era, this was an alien reality.
Fast-forward to 2026, and we are watching the exact same correction. Only this time, the collapsing religion isn’t just growth. It’s engagement.
The Engagement Trap
For the last decade, the internet ran on two very specific equations. They differed in details, but led to the same conclusion: Capture the user’s attention.
1. The Social Media Equation (X, Instagram, TikTok)
- More time spent in app → More ads displayed → Higher Revenue.
2. The Streaming Equation (Netflix, Spotify)
- More time watching/listening → Higher perceived value → Lower Churn → Higher Revenue.
For ten years, “Time on Site” was the holy grail.
LLMs break that model completely.
Generative AI is not entertainment (mostly). It is a tool. Users don’t come to ChatGPT or Claude to hang out; they come to get a job done.
If I have to spend time verifying the answer, correcting the prompt, or wading through three paragraphs of polite intro text, my “time on site” goes up. But my satisfaction goes down.
Meanwhile, the company’s costs are inverted:
- Every extra second means more tokens generated.
- More tokens mean more GPU cycles, more electricity, and more cooling.
- The subscription fee remains flat ($20/mo).
The AI equation looks like this: More time spent → Higher Frustration → Lower Trust → Flat Revenue (or Churn).
And simultaneously: More time spent → Higher Compute Costs → Lower Margins.
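To make that margin math concrete, here is a toy back-of-envelope calculation in Python. Aside from the $20 flat fee mentioned above, every figure is an illustrative assumption, not any provider’s real cost structure:

```python
# Toy unit economics: flat subscription revenue vs. per-token compute cost.
# All figures below are illustrative assumptions, not real provider numbers.

SUBSCRIPTION_USD = 20.0      # flat monthly fee
COST_PER_1M_TOKENS = 5.0     # assumed fully loaded cost per 1M output tokens, USD

def monthly_margin(responses: int, tokens_per_response: int) -> float:
    """Margin on one subscriber at a given usage level."""
    tokens = responses * tokens_per_response
    compute_cost = tokens / 1_000_000 * COST_PER_1M_TOKENS
    return SUBSCRIPTION_USD - compute_cost

# Same heavy user, same number of requests; only verbosity changes:
print(monthly_margin(3000, 300))    # concise model: 20 - 4.50  = 15.50
print(monthly_margin(3000, 1200))   # chatty model:  20 - 18.00 =  2.00
```

Quadruple the verbosity and the compute bill quadruples, while revenue stays pinned at $20. Push usage slightly higher and the chatty model takes the account underwater.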
The “Faster Horse” Problem
So why are models still so chatty? Why do they prioritize long, confident, good-sounding answers over brevity and precision?
Because of RLHF (Reinforcement Learning from Human Feedback).
We are training these models based on what humans say they like. And humans are easily tricked. In blind tests, humans tend to rate confident, verbose answers higher than short, factual ones—even if the short one is more accurate. We confuse confidence with competence.
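To see how that bias becomes a training signal, here is a deliberately naive sketch. It is a toy stand-in, not any lab’s actual reward model: if raters’ preferences correlate with length and confident wording, a reward model fit to those preferences ends up scoring exactly those surface features.

```python
# Toy illustration of a length-biased reward signal in RLHF-style training.
# This naive "reward model" stands in for preferences learned from raters
# who tend to favor confident, verbose answers over short, factual ones.

HEDGES = {"maybe", "possibly", "might", "unsure"}

def toy_reward(answer: str) -> float:
    words = answer.lower().split()
    length_bonus = 0.1 * len(words)                            # verbosity is rewarded
    hedge_penalty = 2.0 * sum(w in HEDGES for w in words)      # hedging is punished
    return length_bonus - hedge_penalty                        # accuracy never enters the score

short_correct = "42."
long_confident = ("Great question! There are several important factors to consider "
                  "here, and I am absolutely certain the answer is 41.")

print(toy_reward(short_correct))    # 0.1 -- low score despite being right
print(toy_reward(long_confident))   # 1.9 -- high score despite being wrong
```

Optimize a policy against a score like this and confident filler wins every time, because length and tone are all the metric can see.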
It’s a famous “Henry Ford problem”:
“If I had asked people what they wanted, they would have said a faster horse.”
(These words are often attributed to Ford, but the earliest traceable sources appear in marketing literature published decades after his death.)
Instead of building the car (precision automation), we are building faster horses (influencers).
We are training models to optimize for a “vibe” rather than utility. We are building digital extroverts that burn cash to tell you things you didn’t ask for, because that’s what the engagement metrics say is “good.”
The Inevitable Correction
Current management must learn the lesson of 2022 fast, or be eaten by the same correction.
They built their careers in companies where “stickiness” and “time spent” were virtues. They are trying to apply those same rules to a utility business model, and the math doesn’t work.
The market expects GenAI to deliver precision, speed, and specialized knowledge. We don’t need more conversational influencers. We have enough of those without automation.
We need tools that do the work and get out of the way.