The AI Trap That Is Quietly Wiping Out Angel Investors
Most “AI-first” startups are being priced like software and built like infrastructure. There is one number that exposes the difference instantly, and most angels never ask for it.
Most angels think they are underwriting software when they write checks into “AI-first” startups. They are not. They are underwriting variable-cost machines with fragile pricing power and cost structures that behave nothing like SaaS. That mismatch is about to torch a lot of portfolios.
The uncomfortable part is not that founders are confused. It is that angels are. We are still applying 2015 software heuristics to 2026 cost curves. The results look sophisticated in pitch decks and catastrophic in unit economics.
If you feel a twinge of defensiveness reading this, good. That reaction is the tell that something important is being missed.
The underwriting mismatch most angels refuse to see
Classic angel underwriting assumes near-zero marginal cost of delivery, expanding gross margins over time, and operating leverage that compounds as usage grows.
That mental model made sense for subscription software built on owned infrastructure with predictable costs.
It breaks the moment intelligence itself becomes a variable input.
Most AI startups do not sell software. They sell outcomes powered by rented intelligence.
Every inference has a bill attached.
Every user action carries a marginal cost that does not decay on a friendly schedule.
In many cases, it rises as usage increases or as models improve.
The real cost structure of many AI startups looks closer to infrastructure or services:
Variable costs tied directly to usage
Limited ability to raise prices without churn
Margin compression as customers demand more capability, not less
Yet valuations are still anchored to SaaS multiples and SaaS narratives.
This is not an AI problem. It is a capital discipline problem.
The “AI-first” narrative trap
The label does real damage.
“AI-first” shuts down diligence. It replaces economics with vision. It suppresses boring questions and rewards grand framing.
Angels who pride themselves on pattern recognition stop asking basic questions because they fear missing the next platform shift.
That fear is visible in meetings.
More time spent on:
Models
Stacks
Demos
Less time spent on:
Cost responsibility
Pricing power
Who actually pays for intelligence
Angels nod along to architecture diagrams while ignoring the invoice that arrives every time the system works as promised.
The core mistake is subtle but deadly.
Angels conflate technical feasibility with economic leverage.
Something impressive is assumed to be scalable.
Scalability without contribution margin is just accelerated loss.
The one number that actually matters
There is one question that cuts through all of this.
What is the contribution margin per AI-driven action?
Not blended gross margin.
Not aspirational future margins.
Not “costs will come down.”
Contribution margin on the core unit of value delivered by the model today.
Examples:
If the product generates a recommendation, what does that recommendation cost and what revenue is attached to it?
If the product automates a task, what is the marginal cost of that automation and what is the price captured?
If the answer is unclear, defensive, or deferred to a roadmap, you already have your signal.
If a founder cannot answer this cleanly, you are underwriting belief, not economics.
This number removes:
Charisma
Architecture theater
Hope
It forces reality into the room.
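Stripped of narrative, the check is simple arithmetic: revenue attached to the action, minus every cost that scales with it. A minimal sketch, with all numbers hypothetical:

```python
# Contribution margin per AI-driven action, as a back-of-envelope check.
# All figures are hypothetical, for illustration only.

def contribution_margin_per_action(revenue_per_action: float,
                                   inference_cost: float,
                                   other_variable_cost: float) -> float:
    """Revenue captured per action minus every cost that scales with it."""
    return revenue_per_action - inference_cost - other_variable_cost

# A product that charges $0.05 per automated task and pays
# $0.03 in model/API fees plus $0.01 in other per-task costs:
cm = contribution_margin_per_action(0.05, 0.03, 0.01)
print(f"CM per action: ${cm:.2f}")  # positive, but barely

# The same product after customers demand a bigger model at $0.06 per call:
cm_upgraded = contribution_margin_per_action(0.05, 0.06, 0.01)
print(f"CM per action: ${cm_upgraded:.2f}")  # negative: every action loses money
```

The second case is the trap: the headline price never moved, but the cost of delivering the promised capability did.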
The immediate sorting effect
Once you ask this question, startups fall into three buckets very quickly.
1. Disguised services businesses
High variable costs
Margins depend on customer restraint, not system efficiency
Revenue scales with human oversight or customization
These can be good businesses.
They are often terrible venture investments.
2. Pricing fictions
Healthy blended margins that hide per-action losses
Costs averaged, deferred, or buried in platform fees
Growth masks fragility until capital tightens
These survive on narrative until the math asserts itself.
3. Genuine leverage
Positive contribution margin per action today
Improving unit economics without pricing gymnastics
Clear ownership of costs and pricing power
Only this category deserves venture-style risk capital.
The filter works because it ignores intention and rewards structure.
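The second bucket is the easiest to miss, because the blended number looks healthy. A hypothetical sketch of how averaging hides per-action losses under a flat subscription:

```python
# How a healthy blended margin can hide per-action losses.
# Hypothetical numbers: a flat subscription bundles cheap and expensive actions.

subscription_revenue = 100.0            # monthly price per customer
light_actions, heavy_actions = 950, 50  # monthly usage mix
cost_light, cost_heavy = 0.01, 0.50     # per-action inference cost

total_cost = light_actions * cost_light + heavy_actions * cost_heavy
blended_margin = (subscription_revenue - total_cost) / subscription_revenue
print(f"Blended gross margin: {blended_margin:.1%}")  # looks SaaS-like

# Implied revenue per action is flat, so every heavy action loses money.
revenue_per_action = subscription_revenue / (light_actions + heavy_actions)
print(f"CM per heavy action: ${revenue_per_action - cost_heavy:.2f}")

# Growth masks fragility only until the usage mix shifts:
# double the heavy actions and the blended margin erodes.
total_cost_shifted = light_actions * cost_light + 2 * heavy_actions * cost_heavy
shifted_margin = (subscription_revenue - total_cost_shifted) / subscription_revenue
print(f"Blended margin after shift: {shifted_margin:.1%}")
```

Nothing here requires bad faith. Averaging a loss-making action into a cheap, price-insensitive tier is enough to make it invisible in the topline margin.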
Why this matters right now
The market phase has shifted.
Capital is tighter
Follow-ons are slower
Public comps are less forgiving
Margin ambiguity is no longer tolerated
AI startups will not fail because the technology is bad.
They will fail because the economics never worked.
Angels who ignore this will fund impressive demos that collapse under scaled usage. They will discover that variable costs do not politely disappear and customers do not subsidize ambition with unlimited pricing tolerance.
Common angel mistakes worth calling out
These show up repeatedly.
Mistake one: Model fixation
Which foundation model matters far less than how inference costs behave at the margin.
Mistake two: Stack obsession
Build versus buy debates ignore that both paths still carry real costs. The question is who absorbs them.
Mistake three: Demo intoxication
Every polished click burns cash. Angels reward polish and punish honesty.
Mistake four: Future-state underwriting
“Margins improve at scale” without a defined mechanism is not an argument.
Scale amplifies bad unit economics unless something structural changes.
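The amplification is just multiplication. A hypothetical illustration:

```python
# Scale as an amplifier: with negative contribution margin per action,
# growth compounds losses instead of creating leverage. Hypothetical figures.
cm_per_action = -0.02  # each action loses two cents
for monthly_actions in (10_000, 100_000, 1_000_000):
    monthly_loss = -cm_per_action * monthly_actions
    print(f"{monthly_actions:>9,} actions -> monthly loss of ${monthly_loss:,.0f}")
```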
The conclusion most angels avoid
Many AI startups should not be venture-backed at all.
They should be:
Bootstrapped
Priced as services
Sold as features inside companies with existing margin pools
Venture capital requires leverage. Without it, the risk is asymmetric in the wrong direction.
This is not a founder morality story so much as a capital allocation reality.
Forcing startups into venture narratives when the economics do not fit is bad for founders and worse for angels.
Capital discipline is not optional when cycles turn.
The angels who last are not the most enthusiastic. They are the most exacting.
If you want one rule to carry forward, use this:
If contribution margin per AI-driven action is unclear, negative, or aspirational, walk.
Some will argue inference costs will collapse. Maybe. Betting on cost curves you do not control is speculation, not underwriting.
Others will argue pricing power will emerge. That assumes differentiation most startups do not have.
If you disagree, tell me which assumption fails and why contribution margin per action is the wrong filter.




This really lands, Susan, especially the way you collapse all the AI theater down to CM per AI-driven action. That single question cuts through an enormous amount of narrative fog.
What struck me is that even one step before that, many teams (and investors) aren’t clearly measuring whether the AI is changing the customer outcome at all. We jump straight to blended margins, adoption, or “costs will come down,” without instrumenting whether uncertainty, effort, or decision friction is actually reduced per action.
Your framing makes it clear why that gap matters: if you can’t tie an AI action to both outcome change and CM, you’re underwriting belief, not leverage. The fact that this still feels uncomfortable for many angels says a lot.
Spot on. Unlike SaaS, 'intelligence' isn't a fixed asset—it's a consumable :)
I’d add that even if raw compute / AI costs drop, the 'marginal cost per action' likely won't hit zero, because founders and customers may trade those savings for the next, more expensive model or 'fancier' features. It’s hard to see a level where you will be able to “pocket” the efficiency that AI research produces. I believe that’s also what’s at play with OpenAI: you need to push ads because it’s hard to sustain the cost of free users while delivering frontier performance for them!