AI & Hype

Definitions

"Frontier" Resource Gap
The gap between what OpenAI/Google spends (billions) and what you spend (thousands). You cannot "out-train" or "out-model" them. You will always be a user, not a builder, of frontier models.
Subsidized Inference
The current price of AI tokens is artificially low, subsidized by VC money and cloud giants fighting for market share. Real costs are much higher. If your business model barely works now, it will fail when the subsidies end.
"Long Tail" of Failure
AI is great at the "fat head" (common, simple queries) but fails at the "long tail" (complex, edge cases). Handling the easy 90% is cheap; the remaining 10% takes 90% of the effort and often requires human intervention anyway.
Probabilistic vs. Deterministic
Traditional software is deterministic: 1 + 1 = 2, every time. An LLM is probabilistic: 1 + 1 = "probably 2, but occasionally 'window'". If you need 100% consistency, do not use an LLM. (A toy illustration follows these definitions.)
Depreciation Bomb
Unlike real estate or fiber optics, AI hardware (GPUs) depreciates incredibly fast. An H100 chip bought today is likely "e-waste" in 4 years. Heavy capex investment in AI hardware carries massive financial risk.
Second-Mover Advantage
Letting other people burn money to figure out what works. Because each AI capability jump renders old methods obsolete, skipping the early "learning phase" often spares you lessons that would have been useless anyway.
AI Washing
Slapping "AI-powered" on products for marketing value. Common in enterprise sales. Always ask what the AI actually does.

Goals

Be a smart user of AI, not a delusional builder of AI.
Distinguish between a "Search" problem and a "Generation" problem.
Avoid "Solution in Search of a Problem" projects.
Future-proof against price hikes, because the subsidy party will end.

Questions to Ask

Do we truly understand the scale of resources required?

Frontier labs have 1,000x to 1,000,000x your compute resources. Trying to build "world-class" models in-house without that budget is setting money on fire.

Is our business model viable if API costs triple?

We are currently in a "subsidy bubble". When the VC money runs out and true inference costs hit, many thin-margin AI wrappers will die.
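
A back-of-the-envelope check you can run on your own numbers. Every figure below is invented for illustration; substitute your real revenue, query volume, and blended token cost.

```python
# Hypothetical unit economics for an AI wrapper product.
# Every figure here is an assumption; plug in your own.
revenue_per_user = 20.00   # monthly subscription ($)
queries_per_user = 400     # per month
cost_per_query = 0.03      # blended API cost today ($)

def monthly_margin(price_multiplier: float) -> float:
    api_cost = queries_per_user * cost_per_query * price_multiplier
    return revenue_per_user - api_cost

print(f"margin today:        ${monthly_margin(1.0):7.2f}")  # $   8.00
print(f"margin at 3x tokens: ${monthly_margin(3.0):7.2f}")  # $ -16.00
```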

Is this a search problem or a generation problem?

Don't use a chatbot when a search bar will do. Search is cheap and accurate. Generation is expensive and hallucinates.
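
One way to act on this: answer from a plain index first, and only reach for generation when search comes up empty. A minimal sketch; FAQ_INDEX and generate_answer are hypothetical stand-ins for your search backend and model provider.

```python
# Hypothetical FAQ index; in practice this is your search engine or database.
FAQ_INDEX = {
    "reset password": "Go to Settings > Security > Reset Password.",
    "refund policy": "Refunds are available within 30 days of purchase.",
}

def generate_answer(query: str) -> str:
    # Hypothetical LLM call: expensive, slow, and able to hallucinate.
    raise NotImplementedError("wire up your model provider here")

def answer(query: str) -> str:
    q = query.lower()
    # Cheap, accurate path first: retrieval from curated content.
    for key, canned in FAQ_INDEX.items():
        if key in q:
            return canned
    # Only fall through to generation when search finds nothing.
    return generate_answer(query)

print(answer("What is your refund policy?"))
```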

Who handles the "Failure Demand"?

When the AI answers incorrectly (which it will), the customer ends up frustrated and requires a human agent anyway. Have we calculated the cost of fixing the AI's mistakes?
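
A sketch of the failure-demand arithmetic. All rates and costs below are assumptions; the point is that every escalated ticket pays for the bot attempt and the human fix.

```python
# Hypothetical support costs; every number is an assumption.
tickets = 10_000
bot_cost = 0.10          # per ticket handled by the AI ($)
human_cost = 6.00        # per ticket handled by an agent ($)
escalation_rate = 0.30   # share of AI-handled tickets that bounce to a human

# Escalated tickets incur BOTH the bot attempt and the human fix.
ai_first = tickets * (bot_cost + escalation_rate * human_cost)
human_only = tickets * human_cost

print(f"AI-first total:   ${ai_first:,.2f}")    # $19,000.00
print(f"Human-only total: ${human_only:,.2f}")  # $60,000.00
# Savings shrink fast as the escalation rate climbs; at 100% you pay a premium.
```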

What happens when this fails?

LLMs will fail. What's the fallback? Human review? Graceful degradation? If there's no answer, the system isn't production-ready.
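
One common fallback pattern, sketched below: validate the model's output with checks you control, and route anything that fails to a human queue instead of shipping it. call_llm and enqueue_for_review are hypothetical stand-ins for your model provider and review workflow.

```python
from typing import Optional

def call_llm(prompt: str) -> str:
    # Hypothetical: your model provider goes here.
    raise NotImplementedError

def enqueue_for_review(prompt: str, draft: Optional[str]) -> None:
    # Hypothetical: push to a human review queue (ticket system, inbox, ...).
    pass

def is_valid(draft: str) -> bool:
    # Deterministic checks you control: length, format, banned claims, etc.
    return 0 < len(draft) < 2000 and "guarantee" not in draft.lower()

def answer_or_degrade(prompt: str) -> str:
    try:
        draft = call_llm(prompt)
    except Exception:
        draft = None
    if draft is not None and is_valid(draft):
        return draft
    # Graceful degradation: never ship an unvalidated answer.
    enqueue_for_review(prompt, draft)
    return "We've passed your question to a specialist and will follow up."

print(answer_or_degrade("Is this product safe for children?"))
```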

Are we rushing just because of FOMO?

Technology becomes obsolete fast. Jumping in now often means building on a framework that will be dead in 6 months. Waiting is a valid strategy.

Alarm Bells

The use case relies on 100% accuracy but uses an LLM without human review.

Guaranteed failure. LLMs are creative engines, not truth engines.

We have a "Shiny Solution" looking for a problem.

We built a cool demo for the hackathon, but no customer is actually asking for it.

The business model only works because current inference costs are cheap.

Unsustainable. You are building on a subsidized foundation that will eventually crumble.

We want to be a "world-class" AI leader in this space.

Unless you have billions in compute, you won't be a leader. You will be a customer. Accept it and focus on application, not creation.

It doesn't matter if it's GPT-4, 5, or 6—what matters is the business use case.

Dead wrong. New models bring qualitative leaps (like the reasoning capabilities introduced with o1) that fundamentally change what is possible. Dismissing model upgrades ignores that today's "impossible" problems become trivial tomorrow.

We need to build our own LLM from scratch to own the IP.

You are trying to reinvent the wheel, but your wheel will be square. You cannot beat the open-source community or the tech giants at their own game.

If we don't adopt this specific AI tool now, we'll miss the boat.

The boat is being rebuilt every 3 months. Whatever you learn today will likely be obsolete by Q4. You can always leapfrog later without missing anything critical.

Let's just use AI to fix our bad data/messy processes.

AI is not a janitor. If you feed it messy processes, it will just automate the chaos at light speed.

We saw a demo and it looked amazing.

Demos are cherry-picked. Production is where hallucinations, edge cases, and latency live. Always ask to see failure modes.

We need an AI strategy.

You need a business strategy. AI is a tool, not a strategy. Nobody has a "spreadsheet strategy."

Dealbreakers

Red Flag: The use case relies on 100% accuracy but uses an LLM without human review.
Why: Guaranteed failure. LLMs are creative engines, not truth engines.

Red Flag: The business model only works because current inference costs are cheap.
Why: Unsustainable. You are building on a subsidized foundation that will eventually crumble.

Red Flag: We are spending more on "AI R&D" than on solving the actual customer problem.
Why: Solution in search of a problem.

Apptitude / Curated by Zixian Chen
