Running the Project

Definitions

Transaction completion rate (TCR)
Calculated by dividing the total number of successful (approved) transactions by the total number of attempted transactions over a given time period.
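
The definition above is trivial, but worth pinning down in code so everyone measures the same thing. A minimal sketch (the function name is mine, not a standard):

```python
def transaction_completion_rate(approved: int, attempted: int) -> float:
    """TCR = approved transactions / attempted transactions, per time window."""
    if attempted == 0:
        raise ValueError("no attempted transactions in this window")
    return approved / attempted

# e.g. 942 approved out of 1000 attempted in a week
print(f"{transaction_completion_rate(942, 1000):.1%}")  # 94.2%
```

The decision that actually matters is where "attempted" starts and "approved" ends; see the related alarm bell below on measurement start and end points.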

Questions to Ask

How could this product fail (i.e. pre-mortem)?

Work backwards from where failure occurs, so that you can then plug those gaps.

How does this scale? What do you have in place to scale vertically (make your existing infrastructure more powerful) and/or horizontally (add more infrastructure quantity/units)?

Each option fits different situations, and sometimes you need both. Either way, there must be a plan being executed to handle a surge later on.

Are our images and other static assets optimized? Are they WebP/AVIF? Are there 2 MB JPEG hero images?

These are extremely simple but can absolutely show dedication (or lack thereof) to optimizing performance. Better performance will help improve transaction completion rates and user satisfaction.
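
This check can even be scripted. A sketch of a static-asset audit; the size budget and format lists are illustrative assumptions to tune per site:

```python
from pathlib import Path

# Illustrative thresholds, not a standard; tune for your site.
MAX_BYTES = 500 * 1024                      # flag anything over ~500 KB
LEGACY_FORMATS = {".jpg", ".jpeg", ".png"}  # should usually be WebP/AVIF
MODERN_FORMATS = {".webp", ".avif"}

def audit_static_assets(root: str) -> list[tuple[str, int]]:
    """Flag images that use legacy formats or blow the size budget."""
    flagged = []
    for p in Path(root).rglob("*"):
        if not p.is_file():
            continue
        ext = p.suffix.lower()
        if ext in LEGACY_FORMATS or (ext in MODERN_FORMATS and p.stat().st_size > MAX_BYTES):
            flagged.append((str(p), p.stat().st_size))
    return flagged
```

Run it against the static directory of the site; an empty result is the goal, and anything flagged is a cheap performance win.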

How does our UI look across device screen sizes?

UIs have no excuse for doing poorly on mobile responsiveness and accessibility.

What model are we using and why can't we use <insert model>?

There are obvious differences across models; some do better than others at particular tasks. For example, Claude 3.5 Sonnet is really good with code, while Gemini 2.0 or GPT-4.5 produce really good creative writing. The business objective is to keep up with best-in-class options and switch when there are advancements. The fear is always stagnating because of a lack of incentives to change.

Alarm Bells

They say...
Using xxx, yyy, zzz resources (or X%, Y%, Z%) for cloud/on-prem resources.
Why it's scary

From a business perspective, we need to understand how resource usage meets business objectives. Can users access the offering? Can they transact? Are they satisfied? How can costs be reduced? Raw utilization figures answer none of these.

They say...
Haven't asked AWS support for an optimization check or used tools for optimization.
Why it's scary

Not using free AWS tools like Trusted Advisor, or the enterprise support already being paid for, points to a lack of cost discipline and workload optimization.

They say...
Transaction completion rate is really good! (90-99%).
Why it's scary

A failure rate of up to 10% is common; excessively high completion rates suggest potential measurement errors. Review the start and end points of the measurement to ensure they accurately reflect customer transaction success.

They say...
System health checks and metrics are run/calculated manually; no immediate notification of service outages.
Why it's scary

Manual monitoring is inefficient and unresponsive. Real-time, automated monitoring is essential for timely service issue detection and resolution. Stats should be used for ongoing health checks, not just periodic presentations.
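
A minimal sketch of what "automated" means at its simplest: poll endpoints and notify on failure. The URLs and the notify hook are placeholders; a real setup would use a proper monitoring stack, but even this beats manual checks:

```python
import urllib.request
import urllib.error

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError, ValueError):
        return False

def alert_if_down(urls, notify) -> None:
    """Run checks and call notify(url) for every endpoint that fails."""
    for url in urls:
        if not check(url):
            notify(url)  # e.g. page the on-call, post to a chat channel
```

Scheduled every minute, this already gives immediate notification of outages instead of someone discovering them during a slide review.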

They say...
We have backup, this is not a critical system anyway.
Why it's scary

What is the backup frequency? Are these active or passive copies? Where are they stored? How many copies are there? How long does it take to restore them? Are these complete snapshots or diffs? You'd always want some variant of the 3-2-1 backup rule.
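
The 3-2-1 rule (at least 3 copies, on 2 different media, with 1 offsite) can even be checked mechanically against a backup inventory. A sketch, assuming a made-up inventory schema with `medium` and `offsite` keys:

```python
# The 'medium' and 'offsite' keys are an assumed inventory schema, not a standard.
def meets_321(copies: list[dict]) -> bool:
    """3-2-1 rule: >=3 copies, on >=2 different media, >=1 stored offsite."""
    return (
        len(copies) >= 3
        and len({c["medium"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
    )
```

Note that passing this check says nothing about restore time or snapshot completeness; those questions still need answers of their own.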

They say...
Oh, this <insert GovTech product>? Not familiar with it, but we know we need ours and they seem different anyway.
Why it's scary

You're trying to recreate something that's presumably already solved well, and it doesn't seem like you did a competitive audit. Even if you're satisfied with your solution, you should be able to say why yours works better, e.g. from user research.

They say...
Let's do UI UX better.
Why it's scary

They're very different things! The person saying this betrays a lack of understanding of what these two acronyms mean. UI is a part of UX; UX is far more than tinkering with screens, colors, typography, and spacing in Figma templates. This lack of clarity prevents us from getting into what needs to be solved and how we should solve it.

They say...
It's impossible to track whether our tool improves productivity, saves manual labor and how much it improves the business.
Why it's scary

You can't track the productivity increase for every person, but you can follow >5 teams for several weeks and record how much they benefit in time savings, efficiency, errors encountered, or even entire workflows they no longer do. But this has to be set up before you deploy; it's pointless to lament afterwards that you didn't.
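
A sketch of the before/after pilot tracking described above; the field names are illustrative, not a prescribed schema:

```python
# Each team records hours spent on the target workflow per week,
# before and after the tool is introduced ('hours_before'/'hours_after'
# are assumed field names).
def summarize_pilot(teams: list[dict]) -> dict:
    """Aggregate weekly time savings across pilot teams."""
    saved = [t["hours_before"] - t["hours_after"] for t in teams]
    return {
        "teams": len(teams),
        "total_hours_saved_per_week": sum(saved),
        "avg_hours_saved_per_week": sum(saved) / len(teams),
    }
```

Even numbers this crude, collected from a handful of teams with a pre-deployment baseline, beat having no business-impact measurement at all.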

They say...
How do we manage vulnerabilities (CVEs) and patches?
Why it's scary

The worry is not having a ready answer: vulnerability management is part and parcel of running systems, and patching is essential.

They say...
There needs to be X approvals from xxx, yyy, zzz groups to make an edit to the website.
Why it's scary

Bureaucracy has crept in. If this is for a new tool, then we've also bought a new tool and overlaid our past processes on top of it.

They say...
Work always costs a lot of man days and money, even for things that sound very simple.
Why it's scary

Finance processes (fixed payments for vendor work) can ironically encourage spending more than common sense dictates: a fixed price lets vendors pocket the difference if the job turns out easier than scoped, and people pad estimates so no one needs to re-seek approval. The end result is buffers, overestimates, and vendors pocketing the "change".

They say...
Whenever there's a problem, the instinct is always to absolve themselves from blame, especially by questioning whether there was a delay in reporting that issue or whether administrative processes were followed.
Why it's scary

Tackling problems becomes secondary to covering asses.

Dealbreakers

They say...
Users are telling us the app sucks, the app is slow, the app is useless. But we're sure the noise will go away eventually.
Why it's scary

We only listen to feedback if it's good, or if our bosses are affected by it.

They say...
Team proudly presents top-notch figures but hasn't validated the measurement methodology or investigated outliers.
Why it's scary

Lack of data integrity and actionable insights. At this point we might as well not have measured at all; that would be better than drawing wrong conclusions from incorrect data.

They say...
Operations relies on manual log reviews, powerpoint slides, and spreadsheets to detect and assess system outages.
Why it's scary

Relying on manual system checks signals a reactive operational model. Business outcomes benefit from proactive, automated monitoring, with clear triggers tied to business objectives.

They say...
Stakeholder feedback is limited to "Make it look nicer" or "Let's improve the UI" without user research or defined UX goals.
Why it's scary

Reducing UX to purely visual UI tweaks betrays a fundamental misunderstanding of user-centered design. True UX is about understanding user needs and designing effective, efficient, and satisfying solutions. Ignoring user research leads to poorly adopted products.

They say...
No established KPIs or tracking mechanisms to measure productivity gains or cost savings after launch.
Why it's scary

Launching projects without a plan to quantify business impact makes it impossible to justify the investment and demonstrate ROI. Projects must be accountable for delivering tangible value, requiring measurement from the outset.

They say...
You need a 5-level approval process for changing words on your app/website.
Why it's scary

Overly complex approval processes for routine tasks signal a bureaucratic culture that stifles agility and innovation. In tech, quick adaptation is crucial. Excessive bureaucracy hinders progress and responsiveness.

Apptitude / Curated by Zixian Chen

© 2024–2026. All Rights Reserved.