Please bear in mind that this was published on April 1st.
Let’s be honest, testing is what teams do when they don’t trust their developers. It’s a tax on speed, a relic from waterfall days, and a crutch for people afraid to ship. It just slows down releases, kills creativity, and wastes budget that could be better spent on another sprint.
If you’ve spent any time around modern software teams, you’ve heard some version of that rant. Not always out loud, but in how priorities get set, how sprints get cut, and how “we’ll test it later” mysteriously becomes “we’ll test it in production.” And you know what, maybe they’re right.
I love testing; I’ve built much of my career around it. I’m here every week, telling you how useful it is, how to do it better, and which tools to use.
But maybe I’m sick of being that guy.
Maybe testing really is just a drag on the roadmap, and we should stop doing it?
What if move-fast-and-break-things is just a natural law of software, and testing is an artificial waste of time that pulls the focus away from real work at the most important times?
5 Arguments Against Testing
Argument 1: Testing Slows Us Down
The most obvious complaint about testing is simple: it really does slow everything down. This is categorically true. If your goal is to ship features fast, adding a stage to the pipeline will, by definition, delay that goal.
The logic is flawless.
- We need this feature out this sprint
- We don’t have time to write tests
- A bad product is better than no product
- We’ll fix in prod, once we’ve proven there’s demand
But then again, would a pilot skip the preflight checks to get you to your destination 20 minutes earlier? Different stakes, I guess, and different regulations. Sure, those checks are highly evolved processes that have made flying at 30,000 feet in a tin can one of the safest ways for fragile little humans to travel, but they don’t really apply to software.
Ok, for something more down-to-earth, consider that a stitch in time saves nine. Every serious study of software economics I’ve read echoes this proverb. Bugs found in production can cost orders of magnitude more to fix than bugs caught during development or requirements definition.
But what if everyone has just consistently over-egged the impact of firefighting, hotfixes, reputational damage, and lost business?
Honestly, I’m torn. Let’s just call this a 50:50 split, depending on your priorities.
- If time-to-market is your key metric, don’t test.
- If success is more important, do.
Argument 2: But We Can Fix It In Production
Ok, maybe I was being too dramatic. Of course you can fix it in prod; it’s the oldest software development approach, and it has stood the test of time for a reason.
Plus, this isn’t the olden days of mainframes and terminals. This is modern software development: agile, rapid, sexy. Systems are instrumented, monitored, and feature-flagged.
You can ship a feature to 1% of users, watch the dashboards, and roll it back if things go wrong. Why waste time with formal testing when the real world will tell you everything you need to know?
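To be fair, the mechanics here are real enough. A percentage rollout can be as simple as hashing the user ID into a stable bucket. A minimal sketch (the function and feature names are illustrative, not any particular feature-flag library):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a percentage rollout."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket 0-99 per user/feature pair
    return bucket < percent

# Ship to 1% of users, watch the dashboards, roll back if things go wrong.
if in_rollout("user-42", "new-checkout", 1):
    pass  # serve the new code path here
```

Hashing the user and feature together keeps each user in the same bucket across requests, so the 1% who get the new code keep getting it until you turn the dial.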
But then again, modern software is pretty complex. It’s integrated, it’s modular, often multichannel, omnichannel, omnipresent, omniscient… ok not quite yet.
Sure, canary releases and A/B tests are fantastic tools for learning how users behave and tweaking accordingly, but are they a substitute for checking whether your software works? Isn’t there a difference between validating a hypothesis and verifying basic correctness? I know that A/B tests are great for answering questions like:
- “Do users click this button more if we move it?”
- “Does this new onboarding flow improve conversion?”
But, can they answer questions such as:
- “Does this feature accidentally delete half our customers’ data?”
- “Does this API fail on Black Friday?”
- “Does this change violate a regulatory constraint?”
To be honest, they probably can these days, given technology as it is.
And after all, you can let a few users take the first hit; you’re bound to notice quickly enough to limit the damage. For consumer apps, that might just mean bad reviews and churn. For more critical infrastructure, the only real downsides are compliance breaches, legal penalties, and genuine safety hazards.
Ok, pretty convinced, testing in prod wins this round.
Argument 3: Testers Are Just Failed Devs
Testing is less than development. Nobody went to uni to become a tester. There, I said it. It’s not creative; it’s not glamorous. Testing is where you put people who weren’t good enough to write code.
This isn’t hyperbole; it’s clear for all to see. Just look at team structures, career ladders, and who gets thanked at town halls. The hero is the developer who stayed up all night to push the hotfix, not the tester who clung to their coattails.
Just think back to the last go/no-go meeting. Who had more influence over that decision? Dev. Why? Because they understand software.
But then again, do they? I mean, maybe you could look at it from another viewpoint and say that developers are siloed away in their own internal universe, with no real exposure to business needs, busy fixing cool problems and flexing their superior minds.
What is a bug anyway? Is it a neat, isolated syntax error? Sure, but it’s also a weird combination of state, timing, data, assumptions, third-party dependencies, and user behaviour. And, good testers:
- Understand the business context and risk profile.
- Think systematically about failure modes.
- Design experiments that probe the edges, not just the happy path.
Is that lesser work, or applied systems thinking? Your guess is as good as mine.
I’m giving this one against testing, on a strong hunch.
Argument 4: AI-Driven Automation Makes Testing Obsolete
Hold on, hold on. Ok, there’s this elephant in the room that pretty much seals the deal. Maybe, just maybe, there was a need for testers before the mass adoption of DevOps and ChatGPT. But now? Come off it.
All we need is good CI/CD, orchestrated by devs, not testers. Get AI to auto-generate whatever assets it needs, fire it off, and watch for green bars in the pipeline. Any other colour, we chuck Cursor at it and crack on with the side hustle. It’s not rocket science. Rocket science isn’t even rocket science now, thanks to AI. Kids are doing it from their bedrooms.
But then again, automation is not a synonym for “job done”; it’s a force multiplier on top of human judgement.
Undirected auto-generated tests might flood you with coverage of trivial cases while quietly missing the one nasty interaction that matters. A wall of automated checks might give you a false sense of security while telling you nothing about underlying quality.
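To make that concrete, here’s the sort of check an undirected generator tends to produce (a hypothetical sketch; the discount function and test names are made up for illustration). Everything is green, the coverage number goes up, and the question that matters is never asked:

```python
def apply_discount(price: float, code: str) -> float:
    """Apply a discount code (hypothetical example)."""
    if code == "HALF":
        return price * 0.5
    return price  # unknown or empty codes silently pass through, no error

# What an undirected generator tends to emit: trivially green,
# padding the coverage number without probing anything risky.
def test_discount_is_a_number():
    assert isinstance(apply_discount(100.0, "HALF"), float)

def test_full_price_unchanged():
    assert apply_discount(100.0, "NONE") == 100.0

# The case that matters, and that nobody asked the generator to find:
# should an expired or empty code raise, log, or pass through silently?
```

Both tests pass, the bar is green, and the silent pass-through ships anyway.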
And when automation is even slightly haphazard, you get a juicy cocktail, let’s call it the ‘Bro Bypass’:
- 1 part positives (make sure they’re false).
- 1 part negatives (again, got to be false).
- Add a splash of dirty pipelines.
- Serve over long runtimes with a complete lack of faith.
But who doesn’t love a cocktail, eh? And once you’ve drunk that bad boy, you’ll know the undeniable truth: just ship it.
Are you keeping score? I’ve lost count. The vibe I’m getting is scrap testing.
Argument 5: Testing Doesn’t Affect The Business
Case closed.
Glad We Settled That…
Colour me converted. It’s pretty clear, at least to me, that software testing isn’t worth the effort. So what else am I going to write about from here on in?
Maybe I will just share video clips of kittens or puppies. They are always popular and get lots of likes.
Or maybe I’ll do a piece asking, “How much avoidable risk are we comfortable carrying, just to pretend we’re moving faster?”
Yeah, I think I’ll do that, it could cover all sorts of things:
- The temporary illusion of speed
- The day that a bug introduced a cost you couldn’t hand-wave away
- The impact a badly tested programme had on the database over many weeks and the man-months of effort to untangle it.
- The ugly truths customers say about software before they ditch it.
- The luxury of reputation and revenue
Until then, keep on shipping. Together we’ll fix it in prod.
Who’s with me?