1 April 2026

5 Reasons Testing is a Waste of Time

Please bear in mind: this was published on April 1st.

Let’s be honest, testing is what teams do when they don’t trust their developers. It’s a tax on speed, a relic from waterfall days, and a crutch for people afraid to ship. It just slows down releases, kills creativity, and wastes budget that could be better spent on another sprint.

If you’ve spent any time around modern software teams, you’ve heard some version of that rant. Not always out loud, but in how priorities get set, how sprints get cut, and how “we’ll test it later” mysteriously becomes “we’ll test it in production.” And you know what, maybe they’re right.

I love testing; I’ve built much of my career around it. I’m here every week, telling you how useful it is, how to do it better, and which tools to use.

But maybe I’m sick of being that guy.

Maybe testing really is just a drag on the roadmap, and we should stop doing it?

What if move-fast-and-break-things is just a natural law of software, and testing is an artificial waste of time that pulls the focus away from real work at the most important times?

5 Arguments Against Testing

Argument 1: Testing Slows Us Down

The most obvious complaint about testing is simple: it really does slow everything down. This is categorically true. If your goal is to ship features fast, adding a stage will, by definition, delay your goal.

The logic is flawless.

  • We need this feature out this sprint
  • We don’t have time to write tests
  • A bad product is better than no product
  • We’ll fix in prod, once we’ve proven there’s demand

But then again, would a pilot skip the preflight checks to get you to your destination 20 minutes earlier? Different stakes, I guess, different regulations. Sure, they’re highly evolved processes that have made flying at 30k feet in a tin can one of the safest ways for fragile little humans to travel, but they don’t really apply to software.

Ok, for something more down-to-earth, consider that a stitch in time saves nine. Every serious study of software economics I’ve read echoes this proverb. Bugs found in production can cost orders of magnitude more to fix than bugs caught during development or requirements definition.

But what if everyone has just consistently over-egged the impact of firefighting, hotfixes, reputational damage, and lost business?

Honestly, I’m torn. Let’s just call this a 50:50 split, depending on your priorities.

  • If time-to-market is your key metric, don’t test.
  • If success is more important, do.

Argument 2: But We Can Fix It In Production

Ok, maybe I was being too dramatic. Of course you can fix it in prod; it’s the oldest software development approach, and it has stood the test of time for a reason.

Plus, this isn’t the olden days of mainframes and terminals. This is modern software development: agile, rapid, sexy. Systems are instrumented, monitored, and feature-flagged.

You can ship a feature to 1% of users, watch the dashboards, and roll it back if things go wrong. Why waste time with formal testing when the real world will tell you everything you need to know?
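And to be fair, the mechanics really are that simple. Here’s a minimal sketch of the percentage rollout this argument leans on (my own illustration, not any particular vendor’s flag API; the checkout feature and user IDs are made up, and real flag platforms wrap this in dashboards and kill switches):

    import hashlib

    ROLLOUT_PERCENT = 1  # hypothetical: show the new flow to 1% of users

    def in_canary(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
        # Hash the user ID into a stable 0-99 bucket, so the same user
        # sees the same variant on every request.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < percent

    def checkout(user_id: str) -> str:
        if in_canary(user_id):
            return "new checkout flow"  # watch the dashboards...
        return "old checkout flow"      # ...and rollback is just percent = 0

    # Roughly 1% of users land in the canary bucket:
    print(sum(in_canary(f"user-{i}") for i in range(10_000)))  # ~100

A dozen lines of code and a dashboard. Who needs a test plan?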

But then again, modern software is pretty complex. It’s integrated, it’s modular, often multichannel, omnichannel, omnipresent, omniscient… ok not quite yet.

Sure, canary releases and A/B tests are fantastic tools for learning how users behave and tweaking accordingly, but are they a substitute for checking whether your software works? Isn’t there a difference between validating a hypothesis and verifying basic correctness? I know that A/B tests are great for answering questions like:

  • “Do users click this button more if we move it?”
  • “Does this new onboarding flow improve conversion?”

But, can they answer questions such as:

  • “Does this feature accidentally delete half our customers’ data?”
  • “Does this API fail on Black Friday?”
  • “Does this change violate a regulatory constraint?”

To be honest, they probably can these days, given technology as it is.

And after all, you can let a few users take the first hit; you’re bound to notice quickly enough to limit the damage. For consumer apps, that might just mean bad reviews and churn. For more critical infrastructure, the only real risks are compliance breaches, legal penalties, and serious safety hazards.

Ok, I’m pretty convinced: testing in prod wins this round.

Argument 3: Testers Are Just Failed Devs

Testing is less than development. Nobody went to uni to become a tester. There, I said it. It’s not creative; it’s not glamorous. Testing is where you put people who weren’t good enough to write code.

This isn’t hyperbole; it’s there for all to see. Just look at team structures, career ladders, and who gets thanked in town halls. The hero is the developer who stayed up all night to push the hotfix, not the tester who clung to their coattails.

Just think back to the last go/no-go meeting. Who had more influence in that decision? Dev. Why? Because they understand software.

But then again, do they? I mean, maybe you could look at it from another viewpoint and say that developers are siloed away in their own internal universe, with no real exposure to business needs, trying to fix cool problems and flex their superior minds.

What is a bug anyway? Is it a neat, isolated syntax error? Sure, but it’s also a weird combination of state, timing, data, assumptions, third-party dependencies, and user behaviour. And, good testers:

  • Understand the business context and risk profile.
  • Think systematically about failure modes.
  • Design experiments that probe the edges, not just the happy path.

Is that lesser work, or applied systems thinking? Your guess is as good as mine.

I’m giving this one against testing, on a strong hunch.

Argument 4: AI-Driven Automation Makes Testing Obsolete

Hold on, hold on. Ok, there’s this elephant in the room that pretty much seals the deal. Maybe, just maybe, there was a need for testers before the mass adoption of DevOps and ChatGPT. But now? Come off it.

All we need is good CI/CD, orchestrated by devs, not testers. Get AI to auto-generate whatever assets it needs, fire it off, and watch for green bars in the pipeline. Any other colour, we chuck Cursor at it and crack on with the side hustle. It’s not rocket science. Rocket science isn’t even rocket science now, thanks to AI. Kids are doing it from their bedrooms.

But then again, automation is not a synonym for “job done”; it’s a force multiplier on top of human judgement.

Undirected auto-generated tests might flood you with coverage of trivial cases while quietly missing the one nasty interaction that matters. A wall of automated checks might give you a false sense of security, with no idea of underlying quality.
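To make that concrete, here’s the sort of thing I mean. This is a made-up illustration, not the output of any real tool: a hypothetical discount function, plus the trivially green checks a naive generator tends to produce:

    def apply_discount(order_total: float, percent: float) -> float:
        return order_total * (1 - percent / 100)

    # The kind of tests a naive generator emits: green, trivial, blind.
    def test_discount_returns_a_float():
        assert isinstance(apply_discount(100.0, 10), float)

    def test_zero_discount_changes_nothing():
        assert apply_discount(100.0, 0) == 100.0

    # The question nobody generated: what does a 150% "discount" do?
    # apply_discount(100.0, 150) == -50.0, i.e. you now pay the customer.

Two passing tests, 100% green bars, and the pipeline never asks the one question that matters.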

And when automation is even slightly haphazard, you get a juicy cocktail; let’s call it the ‘Bro Bypass’:

  • 1 part positives (make sure they’re false).
  • 1 part negatives (again, got to be false).
  • Add in some dirty pipelines.
  • Serve over long runtimes with a complete lack of faith.

But who doesn’t love a cocktail, eh? And once you’ve drunk that bad boy, you’ll know the undeniable truth: just ship it.

Are you keeping score? I’ve lost count. The vibe I’m getting is: scrap testing.

Argument 5: Testing Doesn’t Affect The Business

Case closed.

Glad We Settled That…

Colour me converted. It’s pretty clear, at least to me, that software testing isn’t worth the effort. So what else am I going to write about from here on in?

Maybe I will just share video clips of kittens or puppies. They are always popular and get lots of likes.

Or maybe I’ll do a piece asking, “How much avoidable risk are we comfortable carrying, just to pretend we’re moving faster?”

Yeah, I think I’ll do that; it could cover all sorts of things:

  • The temporary illusion of speed
  • The day a bug introduced a cost you couldn’t hand-wave away
  • The impact a badly tested programme had on the database over many weeks, and the man-months of effort to untangle it
  • The ugly truths customers tell about software before they ditch it
  • The luxury of reputation and revenue

Until then, keep on shipping. Together we’ll fix it in prod.

Who’s with me?

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is helping test professionals improve the efficiency and effectiveness of software testing.

To view Stephen’s LinkedIn profile and connect: Stephen Davis LinkedIn profile

