Every industry has secrets – stories circulating behind closed doors or down the pub that rarely make it to the general public.
Today, we address one such ‘secret’ that lurks within the software testing community. It’s a question that rarely gets addressed head-on, a potential elephant in the room that we must face:
As software testers, are we contributing to buggy software by not pushing back against unrealistic deadlines?
I’ve been involved in software development and testing for decades, and it pains me to say this, but the simple answer is yes. However, there’s a lot more to the story than that. Let’s break it down a bit.
Software Development Is a Race Against Time
Software development, like so many industries, is a race against time. Deadlines loom large, competition is fierce, and there’s always pressure to deliver more in less time.
The need for speed often compromises the quality of the end product, with companies pushing products out before they’re entirely ready. Sure, the core functionality generally works, but these products go live with unresolved defects or outstanding test cases.
More often than not, a project encounters development delays, and rarely are the overall timelines extended to match – so a squeeze on testing is all but guaranteed.
Why Might Testers Be to Blame?
Recently, I’ve become aware of a growing sentiment in certain circles that we software testers are complicit in this compromise on quality.
The argument is that testers are failing to push back hard enough on unrealistic timelines and are therefore failing to safeguard the quality and security of software. Thus, testers are contributing to the growing number of buggy products that businesses rush to market.
But is this accusation fair?
Now this might not happen on your current project, which is excellent! But be honest, has it occurred in the past, and will it happen in the future?
I have been involved in countless software projects. I’d estimate that in at least 80% of them, stakeholders have pressured testers to accept shorter testing cycles.
Stuck Between a Rock and a Hard Place
It’s important to acknowledge the challenging position we software test professionals often find ourselves in.
We are the supposed gatekeepers of quality, yet we frequently face tight schedules, incomplete or rapidly changing requirements, and mounting pressure from management and development teams.
In truth, the need for speed often leaves little time for comprehensive testing.
As a result, testers often need to cut cycles short, prioritise only the most critical processes, and work evenings and weekends. All of this degrades testing quality – critical bugs and security issues are easily overlooked – and leads, in turn, to problematic software releases. But does this make us complicit?
As software testers, we advocate for quality and try to push back when we feel software quality is being compromised.
We are the last line of defence before software goes live and should provide the final ‘sanity check’. But in truth, our voices are often ignored or undervalued.
Here’s a Real-World Example
Many years ago, a stakeholder asked me to sign off on a software project I knew hadn’t been thoroughly tested and had significant defects.
I didn’t sign it off, but it went live anyway, having been signed off by the IT Director. This wasn’t the best decision:
- Within the first month, ten people were added to the service desk (an increase of 200%) to record the defects reported by users.
- There were so many P1 defects that resolution times against the relevant SLA stretched to weeks.
- Contracted developers were due to be released within days – but the business needed to retain them for several months.
It was absolute carnage for months, and I shudder to think how much extra it cost.
Should Testers Push Back More?
Testers should push back more, but the rest of the organisation needs to stop and listen.
We can shout, scream, and stamp our feet, but it makes no difference if nobody pays attention to what we’re saying. Non-testers need to respect the crucial role that testing plays in software development. To enable this, we must give stakeholders the information they need to make better-informed decisions.
Organisations must understand that you cannot produce high-quality software in an environment where speed takes priority over quality. It is up to testers to explain the potential impact of those decisions.
Unchecked acceleration can lead to a downfall, and it’s high time we realised this.
There needs to be a healthy balance where stakeholders give testers the time and resources to do their job effectively without being seen as the bearers of bad news or scapegoats when things go wrong.
Quality is not a one-person show but a collective responsibility.
Only when everyone buys into a joined-up approach can we hope to see a decrease in buggy software and a genuine commitment to quality.
How to Develop Higher-Quality Software
Instead of pointing fingers, let’s work together to solve this issue. It’s time for a paradigm shift in how companies view software testing and quality.
Businesses must prioritise quality as much as speed and give us software testers the respect, resources, and tools we need to do our job effectively. After all, the reputation of our products, and by extension our organisations, depends on it.
As W. Edwards Deming observed, organisations that focus on improving quality automatically reduce costs, while those that focus on cutting costs tend to reduce quality – and actually increase costs as a result.
I want to leave you with a call to action for all software professionals:
- Don’t be complicit in the proliferation of buggy software.
- Be champions of quality, even if that means occasionally slowing down and reassessing our priorities.
Trust me, the result will be worth it.