Are performance and security testing still treated as tick-box exercises, tacked on at the end of a project once all the development is done? In my experience, too few companies take either performance or security seriously.
‘Twas ever thus, you might say. And you’d be right. But these are different times, with different expectations, different threats, and different consequences.
These days, performance and security failures can bring down businesses. So, why does it still feel like most companies don’t take the related testing phases seriously?
A Personal Story
Once upon a time, I was involved in launching a web security testing service. To get things started, we offered customers a free health check, and one particular financial company jumped at the chance.
We got to work and quickly identified two severe security defects, either of which could have resulted in data theft. We notified the company immediately and carried on.
Our final report, which they sincerely thanked us for, included an additional 20 medium-priority security defects.
When I asked what they planned to do, I was shocked by their answer. They told me, matter-of-factly, that they would do nothing about the defects, as they had not budgeted for additional effort.
Granted, this was some time ago, and I’d love to say they would handle this news differently today, but my experience fills me with doubt.
A Tougher Landscape and Sky-High Expectations
The simple fact is that the stakes have never been higher. It's no longer enough for software to be merely functionally adequate, and it's no longer acceptable for severe performance and security defects to be knowingly waved through into live solutions.
On the one hand, the threat landscape is more complex than ever, with new vulnerabilities and attack vectors cropping up daily. High-profile cyberattacks are commonplace in modern news cycles.
At the same time, customers expect lightning-fast performance. Anything less — a sluggish checkout page, or an app that crashes under load — and you risk losing more than just that user.
These days, ‘working’ must also mean robust performance and rock-solid security. The funny thing is, businesses know this. Stakeholders absolutely know that security and performance are fundamental to success, yet they still cut corners on testing them.
Why Are Security and Performance Testing Still Regarded as Second-Class Citizens?
So why are security and performance often only tested at the very end of the development cycle, if at all?
I can think of a few reasons:
- Misconceptions: Teams often think you can't do performance or security testing before you have a finished product. In fact, there are lightweight, iterative approaches that identify problems early and save costly rework (see the sketch after this list).
- Project Pressures: Short deadlines, tight budgets, and pressure to deliver minimum viable products (MVPs) can lead teams to cut corners. Testing is often seen as the easiest place to claw back time, and performance and security are prime candidates for the squeeze.
- Underestimating Risk: If past releases didn't have problems (or you didn't detect them), there's a false sense of security that the next release will be fine too.
- Lack of Expertise or Tooling: Many teams lack in-house specialists or the necessary tools to conduct meaningful tests early. Without champions to advocate for it, testing gets pushed back.
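To make 'lightweight and iterative' concrete, here's a minimal sketch of the kind of early performance check a small team could run against a half-built service, long before any formal test phase. It uses Locust, an open-source Python load testing tool, purely as an illustration; the /api/products endpoint, the user numbers, and the 500 ms threshold are hypothetical placeholders, not recommendations.

```python
# Minimal early-stage load check using Locust (https://locust.io).
# Illustrative only: the endpoint and thresholds below are
# hypothetical placeholders to be tuned for your own application.
from locust import HttpUser, task, between


class BrowsingUser(HttpUser):
    # Simulate a user pausing 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task
    def list_products(self):
        # Mark the request as failed if it errors OR is merely slow,
        # so the test surfaces regressions, not just outages.
        with self.client.get("/api/products", catch_response=True) as resp:
            if resp.status_code != 200:
                resp.failure(f"unexpected status {resp.status_code}")
            elif resp.elapsed.total_seconds() > 0.5:
                resp.failure("response took longer than 500 ms")
```

A team could run this headlessly from a CI job with something like `locust -f loadcheck.py --headless -u 50 -r 5 --run-time 2m --host https://staging.example.com` and fail the build when errors exceed an agreed budget. Even a crude check like this, run every sprint, surfaces problems months before an end-of-project test phase would.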
The more I think about it, the more I suspect it's simply down to basic human nature. Functional issues must be addressed immediately, whereas performance and security concerns can seemingly always be put off until tomorrow.
In many organisations, functional testing has become part of the DevOps culture. It’s planned early, automated where possible, and viewed as a core part of the development process.
Meanwhile, performance and security testing are left lingering at the back of the queue, if they make it onto the list at all. All too often, they're squeezed into the last sprint or pushed to the week before go-live. After all, as long as a solution is functionally adequate, we can ship it and deal with any fallout.
Even When Testing Is Done, It’s Often Done Half-Heartedly
Sure, some teams run a basic load test or tick a few boxes for compliance, but without time to analyse the results or act on them, these tests do little to improve the actual quality of the product.
To make matters worse, many test teams are being encouraged to move away from robust, enterprise-grade tooling to “free” or lower-cost alternatives. Budgets matter, of course, but the business often doesn't realise that these tools don't deliver the same quality of insight, and the resulting increase in risk is all too real.
For example, I once worked with a company that replaced a proven enterprise performance testing tool (LoadRunner) with a cheaper solution. The migration required weeks of script rework, and ultimately, the tool was unable to simulate real-world traffic properly. The delays and post-launch firefighting quickly wiped out any savings.
Often, test teams are encouraged down this path by external consultants. Have you ever wondered why? Two main reasons:
- They are more interested in diverting your budget to their fees and away from tools
- Most free or low-cost tools require more effort to achieve results, so more consultancy days sold
A win-win for them.
Will More Businesses Ever Take Performance and Security Testing Seriously?
Despite increasing media coverage and general awareness of the issues, many businesses remain stuck in old habits. This raises the question: Will businesses ever take this seriously?
What needs to happen to wake people up to the consequences of short-term thinking?
Performance and security can’t just be checkboxes. We’ve all seen the consequences when they aren’t taken seriously. They must be integrated into development from the beginning — with the right mindsets, the right expertise, and serious, professional tools like LoadRunner Cloud, LoadRunner Professional and Fortify.
But do you think this will ever happen?