Opinion amongst test tool users is polarised, and it boils down to where you stand on one question: open source or paid tools?
In this debate, there’s one area that I don’t think gets enough attention: application security.
Even when it is discussed, I often hear the open-source champions casually proclaim that “Open source tools are not necessarily more prone to hacks than proprietary test tools.”
Sure, it’s easy to make claims like this. But can it really be true?
In this blog, I ask: how secure are open source test tools? I highlight four areas where open source security should give you cause for concern.
1. Open Source Code Doesn’t Always Have a Security Review
As a test professional, you know that even the best developers have defects in their code. These defects range from easily detected functional issues, through more complex performance issues, to deeply buried security vulnerabilities.
The fact is, it’s hard, if not impossible, to develop flawless code. All software needs to be tested thoroughly prior to release, and that includes security testing. Most developers aren’t experts in writing secure code; it’s way down on their list of priorities. Commercial vendors typically fund a dedicated security review before release, whereas many open source projects depend on volunteer effort, so a formal review may never happen at all.
2. Open Source Relies Heavily on Third Party Libraries
Open source software often pulls in third-party libraries that are then trusted blindly. Let’s give the original developers the benefit of the doubt: I’m sure they vetted the libraries they used and implemented any fixes required during development.
However, once open source software goes live, it becomes increasingly difficult to ensure that every library in the dependency tree is known and patched appropriately.
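Even the first step, simply knowing what you are running, takes deliberate effort. As a minimal sketch, assuming a Python-based test stack purely for illustration, the snippet below builds an inventory of installed packages and versions so they can at least be cross-checked against published advisories. The advisory check itself is out of scope here; dedicated scanners such as pip-audit or OWASP Dependency-Check exist for that.

```python
# Sketch: inventory the installed third-party packages so they can be
# cross-checked against security advisories. Assumes a Python-based test
# stack; this illustrates the idea and is not a dependency scanner.
from importlib.metadata import distributions

def dependency_inventory() -> dict:
    """Return a mapping of installed package name -> version."""
    inventory = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:  # some distributions ship incomplete metadata
            inventory[name] = dist.version
    return inventory

if __name__ == "__main__":
    for name, version in sorted(dependency_inventory().items(),
                                key=lambda kv: kv[0].lower()):
        print(f"{name}=={version}")
```

If producing this list is the easy part, keeping every entry patched over the life of the project is where the real difficulty lies.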
3. Bad Actors Have Visibility of Open Source Code Too
Open source proponents like to counter the issues above by pointing out that their code is widely used and constantly scrutinised by many developers. Their argument is that this increases the likelihood of security flaws being caught and squashed. That is partially true: flaws are likely to be fixed eventually. But what happens in the meantime?
Open source projects have development communities: the people who contribute to the development of the solution.
When a potential vulnerability is detected, members of the development community are notified, and by its very nature this happens before the vulnerability is fixed.
Unfortunately, not all development community members have the best intentions in mind – some of them could be cybercriminals.
These notifications mean attackers don’t even have to do their own homework; they can immediately pounce on anyone running old versions. In fact, criminals can often exploit a vulnerability even after the fix has been implemented in the source code, because not every organisation deploys fixes immediately.
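A bare-minimum defence is simply knowing whether the version you are running predates the fix. Here is a rough sketch; the version numbers are hypothetical and the version parsing is deliberately simplistic.

```python
# Sketch: flag a deployed tool whose version predates the first patched release.
# The version numbers used below are hypothetical, for illustration only.
def parse_version(version: str) -> tuple:
    """Turn a simple dotted version string like '3.8.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(running: str, first_patched: str) -> bool:
    """True if the running version is older than the first release containing the fix."""
    return parse_version(running) < parse_version(first_patched)

if __name__ == "__main__":
    running_version = "3.8.1"   # hypothetical version currently deployed
    first_patched = "3.9.0"     # hypothetical first release containing the fix
    if is_vulnerable(running_version, first_patched):
        print(f"Running {running_version}: upgrade to {first_patched} or later.")
    else:
        print(f"Running {running_version}: fix already included.")
```

The point is not the code itself but the discipline behind it: until that check passes everywhere, the publicly disclosed vulnerability is an open invitation.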
4. Open Source Add-Ons Bring Additional Security Complications
Even if we disregard the issues associated with the main open source solution – which we absolutely shouldn’t do – there’s still a huge elephant in the room. Most open source users rely on add-ons.
These add-ons are also open source tools and are therefore subject to all of the same issues faced by the software they are adding to.
Conclusion
In my opinion, it is undeniable that open source test tools carry significant security risks.
On the surface, I find it odd that some financial institutions, usually risk-averse, appear happy to use open source tools.
However, when I think about this more deeply, I wonder if those controlling the purse strings know how to assess and understand open source’s risk and true cost.
These are risks that you can avoid by using the right paid tools.
Tools like the Micro Focus suite undergo external security validation, automatically validate every line of code, and release regular, consistent patches. For most organisations, they will prove cheaper in the long run.