Would you intentionally plant defects to test your test team? Bebugging, as it’s known, is a technique where software flaws are purposely introduced to gauge testing effectiveness.
Possibly counterintuitive, probably confrontational, and definitely controversial, this approach has been dividing opinion since the 1970s—if not earlier.
Right off the bat, I want to make it clear that this process doesn’t sit well with me. But maybe I’m wrong? Maybe there are times and places where bebugging would be a valid way to help improve processes, tighten up testing, or root out a potential weak link?
The Case For Bebugging: Continuous Improvement
Proponents see bebugging as a practical way to measure, and then improve, testing. By tracking how many of the injected bugs are discovered during testing, teams can objectively gauge the effectiveness of their QA processes: if testers find most of the seeded bugs, it suggests their methods are thorough; if not, there may be blind spots in test coverage.
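To make that measurement idea concrete, here is a minimal sketch of the classic fault-seeding estimate, often attributed to Harlan Mills: assume testers find real bugs at roughly the same rate as seeded ones, and scale accordingly. The function name and the numbers below are mine, purely for illustration, not from any particular tool.

```python
def estimate_remaining_defects(seeded_total, seeded_found, native_found):
    """Mills-style fault-seeding estimate (simplified sketch).

    If the team found seeded_found of seeded_total planted bugs, assume
    real (native) bugs are found at roughly the same rate, and estimate
    how many real bugs are still undiscovered.
    """
    if seeded_found == 0:
        raise ValueError("No seeded bugs found; the estimate is undefined.")
    detection_rate = seeded_found / seeded_total            # e.g. 18 / 20 = 0.9
    estimated_native_total = native_found / detection_rate  # e.g. 45 / 0.9 = 50
    return estimated_native_total - native_found            # bugs still lurking

# Illustrative numbers only: 20 bugs seeded, 18 found, 45 real bugs found.
print(estimate_remaining_defects(seeded_total=20, seeded_found=18, native_found=45))
# -> 5.0, i.e. roughly five undiscovered real bugs remain
```

The obvious caveat: the estimate is only as good as the assumption that seeded bugs are as hard to find as real ones.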
It can also work in a similar way to chaos testing: deliberately introducing realistic failures can reveal hidden vulnerabilities and expose weak points in the testing process before genuine defects of the same kind become production issues.
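To make the chaos-testing parallel concrete, here is a hypothetical sketch of a seeded fault gated behind an environment variable, so it only misbehaves in a designated QA build and can be stripped out cleanly afterwards. The BEBUG_FAULTS flag and apply_discount function are invented for illustration; real bebugging would also keep a central record of every seeded fault so each one is verifiably removed before release.

```python
import os
import random

# Hypothetical switch: the seeded fault only exists when the QA build sets this flag.
BEBUG_ENABLED = os.environ.get("BEBUG_FAULTS") == "1"

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if BEBUG_ENABLED and random.random() < 0.1:
        # Seeded defect: silently ignores the discount 10% of the time,
        # so the seeding team can check whether the test suite catches it.
        return price
    return price * (1 - percent / 100)
```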
Honestly, I have never taken part in bebugging, but from what I can tell, people generally don’t do it with malicious intent; that is, they’re not looking for a gotcha moment. Rather, it tends to be driven by logical folk seeking continuous process improvement.
By regularly challenging the system and the team, organisations hope to foster that most chimerical of goals: a culture of learning and constant adaptation.
The Dark Side of Bebugging: The Hunter Becomes The Hunted
I think anyone with project experience, or indeed anyone who’s ever met people, can see that bebugging might just lead to trouble.
For starters, it can seriously impact team morale. Let’s be honest, developers and testers don’t always have the best relationships, and tight deadlines and high expectations already create stressful dynamics. Throwing deliberately planted bugs into the mix can be a recipe for disaster.
Deliberate bug injection also changes the nature of testing. Done openly, the new metrics might inadvertently encourage the wrong kind of testing, with testers hunting for seeded bugs rather than real ones; done covertly, bebugging could erode trust, especially when the team inevitably discovers the truth.
Lastly, who has the time for this? Creating, managing, and tracking seeded bugs takes real effort, and where is that going to come from? Surely that effort could be better spent elsewhere?
But then, how do you know whether your testers are catching your bugs?
My Opinion: The Tester’s “Where’s Wally?”
To me, fault seeding is a bit like the tester’s version of “Where’s Wally?”—a hidden challenge that, if pitched just right, could maybe sharpen skills and reveal valuable insights. Maybe.
However, if the injected bugs are too obvious, they’re quickly discovered and offer little value. Too subtle, and they may go undetected, missing the point entirely.
Additionally, the diversion of resources and the potential impact on morale and team spirit make this a non-starter for me. But maybe I’m wrong?
Would You Consider Bebugging?
So, is deliberate bug injection a clever tool for continuous improvement, or an unnecessary source of friction? Does the answer depend on your context, your team, and your goals?
What if you had a major go-live planned, but something about testing was giving you bad vibes? What if you suspected a leaky test team, or that somebody was passing tests without giving them due attention, but couldn’t prove it?
Is bebugging a valuable activity, a recipe for mistrust, an option of last resort, or something else?