Insights | Under The Spotlight
20 November 2024

WQR: This 1 AI Recommendation Could Derail Your QA Strategy


The World Quality Report 2024-25 (WQR) provides a few interesting insights into adopting AI in software testing. However, one of their recommendations regarding AI implementation is a terrible idea for most businesses.

When it comes to implementing AI for software testing, the WQR has the following key recommendations:

  1. Start Now: If you are not yet exploring or actively using Gen AI solutions, it’s crucial to begin now to stay competitive.
  2. Experiment Broadly: Don’t rush to commit to a single platform or use case. Instead, experiment with multiple approaches to identify the ones that provide the most significant benefits for your organisation.
  3. Enhance, Don’t Replace: Understand that Gen AI will not replace your quality engineers but will significantly enhance their productivity. However, these improvements will not be immediate; allow sufficient time for the benefits to become apparent.

Two of these are entirely sound, and I will look at them in more detail at the end of this article. However, recommendation 2 is only potentially appropriate for the largest corporate behemoths (those above £1 billion turnover) and is bad advice for every other company.

What Does The WQR Mean by Software Testing AI?

AI means many things to many people, so let’s start by looking at what it means to the authors of the World Quality Report.

While the authors don’t define Gen AI per se, page 31 does contain a list of use cases:

[Figure: Gen AI use cases, WQR 2024-25, page 31]

Three Problems With ‘Experiment Broadly’

This advice may appear reasonable on the surface, but in reality it is only even potentially appropriate for huge businesses. In the UK, there are probably only around 100 companies for whom this is sound advice.

An overwhelming majority of companies face severe budget constraints and resource challenges, and lack the bandwidth or expertise to carry out grandiose projects like this. QA teams have even less capacity, given the high-pressure, time-constrained nature of software testing.

Taking this approach would eat up time and effort, impact software quality, and give AI such a bad reputation that it would be years before these companies gave it another go.

I still speak to people who are scarred by poorly thought-out experiments with automation in the 1990s and 2000s.

So, while the WQR’s other AI recommendations do offer valuable guidance for most companies, the advice to ‘Experiment Broadly’ could hinder rather than help their AI adoption journey in software testing in a number of ways:

1. Short-Term Productivity Impact

Implementing and evaluating multiple AI solutions is extremely time-consuming and will decrease short-term productivity, as testers would need to divert their attention from their primary testing duties to learn and assess various AI tools.

This short-term disruption is too costly for most organisations that rely on quick turnarounds and efficient use of resources.

2. Difficulty in Evaluation

The WQR’s suggestion assumes that organisations can effectively evaluate the benefits of different AI approaches. Ok, so how, exactly?

How are teams meant to conduct comprehensive evaluations of multiple tools without a standardised comparison method—this is all new—or the luxury of extended trial periods?

Plus, Gen AI solutions evolve so fast that any comparison would likely be obsolete by the time a reasonable trial was completed.

3. Immediate vs. Long-Term Benefits

The report acknowledges that “improvements will not be immediate” and advises allowing “sufficient time for the benefits to become apparent”. Sure, in a perfect world we would, but test teams don’t work like this.

Automation started off like this, but wasn’t widely adopted by most normal businesses until it could quickly demonstrate value and justify the investment of time and resources.

Here’s a More Realistic Approach to AI

Rather than broadly experimenting with multiple AI solutions, a more practical approach is to:

  • Prioritise High-Impact Areas: Identify specific testing processes that could benefit most from AI augmentation and start there. This targeted approach can yield more immediate and tangible results.
  • Leverage Existing Tools: Many testing tools now incorporate AI features. Companies should explore these options first, as they can often be integrated more seamlessly into existing workflows.
  • Incremental Implementation: Focus on implementing proven AI features within current tooling piecemeal. This allows for gradual adoption without overwhelming the team or disrupting existing workflows.
  • Monitor Developments: Look for emerging solutions that incorporate AI and adopt more mature solutions that deliver tangible results.

Useful Recommendations From the WQR

While the “Experiment Broadly” approach is unsuitable for most companies, the World Quality Report’s other key AI recommendations are realistic and beneficial for these organisations.

You Should Start Now

The report’s advice to “Start Now” if you are not yet exploring or actively using Gen AI solutions is sound for most companies, although this doesn’t mean diving in headfirst. Instead, companies should begin to educate themselves and identify potential areas where AI could enhance their testing processes.

Benefits of Starting Now:

  • Competitive Advantage: Early adoption, even on a small scale, can give you an edge over competitors who are slower to embrace AI in testing.
  • Learning Curve: Starting now allows teams to gradually build expertise in AI-assisted testing, reducing the risk of falling behind industry trends.
  • Incremental Improvements: By starting small, you can see gradual improvements in your testing processes without significant disruption.
  • Future-Proofing: As AI becomes more prevalent in software development, early familiarity will help you adapt more quickly to future advancements.

Enhance, Don’t Replace

The WQR’s recommendation to “Enhance, Don’t Replace” is particularly relevant for normal companies—Gen AI will not replace quality engineers but will significantly enhance their productivity.

This approach aligns well with the incremental implementation strategy, allowing companies to gradually introduce AI tools that complement their existing workforce and processes.

Key Considerations:

  • Skill Augmentation: AI should be viewed as a tool to enhance the capabilities of existing quality engineers, not as a replacement for human expertise.
  • Process Optimisation: Focus on using AI to streamline repetitive tasks, allowing human testers to concentrate on more complex, strategic aspects of quality assurance.
  • Balanced Integration: Strive for a balance between AI-assisted testing and traditional methods, leveraging the strengths of both approaches.
  • Continuous Learning: Encourage testers to adapt and evolve their skills alongside AI implementation, fostering a culture of continuous improvement.

Conclusion

While the WQR 2024-25 provides valuable insights into the growing adoption of AI in software testing, its recommendation to ‘Experiment Broadly’ is unlikely to be the most practical approach for all but the biggest businesses.

The report’s data showing that 68% of organisations are either using Gen AI or developing roadmaps after initial trials is encouraging, but it’s important to note that these are larger organisations with more resources, and only 34% of these corporate giants are actually using Gen AI in anger.


For most companies, a more measured, targeted approach to AI adoption in software testing is likely to be more effective.

By focusing on incremental improvements, leveraging existing tools, and prioritising high-impact areas, companies can reap the benefits of AI in testing without overextending their resources or disrupting their core testing activities.

As the use of AI in software testing continues to evolve, it’s crucial for businesses to stay informed and adaptable, but also to approach AI adoption in a way that aligns with their specific needs, constraints, and goals.

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is helping test professionals improve the efficiency and effectiveness of software testing.

To view Stephen's LinkedIn profile and connect: Stephen Davis LinkedIn profile


To get other software testing insights like this direct to your inbox, join the Calleo mailing list. You can, of course, unsubscribe at any time!

By signing up you consent to receiving regular emails from Calleo with updates, tips and ideas on software testing along with the occasional promotion for software testing products. You can, of course, unsubscribe at any time. Click here for the privacy policy.
