20 August 2025

AI in Software Testing: Just Another Fad?

AI is everywhere. The software testing industry is flooded with buzzword-heavy solutions, and you’d be hard-pressed to find a vendor that hasn’t labelled at least one of its tools as AI-powered.

Such overuse naturally invites scepticism, especially given past fads that promised the moon but failed to deliver. But is AI another in a long list of cautionary tales, or does it genuinely herald a new era?

Do Testing Fads Give You a Sense of Déjà Vu?

Something about this current trend is so familiar, you could be forgiven for thinking you’ve been here before.

In the last decade alone, codeless automation, parallel programming, and big data have swept through SDLCs with much fanfare and promise, only to fade as reality set in.

Most fizzled out because the hype exceeded capability, underlying complexity was downplayed, and organisations resisted, or wouldn’t fund, the change required to realise the benefits.

Is AI-Based Testing Just The Latest Fad?

There are definite similarities with the current, rampant wave of AI washing.

For a start, many testing products labelled as AI-driven lack genuine AI capabilities. Instead, they use simple-but-effective coding techniques that have been around for decades.

As with every other field, this is a case of software testing marketeers exaggerating the current state of technology. Claims that AI can automate everything, generate test suites instantly, or eliminate the need for human testers are just plain false.

Why Software Testing AI Might Just Be Different

Behind the excessive and unrealistic marketing, though, lies something undeniable: enough for me to say with absolute conviction that AI isn’t just another marketing fad.

For one thing, unlike most prior SDLC trends, AI is already making a tangible impact:

  • Self-healing scripts: AI updates tests as applications evolve, reducing flakiness and maintenance.
  • Adaptive test selection: Machine learning targets high-risk areas, boosting testing efficiency (see the sketch after this list).
  • Video-based test creation: Teams can generate manual and automated tests from short user videos.
  • Anomaly detection and analytics: AI sifts through massive test result sets, highlighting subtle regressions and risks.
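
To make the adaptive test selection idea concrete, here is a minimal sketch in Python. It is an illustration only, not any vendor’s implementation: real tools train models on rich telemetry, whereas this simply ranks tests by historical failure rate plus a boost when a test covers recently changed code. The names (TestRecord, risk_score, select_tests) and the scoring weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int                    # how many times the test has run recently
    failures: int                # how many of those runs failed
    touches_changed_code: bool   # does it cover files changed in this commit?

def risk_score(t: TestRecord) -> float:
    """Naive risk score: historical failure rate, boosted when the test
    covers code that changed in the current commit."""
    failure_rate = t.failures / t.runs if t.runs else 0.5  # unknown tests get a middling score
    churn_boost = 0.3 if t.touches_changed_code else 0.0
    return min(1.0, failure_rate + churn_boost)

def select_tests(history: list[TestRecord], budget: int) -> list[str]:
    """Pick the highest-risk tests that fit the available run budget."""
    ranked = sorted(history, key=risk_score, reverse=True)
    return [t.name for t in ranked[:budget]]

if __name__ == "__main__":
    history = [
        TestRecord("test_login", runs=50, failures=1, touches_changed_code=False),
        TestRecord("test_checkout", runs=50, failures=9, touches_changed_code=True),
        TestRecord("test_search", runs=10, failures=0, touches_changed_code=True),
    ]
    print(select_tests(history, budget=2))  # -> ['test_checkout', 'test_search']
```

Even this crude ranking captures the core idea: spend a limited testing budget where failures are most likely, rather than re-running everything every time.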

AI has already been shown to cut test design and maintenance time, predict defects before they hit production, and fill test coverage gaps faster than manual methods.

AI Could Be The Most Significant Shift in QA… Ever

The reason AI isn’t just another fad? We are in the earliest days of a technology that has the potential to redefine human life in ways not seen since the Industrial Revolution.

In my opinion, AI will have a bigger impact than the telephone, the Internet, maybe even the printing press. It will continuously learn, adapt, and accelerate into something truly extraordinary; for better or worse.

AI is already polarising society. It has created a two-tier system between the AI-literate, confident early adopters and the rest. Those who embrace it see the opportunities; those who fear it see the threat.

The reality probably lies somewhere in between, and AI is already fundamentally changing working patterns across myriad industries.

When it comes to software testing, AI is already augmenting test management (identifying critical test sets, analysing results), generating and interpreting defect reports (even from videos or logs), and automating repetitive checks—which frees up QA professionals to focus on strategy and creative problem-solving.
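
As a hedged illustration of the “analysing results” point, the sketch below flags test runs whose duration drifts sharply from their historical norm. It is a deliberately crude stand-in for the statistical and machine-learning anomaly detection that real platforms perform; the function name, data shapes, and the three-sigma threshold are assumptions made for illustration.

```python
import statistics

def flag_anomalies(history: dict[str, list[float]],
                   latest: dict[str, float],
                   threshold: float = 3.0) -> list[str]:
    """Flag tests whose latest duration is more than `threshold` standard
    deviations away from their historical mean duration."""
    flagged = []
    for test, past_durations in history.items():
        if test not in latest or len(past_durations) < 2:
            continue  # not enough history to judge
        mean = statistics.mean(past_durations)
        stdev = statistics.stdev(past_durations)
        if stdev == 0:
            continue  # identical past runs; skip rather than over-flag
        z = abs(latest[test] - mean) / stdev
        if z > threshold:
            flagged.append(test)
    return flagged

if __name__ == "__main__":
    history = {"test_payment": [1.1, 1.2, 1.0, 1.1], "test_profile": [0.5, 0.6, 0.5]}
    latest = {"test_payment": 4.8, "test_profile": 0.55}
    print(flag_anomalies(history, latest))  # -> ['test_payment']
```

The same principle extends beyond durations to failure rates, error signatures, and performance metrics across thousands of results, which is where AI-driven analytics earn their keep.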

AI’s role won’t stop at automation. It will become embedded in every stage of the SDLC, making the feedback loop between development, testing, and business goals smarter and faster than ever.

The question isn’t whether AI will change testing; it’s whether you will embrace it and thrive, or not.

Caveats: Tempering Hype with Current Reality

Of course, not every claim is valid, and present-day AI testing solutions come with fundamental limitations:

  • Requires extensive, high-quality data for training.
  • Lacks domain intuition and nuanced understanding.
  • Needs human oversight for context, creativity, and strategic decision-making.
  • Can be complex to implement, especially at scale.

Organisations that succeed will blend strong QA expertise with thoughtful use of AI, sceptically evaluating claims and investing in upskilling—not chasing magic bullets or quick wins.

Conclusion: AI Is a Defining Leap in Software Development

While the current hype cycles echo many past SDLC trends, AI is already delivering advances that outstrip previous fads.

For software testing professionals, the path is clear: Start now.

Look past the marketing noise and adopt AI with care and education. Measure its impact, stay up to date with advancements and, where you can, help guide your organisation’s AI future.

We may well be at AI v0.1, but as capabilities mature, the impact will only grow.

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is to help test professionals improve the efficiency and effectiveness of software testing.

To view Stephen’s LinkedIn profile and connect: Stephen Davis LinkedIn profile

