15 April, 2026

DevOps Aviator: AI Made For Testers


DevOps Aviator brings generative AI into software delivery to help test teams move sooner, reduce manual effort, and get answers faster. It is part of the broader Aviator suite: a set of AI capabilities embedded across OpenText products.

Unlike generic chatbots, it works inside governed business systems. This is important for organisations that need stronger control over security, compliance, and how business data is handled.

Why DevOps Aviator Matters

OpenText’s Pick n Pay case study shows 95% software test automation, up to three days saved per cycle in testing time, and 20% additional coverage on platform-specific scenarios after using DevOps Aviator.

The workflow shift mattered as much as the raw numbers.

Pick n Pay used Aviator to generate test cases from user stories and features, compared the results with manual testers, and found that the AI suggestions often matched the human output while still surfacing valid test cases the team had missed.

The approach helped reduce the time spent writing test cases and freed testers to focus on exploratory testing and scenario analysis.

Eight Ways DevOps Aviator Helps Test Teams

DevOps Aviator turns test tooling from a static repository into an AI-assisted workbench that can help teams move from idea to validation more quickly.

Here are eight ways it helps:

1. AI-powered smart assistant

Ask questions about features, defects, tasks, or tests and get plain-English answers without jumping between tools. For example, a test lead can ask what changed in a release, which tests are linked to a feature, or which defects are still open.

2. AI-powered test conversion

Turn manual tests into automated assets more quickly, using either codeless workflows or Gherkin BDD. A tester can take a high-value regression scenario and ask Aviator to convert it into a structured automated starting point instead of rebuilding it from scratch.
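To make the target format concrete, here is a minimal Python sketch of rendering manual steps as a Gherkin-style scenario skeleton. The function and step lists are illustrative assumptions for this article, not Aviator's API; they simply show the shape of output a conversion step aims for.

```python
# Hypothetical sketch: turning recorded manual steps into a Gherkin-style
# scenario skeleton, similar in spirit to AI-assisted test conversion.
# Function name and inputs are illustrative, not part of any product API.

def to_gherkin(scenario_name, given, when, then):
    """Render manual test steps as a Gherkin scenario block."""
    lines = [f"Scenario: {scenario_name}"]
    lines += [f"  Given {step}" for step in given]
    lines += [f"  When {step}" for step in when]
    lines += [f"  Then {step}" for step in then]
    return "\n".join(lines)

print(to_gherkin(
    "Checkout with saved card",
    given=["a signed-in customer with a saved card"],
    when=["they place an order for one item"],
    then=["the order is confirmed", "a receipt email is queued"],
))
```

A tester would review and refine a skeleton like this rather than writing every step from scratch.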

3. AI-powered time-to-market predictions

Use historical velocity and delivery data to estimate feature completion and release timing more confidently. This gives product and QA teams a better sense of risk when planning sprint commitments.
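The underlying idea can be sketched in a few lines. Aviator's real predictions draw on richer delivery data; this Python helper and its numbers are illustrative only, showing the simplest velocity-based forecast.

```python
# A minimal sketch of a velocity-based completion forecast.
# Real AI predictions use richer historical data; this helper
# and the example numbers are illustrative assumptions.
import math
from statistics import mean

def sprints_to_finish(remaining_points, recent_velocities):
    """Estimate how many sprints remain, using average recent velocity."""
    return math.ceil(remaining_points / mean(recent_velocities))

# e.g. 55 points left; the last three sprints delivered 24, 30 and 26 points
estimate = sprints_to_finish(55, [24, 30, 26])
```

Even a crude estimate like this gives planning conversations a shared starting point; an AI-assisted version refines it with more signals.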

4. AI-powered session replays

Review user recordings and convert the useful parts into defects or test steps more quickly. For example, if a session replay shows a checkout failure, the tester can turn that evidence into a defect rather than manually reconstructing the problem.

5. AI-generated test suggestions

Generate test ideas and steps to broaden coverage and reduce gaps. This is useful when a feature is new and the team wants a quick first pass before refining the suite.
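The breadth-first "quick first pass" can be sketched mechanically: enumerate input combinations, then let the team prune. The parameters below are illustrative; an AI suggestion feature would also propose steps and expected results, which this sketch does not.

```python
# Hypothetical sketch: a first pass of test ideas generated by enumerating
# input combinations. Parameter names and values are illustrative.
from itertools import product

def suggest_cases(**dimensions):
    """Yield one candidate test case per combination of parameter values."""
    names = list(dimensions)
    for values in product(*dimensions.values()):
        yield dict(zip(names, values))

cases = list(suggest_cases(
    browser=["chrome", "firefox"],
    payment=["card", "voucher"],
    basket=["single item", "multi item"],
))
# 2 x 2 x 2 = 8 candidate cases for the team to review and prioritise
```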

6. AI-generated full-stack testing

Use AI-assisted generation to build more repeatable tests across layers, including JUnit, Java, Selenium, and UI automation. That helps teams create usable test assets faster, especially when the same feature needs regression coverage at multiple levels.

7. AI-generated threat modelling

Apply AI to security design reviews and STRIDE-style thinking earlier in the lifecycle. A team can use it to surface likely risks before the feature reaches production.
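The "STRIDE-style thinking" can be illustrated with a per-element checklist, the kind of first-pass draft an AI-assisted review could produce for humans to refine. The component-to-threat mapping below follows the common STRIDE-per-element convention but is a simplified assumption, not a complete methodology.

```python
# Hypothetical sketch: drafting a STRIDE-per-element checklist.
# The mapping is a simplified convention, not a full threat model.
STRIDE = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
}

def draft_threats(components):
    """Map (name, type) pairs to candidate STRIDE threats to review."""
    return {name: STRIDE[kind] for name, kind in components}

threats = draft_threats([("checkout API", "process"),
                         ("orders DB", "data store")])
```

The value is the review the draft provokes, early in design, rather than the list itself.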

8. AI-generated defect analysis

Summarise conversations, identify likely root causes, and suggest next steps faster. That shortens triage time when multiple defects are coming in at once.
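One ingredient of faster triage is grouping likely-related defects so they are analysed together. The sketch below is a deliberately simplified stand-in for the AI's summarisation and root-cause grouping; the defect records and keywords are illustrative.

```python
# Hypothetical sketch: bucketing incoming defects by a shared signature so
# likely duplicates are triaged together. A simplified stand-in for
# AI-driven grouping; the example data is illustrative.
from collections import defaultdict

def group_by_signature(defect_titles, keywords):
    """Bucket defect titles by the first known keyword they mention."""
    groups = defaultdict(list)
    for title in defect_titles:
        key = next((k for k in keywords if k in title.lower()), "unmatched")
        groups[key].append(title)
    return dict(groups)

defects = [
    "Checkout times out on submit",
    "Timeout when paying by voucher",
    "Login page 500 error",
]
groups = group_by_signature(defects, ["time", "500"])
# both timeout defects land in one bucket, so they can be triaged together
```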

How DevOps Aviator Can Change Your Test Processes

The biggest shift is that testing becomes assisted by a trusted, governed AI solution rather than dependent on repetitive manual authoring or personal chatbots.

Instead of treating test tools as places to record and run tests, teams can use them to generate tests, propose scenarios, summarise defects, and reduce analysis time.

That changes the workflow in a few concrete ways.

Test design starts earlier, automation can begin before the manual testing cycle is complete, and testers spend less time writing routine cases from scratch. It also helps developers, testers, and managers work from the same source of truth, which reduces the usual back-and-forth between tools and people.

A Simple Example

Instead of waiting for a manual tester to finish a first pass on a new checkout feature, the team can ask Aviator to generate scenarios during sprint planning. The tester then reviews, edits, and prioritises those scenarios rather than building them from a blank page.

In practical terms, it lets you test earlier, faster, cheaper, and more comprehensively.

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is helping test professionals improve the efficiency and effectiveness of software testing.

To view Stephen's profile and connect, see: Stephen Davis LinkedIn profile

Related Articles

5 Reasons Testing is a Waste of Time

Let’s be honest, testing is what teams do when they don’t trust their developers. It’s a tax on speed, a relic from waterfall days, and a crutch for people afraid to ship. It just slows down releases, kills creativity, and wastes budget that could be better spent on another sprint.


OpenText Summit: Why This Free Event Is Worth Your Time

You walk into a room where people are talking about the exact problems you wrestle with: tricky deployments, clunky processes, and how to test faster. Sometimes, the right conversation with the right person is enough to unlock a solution or a possibility you hadn’t even considered.


Functional Testing 26.1: Adds Python, Cloud Testing, and more AI

With 26.1, OpenText is giving you something concrete: Python‑based automation, AI‑assisted verification, and cloud labs that fit into your existing CI/CD. This turns functional testing from a separate QA activity into a shared capability that developers, SDETs, and testers can all contribute to.


LoadRunner 26.1: A New Direction in Performance Testing?

OpenText’s version 26.1 is a clear statement of where the Performance Engineering (LoadRunner) family is heading: AI-assisted, simplifying complex tasks and enabling your team to be more productive. This creates a very practical question: how do you buy and deploy these new capabilities in a way that actually moves the needle on risk, cost, and delivery speed?


Revealed: How Consultancies Get You Hooked on Bad Tools

Picture the scene: you’re about to engage a consultancy for testing services, and their proposal leans heavily on open‑source tools, but there’s a nagging doubt… a misalignment between what you’re paying for and what they’re delivering.

You want the guidance and support to prevent costly mistakes; they want more billable days.


Calleo: We Sell Test Tools

With Calleo, you get expert guidance to find the right options, demos and trial licenses to evaluate them, and practical help to get up and running. You’ll see the pros, cons and long-term costs clearly before making any decisions, and stay supported with renewals and updates long after you’ve started using the tool.


4 Ways to Cut Test Maintenance Effort with AI

Automation maintenance is a pain. It’s a frustrating time drain that nobody enjoys. Unfortunately, teams are doing more of it than ever, with modern solutions changing like the wind and each new release jeopardising script integrity. Thankfully, AI-driven automation is here to help.


2025 Roundup: Check Out The Top 5 Testing Times Articles

Thanks to your support, 2025 was another excellent year for Testing Times and our 10,000+ subscribers. We explored a wide range of software testing topics, including test automation, performance testing, Jira fatigue, tester authority, and more. Below is a quick look at the five newsletters with the most reactions this year, and why they resonated so strongly.


Remote Testing: Is Working From Home Worth The Risk?

Increasingly, organisations expect remote and hybrid testers to use borrowed tool licences, unstable VPNs, and software never designed to leave the office. That creates significant compliance and security risks that can turn into serious long‑term problems. It’s not the testers per se, but remote execution over on‑prem licences is a software audit waiting to happen. Read on to learn why a compliance nightmare isn’t the only reason your test setup might not be fit for distributed and home‑working team members.


To get other software testing insights like this direct to your inbox, join the Calleo mailing list.

You can, of course, unsubscribe at any time!

By signing up you consent to receiving regular emails from Calleo with updates, tips and ideas on software testing along with the occasional promotion for software testing products. You can, of course, unsubscribe at any time. Click here for the privacy policy.
