
Insights | Performance Testing
24 February 2026

LoadRunner 26.1: A New Direction in Performance Testing?


OpenText’s version 26.1 is a clear statement of where the Performance Engineering (LoadRunner) family is heading: AI-assisted tooling that simplifies complex tasks and makes your team more productive.

This creates a very practical question: how do you buy and deploy these new capabilities in a way that actually moves the needle on risk, cost, and delivery speed?

26.1 in plain English: why this release matters

Version 26.1 of Enterprise Performance Engineering (LoadRunner Enterprise) pulls several strands together that have been building over the last few years: cloud-first delivery, tighter CI/CD integration, and now AI doing real work in scripting and analysis.

Instead of adding “one more protocol” or “one more UI refresh,” this release pushes on three themes performance testers care about:

  • AI is now part of the core experience, not a side feature. Performance Engineering Aviator, natural-language help, and guided flows are designed to reduce the effort and expertise needed to get usable tests running. (Aviator is the OpenText name for its AI-powered virtual assistant.)
  • Performance becomes more “as code.” DevWeb, IDE integration, and the Model Context Protocol (MCP) approach make it easier for developers and SDETs to contribute to performance testing alongside functional automation.
  • The march to cloud continues. LoadRunner Cloud (LRC) is purely a SaaS product, while LoadRunner Professional (LRP) and LoadRunner Enterprise (LRE), available as SaaS or on-prem, continue to provide the most reliable platform for those who need an on-prem solution.

If you are deciding what to renew, upgrade, or buy next, v26.1 is less about “a few nice new features” and more about signalling where the platform is going over the next 3–5 years.

AI features in 26.1

The most visible shift in v26.1 is how AI shows up in real workflows, especially in scripting and analysis. This is important not because AI is fashionable, but because scripting has historically been a significant overhead during the project and maintenance phases.

Performance Engineering Aviator

OpenText’s Aviator capabilities in Performance Engineering aim to remove friction from three core activities: creating scripts, maintaining them, and analysing the results. In practice, that means:

  • You can describe what you want to test in natural language and let AI help construct or refine the load test scripts, especially for DevWeb and HTTP-based workloads.
  • The assistant can suggest correlations, parameterisations, and code-level changes, so you spend less time digging through help files and community posts to fix errors. This will help newer performance testers most, and while more experienced engineers may write it off, it clearly signals the direction of travel.
  • During or after runs, AI-backed analysis can summarise what changed, where bottlenecks appeared, and which metrics actually matter, rather than leaving you to dig through results manually. This will save time for all performance engineers.

The intent is not to replace experienced performance engineers, but to let them focus on more complex tasks.

DevWeb and MCP: performance testing where developers live

Another big directional signal is the focus on DevWeb and Model Context Protocol (MCP) integration.

DevWeb is lightweight, code-centric, and friendly to CI/CD and Git. With the addition of MCP:

  • Developers can work with performance tests from within their familiar tools (such as VS Code), using natural-language interactions to generate and modify scripts.
  • It becomes far more realistic to treat performance tests as versioned artefacts in the same repositories as application code and test automation.
  • Collaboration between dev, QA, and performance specialists becomes less about “throwing requests over the wall” and more about co-owning a shared codebase of test assets.
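To make “performance as code” concrete, here is a minimal sketch of what a DevWeb script can look like, based on the DevWeb JavaScript SDK. It is an ordinary JavaScript file that can be versioned in Git alongside application code; the endpoint, transaction name, and payload are illustrative, and the script runs under the DevWeb engine rather than standalone Node.js:

```javascript
// main.js -- a DevWeb script is plain JavaScript executed by the DevWeb engine.
// The "load" object is provided by the engine; it is not a Node.js module.

load.initialize(async function () {
    // One-time setup before the load flow starts (e.g. reading parameters).
});

load.action("CheckoutFlow", async function () {
    // Wrap the business step in a transaction so it appears in results.
    const checkout = new load.Transaction("checkout");
    checkout.start();

    // Issue an HTTP request against the system under test
    // (the URL and body here are illustrative, not a real service).
    const response = new load.WebRequest({
        id: 1,
        url: "https://shop.example.com/api/checkout",
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ basketId: "demo" }),
    }).sendSync();

    // Mark the transaction passed or failed based on the response status.
    checkout.stop(response.status === 200
        ? load.TransactionStatus.Passed
        : load.TransactionStatus.Failed);
});

load.finalize(async function () {
    // Cleanup after the load flow completes.
});
```

Because this is just a text file in the repository, it can be reviewed in pull requests, diffed between releases, and triggered from CI/CD like any other test asset.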

For organisations, this is a shift from “performance testing happens at the end, in a separate tool, run by a separate team” to “performance is another dimension of quality that lives inside your delivery pipeline.”

What this actually changes for users

From a user’s perspective, the question is: what will v26.1 let us do that we can’t do today, and how should that affect our buying decisions?

Here are the practical implications:

  • You can expect faster time-to-value from new licenses. AI-assisted scripting, analysis and cloud delivery mean teams can run meaningful load tests with less setup and training time.
  • The “skills barrier” to using LoadRunner is reducing. That opens the door to involving more QA engineers, SDETs, and even developers directly, instead of relying on a small performance team. However, it is important not to underestimate the significant value added by experienced performance testers when planning the project, the tests to be executed, and analysing the results.
  • Roadmap alignment becomes strategic. If your organisation is moving towards SaaS, DevOps, and shift-left testing, the v26.1 direction fits that narrative, while LRP and LRE will continue to support those who need an on-prem solution.

How Calleo helps you choose wisely

Calleo exists for a very straightforward purpose: to help you buy the right test tools and get value from them quickly and pragmatically. In the context of LoadRunner v26.1, that role becomes even more important because the choices are more complex and the impact is longer-term.

Concretely, Calleo can help you:

  • Match editions and deployment models to your reality: Calleo works with customers to understand team structure, delivery model, and infrastructure constraints, then matches those against the LoadRunner portfolio: Professional, Enterprise, Cloud, and related OpenText testing products. That avoids over-buying on features you won’t use, or under-buying and then discovering you can’t integrate performance into your pipeline without a painful upgrade.
  • Plan for AI and “performance as code”: v26.1’s AI and DevWeb/MCP directions are powerful, but they only help if your people and processes are ready to use them.
    Calleo can advise on where AI-backed scripting will realistically help your teams, where you still need specialist expertise, and how to phase adoption to minimise risk.
  • De-risk renewals and migrations: If you already own LoadRunner, upgrading to v26.1, shifting from on-prem to cloud, adopting a hybrid cloud and on-prem model, or changing license structures are all moments where mistakes are expensive. We understand how licensing, bundling, and roadmaps interact and can help you make decisions that stand up over a 3–5-year horizon, not just this quarter.
  • Provide demos and proof-of-value: Before you commit, you can see how AI-assisted scripting, DevWeb, and cloud execution behave against workloads that look like your own, not contrived examples. That helps stakeholders move from curiosity to confidence and provides you with the internal evidence you need to justify spending.

Moving the needle: what to do next

If you want this release to actually change your outcomes, rather than just your version number, a simple sequence can help:

  1. Clarify your direction: Are you trying to reduce bottlenecks, involve dev more, move to cloud, or all three? That will determine which v26.1 capabilities matter most.
  2. Map current usage: Where are performance tools used today, by whom, and how often? Identify where AI and DevWeb could unblock real work, not just look impressive in a demo.
  3. Talk to Calleo about options: Use Calleo to translate those needs into concrete licensing and product choices, with a view on roadmap and total cost over several years.

OpenText has made its intent clear with v26.1: performance testing is becoming smarter, faster, and more widely adopted across teams. This trajectory will continue, and as LoadRunner follows the path UFT One has already gone down, I fully expect to see AI capabilities being rolled in with every new version.

Calleo’s role is to ensure you invest in the right pieces so that as the train pulls out of the station, you’re not left behind.

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is to help test professionals improve the efficiency and effectiveness of software testing.

To view Stephen's LinkedIn profile and connect: Stephen Davis LinkedIn profile



To get other software testing insights like this direct to your inbox, join the Calleo mailing list.

You can, of course, unsubscribe at any time!

By signing up you consent to receiving regular emails from Calleo with updates, tips and ideas on software testing along with the occasional promotion for software testing products. You can, of course, unsubscribe at any time. Click here for the privacy policy.
