OpenText’s version 26.1 is a clear statement of where the Performance Engineering (LoadRunner) family is heading: AI-assisted, simplifying complex tasks and enabling your team to be more productive.
This creates a very practical question: how do you buy and deploy these new capabilities in a way that actually moves the needle on risk, cost, and delivery speed?
26.1 in plain English: why this release matters
Version 26.1 of Enterprise Performance Engineering (LoadRunner Enterprise) pulls several strands together that have been building over the last few years: cloud-first delivery, tighter CI/CD integration, and now AI doing real work in scripting and analysis.
Instead of adding “one more protocol” or “one more UI refresh,” this release pushes on three themes performance testers care about:
- AI is now part of the core experience, not a side feature. Performance Engineering Aviator, natural-language help, and guided flows are designed to reduce the effort and expertise needed to get usable tests running. (Aviator is OpenText's name for its AI-powered virtual assistant.)
- Performance becomes more “as code.” DevWeb, IDE integration, and the Model Context Protocol (MCP) approach make it easier for developers and SDETs to contribute to performance testing alongside functional automation.
- The march to cloud continues. LoadRunner Cloud (LRC) is purely a SaaS product, while LoadRunner Professional (LRP) and LoadRunner Enterprise (LRE), available either as SaaS or on-prem, continue to provide the most reliable platform for those who need an on-prem solution.
If you are deciding what to renew, upgrade, or buy next, v26.1 is less about “a few nice new features” and more about signalling where the platform is going over the next 3–5 years.
AI features in 26.1
The most visible shift in v26.1 is how AI shows up in real workflows, especially in scripting and analysis. This is important not because AI is fashionable, but because scripting has historically been a significant overhead during the project and maintenance phases.
Performance Engineering Aviator
OpenText’s Aviator capabilities in Performance Engineering aim to remove friction from three core activities: creating scripts, maintaining them, and analysing the results. In practice, that means:
- You can describe what you want to test in natural language and let AI help construct or refine the load test scripts, especially for DevWeb and HTTP-based workloads.
- The assistant can suggest correlations, parameterisations, and code-level changes, so you spend less time digging through help files and community posts to fix errors. This will help newer performance testers, and while more experienced engineers may write it off, it clearly signals the direction of travel.
- During or after runs, AI-backed analysis can summarise what changed, where bottlenecks appeared, and which metrics actually matter, rather than leaving you to dig through results manually. This will save time for all performance engineers.
The intent is not to replace experienced performance engineers, but to let them focus on more complex tasks.
DevWeb and MCP: performance testing where developers live
Another big directional signal is the focus on DevWeb and Model Context Protocol (MCP) integration.
DevWeb is lightweight, code-centric, and friendly to CI/CD and Git. With the addition of MCP:
- Developers can work with performance tests from within their familiar tools (such as VS Code), using natural-language interactions to generate and modify scripts.
- It becomes far more realistic to treat performance tests as versioned artefacts in the same repositories as application code and test automation.
- Collaboration between dev, QA, and performance specialists becomes less about “throwing requests over the wall” and more about co-owning a shared codebase of test assets.
For organisations, this is a shift from “performance testing happens at the end, in a separate tool, run by a separate team” to “performance is another dimension of quality that lives inside your delivery pipeline.”
What this actually changes for users
From a user’s perspective, the question is: what will v26.1 let us do that we can’t do today, and how should that affect our buying decisions?
Here are the practical implications:
- You can expect faster time-to-value from new licenses. AI-assisted scripting, analysis and cloud delivery mean teams can run meaningful load tests with less setup and training time.
- The “skills barrier” to using LoadRunner is coming down. That opens the door to involving more QA engineers, SDETs, and even developers directly, instead of relying on a small performance team. However, it is important not to underestimate the significant value experienced performance testers add when planning the project, choosing the tests to execute, and analysing the results.
- Roadmap alignment becomes strategic. If your organisation is moving towards SaaS, DevOps, and shift-left testing, the v26.1 direction fits that narrative, while LRP and LRE will continue to support those who need an on-prem solution.
How Calleo helps you choose wisely
Calleo exists for a very straightforward purpose: to help you buy the right test tools and get value from them quickly and pragmatically. In the context of LoadRunner v26.1, that role becomes even more important because the choices are more complex and the impact is longer-term.
Concretely, Calleo can help you:
- Match editions and deployment models to your reality: Calleo works with customers to understand team structure, delivery model, and infrastructure constraints, then matches those against the LoadRunner portfolio: Professional, Enterprise, Cloud, and related OpenText testing products. That avoids over-buying on features you won’t use, or under-buying and then discovering you can’t integrate performance into your pipeline without a painful upgrade.
- Plan for AI and “performance as code”: v26.1’s AI and DevWeb/MCP directions are powerful, but they only help if your people and processes are ready to use them. Calleo can advise on where AI-backed scripting will realistically help your teams, where you still need specialist expertise, and how to phase adoption to minimise risk.
- De-risk renewals and migrations: If you already own LoadRunner, upgrading to v26.1, shifting from on-prem to cloud, adopting a hybrid cloud and on-prem model, or changing license structures are all moments where mistakes are expensive. We understand how licensing, bundling, and roadmaps interact and can help you make decisions that stand up over a 3–5-year horizon, not just this quarter.
- Provide demos and proof-of-value: Before you commit, you can see how AI-assisted scripting, DevWeb, and cloud execution behave against workloads that look like your own, not contrived examples. That helps stakeholders move from curiosity to confidence and provides you with the internal evidence you need to justify spending.
Moving the needle: what to do next
If you want this release to actually change your outcomes, rather than just your version number, a simple sequence can help:
- Clarify your direction: Are you trying to reduce bottlenecks, involve dev more, move to cloud, or all three? That will determine which v26.1 capabilities matter most.
- Map current usage: Where are performance tools used today, by whom, and how often? Identify where AI and DevWeb could unblock real work, not just look impressive in a demo.
- Talk to Calleo about options: Use Calleo to translate those needs into concrete licensing and product choices, with a view on roadmap and total cost over several years.
OpenText has made its intent clear with v26.1: performance testing is becoming smarter, faster, and more widely adopted across teams. This trajectory will continue, and as LoadRunner follows the path UFT One has already gone down, I fully expect to see new AI capabilities rolled out with every version.
Calleo’s role is to ensure you invest in the right pieces so that as the train pulls out of the station, you’re not left behind.