Insights
20 August 2024

How NASA Does Software Testing: 4 Lessons We Can Learn

As a software tester, I’ve always wondered how organisations like NASA test their critical systems. After all, their systems are unique and complex, and the stakes are almost incomprehensibly high—a single software bug can result in catastrophic failure and loss of life.

With that in mind, I explored how NASA, an organisation synonymous with precision and reliability, manages software quality assurance and builds high-quality software systems.

What I found was a treasure trove of practices that can be incredibly insightful for enterprise-level software testers.

The High Stakes of Space Exploration

Probably the most important thing I learned was that NASA, like every other organisation, has had its fair share of software failures.

Consider the infamous Mariner 1 spacecraft incident in 1962. A single missing hyphen in the code caused the rocket to veer off course, leading to the heartbreaking decision to self-destruct. The error was simple, yet the consequences were profound. The loss was equivalent to about $169 million today, not to mention wasted time and effort.

Another example is the Mars Polar Lander in 1999, where software mistakenly interpreted vibrations during descent as a landing event, shutting down the engines 40 metres above the surface. These failures underscore the critical importance of rigorous software testing in high-stakes environments.

For me, though, NASA’s failures aren’t the actual headline; I was more intrigued by their response.

As a testing professional, I’ve seen first-hand how many (most?) organisations pay lip service to lessons learned but rarely do anything about them. I mean, how often are tangible changes implemented post-project? It’s much more likely to see the same mistakes repeated, from rushed testing phases to missing documentation to ignoring failed tests altogether.

4 Key Ideas Enterprise Software Testers Can Learn from NASA

1. Leverage Simulated Environments

One of NASA’s most impressive software development feats is its ability to simulate incredibly complex environments through the Jon McBride Software Testing and Research (JSTAR) program.

JSTAR, a critical component of NASA’s Independent Verification and Validation (IV&V) Program, focuses on creating high-fidelity simulations that mirror the exact conditions spacecraft will encounter.

This includes everything from the vacuum of space, microgravity, and radiation exposure to the precise conditions of planetary atmospheres.

These simulations are not just basic models; they are dynamic, adaptive environments that can replicate the intricate physics and environmental conditions that spacecraft will face during a mission.

For instance, the simulation of re-entry into Earth’s atmosphere requires careful modelling of extreme heat, aerodynamic forces, and communication blackouts—all scenarios that can introduce significant software challenges.

In addition to testing under normal conditions, JSTAR’s simulations are designed to stress software systems under abnormal conditions that might occur during a mission.

For example, how does the software react to unexpected power surges, sensor failures, or unexpected physical forces? By pushing the software to its limits, NASA can identify weaknesses that may not be apparent in more straightforward testing scenarios.

While the stakes for enterprise systems might not be as high as those for space exploration, the principle of using detailed simulations still holds great value.

Enterprises can use virtual environments, network virtualisation, and chaos engineering to replicate user behaviour, network conditions, and hardware variations and failures, ensuring their software behaves reliably across a wide range of scenarios. By integrating these techniques into their testing regimes, businesses can anticipate and mitigate issues that would be difficult, costly, or risky to uncover in live production environments.
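
To make the chaos-engineering idea concrete, here is a minimal Python sketch. It is not NASA's tooling, and every name in it (FlakyNetwork, fetch_orders_with_retry, StubClient) is invented for illustration. It injects reproducible connection faults into a test and checks that the system under test rides them out:

import random

class FlakyNetwork:
    """Wraps a real client and injects simulated connection failures."""

    def __init__(self, client, failure_rate=0.5, seed=42):
        self.client = client
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seeded, so injected faults replay exactly

    def get(self, path):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError(f"injected fault on GET {path}")
        return self.client.get(path)

def fetch_orders_with_retry(client, retries=3):
    """The behaviour under test: tolerate transient network faults."""
    last_error = None
    for _ in range(retries):
        try:
            return client.get("/orders")
        except ConnectionError as err:
            last_error = err
    raise last_error

class StubClient:
    """Stands in for the real backend during the test."""
    def get(self, path):
        return ["order-1", "order-2"]

def test_survives_transient_network_faults():
    flaky = FlakyNetwork(StubClient(), failure_rate=0.5)
    assert fetch_orders_with_retry(flaky) == ["order-1", "order-2"]

Because the fault injector is seeded, any failure it uncovers in CI can be replayed exactly during debugging, which matters just as much in an enterprise pipeline as it does in a spacecraft simulation.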

2. Early Defect Detection

NASA’s Software Formal Inspections Standard (NASA-STD-8739.9) is a cornerstone of its early defect detection approach.

This rigorous standard mandates a thorough and systematic inspection process to catch defects at the earliest stages of the software development life cycle.

The key idea (which applies to all software development) is that defects identified during requirements analysis, design, or early coding are exponentially cheaper and easier to fix than those found later in the development process or, worse, during a mission. Everyone has heard this, but how many organisations truly live and breathe it? Those that do find fewer defects late; those that don't face the inevitable rush to fix and retest, with all the extra cost that involves.

This standard outlines a structured inspection methodology, including the roles of inspectors, the processes they should follow, and the types of defects they should be looking for.

For example, every piece of code is reviewed by a multidisciplinary team that includes software engineers, system engineers, and domain experts. This ensures that the software meets the technical specifications and the mission's operational requirements.

One of the most critical aspects of this process is the emphasis on checklists.

NASA uses comprehensive checklists during inspections to ensure that every possible type of defect is considered—ranging from simple syntax errors to complex logic flaws or mismatches between software and hardware interfaces. This disciplined approach leaves little room for oversight.

Adopting a similar approach can dramatically improve software quality in enterprise environments. Companies can implement formal code reviews, use detailed checklists, and involve cross-functional teams to catch defects early.

Automated tools can complement this process by performing static code analysis and identifying potential issues before they reach production.
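
As a hedged sketch of what that might look like (the two checks below are examples I've chosen, not rules taken from NASA-STD-8739.9), a team could automate part of its inspection checklist with Python's standard ast module and run it as a gate before human review:

import ast
import sys

def check_file(path):
    """Flag two example checklist items in a Python source file."""
    findings = []
    with open(path) as handle:
        tree = ast.parse(handle.read(), filename=path)
    for node in ast.walk(tree):
        # Checklist item: no bare 'except:' clauses that swallow every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((node.lineno, "bare 'except:' hides failures"))
        # Checklist item: every eval() call needs an explicit justification.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append((node.lineno, "unjustified eval() call"))
    return findings

if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:]:
        for lineno, message in check_file(path):
            print(f"{path}:{lineno}: {message}")
            failed = True
    sys.exit(1 if failed else 0)

Run over a codebase before review, a script like this turns routine checklist items into an automatic gate, leaving human inspectors free to focus on logic flaws and interface mismatches.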

NASA has demonstrated that early defect detection not only reduces the risk of catastrophic failure but also saves time and money in the long run.

3. Clarity of Objectives

At NASA, the clarity and precision of software requirements are paramount.

Given their missions’ complexity and critical nature, every requirement must be meticulously detailed to ensure that the final software product functions exactly as intended.

This is particularly important when multiple teams—often spread across different locations—are involved in the development process. Any ambiguity in requirements can lead to misinterpretations that could have dire consequences.

To achieve this, NASA employs a rigorous requirements engineering process. This begins with gathering requirements from all stakeholders, including engineers, scientists, and mission planners, and translating them into precise, unambiguous specifications.

These requirements are often documented in the form of use cases, flowcharts, and formal specifications that describe in precise detail what the software must do, how it will interact with other systems, and what constraints must be adhered to.

Moreover, NASA utilises traceability matrices to ensure that every requirement is accounted for throughout development.

These matrices link each requirement to corresponding design elements, code modules, and test cases, providing a clear map that guides the entire project. This not only helps in verifying that all requirements are met but also assists in managing changes.

If a requirement changes, the impact on other parts of the system can be quickly assessed and addressed.

In enterprise software development, writing clear and detailed requirements, as NASA does, can reduce the risk of miscommunication and ensure that the software meets the needs of all stakeholders.

Implementing traceability matrices in tools like Quality Center and ValueEdge Quality/Octane can also help on complex projects, ensuring that every requirement is traced from conception through to delivery.
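
To show how small the core idea is, here is a toy Python sketch of a traceability matrix. All the requirement, design, and test IDs are invented, and a real tool such as Quality Center maintains these links at far greater scale:

# A toy traceability matrix: each requirement links to the design elements
# and test cases that cover it. All IDs below are invented for illustration.
traceability = {
    "REQ-001 Cut engines on touchdown signal": {
        "design": ["DES-ENG-04"],
        "tests": ["TC-101", "TC-102"],
    },
    "REQ-002 Retry telemetry uplink after loss of signal": {
        "design": ["DES-COM-02"],
        "tests": [],  # a coverage gap the check below should catch
    },
}

def coverage_gaps(matrix):
    """Return requirements missing a design link or any test coverage."""
    return [req for req, links in matrix.items()
            if not links["design"] or not links["tests"]]

def impact_of_change(matrix, design_id):
    """List the requirements and tests touched when a design element changes."""
    return {req: links["tests"] for req, links in matrix.items()
            if design_id in links["design"]}

print("Uncovered requirements:", coverage_gaps(traceability))
print("Impact of changing DES-ENG-04:", impact_of_change(traceability, "DES-ENG-04"))

The same structure answers both questions raised above: which requirements lack design or test coverage, and which tests must be rerun when a design element changes.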

4. Scenario-Based Testing

Scenario-based testing is a critical part of NASA’s software verification strategy, and tools like Kontest play a central role in this process.

Kontest simulates a wide array of operational scenarios, allowing NASA engineers to observe how software behaves under different conditions that the spacecraft might encounter during a mission. These scenarios can range from nominal (expected) conditions to off-nominal (unexpected or adverse) situations.

For example, NASA might use Kontest to simulate the software’s response to losing communication with Earth, encountering unexpected debris, or dealing with a sudden malfunction in onboard systems.

Each scenario is designed to push the software to its limits, revealing how it performs under stress and whether it can recover gracefully from unexpected events.

The power of scenario-based testing lies in its ability to expose flaws that might not be apparent in more straightforward test cases. By thinking through and testing against various potential situations, NASA can ensure that its software is functional, robust, and resilient.

In the enterprise world, scenario-based testing can be equally valuable.

Complex systems often operate in diverse and unpredictable environments, interacting with other systems and users in difficult-to-anticipate ways.

Companies can use techniques like performance testing and chaos engineering to simulate different operational conditions—such as peak loads and hardware or network failures—to ensure their software can handle real-world challenges.
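
As a lightweight sketch of the pattern, assuming pytest is available (the scenario table and the place_order function are hypothetical), one test can be driven through nominal and off-nominal conditions alike:

import pytest

# Nominal and off-nominal scenarios, in the spirit of NASA's approach: each
# row names a condition the system might meet in production, plus the
# behaviour we expect from it.
SCENARIOS = [
    ("nominal load", {"users": 10, "network": "ok"}, "accepted"),
    ("peak load", {"users": 10_000, "network": "ok"}, "accepted"),
    ("network partition", {"users": 10, "network": "down"}, "degraded"),
    ("slow dependency", {"users": 10, "network": "slow"}, "accepted"),
]

def place_order(conditions):
    """Hypothetical system under test: accept the order, or degrade
    gracefully instead of crashing when the network is unavailable."""
    if conditions["network"] == "down":
        return "degraded"
    return "accepted"

@pytest.mark.parametrize("name,conditions,expected", SCENARIOS)
def test_order_flow_under_scenario(name, conditions, expected):
    assert place_order(conditions) == expected, f"failed scenario: {name}"

Adding a new scenario becomes a one-line change, which keeps the team asking NASA's question: what else could happen out there?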

Bringing NASA’s Insights Back to Earth

While the average enterprise might not launch rockets or travel to other planets, the principles guiding NASA’s software testing can still provide precious insights, just as Formula 1 cars have informed how road cars are built.

By adopting simulated environments, prioritising early defect detection, clarifying requirements, and leveraging scenario-based testing, enterprise-level testers can significantly improve the reliability and quality of their software systems.

The next time you’re faced with a critical testing decision, consider what NASA might do. After all, if these practices are good enough for missions beyond our planet, they’re worth considering for the software systems that drive our businesses.

Are Your Tools Holding You Back?

Implementing new processes can be challenging without the right tools to support them. Maybe you're still trying to drive your testing efforts with Excel or Word, or you're relying on open-source or other tools with limited flexibility.

Fortunately, there are great tools out there for any budget: tools that are easy to deploy, quick to get going, and robust enough for enterprise users.

Are you interested in improving your testing processes but lack the software you need to implement them? Contact Calleo today, and we can help you find the right tool at the right price.

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is to help test professionals improve the efficiency and effectiveness of software testing.

To view Stephen's LinkedIn profile and connect: Stephen Davis LinkedIn profile




To get more software testing insights like this direct to your inbox, join the Calleo mailing list.

You can, of course, unsubscribe at any time!

By signing up you consent to receiving regular emails from Calleo with updates, tips and ideas on software testing along with the occasional promotion for software testing products. You can, of course, unsubscribe at any time. Click here for the privacy policy.
