
Why Good Environments are Crucial for Successful Test Automation

Kasper Fehrend


Once a test automation strategy has been approved and implementation has begun, there will come a point where testers ask: “Can we trust the results generated from automated testing?”

Related reading: best practices for analyzing test automation results

A golden rule in testing is that the quality of your tests is no better than the quality of your test environments. This is even more true for automated testing. So why are good environments crucial for successful test automation?

In test automation, the subjective, cognitive analysis of a human tester only takes place during the design and evaluation of test cases, not during execution. Robots will simply execute as instructed even if there are obvious issues present.

So, to give the robots, or test execution agents, the best possible conditions to work in, you need to set up test environments that are robust and predictable. However, this is not a simple task:

  • How do you determine if the system under test is actually testable – especially if the test environment is dependent on external systems?
  • If there are system failures, how do you re-provision the test environment, i.e. make it ready for testing again?
  • How do you manage test data, including clean-up and baselining?

Three factors to consider when setting up test environments

When it comes to test environments, there are three main aspects to consider, each of which is covered in more detail in this post.

  1. Which architecture will make an application testable?
  2. How do the test environments fit into the DevOps pipeline?
  3. How to manage test data?

1. Make the system testable with the right architecture

Almost every application has connections to third-party systems, which makes it challenging to test. If a third-party service has issues, the application might fail, raising the question: is the application testable at all? If not, don’t waste time testing it.

There are two ways to deal with the issue of third-party service dependence:

Testing an application independent of third-party services

You can encapsulate an application from a testing perspective by using a service-oriented architecture (SOA). SOA is based on loose couplings between the individual systems, typically based on REST APIs or similar. This makes it possible to intervene in the communication between parties for testing purposes.

The SOA approach provides you with at least two ways of making testing independent of the third party: mocking data inside the application and using a proxy service.

Mocking data inside the application

Relying on mock data inside the application under test means that any calls from the application to a third-party service are intercepted by the application itself.

Instead of transferring data to the third-party service, the application “returns data to itself”, making it testable in isolation.


Figure 1: Encapsulating an application for testing purposes by using mock data
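As a minimal sketch of this pattern (the PaymentGateway client, the APP_TEST_MODE flag, and the response shape are all illustrative assumptions, not taken from the article), in-application mocking could look like this:

```python
import os

class PaymentGateway:
    """Client for a hypothetical third-party payment service."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        # When the test flag is set, the application "returns data
        # to itself" instead of calling the real third-party service.
        self.test_mode = os.getenv("APP_TEST_MODE") == "1"

    def charge(self, amount_cents: int, card_token: str) -> dict:
        if self.test_mode:
            # Canned response with the same shape as the real service's
            # reply, so the rest of the application is unaware of the mock.
            return {"status": "approved", "amount": amount_cents, "id": "test-0001"}
        # Real call to the third-party service (requires the `requests` package).
        import requests
        response = requests.post(
            f"{self.base_url}/charges",
            json={"amount": amount_cents, "card_token": card_token},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()
```

The key design choice is that the mock lives behind the same method signature as the real call, so no test-specific branches leak into the rest of the application.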

Using a proxy service

Alternatively, you can set up a proxy service that will serve as an intermediary between the application under test and the third-party service.

The interface of the proxy will resemble that of the third-party service, so from the application's perspective, interacting with the proxy is indistinguishable from interacting with the third-party service.

The proxy service can either store and return data or work as a “real” proxy and transfer data between the application and the third-party service. Most importantly, the proxy can handle situations where the third-party service is unavailable.


Figure 2: Encapsulating an application for testing purposes by using a proxy service.
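A minimal sketch of such a proxy, using only Python's standard library; the upstream URL and the stub payload are illustrative assumptions:

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://third-party.example.com"     # assumed upstream URL
STUB = {"status": "ok", "source": "proxy-stub"}  # canned fallback data

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            # Work as a "real" proxy: pass the request through.
            with urllib.request.urlopen(UPSTREAM + self.path, timeout=5) as r:
                body, status = r.read(), r.status
        except OSError:
            # Third-party service unavailable: answer from the stub
            # so the application under test stays testable.
            body, status = json.dumps(STUB).encode(), 200
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
```

The application under test would simply be configured to point at the proxy's address instead of the real service.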

Ensuring that the third-party service is available

In this scenario, the question “Is the system testable?” is answered by simply using the system. For example, imagine a website that integrates with several web services to generate a specific web page and/or functionality.

To tell whether the website can run as expected, do an online check of the services involved. An easy way to do this is to set up a web page whose sole purpose is to call all relevant external services and verify that they respond as expected. This gives a good indication of whether the website is testable or not.


Figure 3: An example from Microsoft Azure of a web page reporting on the status of relevant web services. To test if the services are responsive, you could build a single automated test case for this specific page, testing whether all systems report green lights, in this case, represented visually by checkmarks.
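A simple script can perform the same check programmatically before an automated run is kicked off. This sketch assumes hypothetical health-check URLs:

```python
# Testability check: verify that every external dependency responds
# before running the test suite. The service list is illustrative.
import urllib.request

SERVICES = {
    "payments": "https://payments.example.com/health",
    "search": "https://search.example.com/health",
}

def system_is_testable(timeout: float = 5.0) -> bool:
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                if response.status != 200:
                    print(f"{name}: unexpected status {response.status}")
                    return False
        except OSError as exc:
            print(f"{name}: unreachable ({exc})")
            return False
    return True

if __name__ == "__main__":
    # Skip (or fail fast) the automated run if a dependency is down.
    print("testable" if system_is_testable() else "not testable")
```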

2. Make test environments fit with the DevOps pipeline

DevOps, with its focus on mitigating risks before releasing to production, relies heavily on test automation. To get the most relevant feedback from automated test cases, and thereby reduce risk, it is essential that test environments resemble the production environment as closely as possible. For example, if a production environment utilizes SSL offload on the load balancer, then the test environment should have the same setup.

The list below covers some of the components to consider when designing your test environment.

  • Load balancer: Session state. Most systems use a load balancer to distribute load across front-end servers. This means that the state of a user session is stored in a global repository, which can be a source of errors if left untested.

  • Load balancer: Deployment scenarios. Deploying to multiple servers behind a load balancer makes it possible to deploy and release without outages. This must be practiced repeatedly in the test environment to reduce risk.

  • Load balancer: SSL offload. To relieve front-end servers of decrypting all incoming traffic, decryption can be handled by the load balancer instead. Traffic from the client to the load balancer is then encrypted, while traffic from the load balancer to the application servers is not. This setup should be replicated in the test environment to reduce risk.

  • Deployment: Install scripts. Reuse the exact deployment mechanism from production in the test environments. For example, reuse the same scripts (PowerShell, Bash, etc.) and let only the environment configuration change between deployments to different environments.

  • Deployment: Build and configurations. Keep configurations external to the builds. The build deployed to the test environment should be the same build that is deployed to production; only the configuration should change (see the sketch after this list).

  • Setup: Service accounts. Use the same structure for granting access to the system in both production and test environments. Assign service accounts where possible to keep human intervention to an absolute minimum and to keep setups similar across environments.

  • Setup: Test environment installation. If possible, use the same script to set up both test and production machines from scratch. If the application runs on virtual machines, these scripts run on a “vanilla” (default) installation and make the machines ready for the application to be installed. In other words, this is the base configuration of the machines, independent of the actual application.
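To illustrate the “build and configurations” point above, here is a minimal sketch of configuration kept external to the build; the APP_ENV variable and the file paths are illustrative assumptions:

```python
# The same build artifact runs in every environment; only an external
# configuration file differs between test and production.
import json
import os

def load_config() -> dict:
    # APP_ENV selects the configuration, e.g. "test" or "production".
    env = os.getenv("APP_ENV", "test")
    with open(f"/etc/myapp/config.{env}.json") as f:
        return json.load(f)

config = load_config()
# e.g. config["database_url"], config["ssl_offload_enabled"], ...
```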

A common issue when discussing test environments in relation to DevOps is whether to generate test environments on the fly or to rely on more permanent test environments spanning multiple releases.

Creating temporary, custom-built environments (“snowflake servers”) on demand has the advantage that you don’t have to keep test environments running when they are not in use.

However, as argued by Martin Fowler, snowflake servers are difficult to reproduce and can’t easily mirror your production environment for testing purposes. What’s more, once test automation is introduced, the test environment’s periods of inactivity should shrink significantly.

For this reason, we recommend permanent environments over snowflake servers unless this would come with significant disadvantages specific to your organization.

3. Managing test data

The value of software testing is highly dependent on the quality of test data. Especially in test automation, the data used needs a certain level of predictability for deterministic test cases to run successfully.

There are several ways to ensure that test data meets the desired standard:

Baselining from production data

If the system under test allows it, baselining data from production into the test environment ensures that the data is up to date, relevant, and reflects real usage patterns. Furthermore, it allows testers to reproduce production errors in the test environment.

Not all systems allow for baselining this way, especially if the application is part of a bigger, interlinked setup. In these cases, baselining is typically done at regular intervals, e.g. months apart, and testing efforts are usually affected during the baselining period.

If an application does allow for baselining directly from production, this should be part of the DevOps setup. This is done by creating a data backup from production and importing it to the test environment, preferably daily.

Make sure you have a plan for masking production data so that you don’t violate any privacy legislation related to personal data.

Also consider whether data should be manipulated during the import, e.g. to include certain data variants better suited for testing. In other words, determine whether any data records need to be created or changed as part of the import to optimize the application for testing.
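As a minimal sketch of such masking (the customer record and its column names are made up for illustration), a deterministic hash keeps masked values consistent across imports:

```python
# Mask personal data during a production-to-test import.
import hashlib

def mask_email(email: str) -> str:
    # Deterministic masking preserves referential integrity: the same
    # production address always maps to the same test address.
    digest = hashlib.sha256(email.encode()).hexdigest()[:12]
    return f"user-{digest}@example.test"

def mask_row(row: dict) -> dict:
    masked = dict(row)
    masked["email"] = mask_email(row["email"])
    masked["name"] = "Test User"     # drop real names entirely
    masked["phone"] = "+1-555-0100"  # fixed dummy value
    return masked

# Applied to every record as part of the (e.g. nightly) import job:
production_rows = [{"email": "jane@corp.example", "name": "Jane", "phone": "+45 1234"}]
test_rows = [mask_row(r) for r in production_rows]
```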

Bootstrapping the database

Another way to make sure that data is predictable is to bootstrap the data repository. This can be done in several ways, e.g. by running scripts that overwrite the entire data repository or inject data on a schedule. An alternative approach is to restore the same database backup, e.g. daily, to ensure the system is testable.
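A minimal bootstrapping sketch, using SQLite and a made-up schema for illustration:

```python
# Wipe and re-seed a database so every run starts from a known baseline.
import sqlite3

SEED_CUSTOMERS = [("C-001", "Test User A"), ("C-002", "Test User B")]

def bootstrap(db_path: str = "test.db") -> None:
    conn = sqlite3.connect(db_path)
    with conn:
        # Overwrite the entire repository with the baseline state.
        conn.execute("DROP TABLE IF EXISTS customers")
        conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO customers VALUES (?, ?)", SEED_CUSTOMERS)
    conn.close()

if __name__ == "__main__":
    bootstrap()  # e.g. run as a nightly scheduled task
```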

Creating self-contained test cases

An easy and popular way to make sure that the correct data is always present is to include the creation of data as part of the test case. This way, the entire life cycle of the data can be controlled within the individual test case. The deletion and clean-up of data can also be part of the test case, or alternatively handled by a scheduled task.
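A minimal sketch of a self-contained test case using pytest; the in-memory store is an illustrative stand-in for the system under test:

```python
import pytest

STORE: dict[str, dict] = {}  # stand-in for the application's data repository

@pytest.fixture
def order():
    # Setup: create the record this test depends on.
    STORE["order-1"] = {"sku": "TEST-SKU", "quantity": 1}
    yield STORE["order-1"]
    # Teardown: delete it so the next run starts from a clean state.
    STORE.pop("order-1", None)

def test_order_quantity(order):
    assert STORE["order-1"]["quantity"] == 1
```

Because setup and teardown live in the fixture, the test owns its data from creation to clean-up and never depends on what a previous run left behind.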

All three scenarios of test data management are summarized in the figure below.


Figure 4: Data management in a test automation environment.

Conclusion

This article has presented some of the steps you can take to best prepare test environments for successful test automation. The primary requirements are:

  • Make the application in question testable by dealing with third-party system dependency. This can be done in one of two ways:
    • Encapsulate the application for testing purposes, either by relying on mock data inside the application or by using a proxy service.
    • Simply set up automated monitoring of e.g. a web page calling all relevant third-party services.
  • Make test environments fit your DevOps pipeline. This includes considerations related to load balancer, deployment, service accounts, environment installation, and more.
  • Decide how to manage test data. There are several ways to ensure that test data has a certain quality, including:
    • Baselining from production data
    • Bootstrapping the database
    • Creating self-contained test cases that generate their own data

To learn more about automated testing, and about the importance of good environments, check out the Leapwork whitepaper on finding the right agile test automation tool for reducing risk, lowering costs, and driving value with test automation.
