Test automation brings many benefits with it, such as faster execution and reduced risk. But these benefits may be diminished if the tests aren’t performing as intended. There are several reasons why tests become unstable, most of which you can turn around by following these best practices and guidelines.
Following best practices will get you a long way, but if there are external factors that aren’t behaving as expected, there’s a good chance you’ll end up with unstable tests that fail left and right, even though the application under test is actually performing as it should.
Some of the factors that can impact the behavior of the application under test are:
The question then becomes how you can build tests that are stable, despite the presence of the above-listed challenges.
Here are a few tips and tricks that you can follow to solve some of these challenges, in the order that they are most likely to occur.
When building your flow, the first thing you want to factor into the equation is timing.
Timing can make or break your test flows, but by following a few simple steps as you set up your tests, you can cross this bridge early and minimize the likelihood that timing will be the source of your problems.
Start by running your test locally and identify the places where tests are failing due to timeouts. Increase timeouts within the failing blocks first, as this helps ensure stability in case your network or test environment is running slowly. If the test still fails there, consider adjusting the timeout further, waiting for pending requests to complete, or configuring the proxy.
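The timeout handling described above can be sketched as a generic polling wait. This is an illustrative helper, not tied to any particular tool: the function name, defaults, and the idea of raising on timeout are all assumptions made for the example.

```python
import time

def wait_until(condition, timeout=30.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass.

    Raising instead of silently returning makes a slow environment fail loudly,
    so the timeout (rather than a later, misleading assertion) shows up in the
    test report.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"Condition not met within {timeout} seconds")
```

A failing block would then wrap its check in a generous wait, for example `wait_until(lambda: element_is_visible(), timeout=60)` (where `element_is_visible` stands in for whatever check your tool provides), instead of asserting immediately.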
Next, schedule the same test to be executed 5 to 10 times in a row on your agent and analyze the results. If your tests are still breaking, you should look beyond timing and reconsider the logic of the flow as well as any performance issues in the environment that could have occurred at the time of the test failure.
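The repeated-run analysis can be sketched like this; the function names are illustrative, and the "flaky" test is simulated purely so the example is self-contained:

```python
import random

def analyze_stability(run_test, runs=10):
    """Run a test repeatedly and report how often it fails.

    `run_test` is any callable returning True on pass, False on fail.
    """
    failures = sum(1 for _ in range(runs) if not run_test())
    return {"runs": runs, "failures": failures, "fail_rate": failures / runs}

# Simulated flaky test for illustration: fails roughly 30% of the time.
random.seed(42)
flaky_test = lambda: random.random() > 0.3
report = analyze_stability(flaky_test, runs=10)
```

A non-zero fail rate here is the cue from the text: look beyond timing, at the flow's logic and at environment performance around the failure times.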
Once you’ve identified and fixed the above issues where possible, run your tests on schedule again. Ideally, you should keep running your tests, analyzing them, and fixing any errors within them, until you see no failed tests.
If you don't reach this point within a reasonable time frame, go back to the list above and reconsider the external factors until you identify the error. Then repeat your scheduled test runs until all tests pass 10 times in a row. At this point, you’re ready for production.
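The iterate-until-stable loop above can be sketched as follows. The threshold of 10 consecutive clean runs mirrors the text; everything else (names, the attempt cap) is an illustrative assumption:

```python
def ready_for_production(run_suite, required_clean_runs=10, max_attempts=100):
    """Return True once the suite passes `required_clean_runs` times in a row.

    `run_suite` is a callable that executes one full scheduled run and returns
    True only if every test passed. A single failure resets the streak, which
    is the point: production-readiness means sustained stability, not one
    lucky pass.
    """
    streak = 0
    for _ in range(max_attempts):
        streak = streak + 1 if run_suite() else 0
        if streak >= required_clean_runs:
            return True
    return False  # still unstable after max_attempts; revisit the checklist
```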
Even when running the tests 200 times, you should still expect to see a fail rate of about 1%, as it’s natural to see some failures that can be attributed to external conditions.
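As a sanity check on that number: a 1% baseline over 200 runs averages out to about two external-cause failures, so a handful of failures is unremarkable while dozens would signal a real problem. A small helper can make that threshold explicit; the slack factor and defaults here are illustrative assumptions, not a standard formula:

```python
def acceptable_failures(runs, baseline_fail_rate=0.01, slack=2.0):
    """Rough upper bound on failures attributable to external conditions.

    `slack` widens the bound so ordinary variation around the baseline
    doesn't trigger false alarms; both defaults are illustrative.
    """
    expected = runs * baseline_fail_rate  # e.g. 200 runs * 0.01 = 2 failures
    return int(expected * slack) + 1

# 200 runs at a ~1% baseline: expect ~2 failures, tolerate up to 5.
limit = acceptable_failures(200)
```

Failure counts above this bound are a cue to start investigating, rather than writing the failures off as external noise.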
Although errors may occur as a result of external conditions that are outside your control, that doesn't mean you cannot do anything to minimize their impact.
Here are a few ways to troubleshoot, as well as some best practices that you can follow to lower the risk of failing tests:
Last but not least, you should always check that the reason for the failed test is not, in fact, the application under test itself - this is, after all, the whole point of test automation.
Any such issues should be reported to the development team, continuously if possible, so that bugs can be fixed before anything is moved into production.
To learn more about best practices for test automation, read our guide to reducing risk, lowering costs, and driving value with test automation, and sign up for our webinar on successful test automation strategy to learn how you can build stable, maintainable, and scalable test automation that is both result-oriented and cost-effective.