I'm looking for a tool for test automation! It must be robust, reliable, and easy to maintain. Our testers, who are not coders, should be able to use it on their own. Can you help?

- Project manager in a pinch

 

A few years ago, the request above landed in our inbox at LEAPWORK. It came from someone looking for a better way to automate the testing of a large-scale software application. If this challenge sounds familiar, you have come to the right place.

We have written this guide because we see too many test automation projects fail. Very often this happens because of pitfalls that, with the right approach, are easy to avoid.

This requires a new way of thinking about and working with test automation. And that’s what this guide offers. 

Why Test Automation?

Today, digital transformation affects businesses in every market. Either they are driving it or being driven by it.

Any industry is at risk of disruption and market positions are at stake. As new business models emerge, and customer demands keep increasing, enterprises everywhere struggle to stay relevant. They must change the way they do business.

Technology is used in new and ever-more complex ways to drive value, ranging from automation of enterprise systems to cloud-based commerce and cross-channel user experiences.

Quality Assurance (QA) and testing are at the very core of digital transformation. They are key processes in software development, with good reason. Software defects, or bugs, are extremely costly to fix after release compared to catching them early.

Quality Assurance (QA) and testing are at the very core of digital transformation.

Proactive bug fixing requires testing and re-testing with each small change made to the underlying code. As software evolves, the need for repeated testing mounts.

And testing is costly. In fact, quality assurance and testing account for one fourth of total IT spending. By 2020, it is expected that this budget allocation will rise to 32%. QA teams point to increased inefficiency of test activities as a major factor contributing to rising testing cost.

Factors impacting the spike in QA and testing budgets

Figure 1: The factors impacting the spike in QA and testing budgets.

 

These are well-known facts in development teams around the world. What is less common is the acknowledgement that software testing is not only costly - it is also very difficult. Testing requires understanding product owners' and other stakeholders' expectations. Knowing the inherent limitations of the product and its environment is also a critical part of the tester profession.

For instance, if you are testing a video conference application, it is important to know that a regular voice call can interrupt the app when used on a phone, and that the app is highly dependent on a stable internet connection.

The regression issue: As features to be tested accumulate, and with a fixed amount of testing resources available, testers are forced to make compromises.

Figure 2: The common regression testing issue

 

Testers must also anticipate end-user behavior and balance that against requirements. This involves creating strategies for covering as much of the product as possible under time and budget constraints. Covering every single imaginable usage scenario, especially as the software keeps changing, is not feasible. Testers are constantly balancing the need for both exploratory testing and repeatable checklists. In fact, it is not uncommon for only about 15% of a product’s total capabilities to be covered in testing.

What's more, testers meet pressure from everywhere around them. Product owners want faster releases, developers want continuous delivery, and end-users want flawless new product features.

Product owners want faster releases, developers want continuous delivery, and end-users want flawless new product features.

The Cost of Doing Nothing

Consider the following scenario. A vendor of software solutions to the financial sector wants to release updates to a web application multiple times per year. Before each release, the product must go through careful testing.

The test team, consisting of three full-time testers, has identified and mapped out 300 test cases. 250 of these test cases are regression tests – re-testing existing functionalities. They are very structured, repetitive, and predictable - in other words, ideal for automation. The remaining 50 tests are more loosely defined and exploratory in nature and will continue to be performed manually.

Designing, planning, and executing the 250 test cases is extremely time consuming when done manually. On average, each test case requires one hour of manual testing effort, amounting to approximately six person-weeks of testing - and that is before counting the 50 exploratory tests.

To meet a minimal coverage for a single release, the team of three testers must spend two full weeks designing and running their tests.

The company has six releases planned for the year ahead – one release every second month. This would require a total of 36 weeks of testing.
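As a back-of-the-envelope check, the figures above work out roughly as follows (assuming a 40-hour work week; the slight difference from the 36 weeks cited comes from rounding six person-weeks per release):

```python
# Back-of-the-envelope estimate of the manual regression workload,
# using the figures from the scenario above and a 40-hour work week.
test_cases = 250          # regression test cases per release
hours_per_case = 1        # average manual effort per case
testers = 3               # full-time testers
releases_per_year = 6

hours_per_release = test_cases * hours_per_case               # 250 hours
person_weeks_per_release = hours_per_release / 40             # 6.25 person-weeks
calendar_weeks_per_release = person_weeks_per_release / testers  # ~2.1 weeks for the team

person_weeks_per_year = person_weeks_per_release * releases_per_year  # ~37.5

print(hours_per_release, person_weeks_per_release,
      round(calendar_weeks_per_release, 1), person_weeks_per_year)
```

In other words, roughly six person-weeks per release, or two calendar weeks for the three-person team - and close to 36 person-weeks across six annual releases.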

The 250 test cases for regression testing are the bare minimum required for an acceptable product quality. QA and product managers know they need more thorough testing to increase product quality and stay competitive. But there’s no budget to hire more testers.

To meet a minimal coverage for a single release, the team of three testers must spend two full weeks designing and running their tests.

With the current resources at their disposal, they have settled on a minimal level of sanity testing. This leaves the test team spending most of their time on regression testing, or product maintenance - at the expense of exploratory testing, i.e. product improvement.

Furthermore, there are at least four factors contributing to rising testing costs.

4 reasons why testing costs grow out of control:

  1. The demand for testing across devices, systems, and platforms is increasing. Extending the testing scope from one to two web browsers – or to include a single mobile device – would double the workload.
  2. The number of test cases accumulates. With each product update, the number of test cases required to cover more functionality grows by at least 10%. New functionality impacts existing features, which then require re-testing; the common regression testing issue.
  3. The release pipeline matures. Instead of doing regression testing just once, the team wants to run tests at several stages of the release pipeline. This helps provide developers with feedback as fast as possible.
  4. Management wants to increase the number of releases. To maintain market position with their state-of-the-art product, the company wants to do more product releases per year. This means they must speed up release cycles.

These four factors combined create an unsustainable situation. Either the QA team must grow, or testers' productivity must be accelerated. Alternatively, the company must compromise on their product quality and consequently lose market share.
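To see why the workload compounds rather than grows linearly, consider the 10% growth per update from point 2 alone - a simplified model that ignores the multiplying effect of the other three factors:

```python
# Simplified model: the test suite grows ~10% with each product update.
# Additional browsers, devices, and pipeline stages multiply on top of this.
cases = 250
for release in range(6):   # six releases in one year
    cases *= 1.10
print(round(cases))        # ~443 cases after one year, up from 250
```

After a single year, the regression suite has grown by almost 80% - before a single extra browser or device has been added to the scope.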

Testing workload grows exponentially over time.

Figure 3: Testing workload over time as scope and targets change. 

Either the QA team must grow, or testers' productivity must be accelerated. Alternatively, the company must compromise on their product quality and consequently lose market share.

Reduce Risk, Lower Costs, and Increase Execution

Automating as much test work as possible is the obvious solution. Fewer human errors, more coverage with fewer resources, and the ability to scale repetitive tasks are some of the gains of automation.

Read more: ”The Benefits of Test Automation”

When asked to list the reasons for process automation, including tests, businesses point to three drivers: To reduce risk, to lower cost, and to increase execution.

The Key Drivers for Test Automation

 

These drivers for automation manifest themselves differently in organizations:

  • An insurance company is threatened by increasing pressure from competitors and sees that the market is not responding well to its online self-service application. To support better and more frequent releases of this web service, the company’s Digital Team implements agile testing. (Increased execution.)
  • A software provider is spreading their tester resources very thin and must rely on developers to help test during critical sprints. To avoid allocating costly and valuable developer hours to testing, the company automates regression testing. (Lower costs, increased execution.)
  • A health care provider has experienced highly critical outages of clinical systems. To increase patient safety, the organization’s IT department automates monitoring of clinical-critical systems. (Reduced risks.)

 

How to achieve automation success

Meet LEAPWORK users

What is Test Automation?

Test automation is the use of software (separate from the software under test) to control the execution of tests. It involves a comparison of actual outcomes with predicted outcomes.
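Stripped to its essentials, that comparison of actual against predicted outcomes can be sketched in a few lines of Python. The `login` function here is a hypothetical stand-in for the software under test:

```python
# Minimal sketch of test automation: drive the software under test,
# then compare the actual outcome with the predicted one.

def login(email, password):
    # Hypothetical stand-in for the system under test.
    return "Welcome, Mary" if password == "secret" else "Invalid credentials"

def run_test(name, actual, expected):
    # The essence of automated testing: actual vs. predicted outcome.
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {name}")
    return status == "PASS"

run_test("valid login", login("mary@example.com", "secret"), "Welcome, Mary")
run_test("wrong password", login("mary@example.com", "oops"), "Invalid credentials")
```

Everything that follows in this guide - frameworks, flowcharts, pipelines - is ultimately machinery around this simple comparison.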

Of course, test automation doesn’t mean completely automated testing. It’s about relieving testers from repetitive, time-consuming tasks.

This way they can use their skill set to design test cases that increase testing coverage - both quantitatively and qualitatively.

There are different ways of doing test automation. Before we dig into the most significant ones, as seen from the end-users’ perspective, it’s worth understanding that:

  1. there are different methods for testing, and;
  2. some types of test activities are better suited for automation than others.

There are two main methods used in testing:

Black box testing. This means testing an application without knowing its internal workings or implementation. As the name implies, the application is like a black box to the tester, who supplies input and examines output.

White box testing (or “glass box testing”) is the opposite. The tester knows the internal workings and implementation details (code, configurations, etc.).  Relying on this knowledge, the tester might test certain paths through the code.

Using one or both methods, testing is done at various stages of the software development life cycle to find bugs. There are several different types of software testing:

Unit testing means testing individual units of code or groups of units in a piece of software. Unit testing is only possible using the white box method because unit tests are written in code and are closely tied to software development work.
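As an illustration - a minimal, made-up example not tied to any particular product - a unit test written with Python's built-in `unittest` module exercises a single function in isolation:

```python
import unittest

def apply_discount(total, percent):
    """Unit under test: apply a percentage discount to an order total."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

Tests like these are run with `python -m unittest`, typically as part of every build - which is why they belong to the developers who write the code.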

Integration testing means testing a chain of components that together are supposed to handle a process or business transaction, using the white box method. Integration testing also often includes the interactions between hardware and software as well as other infrastructural components.

Functional testing is used to ensure that functionality specified as part of the software’s requirements works as intended from the end-user’s perspective. For this reason, functional testing uses the black box method.

Regression testing can be described as “repeated functional testing”. It is used to test that functionality continues to work after parts of it have been modified with new code or configuration. For instance, when new features are built, regression testing ensures that old features of the software continue to work as intended.

Load testing (sometimes called stress testing) means testing how software behaves under increasingly unfavorable conditions, such as large inflows of users measured in logins per second, or spikes in business transactions such as shopping basket operations. Although load testing falls under the category of black box testing, analyzing and understanding the results of a load test is highly complex and typically requires in-depth code and infrastructure knowledge.

Performance testing means testing how different parts of software perform in terms of speed and effectiveness at processing required business transactions. Thresholds for the maximum number of seconds within which a certain feature must execute under normal usage conditions are typically part of the software’s non-functional requirements and are tested using the black box method.

Usability testing means testing a piece of software from the end-user’s perspective with a focus on user-friendliness, aesthetics, and navigational effectiveness. It uses the black box method.

User Acceptance Testing is one of the few types of tests not performed by the vendor or producer of the software being tested, but rather by the customer receiving it. This type of testing is done to ensure that the software meets the requirements and works as expected from the customer’s perspective. It is typically used as the final gate before payment for the deliverables. UAT is performed using the black box method.

Penetration testing (or “pen testing”) is used to simulate attacks by hostile parties to evaluate the security of the software. It’s hacking, basically. For instance, an e-commerce platform might be pen tested to ensure that hackers can’t access credit card numbers or other confidential data. It is typically performed using the black box method, but by highly skilled specialists with deep coding skills that attempt to “open up the black box” to exploit what is inside.

Exploratory testing is a hands-on and ad-hoc approach that tests a piece of software with little or no planning apart from scope. The tester relies on his or her experience and knowledge of the typical pitfalls and business requirements. Exploratory testing is generally considered a great foundation for creating repetitive test tasks to be performed later. It uses the black box method.

Beta testing is a late-stage test of a piece of software by a sampling of the intended end-users before the final release. It is used to flush out bugs that might not be apparent to internal testers because many real-life usage scenarios are not known beforehand. Beta testing relies on the black box method, with opportunistic bug finding rather than planned and structured inputs and expected outputs.

API testing means testing the application programming interfaces (APIs) that are exposed by the developers to third-party developers. It is done to determine if the APIs meet the expectations for functionality, reliability, performance, and security. Although API testing requires code skills, it is typically performed using black box testing because only knowledge of the API’s input and output parameters is required.
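A sketch of the black-box nature of API testing: only the input and output contract matters. Here `get_user` is a hypothetical stand-in for an HTTP call to the API under test (in practice it would issue a real request, e.g. a GET against a users endpoint):

```python
# Black-box API test sketch: only the API's input/output contract is known.
# `get_user` is a hypothetical stand-in for an HTTP call to the API under test.

def get_user(user_id):
    if user_id == 42:
        return {"status": 200, "body": {"id": 42, "name": "Mary"}}
    return {"status": 404, "body": {"error": "not found"}}

def test_known_user_is_returned():
    response = get_user(42)
    assert response["status"] == 200
    assert response["body"]["name"] == "Mary"

def test_unknown_user_yields_404():
    assert get_user(9999)["status"] == 404

test_known_user_is_returned()
test_unknown_user_yields_404()
print("API contract checks passed")
```

Note that the checks never look inside the implementation - they only verify status codes and response shapes, which is exactly what makes API testing black box despite requiring code skills.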

 

All these different types of testing can add tremendous value to any software delivery process. They fit into both traditional and agile methodologies. Some of them, such as unit testing, are bound to the developers that produce the software under test. Others, such as penetration testing, happen elsewhere for the greatest effect: outside the development department or even outside the company.

Everyone who has worked on the business side of software products knows that there is one type of testing that uncovers far more bugs than any other. That is to let end-users use the software. After unit, integration, and performance tests have passed, a single end-user can make the whole system crash in minutes. This usually happens if the user does something the developers did not expect.

A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.

- Douglas Adams, Mostly Harmless


This is why human testers are so valuable: They try to anticipate and mimic the behavior of end-users. Testers act as users' “ambassadors” in the software development process. They do this by performing functional, regression, and exploratory testing on their behalf.

For the rest of this guide, we will focus on functional and regression testing. Specifically, how it works from the end-users' perspective by using the graphical user interface (GUI/UI) of the software itself.

To read more about the different kind of testing types not covered in this guide, we recommend the following resources:

“List of software testing types” by Software Testing Fundamentals

“The different types of software testing” by Atlassian

In the following, we'll focus on how to effectively approach GUI/UI test automation.

The Effective Approach to GUI/UI Test Automation

You're Doing it Wrong

Jogger falling in mud

“If it hurts, do it more often” is a mantra that makes a lot of sense to athletes. If your legs hurt after running 10 kilometers, do it more often and you’ll soon be able to run 20K. Then 30K - and eventually you’ll be able to compete in a marathon.

The concept applies to software development as well. Especially in the field of DevOps (the bridging of development, testing, and operations). Renowned developer and author Martin Fowler calls it “frequency reduces difficulty”. He breaks down his argument into three main points:

  • Decomposition: By dividing large tasks into smaller chunks, they become easier to perform.
  • Feedback: Short feedback loops speed up learning, which leads to quick improvements.
  • Practice: By repeating tasks often (i.e. with automation), the best approach will emerge.

Obviously, the saying is not a universal truth. Continuing to walk on a broken leg will not heal it; it will just increase the pain. Performing a flawed process repeatedly will not improve the process or make it easier to perform. Instead, it will lead to increased costs as well as higher risk of failure and poor quality.

In other words, sometimes a process hurts because you are doing it wrong.

So far, test automation has been synonymous with programming. Why? Because all available test automation frameworks and tools dictate it. These tools are developed with a programming mindset, in much the same way that operating systems required programming before Windows 95 changed the game with its visual interface.

One example is the world’s most popular browser automation framework, Selenium. It is incredibly flexible, and it works with any type of web application, across all browsers on all operating systems. To work with it, though, requires programming.

Sometimes a process hurts because you are doing it wrong.

The problem is, testers are not programmers. Adopting the attitude that testers must learn how to code is problematic. With that mindset, we ignore the fact that coding takes many years of practice to master - just like any other craft. What's more, coding takes time away from testers’ primary function. As mentioned earlier, testers' invaluable skill set lies elsewhere. Nonetheless, “Get on board or get fired” is a prevailing sentiment.

The paradox is obvious: Test automation, which was supposed to free up resources for human testers, brings with it an array of new costly tasks. Yet, test directors and managers are often quick to assume that testers should “just” learn how to cobble some code together. Preferably by copying and pasting existing samples from online repositories. This approach is usually based on the sentiment that "our testers don’t have to become great programmers, they just have to know enough to make automation work.”

This faulty attitude to test automation assumes that automation is a relatively simple task, and it is also blind to reality: programming means familiarizing yourself with an enormous amount of technical details and methodologies.

Doing it more often will not remove the pain, it will make it a lot worse.

3 Hard Lessons Learned in GUI/UI Test Automation

The existing paradigm in test automation causes serious headaches. In fact, it is at the heart of three common and painful lessons related to implementing test automation. Unfortunately, most enterprises learn these lessons the hard way during their agile transformation. In this section, we will go through these three lessons. You might find them to be eerily familiar, but don’t worry! In the next section, we’ll discuss how to avoid the common pitfalls and succeed with test automation.

Lesson 1: Learning to code is hard, but change management is even harder

The first lesson that enterprises often learn as they implement test automation is as follows. Although writing good test automation code is hard, it is quite possible to train a group of tech-savvy users, or non-developers, to do it - even with some success. With various training and a lot of trial and error, a group of skilled testers can begin creating test automation scripts that work in real life. This is usually achievable within 6 months. So far, so good!

However, the initial success comes with some costly realizations.

  • When writing automation logic, the tester must reach professional-level experience with a programming language.
  • When making a test interact with different systems, the tester must know about things like RegEx, CSS Classes, etc.
  • Finally, on top of this, the tester must also be able to manage change. This includes following best coding practices of “modularity”, “re-usability”, and more.

There is no way that a tester can learn all this in a short period of time without compromising their primary task: actual testing. As they make the attempt, the output is often inadequate. The resulting test automation cases are fragile, hard to troubleshoot and reuse, and impossible to update when the software changes.

Examples of factors that testers need to consider when working with programming-based automation.

Figure 5: Examples of factors that testers need to consider when working with programming-based automation.

 

It’s a common scenario that hundreds, or even thousands, of automation cases are created, and in the beginning, they work great. The cases run smoothly and successfully; green lights everywhere. But then, after a few months, as the software under test changes shape, the cases start failing. After 12 months, half of the green lights have turned red. Not because the automated test cases pick up a lot of bugs that need fixing, but because the cases simply no longer match the software under test.

Because the test cases were built by non-developers, they don't follow the best practices that would make updating the cases easier. No one in the organization knows how to make these improvised test cases work again. The consequence is that they are all scrapped, and a new batch is written from scratch to replace the old.

Figure 6: Code-based test cases fail over time because they can't be maintained.

And so, the first lesson is learned: It is hard to learn how to code test automation from scratch. But it is nearly impossible to manage changes in software without years of professional programming experience.

It is nearly impossible to manage changes in software without years of professional programming experience.

Lesson 2: Centers of Excellence and “Test Frameworks”

Enterprises usually learn the first lesson within a year of initiating automation roll-out. It takes a bit longer with the second one, which has to do with Centers of Excellence and Test Frameworks. The intentions behind these initiatives are often good, and they are usually founded on reasonable thinking. But in the end, the results are no less catastrophic.

When automation projects fail because of testers' “lack of programming and change management skills”, it’s very natural to want to fix the problem on an organizational level.

A preferred solution is to create a Center of Excellence (CoE) for testing. The idea is to put together a team of competent people from across the organization with different skill sets, led by a test director. The different roles could include testers, programmers, business analysts, project managers, and more. The CoE will then establish policies and methods to unify testing efforts across the enterprise. This is an excellent idea, because it has the potential to bring with it a lot of benefits, such as:

  • Optimized use of existing resources: QA budgets, tools, environments, and people
  • Faster time-to-market: Test times are reduced, and test automation levels are improved
  • Cost efficiency: Potentially a significant resource cost reduction over a 3-year time frame
  • Tighter alignment: Aligning quality efforts more tightly with business needs using KPIs
  • Increased agility: Better responding to business challenges and prioritized allocation of testers’ time

The members of the CoE will help the rest of the organization with their testing efforts. Preferably not by actually doing the testing, but by guiding their colleagues. This is done by relying on best practices and test automation frameworks.

Beware of the concept of “frameworks” - it's an inevitable cause of disaster.

When automation projects fail, it’s very natural to want to fix the problem on an organizational level.

A typical scenario: An enterprise considers itself and its way of doing business completely unique. With this mindset, Centers of Excellence are inclined to build their own unique test automation framework.

The framework will be built by the expert team, often on top of Selenium or a commercial tool. It's likely to come with several layers of abstraction to hide the internal implementation from users, who don’t need to know the inner workings. The team develops the framework to match how the company releases software – not only in the present, but in the future as well!

18 months and tens of thousands of working hours later, the test automation framework is ready for roll-out. At this point, the CoE-team faces the intimidating prospect of supporting the testers who must work with the framework every day.

The CoE has experienced some staff turnover, and in fact, there are none of the original framework designers left on the team. Test automation cases start failing left and right.

Then something drastic usually happens. IT Management steps up and evaluates the situation. They conclude: The test automation framework was built “using the wrong architecture and the wrong automation tool”. New team members are brought in, the CoE is given a new task: Build a “better, more robust, and future-proof” framework using a new technology.

It takes most enterprises at least a couple of rides on this roller-coaster before they learn the second hard lesson. The Center of Excellence cannot and should not build their own test automation framework.

The Center of Excellence cannot and should not build their own test automation framework.

Lesson 3: “Outsourcing quality”

The third hard lesson is usually learned after failing with programming and frameworks. A cost-conscious IT Management team decides to outsource testing to an external provider. It’s a great deal: Three test automation engineers for the price of one in the local office, and complete control over the way they work.

However, outsourcing comes with challenges of its own.

Common issues related to outsourcing QA activities:
  • Outsourced testers need extensive training in the software they are testing. This is much more difficult than with in-house testers because of distance and potential barriers - physical, linguistic, and cultural.
  • Test procedures and standards must be explicitly documented down to the letter. Any deviations, such as minor changes to systems or procedures, might cause an immediate halt in the outsourced team. This stop-go kind of work is toxic to software development and might put the product quality at risk.
  • The entire test automation effort is now “owned” by someone outside the enterprise. This might result in an effective vendor lock-in and price trap. If, at the same time, there is staff churn in the outsourced team, it can lead to misunderstandings and poor execution.

To sum up, the third hard lesson is that “outsourcing quality” comes with a price, i.e. risk of poor execution and vendor lock-in.

Test Automation Reimagined

If programming and outsourcing of testing is off the table, then how can enterprises drive value from test automation?

To answer that, let’s take a step back to talk about what a test (automation) case is, as seen from the perspective of the tester.

The elements of a test case:

  • A name or other unique identifier
  • A process that must be followed step-by-step
  • A set of input (test data)
  • A set of expected results
  • A set of pre-requisites before executing the test

All these elements are worth going into detail about, but there’s one that is at the core of how we should approach test automation. It has to do with how to describe a step-by-step process.

A test case is really a description of a business process. For example: “Log in to an application and check that the username is properly displayed”. Or “Put items in a shopping basket and check that the total including discount is correctly calculated”. 

A test case is really a description of a business process.

If you lead people to a whiteboard and ask them to illustrate one of their daily on-screen tasks, chances are that they will do it in one of two ways:

  1. They list each step of the task as bullet points from top to bottom. Then they realize that this approach is too restrictive because some steps interlink, and this is difficult to show in a bulleted list.
  2. They draw a flowchart with boxes representing each step or action being performed in the application’s user interface. Actions include clicking on a button, typing in a password, reading the text value from a field, etc.

 

A simple flowchart illustrating the steps in a common workflow or business process.

 Figure 7: A simple flowchart illustrating the steps in a common workflow or business process.

 

Drawing a step-by-step flowchart of user interface actions is an intuitive and flexible way to describe processes. Flowcharts are useful because they allow for branching logic, adding inputs from data sources, and much more.

For this reason, some test automation tools are developed entirely around the concept of visual GUI/UI flowcharts. For example, the LEAPWORK Automation Platform. In fact, there’s an entire industry standard for documenting business processes this way. It's called BPMN (Business Process Model and Notation).

Here’s an example: Testing a login form in a web application. It is simple to sketch out as a flowchart, where each “building block” consists of an action and an element in the application’s interface.

Figure 8: A standard login form.

 

The following flowchart illustrates what an automated test case would look like. It's a regular login process consisting of the following steps:

  • Clicking in an email address field,
  • typing an email address,
  • repeating the same steps for a password (not illustrated),
  • then clicking on the “LOG IN” button,
  • and finally, searching for the user’s name to appear on the screen.

Figure 9: A flowchart with each 'building block' representing a step in a test case.

 

Note that the flowchart is not just a representation of an automated test case, it is a tool for actually activating and executing the test case.
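The idea that a flowchart both documents and executes a test can be sketched in code: each building block becomes a step, and running the chart means running the blocks in order. This is an illustrative sketch only - the `ui` dictionary is a hypothetical stand-in for a real application interface:

```python
# Sketch: a flowchart as an executable chain of building blocks.
# `ui` is a hypothetical stand-in for the application's user interface.
ui = {"email": "", "password": "", "screen": "login form"}

def type_email(ui):
    ui["email"] = "mary@example.com"

def type_password(ui):
    ui["password"] = "secret"

def click_login(ui):
    if ui["email"] and ui["password"]:
        ui["screen"] = "Welcome, Mary"

def find_text(ui):
    assert "Mary" in ui["screen"], "expected the user's name on screen"

flowchart = [type_email, type_password, click_login, find_text]
for block in flowchart:   # executing the chart = running each block in order
    block(ui)
print("login flow passed")
```

The chart is the test: there is no separate script to keep in sync with the diagram, which is precisely what makes the visual approach maintainable.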

Suddenly, the road to success with test automation seems within reach. An automation tool built on this visual approach empowers non-developers. It enables them to create, maintain, and execute test automation themselves without having to learn how to code.

Read more: ”How to Automate Tests and Processes across Applications”

There are several tools on the market with a visual approach to designing test cases, but a word of caution. Most of them are in fact based on programming disguised by a very superficial layer of visualization. Below is an example of how a “visual automation tool" turns to programmer jargon as soon as something unexpected happens.

In cases like these, the user must still understand programming terms and structures. For instance, when defining the parameters of an action or managing unexpected scenarios.


Figure 10: Example of how an automation tool relies on programmer jargon at the expense of intuitiveness.

 

Test Automation, Agile Development, and DevOps

Agile transformation helps businesses manage change and pursue emerging opportunities in any market situation. Test automation plays a vital part in achieving the desired state of agility. To learn more about the LEAPWORK approach to implementing automation as part of a company's agile transformation, we recommend the following resources:

"How Test Automation Supports Agile Transformation"

"How to Achieve Agile Testing by Automating Functional UI Tests"

DevOps—the practice of bridging software development and software operations—is a means to releasing high quality software into production.

Automation is a prerequisite for success with DevOps. Test automation in particular is a key ingredient in providing fast and accurate feedback to testers and developers. Read more:

Blog post: "How to Automate Functional UI Tests in a DevOps World"

Whitepaper: "DevOps and Test Automation"

Documentation: "LEAPWORK and DevOps"

10 Considerations for Implementing Test Automation

 

Each of the ten considerations below contrasts the test team perspective with the automation perspective.

1. Flexibility in testing

  • Test team: Manual testing allows for fast sign-off of features on an ad hoc basis, but flexibility decreases as the workload increases.
  • Automation: Gradually building automated cases on a sprint-by-sprint basis provides a high degree of flexibility.

2. Regression testing

  • Test team: Manual regression testing is time-consuming, causes uncertainty, and creates bottlenecks. Inevitably, the regression suite grows to a point at which it can’t be managed manually.
  • Automation: Automated regression tests can be executed 24/7, the same way every time. Building automation cases is a one-time effort, and once built, they can be reused indefinitely.

3. Scalability

  • Test team: Scaling manual testing requires more people and more hours.
  • Automation: Scaling automated testing is done by simply adding more test executors (robots or agents).

4. The DevOps pipeline

  • Test team: Several aspects of the DevOps pipeline still require human cognitive skills.
  • Automation: Test automation directly supports the DevOps principle of streamlining the pipeline.

5. Processes and ownership

  • Test team: Testers should have a say in defining the pipeline, especially the areas related to test automation.
  • Automation: A test automation platform should support—not dictate—your DevOps pipeline.

6. Change management

  • Test team: In a QA team, both developers and testers should be empowered to make the changes required to improve the software development process.
  • Automation: Testers—not the automation tool—should decide on the processes related to test automation.

7. Tooling

  • Test team: Tools with long learning curves and unnecessary complexity require that a lot of resources are spent on support.
  • Automation: An automation platform should be a good fit in three aspects: technology, processes, and organization.

8. Skills

  • Test team: The most important skill of the tester profession is to utilize one’s domain knowledge of the system under test when defining test cases, analyzing requirements, reporting results, etc. Programming skills are not—and should not be—a default part of a tester’s skill set.
  • Automation: Some test automation tools require testers to code when building automation cases. Others let testers build automation cases by working with a visual designer. With the latter method, testers can fully utilize their domain knowledge without having to worry about an application’s underlying code.

9. Domain knowledge

  • Test team: Domain knowledge will always be part of the testing profession – automation or not.
  • Automation: Implementing automation is about utilizing a team’s collective knowledge in the development of automation cases and making sure that automation becomes part of the daily work routine.

10. Organization

  • Test team: Organizing test teams around products results in dedicated, relatively static teams with a well-maintained pool of domain knowledge.
  • Automation: All members of a test team play an important role in automation – but not necessarily the same role. Tasks will vary depending on technical proficiency and level of domain knowledge.

 

Continued reading: A full walk-through of the ten considerations.

Experience the LEAPWORK Automation Platform

Start trial

Get Started with Test Automation

This guide has introduced a new paradigm in test automation. This approach:

  • lowers the barriers to building and executing automation, for both technical specialists and non-developers, and
  • helps teams avoid the common pitfalls in implementing automation.

To help you get test automation off the ground, we have published a short start guide. This quick read includes some key aspects to consider in relation to tool evaluation and implementation.

Continued reading: Get started with test automation.

Test Automation Best Practices

Most mistakes in test automation are predictable and can be avoided by following best practices. Here's a handful of guidelines to help you achieve success with automation:

1. Increase test coverage gradually with automation

Plan for gradually automating your test suite. We recommend starting out by focusing on the flows that are easiest to automate. In most cases, you will find that it’s the relatively simple and very repetitive flows that, by far, take up most of your testing time.

2. Build automated test cases that each test for one thing

Build test cases so that they logically only test one aspect. This way, there is no doubt about what goes wrong when a test case fails. Instead of bundling up multiple tests in one test case, it is best practice to build reusable components with your test automation tool. This way, it is easy to reuse the logic contained in other test cases, and the time required to create a new test case is minimized.
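As a minimal illustration of this practice, here is a sketch in plain Python unittest. The `login` helper is a hypothetical reusable component shared by several single-purpose test cases; in a visual tool, it would be a reusable sub-flow rather than a function.

```python
import unittest

# Hypothetical reusable component: a login helper shared by many test
# cases, so each case only needs to assert one thing.
def login(app, email, password):
    app["user"] = email if password == "s3cret" else None
    return app

class LoginTests(unittest.TestCase):
    def test_valid_login_sets_user(self):
        app = login({}, "jane@example.com", "s3cret")
        self.assertEqual(app["user"], "jane@example.com")

    def test_invalid_password_rejected(self):
        app = login({}, "jane@example.com", "wrong")
        self.assertIsNone(app["user"])

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each case asserts exactly one behavior, a failure in either test points directly at the behavior that broke.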

3. Build automated test cases that are independent and self-contained

This way, they can all be scheduled at once to be executed anytime and in parallel, i.e. across different environments.
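The following sketch shows the idea with two hypothetical test cases: each one creates its own state and touches nothing shared, so they can safely run in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Each case builds its own state and touches nothing shared, so the cases
# can run in any order, at any time, and in parallel.
def test_add_to_cart():
    cart = []                       # fresh state, created inside the case
    cart.append("item-1")
    assert len(cart) == 1
    return True

def test_user_login():
    session = {"logged_in": False}  # self-contained setup, no dependencies
    session["logged_in"] = True
    assert session["logged_in"]
    return True

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda case: case(), [test_add_to_cart, test_user_login]))
```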

4. Ensure collective ownership of test automation

Remember that success with automation is dependent on a team’s collective knowledge. Adopt a test automation platform that all testers can work with, so that automation becomes a natural part of the daily work of all team members.

5. Use a tool with a good technical fit

Implementing test automation is a long-term strategic choice and should be handled as such. When evaluating automation tools, look across your organization and identify all the applications and technologies that could be potential targets for automation. Identify the scenarios where test cases need to move between technologies, e.g. both web and desktop applications, and select an automation platform that has matching capabilities. For an automation tool checklist, see below.

Continued reading: "5 Failures in Test Automation – and Best Practices for Tackling Them".

Developing a Test Automation Strategy

Once the decision has been made to roll out test automation, the next issue presents itself: “How are we actually going to do this? What’s the plan?” We've put together a checklist of items that a test automation strategy should include. In short, a test automation strategy should address the following:

  1. Project scope from an automation perspective
  2. Choice of test automation approach
  3. Analysis of automation related risks
  4. Definition and choice of test automation environment
  5. Execution plan of day-to-day tasks and procedures related to automation
  6. A decision on release control
  7. A plan for how to analyze failing test cases
  8. Procedures for reviewing the strategy and providing feedback

See the full test automation strategy checklist.

Automation Tool Checklist

For long-term strategic reasons, when choosing an automation platform, it makes sense to pick one that isn’t “just” made for test automation. Instead think of process automation more broadly, and research tools that can be used across the enterprise.

When evaluating automation tools, go through the following checklist of requirements:

  • Technical capabilities cover all your enterprise’s primary applications, e.g. Salesforce, SAP, Oracle, etc.
  • Strong governance features, incl. authentication, access control, versioning, and complete audit trails.
  • Jobs can be run both locally on desktop computers and remotely on virtual machines and in the cloud.
  • Reporting and analytics capabilities and/or integrations to PowerBI, Tableau, or similar.
  • Allows business users, who are not programmers, to visually design flows without having to write code.


Go Further with Test Automation

Analyzing Test Automation Results

As test automation is introduced to the software delivery process, the amount of available test results explodes. Robots, or test execution agents, can run 24/7 without breaks, and, on top of this, the number of test cases grows with each sprint. More results are produced that must be managed and analyzed, and this requires the right approach.

Here are four tips for analyzing test automation results:

  1. Set up automated monitoring to make sure testers spend their time most effectively.
  2. Figure out why test cases are failing by utilizing your test automation platform's logging, debugging, and reviewing functionalities. 
  3. Integrate with your release management platform either by pushing or pulling test results.
  4. Ensure fast and transparent feedback with shared dashboards of real-time test results.

Continued reading: "How to Analyze Test Automation Results".

Setting up Good Test Environments

Once a test automation strategy has been approved and the implementation plan has been initiated, there will inevitably come a point at which testers will begin asking themselves: “Can we trust the results generated from automated testing?”

Building on the best practices for analyzing test automation results outlined above, it is important to consider how to ensure environments that produce reliable test results. A golden rule in testing is that the quality of your tests is no better than the quality of your test environments.

The main requirements for good test environments are:

  • Make the application in question testable by dealing with third-party system dependencies. This can be done in one of two ways:
    • Encapsulate the application for testing purposes, either by relying on mock data inside the application or by using a proxy service. 
    • Simply set up automated monitoring of e.g. a web page calling all relevant third-party services.
  • Make test environments fit your DevOps pipeline. This includes considerations related to load balancer, deployment, service accounts, environment installation, and more.
  • Decide how to manage test data. There are several ways to ensure that test data has a certain quality, including:
    • Baselining from production data
    • Bootstrapping the database
    • Creating test cases which contain the data generation in themselves
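The last option, generating test data inside the test case itself, is often done with a builder. Here is a minimal sketch with a hypothetical `OrderBuilder`; the field names are made up for illustration.

```python
# Hypothetical builder sketch: the test case generates the data it needs
# itself, instead of depending on a pre-populated database.
class OrderBuilder:
    def __init__(self):
        self._order = {"customer": "default", "lines": []}

    def for_customer(self, name):
        self._order["customer"] = name
        return self                 # returning self allows chaining

    def with_line(self, sku, qty):
        self._order["lines"].append({"sku": sku, "qty": qty})
        return self

    def build(self):
        return dict(self._order)    # hand out a copy of the finished order

order = OrderBuilder().for_customer("Acme").with_line("A-1", 2).build()
```

Because each test builds exactly the data it needs, the test environment no longer has to guarantee any particular database content.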

Continued reading: "Why Good Test Environments are Crucial for Successful Automation".

Applying Context-Driven Test Automation

Test automation is the obvious solution to the accumulating workload in quality assurance of software projects. Fewer human errors, more coverage with fewer resources, and the ability to scale repetitive tasks are some of the well-known gains of automation. However, to really benefit from automation, it is necessary to reduce complexity along the way.

Context-driven automation is a way to hide complexity so that the tester can focus only on what is essential in a test case.

Context-driven automation relies on data sources in which the interrelation between data is pre-configured in the source itself. This way, the relations do not have to be specified when building automation flows, which makes it much simpler to reuse components, e.g. sub-flows in LEAPWORK.

Context-driven automation reduces a multidimensional project to three dimensions or fewer, making it much more manageable for the human tester.

As a result, entire test suites can be reduced to a few archetypes of flows that are contextual and compatible with changes in the surrounding context, e.g. test environment, data sources, system customization, multiple product versions, etc.
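A minimal sketch of the idea, with hypothetical environment names and fields: the context lives in the data source, so one generic flow is reused unchanged across every context.

```python
# Hypothetical sketch: the context (environment, URL, user) lives in the
# data source, so one generic flow is reused unchanged across contexts.
contexts = [
    {"env": "staging", "base_url": "https://staging.example.com", "user": "qa1"},
    {"env": "uat", "base_url": "https://uat.example.com", "user": "qa2"},
]

def login_flow(ctx):
    # The flow never hardcodes environment details; it reads them from ctx.
    return f"{ctx['user']} logged in at {ctx['base_url']}"

results = [login_flow(ctx) for ctx in contexts]
```

Adding a new environment means adding one row to the data source; the flow itself stays untouched, which is where the maintenance savings come from.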

Continued reading: "Reduce Maintenance Workload 97% with Context-Driven Test Automation".

Test Automation and AI


Artificial Intelligence (AI) is an intriguing – and sometimes intimidating – phenomenon. It is no longer the stuff of a faraway future. With headlines like “Forrester Predicts That AI-enabled Automation Will Eliminate 9% of US Jobs In 2018”, it is only natural for professionals in any industry to think about how AI will affect their work.

This is especially true for repetitive work that is already the target for automation. One example is regression testing, as we have outlined in this guide. Still, allow us to deflate some of the hype around AI and whether it will ‘disrupt’ the testing profession.

When working in and with software, it is not hard to imagine how AI could drastically change software development as we know it. After all, AI is finding ways to identify and treat cancer, is driving our cars for us, and has beaten human contestants in Jeopardy.

But still, to answer the question, will AI or machine learning take over testers’ jobs: The short answer is “No”.

The reason being that the current and near-future manifestations of AI are not general-level, sentient intelligences resembling the human mind.

As aptly put by the World Economic Forum in a 'reality-check' of techno-enthusiasts' AI dreams:

"There are limits to what AI can do, and they are linked to how machine learning actually works. One of the most promising varieties of AI technologies is neural networks.  ... [But] simply adding a neural network to a problem does not automatically create a solution."

The longer answer to the question of whether AI will take over testers' jobs is still no, but there is definitely a place for AI/machine learning in testing. Think of it as a supplementary tool.

Using Machine Learning in Test Automation

From a software and testing perspective, it is worth knowing that practical uses of machine learning are already in the works.

Over the next few years, test automation tools will start including a wide range of machine learning-based features, such as:

  • Assistance in test case generation, based on real-life user data
  • Analysis of results to detect monitoring anomalies, false positives and so on
  • More robust recognition of dynamically changing and static parts of an application

Currently, most of these features are at an early stage but will grow rapidly. However, we predict that these machine learning-based features will be assistive in nature and will not take over any testers’ jobs.

To illustrate that AI is not yet at a stage where it can take over a human profession entirely, consider this: IBM Watson doesn’t know it won Jeopardy. It doesn’t even know what Jeopardy is.

Continued reading: “Will AI Take Over Test Automation?”

Test Automation and Robotic Process Automation


In their market guide for Robotic Process Automation (RPA), Gartner defines RPA software this way:

Robotic process automation (RPA) tools perform "if, then, else" statements on structured data, typically using a combination of user interface (UI) interactions, or by connecting to APIs to drive client servers, mainframes, or HTML code. An RPA tool operates by mapping a process in the RPA tool language for the software "robot" to follow, with run-time allocated to execute the script by a control dashboard.

Curiously, this description sounds very similar to the concept of visual GUI/UI test automation. It also reveals that the two domains are nearly identical on a technical level, with only a few minor differences.

Both disciplines are about automating processes that:

  • are boring and repetitive;
  • cost too much to scale up; and
  • have a high risk of human error.
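Gartner's "if, then, else" description can be made concrete with a small sketch. This is a hypothetical example of an RPA-style rule acting on structured data; the record fields and the threshold are made up for illustration.

```python
# Hypothetical sketch of the "if, then, else" core of an RPA robot acting
# on structured data (invoice records); fields and thresholds are made up.
def route_invoice(invoice):
    if invoice["amount"] > 10_000:
        return "escalate"           # too large for automatic handling
    elif invoice["approved"]:
        return "pay"
    else:
        return "reject"

decisions = [route_invoice(inv) for inv in [
    {"amount": 50_000, "approved": True},
    {"amount": 200, "approved": True},
    {"amount": 200, "approved": False},
]]
```

Replace "invoice" with "test step" and "route" with "verify", and the same conditional structure describes an automated test case, which is why the two domains overlap so heavily.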

Read more about the differences and similarities between test automation and RPA.

Most test automation tools are not suitable for RPA because they lack enterprise features. These include governance, change tracking, and audit trails. Interestingly, RPA projects are often kick-started with test automation tools! This is because they offer a pragmatic way to realize business-related automation potential.

Learn more about LEAPWORK’s out-of-the-box approach to RPA. 

Get Certified in Test Automation

When it comes to certifications within test automation, several are limited to specific frameworks or methods, which makes them quite narrow in applicability. That’s why we offer our users the opportunity to become certified in the LEAPWORK Automation Platform, which relies on the same approach to designing automation flows across technologies. LEAPWORK users do not have to worry about underlying frameworks to become proficient in test automation.

LEAPWORK offers a comprehensive certification in test automation intended for consultants and other professionals who are experienced in working with the LEAPWORK Automation Platform.

Learn more about the LEAPWORK Professional Certification Program.

Summary

  • Quality Assurance (QA) and testing are at the very core of digital transformation. They are key processes in software development.
  • The nature of software development combined with the forces of market competition create an unsustainable situation. Especially when it comes to software testing. Either more manpower is needed, or the existing testers’ productivity must be accelerated. Alternatively, product quality must be compromised with the risk of losing market share.
  • There are three key drivers for test automation: risk, costs, and execution.
  • There is a wide range of software testing methods, and some types of test activities are better suited for automation than others.
  • Learning how to code test automation from scratch is hard. But managing change in software without professional programming experience is much harder.
  • Testers are not programmers. Coding takes time away from testers’ primary function and their invaluable skill set lies elsewhere.
  • A business that wants to implement test automation cannot and should not build its own test automation framework.
  • Outsourcing software testing comes at a price, i.e. the risk of poor execution and vendor lock-in.
  • A test case is essentially a description of a business process, and flowcharts are a useful way to illustrate such processes. For this reason, the LEAPWORK Automation Platform is based on visual GUI/UI flowcharts.
  • Choosing a platform for test automation should be a strategic choice. We recommend picking a platform for process automation in general that can be used across the enterprise.

Additional Resources

Further your knowledge with these handpicked resources: 

The LEAPWORK Learning Center: Become a do-it-yourself automation expert.

Web automation – collection of articles

Data-driven automation – collection of articles

Capgemini World Quality Report 2017-18

Gartner: Critical Capabilities for Software Test Automation

The LEAPWORK Professional Certification Program

The Problem with ’Record and Playback’ in Test Automation

How to use the Data Builder Pattern in Test Automation

Start a LEAPWORK trial
14 days, free of charge, unlimited access

Stay up to date on automation

Our blog, The LEAP, offers unique automation insights and productivity tips.