A few years ago, the request above landed in our inbox at LEAPWORK. It came from someone looking for a better way to automate the testing of a large-scale software application. If this challenge sounds familiar, you have come to the right place.
Today, digital transformation affects businesses in every market. Either they are driving it or being driven by it.
Any industry is at risk of disruption and market positions are at stake. As new business models emerge, and customer demands keep increasing, enterprises everywhere struggle to stay relevant. They must change the way they do business.
Technology is used in new and ever-more complex ways to drive value, ranging from automation of enterprise systems to cloud-based commerce and cross-channel user experiences.
Quality Assurance (QA) and testing are at the very core of digital transformation. They are key processes in software development, with good reason. Software defects, or bugs, are extremely costly to fix after release compared to catching them early.
Proactive bug fixing requires testing and re-testing with each small change made to the underlying code. As software evolves, the need for repeated testing mounts.
And testing is costly. In fact, quality assurance and testing account for one fourth of total IT spending. By 2020, this budget allocation is expected to rise to 32%. QA teams point to the increasing inefficiency of test activities as a major factor contributing to rising testing costs.
Figure 1: The factors impacting the spike in QA and testing budgets
These are well-known facts in development teams around the world. What is less common is the acknowledgement that software testing is not only costly - it is also very difficult. Testing requires understanding product owners' and other stakeholders' expectations. Knowing the inherent limitations of the product and its environment is also a critical part of the tester profession.
For instance, if you are testing a video conference application, it is important to know that a regular voice call can interrupt the app when used on a phone, and that the app is highly dependent on a stable internet connection.
Figure 2: The common regression testing issue
Testers must also anticipate end-user behavior and balance that against requirements. This involves creating strategies for covering as much of the product as possible under time and budget constraints. Covering every single usage scenario imaginable, especially as the software keeps changing, is not feasible. Testers are constantly balancing the need for both exploratory testing and repeatable checklists. In fact, it is not uncommon for only about 15% of a product’s total capabilities to be covered in testing.
What's more, testers meet pressure from everywhere around them. Product owners want faster releases, developers want continuous delivery, and end-users want flawless new product features.
Consider the following scenario. A vendor of software solutions to the financial sector wants to release updates to a web application multiple times per year. Before each release, the product must go through careful testing.
The test team, consisting of three full-time testers, has identified and mapped out 300 test cases. 250 of these test cases are regression tests – re-testing existing functionalities. They are very structured, repetitive, and predictable - in other words, ideal for automation. The remaining 50 tests are more loosely defined and exploratory in nature and will continue to be performed manually.
Designing, planning, and executing the 250 test cases is extremely time-consuming when done manually. On average, each test case requires one hour of manual testing effort, amounting to approximately six person-weeks of testing per release - and that is before counting the 50 exploratory tests.
To meet a minimal coverage for a single release, the team of three testers must spend two full weeks designing and running their tests.
The company has six releases planned for the year ahead – one release every second month. This would require a total of 36 person-weeks of testing.
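The scenario's arithmetic can be sketched in a few lines (figures taken from the example above; the 40-hour working week is an assumption):

```python
# Back-of-the-envelope testing effort for the scenario above.
# Figures come from the example; the 40-hour week is an assumption.
HOURS_PER_CASE = 1
REGRESSION_CASES = 250
TESTERS = 3
HOURS_PER_WEEK = 40
RELEASES_PER_YEAR = 6

person_hours = REGRESSION_CASES * HOURS_PER_CASE            # 250 hours
person_weeks = person_hours / HOURS_PER_WEEK                # 6.25 person-weeks
calendar_weeks = person_weeks / TESTERS                     # ~2 weeks for the team
yearly_person_weeks = person_weeks * RELEASES_PER_YEAR      # 37.5 person-weeks

print(f"per release: {person_weeks:.2f} person-weeks "
      f"({calendar_weeks:.1f} calendar weeks for three testers)")
print(f"per year: {yearly_person_weeks:.1f} person-weeks")
```

The numbers match the rough figures in the text: about six person-weeks per release, roughly two calendar weeks for the three-person team, and close to 36 person-weeks across six yearly releases.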
The 250 test cases for regression testing are the bare minimum required for an acceptable product quality. QA and product managers know they need more thorough testing to increase product quality and stay competitive. But there’s no budget to hire more testers.
With the current resources at their disposal, they have settled on a minimal level of sanity testing. This leaves the test team spending most of their time on regression testing, or product maintenance, at the expense of exploratory testing, i.e. product improvement.
Furthermore, there are at least four factors contributing to rising testing costs.
1. The demand for testing across devices, systems, and platforms is increasing. Extending the testing scope from one to two web browsers – or to include a single mobile device – would double the workload.
2. The number of test cases accumulates. With each product update, the number of test cases required to cover more functionality grows by at least 10%. New functionality impacts existing features, which then require re-testing; this is the common regression testing issue.
3. The release pipeline matures. Instead of doing regression testing just once, the team wants to run tests at several stages of the release pipeline. This helps provide developers with feedback as fast as possible.
4. Management wants to increase the number of releases. To maintain market position with their state-of-the-art product, the company wants to do more product releases per year. This means they must speed up release cycles.
These four factors combined create an unsustainable situation. Either the QA team must grow, or testers' productivity must be accelerated. Alternatively, the company must compromise on their product quality and consequently lose market share.
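The compounding effect of these factors is easy to underestimate. A quick sketch using the scenario's figures (250 cases, 10% growth per release, six releases a year):

```python
# Compounding suite growth: 10% more test cases per release.
cases = 250.0
for release in range(1, 7):
    cases *= 1.10
    print(f"after release {release}: {cases:.0f} cases")

# Factor 1 on top: a second browser doubles the workload again.
print(f"with a second browser: {2 * cases:.0f} case executions")
```

In a single year the suite grows from 250 to roughly 443 cases, and adding just one more browser pushes the workload towards 900 case executions per release.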
Figure 3: Testing workload over time as scope and targets change
Automating as much test work as possible is the obvious solution. Fewer human errors, more coverage with fewer resources, and the ability to scale repetitive tasks are some of the gains of automation.
When asked to list the reasons for process automation, including tests, businesses point to three drivers: To reduce risk, to lower cost, and to increase execution speed.
Figure 4: The key drivers of automation
These drivers for automation manifest themselves differently in organizations.
Test automation is the use of software (separate from the software under test) to control the execution of tests. It involves a comparison of actual outcomes with predicted outcomes.
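At its core, that comparison is an assertion. Here is a minimal sketch in Python, where `calculate_total` is a hypothetical stand-in for the software under test:

```python
def calculate_total(prices, discount=0.0):
    """Hypothetical function under test: sum prices, apply a discount."""
    return sum(prices) * (1 - discount)

# The essence of an automated test: the controlling software supplies
# input and compares the actual outcome with the predicted outcome.
actual = calculate_total([100, 50], discount=0.10)
expected = 135.0
assert abs(actual - expected) < 1e-9, f"expected {expected}, got {actual}"
print("test passed")
```

Everything that follows in this guide - frameworks, flowcharts, robots - is ultimately machinery for producing and checking comparisons like this one at scale.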
Of course, test automation doesn’t mean completely automated testing. It’s about relieving testers from repetitive, time-consuming tasks.
This way they can use their skill set to design test cases that increase testing coverage - both quantitatively and qualitatively.
There are different ways of doing test automation. Before we dig into the most significant ones, as seen from the end-users’ perspective, it’s worth understanding that:
1. there are different methods for testing, and
2. some types of test activities are better suited for automation than others.
There are two main methods used in testing:
Black box testing. This means testing an application without knowing its internal workings or implementation. As the name implies, the application is like a black box to the tester, who supplies input and examines output.
White box testing (or “glass box testing”) is the opposite. The tester knows the internal workings and implementation details (code, configurations, etc.). Relying on this knowledge, the tester might test certain paths through the code.
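The distinction can be made concrete with a tiny example. Below, `classify_age` is a hypothetical function under test; the black box checks use only inputs and outputs, while the white box checks target specific paths in the code:

```python
def classify_age(age):
    """Hypothetical function under test."""
    if age < 0:
        raise ValueError("negative age")
    return "minor" if age < 18 else "adult"

# Black box: supply input, examine output, no knowledge of the code.
assert classify_age(30) == "adult"
assert classify_age(5) == "minor"

# White box: inputs chosen to exercise specific paths in the code,
# e.g. the boundary at 18 and the error branch.
assert classify_age(18) == "adult"
assert classify_age(17) == "minor"
try:
    classify_age(-1)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
print("all paths covered")
```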
Using one or both methods, testing is done at various stages of the software development life cycle to find bugs. There are several different types of software testing:
All these different types of testing can add tremendous value to any software delivery process. They fit into both traditional and agile methodologies. Some of them, such as unit testing, are bound to the developers who produce the software under test. Others, such as penetration testing, happen elsewhere for the greatest effect: outside the development department or even outside the company.
Everyone who has worked on the business side of software products knows that there is one type of testing that uncovers far more bugs than any other. That is to let end-users use the software. After unit, integration, and performance tests have passed, a single end-user can make the whole system crash in minutes. This usually happens if the user does something the developers did not expect.
“A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.”
- Douglas Adams, Mostly Harmless
This is why human testers are so valuable: They try to anticipate and mimic the behavior of end-users. Testers act as users' “ambassadors” in the software development process. They do this by performing functional, regression, and exploratory testing on their behalf.
For the rest of this guide, we will focus on functional and regression testing. Specifically, how it works from the end-users' perspective by using the graphical user interface (GUI/UI) of the software itself.
To read more about the different kinds of testing not covered in this guide, we recommend the following resources:
In the following, we'll focus on how to effectively approach GUI/UI test automation.
“If it hurts, do it more often” is a mantra that makes a lot of sense to athletes. If your legs hurt after running 10 kilometers, do it more often and you’ll soon be able to run 20K. Then 30K - and eventually you’ll be able to compete in a marathon.
The concept applies to software development as well. Especially in the field of DevOps (the bridging of development, testing, and operations). Renowned developer and author Martin Fowler calls it “frequency reduces difficulty”. He breaks down his argument into three main points:
Obviously, the saying is not a universal truth. Continuing to walk on a broken leg will not heal it, it will just increase the pain. Performing a flawed process repeatedly will not improve the process or make it easier to perform. Instead, it will lead to increased costs as well as higher risk of failure and poor quality.
In other words, sometimes a process hurts because you are doing it wrong.
So far, test automation has been synonymous with programming. Why? Because all available test automation frameworks and tools dictate it. These tools are developed with a programming mindset, in much the same way that operating systems required programming before Windows 95 changed the game with its visual interface.
One example is the world’s most popular browser automation framework, Selenium. It is incredibly flexible, and it works with any type of web application, across all browsers on all operating systems. Working with it, though, requires programming.
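To give a sense of what that means in practice, here is a minimal Selenium-style login test in Python. The URL and element IDs are invented for the example, and the imports are kept inside the function so the sketch can be read without Selenium installed:

```python
def run_login_test():
    """Minimal Selenium login test (Selenium 4 syntax).
    The URL and element IDs are placeholders invented for this sketch."""
    # Imports kept local so the sketch loads without the selenium package.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("jane.doe")
        driver.find_element(By.ID, "password").send_keys("s3cret")
        driver.find_element(By.ID, "login-button").click()
        # Compare the actual outcome with the expected outcome.
        greeting = driver.find_element(By.ID, "greeting").text
        assert "jane.doe" in greeting, f"unexpected greeting: {greeting}"
    finally:
        driver.quit()

# To execute for real: install selenium, have a Chrome driver available,
# then call run_login_test().
```

Even this small script presupposes knowledge of locators, driver lifecycles, waits, and exception handling - exactly the kind of detail the following paragraphs are about.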
The problem is, testers are not programmers. Adopting the attitude that testers must learn how to code is problematic. With that mindset, we ignore the fact that coding takes many years of practice to master - just like any other craft. What's more, coding takes time away from testers’ primary function. As mentioned earlier, testers' invaluable skill set lies elsewhere. Nonetheless, “Get on board or get fired” is a prevailing sentiment.
The paradox is obvious: Test automation, which was supposed to free up resources for human testers, brings with it an array of new costly tasks. Yet, test directors and managers are often quick to assume that testers should “just” learn how to cobble some code together. Preferably by copying and pasting existing samples from online repositories. This approach is usually based on the sentiment that "our testers don’t have to become great programmers, they just have to know enough to make automation work.”
This faulty attitude to test automation assumes that automation is a relatively simple task, and it is blind to reality: programming involves familiarizing yourself with an enormous amount of technical details and methodologies.
Doing it more often will not remove the pain, it will make it a lot worse.
The existing paradigm in test automation causes serious headaches. In fact, it is at the heart of three common and painful lessons related to implementing test automation. Unfortunately, most enterprises learn these lessons the hard way during their agile transformation. In this section, we will go through these three lessons. You might find them to be eerily familiar, but don’t worry! In the next section, we’ll discuss how to avoid the common pitfalls and succeed with test automation.
The first lesson that enterprises often learn as they implement test automation is as follows: although writing good test automation code is hard, it is quite possible to train a group of tech-savvy users, or non-developers, to do it with some success. With training and a lot of trial and error, a group of skilled testers can begin creating test automation scripts that work in real life, usually within six months. So far, so good!
However, the initial success comes with some costly realizations.
There is no way that a tester can learn all this in a short period of time without compromising their primary task: actual testing. When they make the attempt, the output is often inadequate. The resulting test automation cases are fragile, hard to troubleshoot and reuse, and impossible to update when the software changes.
Figure 5: Examples of factors that testers need to consider when working with programming-based automation
It’s a common scenario that hundreds, or even thousands, of automation cases are created, and in the beginning, they work great. The cases run smoothly and successfully; green lights everywhere. But then, after a few months, as the software under test changes shape, the cases start failing. After 12 months, half of the green lights have turned red. Not because the automated test cases pick up a lot of bugs that need fixing, but because the cases simply no longer match the software under test.
Because the test cases were built by non-developers, they don't follow the best practices that would make updating them easier. No one in the organization knows how to make these improvised test cases work again. The consequence is that they are all scrapped, and a new batch is written from scratch to replace the old.
Figure 6: Code-based test cases fail over time because they can't be maintained
And so, the first lesson is learned: It is hard to learn how to code test automation from scratch. But it is nearly impossible to manage changes in software without years of professional programming experience.
Enterprises usually learn the first lesson within a year of initiating the automation roll-out. The second takes a bit longer, and it has to do with Centers of Excellence and test frameworks. The intentions behind these initiatives are often good, and they are usually founded on reasonable thinking. But in the end, the results are no less catastrophic.
When automation projects fail because of testers' “lack of programming and change management skills”, it’s very natural to want to fix the problem on an organizational level.
A preferred solution is to create a Center of Excellence (CoE) for testing. The idea is to put together a team of competent people from across the organization with different skill sets, led by a test director. The different roles could include testers, programmers, business analysts, project managers, and more. The CoE will then establish policies and methods to unify testing efforts across the enterprise. This is an excellent idea, because it has the potential to bring a lot of benefits, such as:
The members of the CoE will help the rest of the organization with their testing efforts. Preferably not by actually doing the testing, but by guiding their colleagues. This is done by relying on best practices and test automation frameworks.
Beware of the concept of “frameworks” - it's an inevitable cause of disaster.
A typical scenario: An enterprise considers itself and its way of doing business completely unique. With this mindset, Centers of Excellence are inclined to build their own unique test automation framework.
The framework will be built by the expert team, often on top of Selenium or a commercial tool. It's likely to come with several layers of abstraction to hide the internal implementation from users who don’t need to know the inner workings. The team develops the framework to match how the company releases software – not only in the present, but in the future as well!
18 months and tens of thousands of working hours later, the test automation framework is ready for roll-out. At this point, the CoE-team faces the intimidating prospect of supporting the testers who must work with the framework every day.
The CoE has experienced some staff turnover, and in fact, none of the original framework designers are left on the team. Test automation cases start failing left and right.
Then something drastic usually happens. IT Management steps up and evaluates the situation. They conclude: The test automation framework was built “using the wrong architecture and the wrong automation tool”. New team members are brought in, the CoE is given a new task: Build a “better, more robust, and future-proof” framework using a new technology.
It takes most enterprises at least a couple of rides on this roller-coaster before they learn the second hard lesson. The Center of Excellence cannot and should not build their own test automation framework.
The third hard lesson is usually learned after failing with programming and frameworks. A cost-conscious IT Management team decides to outsource testing to an external provider. It’s a great deal: Three test automation engineers for the price of one in the local office, and complete control over the way they work.
However, outsourcing comes with challenges of its own.
To sum up, the third hard lesson is that “outsourcing quality” comes with a price, namely the risk of poor execution and vendor lock-in.
If programming and outsourcing of testing are off the table, then how can enterprises drive value from test automation?
To answer that, let’s take a step back to talk about what a test (automation) case is, as seen from the perspective of the tester.
All these elements are worth going into detail about, but there’s one that is at the core of how we should approach test automation: how to describe a step-by-step process.
A test case is really a description of a business process. For example: “Log in to an application and check that the username is properly displayed”. Or “Put items in a shopping basket and check that the total including discount is correctly calculated”.
If you lead people to a whiteboard and ask them to illustrate one of their daily on-screen tasks, chances are that they will do it in one of two ways:
1. They list each step of the task as bullet points from top to bottom. Then they realize that this approach is too restrictive because some steps interlink, and this is difficult to show in a bulleted list.
2. They draw a flowchart with boxes representing each step or action being performed in the application’s user interface. Actions include clicking on a button, typing in a password, reading the text value from a field, etc.
Figure 6: A simple flowchart illustrating the steps in a common workflow or business process
Drawing a step-by-step flowchart of user interface actions is an intuitive and flexible way to describe processes. Flowcharts are useful because they allow for branching logic, adding inputs from data sources, and much more.
For this reason, some test automation tools are developed entirely around the concept of visual GUI/UI flowcharts. For example, the LEAPWORK Automation Platform. In fact, there’s an entire industry standard for documenting business processes this way. It's called BPMN (Business Process Model and Notation).
Here’s an example: Testing a login form in a web application. It is simple to sketch out as a flowchart, where each “building block” consists of an action and an element in the application’s interface.
Figure 7: A standard login form
The following flowchart illustrates what an automated test case would look like. It's a regular login process consisting of the following steps:
Figure 8: A flowchart with each 'building block' representing a step in a test case
Note that the flowchart is not just a representation of an automated test case, it is a tool for actually activating and executing the test case.
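To make the idea concrete, each building block can be thought of as data that a runner interprets. Below is a toy sketch of that concept in Python; it is an illustration only, not LEAPWORK's actual format (the stand-in driver simply records the steps it is asked to perform):

```python
# A toy "flowchart runner": each building block is (action, target, value).
login_flow = [
    ("open",        "https://example.com/login", None),
    ("type",        "#username", "jane.doe"),
    ("type",        "#password", "s3cret"),
    ("click",       "#login-button", None),
    ("assert_text", "#greeting", "jane.doe"),
]

def run_flow(flow, driver):
    """Execute each building block in order via a driver object."""
    for action, target, value in flow:
        getattr(driver, action)(target, value)

class RecordingDriver:
    """Stand-in driver that just records the steps it is asked to do."""
    def __init__(self):
        self.log = []
    def __getattr__(self, action):
        def step(target, value):
            self.log.append((action, target, value))
        return step

driver = RecordingDriver()
run_flow(login_flow, driver)
print(f"executed {len(driver.log)} steps")
```

Swap the recording driver for one that performs real UI actions, and the same flowchart data both documents and executes the test case.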
Suddenly, the road to success with test automation seems within reach. An automation tool built on this visual approach empowers non-developers. It enables them to create, maintain, and execute test automation themselves without having to learn how to code.
There are several tools on the market with a visual approach to designing test cases, but a word of caution. Most of them are in fact based on programming disguised by a very superficial layer of visualization. Below is an example of how a “visual automation tool" turns to programmer jargon as soon as something unexpected happens.
In cases like these, the user must still understand programming terms and structures. For instance, when defining the parameters of an action or managing unexpected scenarios.
Figure 9: Example of how an automation tool relies on programmer jargon at the expense of intuitiveness
Agile transformation helps businesses manage change and pursue emerging opportunities in any market situation. Test automation plays a vital part in achieving the desired state of agility. To learn more about the LEAPWORK approach to implementing automation as part of a company's agile transformation, we recommend the following resources:
DevOps—the practice of bridging software development and software operations—is a means to releasing high quality software into production.
Automation is a prerequisite for success with DevOps. Test automation in particular is a key ingredient when it comes to providing fast and accurate feedback to testers and developers. Read more:
Test team considerations: Manual testing allows for fast sign-off of features on an ad hoc basis. But flexibility decreases as the workload increases.
Automation considerations: Gradually building automated cases on a sprint-by-sprint basis provides a high degree of flexibility.
Test team considerations: Manual regression testing is time-consuming, causes uncertainty and creates bottlenecks. Inevitably the regression suite grows to a point at which it can’t be managed manually.
Automation considerations: Automated regression tests can be executed 24/7, the same way every time. Building automation cases is a one-time effort, and once built, they can be reused indefinitely.
Test team considerations: Scaling manual testing requires more people and more hours.
Automation considerations: Scaling automated testing is done by simply adding more test executors (robots or agents).
Test team considerations: Several aspects of the DevOps pipeline still require human cognitive skills.
Automation considerations: Test automation perfectly supports DevOps principles of streamlining the pipeline.
Test team considerations: Testers should have a say in defining the pipeline, esp. the areas related to test automation.
Automation considerations: A test automation platform should support—not dictate—your DevOps pipeline.
Test team considerations: In a QA team, both developers and testers should be empowered to make the changes required to improve the software development process.
Automation considerations: Testers—not the automation tool—should decide on the processes related to test automation.
Test team considerations: Tools with long learning curves and unnecessary complexity will require that a lot of resources are spent on support.
Automation considerations: An automation platform should be a good fit in three aspects: technology, processes, and organization.
Test team considerations: The most important skill of the tester profession is to utilize one’s domain knowledge of the system under test when defining test cases, analyzing requirements, reporting results, etc.
Programming skills are not—and should not be—a default part of a tester’s skill set.
Automation considerations: Some test automation tools require testers to code when building automation cases. Others let testers build automation cases by working with a visual designer. With the latter method, testers can fully utilize their domain knowledge, without having to worry about an application’s underlying code.
Test team considerations: Domain knowledge will always be part of the testing profession – automation or not.
Automation considerations: Implementing automation is about utilizing a team’s collective knowledge in the development of automation cases and making sure that automation becomes part of the daily work routine.
Test team considerations: Organizing test teams around products results in dedicated, relatively static teams with a well-maintained pool of domain knowledge.
Automation considerations: All members of a test team play an important role in automation – but not necessarily the same role. Tasks will vary depending on technical proficiency and level of domain knowledge.
Continued reading: A full walk-through of the ten considerations.
This guide has introduced a new paradigm in test automation. This approach:
To help you get test automation off the ground, we have published a short start guide. This quick read includes some key aspects to consider in relation to tool evaluation and implementation.
Most mistakes in test automation are predictable and can be avoided by following best practices. Here's a handful of guidelines to help you achieve success with automation:
1. Increase test coverage gradually with automation
Plan for gradually automating your test suite. We recommend starting out by focusing on the flows that are easiest to automate. In most cases, you will find that it’s the relatively simple and very repetitive flows that, by far, take up most of your testing time.
2. Build automated test cases that each test for one thing
Build test cases so that they logically only test one aspect. This way, there is no doubt about what goes wrong when a test case fails. Instead of bundling up multiple tests in one test case, it is best practice to build reusable components with your test automation tool. This way, it is easy to reuse the logic contained in other test cases, and the time required to create a new test case is minimized.
3. Build automated test cases that are independent and self-contained
This way, they can all be scheduled at once to be executed anytime and in parallel, i.e. across different environments.
4. Ensure collective ownership of test automation
Remember that success with automation is dependent on a team’s collective knowledge. Adopt a test automation platform that all testers can work with, so that automation becomes a natural part of the daily work of all team members.
5. Use a tool with a good technical fit
Implementing test automation is a long-term strategic choice and should be handled as such. When evaluating automation tools, look across your organization and identify all the applications and technologies that could be potential targets for automation. Identify the scenarios where test cases need to move between technologies, e.g. both web and desktop applications, and select an automation platform that has matching capabilities. For an automation tool checklist, see below.
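Guidelines 2 and 3 can be sketched with plain Python test functions. In the sketch below, `basket_total` stands in for the software under test, and `open_basket` is a hypothetical reusable component: each case builds its own basket (so the cases are independent) and checks exactly one thing (so a failure is unambiguous):

```python
# Hypothetical system under test: a simple shopping basket.
def basket_total(items, discount=0.0):
    return sum(price * qty for price, qty in items) * (1 - discount)

# Reusable component: shared setup used by several test cases.
def open_basket():
    return [(100, 1), (50, 2)]   # (price, quantity) pairs

# Each test case is self-contained and checks a single aspect.
def test_total_without_discount():
    assert basket_total(open_basket()) == 200

def test_total_with_discount():
    assert abs(basket_total(open_basket(), discount=0.10) - 180.0) < 1e-9

test_total_without_discount()
test_total_with_discount()
print("both cases passed")
```

Because neither case depends on the other's state, they could be scheduled in any order or run in parallel across environments.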
Continued reading: "5 Failures in Test Automation – and Best Practices for Tackling Them".
Once the decision has been made to roll out test automation, the next issue presents itself: “How are we actually going to do this? What’s the plan?” We've put together a checklist of items that a test automation strategy should include.
In short, a test automation strategy should address the following:
1. Project scope from an automation perspective
2. Choice of test automation approach
3. Analysis of automation related risks
4. Definition and choice of test automation environment
5. Execution plan of day-to-day tasks and procedures related to automation
6. A decision on release control
7. A plan for how to analyze failing test cases
8. Procedures for reviewing the strategy and providing feedback
For long-term strategic reasons, when choosing an automation platform, it makes sense to pick one that isn’t “just” made for test automation. Instead think of process automation more broadly, and research tools that can be used across the enterprise.
When evaluating automation tools, go through the following checklist of requirements:
As test automation is introduced to the software delivery process, the amount of available test results explodes. Robots, or test execution agents, can run 24/7 without breaks, and, on top of this, the number of test cases grows during each sprint. As a result, ever more results must be managed and analyzed, and this requires the right approach.
Here are four tips for analyzing test automation results:
1. Set up automated monitoring to make sure testers spend their time most effectively.
2. Figure out why test cases are failing by utilizing your test automation platform's logging, debugging, and reviewing functionalities.
3. Integrate with your release management platform either by pushing or pulling test results.
4. Ensure fast and transparent feedback with shared dashboards of real-time test results.
Once a test automation strategy has been approved and the implementation plan has been initiated, there will inevitably come a point at which testers will begin asking themselves: “Can we trust the results generated from automated testing?”
Building on the best practices for analyzing test automation results outlined above, it is important to consider how to ensure environments that produce reliable test results. A golden rule in testing is that the quality of your tests is no better than the quality of your test environments.
The main requirements for good test environments are:
Test automation is the obvious solution to the accumulating workload in quality assurance of software projects. Fewer human errors, more coverage with fewer resources, and the ability to scale repetitive tasks are some of the well-known gains of automation. However, to really benefit from automation, it is necessary to reduce complexity along the way.
Context-driven automation is a way to hide complexity, so that the tester can focus only on what is essential in a test case.
Context-driven automation relies on data sources in which the interrelation between data is pre-configured in the source itself. This way, the relations do not have to be specified when building automation flows, which makes it much simpler to re-use components, e.g. sub-flows in LEAPWORK.
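A simple way to see the benefit of driving a flow from a data source is data-driven testing, where one generic flow is executed against each row of the source. In the sketch below, the CSV content and the `login` stand-in are invented for the example:

```python
import csv
import io

# A data source drives one generic flow; rows are invented examples.
DATA = """username,password,expected
jane.doe,s3cret,welcome
john.doe,wrong,denied
"""

def login(username, password):
    """Hypothetical stand-in for the system under test."""
    return "welcome" if password == "s3cret" else "denied"

failures = 0
for row in csv.DictReader(io.StringIO(DATA)):
    actual = login(row["username"], row["password"])
    if actual != row["expected"]:
        failures += 1
print(f"{failures} failing rows")
```

Adding a new scenario means adding a row to the source, not building a new flow; the relations between the data live in the source itself.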
Context-driven automation reduces a multidimensional project to three dimensions or fewer, making it much more manageable for the human tester.
As a result, entire test suites can be reduced to a few archetypes of flows that are contextual and resilient to changes in the surrounding context, e.g. test environment, data sources, system customization, multiple product versions, etc.
Artificial Intelligence (AI) is an intriguing – and sometimes intimidating – phenomenon. It is no longer the stuff of a faraway future. With headlines like “Forrester Predicts That AI-enabled Automation Will Eliminate 9% of US Jobs In 2018”, it is only natural for professionals in any industry to think about how AI will affect their work.
The world of software quality assurance dreams of artificial intelligence. The promise of a scalable, self-improving, digital workforce is too good to ignore. And in theory, there are endless ways in which AI can be utilized in, for example, test automation.
But, to date, no one has developed a computer program of general intelligence able to think on its own. The more advanced applications of AI in quality assurance, e.g. robots designing test cases on their own, are still fiction.
What does exist are clever statistical data analysis methods. We might call these artificial intelligence algorithms. Some of them are machine learning (ML) algorithms that can build clustering and predictive models out of even small amounts of data. Other algorithms mimic human decision-making, such as the way humans interact with software. Image recognition, for example, imitates humans' visual cognitive processing of what is on a computer screen. Algorithms like these are incredibly powerful when applied in the right ways.
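As an illustration of how little data such methods can need, here is a toy k-means clustering (pure standard library, made-up numbers) that separates test-run durations into a "fast" and a "slow" group — the kind of small clustering model the paragraph above refers to.

```python
# Illustrative only: two-cluster k-means over a handful of
# test-run durations (seconds).
def kmeans_1d(values, iterations=20):
    # Two clusters, with centroids initialised at the data's min and max.
    centroids = [min(values), max(values)]
    for _ in range(iterations):
        clusters = ([], [])
        for v in values:
            # Assign each value to the nearest centroid.
            nearest = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

durations = [1.2, 0.9, 1.1, 14.5, 13.9, 1.0, 15.2]
print(sorted(round(c, 2) for c in kmeans_1d(durations)))  # → [1.05, 14.53]
```

Even seven data points are enough to surface the two archetypes, which is the point: Narrow AI techniques like this do not require big data or a supercomputer.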
Existing examples of AI are “Narrow AI”. Gartner defines Narrow AI as: “Tightly scoped machine learning solutions designed to perform a specific task.” You don’t need a supercomputer to run these applications. Most of them can be hosted on a cloud server or even on your own laptop.
In test automation, such applications of Narrow AI already exist and are in use. Below are some examples, based on how we at LEAPWORK apply AI in automation.
Intelligent capturing involves algorithms pre-trained to create “strategies” for software robots. The robots use these strategies as guidance for finding the elements used in automation flows, for example the buttons, text, and fields that make up the UI of web and desktop applications. These algorithms can be combined with ML-based visual recognition into “smart recording”: a functionality that captures the sequential actions performed in a UI, for the robot to repeat when the automation is executed.
When applying intelligent strategies to automation, the execution becomes self-corrective, or self-healing. This means that software robots know which of several strategies to pursue to execute automation flows as intended. This is a critical functionality for automating systems that undergo changes. For example, if a button or other UI element changes appearance, a self-healing robot can correct itself to find the element regardless.
Now consider utilizing these self-healing strategies for locating UI elements across technologies. If an element captured using visual recognition is mapped to other ways of identifying the same element in an application, a robot can rely on different technologies to find and interact with that element.
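The fallback behavior described above can be sketched as follows. This is a hedged illustration, not LEAPWORK's implementation: the strategies here are hypothetical stand-ins for the id-based, visual, and other lookups a real robot would use.

```python
# Sketch of a self-healing locator: try several pre-captured strategies
# in order until one finds the element.
class Element:
    def __init__(self, name):
        self.name = name

def locate(strategies):
    """Each strategy is a callable returning an Element or None.
    Falling through to the next strategy is what makes execution
    self-corrective when the UI changes."""
    for strategy in strategies:
        element = strategy()
        if element is not None:
            return element
    raise LookupError("No strategy could locate the element")

# Hypothetical scenario: the id-based lookup fails because the UI changed,
# but the visual-recognition fallback still finds the button.
by_id = lambda: None                     # stale selector after a UI change
by_image = lambda: Element("OK button")  # pixel-based match still works

print(locate([by_id, by_image]).name)    # → OK button
```

Mapping several identification technologies to the same element is then just a matter of putting more strategies in the list.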
With trained neural networks, like the ones provided by ABBYY, it is possible to recognize text and numbers from screen pixels. This is useful for automation of virtual environments, such as Citrix applications, and graphical-heavy applications used for games, 3D production, broadcasting, and more.
A common application of AI is processing large amounts of text, or natural language, e.g. chat messages, to perform sentiment analysis. This is possible with leading AI cloud services such as IBM Watson, Google TensorFlow, and Microsoft Azure ML. With data-driven automation, you can feed information from AI cloud services into automation flows. When executing these flows, software robots can choose which tasks to perform based on, for example, the outcome of a sentiment analysis.
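The branching idea can be sketched like this. Everything here is hypothetical: `get_sentiment` is a stand-in for a real cloud AI call (services like IBM Watson and Azure expose text-analysis endpoints returning a sentiment score), and the task names are invented.

```python
def get_sentiment(text: str) -> float:
    # Stand-in for a real cloud service call; returns a score in [-1.0, 1.0].
    # Here we fake it with a tiny keyword check for demonstration purposes.
    negative_words = {"broken", "crash", "slow", "error"}
    hits = sum(word in text.lower() for word in negative_words)
    return -1.0 if hits else 0.5

def choose_task(chat_message: str) -> str:
    """Data-driven branching: the robot picks its next flow based on
    the sentiment of an incoming message."""
    score = get_sentiment(chat_message)
    return "file_bug_report" if score < 0 else "log_and_continue"

print(choose_task("The app keeps showing an error on login"))
```

In a real setup, the keyword check would be replaced by an authenticated call to the chosen cloud service, and the returned task name would select which automation flow to run next.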
Separately, these examples of narrow AI might not seem like groundbreaking applications. But in combination, they are a great help when designing software automation. With ML-driven automation in their toolbox, testers can wield the power of AI without technical training.
In their market guide for Robotic Process Automation (RPA), Gartner defines RPA software this way:
Curiously, this description sounds very similar to the concept of visual GUI/UI test automation. It also reveals that the two domains are nearly identical on a technical level, with only minor differences.
Both disciplines are about automating processes that:
Most test automation tools are not suitable for RPA because they lack enterprise features. These include governance, change tracking, and audit trails. Interestingly, RPA projects are often kick-started with test automation tools! This is because they offer a pragmatic way to realize business-related automation potential.
When it comes to certifications within test automation, many are limited to specific frameworks or methods, which makes them narrowly applicable. That’s why we offer our users the chance to become certified in the LEAPWORK Automation Platform, which relies on the same approach to designing automation flows across technologies. LEAPWORK users do not have to worry about underlying frameworks to become proficient in test automation.
LEAPWORK offers a comprehensive certification in test automation intended for consultants and other professionals who are experienced in working with the LEAPWORK Automation Platform.
Further your knowledge with these handpicked resources:
The LEAPWORK Learning Center: Become a do-it-yourself automation expert.
Web automation – collection of articles
Data-driven automation – collection of articles