The LEAP

Automation insights and productivity tips from LEAPWORK.


AI in Test Automation: Hype or Help?

AI is taking up an increasing amount of space in the automation field. This is mainly due to the need to increase automation coverage, free up resources, and speed up processes in software development and quality assurance.

While automation has long been the primary way to achieve this, improved AI capabilities are now enabling businesses to increase coverage even further.

Test automation is one area in which AI can contribute a variety of capabilities.

This has for some time raised the question of whether testers will be replaced by these new technologies.

To answer this question, and to understand how AI does (and does not) contribute to test automation, and whether AI is here to help or is in fact over-hyped, we'll have to take a closer look at testing, automation, and AI one by one and shed some light on how they are intertwined.

How does automation contribute to testing?

To understand how automation contributes to testing it might be useful to view testing as a big puzzle with lots of pieces, where test automation is half of the puzzle while human testing (or manual testing, as it's perhaps more popularly known) is the other half of the puzzle.*

[Image: human and automated testing]

*(In this drawing we’ve depicted automated testing as one half, and human testing as the other for the sake of simplicity, but in real life, the amount of tests that are automated vs. manual will of course depend on the case being tested.)

This depiction of testing and test automation is based on a perception of automation as something that supports testing, not something that replicates it. This is an important point, also in the context of AI-based automation (which we'll get to in a minute).

That being said, automated testing certainly is an invaluable part of the puzzle. Why? Because robots add something humans don’t – they do exactly as they’re told, they do it at high speed and with accuracy, and they don’t get bored of it.

This is why some types or aspects of tests, such as regression testing or test data creation, are typically great candidates for automation.
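To make the regression-testing point concrete, here is a minimal, hypothetical sketch of a data-driven regression check. The function under test and the cases are invented for illustration, not taken from any real product; the point is that once the cases are written down, a robot re-runs them identically, at speed, on every build.

```python
# A minimal sketch of why regression checks suit automation: the same
# checks run identically every time, at speed, without boredom.
# apply_discount and the cases below are hypothetical examples.

def apply_discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Regression cases: (input price, discount percent, expected result).
cases = [
    (100.0, 10, 90.0),
    (49.99, 0, 49.99),
    (20.0, 50, 10.0),
]

# The "robot" re-runs every case, unchanged, on every build.
for price, percent, expected in cases:
    result = apply_discount(price, percent)
    assert result == expected, f"regression: {price}/{percent} -> {result}"
print("all regression cases passed")
```

In a real project the same idea would live in a test framework rather than a bare loop, but the principle is unchanged: the robot checks exactly what the cases specify, every time.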

But robots also don’t do more than they’re asked to do. They don’t have the ability to think critically and creatively, which is why they’re not so good at evaluating or improving tests. This is something humans can and should keep doing.

In that way, you could argue that human and automated testing live in symbiosis – a mutually beneficial type of co-existence.

We have an in-depth blog post about test automation that will give you much more insight into what automation can and cannot do for testing, if you're interested. The essence of it all is that there will always be areas of automated testing that require a 'human touch', and equally, there will be areas of human testing that can benefit from being partially automated.

So the answer to the question of how automation contributes to testing is that it helps free up resources and minimize time spent on things that automation robots might as well be doing, so that humans can spend more time on things that robots cannot do.

And that's about as close as we'll get to a general answer. The type and degree of actual contribution depends on what it is you're testing. It's a matter of understanding the problem in the context in which it needs to be solved, and then seeking out the best technology to solve it.

Now that we've established what role automation plays in testing, the next question to answer is how AI contributes to test automation.

 

How does AI contribute to test automation?

The purpose of AI is to replicate what humans do at a cognitive level. ‘Real’ artificial intelligence means that a robot is programmed to think, do, or say as a human being.

However, the AIs that exist today aren't quite there. Rather, they are what we call 'narrow' artificial intelligence. This means that although they can do lots of really cool things, such as tell you what the weather forecast is for tomorrow or give you an update on sports, they can only do that because they've been trained to do so. Although IBM Watson can play Jeopardy really well, it can only do that because it's been programmed to. It can't win a game of chess.

So what does this mean in the context of test automation?

It simply means that with AI, businesses can achieve more with their automation, but they still can’t achieve everything. And they certainly can’t replace testers.

Human skills are still a very necessary piece of the puzzle. To understand why, let’s borrow a lesson from tester Richard Bradshaw.

Below are two drawings of a rocket. Your task is to spot five differences between the two.

[Image: spot 5 differences, part 1]

Can you find them? Great. Now remember those differences. Imagine scripting in your brain how you found them.

Done? Now cover up the image on the screen and take a look at the next image.

[Image: spot 5 differences, part 2]

Now what do you see? Run that same script in your mind and ask yourself if you see any other differences, and whether that would make you hesitate to let this one pass, given that your task was still to spot the differences.

If you're like most people, you'll spot that there's a 6th difference. For that reason, you won't approve it.

Now let’s have a look at this same image through the eyes of a robot.

Remember, you’ve asked the robot to spot five differences.

This is what the robot sees:

[Image: robot spots 5 differences]

The robot still sees those five differences. Nothing more, nothing less. The robot will therefore have no problem approving this. But any human tester will see the sixth difference and think twice about approving it.
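The scripted-robot lesson can be sketched in a few lines of code. Everything below is hypothetical (the "screens" are tiny pixel grids, not real screenshots): the robot's check inspects exactly the five coordinates it was scripted with, while a human-style scan looks at everything.

```python
# A minimal sketch of the "scripted robot" lesson: the robot's check only
# verifies the five differences it was told about. All data is invented.

# Two "screens" as pixel grids; 0 = background, 1 = drawn.
baseline = [
    [0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
]
# The new screen has the five expected differences plus a sixth one.
actual = [
    [1, 0, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 1, 1],   # position (2, 4) is the sixth, unscripted difference
]

# The robot's "script": check exactly these five coordinates.
expected_diffs = {(0, 0), (0, 1), (0, 3), (1, 1), (1, 4)}

def robot_check(base, new, scripted_diffs):
    """Pass if every scripted difference is present. Nothing else is inspected."""
    return all(base[r][c] != new[r][c] for r, c in scripted_diffs)

def all_diffs(base, new):
    """What a human might do: scan the whole screen."""
    return {(r, c)
            for r, row in enumerate(base)
            for c, _ in enumerate(row)
            if base[r][c] != new[r][c]}

print(robot_check(baseline, actual, expected_diffs))   # robot approves: True
print(all_diffs(baseline, actual) - expected_diffs)    # the human also spots (2, 4)
```

The robot approves because its script is satisfied; only the exhaustive scan surfaces the sixth difference it was never asked about.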

This lesson applies to test automation in general, but it also serves as an important lesson for understanding the benefits and limitations of AI.

As humans, we’ll always have the ability to look beyond, to understand things in context, and to think critically and creatively. AIs just aren’t that smart (yet).

Despite this, many still talk about AI as being something magical that has capabilities beyond human beings. In that sense, AI is certainly hyped.

But AI can also help. Particularly in test automation.

Optical Character Recognition (OCR) is one type of AI that contributes immensely to test automation.

OCR is the technology that allows a computer to identify certain visual elements, and thereby verify their presence or absence on e.g. a website.

The uses of OCR, not just in test automation but also in RPA, are virtually endless. OCR is an essential part of many businesses' digital transformation, as it allows computers to read images of typed, handwritten, or printed text and enter them as data into systems, so that they can be edited, searched, stored more compactly, and more.
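To illustrate the verification step that OCR enables, here is a minimal, hypothetical sketch. In a real flow the text would come from an OCR engine run against a screenshot; here it is hard-coded, and the function name and sample strings are invented for illustration.

```python
# A minimal sketch of OCR-driven verification in a test flow. In practice
# ocr_text would come from an OCR engine reading a screenshot; here it is
# simulated, and all names below are hypothetical.

def verify_text(ocr_text, expected, forbidden=None):
    """Pass if the expected text is present and any forbidden text is absent."""
    present = expected.lower() in ocr_text.lower()
    absent = forbidden is None or forbidden.lower() not in ocr_text.lower()
    return present and absent

# Simulated OCR output from a checkout confirmation page.
screen_text = "Order #1042 confirmed. A receipt has been emailed to you."

print(verify_text(screen_text, "confirmed"))                     # True
print(verify_text(screen_text, "confirmed", forbidden="error"))  # True
print(verify_text(screen_text, "payment failed"))                # False
```

The OCR engine does the 'reading'; the test's job is then a plain presence-or-absence check like the one above.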

To learn more about OCR and other types of AI, read our User Guide on AI in automation.

Maria Homann
Content Marketing Manager
