Artificial Intelligence (AI) is an intriguing, and sometimes intimidating, phenomenon. It is no longer the stuff of a faraway future. With headlines warning about job elimination, it is only natural for professionals in any industry to think about how AI will affect their work.
AI is often brought up in relation to repetitive work that is already a target for automation. One example is regression testing, as we have outlined in this guide. Still, allow us to deflate some of the hype around AI and the notion that it will ‘disrupt’ the testing profession.
When working in and with software, it is not hard to imagine how AI could drastically change software development as we know it. After all, AI is finding ways to identify and treat cancer, is driving our cars for us, and has beaten human contestants in Jeopardy.
But still, to answer the question of whether AI or machine learning will take over testers’ jobs: the short answer is “no”.
The reason is that current and near-future manifestations of AI are not general, sentient intelligences resembling the human mind.
"There are limits to what AI can do, and they are linked to how machine learning actually works. One of the most promising varieties of AI technologies is neural networks. [But] simply adding a neural network to a problem does not automatically create a solution."
To illustrate that AI is not yet at a stage where it can take over a human profession entirely, consider this: IBM Watson doesn’t know it won Jeopardy. It doesn’t even know what Jeopardy is.
The longer answer to the question of whether AI will take over testers’ jobs is still no, but there is definitely a place for AI and machine learning in testing. Think of it as a supplementary tool.
Artificial Intelligence (AI) is intelligence displayed by machines, as opposed to living organisms. It is software that mimics cognitive functions that we as humans associate with the human mind. Other than that, there really isn’t a clear definition of AI, which could be one of the reasons why there is some confusion and a great amount of hype around the concept.
"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke
The lack of a clear-cut definition has resulted in “AI” being used as a collective term for “whatever hasn’t been done yet” – and, jokingly, “Amazing Innovations”.
In fact, there is something called the “AI effect”: as soon as hyped technologies or innovations become commonplace, they lose their “AI” label. This happens even though most real-life implementations of AI take care of relatively simple, everyday tasks that we take for granted.
For instance, e-mail spam filters are usually based on machine learning algorithms that continuously improve themselves based on the constant influx of emails. Fraud detection systems in credit card processing are hardly considered to be AI, even though they are heavily based on advanced machine learning algorithms that grow “smarter”, or more accurate, every day.
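To make the spam-filter example concrete, here is a minimal sketch of the underlying idea: the filter keeps word statistics for spam and legitimate mail, and every newly labeled email refines those statistics. The class name and scoring formula are illustrative inventions, nothing like a production filter.

```python
from collections import Counter

class ToySpamFilter:
    """Toy word-frequency spam scorer (hypothetical; real filters are far more sophisticated)."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text, is_spam):
        # Each labeled email refines the model's word statistics.
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def spam_score(self, text):
        # Laplace-smoothed ratio of spam vs. ham word counts;
        # scores above 1.0 lean "spam", below 1.0 lean "legitimate".
        score = 1.0
        for w in text.lower().split():
            score *= (self.spam_words[w] + 1) / (self.ham_words[w] + 1)
        return score

f = ToySpamFilter()
f.train("win free money now", is_spam=True)
f.train("meeting notes attached", is_spam=False)
print(f.spam_score("free money"))       # > 1.0, leans spam
print(f.spam_score("meeting attached")) # < 1.0, leans legitimate
```

The key point is that nobody wrote a rule saying “free money” is spam; the score emerges from the data, and it shifts as more mail is labeled.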
Whereas the concept of AI is a bit fluffy at times, the term “machine learning” seems more tangible. Here’s a starting point from Stanford professor Andrew Ng: “Machine learning is the science of getting computers to act without being explicitly programmed”.
Machine learning is a type of software technology that uses mathematical models to perform statistical analysis on data, and then uses the results of those analyses to refine the models.
Neural networks, which sound like human-made models of the human brain, are really “just” mathematical ways to classify data into clusters and then improve the method for doing so over time. It’s clever, but at the end of the day it’s statistical data analysis.
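To see that an artificial “neuron” really is just arithmetic that improves over time, here is a toy single-neuron classifier (a perceptron): a weighted sum plus a threshold, with the weights nudged after each labeled example. The data and parameters are made up for illustration.

```python
# A single artificial "neuron": a weighted sum plus a threshold,
# with weights nudged numerically after each labeled example.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # the error drives the update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn a simple AND-like separation from examples, not from explicit rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

There is no “understanding” anywhere in this loop, only repeated numerical correction; full neural networks stack many such units, but the principle is the same.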
The reason why Watson doesn’t know it won Jeopardy is that Watson isn’t a general intelligence.
It’s an impressive piece of software, running on some very impressive hardware from IBM, that ties together a range of different machine learning algorithms to analyze and turn soundwaves into text.
It then uses more algorithms to parse and turn the text into a form of natural language “understanding”. From there, even more algorithms are used to calculate probabilities that the text matches something in the database, which can be served as the answer to a query (or, in Jeopardy’s case, the question to an answer).
The impressive part is that all these algorithms weren’t programmed specifically to respond with X when Y is used as input; they “learn” by analyzing the data itself.
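As a crude illustration of that last step – scoring how well a query matches entries in a database – here is a toy ranker that uses word-overlap (Jaccard) similarity as a stand-in probability. It bears no resemblance to Watson’s actual algorithms; the knowledge base and function are hypothetical.

```python
# Toy "answer ranking": score each candidate in a small knowledge base
# by word overlap with the query, then return the best match.
def rank_answers(query, knowledge_base):
    q = set(query.lower().split())

    def score(entry):
        e = set(entry.lower().split())
        # Jaccard similarity as a crude stand-in for a match probability.
        return len(q & e) / len(q | e)

    return max(knowledge_base, key=score)

kb = [
    "Mount Everest is the highest mountain on Earth",
    "The Nile is the longest river in Africa",
    "Jupiter is the largest planet in the solar system",
]
print(rank_answers("which planet is the largest", kb))
# → Jupiter is the largest planet in the solar system
```

The toy ranker, like Watson, has no idea what a planet is; it only computes which stored text is statistically the closest match to the input.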
We experience more and more machine learning in our lives each day, from computer games and speech recognition to e-commerce recommendations and self-driving cars. But to be clear: An artificial, general intelligence has not been invented yet.
From a software and testing perspective, it is worth knowing that teams are already experimenting with practical applications of machine learning in the testing profession.
Over the next few years, test automation tools will start including a wide range of machine learning-based features.
Currently, most of these features are at an early stage but will grow rapidly. However, it’s important to understand that machine learning-based features will be assistive in nature and will not take over any testers’ jobs.
You can learn much more about AI and the role it has in automation in our user guide to AI and automation.