Testing and "quality assurance" are key processes in software development, accounting for up to 40% of development spending. That's because bugs are far more costly to fix after software has shipped than to catch early, so everyone needs a proactive approach to managing risk and quality. That means testing and re-testing every change. The problem is, testing software is difficult.
Testing requires a firm understanding of the requirements and expectations of product owners and other stakeholders, as well as of the inherent limitations of the product and its environment. For instance, knowing that an iPhone app can be interrupted by a regular voice call and that the internet connection can come and go is important when you are planning how to test a video conferencing app.
Testers must also anticipate end-user behavior and balance that against requirements, creating cost-effective strategies for covering as much of the product as possible under time and budget constraints. They must strike the right balance between creating repeatable checklists and doing exploratory testing, knowing that only about 15% of the product's total capabilities will be covered; it would be an enormous task to cover everything and keep it covered as changes happen, and no-one is willing to pay for that.
Testers are pressured by the fact that everyone around them seems very keen on speeding up the product development and release process: product owners want faster time to market, developers want to be "agile" and do "continuous delivery", and the operations people are adopting "DevOps" methodologies. And of course, the end-users -- usually, paying customers -- want bugs fixed and new features released yesterday.
Automation is the answer – sort of
Automating as much test-work as possible seems to be a good answer for testers, because it would mean less human error, more coverage and the ability to just automatically repeat checklists over and over. But there's a problem. Up until now, getting automation to work in real-life scenarios has been synonymous with programming.
And so, testers all over the world are forced to learn some form of programming, usually without proper guidance, formal training, or even a personal interest in the subject.
I once read a book on how to get the most out of a large, commercial test product. The chapter about automation said (paraphrasing) "there are lots of great resources on the internet you can copy and paste (the programming language) examples from". That was it. Now this may be the way a lot of programmers get into the business, diving in at the deep end, but it's usually followed by years of junior work and formal education before they become truly productive developers, armed with the right patterns, practices and experience.
Testers who are tasked with automation, on the other hand, typically only have a few weeks or months to either "get on board or get fired". I heard a test director from a large financial software vendor say just that at a conference this summer -- and everyone in the audience nodded. It's a common management decision and it's usually accompanied by "don't worry, you don't have to become a great programmer, you just have to know enough to make the automation thing work."
It still means having to deal with a huge number of technical details and methodologies.
The result is almost always poorly made automation cases that are fragile, hard to troubleshoot, hard to re-use, and impossible to update when the system under test changes. Hundreds or even thousands of automation cases are created, and in the beginning, they are all green. But within a few months they start failing because the product has changed, and after a few more months half of them are red. Not so much because there are so many bugs in the product, but because the automation cases no longer match the product -- and no-one knows how to make them work again. So they are not trusted, they get turned off, and a new batch of automation cases is created to replace the old. And the developers shake their heads and roll their eyes at the testers for coding such unstable tests.
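To make the fragility concrete, here is a minimal sketch in plain Python. The page data and helper names are invented for illustration; the point is that a check keyed to display text breaks on a purely cosmetic change, while one keyed to a stable identifier survives the same redesign:

```python
# Hypothetical simulation of a UI across two product versions.
# Keys are the on-screen labels; values are stable element identifiers.
PAGE_V1 = {"Sign in": "login-button"}
PAGE_V2 = {"Log in": "login-button"}  # same button, reworded in a redesign


def brittle_find(page, label):
    """Mimics an automation case that locates elements by exact display text."""
    return page.get(label)  # returns None as soon as the wording changes


def robust_find(page, element_id):
    """Mimics locating by a stable identifier instead of display text."""
    for value in page.values():
        if value == element_id:
            return value
    return None


# The brittle lookup passes on v1 but silently breaks on v2 ...
assert brittle_find(PAGE_V1, "Sign in") == "login-button"
assert brittle_find(PAGE_V2, "Sign in") is None
# ... while the identifier-based lookup survives the redesign.
assert robust_find(PAGE_V1, "login-button") == "login-button"
assert robust_find(PAGE_V2, "login-button") == "login-button"
```

Real-world tooling faces the same trade-off: tests pinned to pixel positions, wording, or deep page structure rot with every release, while tests built on stable abstractions need far less upkeep.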
It's a dysfunctional process: testers spend two-thirds of their time doing bad programming.
At LEAPWORK, we're changing the way things are done so that testers can focus on doing their actual jobs, based on a simple realization: Testers are not programmers.
We've built a powerful visual designer, controller, and agent infrastructure that makes it easy to create even complex test cases that work across applications, operating systems, and devices in a matter of minutes. We're committed to making automation a business discipline, putting it in the hands of not just testers and developers but also business analysts, product owners, and IT operations.