How Machine Learning just made software testing smarter

By Eran Kinsbruner, Chief Evangelist, Perfecto (by Perforce).


Testing has evolved from being seen as a mere necessity in the software development process into a practice that tangibly adds value, particularly in DevOps projects, where automated and continuous testing have been widely adopted as part of delivering high-quality software rapidly.

However, the volume of tests today means that testing technology and processes need to continue evolving, otherwise they risk hindering the pace of innovation. To give some perspective: a large global airline runs 1.3 million test executions for every version cycle, across a wide variety of test types, test frameworks, and engineers. Sounds extreme? It's far from it. In complex, large-scale projects, this amount of test activity is quickly becoming the norm.

Imagine an app that has to run across mobile devices and different web platforms, with all the devices, operating systems, and browsers that entails. Then add the different versions of each that need to be supported. Then add all the functional areas within the app that need to be tested across all those variations. Ideally, this is repeated every time the app, or any of the devices, operating systems, and browsers it supports, is updated. That's a lot of testing tasks.
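
To make the combinatorics concrete, here is a back-of-the-envelope sketch. Every number below is hypothetical, and in practice teams prune the matrix rather than test it exhaustively, but the multiplication is the point:

```python
# Hypothetical numbers for a deliberately modest support matrix.
devices = 20             # handset models under test
os_versions = 4          # supported OS versions per device
browsers = 3             # browsers under test
browser_versions = 3     # supported versions per browser
functional_areas = 50    # features to exercise on each configuration

configurations = devices * os_versions * browsers * browser_versions
tests_per_cycle = configurations * functional_areas
print(f"{configurations:,} configurations, {tests_per_cycle:,} test runs per cycle")
# Output: 720 configurations, 36,000 test runs per cycle
```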

It is not the volume of tests that is the issue — it is making sense of the results. On their own, they have no value. It is not until their impact is understood that they matter. Otherwise, test results are just a whole lot of noise. Continuous and automated testing have helped hugely, but they are not a solution on their own. Testing needs to get even smarter. And testers need to find ways to deal with the ever-escalating volume of test results and understand their real meaning. 

The answer to this is machine learning, which, as well as transforming so many other aspects of software, can create a smart environment that manages tests at scale.

Goodbye to writing test scripts

Writing test scripts by hand is error-prone, time-consuming, and does not make the most of anyone's skills. By applying machine learning (ML), the creation, maintenance, and updating of test scripts happen automatically, independently of any particular framework, and with self-healing abilities. Plus, ML-based testing 'just happens,' so it does not put testing at risk of becoming a bottleneck in the software delivery process.
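
To give a feel for what 'self-healing' means in practice, here is a minimal sketch in Python against a Selenium-style API. Real ML-based tools learn and rank alternative locators from past runs; this sketch hard-codes a fallback list, and the element names and URL are invented:

```python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Invented fallback locators for the same "checkout" button. An ML-based
# tool would learn and rank these from prior runs instead of hard-coding them.
CHECKOUT_LOCATORS = [
    (By.ID, "checkout-btn"),
    (By.CSS_SELECTOR, "button[data-test='checkout']"),
    (By.XPATH, "//button[contains(text(), 'Checkout')]"),
]

def find_with_healing(driver, locators):
    """Try each locator in turn, 'healing' the test when the primary one breaks."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # locator broke; fall back to the next candidate
    raise NoSuchElementException(f"no locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/cart")  # invented URL
find_with_healing(driver, CHECKOUT_LOCATORS).click()
driver.quit()
```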

Taking mobile or web apps as an example, ML keeps the many test platforms visible, so problems such as a device in the test lab being disconnected, or an operating system or browser version being outdated, are unearthed far more quickly. Plus, ML helps provide the data to understand why the problem happened.
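
As a simplified illustration of that kind of lab monitoring (the devices, versions, and staleness threshold below are all invented, and an ML-based tool would learn what 'normal' looks like rather than hard-coding a cutoff):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=10)  # invented cutoff; ML would learn this

# Invented lab inventory snapshots: device, OS version, last heartbeat.
now = datetime.now(timezone.utc)
lab = [
    {"device": "iPhone 13", "os": "iOS 16", "last_seen": now - timedelta(minutes=2)},
    {"device": "Pixel 7", "os": "Android 13", "last_seen": now - timedelta(hours=3)},
]
latest_os = {"iPhone 13": "iOS 17", "Pixel 7": "Android 14"}  # invented

for entry in lab:
    if now - entry["last_seen"] > STALE_AFTER:
        print(f"{entry['device']}: no heartbeat since {entry['last_seen']}, possibly disconnected")
    if entry["os"] != latest_os[entry["device"]]:
        print(f"{entry['device']}: outdated OS {entry['os']} (latest is {latest_os[entry['device']]})")
```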

This latter point is where the ultimate value of ML in testing really lies, because it builds a picture of what actually happened in recent weeks or months. Sure, testing tools already include a host of smart analytics that help make sense of test patterns and trends, but ML takes it a step further. It is far faster than human interpretation at identifying which functional areas caused the most problems, or which mobile or web platforms tend to produce the most errors.
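
As a toy illustration of the kind of pattern-mining involved (not how any particular product implements it), the sketch below clusters raw failure messages so that recurring failure modes surface as groups. It assumes scikit-learn is available, and the sample messages are invented:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented failure messages standing in for thousands of real test results.
failures = [
    "Timeout waiting for element #login-button on iOS 15 Safari",
    "Timeout waiting for element #login-button on iOS 16 Safari",
    "Payment API returned HTTP 500 during checkout",
    "Payment API returned HTTP 503 during checkout",
    "Timeout waiting for element #search-box on Android Chrome",
]

# Turn free-text failures into vectors, then group similar ones together.
vectors = TfidfVectorizer().fit_transform(failures)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, message in sorted(zip(labels, failures)):
    print(label, message)  # similar failures share a cluster label
```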

Arguably, with the sheer scale of test data now involved, it is going to become increasingly difficult to make sense of test results, both at the time and later on, without using ML. ML can also give development teams greater visibility into what is happening in the continuous integration (CI) pipeline, while making sure that testing does not create bottlenecks and instead contributes to continuous improvement.

Catalyst for change

Alongside the move towards low-code testing, ML-based testing is going to shape the way testing is carried out within teams. While some may be concerned it is a threat to their jobs, the reality is that it will remove mundane test script writing and maintenance and let testers focus on other activities, ones that cannot be fully automated. With ML, testing also becomes accessible to a far wider audience, which, given the massive skills shortage in the market and the ever-growing volume and complexity of software projects, is most definitely good news.

However, it is important to note that ML-based testing is in its early stages, with adoption levels varying widely. Some early adopters are having success, but equally, others will not have such positive experiences. Plus, there are huge variations in the types of ML-based testing tools being introduced, and this is a market that will see considerable evolution, and probably consolidation, over the next few years. Regardless of what happens, ML-based testing is here to stay.
