The more complex and distributed software becomes, the more potential failures appear. Test automation, therefore, is becoming ever more important for static analysis, regressions, performance, and functional validation. However, if too much test automation is applied, for the wrong reasons and at the wrong times, an organisation’s software delivery, and its business agility, can be pushed out of balance.
According to recent research, approximately 67 percent of the test cases being built, maintained, and executed are redundant and add no value to the testing effort.
It’s vital, then, that test engineering and development practices achieve a healthy balance of test automation: automating the right kinds of tests, in the right place, at the right time, and with the right resources.
Bad habits
As agile software development and delivery accelerate, it would seem obvious that we should conduct continuous automated testing as much as possible. After all, without it, there’s no way of knowing whether software will meet requirements. But, without a strategic approach, an organisation can fall into bad habits that could cause test automation to become counterproductive and undermine its business agility.
For example, test automation goals should always be tied to customer goals. Every instance of test automation that’s introduced should directly align with a customer’s need for better software functionality, reliability, security, and performance. Otherwise, resources and money will be wasted on little more than a box-ticking exercise.
Test automation can also create a false sense of security. Executing hundreds of thousands of static code checks, unit tests, data comparisons, and regressions can inspire claims of 99 percent or higher test coverage. That doesn’t necessarily translate into a better user experience, though, and even that level of coverage may not be adequate across a complex application portfolio.
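To make the coverage trap concrete, here is a minimal, hypothetical Python sketch (the function and both tests are invented for illustration): each test executes every line of apply_discount, so a line-coverage tool reports full coverage either way, yet only the second test would actually catch the bug.

```python
# Hypothetical illustration: both tests execute every line of apply_discount,
# so a line-coverage tool reports 100 percent either way, but only the second
# test would actually catch the bug.

def apply_discount(price: float, percent: float) -> float:
    # Bug: divides by 10 instead of 100, so a "10 percent" discount wipes out the whole price.
    return price - (price * percent / 10)


def test_discount_runs_without_error():
    # Executes the code (full coverage) but asserts nothing about the result.
    apply_discount(100.0, 10)


def test_discount_reduces_price_correctly():
    # Validates the outcome the user actually cares about: 10 percent off 100 is 90.
    assert apply_discount(100.0, 10) == 90.0
```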
Furthermore, if a test strategy isn’t architected for change, then every new update, component, or contribution will make test automation unusable, test data invalid, and test results hard to reproduce. And given that brittle tests, unable to survive changes, are responsible for between 60 and 80 percent of the false positives and negatives seen by testers, teams will soon come to see repairing existing tests and building new ones as a wasted effort, and simply give up.
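One common way to architect tests for change is to keep the details most likely to change in a single place. The Python sketch below is illustrative only; LoginPage, FakeDriver, and the selectors are hypothetical stand-ins, and the fake driver exists simply so the example runs without a real browser. The point is that a changed screen means one updated class, not a sweep through every test that touches the login flow.

```python
# Minimal sketch (hypothetical names): the test depends on a LoginPage
# abstraction rather than raw selectors, so a UI change means updating
# one class instead of every test that touches the login screen.

class FakeDriver:
    """Stand-in for a real browser driver, just enough to run the sketch."""
    def __init__(self):
        self.fields: dict[str, str] = {}
        self.logged_in = False

    def fill(self, selector: str, value: str) -> None:
        self.fields[selector] = value

    def click(self, selector: str) -> None:
        # Pretend a submit click with both fields filled logs the user in.
        self.logged_in = len(self.fields) == 2


class LoginPage:
    # Selectors live in exactly one place; if the markup changes,
    # only these constants need updating.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user: str, password: str) -> None:
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


def test_login_succeeds():
    driver = FakeDriver()
    LoginPage(driver).log_in("alice", "s3cret")
    assert driver.logged_in
```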
Finally, the reflexive response to imbalanced test automation is to create more of the easy tests, or slight variations of existing ones. But such test bloat drives up infrastructure and cloud costs for running test workloads and for gathering and cleansing test data. It can also raise the labour burn rate: integration partners, incentivised to write more tests, can quickly burn through an organisation’s budget, while internal testers face a higher risk of burnout.
These challenges all consume costly resources and can erode an organisation’s confidence in testing. Ultimately, this can have a serious impact on its ability to rapidly release software of the quality its customers expect. It’s possible, however, to apply balanced test automation and break this vicious cycle.
Right time, right place, right resources
Perhaps the most important thing is to test early and often. While it will always be necessary to carry out performance and acceptance testing towards the right-hand end of the software development lifecycle, the “shift left” approach is key to agile development. That said, ways should be found to weave critical functional, regression, integration, performance, and security testing into every phase of the lifecycle, so that no single type of testing becomes over- or under-represented at any one stage of development.
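As a sketch of what that weaving might look like in practice, assuming a pytest-based suite, markers can tier a single test code base so that different slices run at different pipeline stages. The marker names, test names, and commands here are illustrative, not prescriptive.

```python
# Illustrative tiering with pytest markers (names are examples, not a standard):
#   pytest -m smoke        on every commit
#   pytest -m regression   on merge to the main branch
#   pytest -m performance  nightly, against a production-like environment
import pytest


@pytest.mark.smoke
def test_service_health_endpoint_responds():
    ...  # placeholder body; a real check would call the health endpoint


@pytest.mark.regression
def test_discount_rules_for_existing_customers():
    ...  # placeholder body; a real check would exercise the pricing rules


@pytest.mark.performance
def test_checkout_completes_within_two_seconds():
    ...  # placeholder body; a real check would time an end-to-end checkout
```

In practice, the markers would also be registered in the project’s pytest configuration, and each pipeline stage would select its tier with pytest’s -m option.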
Flexibility matters, too, to ensure testing remains resilient and useful wherever there is risk throughout the application architecture. UI testing, for example, needs to be more business context-aware, requiring test authors to validate business outcomes rather than specific on-screen results. An end-to-end data testing approach can help provide that business context, enabling data to be validated as part of a workflow, rather than only testing for specific results. The test automation technology also needs to be flexible and intelligent enough to recognise a range of success and failure conditions.
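As an illustration of outcome-focused assertions, the hypothetical Python sketch below checks what the business cares about, the stock adjustment and the order total, rather than the exact confirmation string the interface happens to render; all names and domain objects are invented for the example.

```python
# Illustrative sketch with invented domain objects: the assertions target the
# business outcome (stock adjustment and order total) rather than the exact
# confirmation text the UI happens to render.
from dataclasses import dataclass, field

import pytest


@dataclass
class Inventory:
    stock: dict[str, int] = field(default_factory=dict)


@dataclass
class Order:
    lines: list[tuple[str, int, float]] = field(default_factory=list)  # (sku, quantity, unit price)

    def total(self) -> float:
        return sum(qty * price for _, qty, price in self.lines)


def place_order(order: Order, inventory: Inventory) -> str:
    for sku, qty, _ in order.lines:
        inventory.stock[sku] -= qty
    # The wording of this message could change at any time for purely cosmetic reasons.
    return f"Thanks! Your order of {len(order.lines)} item(s) is confirmed."


def test_order_updates_stock_and_total():
    inventory = Inventory(stock={"SKU-1": 10})
    order = Order(lines=[("SKU-1", 2, 19.99)])
    place_order(order, inventory)
    # Validate business outcomes, not screen text.
    assert inventory.stock["SKU-1"] == 8
    assert order.total() == pytest.approx(39.98)
```

A cosmetic change to the confirmation message leaves this test untouched; only a genuine change in the workflow’s outcome would cause it to fail.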
Lastly, there’s the question of resources. The best-performing companies are always looking to improve the productive capacity of all their team members, balancing a cultural combination of professional achievement, organisational design, education, and skill development with the procedures, tools, and infrastructure they need to ensure their success.
Smart, strategic automation
Over-engineering and over-automation are natural human responses to the level of chaos inflicted upon applications in the real world. But test automation needn’t be unhealthy.
Indeed, smart, strategic automation is both the best preventative measure and the best treatment for the issues facing software delivery. A properly incentivised team, taking a balanced approach to testing, can overcome the burnout, bloat, and bad habits that sap an organisation’s agility. What’s more, a balanced software test automation practice that applies intelligence to the most crucial challenges will free up human resources and lead to less risk, more (and faster) output, and greater innovation.