Testing in software development is expensive. I’ve seen estimates ranging from 20% to 80% of the cost of development, some more reliable than others, none truly reliable. And that’s the problem. Nobody knows how many or what kinds of tests to write, how much testing is too much or not enough, or how much tests should cost. In business speak, the question is: what’s the ROI of testing?

This question usually reflects a flawed understanding of software development. It puts writing code at the heart of the process, with veins feeding into it and arteries feeding out of it. Tests are treated as a parasite, a necessary evil that feeds off the code. Coding is deemed productive because it keeps the blood pumping, while tests are deemed extraneous, vestigial extremities that do little more than drain it. It’s not surprising that, under this paradigm, one of the most common targets for cost cutting, and often the first, is testing.

This view of software development is romantic and heroic, but it’s wrong. Not only is writing code never the only challenge, it is hardly ever even the biggest one. The biggest and most important challenge is managing complexity. It’s a silent challenge and a pervasive one, in the sense that everybody involved in the project affects it even if they don’t realize it. And it can make or break a software project, much the same way cholesterol (“the silent killer”) can make or break a living body.

On the surface, testers are looking for bugs. They could be manual testers, UI automation testers, performance testers, unit testers, integration testers, or any of the other kinds of testers. And bugs could be blatant or subtle errors, inefficiencies, bottlenecks, unintuitive UIs, or a myriad of other issues and problems.

But what the testers are really doing is exposing unanticipated complexity: situations that product designers didn’t foresee, code paths that coders overlooked, user behavior that UX experts didn’t account for, network topologies that didn’t exist, configurations that didn’t make sense, and so forth.
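To make that concrete, here’s a minimal sketch in Python (the function and the scenario are invented purely for illustration): a naive implementation handles the cases its author pictured, and tests make the unanticipated cases visible.

```python
def apply_discount(prices, rate):
    """Naive implementation: correct for the cases its author pictured."""
    total = sum(prices)
    return total - total * rate


def test_empty_cart():
    # An input nobody designed for; the test pins the behavior down.
    assert apply_discount([], 0.10) == 0


def test_rate_above_one():
    # This test fails against the naive implementation: a rate of 1.5
    # yields a negative total. The failure is the point; it exposes
    # a situation the designer never anticipated.
    assert apply_discount([10.0], 1.5) >= 0
```

Run under pytest, the second test fails, and fixing it forces a design decision that was never consciously made in the first place.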

Tests are there because they’re useful and necessary, but only because people are fallible. Software development is complicated and software is complex. Reducing complexity is difficult and expensive. Complexity is a disease and tests are a symptom.

Where managing complexity fails, the most obvious approach is to add a test. Sometimes the tests are planned and sometimes they’re reactive, but tests are always a failsafe. Whether it’s 20% or 80% of the cost of development, it’s a simple tradeoff: tests vs. the repercussions of not having them. It essentially boils down to fear.

And calculating the ROI of testing is really quite difficult. It’s supposed to be a lump sum or percentage that reflects the overall cost of designing, writing and running all the different types of tests. And maybe it should also include the cost savings of all the damages that were avoided. And maybe it also includes the cost of fixing the issues it brings to the surface. And maybe it also includes the cost of support. And maybe…

But that’s not even all of it. Suppose you calculate, hypothetically, that testing is 50% of the cost of development. Does that number actually help you reach any decisions? Should you write more or fewer tests? Different types of tests? What if it’s not 50%, but rather 20% or 80%? Would that change any of your decisions? Should you invest less or more? Whatever number you pin as the cost of testing is neither indicative nor actionable. You’re basically back to square one.

The truth is that tests are just one of the tools we use to manage complexity. They’re the most obvious tool and perhaps the most visible tool, but they’re usually a brute force tool. More importantly, they’re often the only line of defense instead of being the last line of defense.

There are other tools, many of which are more subtle and less visible, such as design patterns, wireframes and mockups, better communication, employing real domain experts, integrating tests into the coding process, running spikes, iterative programming, continuous integration, and lots of other techniques and practices. These tools can be employed earlier in the development process to add more lines of defense and reduce fear.

Just as you can’t group all the different types of tests under one lump sum, you can’t lump all the different tools together either. Each type of test and each tool has its own cost. Aggregating them all, for the most part, makes no sense. You can often make better analytical decisions based on the ROI of each tool irrespective of the rest.

For example, if you determine that your unit tests are too expensive, you can decide to improve them, limit them to certain areas of your code, remove them altogether or find some other solution. A deeper analysis might point to a more effective remedy, such as speeding up the continuous integration test run or refactoring your tests more aggressively. But you can’t make any of those decisions based on a single number that incorporates all the different types of tests. You don’t have to throw the baby out with the bathwater.
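As a sketch of what “improving” an expensive unit test can look like (the service and all the names here are hypothetical), a common move is to inject an in-memory fake in place of a slow external dependency, turning a slow, flaky test into a fast, deterministic one:

```python
import unittest
from unittest.mock import Mock


class PriceService:
    """Hypothetical code under test: normally fetches prices over the network."""

    def __init__(self, client):
        self.client = client  # injected, so tests can substitute a fake

    def price_with_tax(self, sku, tax_rate):
        base = self.client.fetch_price(sku)
        return round(base * (1 + tax_rate), 2)


class PriceServiceTest(unittest.TestCase):
    def test_price_with_tax(self):
        # The fake client keeps the test fast and deterministic:
        # no network, no shared environment, no flakiness.
        fake_client = Mock()
        fake_client.fetch_price.return_value = 100.0
        service = PriceService(fake_client)
        self.assertEqual(service.price_with_tax("SKU-1", 0.2), 120.0)


if __name__ == "__main__":
    unittest.main()
```

The dependency injection is what makes the substitution possible; the test’s cost drops without changing what it verifies.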

To sum things up, the ROI of testing is a meaningless aggregate that indirectly measures little more than fear. You’re better off adopting a piecemeal strategy for managing software complexity. It takes more effort and it forces you to make more granular decisions, but it replaces magic and heroism with a reasoned, engineered approach to managing cost.