When management asks how your testing is going, rather than responding with a vague statement that is open to misinterpretation, it is easier for others to digest a quantifiable response to their question. Something they can visualise, understand, and then communicate to their clients if asked: commonly known as a metric.
When implemented effectively, metrics for software testing efforts can be a great tool to assist managers not only in tracking statistics like test effectiveness, performance, or team efficiency, but also in helping others understand the overall quality of the product: the risks that have been identified, those that have been tested, and any outstanding items.
With software development being a long journey with many actions to complete, it can be easy to think that the best statistics to track are those expressed as percentages, or measured as fractions of a larger number.
Unfortunately, it turns out that these types of statistics are not very helpful in a software testing context. They can be too high level, too vague at times, and fail to take important quality factors into account.
Let’s start by looking at what a metric is and some examples of common metrics that are collected and reported on. Finally, I’ll share some tips for constructing better, more useful metrics.
My definition of a testing metric
Metrics in software testing are a way for us to track certain attributes of our ongoing and past testing activities. We can then communicate these to others in conversations or status updates, giving them the ability to understand our current actions, future targets, and previous results.
Metrics are the result of various insights, which are then used to drive change, improve processes, and evolve people’s understanding.
“Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.”

– H. James Harrington
Unfortunately, when most metrics are created, the author can fall into the trap of producing statistics that serve little purpose, aren’t very helpful, or are simply misleading.
Number of test cases executed and defects found
I can see these metrics being helpful if you want to track the status of the testing that your software product is undergoing.
But unfortunately, software testing is rarely easy to measure as a length of time, like a loading bar that slowly creeps its way up to 100%, at which point everything is complete.
It also doesn’t separate the test cases that are critical for the software to operate, such as the admin area of a CRM application, from those that aren’t critical and relate to non-core functionality.
Nor does it separate out bugs that are non-critical in nature, like a typo in a greeting message or the login box being the wrong shade of grey.
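To make this concrete, here is a minimal sketch of the idea. The defect data and severity weights below are entirely invented for illustration; the point is that two reporting periods with identical raw defect counts can represent very different quality situations once severity is taken into account.

```python
# Hypothetical severity weights -- invented for illustration only.
SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 1}

def weighted_defect_score(defects):
    """Sum severity weights instead of simply counting defects."""
    return sum(SEVERITY_WEIGHT[d["severity"]] for d in defects)

# Two hypothetical reporting periods with the same raw defect count.
week_1 = [{"id": 1, "severity": "minor"},
          {"id": 2, "severity": "minor"},
          {"id": 3, "severity": "minor"}]
week_2 = [{"id": 4, "severity": "critical"},
          {"id": 5, "severity": "major"},
          {"id": 6, "severity": "minor"}]

print(len(week_1), len(week_2))            # raw counts look identical: 3 3
print(weighted_defect_score(week_1))       # 3
print(weighted_defect_score(week_2))       # 16
```

A raw "defects found" figure would report both weeks as equal; a severity-weighted view makes it obvious that week 2 was far more serious.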
Number of defects found per tester
Testers should be working in collaboration with each other (especially on an Agile project), not feeling that they are in competition with their colleagues to uncover an arbitrary number of defects, no matter their impact on the software.
One tester may find more bugs due to defect clustering, leaving others to test more stable parts of the system.
It can also give management an unfair opinion of a tester, based on an incomplete view of that tester’s contributions.
Anything that measures one tester against another is a bad metric in my view. It discourages teamwork and promotes a “me vs you” mentality in testing teams.
Percentage of automated tests
Having every single test case automated means nothing if it provides the same feedback as having only a few of your tests automated.
Plus, if your software is prone to change, or has a fast development cycle, your automated checks will easily break, and you will lose valuable time maintaining an automation suite that is not providing value.
Automation should be used to support testing efforts, not replace them entirely.
Metrics aid your navigation
Different levels of an organisation keep track of and use metrics in different ways. For example, the development teams might keep metrics related to how many features they’ve coded or the number of defects they’ve fixed.
Stakeholders might track metrics related to the efficiency of a department, or those related to revenue-generating activities.
Testers need to find the right metrics to report on that make the most sense for them, moulding them around software quality and not around meaningless figures like ‘number of tests executed’.
Because, as we have seen, not only are these figures meaningless in a quality context, they can also be quite damaging to the testing team.
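One quality-focused alternative is to weight coverage by risk rather than report a flat ‘percentage of tests executed’. The sketch below is a hypothetical illustration, not a prescribed formula; the feature names and risk scores are invented.

```python
# Hypothetical feature list with invented risk scores (higher = riskier).
features = [
    {"name": "payment processing", "risk": 9, "tested": True},
    {"name": "user login",         "risk": 8, "tested": True},
    {"name": "report export",      "risk": 4, "tested": False},
    {"name": "theme settings",     "risk": 1, "tested": False},
]

def risk_weighted_coverage(features):
    """Share of total identified risk that testing has covered."""
    total_risk = sum(f["risk"] for f in features)
    covered_risk = sum(f["risk"] for f in features if f["tested"])
    return covered_risk / total_risk

# Only 2 of 4 features are tested (a flat 50%), but they carry
# most of the identified risk.
print(f"{risk_weighted_coverage(features):.0%}")  # 77%
```

A figure like this tells a stakeholder what proportion of the identified risk has actually been exercised, which speaks to quality in a way that a raw execution percentage cannot.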
When we use metrics correctly, creating ones that are meaningful to others and that define our goals, other people will better understand where we are trying to get to and the obstacles we are facing, and will be in a better position to help us on our journey.