‘Quick question, can we add in some tests for x?’
‘How about making sure y works?’
I’m sure other software testers have received these kinds of questions before. And if senior management has made these requests, the response has most likely been:
‘Sure, no problem.’
We all know the downsides of saying yes to every request, and we’re aware that we should instead be asking questions to discover the intent behind it. But when it comes from a senior manager, sometimes it’s just hard!
But as the saying goes, ‘If I asked you to jump off a bridge, would you do it?’
I recently treated myself to a new pair of running shoes. After seeing the pair I wanted online, I headed to my local running store to try them on and see how they felt on my feet.
This experience enabled me to conduct a test on the shoes and answer a question I had. But also, because I wanted an excuse to geek out over the latest running gadgets.
But that’s beside the point.
In software testing, our tests should ideally serve the same purpose: giving us the ability to answer questions such as:
- Is the software ready for release?
- Are there any issues?
- Does this software do what it’s supposed to?
As software testers, we often need to remind others that more tests do not mean a better test suite. Without that knowledge, it’s easy to be persuaded into believing false concepts, such as the illusion that more tests equal a greater chance of catching errors.
Therefore, we need to ensure that any tests we include have a specific purpose. It’s not good enough to just say: ‘I’ve added a test’.
Why have you added it? What are you trying to get from its execution?
How do I define the purpose of a test?
We design our tests by anticipating risk to the software product. Knowing which parts of the software are most at risk comes from the software requirements document.
Therefore, using those as a basis for your design process is the first step to defining what you need to test, and how.
But once you have your collection of required features to test, discovering the ideal tests to perform will require a bit of brainstorming.
I’ve collected a few questions to consider in this process. I use these to help optimise my tests and ensure that those added to my test suites are as effective as possible, giving each one a specific purpose and a reason to be included.
Is this test providing me with valuable feedback?
Tests should provide you with feedback to reach a conclusion.
That could be feedback showing that something unexpected is occurring, or that everything is working as intended.
If the test you want to run doesn’t provide you with the information you need, does it need including?
How would I know if the test passes or fails?
If a username field is being tested, for example, would a passing test be that a user could only input text? What about numbers, special characters, or upper/lower-case letters?
You may need to put these questions to the product owner or project manager if the expected behaviour isn’t clear.
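Once those pass/fail criteria are agreed, they translate directly into checks. Here’s a minimal sketch in Python, assuming a hypothetical rule (letters, digits, and underscores, 3–20 characters) standing in for whatever the product owner actually specifies:

```python
import re

# Hypothetical validation rule, standing in for the real application code:
# usernames are 3-20 characters of letters, digits, or underscores.
def is_valid_username(username: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9_]{3,20}", username) is not None

# Each case pairs an input with the outcome we expect, so a failure
# tells us exactly which agreed rule was broken.
cases = [
    ("alice",     True),   # plain text
    ("Alice99",   True),   # mixed case and digits
    ("ab",        False),  # too short
    ("bad name!", False),  # space and special character
]

for username, expected in cases:
    assert is_valid_username(username) == expected, username
print("all username cases passed")
```

The point isn’t the regex; it’s that every case in the table answers a question someone actually asked, so each check has a defined pass and fail.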
Is this what a user would do?
If there’s one thing we can say for certain about human users, it’s that their actions are impossible to predict.
From completing half of a registration process before quitting, to filling in half a form before submitting it, people do strange things.
As software testers, even though we can’t possibly test everything (exhaustive testing is impossible), we need to at least make sure that our tests cover the most likely user scenarios.
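The half-finished form above is exactly such a scenario. A rough sketch, using a hypothetical `validate_registration` helper invented for illustration, might check that an abandoned-halfway submission is handled gracefully rather than only testing the happy path:

```python
# Hypothetical registration handler: returns a dict of field-level errors,
# empty when the submission is complete.
def validate_registration(form: dict) -> dict:
    required = ("username", "email", "password")
    return {field: "required" for field in required if not form.get(field)}

# A user who abandons the form halfway should get clear errors, not a crash.
half_filled = {"username": "alice", "email": ""}
assert validate_registration(half_filled) == {
    "email": "required",
    "password": "required",
}

# The most likely scenario of all, a complete submission, still succeeds.
complete = {"username": "alice", "email": "a@example.com", "password": "s3cret"}
assert validate_registration(complete) == {}
print("partial-submission scenario covered")
```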
Is this test being covered by a previous test?
In his book Clean Code, Robert Martin (aka Uncle Bob) repeats a concept popular in software development: ‘Keep your code DRY’.
DRY, in this case, means Don’t Repeat Yourself.
Software testers need to be aware of this too: I would discourage repeating test steps where possible.
Instead, we should try to modify a similar test to include new conditions.
This won’t work for every test, and some tests will require a fresh environment for their execution. But that’s OK. The DRY principle is just a guide; it doesn’t apply to every situation, and it’s up to the individual to implement the best strategy for their needs.
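One common way to apply DRY to tests is to keep the shared steps in one place and vary only the conditions as data. A minimal sketch, with a hypothetical `apply_discount` function as the thing under test:

```python
# Hypothetical function under test: "SAVE10" takes 10% off,
# unknown codes leave the price unchanged.
def apply_discount(price: float, code: str) -> float:
    return round(price * 0.9, 2) if code == "SAVE10" else price

# Instead of copy-pasting one near-identical test per discount code,
# the steps live once and the conditions vary as a data table.
cases = [
    (100.0, "SAVE10", 90.0),   # valid code applies the discount
    (100.0, "BOGUS", 100.0),   # unknown code changes nothing
    (0.0,   "SAVE10", 0.0),    # edge case: free item stays free
]

for price, code, expected in cases:
    assert apply_discount(price, code) == expected, (price, code)
print("all discount conditions checked")
```

Adding a new condition now means adding one row, not duplicating a whole test. (Test frameworks such as pytest support this pattern directly via parameterisation.)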
Is this a good candidate for automation?
A test’s inception is the perfect opportunity to look ahead and explore the possibility of automating it (if it makes sense to).
Early testing saves time and money, and making these kinds of decisions early will allow you to maximise the returns on your testing efforts.
I highly encourage you to ask questions like the above when designing your test cases. Defining their purpose will help you design a targeted and highly effective test strategy.
Use a similar set of questions to review your existing test cases. If you’re not sure of a test’s purpose, or the value it provides, maybe it’s time to delete it without hesitation.