Software testing has evolved enormously since its early days, with giant leaps in the techniques employed and the technologies used to prevent defects in the software we use every day.
Over that period, the principles below have emerged as general guidelines that help software testers use their time effectively and shape their mindset during the testing process.
Testing, at its core, is the process of designing and executing a set of test cases to show that defects are present in the system that has been developed, not that they are absent.
Testing significantly reduces the probability of undiscovered defects remaining in a system. But even after multiple rounds of testing, the claim that 'this software is bug-free' would be false.
Testing every feature of a piece of software is impossible (unless, of course, the application is incredibly basic).
Take a relatively simple application such as a calculator, for example. Testing every single combination of inputs would take millions of test cases and thousands of hours of a tester's time to execute.
Instead, this is where testing techniques such as risk-based testing and priority testing are employed to focus efforts on the more important and riskier parts of the application.
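To make this concrete, here is a minimal Python sketch. The input-space arithmetic shows why exhaustive testing is impractical even for a toy calculator, and the risk scoring (likelihood × impact) shows the basic idea behind risk-based prioritisation. The feature names and scores are entirely hypothetical.

```python
# Why exhaustive testing is impractical: even a toy calculator
# taking two 8-digit operands and one of 4 operators has an
# enormous input space.
operand_values = 10 ** 8          # each operand: 0..99,999,999
operators = 4
combinations = operand_values * operand_values * operators
print(f"{combinations:,} possible inputs")  # 40,000,000,000,000,000

# Risk-based testing (sketch): score each area by likelihood * impact
# and test the highest-scoring areas first. Scores are hypothetical.
features = {
    "division":      {"likelihood": 4, "impact": 5},  # e.g. divide-by-zero
    "addition":      {"likelihood": 1, "impact": 2},
    "memory_recall": {"likelihood": 3, "impact": 3},
    "percentage":    {"likelihood": 4, "impact": 2},
}

ranked = sorted(
    features,
    key=lambda f: features[f]["likelihood"] * features[f]["impact"],
    reverse=True,
)
print("Test order:", ranked)  # riskiest first
```

The exact scoring scheme varies between teams; the point is simply that a small, ordered subset of the input space gets tested deliberately, rather than attempting all 40 quadrillion combinations.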
A piece of software is developed against a set of requirements derived from the client. But if those requirements contain an error and it goes unnoticed, the defect will make it into the developed software, and once caught it will be far more expensive to correct than it would have been at the start of the process.
Testing efforts should begin as early as possible to catch errors when they are cheapest to correct.
This principle states that most of the defects in the system being tested are concentrated in a small number of modules (otherwise known as the 80/20 rule, or the Pareto principle).
Test teams, therefore, need to ensure that their test cases evolve as parts of the system become increasingly stable over time, so that they are constantly on the lookout for new bugs.
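A quick sketch of the Pareto idea in practice: given defect counts per module, find the small set of modules that accounts for roughly 80% of all defects, and focus testing there. The module names and counts below are hypothetical.

```python
# Hypothetical defect counts per module, illustrating defect
# clustering: a few modules account for most of the defects.
defects = {
    "payments": 42, "auth": 35, "reports": 6,
    "settings": 4, "help": 2, "about": 1,
}

total = sum(defects.values())
running, hot_modules = 0, []
# Walk modules from most to least buggy until ~80% of defects are covered.
for module, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    hot_modules.append(module)
    if running / total >= 0.8:
        break

print(hot_modules)  # the small set of modules holding ~80% of the defects
```

In this made-up data, two of the six modules hold over 80% of the defects, which is exactly the kind of clustering the principle describes.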
Imagine for a moment that you are a gardener. You're the talk of the local neighbourhood for having such a well-kept garden: beautiful flowers in full bloom and not one nasty weed in sight. You attribute this wonder of horticulture to one type of pesticide, which you have been using for years, and you wouldn't dream of using another brand.
Then one day you wake up and there are weeds EVERYWHERE. What happened to your beautiful garden? Well, it seems that you became a victim of the false sense of security that your favourite weed-prevention solution gave you.
The same fallacy applies if your test cases are repeated unchanged over time. They may discover some critical bugs initially, but eventually they will stop finding defects, and their effectiveness won't be what it once was.
Therefore it is important to review your test cases regularly and update or change them to keep your defect prevention process effective.
The software testing process is driven by the nature of the application being developed.
For example, a banking application is far more complex than a mobile game and would require additional test cases and additional risk factors to consider.
So you have run your test cases, discovered some highly critical bugs and, after confirming these issues have been successfully corrected, the application is defect-free and ready to be released, yes?
Well, in short: no.
Going back to the first two software testing principles above: testing only shows that defects are present in a piece of software, not that it is defect-free. Additionally, we can't test everything, so it's quite possible that there are defects that remain undiscovered.
So as testers, we cannot say that the software is free of errors and ready for release.
What we can do, however, is give stakeholders confidence that the end product meets the needs of the business and the user requirements, and that we are delivering a high-quality product free of known defects.