What is regression testing?


Let’s say you are the creator of a popular camera filter app for the Android platform. You introduce a new feature that you test thoroughly and are sure your current users will love. So you publish the change for everyone to download and wait for the feedback from your users.

The new version goes live and you see people are downloading it. Then the emails start coming in.

“My app doesn’t work.”

“I can’t access my photos.”

“Was this tested?”

Uh-oh. 

The source code looks right, and it compiles without issue. What could be the problem?

You try the application on your own device to see if you can replicate the issues that your users are having, and sure enough, the app doesn’t work.

But you tested it, right? You made sure the new functionality was working by running tests on your code. The change to the existing code was small enough that you didn’t see the need to test it again. But as a sanity check, you run some tests on the previously working source code, and they all fail.

It may have been only a small, seemingly insignificant alteration to the application’s already working source code. But any change can, and often does, ripple out and affect parts of the application that seem unrelated to the modification.

How do I do regression testing?

A regression is when a thing (in this case, software) returns to a less developed, more primitive state.

The software should never really regress in that way (unless, for example, you deliberately deploy an earlier version). So, to help detect any unintended regressions in the application, we keep a set of regression checks that are run every time the software changes or new functionality is added.

We can perform regression testing using automated tools or with manual effort, using already written test cases and user scenarios as a basis for testing key features of the application whenever a change might have introduced a regression.
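As a minimal sketch of what one of these automated checks might look like, here is a JUnit 5 test written in Kotlin for the camera app scenario above. The `PhotoLibrary` class and its behaviour are hypothetical stand-ins for whatever photo access layer your app actually has; the point is that the check covers behaviour that was already working before the change.

```kotlin
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Tag
import org.junit.jupiter.api.Test

// Hypothetical stand-in for the app's photo access layer.
class PhotoLibrary {
    fun loadPhotos(): List<String> = listOf("IMG_001.jpg", "IMG_002.jpg")
}

// Tagged so the whole regression set can be run on every change.
@Tag("regression")
class PhotoAccessRegressionTest {

    // Previously working behaviour: users can still access their photos,
    // however small the unrelated change elsewhere in the app was.
    @Test
    fun `existing photos are still accessible`() {
        val photos = PhotoLibrary().loadPhotos()
        assertTrue(photos.isNotEmpty(), "Users should still see their photos")
    }
}
```

Run on every change, a suite of checks like this would have caught the broken photo access before your users did.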

Is regression testing the same as retesting?

This is a common mistake made by those who are new to testing, and I can see why: one seems like a fancy way of describing the other.

But they are two entirely different things, and it’s important to understand why, and when each one should be used.

Regression testing is the testing of previously working parts of the system, and the trigger for running regression checks is a change to an already working system (upgrades, server changes, etc.).

So, for example, if your banking application is upgraded with additional functionality to accept a new credit card, the team should perform testing on that new feature, and regression testing of the application’s core functionality (logging in, making payments, etc.).

Retesting is the testing of a previously failed test case, one whose failure resulted in a bug being raised. We perform retesting once that bug has been fixed and the corrected build has been redeployed to the testing team.

So, for example, you are testing a login form and discover that it doesn’t function correctly: it does not load the correct page, so you log a bug report. The developers then fix the issue and release the build back to you for retesting.
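To make the distinction concrete, here is a minimal sketch of that retest in Kotlin. The `LoginForm` class and its `submit` method are hypothetical; the point is that retesting means rerunning the one test case that originally failed, against the fixed build.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Hypothetical stand-in for the login flow; returns the page shown after submit.
class LoginForm {
    fun submit(user: String, password: String): String =
        if (user.isNotBlank() && password.isNotBlank()) "dashboard" else "login"
}

class LoginRetest {

    // The single test case that failed and led to the bug report.
    // Rerun it against the fixed build to confirm the defect is gone,
    // then run the regression checks around it to catch any side effects.
    @Test
    fun `valid login loads the correct page`() {
        assertEquals("dashboard", LoginForm().submit("alice", "secret"))
    }
}
```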

What tests are in a regression suite?

This really depends on the risk to your application from regressions being introduced, and on the time and effort you are willing to spend on regression checking.

Ideally, you would have an automated solution to assist with these checks, along with manual exploratory testing.

A lot of companies seem to push for 100% automated regression checks while forgetting about manual checks, which I don’t feel is the best way to test for regressions in all situations, as automated checks can only check what you have coded for previously. They can’t tell you why a regression has been introduced, only that something has failed, which should be a cue for manual exploration of the system to discover the source of the problem.

There’s a useful heuristic for this problem, formulated by Karen Nicole Johnson and known as RCRCRC, which explains the thought process behind it and is a good reference when deciding what to check; a small tagging sketch follows the list below.

Recent: new features, new areas of code are more vulnerable
Core: essential functions must continue to work
Risk: some areas of an application pose more risk
Configuration sensitive: code that’s dependent on environment settings can be vulnerable
Repaired: bug fixes can introduce new issues
Chronic: some areas in an application may be perpetually sensitive to breaking
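
One way to put the heuristic into practice, assuming a JUnit 5 suite, is to tag checks by category so the suite can be sliced to match the risk of a given change. The class, test names, and bodies below are hypothetical placeholders:

```kotlin
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Tag
import org.junit.jupiter.api.Test

// Hypothetical checks tagged by heuristic category, so a change to the
// payment code can trigger the "core" and "repaired" slices, for example.
class BankingRegressionChecks {

    @Tag("core")
    @Test
    fun `payments can still be made`() {
        assertTrue(true) // placeholder: replace with a real check of the payment flow
    }

    @Tag("repaired")
    @Test
    fun `old login redirect bug stays fixed`() {
        assertTrue(true) // placeholder: replace with the retest for the past defect
    }
}
```

With Gradle’s Kotlin DSL you could then run only a slice with something like `tasks.test { useJUnitPlatform { includeTags("core") } }`. Again, this is a sketch rather than a prescription, since how you group and trigger the checks will depend on your build setup.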




Posted by Kevin Tuck

Kevin Tuck is an ISTQB-qualified software tester with nearly a decade of professional experience, well versed in creating versatile and effective testing strategies. He offers a variety of collaborative software testing services, from managing your testing strategy and creating valuable automation assets to serving as an additional testing resource.
