Our ability to predict human behaviour in various contexts is remarkably accurate: from estimating how likely we are to buy a certain number of items based on our past shopping history, to whether we might want to go on holiday based on our past Google searches (I’m sure you’ve seen the targeted ads).
And even though we can do amazing things with software and hardware to help us predict behaviour, we still run into issues in certain situations.
Think of a scenario in which an automatic car is driving down the road and we throw an obstacle into its path. Will the car detect what is going on and take corrective action? Or will it carry on and cause a potential accident?
The answer is a favourite response of mine: ‘It depends’.
Automatic cars commonly use some form of Artificial Intelligence to ‘learn’ about their surroundings and guide how they react in various scenarios. AI is a complex and fascinating subject, so if you are interested in the material, I highly recommend watching this video (note: you might need to get some snacks first).
It ultimately comes down to how the car is programmed. Will it react only to the events we exposed it to during its learning process? Or can it handle any event that could occur?
The best-case scenario is that it is a little of both, and we can prevent a nasty accident from occurring.
Autonomous cars are super complicated, though. For the rest of us, designing software to take unpredictable or unintended behaviour into account is just as challenging.
Users do strange things
Think of a simple location-finding web application: a few fields a user can enter data into, plus some functions to display pictures and perhaps a graph of recent tourism data. As a software tester, you might immediately be drawn to the ‘happy path’: validating not only that users can perform the intended actions, but also that it all makes sense and the data fed back is as expected.
And while validating the features of a piece of software is important, it doesn’t cover all the ways the vast majority of your users will actually use your application. That’s not to say they will approach your software with a destructive mindset, or with the objective of testing the limits of the code.
But many users will see an empty text field the same way some of us see an empty box: we try to fit anything into it we can and hope that it fits, with no unintended consequences, like the walls buckling or the closing lid breaking the contents we have packed in.
I would argue that it is just as important, and maybe even critical, to perform tests that stretch expectations to their limits.
Expecting the name of a town? How does the application handle the longest town name in the UK? Or the shortest? Will tourism data be found? If not, will the application crash?
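To make those questions concrete, here is a minimal sketch of boundary tests for such a town-name field. `validate_town_name` is a hypothetical stand-in for whatever validation the real application performs, and the 100-character limit is purely an assumption for illustration.

```python
# Hypothetical validator for a town-name field; the real app's rules may differ.
def validate_town_name(name: str) -> bool:
    """Accept non-empty names up to 100 characters (assumed limit)."""
    return 0 < len(name.strip()) <= 100

# The longest place name in the UK should still pass.
longest = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"
assert validate_town_name(longest)

# The shortest: the Scottish village of Ae, just two letters.
assert validate_town_name("Ae")

# Edge cases the happy path never exercises.
assert not validate_town_name("")          # empty input
assert not validate_town_name("   ")       # whitespace only
assert not validate_town_name("x" * 101)   # over the assumed limit
```

None of these inputs are exotic, yet each one probes a limit the happy path never touches.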
A picture paints a thousand words, so I think this illustrates my point appropriately.
Unconsidered outcomes are an opportunity to learn
Not knowing how something will react is a marvellous opportunity for testers to delve into the inner workings of a system and uncover how it operates and handles situations that a developer might not have previously considered.
It’s also fun to find the limits of our understanding and try to push beyond that point.
Sometimes it’s a case of doing things that might seem absurd at the time and would never happen in a real-life scenario, like typing international characters into a telephone number field, or passing a giant image file to an upload box.
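The telephone-number example is less absurd than it sounds. As a sketch (not taken from any real application), here is how a naive validator can be caught out: in Python, `\d` matches any Unicode digit by default, not just 0–9.

```python
import re

# A naive validator a developer might write: digits, spaces, '+', '-', '()'.
def validate_phone(number: str) -> bool:
    return bool(re.fullmatch(r"\+?[\d\s\-()]{6,20}", number))

assert validate_phone("+44 20 7946 0958")   # the happy path passes

# The 'absurd' input: Arabic-Indic digits. Because \d matches any
# Unicode digit, this slips straight through the naive check.
assert validate_phone("٠١٢٣٤٥٦٧٨٩")

# Restricting the pattern to ASCII digits closes the gap.
def validate_phone_strict(number: str) -> bool:
    return bool(re.fullmatch(r"\+?[0-9\s\-()]{6,20}", number, re.ASCII))

assert not validate_phone_strict("٠١٢٣٤٥٦٧٨٩")
```

A test that ‘would never happen’ exposed a real gap between what the developer meant and what the code accepts.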
But you also have to realise that if you thought of testing it, someone else will probably think of doing it too.
I hope this article was useful to you. Let me know about the last interesting bug you found or tested for in the comments below.