Change is inevitable for the GUI of a web application, or any part of it.
For a visual example of this, look at the Google homepage from the mid-2000s. You will see a site with the same features as today (minus references to several Google products). But if a test automation solution written back then depended on the layout and overall design of the page, it probably wouldn’t work with the design of today.
I know this is probably a bit of an extreme view of things; the design of a web application from over a decade ago won’t be the same today. But thinking in extremes when faced with a problem is a useful tactic to adopt to unlock a solution.
Off-topic, but if you are interested, I’d recommend watching this TEDx Talk on the subject.
Using this thinking, we can then work backwards and make informed decisions about the design choices we make, the selectors we target, and the features that we will create automated solutions for.
If you want to learn more about the kinds of tests that should be automated, check out this blog post. But we don’t want to automate a feature that is not going to be used going forwards, or one that makes little sense to automate because of time constraints.
First, let’s define the terms and the problem
The terms flaky and brittle both refer to something that is unreliable and/or prone to breakage, so in my opinion at least, they can be used interchangeably.
The question then becomes, what makes an automated test flaky or brittle?
Several things contribute to the overall reliability of an automated solution. Below are just a few:
- Testing services or features outside our control, whether that’s checking that emails are received using a third-party API, or relying on a third-party service to trigger something before you can continue with your testing.
- Locator strategy. XPath expressions, for example, might be easy to write and have similar performance to other locators. But if your XPath is written in a way that depends on another element being present, you’ve just introduced a future problem that could have been avoided.
- Relying on the GUI in situations where it really makes little sense. Each version of a browser acts slightly differently and with frequent updates to the most popular browsers, trying to handle these various quirks can be a never-ending exercise.
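To make the locator point concrete, here is a minimal sketch in plain Node.js. The DOM is modelled as a toy object tree and the element names are made up, but it shows how a position-dependent lookup (the equivalent of an XPath like //form/*[2]) breaks as soon as a sibling is added, while a lookup by a dedicated attribute keeps working:

```javascript
// Toy DOM: each node has a tag, optional attributes, and children.
const form = {
  tag: "form",
  children: [
    { tag: "input", attrs: { "data-test": "email" }, children: [] },
    { tag: "button", attrs: { "data-test": "submit" }, children: [] },
  ],
};

// Brittle: grab the child at a fixed position, like //form/*[2] in XPath.
const byPosition = (node, index) => node.children[index];

// Resilient: search the tree for a node carrying a known attribute value.
function byAttr(node, name, value) {
  if (node.attrs && node.attrs[name] === value) return node;
  for (const child of node.children) {
    const found = byAttr(child, name, value);
    if (found) return found;
  }
  return null;
}

console.log(byPosition(form, 1).tag); // "button" today...

// A developer later inserts a checkbox before the button:
form.children.splice(1, 0, { tag: "input", attrs: {}, children: [] });

console.log(byPosition(form, 1).tag); // now "input": the positional locator broke
console.log(byAttr(form, "data-test", "submit").tag); // still "button"
```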
So, what is the solution?
The Cypress team has a handy list of best practices on their website which, no matter your framework of choice, makes for a good rule of thumb to loosely follow. I’m sure there are similar lists for other automation tools. But test automation from a top-down approach is less about the technical details you are going to be using, and more about the design of the tests that you are going to implement.
Below is a list of my key takeaways and advice that can be applied no matter the tool you are using.
Make use of data tags or flexible selectors
Injecting test data into specific inputs or form fields is a fairly regular task, and one that’s perfect for automation. However, just because an ID or class name on your form is named a certain way today doesn’t mean that, when the code is updated or refactored, it won’t change just enough that your automation code no longer functions. For example, if a class name is refactored to data-container-body, then everywhere in your code that uses the old class name will need to be updated. And if you are referencing the previous class name in multiple tests, it’s an expensive use of your time to update every single one.
I know, I’ve made that mistake before.
The solution to this is either to speak to your developers and ask them to put a special attribute on these key parts of the DOM that are used in your testing. I recommend the use of a dedicated data-* attribute, such as the data-cy attribute below:
<button id="main" class="btn btn-large" data-cy="submit">Submit</button>
Alternatively, you can make use of flexible selectors. These match the first part of an attribute or text value to an element on the page, and anything that comes after is automatically included. This is perfect for dynamic elements, or those which are likely to change.
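As a sketch of the idea (the attribute values here are made up), a flexible selector boils down to a prefix match on an attribute value, which is exactly what the CSS ^= attribute selector does:

```javascript
// A prefix match on an attribute value, equivalent to the CSS
// selector [data-cy^="todo-item"]: only the stable first part of
// the value is pinned down, and the dynamic suffix is ignored.
const matchesPrefix = (attrValue, prefix) => attrValue.startsWith(prefix);

// Dynamic values that change between builds or sessions still match...
console.log(matchesPrefix("todo-item-8f3a2c", "todo-item")); // true
console.log(matchesPrefix("todo-item-91bd07", "todo-item")); // true
// ...while unrelated elements are excluded.
console.log(matchesPrefix("nav-item-main", "todo-item"));    // false
```

In Cypress itself this would be written as cy.get('[data-cy^="todo-item"]'), but the same attribute selector works in any tool that accepts CSS selectors.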
Don’t rely on the GUI for everything
If you can run a test at a layer that isn’t the GUI, you most definitely should.
For example, take logging in and out of an application. The login form in most cases is only there for users, and to provide validation on what they submit to the API. So instead of logging in through the GUI, why not save some time and complexity by using the API directly?
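As a rough sketch (the /api/login endpoint, the field names, and the token shape are all assumptions about a hypothetical application), a login step at the API layer usually amounts to building one POST request and keeping the returned token for later calls:

```javascript
// Build the login request once, instead of driving the login form.
// The /api/login endpoint and payload shape are hypothetical.
function buildLoginRequest(baseUrl, username, password) {
  return {
    url: `${baseUrl}/api/login`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ username, password }),
    },
  };
}

// A test setup step would then do something like:
//   const { url, options } = buildLoginRequest("https://app.example", user, pass);
//   const { token } = await (await fetch(url, options)).json();
// ...and send `Authorization: Bearer ${token}` on later requests.

const req = buildLoginRequest("https://app.example", "tester", "s3cret");
console.log(req.url); // "https://app.example/api/login"
```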
I have written a separate blog post on this subject which goes more in depth into the problems of over-engineering the GUI for testing, some common pitfalls and the solutions that can be employed.
Test only what you can control
The biggest tip that I can give to tackle unreliable test automation is to focus on testing what you can control. Take the example of checking the flow of a user receiving an automated email. Even though the application you are testing might make use of a service like SendGrid to handle its emails, do you really need to test that an email has been received? Instead, is it possible for you to test that SendGrid’s API has been called with the correct content and the correct address?
In the modern age of applications consuming APIs and services which are outside our control as testers, we need to take stock of what is inside our sphere of control and ensure we are focusing our testing efforts where they are best placed.
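One way to put this into practice (the sendEmail signature below is invented for illustration, not SendGrid’s real API) is to inject the email client into the code under test and replace it with a recording fake, then assert on the outgoing call rather than on a delivered email:

```javascript
// A recording fake standing in for an email provider's client.
// The sendEmail(to, subject) signature is invented for this sketch.
function makeFakeEmailClient() {
  const calls = [];
  return {
    calls,
    sendEmail(to, subject) {
      calls.push({ to, subject });
    },
  };
}

// Code under test: depends on an injected client, not a concrete provider.
function notifyUserOfSignup(emailClient, user) {
  emailClient.sendEmail(user.email, "Welcome aboard!");
}

// The test controls the fake and asserts on what was sent, and to whom.
const fake = makeFakeEmailClient();
notifyUserOfSignup(fake, { email: "tester@example.com" });

console.log(fake.calls.length); // 1
console.log(fake.calls[0].to);  // "tester@example.com"
```

Because the boundary is now a function call we own, the test no longer depends on a third-party inbox or delivery delays.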
Do you have any more useful tips to combat unreliable automated tests? Post them in the comments below.