How many times have you looked at a page you've worked on and thought everything looked perfect? Then release day comes and someone notices: “this page is for testing purposes only” watermarked in the background of the page.
When did that get there? You’ve never seen it before. When doing a root cause analysis of the issue, you learn that it has been there for the last four weeks of the project.
Testing bias has struck again; we call this instance familiarity bias. We get so familiar with what we are testing, we simply don’t see new, different, or unexpected things.
What is testing bias?
Testing bias is a real thing, and it manifests in a myriad of ways. Some of the assumptions we’ve experienced include:
- the feature works as specified,
- the specifications are comprehensive,
- the functionality works thanks to that work-around we put in to access the network (access our users may not have),
- the developers deliver a feature that handles error conditions,
- the users will use the application the way we’ve envisioned,
- users will only follow the workflows described in the specifications (which they never see), and
- the automated regression tests will catch all the regressions.
There is also the bias about needing to test every permutation, or every button on every page.
All of these biases lead to inadequate or inefficient testing and verification, which in turn can mean issues found late and schedule or quality impacts. These are not intentional misses, and they are not a reflection of the kind of tester you are. Biases, by their very nature, are mostly unconscious.
How can we work with biases?
Biases help us process the world into a framework for learning and comprehending what is happening around us. They help us expedite our tasks and responsibilities. They allow us to sleep at night.
However, biases can also be managed. In software testing, we can choose to put processes in place that let us pick and choose which biases are appropriate for our project.
At WWT Application Services, we have the whole team participate in testing to help us minimize the biases inherent in software production. When we are drafting our user stories, we have whole team conversations with product owners, developers, user experience specialists and quality advocates to try to envision multiple aspects of the functionality.
Happy and not happy paths
We ask about the perceived way a user might use the feature — the happy path of the functionality. We try to have at least one or two “not happy” path ideas captured in the story acceptance criteria. This approach helps manage the biases around assumed specification completeness and error handling.
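One way to make those “not happy” paths concrete is to capture each one as an executable acceptance check alongside the happy path. The sketch below is purely illustrative — the `transfer` function and its error conditions are invented here, not taken from any real story:

```python
def transfer(balance, amount):
    """Toy account transfer used to illustrate happy vs. not-happy paths.

    This is a hypothetical example, not production code.
    """
    if amount <= 0:
        raise ValueError("amount must be positive")  # not happy: bad input
    if amount > balance:
        raise ValueError("insufficient funds")  # not happy: overdraft
    return balance - amount  # happy path


# Happy path from the acceptance criteria: a valid transfer succeeds.
assert transfer(100, 40) == 60

# Not-happy paths: each error condition the story calls out is exercised.
for bad_amount in (0, -5, 200):
    try:
        transfer(100, bad_amount)
        raise AssertionError("expected a ValueError")
    except ValueError:
        pass
```

Writing even one or two such checks per story keeps the error-handling conversation from being lost after the planning meeting.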
The conversations continue through our pair programming practices, where developers and quality advocates pair throughout the day. Pairs are developer-to-developer, quality advocate-to-quality advocate, and developer-to-quality advocate; sometimes user experience specialists, product owners, and stakeholders even get to pair.
Pair programming allows us to have a better picture of what testing is in place, what permutations have been covered and what has specifically not been covered. This addresses the bias towards too much or duplicate testing and missed regression tests.
Test, test and test again
Through our rigorous unit and component test guidelines in the development environments and our bias toward automating end user testing, we are able to have automated tests perform the mundane, yet important things. We can time-bomb test work-arounds so that we fail the tests after an agreed-upon date, thus forcing us to examine the work-around and remove it or engineer a different solution. We can programmatically verify that specific strings exist in one environment (“this page is for testing purposes only”), but not in production.
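Both of those guards are simple to express in code. This is a minimal sketch under assumed names (the function names, date, and environment labels are invented for illustration, not our actual test suite):

```python
import datetime

# The watermark string that must never reach production.
WATERMARK = "this page is for testing purposes only"


def workaround_still_allowed(today, expiry):
    """Time-bomb check: returns False once the agreed-upon date has
    passed, so a test asserting it fails and forces a decision about
    the work-around."""
    return today <= expiry


def page_is_clean_for(env, html):
    """Environment string check: the test-only watermark is tolerated
    in non-production environments but flagged in production."""
    if env == "production":
        return WATERMARK not in html.lower()
    return True


# A test suite would assert on these, e.g.:
assert workaround_still_allowed(datetime.date(2024, 1, 1),
                                datetime.date(2024, 6, 1))
assert not page_is_clean_for("production",
                             "<body>This Page Is For Testing Purposes Only</body>")
assert page_is_clean_for("test",
                         "<body>this page is for testing purposes only</body>")
```

When the expiry date passes or the watermark leaks into a production page, the corresponding assertion fails loudly instead of relying on someone to notice.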
Through these processes and practices, we minimize the testing biases by consciously addressing the specific biases that we have encountered. By being aware that biases exist, we can keep looking for when they rear their heads. Eventually, we will have efficient and effective test coverage. Until then, keep testing.