I am sure every tester has reported that one bug early in their career that was never fixed. It’s frustrating and hard to accept after paying so much attention to detail, after actually finding that needle in the haystack. And then? Nobody seems to care. A knife in the back. Next, a Product Owner or Program Manager comes forward and asks to close the report, because it is constantly in the way of planning. A more reflective version of myself would not feel offended. It would reconsider the value it thought it had brought to the product by finding this issue and rethink its testing strategy.
A few years ago, I stumbled across the term “adequate quality” for the first time. It seemed to suggest that aiming for the highest possible quality is not the proper thing to do. According to the ISTQB foundation level, testers should be aware that exhaustive testing is impossible. Still, QA strives to cover all possible cases, including edge cases and even the most ridiculous edge cases. Testers see this as their purpose and passionately design the most unlikely scenarios. And there is generally nothing wrong with doing that. But after discussing the designed cases with product owners, business analysts or even actual users, they can be grouped by risk and probability. Cases previously considered most obscure suddenly turn out to be extremely valuable. Other, obvious cases lose value, because nobody would actually care if they failed.
So, what does adequate quality imply for QA? As described above, test case design should not be affected. The creative process of designing test cases, generating ideas and contemplating scenarios should not be undermined. It still yields all (or at least most of) the cases that are actually possible. The full test case catalogue lays the foundation for filtering out the most valuable cases: those that would uncover the issues you want to find before going into production. By limiting test execution to the filtered cases, the workload is reduced without losing relevant quality. If your test execution still detects an issue that will never be fixed, question your selection. Speak with product owners and business analysts, and try to understand why you chose to execute that case although it obviously has “no value”. This won’t change the current result, but it will improve your next selection.
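To make this concrete, here is a minimal sketch of what such a filtered execution could look like, assuming a Python code base tested with pytest. The marker name “adequate”, the stub and the test cases are purely illustrative assumptions, not part of any prescribed tooling.

    import pytest


    def checkout(card_number: str) -> bool:
        """Tiny stub standing in for the real system under test."""
        return len(card_number) == 16


    # Part of the full catalogue: kept in the code base, but not selected
    # for the regular execution cycle.
    def test_checkout_with_too_short_card_number():
        assert checkout("1234") is False


    # Cases agreed on with product owners and business analysts carry a marker,
    # so "pytest -m adequate" executes only the valuable subset.
    # (Register the marker under "markers" in pytest.ini to avoid warnings.)
    @pytest.mark.adequate
    def test_checkout_with_valid_card_number():
        assert checkout("4111111111111111") is True

Running pytest -m adequate then covers only the selected cases during the regular cycle, while the full catalogue stays available for occasional deeper runs or for reselection after the next conversation with stakeholders.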
Having said, or rather written, this, please don’t ever blindly accept such a verdict. Always rethink and seek reassurance. Maybe you actually have found something valuable, but no one else sees it. Yet.
But wait, isn’t that risk-based testing? Simply put: no. So, what’s the difference? Risk-based testing aims at finding the areas most likely to be defective, whereas adequate quality aims at finding the defects that actually matter. Both approaches work from the same input, such as risk, importance and likelihood, but they yield very different results. Risk-based testing will reveal far more defects. But still, nobody will care. I am only slightly exaggerating... With adequate quality in mind, on the other hand, the result will be optimal, even lean, if you will.
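To illustrate the difference, the following sketch filters the same, entirely made-up catalogue by both criteria; the field names and numbers are my own assumptions, not part of either method.

    from dataclasses import dataclass


    @dataclass
    class TestCase:
        name: str
        failure_likelihood: float  # how likely the area is to be defective
        stakeholder_impact: float  # how much anyone would care if it failed


    catalogue = [
        TestCase("login with expired session", 0.8, 0.2),
        TestCase("checkout with saved card", 0.3, 0.9),
        TestCase("profile picture upload", 0.7, 0.1),
    ]

    # Risk-based selection: concentrate on the areas most likely to be defective.
    risk_based = [c.name for c in catalogue if c.failure_likelihood >= 0.5]

    # Adequate-quality selection: concentrate on failures somebody would actually fix.
    adequate = [c.name for c in catalogue if c.stakeholder_impact >= 0.5]

    print(risk_based)  # ['login with expired session', 'profile picture upload']
    print(adequate)    # ['checkout with saved card']

Both selections start from the same catalogue, but they disagree on what is worth executing, which is exactly the difference described above.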
Applying the described method depends heavily on the form of collaboration within the development team. If the tester is seen as an integral part of the team and quality is therefore a shared goal, adequate quality helps calm the waves. If, on the other hand, the tester acts as an independent entity responsible for reporting on quality, it is advisable to favor an alternative, such as the aforementioned risk-based testing.
Aiming for adequate quality reduces test execution time, allowing for shorter cycles and earlier feedback. If the term adequate is too vague, think of it as relevant quality. Ask yourself what actually matters and plan your work accordingly. Don’t constrain the creative process of test design, but effectively reduce the number of tests to be executed. I promise, you will never report a never-fixed bug again.