
Using assume() in test automation

by Christian Nissen

4 min. read

Every now and then, I get drawn into a recurring discussion in the context of test automation: when should an assume() be used instead of an assert()?

Presuming that a test case verifies exactly one piece of functionality, each automated test case should contain exactly one assertion. That assertion is the last statement of the test code (apart from downstream tasks such as cleaning up). Let’s put this into a simple example using a typical login test case.

Precondition
- User A is registered with the application

Test steps
1. Navigate to the login
→ The login is displayed
2. Enter valid credentials of user A and submit
→ The login was successful
→ and user A’s account is displayed

The equivalent automated test looks as follows:

@Test
def user = registerNewUser()
executeLogin(user)
assert(login == successful)

If the registration is not successful, the automated test will fail at the login. Without further analysis, this result will be interpreted as the login functionality being broken. Let’s assume (no pun intended) that the test case is executed manually. What would the result be if the user registration failed? There would not be one. The test case would remain untested until user registration is possible again. For the sake of completeness, an additional test case ‘Register a new user’ is needed to verify successful user registration. Of these two test cases, one would then fail and the other would stay untested, or rather not executable.

So how can using an assume() help?

An assume statement implements a precondition, i.e. it establishes the existence of necessary test data, configurations, or systems.

When adding the assume, the example is extended to this:

@Test
def user = registerNewUser()
assume(user != null)
executeLogin(user)
assert(login == successful)

If the user registration is not successful, the actual assertion will not be executed. In JUnit, this marks the test as skipped. If, as before, an additional test case verifies the user registration, that test will fail. Taken together, the two test results give you a complete picture of where the defect is located.
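
As a concrete illustration, here is a minimal JUnit 5 sketch of the test above. The helpers registerNewUser() and executeLogin(), the User type and LoginResult.isSuccessful() are hypothetical placeholders for whatever the test code actually provides; only the assumption and assertion calls are real JUnit API.

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import org.junit.jupiter.api.Test;

class LoginTest {

    @Test
    void loginWithFreshlyRegisteredUser() {
        // Precondition: create the test data this test relies on
        User user = registerNewUser();

        // Abort as "skipped" rather than "failed" if the precondition could not be established
        assumeTrue(user != null, "user registration failed");

        // The actual verification: the login itself
        LoginResult result = executeLogin(user);
        assertTrue(result.isSuccessful(), "login with valid credentials failed");
    }
}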

So what insight can we gain from a bunch of skipped tests? None. It’s as simple as that. But what you also don’t get is a ‘wrong’ result. If you cannot test the login, you have no idea whether it works or not.



Fig. 1: Failing assumption on precondition.


It goes without saying that the login can also be tested with a previously registered user, but keep in mind that an automated test should be as independent as possible and not have to rely on existing test data.

Combining test cases

But wait! What about combining the two test cases, as suggested by ISTQB? When dealing with functional overlap, “[…] consolidation of several manual tests into a larger automated test may be the appropriate solution.” (ISTQB Certified Tester - Test Automation Engineer Syllabus 2016)

With an assertion on the registration as well as on the login, the automated test will fail if the registration is not possible. What about the result regarding the login? Here, too, there is no result for the login. But by definition a test can have only one result, so the missing result for the login is swept under the table. This emphasizes the value of a skipped test in a report. During manual execution of this test, the tester would most likely adapt and deviate from the defined test steps: by using a previously registered user to execute the login, a valid result would still be achieved.
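
For contrast, here is roughly what such a consolidated test looks like when added to the sketch above (same hypothetical helpers, plus the assertNotNull static import): both steps are asserted, so a broken registration fails the test before the login is ever attempted, and the login produces no result at all.

@Test
void registerAndLogin() {
    // Registration is asserted, not assumed
    User user = registerNewUser();
    assertNotNull(user, "user registration failed");

    // Never reached if the registration assertion fails:
    // the login result is lost instead of being reported as skipped
    LoginResult result = executeLogin(user);
    assertTrue(result.isSuccessful(), "login with valid credentials failed");
}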

So how can test cases be combined within test automation without losing value?

By combining cases of the same functionality, the maintenance effort is reduced and the execution time optimized without creating a non-transparent loss of coverage. For example, combining the login with invalid credentials with its counterpart with valid credentials will in fact also improve reliability: if the login with invalid credentials is unexpectedly possible, that case will fail and the login with valid credentials will not be executed. When executing these two cases manually, you would act the same way and not execute the case with valid credentials if the negative test fails, simply because a successful login with valid credentials proves nothing when every login succeeds regardless of the credentials.
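
Here is a sketch of such a combination, again within the test class above (plus the assertFalse static import); withWrongPassword() is another hypothetical helper that simply produces invalid credentials for the given user. The negative case guards the positive one, so an unexpectedly successful login with invalid credentials fails the test before the valid login is attempted.

@Test
void loginRejectsInvalidAndAcceptsValidCredentials() {
    User user = registerNewUser();
    assumeTrue(user != null, "user registration failed");

    // Negative case first: invalid credentials must be rejected
    assertFalse(executeLogin(withWrongPassword(user)).isSuccessful(),
            "login with invalid credentials was unexpectedly successful");

    // Positive case only runs if the negative case passed
    assertTrue(executeLogin(user).isSuccessful(),
            "login with valid credentials failed");
}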



Fig. 2: Combining two dependent tests.


Wrap up

A precondition can be implemented using the assume() method, making sure a test case only fails if the actual verification fails and avoiding false alarms caused by unfulfilled preconditions.

On the other hand, the number of preconditions can be reduced by combining test cases and using the stricter assert() method instead. In that case, it is very important to combine test cases carefully so that they do not lose their individual value.