
Eating our own dogfood 3 – How do we plan what to test

Joonas Palomäki | 28 September, 2020

The article series

Bark. This is the third article in our dogfooding series, and in this one I’ll describe how we plan what to test – how we design our manual test cases. If you haven’t yet read the previous articles, you should check them first – this article makes more sense when you know the background. The whole series is planned as follows:

How do we come up with what to test

For us, designing the needed set of test cases is a relatively easy task, as we have a well-designed set of requirements – and usually also test requirements. When we design test cases, the first thing we do is find out what the new and changed features are. We find these by filtering the requirement tree by the target milestone. We can then further filter the list with the “Not covered” checkbox to find the requirements we have not yet handled (that is, not yet covered with test cases, as the name says).
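To make this concrete, the filtering step can be sketched roughly as follows. This is a simplified, hypothetical Python illustration – the field and function names here are assumptions made for the sake of the example, not our tool's actual data model or API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Requirement:
    # Hypothetical requirement record, for illustration only.
    name: str
    target_milestone: str
    covering_test_cases: List[str] = field(default_factory=list)  # links to test cases covering this requirement


def requirements_needing_test_design(requirements: List[Requirement], milestone: str) -> List[Requirement]:
    """Mimics the two UI filters: target milestone plus the "Not covered" checkbox."""
    in_milestone = [r for r in requirements if r.target_milestone == milestone]
    return [r for r in in_milestone if not r.covering_test_cases]
```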

Now that we know which requirements need test cases, we read through the other requirements and see how they are already being tested, to get a better overall picture. Once we’ve absorbed this knowledge, the next step is either to update existing test cases or to create new ones. So we rely quite heavily on requirements/user stories when designing test cases. As a result, we get links from requirements to test cases “for free”. This is very valuable information, as it helps us see how far along we are in test case design and where the effects of problems currently lie.


Test case tree structure

We store our manual test cases in a test case tree that has a structure quite similar to the requirement tree – the tree reflects the functional features of the user interface. We have, for example, a folder called “Test case design”, and under it folders that are used to test that view. One of them is “Test case tree”, and there we store the tests for the different features of the test case tree. The main idea is that we can find our test cases easily from this tree – we just drill down the tree structure. This kind of structure stands the test of time: we can always easily find the test cases that cover a certain functional area. We do not store new test cases under any single folder – instead, we find new test cases using the “Testable in” milestone field. The information about what kind of testing we plan to do (system testing, regression testing, etc.) is not stored in the tree – that information lives in test sets and test runs, which we cover in later articles.
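As a rough illustration of the idea, the tree can be pictured like the sketch below. The folder and test case names are made-up examples, and the structure is just an in-memory sketch, not how our tool actually stores the tree.

```python
# A tiny illustrative model of a test case tree that mirrors the functional
# areas of the user interface: folders contain sub-folders or test case names.
test_case_tree = {
    "Test case design": {
        "Test case tree": [
            "Moving a test case to another folder",   # hypothetical example names
            "Filtering the tree by milestone",
        ],
        "Test case editor": [
            "Editing test case steps",
        ],
    },
}


def drill_down(tree, *folders):
    """Find test cases by drilling down the tree along a folder path."""
    node = tree
    for folder in folders:
        node = node[folder]
    return node


print(drill_down(test_case_tree, "Test case design", "Test case tree"))
```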

Classifying tests

So now we can find test cases from the test case tree. Is that enough? For all uses, no. We classify test cases with a few more attributes:

  • Testable in: When designing or updating tests, we also set the Testable in field to the latest milestone. This way we always find the complete list of new or changed test cases using just one filter.
  • Status: We have a pretty straightforward workflow for test case design. Most of our test cases are in the Ready status most of the time. The other natural statuses, “In design” and “Deprecated”, are used when appropriate. Our workflow still has a status for waiting for approval, but we have found that a formal approval process is not the most effective choice for our purposes.
  • Priority: We also have the priority field visible, but we’ve found that its values do not guide us as much as the execution status and target milestone do. We will probably hide that field soon. The less unused data there is visible, the more relevant the remaining content is.
  • Tags: We also sometimes tag test cases when we want to create arbitrary lists of test cases.
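Putting the classification together, a single manual test case could be thought of roughly like the record below. Again, this is only an illustrative sketch with assumed field names, not our tool's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Status(Enum):
    IN_DESIGN = "In design"
    READY = "Ready"
    WAITING_FOR_APPROVAL = "Waiting for approval"   # exists in the workflow, rarely used by us
    DEPRECATED = "Deprecated"


@dataclass
class TestCase:
    # Hypothetical test case record, for illustration only.
    name: str
    folder_path: List[str]                  # position in the test case tree
    testable_in: str                        # milestone the new or changed case is testable in
    status: Status = Status.READY
    priority: Optional[str] = None          # visible, but guides us less than the other fields
    tags: List[str] = field(default_factory=list)  # for arbitrary ad-hoc groupings
```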

Storing automated test cases

We used to have one folder where we stored our automated acceptance/system tests. It worked fairly well for us, but when we created more test automation features and added a different icon for automated tests, we moved these tests into the same tree structure as the manual test cases. The reason for this was to make reporting (and inspecting test coverage) easier. Manual and automated tests are used to test similar things, so when looking at where the problems are, it does not matter whether the test was run by an automation tool or by a person.

The main course – test steps

The most important content of a test case is in its test steps. We have used various techniques to define test contents. Most of our test cases are defined somewhat loosely – they describe what should be done when testing, but not exactly how. This leaves more room for the tester to decide on the most efficient way to find possible bugs. It also lets the tester execute the test case in a slightly different way each time. In some contexts this is not what you want – but for us, it definitely does more good than harm.

The other possibility – writing really detailed test cases – would require more time for maintaining them and would, in effect, mean less time for testing (as, like every project, we do not have unlimited resources and time). Some of our test cases also refer to the test requirements linked to them – this way we do not have to write the same text again. The tester can always see the requirement description during testing anyway.

The future of test design for us

Test design is pretty much at the center of our tool, so we’ve already made most of the enhancements that our own testing has required. (That’s one of the perks of being able to design the tool you use!)
Still, no tool is ever finished. Things we are thinking of implementing include creating a new revision automatically when updating a test case (to save some clicks), ordering test cases in the tree, and showing execution comments when editing test case steps. One idea is having a multi-project real-time test repository. (For now, it is possible to transfer test cases between projects through indexes only.)