Eating our own dogfood – testing a test management tool

Joonas Palomäki | 8 June, 2020

The task at hand

Before going deeper into canine cuisine, let me give some background on our company and tell you a bit about our testing and QA. Next to no company says it does not care about the quality of its product, but there are differences in how rigorously companies allocate resources towards honing that quality. Our company, Meliora, is in a position where it must deliver a test management tool for Quality Assurance experts. We do love you, the QA experts, but I must say you are not always the easiest audience. When something does not work, we get a bug report. (And it's good that way!) To summarize: we really need to test our product well, better than the average company does, or our customers get unhappy.

Our release schedule is typical: Meliora delivers four major releases each year. We have a roadmap with bright plans for the future, but we want to stay agile in picking which features to release as we go. We focus on the features that we feel give the most to our customers. Often we know the contents of the next release quite well, but beyond that, plans are open.

We also try to taste our own dogfood as often as possible, so when new features reach production, we start using them ourselves. This gives us the best possible feedback on how the features work and how they should be developed further. We also experiment with different ways of using Testlab, which broadens our view of how the tool works in different situations.

Upcoming articles

This is the first article in the dogfooding series. In it, I open up our development process in general and how we work within our development cycle.

The later articles will cover the following aspects of our tool development and QA:

  • Feature design – working with requirements
  • Test design – how we plan what to test
  • Execution planning – how we plan the testing as a whole
  • Issues – how we work with bugs and other issues
  • Reporting – what we track and how we benefit from it
  • Wrap-up – what we have learned and what happens next

Managing releases as milestones

As mentioned, Meliora has a quarterly release schedule. We've kept this from the beginning, and it has worked well for us. Many tools and services have a longer release cycle, but it has become more common to switch from long, heavy releases to smaller, more continuous ones. Both have their advantages, but we feel that for a tool as central as Testlab, there is no benefit in a shorter release cycle. Even though ways of working rarely change much with a Testlab release, constant change in the tool would confuse some of the users.

For each release, we pick the features to be developed, design them, do the development, plan the testing, do the testing, and in the end, make the release. Does this sound like a V-model with a three-month release cycle? For some of the features it actually works pretty much like that, but different features are not always tied together. Being able to postpone a feature when needed is important, as for our product it is better than the other options: postponing the whole release or having to rush the feature.

In Testlab, we manage the development cycles with milestones. We always have two active milestones: the latest release, which is currently installed in production, and the upcoming one. Should someone find a bug (gasp!), we assign it to the current release. The planning and testing work targets the next release.

With pretty much any work, it is good to sometimes stop and ask whether what you are doing is worth it. Our tool is about maintaining the design specifications (requirements) and test cases and planning the testing, and it is certainly possible to spend too much time planning instead of doing (I've done it myself, I know). So what about maintaining these milestones – is it worth it? Absolutely! The overhead of a few clicks when something is to be developed or tested is minimal, and it allows us to see where the project is going. How the features (requirements) are maintained is explained in more detail in the next article.
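For readers who like to think in code, here is a minimal sketch in Python of the two-milestone bookkeeping described above. To be clear, this is not Testlab code or its API – every name in it (Milestone, ReleaseTracker, and so on) is hypothetical and exists only to illustrate the idea.

    from dataclasses import dataclass, field

    # Hypothetical model of the two-milestone scheme; none of these
    # names come from Testlab, they are for illustration only.
    @dataclass
    class Milestone:
        name: str
        bugs: list = field(default_factory=list)     # issues found in production
        planned: list = field(default_factory=list)  # design and testing work

    class ReleaseTracker:
        def __init__(self, current: str, upcoming: str):
            # Two active milestones at all times: the release in production
            # and the release under development.
            self.current = Milestone(current)
            self.upcoming = Milestone(upcoming)

        def report_bug(self, summary: str) -> None:
            # A bug found in production is assigned to the current release.
            self.current.bugs.append(summary)

        def plan(self, item: str) -> None:
            # Planning and testing work targets the next release.
            self.upcoming.planned.append(item)

        def quarterly_rollover(self, next_name: str) -> None:
            # Each quarter the upcoming release goes to production and a
            # new milestone is opened for the release after it.
            self.current = self.upcoming
            self.upcoming = Milestone(next_name)

    tracker = ReleaseTracker("2020 Q2", "2020 Q3")
    tracker.report_bug("Report prints empty on one browser")
    tracker.plan("Test plan for the new feature")
    tracker.quarterly_rollover("2020 Q4")

The point of the sketch is only that the routing rule is trivial, which is exactly why the overhead of maintaining the milestones stays minimal.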

History

When Testlab was young, until late 2014, there was no milestone feature to manage our releases. The tool only had the concept of versions, which works well for tracking what you are testing and in which version issues are found, but it wasn't really good for planning. Back then we didn't track whole releases for requirements.

How does this work in Testlab?

This section is about how we work in Testlab's Milestones view. How we define the features for milestones is covered in the following articles.

Each quarter, we create a new milestone and mark one milestone completed. That is it. We feel that for our purposes, keeping this simple works best. Another option could be, for example, to differentiate our SaaS and On-premise variants with separate milestones. Even though they share the vast majority of the code, they have some differences, and the On-premise variant is released to customers a bit later, which could be a reason to separate the milestones. We have, however, judged that simplicity has more advantages in this matter.

Not much to chew on in the way of Meliora Testlab specifics in this article, sorry! More, much more, in later articles though. Stay tuned.