Testlab – Oscar Bug release

We are proud to announce Meliora Testlab – Oscar Bug – with long-awaited VCS integrations. This release also enhances the test automation related features introduced in Testlab – Automaton. Read on for more information on new features in this release.


Git and Mercurial VCS integration

VCS integration allows you to track and link changesets in your Version Control System (Git or Mercurial) to (assets in) your Testlab project. This enables better DevOps workflows and brings visibility to your testing efforts via source code changes.

  • The VCS changesets can be tracked in Testlab’s requirement editing views, issue editing views and via a provided VCS-specific Dashboard widget.
  • A new field “Latest commit” has been added for requirements and issues; if enabled, it can be used to track the latest commit related to the requirement and/or issue.

A VCS integration is set up via hook scripts supported by the chosen VCS tools. You can read more about setting up the integration from the appropriate documentation (Git, Mercurial).
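As a rough sketch of what happens on the VCS side – the actual hook scripts, endpoints, and parameters are described in the Git and Mercurial documentation linked above – a Git post-receive hook essentially enumerates the pushed commits and forwards them to Testlab:

```shell
# Conceptual sketch only: a Git post-receive hook receives
# "old-sha new-sha ref" lines on stdin and works out which commits
# were pushed. The actual forwarding to Testlab is done by the hook
# scripts shipped with the integration and is not shown here.
collect_pushed_commits() {
  old=$1; new=$2
  # Commits reachable from the new tip but not from the old one.
  git rev-list "$old..$new"
}
```

Each commit ID listed this way would then be reported to Testlab together with its commit message.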


Reference and act on assets via commit messages

When a changeset is committed to the version control system, the committer provides a message describing the content of the commit. When the VCS system is integrated with a Testlab project, the commit message may include specially crafted commands to

  • link the changeset to requirements and/or issues and
  • resolve or close issues.
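For illustration – the actual command syntax is defined in Testlab’s documentation, and the tokens below (#TL-15, @resolve) are made up – a commit carrying such commands might look like this:

```shell
# Hypothetical example: the "#TL-15 @resolve" tokens stand in for
# whatever linking/resolving commands your Testlab integration defines.
repo=$(mktemp -d) && cd "$repo" && git init -q
echo "fix" > session.txt && git add session.txt
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Fix session timeout #TL-15 @resolve"
git log -1 --pretty=%s   # the message the integration hook parses
```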

Also, a new notice scheme event type “Version control commit” has been added to Testlab. This enables you to react when a changeset has been linked to an asset.


Link failures from Jenkins to VCS changesets

With Oscar Bug, a new version of the Jenkins plugin has been released. This version includes support for passing the built commit ID along with the pushed test results. This way, when an issue is added for the failed build, the issue

  • gets linked with the changeset the Jenkins job was building and
  • possible culprits reported by Jenkins are included in the issue description.




Automated test run history

A new history panel has been added to present the history of execution results. This pane can be found on the “Automated test runs” tab in the “Test automation” view. The pane shows three different time-based graphs for inspecting how the mapped test cases and the raw results have passed over time. In addition, the pane holds various calculated statistics for the run.

Please keep in mind that to inspect the result history of a run, the run must have multiple execution results available. For this to happen, you should push multiple results to the same run with a matching title, milestone, version, and source.



More new features in Testlab’s UI

In addition to the above, the following changes have been made to various views:

  • Report generation has been enhanced with queuing and caching improvements. This means that when a user requests a report for which a fresh enough copy exists (newer than 12 hours), the cached report is served immediately with no delay. The user can request a refresh if an up-to-date report is needed.
  • On “Test coverage” view, you now have an option to filter in manual or automated tests only.
  • When copying test cases via test case tree with “Paste special…”, you have an option to copy test cases as a specific type (as manual or as an automated test case).
  • On Dashboards
    • in the “Current test cases” widget, automated test cases are shown as a separate group,
    • in the “Latest test runs” widget, you can now configure the widget to show only manual and/or automated test runs,
    • in the “Milestone summary” widget, the number of automated tests is shown if the milestone holds any.


Changes to reporting

The following changes have been implemented to the reporting features:

  • “Requirement coverage” report features a field selector to choose reported test case fields. Also, the type of test case (manual/automated) has been added as a supported filter.
  • “Results of run tests” report features a field selector to choose reported test case fields. The checkbox option to include test case steps in the report has been removed, as the steps are now available as a choosable field in the field selector.
  • “Risk analysis” report features a field selector to choose reported test case fields. In addition, the type of the test case can be used as a filter, which affects both the included test cases and the interpreted results of requirements.
  • “List of test cases” report now includes a “Latest execution status” filter. When chosen, the report will list all test cases whose latest execution result matches the filter.
  • “Testing progress for requirements” includes a “Test case type” filter (manual/automated).
  • A “Latest commit” field (related to the new VCS integration support) has been added as a reportable field on applicable reports: Issue & requirement listing and grouping reports.
  • A “Version control” field has been added as a reportable field on “List of issues” and “List of requirements” reports.


Thanking you for all your feedback,

Meliora team

Oscar Bug

The Oscar Bug is a member of the species Culiseta longiareolata, and appears under direct observation as a normal specimen of that species. 

When a female bug bites a mammal, the affected person or animal begins a physical transformation lasting several days. First, the subject loses consciousness and enters a comatose state. Then, beginning at the site of the bite, the tissues of the victim begin to change, starting with the dermis and progressing inward toward muscular and skeletal structures. All subjects take on the same physical appearance and attributes: that of a Caucasian human male named Oscar Peleschak, age 57.

(Source: scp-wiki, picture of the specimen from the same site)


Tags for this post: announce features jenkins plugin product release screenshots 


Testlab – Automaton release

A new Testlab version long in the making has been released, codenamed Automaton. This major new release includes industry-leading features for managing and analyzing your test automation results, integration to MantisBT and various smaller enhancements and new features.

With the upgrade to Testlab – Automaton, you have an option to migrate your existing (automated) test results to the new way of working. You should read more about this option in the post published earlier.


New test automation view for your automation results

The business challenges related to interpreting results of automated tests are obvious: tests are often created by developers with varying motives, the number of tests is typically large, and traceability to the specification is often lacking. Testlab – Automaton includes features to make sense of these results and gain business benefits beyond the scope of regression or smoke testing – which, in practice, is often all that automated tests are used for.

In Automaton, a new main view called “Test automation” is introduced. This view holds the automated test runs and details of the results in them. The view also holds a workbench for mapping the results to tests via automation rules.

Scoping your automation results for more insight

The new automation features include a workbench in which you can create rules for how your test automation results should be mapped to your project. This makes it possible to get insight with the correct level of detail from your mass of automation results.

The rules in your automation rule set can be

  • “Add and map” rules, which automatically create the needed test cases in your project and map the incoming results to them,
  • “Map” rules, which map the incoming results to existing test cases,
  • “Ignore” rules, which ignore a set of tests from the results, and
  • “Map to test case” rules, which directly target a specific test case (or cases) for your results.

Using the rules, you can define which parts of the incoming automation result tree target which tests in your project. Having a correct level of detail to your tests is crucial for understanding the situation of your testing efforts and for easy management of your testing.


Jenkins plugin updates

The plugin used to publish test automation results from your Jenkins jobs has been updated. This update makes it possible to use the rule sets and sources provided by the new automation features with the plugin. When using Jenkins with Testlab – Automaton, updating the plugin is required.

Please familiarize yourself with the plugin’s documentation. Keep in mind that if you are still using the previous Lost Cosmonaut version of Testlab, you should continue using the previous version of the plugin until your Testlab installation is updated.




MantisBT integration

Testlab – Automaton can be integrated with MantisBT for issue management. This makes it possible to manage your testing in Testlab and report the found issues in your MantisBT installation.

You can read more about MantisBT integration at the documentation page provided.


Thanking you for all your feedback,

Meliora team

Automaton

The ancient history of self-operating machines, automatons, is fascinating. In the modern computer age, having machines do things for us based on our instructions is rather taken for granted. In earlier times, many automatons were designed to give the illusion that they were operating under their own power: the word “automaton” is the Latinization of the Greek word meaning “acting of one’s own will”.

In the 13th century, Al-Jazari was the first inventor to create human-like machines to serve their masters. He even designed an automated toilet where a human-like automaton would hand you soap and a towel as you washed your hands. Much earlier, the Greeks had designed various complex mechanical tools and machines which could be described as automatons. One of the more famous inventors is Heron of Alexandria (c. 10 AD – c. 70 AD), who created various machines, from the first vending machine to steam-powered devices.

(Source: Wikipedia, picture of the book “About automata” by Hero of Alexandria, public domain)


Tags for this post: announce features product release screenshots 


Getting started : Bringing automation test results to Testlab

This post is an introduction to how to bring your automation results into the Testlab tool. We also cover options and questions related to migrating from earlier Testlab versions to the new version, Testlab – Automaton. You can read more about your migration options below.


Why bother – what is the motivation behind bringing the results into Testlab?

Before the test results are brought to Testlab, it is common to discuss the reason for doing so – why go through all the trouble? The test results most probably are already somewhere: possibly in Jenkins, possibly in an HTML report on disk – in any case, somewhere in a readable format. Why would you set up a system to bring the results into a test management / ALM tool like Testlab? The most common reasons are better findability, usability, and the ability to connect the test results to requirements and use cases. When the testing information is visible to everyone, the project can work more efficiently – and thus make better decisions. Combining the situation of manual and automated testing into one view also helps you understand the overall quality situation better. If your test results live only in a CI tool, you usually see the latest testing situation well, but you cannot see which requirements or user stories they test, what they do not test, or how the situation develops over time.


Screenshot of viewing test results in Testlab’s Test automation view


Bringing the test results to Testlab – what does it do?

If you are familiar with manual testing in Testlab, it probably helps to know that you’ll see similar test runs and test cases for automation as you do for manual testing. You can also use the same familiar reports to report automation results. This basic case is only the beginning, though: Testlab has a rule engine that allows you to define how you wish to see and use the test results. The reason to define rules is that it makes the results easier to handle in the management tool – to communicate the project situation, for example.


Central concepts of test automation

Before we hop into the details of how this test automation importing works, let’s go through the central concepts and terms used in our tool and in this article.

  • Result file – The original result file that your test automation tool produced and from which you want to import results to Testlab.
  • Ruleset – A set of rules used to handle the result file so it can be utilized in Testlab. If you have multiple different kinds of automated test results, you can have multiple rulesets to handle them.
  • Test run – A container in Testlab where test results are saved – very much like for manual tests.
  • Imported results – Test results in your test run that have been handled by a ruleset from your test automation tool’s result file.
  • Source – A concept that groups test results you wish to compare together. A Jenkins job is one example of a useful source.
  • Rules – Individual rules in a ruleset that dictate how test results are imported from your result files.
  • Identifier – A unique identifier, distinct for each test case. It consists of the folder structure and the name where the test resides in the tree.
  • Required test cases – In Testlab, test results are always saved for a test case. Thus, when importing test results, if a test case where the results would be saved does not yet exist in Testlab, it is considered a required test case. You can then create these test cases automatically.


Importing my first test results

You probably have your automation tool in place already. Quite likely it is able to save the test results in xUnit XML format, which is a standard for test automation tools. Testlab reads this format, as it does Robot Framework’s output and TAP result files. If the tool you use does not produce any of these formats, let us know and we’ll see if we can add support for it. So when it comes to Testlab, the story starts from the point where you have run the tests using your automation tool and have the results in a file.
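For reference, a minimal xUnit-style result file looks roughly like this (a simplified sketch – the exact attribute sets vary between tools):

```xml
<testsuite name="LoginTests" tests="2" failures="1">
  <testcase classname="auth.LoginTests" name="testValidLogin" time="0.31"/>
  <testcase classname="auth.LoginTests" name="testInvalidPassword" time="0.12">
    <failure message="Expected error banner was not shown"/>
  </testcase>
</testsuite>
```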

When automating testing, the tests are often launched from a CI tool like Jenkins or using some other automated method. Jenkins can be configured to send the test results to Testlab automatically after the tests have been run, or the results can be sent using an API. You can also drag and drop result files to Testlab manually from your computer so you can try how it works pretty easily.


Do it yourself!

We assume you have your result file at hand. If not, and you want to try how it works, grab one from the demo project that came with your Testlab (instructions a bit later). In Testlab, head to the “Test automation” view and select the “Automation rules” tab. This is the place where you define what Testlab does with your automation test results. Let’s start by creating a new ruleset for reading your automation test results. Do that by clicking the + in the “Rulesets” pane.
Add a ruleset

You can use this ruleset for reading this file and similar ones you might import to Testlab later, so you should name it accordingly. For now, you can call it “My ruleset” or “Mithrandir” if you wish. Another required field is the title of the test run that will be created by default using this ruleset. It can be overwritten later, but if you do not give a name then, this one will be used. Give it a name now – you can use, for example, “My results”. You can also give defaults for Milestone, Version, and Environment if you wish, and mark some tags to be added to the test run. You can also define whether issues should be created automatically if test runs contain failing tests, and how the issues are created. Once you save your changes, the ruleset is ready and you can add your rules to it.

The second thing is to import your result file into Testlab. Find it on your computer and drag & drop it into the Testlab window. You should now see your test results in raw format, and you will probably recognize the test names and results. The test cases are not yet permanently saved to Testlab – you still have to decide how you wish to do that. If you don’t have your own result files but want to try out how it works, select a preloaded result file from the demo project. The results in the demo project are already loaded into Testlab, but you can still follow the same process. See the picture below.


Preloaded result file

The third phase is decision making. Basically, you have two things to decide: which test cases you want to bring in, and how you want to see them in Testlab. Let’s start with the simplest case – you wish to bring all test results in and have one test case in Testlab per original test result. We’ll get to the other options and the reasons why you might want to use them a bit later in this article. To add a rule that gets you these test results, right-click over your test results (in the middle of the screen), keep the rule type as “Add and map” and click “Add a new rule”. This is the rule that brings your results in and also creates test cases automatically. It matches test cases whose identifier starts with the text in the “Value” box. Changing that value dictates which test cases this rule picks.
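Conceptually, the matching works as a prefix check against each test’s identifier – the identifiers and the prefix below are made-up examples:

```shell
# An "Add and map" rule picks up every result whose identifier starts
# with the rule's "Value". Made-up identifiers for illustration only.
value="com.example.auth"
for id in com.example.auth.LoginTest com.example.cart.CheckoutTest; do
  case "$id" in
    "$value"*) echo "mapped:    $id" ;;
    *)         echo "unmatched: $id" ;;
  esac
done
```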

Next, you need to click “Apply rules” because Testlab does not do that for you. You’ll see the “Preview test run” on the right part of the screen. This part shows the situation from Testlab’s point of view – what the names of the created test cases will be, the test results and so on. For this case, where you created one test case per result, the values on the right should look really familiar.

The fourth and final phase is confirming what you see and saving the results. Click “Save results…” in the bottom right corner. You get a window where you can enter details for the test run to be created. The values are prefilled from the ruleset you used, so you can just click “Save results” and have the results saved right away, or you can modify them if you wish. The title of the test run is used to distinguish different test runs, so give it a recognizable name (or go with the one inherited from the ruleset if it fits). The source is prefilled with your name when importing results manually. Let’s leave the other values as they are for now – we’ll cover those later. Just click “Save results”.


How you see the results

Now you have the results in – great! The main place to peruse the automated test results is the “Automated test runs” tab, which you should be seeing now. The top part, consisting of imported test runs, shows their relevant statistics. Result mappings show light green for test cases you have mapped and grey for test cases you have chosen to ignore. This is what you want to see. If this bar shows pink, there are unmapped tests, for which you probably want to add mapping rules. This can happen, for example, when you push results from an outside source with tight rules and somebody creates new tests that do not match the rules. For clarity: you don’t have to update the rules every time new test cases are pushed in – only when the existing rules do not catch them. So if you create new test cases with your automation tool, create a result file and upload it to Testlab; if the identifier of the new test cases starts with the value you entered, your existing rule will work just fine.

You’ll also see the test cases and their results in other parts of the Testlab user interface. If you go to test case design, you’ll find the newly created automated test cases there. You can add additional information to these test cases, like a description of the purpose of the test case, or other fields to classify them. You can also define which requirements/user stories the test cases verify in the “Test coverage” view. This way you’ll see the coverage status of your requirements. To try it out, just drag & drop a test case over a requirement to mark the test case as covering that requirement. You can also use all the powerful reports to share the quality situation of automated testing.


Best practices

So now you have seen how to get test results in with a simple case. Sometimes this is exactly what you need, but it is also good to take heed of some best practices, especially if you are going to import results over a long period. When your relevant test data is organized well, it is easier to see the quality situation.


General practices

Classify, classify, classify! What applies to manual testing is also important for automated testing – the more Testlab knows about the target of testing, the more relevant information you get out of it. The more relevant the information, the better decisions you can make. So if you have some sort of release schedule in use, create the releases as milestones in Testlab and mark those milestones to be used for test automation. This allows you to see what and how much you tested in a certain milestone. The same goes for versions and environments.

Less is more: at first, it feels intuitive to get all possible automated test results into Testlab – it has powerful reports and radiators that show your test execution data nicely. There is a reason, however, not to bring in all test results as they are: we humans are bad at interpreting really huge loads of data in one go. If you ran a thousand tests every half an hour, you would get 1.5 million results in a month. The detail of when a certain test failed in the past is rarely relevant in that mass. Testlab offers you a few ways to keep the data relevant:

  1. Ignore irrelevant results: This is the most straightforward option – if the result files you bring in contain tests that are irrelevant for you, just ignore them completely so they never end up in Testlab. This might make sense, for example, when your build runs test cases that are not relevant for your project.
  2. Aggregate test results: You might run a huge number of small test cases, each of which tests just a small detail. You are probably interested in which detail it is only when a test fails; when the tests run successfully, you are probably happy just knowing that the whole group of tests passed. For this case, Testlab offers the possibility to map many imported test results to one Testlab test case. The advantage of doing this is that you get a more human-approachable number of test cases visible in Testlab, it is easier to spot the area where something fails, and if you map requirements to the test cases, maintaining the mapping gets a lot easier. As the test case details contain the information on which imported test cases were run, you will not lose any information either. Also, if any of the imported test cases fails, the Testlab test case fails, so you will not miss any failure.
  3. Save results at relevant intervals: It is not always wise to keep all test results in Testlab; it is enough to act when something fails and to know the overall situation. This holds true if you run automated tests, for example, many times a day – it is not likely that you will want to check the exact results of an individual past run anyway. To save results at relevant intervals, you can overwrite the test runs in Testlab: you always import the results when the tests are run but keep only the latest. You can control this via naming conventions from, for example, the Jenkins plugin.


Importing results from Robot framework

Testlab supports reading Robot Framework’s own output format, which enables a bit richer output than xUnit XML. It is thus advised to send the results in Robot Framework’s own XML format instead of xUnit XML. This way you will see more information about what kind of test was run.

While the anticipated setup is generally to have Jenkins run the Robot tests, remember that you can also send the results directly to Testlab by dragging and dropping. This makes sense especially when you are developing the tests and want to share the progress with your team. Quite often the tests done using Robot Framework are depicted on a high level (as acceptance tests usually are). This means you probably want to see the results in Testlab as they are in Robot Framework. The suite structure is also usually well suited for Testlab usage. This means that bringing the results in is easy – try using “Add and map” for all results.
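As a simplified sketch of that format – real Robot Framework output.xml files contain considerably more detail, such as keywords, timestamps, and suite metadata:

```xml
<robot generator="Robot Framework">
  <suite name="Login">
    <test name="Valid Login">
      <status status="PASS"/>
    </test>
    <status status="PASS"/>
  </suite>
</robot>
```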


Automating manual system testing

When automating system testing, you might want to consider keeping the existing manual testing tree in place and putting your automated test cases in the same tree. In some cases, this makes reporting and managing requirement links easier. You can always make the distinction between the tests using the “type” field when needed.


Migrating from the previous versions of Testlab

Compared to earlier versions of Testlab, Testlab – Automaton brings significant changes in how automated tests are stored in the system and presented in the user interface.

In Testlab – Automaton, test cases and test runs are typed as manual or automated. Results of manual tests are recorded in manual test runs and results of automated tests are stored in automated test runs. In the user interface, there are different views to manage and present these assets:

  • “Manual testing”: This view presents all manual test runs and offers an option to execute (manual) test cases. It is basically the “Test runs” view from previous Testlab versions.
  • “Test automation”: This view shows all automated test runs and offers a mapping workbench to define rules for how the tests should be mapped when results are imported. This is a new view in Testlab – Automaton.

You can read more about these features in the Help manual of Testlab.


Test case mapping identifiers

Earlier, test cases could be automated by defining a custom field with a value that maps to the identifier of your automated test (or part of it). In Automaton, a custom field is no longer necessary and all this happens under the hood. When upgrading to Automaton, you have an option to migrate all test cases which have a test identifier mapping in a custom field to the Automaton version’s “automated test case” type. The mapping identifier will be moved from the custom field to the “Automation rule value” field in the migrated automated test case. For the how, the when, and the pros and cons of this, please see the questions in the Migration FAQ below.


Existing test automation results

If you have pushed test results from outside sources such as Jenkins, you most likely have test runs in your projects which hold results for these automated test cases. When upgrading to Automaton, you have an option to migrate all these test runs to the new model. This way the test runs will be converted to the automated type and the results will be shown in Testlab’s new Test automation view. For the how, the when, and the pros and cons of this, please see the questions in the Migration FAQ below.


Migration FAQ

a) I just want to continue using Testlab without doing anything – can I skip all this migration stuff?

Yes, you can.

The only thing you should consider is that if you have automated tests in your projects (for example, you are using Jenkins to push results in), there might be some downsides to doing so. See question c.


b) I have only manual test cases in my projects. As far as I know, we are not using automation at this time. What should I do?

Great, you probably don’t have to do anything. The only thing we recommend is to make sure that you are really not using automation. For example, if you know Jenkins is integrated with your Testlab for some projects, you most likely are using automation. You could also ask your administrator if any automation features are in use. If it turns out that you are using automation, please see question c for your options.


c) We have automated test cases with test identifiers in our project. What are our options?

Your options are as follows:

  • Continue using Testlab as you are without migrating any data
    If you plan to continue pushing automated test results to your project(s), we don’t recommend this approach. If you still wish to do so, there are some downsides:

    • New results pushed will most likely create new test cases for your automated tests. This is because mapping via custom field value is not supported anymore and the (default) ruleset will create new test case stubs for your tests. Your old results (and test cases linked to them) will exist as expected, but you will end up with duplicate test cases for the results added via the new test automation logic. If this is to be avoided, we recommend you migrate your test cases.
    • If the project has duplicate test cases for automated tests, care must be taken when reporting data to filter in relevant tests (old tests or new tests).
    • The (default) ruleset will create the full-depth test case tree for your automated test hierarchy. This means that if your current automated tests are scoped to the package level, the new test hierarchy will not match the old one, and the newly created tests will need some manual work to be organized into the same hierarchy as the old test cases.
  • Migrate your automated test cases to the new rule-based system
    This means that your current (ready-approved) test cases which have mapping identifiers in a custom field will be converted to automated (typed) test cases, and the mapping identifiers will be removed from the custom field. Keep in mind that if your Jenkins is configured to create new test cases for the automated tests, you will need to update your ruleset after the migration to do so. By default, after the migration, the default ruleset will not create new test cases – it will only map results to your existing test cases via the “Automation rule value” field. This migration has the following benefits:

    • When pushing new results, the results will be mapped to the previously existing test cases in the project.
    • Reports will most likely work as they are. Only the type of the test case will be changed.
  • Migrate your automated test cases to the new rule-based system and migrate your existing test results
    As explained above, you have an option to migrate your test cases to the new system. Keep in mind, though, that this does not automatically convert the results of these tests to the new system. You have an option to also migrate your results.
    When you do so, all existing test runs which contain only automated test cases will be converted to automated test runs. All test runs which contain manual tests will be left unconverted. After the migration, manual test runs will be found in the “Manual testing” view and automated test results in the “Test automation” view. This conversion has the benefit of migrating the data to the format of the new system, so that the current and all future features tied to automated test runs will also work with your old result data.


d) We prefer not to migrate anything but continue using Testlab. Is there still something I have to do for automation to work? 

The old way of mapping test identifiers via custom fields is no longer supported and the API used to push results in has changed. Because of this, at a minimum:

  1. If you are using Jenkins, you must update your Testlab Jenkins plugin and reconfigure it. Various settings which in earlier versions of Testlab lived on the Jenkins plugin side have been moved to the settings of a ruleset. Please refer to Testlab’s documentation on how to manage rulesets and their settings.
  2. If you have some custom code pushing results via Testlab’s REST API, you most likely need to update your integration to work with Testlab’s rulesets.

Please see question c for the possible downsides of skipping the migration of your existing automated test cases.


e) We are using Testlab as a Service and I would like to migrate. What do I do?

You should read these questions thoroughly and plan your migration with the administrative user (and persons responsible for the projects in your Testlab) and decide which projects you want to migrate in your installation. You will also have to check which custom field(s) you are using to map the automated tests.

When you know

  • which projects to migrate and
  • what is the title of the custom field used to map the automated tests in each project,

please submit a ticket to Meliora’s support and ask for the migration of your data. Please note that we will not automatically migrate any of your data and you need to submit a ticket for this to happen.


f) We have an on-premise installation of Testlab. How do we migrate the data when we upgrade to Testlab – Automaton?

You should read these questions thoroughly and plan your migration with the administrative user (and persons responsible for the projects in your Testlab) and decide which projects you want to migrate in your installation. You will also have to check which custom field(s) you are using to map the automated tests.

When you know

  • which projects to migrate and
  • what is the title of the custom field used to map the automated tests in each project,

you can use the migration scripts provided in the upgrade package to migrate each project. Information on how to (technically) run these scripts after the upgrade of the Testlab is included in the additional notes inside the upgrade package.


g) I have questions and need help deciding, what do I do?

Please contact Meliora’s support and just ask, we are happy to help!


Meliora Testlab Team


Tags for this post: automation features integration product release usage 


Upcoming releases with Automation workbench

In this post, we will discuss our upcoming release schedule and offer a glimpse of the features in upcoming releases.


Challenges with test automation

The business challenges related to interpreting results of automated tests are obvious. Tests are often created by developers with varying motives, the number of tests is typically large, and traceability to the specification is often lacking. Features are needed to make sense of these results and gain business benefits outside the scope of regression or smoke testing – which, in practice, tends to be the main use for automated tests.

If you are able to map and scope the tests efficiently, you

  1. have a better understanding of what is currently passing or failing in your system under testing,
  2. have your team collaborate better,
  3. ensure that key parties in your project can make educated decisions instead of being reactive and
  4. have a tool to really transition from manual to automated testing as existing test plans can be easily automated by mapping automated tests to existing test cases.


Automation in Testlab

In Testlab, results from automated tests can be mapped to test cases in your project. Working with your automated tests this way has benefits, as the results can be tracked and reported in exactly the same way as the manual tests in your current test plan. When results for automated tests are received (for example, from a Jenkins job), a test run is added with the mapped test cases holding the appropriate results. The mapping logic is currently fixed: if the ID of an automated test starts with an ID bound to a test case, that test case receives the results of the automated test.
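The fixed “starts-with” mapping described above can be sketched as follows. This is an illustrative sketch only – the test case IDs and automated test IDs below are made up, not taken from any real project:

```shell
# Sketch of the fixed "starts-with" mapping: an automated test whose ID
# starts with an ID bound to a test case gets its result mapped to that
# test case. The IDs below are hypothetical examples.
map_result() {
  auto_id="$1"   # ID reported by the automated test
  # IDs bound to test cases in the (imaginary) project:
  for tc_id in "com.example.LoginTest" "com.example.CheckoutTest"; do
    case "$auto_id" in
      "$tc_id"*) echo "$auto_id -> $tc_id"; return ;;
    esac
  done
  echo "$auto_id -> (unmapped)"
}

map_result "com.example.LoginTest.testValidCredentials"
# -> com.example.LoginTest.testValidCredentials -> com.example.LoginTest
```

Because the logic is a plain prefix match, a single test case ID such as `com.example.LoginTest` collects the results of every automated test method under that class.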


Upcoming Automation workbench

Future versions of Testlab will include a new Test automation view with a workbench aimed at managing the import and mapping of your automated tests. A screenshot of the current prototype can be seen below (click for a peek):

Automation workbench prototype


The workbench

  • allows you to define “sources” for your tests: a source is a new concept which allows you to configure how the incoming results from that source are handled. You might want to think of the source as your “Jenkins job”, or your Jenkins installation as a whole if your jobs follow the same semantics and practices. A source is defined with “rules”, which define how the automated tests are mapped. The workbench
  • features a rule engine, which will allow comprehensive mapping rules to be defined with
    • ignores (which tests are ignored),
    • creation rules (how test cases for tests are automatically created) and
    • mapping rules (such as “starts-with”, “ends-with”, regular expressions – defines how the results for the tests are mapped to your test plan) and
  • the workbench allows you to
    • easily upload new results by dragging and dropping,
    • execute any rules as a dry run to see how the results would be mapped before the actual import,
    • create any test cases by hand – if preferred over the creation rules – and
    • use pre-built wizards which help you to easily create the needed rules by suggesting appropriate rules for your source.

We feel this is a unique concept in the industry to help you get better visibility into your automation efforts.

The prototype shown is work-in-progress and is due to be changed before release.


Upcoming releases

As our roadmap implies, the new major automation-related features are planned for the Q3 release of our 2019 release cycle. The team will be hard at work on these new features, which means that the Q2 release will be a bug-fix release only. That said, we will release maintenance releases in between, but the next feature release will be the version with the automation features in Q3/2019.

Meliora Testlab Team


Tags for this post: automation features integration product release usage 


Changes to authentication with Atlassian plugins

When integrating Testlab with your Atlassian products such as JIRA, credentials may be involved on the Atlassian side. Previously, for example with JIRA, the integrations required you to create a user account in JIRA and configure the credentials (including the password) in your Testlab project. Atlassian has made some changes, and at least with JIRA Cloud, using passwords in this kind of scenario has been deprecated.

The documentation for setting up our integrations has been updated. In this post, we also provide instructions on how to get your integrations up and running.


Replacing passwords with API tokens

At least for some operations, the integrations rely on operations provided by JIRA’s REST APIs. It may vary a bit depending on your JIRA version, but at least with the current JIRA Cloud, you should replace all passwords with so-called API tokens. For the (JIRA) user you are integrating with, create an API token and use the token as the password on the Testlab side. For technical details, you can read more about JIRA’s authentication via the links provided at the end of this article.

When using JIRA Cloud, to create an API token,

  1. With the user account you are integrating with, log in to https://id.atlassian.com/manage/api-tokens
  2. Click Create API token
  3. Give your token a label and click Create
  4. Click Copy to clipboard and paste the token as a “password” when configuring the integration at Testlab side

That’s it. Your server-installed JIRA might also require the use of API tokens. If so, refer to your JIRA administrator or the JIRA documentation for help on how to do this.
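Under the hood, nothing changes in how the credentials travel: the API token simply takes the place of the password in HTTP Basic authentication. The sketch below illustrates what Testlab effectively sends; the email address and token are placeholders, not real credentials:

```shell
# Illustration of how a JIRA API token replaces the password in HTTP Basic
# auth. The email and token below are placeholders. Testlab builds this
# header for you when you paste the token into the "password" field.
EMAIL="user@example.com"
API_TOKEN="your-api-token"

# Basic auth concatenates "email:token" and base64-encodes the pair:
CREDENTIALS=$(printf '%s' "$EMAIL:$API_TOKEN" | base64)
echo "Authorization: Basic $CREDENTIALS"

# The equivalent manual check with curl (not executed here) would be:
# curl -u "$EMAIL:$API_TOKEN" https://yoursite.atlassian.net/rest/api/2/myself
```

Note that the token is no safer than a password if leaked, so treat it with the same care; the benefit is that it can be revoked independently of the account password.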




Tags for this post: integration jira security usage 


Testlab – Lost Cosmonaut release

The Meliora team is proud to announce a new version of Meliora Testlab codenamed Lost Cosmonaut. A major addition in this release is the new radiator for real-time tracking of your issues. In this version, you might also see a speed boost due to a new server protocol between the UI and the server. Read more about the changes below.


Issue situation radiator
Issue situation radiator

Radiators are a reporting-related concept introduced in an earlier release to give an always up-to-date view of your project situation. Lost Cosmonaut features a new ‘Issue situation’ radiator to give you a fast glance at how the issues in your project are currently being handled.

The radiator provides multiple different statistics to help you improve your issue handling work and to identify possible bottlenecks in your processes. A more detailed description of what statistics the radiator provides and how the radiator should be configured can be found in the help manual of Testlab.


Saveable table views

In earlier versions of Testlab you could save the current filters in your table views. Lost Cosmonaut improves on this by introducing saveable table views.

In addition to filters, the function now saves the whole configuration of your table including the visible columns, column widths, sorting and grouping settings. You also have an option to reset the view back to the original settings. Keep in mind though that when resetting the view you lose all your current settings in the table.







“Show non-inactive” assets in trees

A new control “Show non-inactive” has been implemented to trees representing test cases and requirements. By default, this checkbox is unchecked which hides all deprecated assets from the tree. To show all deprecated assets, check the control.

Keep in mind that this option also controls if the deprecated assets are shown in the appropriate table views (test cases or requirements). This control makes it easier to hide and show the deprecated assets without the need for using the status related filters to achieve the same.


Performance improvements via a new server protocol

A new server protocol, long in the making, is introduced, which the UI uses to transfer data from the server. This should bring performance improvements and make the UI snappier in most places. Keep in mind though that the speedups gained depend highly on the amount of data in your project, the browser used and the speed of your network connection.


In addition to the above

In addition to numerous smaller changes and the major ones above,

  • the Milestones view now has a “Hide completed” checkbox to hide already completed milestones,
  • milestones now have a new start date field (currently an informative field used only in radiators),
  • asset tagging has been improved by adding a new “tag.manage” permission to control who can add completely new tags and remove existing tags,
  • during testing, when a new issue is added, all steps of the test case – instead of only the failed ones – are included in the description of the issue,
  • by using the search field in the top right toolbar you can now search for test cases verifying specific requirements (“verifying:REQ”) and
  • if the “unique ID” of test cases has been configured to be visible in the project, the ID is automatically shown in the views related to the execution of tests.


Thanking you for all your feedback,

Meliora team

Lost Cosmonaut

Yuri Gagarin became the first man in space in 1961. This was an irrefutable major feat for the whole of humankind. But at what price?

During the so-called Space Race, between 1955 and 1972, the Soviet Union and the United States competed over which of them would be the first to conquer space. On April 12, 1961, the first recognized successful space flight with a person onboard was made. But theories exist saying that there were many incidents in the space programs that were kept secret, especially in the Soviet Union’s program. These theories stem from recordings made by amateur radio operators featuring cosmonauts on failed space missions before Gagarin.

(Source: HowStuffWorks, Wikipedia, illustration from Open Clip Art Library / public domain)


Tags for this post: announce features product release screenshots 


Official support for Jenkins Pipelines

A continuous delivery pipeline is an automated process for delivering your software to your customers. It is the expression of steps which need to be taken to build your software from your version control system to a working and deployed state.

In Jenkins, Pipeline (with a capital P) provides a set of tools for modeling simple and complex pipelines in a domain-specific language (DSL) syntax. Most often this pipeline “script” is written to a Jenkinsfile stored inside your version control system. This way the Pipeline definition can be kept up to date as the actual software evolves. That said, Pipeline scripts can also be stored as-is in Pipeline-typed jobs in your Jenkins.


Meliora Testlab plugin for your Pipelines

Meliora provides a plugin to Jenkins which allows you to easily publish your automated testing results to your Testlab project.

Previously, it was possible to use the plugin in Pipeline scripts by wrapping the plugin to a traditional Jenkins job and triggering it with a “build” step. A new version 1.16 of the plugin has been released with official support for using the plugin in Pipeline scripts. This way, the plugin can be directly used in your scripts with a ‘melioraTestlab’ expression.

When the plugin is configured as traditional post-build action in a Jenkins job, the plugin settings are set by configuring the job and entering the appropriate setting values via the web UI. In Pipelines, the settings are included as parameters to the step keyword.


Simple Pipeline script example

The script below is an example of a simple Declarative Pipeline script with Meliora Testlab plugin configured in a minimal manner.

pipeline {
    agent any
    stages {
        stage('Build') {
            // ...
        }
        stage('Test') {
            // ...
        }
        stage('Deploy') {
            // ...
        }
    }
    post {
        always {
            junit '**/build/test-results/**/*.xml'
            melioraTestlab(
                projectKey: 'PRJX',
                testRunTitle: 'Automated tests',
                advancedSettings: [
                    companyId: 'mycompanyid',
                    apiKey: hudson.util.Secret.fromString('verysecretapikey'),
                    testCaseMappingField: 'Test class'
                ]
            )
        }
    }
}
The script builds, tests and deploys the software (with the steps omitted) and, as a post action that always runs, publishes the generated test results to your PRJX project in Testlab, storing them in a test run titled ‘Automated tests’. Note that the advancedSettings block is optional: if you configure these values in the global settings in Jenkins, the plugin can use the global settings instead of the values set in the scripts.


Pipeline script example with all settings present

The example below holds all parameters supported (at the time of writing) by the melioraTestlab step.

pipeline {
    agent any
    stages {
        // ...
    }
    post {
        always {
            junit '**/build/test-results/**/*.xml'
            melioraTestlab(
                projectKey: 'PRJX',
                testRunTitle: 'Automated tests',
                comment: 'Jenkins build: ${BUILD_FULL_DISPLAY_NAME} ${BUILD_RESULT}, ${BUILD_URL}',
                milestone: 'M1',
                testTargetTitle: 'Version 1.0',
                testEnvironmentTitle: 'integration-env',
                tags: 'jenkins nightly',
                parameters: 'BROWSER, USERNAME',
                issuesSettings: [
                    mergeAsSingleIssue: true,
                    reopenExisting: true,
                    assignToUser: 'agentsmith'
                ],
                importTestCases: [
                    importTestCasesRootCategory: 'Imported/Jenkins'
                ],
                publishTap: [
                    tapTestsAsSteps: true,
                    tapFileNameInIdentifier: true,
                    tapTestNumberInIdentifier: false,
                    tapMappingPrefix: 'tap-'
                ],
                publishRobot: [
                    robotOutput: '**/output.xml',
                    robotCatenateParentKeywords: true
                ],
                advancedSettings: [
                    companyId: 'mycompanyid', // your companyId in SaaS/hosted service
                    apiKey: hudson.util.Secret.fromString('verysecretapikey'),
                    testCaseMappingField: 'Test class',
                    usingonpremise: [
                        // optional, use only for on-premise installations
                        onpremiseurl: 'http://testcompany:8080/'
                    ]
                ]
            )
        }
    }
}
If you wish to familiarize yourself with the meaning of each setting, please refer to the plugin documentation at https://plugins.jenkins.io/meliora-testlab.


(Pipeline image from Jenkins.io – CC BY-SA 4.0 license)


Tags for this post: automation best practices features jenkins plugin release usage 


Testlab – Plumbbob release

New year, a new version of Testlab. We are glad to announce a new version of Meliora Testlab – Plumbbob. This release includes multiple new features including Radiators. A Radiator is a new reporting related concept which aims to help you track the progress of your testing in real-time. Please read more about the new features and changes in this release below.


Radiators

Radiators – a new reporting-related concept introduced in this release – are real-time views of your project data refreshed at frequent intervals. Radiators differ from regular reports in the sense that regular reports are intended to be ‘printed on paper’ whereas radiators are intended for real-time tracking on a display screen.

A typical usage scenario for a radiator is a so-called feedback display your team might have in their break room. The Plumbbob release includes two radiators:

  • Testing feedback radiator, which shows the current situation of issues and testing in progress.
  • Multi-radiator, which allows you to configure multiple (single-)radiators to be shown in regular intervals.

More radiators for specific tracking purposes are to be added in future releases.

Note: A Radiator is protected by a password which is needed to display it. Opening up a radiator (or multiple radiators from a single browser) consumes a single license from your license pool. For the time being, Radiators are also usable in Self-service subscriptions – so feel free to try them out! 


Coverage filtering

The test coverage view now has a search-word based filtering function which can be used to filter relevant data into the coverage grid. The filtering itself works similarly to the filtering in the requirement and test case trees. For example, in Test case coverage, filtering with “priority:high” shows coverage only for “high” prioritized test cases.


Grid context menus

Most grids presenting data have row-specific controls which are represented by small buttons shown at the end of the row when the mouse cursor hovers on the row. In Plumbbob, you can also access these functions via a context menu which is shown when you click the right mouse button on the row.



Special pasting

When managing test cases you can copy test cases around in your project using the left-hand side test case tree. A “Paste special…” function has been added to the tree, which enables you to select the content copied for the selected test cases. For test cases, you can choose whether to copy the steps, attached files and linked requirements, and optionally copy the test cases retaining their workflow status instead of copying them to the “In design” status.


Assigning tests to individuals

To ease managing your testing, test cases in test runs can now be assigned to individual testers. When a test case to be run is assigned to a tester, Testlab automatically selects it for testing when that tester continues their testing.


Categorized project events

When your team works in Testlab, events are shown in the top right corner of the UI. In Plumbbob, you have an option to hide these event messages and go through them at a later time. All events received are stored and categorized behind a red indicator. When clicked, you can read through them and/or dismiss them permanently.


In addition to the above

In addition to numerous smaller changes and the major ones above,

  • test case tree can now be filtered with requirements the test cases are currently verifying. Using search word such as “verifying:something” brings up test cases which are verifying matching requirements by searching requirements’ name and ID,
  • in Test execution, it is now easy to run test cases still not approved as ready. Running these test cases requires appropriate permissions (permissions to run test cases and to change the test case status to ready),
  • reports, which allow you to choose the fields listed, now don’t render the listings at all if no fields are selected,
  • data imports – such as test cases, requirements, … – now add timeline events to the dashboard and
  • most grids and reports now have an option to filter in assets by defining “Milestone: No milestone” criteria.


Thanking you for all your feedback,

Meliora team

Plumbbob

At the Nevada Test Site, in mid-1957, the United States conducted the biggest and longest series of nuclear tests ever held on the continent. The operation was called Operation Plumbbob.

The scientists were worried about radiation spilling into the atmosphere. To get around this issue, the operation included nuclear blasts conducted underground, in deep boreholes. These were to become the world’s first underground nuclear tests.

In a test codenamed Pascal B, the team experimented on how air pressure affects the explosion and the spread of radiation. The borehole was welded shut with a 900 kilogram (2000 lb) steel cap. Then the a-bomb at the bottom of the shaft was detonated.

The steel cap over the detonation was blasted off at a speed of more than 240 000 kilometers per hour (66 km/s, 41 mi/s, 150 000 mph), which has been the topic of some discussion later on. As the cap was never found, it has been speculated that, since it easily exceeded the escape velocity of the Earth, the cap (or part of it) might have been the first manmade object launched into space. This would beat Sputnik, launched in October 1957.

(Source: The Register, Wikipedia, photo by National Nuclear Security Administration / Nevada Site Office (Public Domain))


Tags for this post: announce features product release screenshots 


Testlab – Circleville release

We’ve released a new version of Meliora Testlab – Circleville. Please read more about the new features and changes in this release below.


Choosing fields to be reported

In Circleville, when rendering reports, you have an option to choose the fields you wish to include on your report. This applies to most reports which feature a table or listing of some kind. In previous versions, the report templates included listings with a pre-defined set of fields present.

When choosing the fields, you can also arrange them in the desired order. The chosen fields and their order are saved as you configure your report.


Configurable requirement classes

You now have an option to configure your own classes for requirements, if needed. When you configure one, you enter a title and choose an icon for it. This information is used to present the requirements in the UI. The classes can also be used in reporting.

By using customized classes you have an option to choose the custom fields which are specific to each class. See below.


Different custom fields for different types of assets

When configuring custom fields you now have an option to choose which type of issue or which class of requirement the field applies to. This way, for example for issues, you can have a different set of fields for “defects” and different set of fields for “new features”. As said above, this applies to different classes of requirements too.


Help manual with inbuilt search

The help manual incorporated into Testlab now has a built-in search function. Searching the manual is easy: just enter a search term in the field in the lower left-hand corner. The contents index of the manual highlights pages with hits and, in addition, any hits on the currently open help page are highlighted in yellow.


In addition to the above
  • Workflow changes: Deprecated assets such as deprecated requirements or test cases cannot be edited anymore. To edit them, use appropriate action to transfer the deprecated assets back to design.
  • Workflow changes: By default, closed issues cannot be edited anymore. To edit closed issues a new permission “defect.edit.closed” must be granted to the user.
  • When test cases in a test set of Planning view are hovered on, the details of test cases are presented in a tooltip.
  • Table views of requirements and test cases now show the number of assets presented.
  • Links to open issues in Testlab can now be formatted to include the issue ID instead of the primary key.
  • Reporting: “List of issues” and “Issue grouping” report templates now support a new field “Requirements” which allows you to report the requirements linked to the issues via the test cases the issues are linked with.


Thanking you for all your feedback,

Meliora team

Circleville

A small town in Ohio US, Circleville, is best-known today as the host of the Circleville Pumpkin Show held to celebrate local agriculture since 1903. This picturesque town has a sinister history of its own, though.

A mystery, still unsolved, spans from sometime in 1976 to the late 90s, when local residents started receiving personal and threatening letters with details of their personal lives included. Thousands of these letters, called the Circleville Letters, were sent to citizens and local city officials. The letters were written in block letters and sent by an anonymous sender.

Finally, a man thought to be responsible for the letters was apprehended in a case related to a few recipients of these letters. He was found guilty of attempted murder and sentenced to years in prison.

The letters kept coming, though. The officials put the man in solitary confinement, which did not stop the letters, and they became certain that this man could not be sending them. Much later, even a team of producers working on a television documentary received one. The letters kept coming until the late 90s and then suddenly stopped.

(Source: Gizmodo, Reddit, photo by Aaron Burden)


Tags for this post: announce features product release screenshots 


Testlab – Globster release

We’ve released a new version of Meliora Testlab – Globster. Please read more about the new features and changes in this release below.


Restricting the visibility of assets

You can now define rules in your projects which restrict the visibility of certain assets. The rules can be applied to requirements, test cases and issues, and matched against their workflow status or values in customized fields.

For example: if you have 3rd party users in your project from whom you wish to hide a set of requirements, you can define a rule which limits the visibility of these requirements to your own users in a certain user role.


Restricting the visibility of customized fields

Similarly to assets, for all custom fields it is now possible to choose the user roles for which the fields are visible. This makes it possible to hide some information in your assets from a certain group of users.

When the field is restricted for certain roles, only users in these roles have access to the information in this field. Please refer to the help manual of Testlab for more details on how the data is visible and/or hidden.


Rich-text custom field type

A new type of custom field has been added which enables you to add a field with long, richly formatted text to your assets. This new type of field differs in logic from other custom fields in that all rich-text typed custom fields are always presented on a separate tab in the design view.


More custom fields

The maximum number of custom fields per asset type has been increased from 10 to 150.


Updates to plugins

The Jenkins, Confluence and JIRA plugins have been released with bug fixes and minor enhancements. Please update accordingly.


In addition to the above
  • The Run tests in… option in the test case menu now has a filterable picker for choosing the test run the tests should be executed in.
  • Selecting a report to be viewed is now easier in the UI, as the listing of reports is configurable and easier to filter.
  • With the Execution history tab in the Test design view, it is now possible to inspect the combined execution history of all of a test case’s revisions.
  • Reports can now also be generated in Finnish.


Thanking you for all your feedback,

Meliora team

Globster

A globster is an unidentified organic mass that washes up on the shoreline of an ocean or other body of water. The term was first coined by Ivan T. Sanderson for the so-called Tasmanian carcass found in 1960.

Globsters may present such a puzzling appearance that their nature remains controversial even after being officially identified by scientists. Some globsters lack bones or other recognisable structures, while others may have bones, tentacles, flippers, eyes, or other features that can help narrow down the possible species. The picture on the right is the “St. Augustine Monster” that washed ashore near St. Augustine, Florida, in 1896. It was first said to be the remains of a gigantic octopus, but a 1995 analysis concluded that the globster in question was a large mass of the collagenous matrix of whale blubber, likely from a sperm whale.

(Source: Wikipedia)


Tags for this post: announce features plugin product release screenshots 


Testlab – Canis Majoris release

We’ve released a new version of Meliora Testlab – Canis Majoris. This version includes major additions to the REST interfaces of Testlab and several UI related enhancements. Please read more about the changes below.


REST API enhancements
REST API enhancements

The REST based integration API of Testlab has been enhanced with a number of new supported operations. With Canis Majoris, it is possible to add, update and remove

  • requirements,
  • test categories,
  • test cases and
  • issues.

The add operation maps to HTTP’s POST method, the update operation maps to the PUT method (and, in addition, supports partial updates), and the remove operation maps to the DELETE method. More thorough documentation for these operations can be found in your Testlab instance via the interactive API documentation (/api).
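As a hypothetical sketch, the mapping could look like the following curl invocations. The host name, resource path and credentials below are placeholders, not the real endpoints – consult the interactive API documentation at /api in your own Testlab instance for the actual paths. The commands are only printed, not executed, so the sketch runs without a Testlab instance:

```shell
# Placeholder base URL and credentials -- replace with your own instance's
# values as documented at /api. Nothing here is sent over the network.
BASE="https://mycompany.melioratestlab.com/api"
AUTH='-u apiuser:apipassword'

# Add -> POST, Update -> PUT (partial updates supported), Remove -> DELETE:
echo "add:    curl $AUTH -X POST   -H 'Content-Type: application/json' -d @issue.json $BASE/issues"
echo "update: curl $AUTH -X PUT    -H 'Content-Type: application/json' -d @fields.json $BASE/issues/123"
echo "remove: curl $AUTH -X DELETE $BASE/issues/123"
```

The same POST/PUT/DELETE convention applies to requirements, test categories and test cases as well.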


Test categories with fields

In previous versions of Testlab, test cases were categorized into simple folders with a name. In Canis Majoris, test categories have been enhanced to be full assets with

  • a rich-text description,
  • time stamps to track creation and updates,
  • a change history and
  • the possibility to add comments on them.


In addition to the above
  • Date and time formats are now handled more gracefully in the user interface by respecting the locale sent by the browser.
  • Test cases of a requirement can now be easily executed by choosing “Run tests…” from
    • the tree of requirements or
    • from the table view of requirements.
  • Similarly, the test cases linked to a requirement can be easily added to your work (test) set by choosing “Add to work set” from the table view of requirements.
  • The “Test case listing” report renders execution steps in an easy-to-read table.


Thanking you for all your feedback,

Meliora team

Canis Majoris

VY Canis Majoris (VY CMa) is one of the largest stars detected so far and is located 3900 light-years from Earth. The estimates of its size vary, but it is estimated to be 1400 to 2200 solar radii (the distance from the center of the Sun to its photosphere).

The size of this object is difficult to comprehend. It would take 1100 years travelling in a jet aircraft at 900 km/h to circle it once. It would take over 7 000 000 000 (7 billion) Suns or 7 000 000 000 000 000 (7 quadrillion) Earths to fill VY Canis Majoris. There are also a few videos on YouTube which try to explain the size.

(Source: Wikipedia, A Sidewalk Astronomer blog)


Tags for this post: announce features integration product release screenshots 


How to: Migrate from TestLink to Meliora Testlab

Some of our customers have moved from TestLink to our tool, Meliora Testlab. We’ve been asked to make this transition easier, so we decided to document this migration path; this post describes how the migration works. Basically, the migration moves your important test data from TestLink to Testlab so you can continue working in your new tool.


When changing the tool, do I need to change the way I work?

This is a tough subject that deserves a post completely dedicated to it. To put it shortly, Testlab offers a lot of features that allow working in new ways, but most of the TestLink features are also in Testlab, so if there is no need to alter your way of working, you can do most of your work in the same way as before. The data in TestLink’s “Test specification” view can be seen in Testlab in “Test case design”. In its simplest form, assigning test cases to be tested is done in Testlab by picking the cases into a work set and then creating a test run, which tells what release / version is being tested. Test execution is again pretty much the same.

In a nutshell – how does this all work?

TestLink has an export feature that allows you to export your test case data as XML. We can transform this data into a format that can be imported directly into Testlab. When custom fields have been used in TestLink, the tool needs to be instructed how the custom fields map to Testlab fields. Once the mappings are defined, a click of a button transforms the data into the desired format.

For your convenience, Meliora will do this transformation free of charge (*). For our On-premise customers we can also deliver the tool to allow doing the transformation yourself.

The migration has five distinct steps:

  1. Modifying the Testlab project to the format used (optional)
  2. Exporting test cases from TestLink test suites
  3. Describing the data mappings (optional)
  4. Transforming the TestLink export XML to a CSV format that Testlab can read
  5. Importing the test case data from the CSV

The technical part behind these steps is pretty straightforward. Once you know how you want to migrate the data, the export-import will be just a few clicks for you.

*) Meliora reserves the right to decline the free service in cases where the transformation would engage Meliora with an exceptional workload. This could happen if you have a huge number of projects to be migrated.


What do I need to do before the migration?

Well, the only required thing is becoming a Meliora customer – any edition of Testlab will do. It is also highly recommended to plan how you want to test with Testlab in the future. As Testlab offers many features not present in TestLink that make testing easier, it might be wise to change the way testing is done at the same time as the tool is changed. For example, Testlab has automatic revisioning, history, built-in (optional) reviews etc., so customizations you may have made to TestLink for these purposes are probably not needed anymore. You need to decide whether it is better to ditch those customizations or to continue using them. If you are in a hurry, fear not: you can just import all custom data and ditch the unneeded parts later.


General considerations

The most important thing is to ensure that your company’s testing work is not interrupted more than it has to be. When you switch tools, it is not effective to keep using the old tool (in migrated projects), as you would be getting test case updates and test results in two tools for the same project. Thus it is best to decide on a timeslot for the switch and ensure everything goes smoothly when the time comes.

It is best to contact Meliora as you plan the migration and prepare a timeslot in case you want to keep the switch time to an absolute minimum. Just create a support ticket and Meliora will help you get the migration done smoothly.

Migration steps


Modifying the Testlab

Testlab, by default, has different fields and field values than TestLink. You can, if you want, do the migration without modifying Testlab; the fields are then mapped with the defaults. If you decide to do the modification, these are the things to consider:

Choosing / Modifying a workflow

In Testlab, the way statuses are used is controlled by workflows. Workflows add logic to status changes: which statuses can be reached from which status, which fields are mandatory in which state and who has the privileges to make changes. Testlab comes with two default workflows, called “simple” and “review”. The difference for test cases is that the review workflow has a review phase, which the simple workflow skips. Basically, you need to decide which statuses you wish to see in Testlab. The default status transformations are depicted in the following table:

  TestLink status       Testlab simple workflow   Testlab with review workflow
  Draft                 In design                 In design
  Ready for review      Ready for review          In design
  Review in progress    Ready for review          In design
  Rework                In design                 In design
  Obsolete              Deprecated                Deprecated
  Future                Ready                     Ready
  Final                 Ready                     Ready

You can add additional data transformations to the migration in addition to simply changing statuses. For example, the “Testlab way” of handling test cases that are not yet runnable is to use a “Milestone” field to define when the test case is planned to be used. More on this in the chapter “Describing the data mappings”.

Modifying the Testlab project

In case you have used custom fields and wish to keep the data in those custom fields in the future as well, you need to configure your Testlab project to include them. You’ll find the instructions in Testlab’s manual – it’s a very simple operation.

Keep in mind that you also have the option to import data from custom fields into the description field. This makes sense when you do not want to lose the information but do not need to filter report data by custom field values.

If you are migrating data to multiple Testlab projects, you can make the common modifications once and then copy the modified project. This way you do not need to repeat the modifications for each project separately.


Exporting test cases from TestLink

In Testlink, you need to export test cases as XML for each project you wish to migrate.

  • In Testlink, go to “Test specification”
  • Choose “Actions” -> “Export All Test Suites”
  • Make sure you have at least four boxes checked
  • Save your XML export file


Describing the data mapping

Before transforming the data, you can define how the TestLink data is to be transformed in the migration. Our On-premise customers who wish to do the transformation themselves do not need to do this, as they will put this data into the tool themselves. Send the following information to Meliora:

  • Which custom fields are to be migrated. For each field, describe which Testlab field the data should be imported into.
    • Example 1) TestLink custom field “risk” values to Testlab field “risk”
    • Example 2) TestLink custom field “legacy link” appended to the end of the “description” field
  • How you want to change the data values
    • Example 1) TestLink custom field “risk” value “petty” becomes “Low” in Testlab
    • Example 2) TestLink status “Future” becomes Testlab status “In design” with “Milestone” set to “Future”

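Conceptually, each of the mapping definitions above is a simple lookup. A minimal Python sketch using the example values from this post (illustrative only – the actual mappings are configured in the migration tool, not written as code):

```python
# Illustrative only: the real mappings are configured in the migration
# tool, but conceptually each rule is a simple lookup like these.

# Example 1: TestLink custom field "risk" value -> Testlab "risk" value.
RISK_VALUES = {"petty": "Low"}

# Example 2: TestLink status -> (Testlab status, extra field values).
# "Final" -> "Ready" follows the default status table shown earlier.
STATUS_RULES = {
    "Future": ("In design", {"Milestone": "Future"}),
    "Final": ("Ready", {}),
}

def map_test_case(tl_case):
    """Apply the mapping rules to one exported TestLink test case."""
    status, extras = STATUS_RULES.get(tl_case["status"],
                                      (tl_case["status"], {}))
    mapped = {"status": status,
              "risk": RISK_VALUES.get(tl_case.get("risk"),
                                      tl_case.get("risk"))}
    mapped.update(extras)
    return mapped
```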
Transforming the XML to Testlab csv

Here you have a few options:

  1. SaaS users can just send the XML to Meliora, and Meliora will do the transformation and import the data to your Testlab project. Just wait for the confirmation and you can start using Testlab with the imported data!
  2. On-premise / SaaS users who want Meliora to do only the transformation can send the XML to Meliora, and Meliora will deliver a CSV that is ready to be imported into Testlab.
  3. On-premise users who want to do the transformation themselves should contact Meliora for details. Meliora will deliver the tool along with instructions for doing the transformation.

Importing the data from csv

The import itself is just a few clicks, really:

  1. From Testlab menu choose Import -> Test cases
  2. Choose the csv file
  3. Try the import first with the “dry run” option on. This will show errors and warnings, should there be any. Here you will see, for example, if the project is missing a custom field that the CSV file tries to populate.
  4. After you are satisfied with the dry run results, uncheck the dry run box to really load the data into Testlab.
  5. Refresh the test case tree and start using your Testlab.

Final words

This document describes the basic setup of the migration process. As with all projects more complicated than Hello World!, there will be unknown factors. You might have in-house customizations in your TestLink, or you might want to include data from a completely different source. Fear not! Meliora can work with migrations that do not follow the ordinary path. Just contact our great support and we will work through the migration together!

Meliora team



Tags for this post: announce features product release reporting screenshots 


Testlab – Helios release

A new version of Meliora Testlab – Helios – has been released. In this version, one of the goals was to optimize the speed of the user interface which makes this version of Testlab faster than ever. Please read more about the changes below.


Faster UI

Several enhancements have been made to optimize the UI, rendering and backend services. Some are workarounds for browser-related bugs, some are different rendering strategies and some are optimizations for situations where there are lots of assets in your project.

We hope these changes make the use of Testlab snappier than ever. As the changes are browser-specific and may benefit some users more depending on the configuration of the workstation, we are more than happy to receive any feedback from you on your experiences.


File attachments for steps of test cases

Previously, one could attach files to test cases. In Helios, you can also attach files to individual steps of the test case. The step editor has a new configurable column in which you can attach any files relevant to a specific step.

The files are presented to the tester when the test case is being tested. The files are also included in reports and in e-mail messages sent for the test cases, if preferred.


Expand attached images to listing reports

Requirements, test cases, and issues in Testlab can have files attached. These files are often images, most commonly for issues, where screenshots are used to describe the problems encountered.

In Helios, the detailed listing reports (List of requirements, List of test cases and List of issues) have a new option to expand the attached images. This way, when the report is rendered, the images are expanded and shown in the printed report.

And there is more

In addition to the changes above:

  • SAML authentication can be used to authenticate to web-published reports.
  • When hovering over assets in Testlab’s UI, the hovers now also show the files attached to the assets.
  • The Help browser window can now be resized and left open while using Testlab. As before, you can also open the help in a separate browser tab.
  • API: The REST API responses now also include attached files. The related objects of assets now provide HAL-compatible links for easier navigation in the response graph.
  • Concurrent editing and testing of test cases is now handled with better and clearer warning messages.
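For example, a client could follow the HAL-style links generically. A minimal sketch – the response shape below is an illustrative assumption following HAL conventions, not the exact Testlab schema:

```python
import json

# HAL conventions place links to related resources under "_links",
# each relation carrying an "href". The fields below are made up
# for illustration.
SAMPLE_RESPONSE = json.loads("""
{
  "id": 123,
  "name": "Verify login",
  "_links": {
    "self":    {"href": "/api/DEMO/testcases/123"},
    "project": {"href": "/api/DEMO"}
  }
}
""")

def link_href(resource, relation):
    """Return the href of a HAL link relation, or None if it is absent."""
    return resource.get("_links", {}).get(relation, {}).get("href")
```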

Thanking you for all your feedback,

Meliora team

Helios 2

Helios 1 and Helios 2 are probes launched in 1974 and 1975 to study solar processes. They are no longer operational but remain in their orbits around the Sun.

The probes were once the fastest man-made objects at 252 792 km/h (157 078 mph) – over 6 times around the Earth in an hour, or 70.22 kilometers per second: from London to New York in 79 seconds. On the other hand, that is only 586 times faster than the fastest production car, the Bugatti Veyron 16.4 Super Sport. It would take Helios 2 18 028 years to reach Alpha Centauri, the nearest star system to the Sun; with the Bugatti Veyron, it would take 10.5 million years. And while driving, you would have to fill the tank 459 900 000 000 times, which equals 45 263 661 534 000 liters of gasoline. A cubic tank for that fuel would be approx. 3.6 kilometers per side.

(Source: Wikipedia)


Tags for this post: announce features product release reporting screenshots 


Testlab – Parallax Denigrate release

We are proud to bring you a new version of Testlab – Parallax Denigrate – with usability-related features and risk analysis reporting.


Targeted filtering of tree assets

Assets organized in tree structures, such as specification-related assets (requirements, test cases, and test sets), can be filtered with a search field at the top of the tree control. This makes it easy to filter the tree down to the assets you are interested in.

This search field has been enhanced so that you can target the search better. For example, for test cases, filtering with

  • “priority:critical” shows all test cases prioritized as critical,
  • “name:plane tag:customer” shows all test cases with the word “plane” in the name and with the tag “customer”.

As said, this logic works in all trees in Testlab – including the pickers used to select related assets when editing. The UI also now has a tooltip with a quick help on the syntax.


Risk analysis reporting

Often, the people responsible for a project are asked tough questions about the status of the project or product: How well is it tested? Can we go to production? Is the product in good enough shape for release? The manager must basically make an educated guess with the best information available – often from the data in the test management tool.

Testlab now includes a Risk analysis report which analyzes the requirements, test cases, and issues in the project and combines the information from a risk viewpoint into a single report. The report includes

  • requirements not marked as covered, with failing or blocking tests or requirements not tested at all – grouped by requirement risk,
  • test cases not yet marked as ready, not passed or test cases which have not been tested at all – grouped by priority and
  • issues not yet resolved or closed – grouped by severity or priority.

As a single combined report, it provides essential information for the manager to make that decision.


Issue-centric testing

Features have been added to better support issue-centric testing. Previously, issues in Testlab could be linked to a test case and a test run, but in this new version of Testlab it is possible to link multiple test cases (optionally via a test run, as executed tests) to a single issue. The controls for editing an issue have been changed accordingly to make it easier to link an existing issue to a test case.

New functions have also been added to better utilize these links:

  • While running tests, you can now easily add the test cases linked to an issue as executable tests to your current run. To do this, click the “Add linked test cases to this run” control via the Issues tab while running tests.
  • While running a test, you can also add a link to an existing issue by clicking the “Link to current test case” control via the Issues tab.

Note: Due to this change, the REST API model has been updated. The old format is still supported for a single link but will be deprecated in a future release. Please see the documentation of the DefectResource endpoint in your Swagger instance (https://yourtestlab.melioratestlab.com/api).


Showing results in test case tree

The controls at the bottom of the test case tree are used to highlight the test cases with testing results. In previous versions of Testlab, the controls limited you to fetching only the not-passed results, as they were designed from the re-testing and regression testing point of view.

In Parallax Denigrate, these controls were changed so that you can simply filter in results for test cases from your preferred viewpoint (latest results in the project, results for a milestone, for a version or for a single test run). The actual change in the UI is that you also get the results for passed test cases. This makes the controls more flexible and easier to use in different usage scenarios.


Better tree-pickers

The popup controls used to pick related assets are now much easier to use:

  • The pickers now feature the same searching feature introduced earlier (see “Targeted filtering of tree assets” above). This makes finding the asset from the tree much easier.
  • When picking assets, you have an option to add or remove the selected assets from the set of current values.


Reporting enhancements
  • The “Do not include sums of zero” option on the requirement, test case, issue and project grouping reports now filters out sums of zeroes from both axes (“Field to report” and “Group by”). Previously, the option filtered out sums of zeroes only from the “Field to report” axis.
  • When old revisions (or otherwise deprecated assets) are included in reports, the assets are now highlighted in gray and the date when the asset was revised or deprecated is shown.
  • When reporting only the latest results on the “Results of run tests” report, the report now also includes the test cases not yet run.
  • The “Execution status of test cases” report has two new options to customize its behavior (“Test cases in test runs only” and “Result for latest revision”). Please see the inline tooltips of these options on the report for details of their logic.
Other miscellaneous changes
  • When editing a single issue, you can save the edits without closing the window by keeping the Shift key pressed.
  • When batch editing assets in the table view, change event notifications (shown in the top-right corner) are no longer generated for your teammates.
  • For deprecated test cases, the list of test cases in the test execution view now includes the timestamp of the deprecation (when the test case was revised or deprecated).
  • The state of the “Show events” checkbox at the top right corner is now saved and remembered between sessions.
  • Copying and pasting with keyboard shortcuts now works as expected in the step editor’s pre- and post-conditions.

Thanking you for all your feedback,

Meliora team

Markovian Parallax Denigrate

Usenet is a distributed discussion system which enables you to read and post messages, often termed “news”. It still exists but was more popular back in the day. Usenet features newsgroups on various topics and was a popular place for discussion.

In 1996, a series (hundreds) of unexplained word puzzles was posted to different newsgroups. The title of all these posts was “Markovian Parallax Denigrate” and each post contained a series of words which cryptographers and hackers have been trying to decipher – with little success. Some theories exist, but the “Markovian Parallax Denigrate” is still referred to as “the Internet’s oldest and weirdest mystery”.

(Source: Wikipedia, The Daily Dot, Last (?) existing post in Google’s archive)


Tags for this post: announce features product release reporting screenshots 


Testlab – Bookhouse Boy release

To celebrate the warmth of the summer months, we are proud to release a new feature version of Testlab: Bookhouse Boy.


Custom fields for projects

Just like requirements, test cases, and issues, the project asset can now also be defined with custom fields. You can find the field settings for your project asset from Testlab > Manage company … view.

When you customize the fields for your project asset, these field settings take effect in all your projects. For example, you can use these fields to track high-level project details inside Testlab, such as project deadlines, resourcing-related details or something else.

For reporting out these details two new project templates have been added:

  • “List of projects”, which is basically a list of project details from your Testlab instance. The report filters in projects by the set criteria and lists out the project details.
  • “Project grouping” which can be used to group project data by defined fields. This report includes a bar graph for the grouped data and a listing of projects matching the set criteria.

Please note: As these two reports expose details on the project level, the “report.all” permission does not grant access to them, so they might not show up automatically with your permission set. You should adjust the role you have in your projects to include “report.projectlistreport” and “report.projectgroupingreport” to have access to these reports.


Comments for test results
Test result comments

Each result of a test case can now be given a distinct comment. Earlier, test case assets themselves and the steps of executed tests could be commented on.

When you create test runs, the test cases get bound as items in these runs. When you list the test cases of your scheduled runs in the Test execution view, there is a new column in this table for an execution comment. Note that you can set the comment whenever you wish – you don’t have to execute the test and give it a result to add a comment, so you can also use this field for information before the actual test execution.

The comments for executed (or scheduled) tests are also shown in the test case’s execution history view, on related reports and in the test coverage view (for tests with a result set). For the Jenkins integration, comments sent via the API are also set to this field, if any.


Report page sizing and orientation

Reports can now be rendered in different page sizes and in chosen orientation. The size of the page can be set as A4 or the larger A3 and the report can be laid out in portrait or in landscape orientation. Please experiment with these settings if you have trouble fitting all your data on your reports.


Reporting enhancements
  • Test case related reports now support filtering in test cases from test sets and test runs. If you filter in test cases with test runs, keep in mind that the latest execution status of each test case is then reported against these filtered-in runs.
  • “Results for run tests” report now has an option to filter in latest executed result for each test case. This way, if the same test case is included multiple times, only the latest executed results are left in on the report.
  • A project can now be set with a logo which is included in each report. You can upload the logo of your choice by editing the project details.
  • Grouping reports can now be grouped also with custom field values.


Progress indication

The UI now includes automatic notifications for server operations that might take some time to complete. Most long-running operations, such as data imports, also show their progress in this notification dialog.

Support for new file formats

Exporting data is now possible as modern Excel files (.xlsx). Reports can also be exported as .docx files for Microsoft Word.


Other changes
  • Changes to an asset’s tags are now stored and included in the change history of requirements, test cases, and issues.
  • The “number of defects found” column in the test execution view can now be used as a filter in the table.


Thanking you for all your feedback,

Meliora team

Bookhouse Boys

With Bookhouse Boy, we wish to celebrate and pay homage to the true visionaries behind Twin Peaks – one of the greatest television drama series ever created. After 25 years, a new season of this great series is currently running, offering new bizarre twists in the stories of its interesting characters, including the small town of Twin Peaks itself. With the excellent writing of Frost and the visionary directing of Lynch, the series is sure to leave its mark on the history of television.

All Testlab releases are code-named after some weird fact, theory, historical event or something else that should give some food for your imagination. As such, a code-name from Twin Peaks is more than fitting.

(Image from Twin Peaks (the original series) / Lynch/Frost Productions)


Tags for this post: announce features jenkins product release reporting screenshots 


We are hiring!



Testlab – Lilliput Sight released

Meliora is proud to announce the release of a new major Testlab version: Lilliput Sight. This release includes major new features such as asset templating, but at the same time bundles various smaller enhancements and changes for easier use. The new features are described below.


Asset templating


Creating assets for your projects in a consistent way is often important. This release of Testlab includes support for templates which you can apply when adding new requirements, test cases or issues.

A template is basically a simple asset with some fields set with predefined values. As you are adding a new asset, you have an option for applying templates. When you apply a template, the asset is set with the values from the applied template. You can also apply multiple templates, so designing templates to suit the needs of your testing project is very flexible.

A set of new permissions (testcase.templates, requirement.templates, defect.templates) has been added to control who has access to using the templates. These permissions have been granted to all your roles which had the permission to edit the matching asset type.


Robot Framework output support

When pushing automated results to Testlab, we now have native support for Robot Framework’s output file. By supporting the native format, the results include detailed keyword information from the output, which is pushed to Testlab as the executed steps.

The support for Robot Framework’s output has also been added to the Jenkins CI plugin. With the plugin, it is easy to publish Robot Framework’s test results to your Testlab project in a detailed format.


Copying and pasting steps

The editor for test case steps now includes a clipboard. You can select steps (select multiple steps by holding the shift and ctrl/cmd keys appropriately), copy them to the clipboard and paste them as you wish. The clipboard also persists between test cases, so you can copy and paste steps from one test case to another.



Filtering with operators in grids

The text columns in Testlab’s grids now feature operator filters which allow you to filter data from the grid in a more specific manner. You can choose the operator the column is filtered with, such as “starts with”, “ends with”, … , and of course the familiar default “contains”.

With a large amount of data in your project, this makes it easier to filter and find the relevant data.



Mark milestones, versions and environments as inactive

When managing milestones, versions, and environments for your project, you can now set these assets as active or inactive. For example, if a version is set as inactive, it is filtered out from the relevant controls in the user interface. If your project has numerous versions, environments or milestones, keeping only the relevant ones active makes the product easier to use, as the user interface is not littered with the non-relevant ones.

For versions and environments, the active flag is set in the Project management view. For milestones, the completed flag is respected: completed milestones are interpreted as inactive.


Usability related enhancements
  • Editing asset statuses in table views: You can now edit statuses of assets in table views – also in batch editing mode.
  • New custom field type – Link: A new type of custom field has been added which holds a single web link (such as http, https or mailto -link).
  • Support deletion of multiple selected assets: The context menu for requirements and test cases now includes “Delete selected” function to delete all assets chosen from the tree.
  • Delete key shortcut: The Delete key is now a keyboard shortcut for context menu’s Delete-function.
  • Execution history respects revision selection: The execution history tab for a test case previously showed a combined execution history for all revisions of the chosen test cases. This has been changed in a way that the tab respects the revision selection in a similar manner to the other tabs in the Test case design view. When you choose a revision, the list of results in the execution history tab is only for the chosen revision of the test case.
  • Custom fields hold more data: Earlier, custom fields were limited to a maximum of 255 characters. This has been extended and custom fields can now hold a maximum of 4096 characters.
  • Test cases already run in test runs can be run again: If you hold the permission for discarding a result of an already run test case, you can now choose and execute test cases that already have a result (pass or fail) directly from the list of test cases in the Test execution view. Earlier, you needed to discard the results one by one for the tests you wished to run again.
  • Enhancements for presenting diagrams: The presentation view for relation and traceability diagrams has been improved – you can now zoom and pan the view more easily by dragging and double-clicking.
  • Copy link to clipboard: The popup dialog with a link to open up an asset from your Testlab now has a “Copy to clipboard” button. Clicking this button copies the link directly to your clipboard.


Reporting enhancements
  • “Filter field & group values” option added for grouping reports: Requirement, test case, and issue grouping reports now have an option to apply the filter terms of the report to the values fetched via “Field to report” and “Group by”. For example, if you filter a requirement grouping report to importance values “High” and “Critical”, group the report by “Importance” and check the “Filter field & group values” option, the rendered report will not include groups for any other importance values than “High” and “Critical”.


Enhancements for importing and exporting
  • Export verified requirements for test cases: Test case export now includes a “verifies” field which includes the identifiers of requirements the test cases are set to verify.
  • Input file validation: The input file is now validated so that each row has the same number of columns as the header row of the file. If rows with an incorrect number of columns are encountered while reading the file, an error is printed out, making it easier to track down missing separator characters and the like.
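The validation described in the last bullet can be sketched as follows – an illustrative re-implementation of the idea, not Testlab’s actual code, and the semicolon delimiter is an assumption:

```python
import csv
import io

def validate_column_counts(csv_text, delimiter=";"):
    """Check that every data row has as many columns as the header row;
    a mismatch usually means a separator character is missing."""
    rows = list(csv.reader(io.StringIO(csv_text), delimiter=delimiter))
    expected = len(rows[0])
    errors = []
    for lineno, row in enumerate(rows[1:], start=2):
        if len(row) != expected:
            errors.append(
                f"line {lineno}: expected {expected} columns, got {len(row)}")
    return errors
```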


Thanking you for all your feedback,

Meliora team


Various disorienting neurological conditions that affect perception are known to exist. One of them is broadly categorized as Alice in Wonderland Syndrome, in which people experience size distortion of perceived objects. Lilliput Sight – or Micropsia – is a condition in which objects are perceived to be smaller than they actually are in the real world.

The condition is surprisingly common: episodes of micropsia or macropsia (seeing objects larger than they really are) occur in 9% of adolescents. Lewis Carroll – the author of Alice’s Adventures in Wonderland – is speculated to have drawn inspiration for the book from his own experiences of micropsia. Carroll was a well-known migraine sufferer, which is one possible cause of these visual manifestations.

(Source: Wikipedia, The Atlantic; image from Alice’s Adventures in Wonderland (1972, Josef Shaftel Productions))



Tags for this post: announce features integration jenkins plugin product release reporting 


Unifying manual and automated testing

Automating tests has long been a way to gain benefits for your testing process. Manual testing with pre-defined steps is still surprisingly common, and especially during acceptance testing we often still put our trust in the good old tester. Unifying manual and automated testing in a transparent, easily managed and reported way is particularly important for organizations pursuing gains from test automation.


All automated testing is not similar

The gains from test automation are numerous: automated testing saves time, makes tests easily repeatable and less error-prone, makes distributed testing possible and improves test coverage, to name a few. It should be noted, though, that not all automated testing is the same. For example, modern testing harnesses and tools make it possible to automate and execute complex UI-based acceptance tests while, at the same time, developers implement low-level unit tests. From a reporting standpoint, it is essential to be able to combine the results from all kinds of tests into a manageable, easily approachable view with the correct level of detail.


I don’t know what our automated tests do and what they cover

It is often the case that testers in an organization waste time manually testing features that are already covered by a good set of automated tests. This happens because test managers don’t always know the details of the (often very technical) automated tests. The automated tests are not trusted, and their results are hard to combine into the overall status of testing.

This problem is often complicated by the fact that many test management tools report the results of manual and automated tests separately. In the worst case, the test manager must know how the automated tests work to be able to judge the coverage of the testing.


Scoping the automated tests in your test plan

Because the nature of automated tests varies, it is important that the test management tool offers an easy way to scope and map the results of your automated tests to your test plan. It is often not preferable to report the status of each and every test case (especially in the case of low-level unit tests), because doing so makes it harder to get the overall picture of your testing status. It is still important to pay attention to the results of these tests, though, so that failures get reported.

Let’s take an example of how automated tests are mapped in Meliora Testlab.

The example above shows a simple hierarchy of functions (requirements) which are verified by test cases in the test plan:

  • the UI / Login function is verified by a manual test case “Login test case”,
  • the UI / User mgmnt / Basic info and UI / User mgmnt / Credentials functions are verified by a functional manual test case “Detail test case” and
  • the Backend / Order mgmt functions are verified by automated tests mapped to a test case “Order API test case” in the test plan.

Mapping is done by simply specifying the package identifier of the automated tests to a test case. When testing, the results of tests are always recorded to test cases:

  1. The login view and user management views of the application are tested manually by the testers, and the results of these tests are recorded to the test cases “Login test case” and “Detail test case”.
  2. The order management is tested automatically with results from the automated tests “ourapp.tests.api.order.placeOrderTest” and “ourapp.tests.api.order.deliverOrderTest”. These automated tests are mapped to the test case “Order API test case” via the automated test package “ourapp.tests.api.order”.

The final result for the test case in step 2 is derived from the results of all automated tests under the package “ourapp.tests.api.order“. If one or more tests in this package fail, the test case will be marked as failed. If all tests pass, the test case is also marked as passed.
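The derivation described above can be sketched in a few lines of Python. The function and data shapes here are illustrative assumptions for the sake of the example, not Testlab’s actual implementation:

```python
def derive_result(mapped_package, automated_results):
    """Derive a test case result from all automated tests under a package.

    automated_results maps fully qualified test identifiers
    (e.g. 'ourapp.tests.api.order.placeOrderTest') to True (pass) / False (fail).
    """
    prefix = mapped_package + "."
    in_scope = [passed for test_id, passed in automated_results.items()
                if test_id == mapped_package or test_id.startswith(prefix)]
    if not in_scope:
        return None  # no automated results map to this test case
    # One or more failures in the package fail the test case; otherwise it passes.
    return "PASS" if all(in_scope) else "FAIL"

results = {
    "ourapp.tests.api.order.placeOrderTest": True,
    "ourapp.tests.api.order.deliverOrderTest": False,
}
print(derive_result("ourapp.tests.api.order", results))  # FAIL
```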

As automated tests are mapped via their package hierarchy, it is easy to fine-tune the level of detail at which your automated tests are scoped to your test plan. In the above example, if it is deemed necessary to always report detailed results for order delivery related tests, the “ourapp.tests.api.order.deliverOrderTest” automated test can be mapped to a test case of its own in the test plan.


Automating existing manual tests

As test automation has clear benefits for your testing process, the testing process and the tools used to manage it should support easy automation of existing manual tests. From the test management tool’s standpoint, it is not relevant which technique is used to actually automate a test; instead, it is important that reporting and coverage analysis stay the same and that the results of these automated tests are easily pushed to the tool.

To continue with the example above, let’s presume that the login related manual tests (“Login test case”) are automated using Selenium:

The test designers record and create the automated UI tests for the login view in a package “ourapp.tests.ui.login”. Now, the manual test case “Login test case” can easily be mapped to these tests with the identifier “ourapp.tests.ui.login”. The test cases themselves, the requirements and their structure do not need any changes. When the Selenium-based tests are later run, their results determine the result of the test case “Login test case”. The reporting of the testing status stays the same, the structure of the test plan is the same, and the related reports remain easily approachable to the people already familiar with them.



Manual and automated testing are most often best used in combination. It is important that the tools used for test management support reporting on this kind of testing in as flexible a way as possible.


(Icons used in illustrations by Thijs & Vecteezy / Iconfinder)


Tags for this post: automation best practices example features product reporting usage 


Testlab – Raining Animal released

Meliora is proud to announce a new version of Meliora Testlab – Raining Animal. This version brings in the concept of “Indexes”, which enables you to more easily collaborate with others and copy assets between your projects.

Please read on for a more detailed description of the new features.


Custom columns for steps

Execution steps of test cases can now be configured with custom columns. This allows you to customize the way you enter your test cases in your project.

Custom columns can be renamed, ordered in the order you want them to appear in your test case and typed with different kinds of data types.


Collaborating with indexes

A new concept – Indexes – has been added, enabling you to pick assets from different projects onto an index and collaborate on them.

An index is basically a flat list of assets, such as requirements or test cases, from your projects. You can create as many indexes as you like and share them between users in your Testlab. All users who have access to an index can comment on it and edit it – this makes it easy to collaborate on a set of assets in your Testlab.


Copying assets between your projects

Each asset on your index is shown with the project it belongs to. When you select assets from your index, you have the option to paste them into your current Testlab project. This enables you to easily copy content from one project to another.


SAML 2.0 Single Sign-On support

Support for SAML 2.0 Single Sign-On (WebSSO profile) has been added to Testlab’s authentication pipeline. This makes it possible to use SAML 2.0 based user identity federation services, such as Microsoft’s ADFS, for user authentication.

The existing CAS based SSO is still supported, but SAML 2.0 based federation offers more possibilities for integrating Testlab with your identity service of choice. You can read more about setting up the SSO in the documentation provided.

Better exports with XLS support

Data can now be exported directly to Excel in XLS format. CSV export is still available, but exporting data to Excel is now more straightforward.

Also, when exporting data from the table view, only the rows selected in the batch edit mode are exported. This makes it easier for you to hand pick the data when exporting.

In addition to the above and fixes under the hood,
  • the actions and statuses in workflows can now be rearranged by dragging and dropping,
  • stopping the testing session is made more straightforward by removing the buttons for aborting and finishing the run and
  • a new permission “testrun.setstatus” has been added to control who can change the status of a test run (for example mark the run as finished).

Meliora team

Throughout history, a rare meteorological phenomenon in which animals fall from the sky has been reported. There are reports of fish, frogs, toads, spiders, jellyfish and even worms raining down from the skies.

Curiously, the saying “raining cats and dogs” is not necessarily related to this phenomenon and is of unknown etymology. There are some other quite bizarre expressions for heavy rain, such as “chair legs” (Greek) and “husbands” (Colombian).


(Source: Wikipedia, Photo – public domain)


Tags for this post: announce features integration product release usage 


Testlab – Earthquake Light released

Meliora Testlab – Earthquake Light – has been released. In addition to a set of fixes, this release comes with a new advanced reporting mode which allows you to customize the criteria for the data on your reports in an intuitive manner. We’ve also integrated with Stripe to make subscription handling and credit card payments as easy as possible.

Please read on for a more detailed description of the new features.


Reporting criteria in advanced mode

Most of the available report templates now have an option to switch the criteria form to the new advanced mode. In this mode, the criteria for picking the data on the report can be specified as a set of rules the reporting engine uses when rendering the report.

Reports in advanced mode work in a similar manner to the earlier, so-called simple mode. You can save them, schedule them for publishing and so on.


Various operators for each field

A rule in an advanced report consists of a targeted field, an operator executed to determine whether the rule matches, and an optional value for the operator. The mode has been implemented with a full set of operators, allowing you to define complex sets of rules.

The operators available depend on the type of the field.



Boolean operators and sub-clauses

The criteria may be defined with sub-clauses. Each clause is set with a boolean operator (match all [and], match any [or], match none [not]) which declares how the list of rules in the clause should be interpreted.

This allows you to define a complex set of rules to pick the data you need on your report.
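The three clause operators boil down to simple boolean aggregation over the rule results. A minimal sketch (illustrative only, not the reporting engine’s implementation):

```python
def evaluate_clause(operator, rule_results):
    """Evaluate a clause of rule match results under a boolean operator.

    operator: "and" (match all), "or" (match any), "not" (match none).
    rule_results: list of booleans, one per rule in the clause.
    """
    if operator == "and":
        return all(rule_results)
    if operator == "or":
        return any(rule_results)
    if operator == "not":
        return not any(rule_results)
    raise ValueError("unknown operator: " + operator)

# A "match none" clause matches when every rule in it fails to match.
print(evaluate_clause("not", [False, False]))  # True
```

Nesting clauses inside clauses is what lets you build arbitrarily complex criteria from these three operators.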

Stripe subscriptions for credit card billing

The hosted Meliora Testlab has been integrated with Stripe – a leading credit card payment gateway – to make subscription and credit card handling as easy as possible for you.

If any action is needed from existing customers, we will contact them directly with instructions.


Meliora team


For a very long time, people have occasionally reported seeing strange bright lights in the sky before, during or after an earthquake. For much of modern times these reports were considered questionable, and skeptics say there is no conclusive proof that such a phenomenon exists.

The lights reported are usually blueish and greenish dancing lights in the sky, somewhat comparable to the aurora borealis. There are theories about the cause of the lights, such as electricity in certain rocks of the earth being activated and discharged under tectonic stress. It has even been suggested that the lights could be used to predict upcoming earthquakes.

Earthquake lights were most recently reported during the New Zealand earthquake in November 2016, and there are even videos circulating that document the event.

(Source: Wikipedia, National Geographic, Youtube, Photo from UC Berkeley Online Archive)



Tags for this post: announce features product release reporting usage 


Testlab – Ghost Boy release

We are proud to announce a new version of Meliora Testlab – Ghost Boy – which brings in features such as rapid batch editing of requirements, test cases and issues. A more detailed description of the new features can be read below.


Inline table editing

The central assets of your Testlab projects – requirements, test cases and issues – can now be inspected in a table view. When you choose a folder from the asset tree, the table view lists all assets in that folder.

The data of the assets can be edited inline. This allows you to rapidly edit a set of assets and save your changes with a single click. Assets can still be added and edited in the same fashion as in earlier versions, but the table view brings in a new alternative for rapid edits.

As the data is presented in a table, all the regular table functions – such as sorting, filtering and grouping – are available to you, along with features such as exporting the set of assets to Excel, sending them via e-mail and printing.



Batch editing

Batch edits

Ever had a need to bump up the severity of a set of issues? Or assign a batch of test cases to some user?

The new table view with inline editing features a batch edit mode which allows you to pick a set of assets for batch editing. Any edit made to one asset in the table is then replicated to all chosen assets. This way, it is easy to make batch edits to a large set of assets while you are designing.



Project’s users listing

Earlier, access and roles for users in projects were granted in user management only: you chose a user and granted the needed roles to this user.

Ghost Boy brings in a new Users tab in the project management view which allows you to easily manage the roles of your users in a project-centric way. You can also filter, sort, export, e-mail and print the listing if needed.


Miscellaneous enhancements and changes

In addition to the new features listed above, this feature release contains smaller enhancements:

  • Publish now -button added for published reports: When you configure a report to be automatically published, you can now press a button to force a publish of the report. This makes it easier to set up automatically published reporting.
  • Test run selectors enhanced: The test run selectors in the test case tree and in the test coverage view have been changed to filterable pickers for better usability when there is a large number of runs in your project.
  • Tagging assets in table view: The new table views have a button which allows you to tag the chosen assets with tags. Earlier, the tagging in issues table worked in a way that it tagged all visible issues in the table. Now, the assets to be tagged can be chosen in the “batch edit mode” for more flexible usage.
  • Continue tour on the current view: The tour found in the Help menu of Testlab now starts & continues on the current view.
  • Results for run tests report: The report now allows you to set the time of day for starting time and ending time the results are reported from.


Meliora team

Ghost Boy

Think of a situation where you would be unable to move in any way, communicate or interact with outside world. You would be fully aware and able to think – trapped in your own body with your thoughts. Or, you can think of this but never really comprehend what it must feel like.

Meet Martin Pistorius, a South African born in 1975, who fell into a coma in early teens but eventually regained consciousness by the age of 19. He was still unable to move and spent years locked-in his own body until one of his day carers noticed he might be able to respond to outside interaction. He has since recovered but still needs a speech computer to communicate.

You can listen more about this fascinating story from an Invisibilia podcast or check out a TED talk Martin gave in 2015 at a TEDx event.

(Source: Wikipedia, Invisibilia podcast, TED talks, Brain vector designed by Freepik)



Tags for this post: announce features product release usage 


Testlab – Dis Manibus released

The new release of Meliora Testlab – Dis Manibus – brings in long-awaited customization features, other minor features and fixes. The release makes it easier to manage your projects’ versions, environments and option values for different fields. A more detailed description of the new features can be read below.


Field option management

The option values for different fields can now be freely managed. Options can be managed for requirements, test cases and issues. For example, you can now customize test case priorities, requirement types, issue severities and so on.

To add new options or to edit or remove existing options, open up the project in Manage projects view and choose the appropriate tab for your asset. The option editor can be accessed from the Options column of the chosen field. Edited options are supported in all integrations, data importing and exporting, reporting and when copying projects. In addition, the REST API has been added with an endpoint for accessing the field meta data.


Version and Environment management

Adding new values for versions and environments in your project is easy by just entering a new value when needed. If you prefer to have a strict control on which versions and environments should be used in your project, Dis Manibus brings in a view to manage them by hand. The view has been added as a new tab to Manage project view.

You also have an option to control if the users in your project can themselves add new versions or not.


Saveable table filters

All tables in Testlab’s UI have been added with controls which enable you to save your current filter criteria for later use. The criteria can be named and it can be saved to your project to be used by all users or even globally to all your projects.

Miscellaneous enhancements

In addition to the new features listed above, this feature release contains small enhancements for executing tests:

  • Executing test case steps in free order: The steps of test cases can now be executed in whichever order you prefer.
  • Discard result and run again -button: The table of a test run’s test cases in the Test runs view has a new button which enables you to easily run an already run test case again. Clicking the button discards the current result of the test case, preserves the results of the test case’s steps and instantly opens up the window for running the selected test case.
  • Saved reports can be filtered and sorted: The listing of saved reports in the Reports view now has filter controls for filtering reports, making reports easier to find when you have a number of them in your project. The filter settings can also be saved to the server for later use.


Meliora team


“O U O S V A V V”, framed between the letters “D M” – commonly taken to stand for Dis manibus, “dedicated to the shades” – is a sequence of letters known as the Shugborough Inscription. The inscription is carved on an 18th-century monument in Staffordshire, England, and has been called one of the world’s top uncracked ciphertexts.

In recent decades there have been several proposals for possible solutions, though none of them satisfy the staff at Shugborough Hall. And of course, as with all good mysteries, it is hinted that Poussin, the author of the original painting expressed in the relief of the monument, was a member of the Priory of Sion. This would mean that the ciphertext may encode secrets related to the Priory, or the location of the Holy Grail.

(Source: Wikipedia, Photograph by Edward Wood)



Tags for this post: announce features product release usage 


New feature release: Oakville Blob

We are proud to announce a new major release of Meliora Testlab: Oakville Blob. A major feature included is a REST-based API for your integration needs. In addition, this release includes numerous enhancements and fixes. Read on.


Integration API


The Meliora Testlab Integration API offers REST/HTTP-based endpoints with data encapsulated in JSON format. The first iteration of the API offers endpoints for

  • fetching primary testing assets such as requirements, test cases and issues,
  • fetching file attachments and
  • pushing externally run testing results as test runs. 

We publish the API endpoints with Swagger, a live documentation tool which enables you to make actual calls against the data of your Testlab installation with your browser. You can access this documentation today by selecting “Rest API …” from the Help menu in your Testlab. You can also read more about the API on the corresponding documentation page.
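As a rough illustration of calling a REST/JSON endpoint from a script: the code below builds an HTTP request with basic authentication. Note that the endpoint path “/api/rest/issues” and the authentication scheme are assumptions made for this sketch – consult the live Swagger documentation in your Testlab for the actual endpoints and authentication details.

```python
import base64
import urllib.request

def build_request(base_url, path, user, password):
    """Build a GET request for a JSON endpoint with HTTP basic auth.

    The path used below is hypothetical; see the Swagger documentation
    for the real endpoint names.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(base_url + path)
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

req = build_request("https://mycompany.melioratestlab.com",
                    "/api/rest/issues", "apiuser", "secret")
print(req.get_full_url())
# The request would then be sent with urllib.request.urlopen(req)
# and the JSON body parsed with json.load().
```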

Tagging users in comments


When commenting on assets in your project, such as issues or test cases, you can now tag users from your project in your comments. Just include the ID, e-mail address or name of the team member prefixed with the @ character and this user automatically gets a notification about the comment. The notice scheme rules have also been updated to make it possible to send e-mail notifications to users who are tagged in comments on assets. Also, a new out-of-the-box notice scheme “Commenting scheme” has been added for this.

See the attached video for an example of how this feature works.


Execution history of a test case

Sometimes it is beneficial to go through the earlier execution results of a test case. To make this as easy as possible, a new “Execution history” tab has been added to the Test design view.

This new view lists all the times the chosen test case has been executed, with details similar to the listing of a test run’s items in the Test execution view. Each result can be expanded to show the detailed results of each step.


Notification enhancements

When significant things happen in your Testlab projects, notifications are generated for your team members, always accessible from the notifications panel at the top. Oakville Blob provides numerous enhancements to these notifications:

  • the notifications panel has been moved to the top of the viewport and is now accessible from all Testlab views,
  • the notifications panel can now be pinned open, meaning that it won’t automatically close when the mouse cursor moves away and
  • detailed documentation for all notification types has been added to the help manual.

In addition, the following new notification events have been added:

  • if an issue, a requirement or a test case assigned to you is commented on, you will be notified,
  • if a test case or one of its steps is commented on during testing, the creator of the test run will be notified and
  • if a step of a test assigned to you receives a comment during testing, you will be notified.


Targeted keyword search

The quick search functionality has been enhanced with the possibility to target the keyword at specific information in your project’s assets. By adding a fitting prefix to your keyword, you can, for example, make the search engine search only the names of test cases. For example,

  • searching with “name:something” searches “something” from the names of all assets (for example, test cases, issues, …) or,
  • searching with “requirement.name:something” searches “something” from the names of requirements.

For a complete listing of possible targeting prefixes, see Testlab’s help manual.


Usability enhancements

Several usability related changes have been implemented.

  • Easy access to a test case step’s comments: A new “Expand commented” button has been added to the Test execution view. Clicking this button expands all of the test run’s test cases which have step related comments.
  • Rich text editing: The rich text editors of Testlab have been upgraded to a new major version of the editor with a simpler and clearer UI.
  • Keyboard access:
    • Commenting: keyboard shortcuts have been added for saving, canceling and adding a new comment (where applicable).
    • Step editing: The step editor used to edit the steps of a test case now has proper keyboard shortcuts for navigating and inserting new steps.
  • Test coverage view: 
    • The test case coverage listing has a new column: “Covered requirements”. This column sums up and provides access to all covered requirements of the listed test cases.
    • The test run selector has been changed to an auto-complete field for easier use with a lot of runs.


Reporting enhancements

Several reporting engine related changes have been implemented.

  • Hidden fields in report configuration: When configuring reports, fields configured as hidden are no longer shown in the selectors listing the project’s fields.
  • Sorting of saved reports: Saved reports are now sorted in a natural “related asset” – “title of the report” order.
  • Better color scheme: The colors rendered for some asset fields have been changed to make them more distinct from each other.
  • Test run’s details: When test runs are shown on reports, they are now always shown with the test run’s title, milestone, version and environment included.
  • Wrapped graph titles: When graphs on reports are rendered with very long titles, the titles are now wrapped on multiple lines.
  • Results of run tests report: The report now sums up the time taken to execute the tests.
  • Test case listing report: Created by field has been added.


Other changes

In addition to the enhancements listed above, this feature release contains numerous smaller enhancements.

  • New custom field type – Unique ID: A new custom field type has been added which can be used to show a unique, non-editable numerical identifier for the asset.
  • Editing of test case step’s comments: The step related comments entered during the testing can now be edited from the Test execution view. This access is granted to users with “testrun.run” permission.
  • Confluence plugin: Listed assets can now be filtered with tags, and the test case listing macro has an option to filter out empty test categories.
  • Test run titles made unique: Testlab now prevents you from creating a test run with a title that already exists, if the milestone, version and environment of the runs are the same. Also, when test runs are presented in the UI, they are now always shown with this information (milestone, version, …) included.
  • “Assigned to” tree filter: The formerly “Assigned to me” checkbox typed filter in the trees has been changed to a select list which allows you to filter in assets assigned to other users too.
  • File attachment management during testing: Controls have been added to add and remove attachments of the test case during testing.
  • Dynamic change project -menu: The Change project -selection in Testlab-menu is now dynamic – if a new project is added for you, the project will be visible in this menu right away.
  • Permission aware events: When the (upper right corner, floating) events are shown, they are filtered against the set of the user’s permissions. Users should now only be shown events they are interested in.
  • Number of filtered test runs: The list of test runs in Test execution view now shows the number of runs filtered in.
  • UI framework: The underlying UI framework has been upgraded to a new major version with many rendering fixes for different browsers.


Thanking you for all your feedback,

Meliora team


“On August 7, 1994 during a rainstorm, blobs of a translucent gelatinous substance, half the size of grains of rice each, fell at the farm home of Sunny Barclift. Shortly afterwards, Barclift’s mother, Dotty Hearn, had to go to hospital suffering from dizziness and nausea, and Barclift and a friend also suffered minor bouts of fatigue and nausea after handling the blobs. … Several attempts were made to identify the blobs, with Barclift initially asking her mother’s doctor to run tests on the substance at the hospital. Little obliged, and reported that it contained human white blood cells. Barclift also managed to persuade Mike Osweiler, of the Washington State Department of Ecology’s hazardous materials spill response unit, to examine the substance. Upon further examination by Osweiler’s staff, it was reported that the blobs contained cells with no nuclei, which Osweiler noted is something human white cells do have.”

(Source: Wikipedia, Reddit; map from Google Maps)



Tags for this post: announce features integration jenkins plugin product release reporting 


Testlab Girus released

Meliora Testlab has evolved to a new major version – Girus. The release includes enhancements to reporting and multiple new integration possibilities.

Please read more about the new features and changes below. 


Automatic publishing of reports

Testlab provides an extensive set of reporting templates which you can pre-configure and save to your Testlab project. Previously, to render the reports you would go to Testlab’s “Reports” section and choose the preferred report for rendering.

Girus enhances reporting so that all saved reports can be scheduled for automatic publishing. Each report can be set with a

  • recurrence period (daily, weekly or monthly) at which the report is refreshed,
  • report type to render (PDF, XLS, ODF),
  • optionally, a list of e-mail addresses where to automatically send the report and
  • an option to web publish the report to a pre-determined web-address.

When so configured, the report will be automatically rendered and sent to interested parties and, optionally, made available at a relatively short URL.


Slack integration


Slack is a modern team communication and messaging app which makes it easy to set-up channels for messaging, share files and collaborate with your team in many ways.

Meliora Testlab now supports integrating with Slack’s webhook interfaces to post notifications about your Testlab assets to your Slack. The feature is implemented as a part of Testlab’s notice schemes. This way, you can set up rules for the events you prefer to be published to your targeted Slack #channel.

You can read more about the Slack integration via this link.



Webhooks are HTTP-protocol based callbacks which enable you to react to changes in your Testlab project. Webhooks can be used as a one-way API to integrate your own system with Testlab.

Similarly to the Slack integration introduced above, the Webhooks are implemented as a channel in Testlab’s notice schemes. This way you can set up rules to pick the relevant events you wish to have the notices of.

An example: let’s say you have an in-house ticketing system and you need to mark a ticket resolved each time an issue in Testlab is closed. With Webhooks, you can implement a simple HTTP-based listener on your premises and set up a notification rule in Testlab to push an event to your listener every time an issue is closed. With a little programming, you can then mark the ticket in your own system as resolved.
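A minimal sketch of such a listener in Python. The payload field names used here (eventType, issueKey) are assumptions for illustration; check the actual notice format from the Webhooks documentation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event):
    # Decide what to do with a notice payload. Field names are
    # assumptions -- see the Webhooks documentation for the real format.
    if event.get("eventType") == "ISSUE_CLOSED":
        # Placeholder: call your in-house ticketing system's API here.
        return "resolve:" + str(event.get("issueKey"))
    return "ignore"

class TestlabWebhookHandler(BaseHTTPRequestHandler):
    """Minimal HTTP listener for Testlab webhook notices."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        handle_event(event)
        self.send_response(200)
        self.end_headers()

# To run the listener on port 8080:
# HTTPServer(("", 8080), TestlabWebhookHandler).serve_forever()
```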

You can read more about the Webhooks via this link.


Maven plugin

Apache Maven is a common build automation tool used primarily for Java projects. In addition, Maven can be used to build projects written in C#, Ruby, Scala and other languages.

Meliora Testlab’s Maven Plugin is an extension to Maven which makes it possible to publish xUnit-compatible testing results to your Testlab project. If you are familiar with Testlab’s Jenkins CI plugin, the Maven one provides similar features easily accessible from your Maven project’s POM.
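As a sketch, the plugin would be declared in your project’s POM roughly as follows. The coordinates and parameter names below are illustrative assumptions only; check the Maven plugin documentation for the real ones.

```xml
<!-- Illustrative only: verify the real coordinates and configuration
     parameters from the Testlab Maven plugin documentation. -->
<build>
  <plugins>
    <plugin>
      <groupId>fi.meliora</groupId>                  <!-- assumed groupId -->
      <artifactId>testlab-maven-plugin</artifactId>  <!-- assumed artifactId -->
      <version>...</version>
      <configuration>
        <companyId>mycompany</companyId>      <!-- your Testlab company -->
        <projectKey>PROJ</projectKey>         <!-- target Testlab project -->
        <testRunTitle>Maven build</testRunTitle>
      </configuration>
    </plugin>
  </plugins>
</build>
```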

You can read more about the Maven plugin via this link.


Automatic creation of test cases for automated tests

When JUnit/xUnit-compatible testing results of automated tests are pushed to Testlab (for example from Jenkins, the Maven plugin, …), the results are mapped and stored against the test cases of the project. For example, if you push a result for a test com.mycompany.junit.SomeTest, you should have a test case in your project with this identifier (or a prefix of it) as the value in the custom field set up as the test case mapping field. The mapping logic is best explained in the Jenkins plugin documentation.

To make pushing results as easy as possible, the plugins now have an option to automatically import stubs of test cases from the testing results themselves. This way, if a test case cannot be found in the project for some mapping identifier, the test case is automatically created in a configured test category.
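The prefix-based mapping can be sketched roughly as follows. This is a simplification for illustration; the Jenkins plugin documentation describes the authoritative rules.

```python
def map_result_to_test_case(result_id, mapped_ids):
    # result_id:  identifier from the pushed result,
    #             e.g. "com.mycompany.junit.SomeTest"
    # mapped_ids: values of the test case mapping field in the project.
    # A mapping value matches if it equals the identifier or is a dotted
    # prefix of it; the most specific (longest) match wins.
    matches = [m for m in mapped_ids
               if result_id == m or result_id.startswith(m + ".")]
    return max(matches, key=len) if matches else None

# A result for com.mycompany.junit.SomeTest maps to the most specific match:
case = map_result_to_test_case(
    "com.mycompany.junit.SomeTest",
    ["com.mycompany", "com.mycompany.junit", "org.other"],
)
```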


Jenkins plugin: TAP support

Test Anything Protocol, or TAP, is a simple interface between testing modules in a test harness. Testing results are represented as simple text files (.tap) describing the steps taken. Testlab’s Jenkins plugin has been enhanced so that the results from .tap files produced by your job can be published to Testlab.

Each test step in your TAP result is mapped to a test case in Testlab via the mapping identifier, similarly to the xUnit-compatible way of working supported previously.
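For illustration, a hypothetical .tap file could look like the following, with each step’s description acting as a mapping identifier:

```
1..3
ok 1 - com.mycompany.LoginTest
not ok 2 - com.mycompany.LogoutTest
ok 3 - com.mycompany.ProfileTest # SKIP not implemented yet
```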

The support for TAP is best explained in the documentation of the Testlab’s Jenkins plugin.


Support for multiple JIRA projects in Webhook-based integration

When integrating JIRA with Testlab using the Webhooks-based integration strategy, the integration now supports integrating multiple JIRA projects with a single Testlab project. When multiple projects are specified and an issue is added, the user gets to choose which JIRA project the issue should be added to.

Due to this change, it is now also possible to specify which JIRA project the Testlab project is integrated with. Previously, the prefix and the key of the projects had to match between the JIRA and Testlab projects.


Miscellaneous enhancements and changes
  • The Results of run tests report has new options to help you report your testing results:
    • test execution date added as a supported field,
    • a “Group totals by” option which sums up and reports totals for each group of the specified field,
    • columns and rows of sum totals added to the report and
    • the test category selection supports selecting multiple categories and test cases for filtering.
  • Updates on multiple underlying 3rd party libraries.
  • Bugs squished.


Sincerely yours,

Meliora team


Pithovirus sibericum was discovered buried 30 m (98 ft) below the surface of late Pleistocene sediment in a 30 000-year-old sample of Siberian permafrost. It measures approximately 1.5 micrometers in length, making it the largest virus yet found. Although the virus is harmless to humans, its viability after being frozen for millennia has raised concerns that global climate change and tundra drilling operations could lead to previously undiscovered and potentially pathogenic viruses being unearthed.

A girus is a very large virus (gigantic virus). Many different giruses have been discovered since and many are so large that they even have their own viruses. The discovery of giruses has triggered some debate concerning the evolutionary origins of the giruses, going so far as to suggest that the giruses provide evidence of a fourth domain of life. Some even suggest a hypothesis that the cell nucleus of the life as we know it originally evolved from a large DNA virus.

(Source: Wikipedia, Pithovirus sibericum photo by Pavel Hrdlička, Wikipedia)



Tags for this post: announce features integration jenkins plugin product release reporting 


Meliora Testlab – Polybius – released

Meliora Testlab team is proud to announce a new version of Testlab – Polybius. In addition to smaller enhancements many of the new features revolve around parameterization of test cases. It is now possible to create test cases as templates and when executing them, enter the parameters of the template to easily create variants of the test case.

Please read more about the new features and changes below. 


Test case parameters

Test case templating with parameters

Test cases can now be embedded with parameters. Parameters are basically tags entered into a test case’s description, preconditions, steps or expected end result. In the content text, parameters are entered as ${PARAMETER_NAME} tags.

When the test case is later given parameter values, the description shown to the tester will have the parameter tags replaced with the entered values.

Using parameters is an easy way to create many executable variants of a test case with similar description content. For example, if you have a set of test cases intended to be run with each different web browser, you can describe your test case with an appropriate ${BROWSER} tag. This way you can easily plan & execute this test case for different web browsers by entering fitting values for this parameter later on.
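The substitution mechanic can be sketched as follows. This is an illustration of the behavior, not Testlab’s actual implementation; the tag syntax follows the ${BROWSER} example above.

```python
import re

def render_content(content, values):
    # Replace ${NAME} parameter tags with the entered values;
    # unknown tags are left in place.
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: values.get(m.group(1), m.group(0)),
        content,
    )

step = "Open the front page in ${BROWSER} and verify the layout."
rendered = render_content(step, {"BROWSER": "Firefox"})
```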


Parameter typing

Testlab includes a management view for all project’s parameters. You can configure your parameters as selection lists, which restricts the values that can be selected for these parameters.


Planning and running tests with parameters

To create test case variants for parameterized test cases you can now enter values for the parameters in Execution planning view. The editor features a column showing all possible parameters for the test case and allows you to enter values for them. When a parameterized test case (template) is added to a test set and it is set with parameter values, the test case becomes a parameterized instance to be run.

Test execution view has similar controls where you can enter parameter values for test cases already added to a test run.


Issue management enhancements
  • Issues have a new field “Found with parameters” which always includes the parameters of the test case the issue was added for (if any).
  • The issue listing includes the same field as a column, which allows you to easily filter issues with a specific parameter value.
  • When adding a new issue while executing a test case, the failed steps and their comments so far are now automatically copied to the issue’s description field.


Coverage and reporting with parameterized tests

The Test coverage view, with coverage views via requirements and test cases, has been enhanced to include parameterized test cases. Each parameterized variant of a test case is now counted as a single verifying test case in the coverage calculation. The view also shows the parameters the test cases were run with.

The reports also now include the test case parameters in appropriate places (Issue listing, requirement coverage, results of run tests and execution status of test cases report). Issue listing, results of run tests and execution status of test cases reports also have test case parameter values as filters which allows you to filter in specific results from your test runs.


Jenkins plugin changes

The Meliora Testlab Jenkins plugin now supports passing test case parameters from Jenkins’ environment variables. You can list the variables which are passed as-is with the results and will be included as parameters on the Testlab side.

For example, test case(s) in your Testlab project might have a parameter titled ${BROWSER}. When running automated UI tests in Jenkins, add “BROWSER” to this new setting and ensure that your Jenkins job sets an environment variable BROWSER to some sensible value. Running the job then sends the ${BROWSER} test case parameter for all run tests, set to the value of the environment variable.


Importing test runs

A CSV import of test runs has been added. Importing test runs supports importing test run details, test cases run in them and testing results of test cases and their steps.
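For illustration, such a CSV could look roughly like the following. The column layout here is a hypothetical example; check the import documentation for the actual supported fields.

```
Test run,Test case,Result,Step,Step result,Comment
Sprint 12 regression,Login,PASS,,,
Sprint 12 regression,Checkout,FAIL,2,FAIL,Total price not updated
```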


Centralized authentication (CAS) improvements

Registration of new user accounts

Previously, when users authenticated via an external CAS source came to Testlab without an existing Testlab user account, they could not log on. This is because authorization to Testlab’s projects is done via Testlab user accounts: the needed roles and permissions must be granted to these accounts for access.

This version improves on this: when a user without a Testlab account is authenticated via CAS, they are shown a welcome screen allowing them to register a personal Testlab account by entering a few details (e-mail address, full name). The Testlab user account is then automatically created and an e-mail message is sent to the company administrators notifying them about the new account.

Granting roles to new accounts (On-premise)

On-premise version of Testlab can be configured to automatically grant a set of roles in a set of projects when a new user account is registered via CAS.

Relaxing the SSL verification of the host name for ticket validations

When setting up CAS, the client can now be configured to skip the host name check of the SSL certificate when verifying CAS tickets. This may help setting up CAS with self-signed certificates or with proxy configurations.


Sincerely yours,

Meliora team


Fancy a game? Polybius is an arcade game which is said to have induced various psychological effects on players. The story describes players suffering from amnesia, night terrors, and a tendency to stop playing all video games. Around a month after its supposed release in 1981, Polybius is said to have disappeared without a trace. [Wikipedia]

In the span of a week, three children really did fall ill upon playing video games at arcades in the Portland area. Michael Lopez got a migraine, the first he’d ever had, from playing Tempest. Brian Mauro, a 12-year-old trying to set the world record for playing Asteroids for the longest time, fell ill after a 28-hour stint. And only a week later 18-year-old competitive gamer Jeff Dailey died due to a heart attack after chasing the world record in Berserk. One year later 19-year-old Peter Burkowski followed suit for the same reason playing the same game. [Eurogamer]

(Photo by DocAtRS [CC BY-SA 3.0], via Wikimedia Commons)




Tags for this post: announce features jenkins product release reporting video 


New major release of Testlab – Eiserner Mann

With joy we announce a new version of Testlab – Eiserner Mann. This version brings some new feature enhancements and in addition, a more scalable way of loading data to the UI which helps you when you have projects with thousands of stories or test cases. All features are immediately live for our hosted customers.

Please read more about the new features and changes below. 


Test case coverage
Test case coverage

Testlab has always had the coverage analysis view from where you can see how your stories and requirements are covered in testing. For projects which for some reason do not hold any stories or requirements, a new test case coverage view has been added, which works in a similar way to requirement coverage but reports the status of your testing against your test cases.

You have the same familiar controls for picking a milestone, a version or a test run to report the status of your testing against or, choose nothing to get all the latest results of your test cases against the whole test plan.

The new view is perfect for projects with only test cases and can also be used in combination with the requirement coverage view for all projects. We encourage you to use stories and requirements to make your test plan more manageable, but even if you don’t, the new view offers you an easier way to get a grasp of the status of your testing.


Filtering report data with custom fields

The reporting engine of Testlab has been enhanced with a possibility to filter the data on your reports with your configured custom fields.

If your requirements, test cases and/or issues have custom fields, you can now use these fields to pick the data you wish to include on your reports. All fields are automatically included in the window you configure your report with.


New custom field types

Two new types of custom fields are now available to be used with your requirements, test cases and issues:

  • Issues: allows you to select one or more issues to be linked to the asset. This way you can, for example, link issues to other issues, or link assets in any other way you prefer.
  • Requirements: allows you to select one or more requirements to be linked to the asset.


Requirement coverage report

A new report has been included which reports the coverage of testing against requirements by listing requirements and related test cases verifying them. This report can be used to inspect the coverage of testing and get a good picture of the status of your testing. The report basically includes the same point of view to your testing as does the Test coverage view of Testlab. 


Generating external links to assets

Using the new added link button makes it easy to generate a link which can be used to open up the asset in Testlab. This feature can be found in applicable views’ top right corner as a button with a link symbol.


Loading data on demand

For projects with thousands and thousands of assets such as requirements and test cases, the earlier way of fetching and caching project data in the UI was sometimes a hindrance. This version has a new strategy of loading data on demand, which makes the UI noticeably faster for really large projects.

Deleting projects

If you wish to permanently delete a whole project from your Testlab, it is now possible from the UI. Keep in mind that deleting a project deletes all of its data permanently, so use this functionality with care.


Other miscellaneous changes

In addition to the above, the following changes are also available:

  • Notifications on your dashboard are now folded into groups to make the notification panel more usable when there are a lot of tasks assigned to you.
  • When creating requirements, the context menu in the tree of requirements now offers direct links to create different classes of requirements (stories, folders, etc).
  • Test cases can now be removed from test runs directly from the Test execution view. Earlier, this was only possible while running the actual test run.


Sincerely yours,

Meliora team


Der Eiserne Mann (The Iron Man) is an old iron pillar partially buried in the ground in the German national forest of Naturpark Kottenforst-Ville, about two kilometers north-east of the village Dünstekoven. It is a roughly rectangular metal bar with about 1.47 m above ground and approximately 2.7 m below ground. The pillar is currently located at a meeting of trails which were built in the early 18th century through the formerly pathless forest area, but it is believed to have stood in another nearby location before that time.

A metallurgical investigation in the 1970s showed that the pillar is made of pig iron. After the long exposure to the weather, the iron man shows signs of weathering but there is remarkably little trace of rust. [Wikipedia]





Tags for this post: announce features product release reporting 


New JIRA integration and Pivotal Tracker support

The latest release of Testlab includes a number of enhancements for integrations:

  • A new integration method (based on JIRA’s WebHooks) has been added. This brings us the support for JIRA OnDemand.
  • New JIRA integration method supports pushing JIRA’s issues as requirements / user stories to Testlab. This makes it possible to push stories from your JIRA Agile to Testlab for verification.
  • Brand new support has been added for the Pivotal Tracker agile project management tool. You can now easily manage and track your project tasks in Pivotal Tracker and at the same time test, verify and report your implementation in Testlab.

Read further for more details on these new exciting features.


Support for JIRA OnDemand

To this day, Testlab has supported JIRA integration with a best-in-class 2-way strategy, with the possibility to freely edit issues in both end systems. For this to work, JIRA has had to be installed with Testlab’s JIRA plugin, which provides the needed synchronization. Atlassian’s cloud-based offering, JIRA OnDemand, does not allow custom plugins to be installed, so the 2-way synchronized integration has been possible only with JIRA instances installed on customers’ own servers.

The new JIRA integration strategy can be configured using just JIRA’s WebHooks. This requires no plugins and brings support for JIRA OnDemand instances. Keep in mind that the new, simpler integration strategy is also available for JIRA instances installed on your own servers.


One-way integration for JIRA’s issues and stories

The new integration method works so that issues and stories are created in JIRA and can be pushed to Testlab. This is possible for issues / bugs and also for requirements. For example, you can push stories from your JIRA Agile to Testlab as requirements, which makes it possible to include the specification you design in JIRA as part of the specification of the Testlab project you aim to test.

You can read more about the different integration strategies for Atlassian JIRA here.


Configuring the integrations


All new integrations, including the JIRA integration for issues and requirements and the Pivotal Tracker integration, are implemented so that they can be set up by yourself. The project management view in Testlab has a new Integrations tab which allows you to configure the Testlab-side settings for these integrations. Some integrations also require preliminary setup to work; instructions are provided in the plugin-specific documentation.


Pivotal Tracker integration

Pivotal Tracker is an agile project management tool for software team project management and collaboration. The tool allows you to plan and track your project using stories.

Your Testlab project can be integrated with Pivotal Tracker to export assets from Testlab to Pivotal Tracker as stories and to push stories from Pivotal Tracker to your Testlab project as requirements.


Importing Testlab assets to Pivotal Tracker

The integration works in a two-way manner. First, you have an option to import assets from your Testlab project to your Pivotal Tracker project as stories to be tracked. You can pull requirements, test runs, test sets, milestones, issues and even test cases from your Testlab project. When you do so, a new story is created in your Pivotal Tracker project. It is as easy as dragging an asset to your project’s Icebox in Pivotal.


Pushing stories from Pivotal Tracker to Testlab

You also have an option to push stories from your Pivotal Tracker project to your Testlab project. This way, you can easily

  • push your stories from your Pivotal Tracker to Testlab project’s specification as user stories to be verified and
  • push bugs from your Tracker to Testlab project as issues.

Using Pivotal Tracker with Testlab is an excellent choice for project management. This way you can plan and track your project activities in Pivotal Tracker and at the same time test, verify and report your implementation in Testlab.

You can read more about setting up the Pivotal Tracker integration here.


To recap, the new features

  • bring you support for Pivotal Tracker, one of the industry’s best agile project management tools,
  • add support for Atlassian JIRA integration where issues from JIRA are pushed to Testlab as issues and/or requirements and
  • add support for JIRA OnDemand.

We hope these new features make the use of Testlab more productive for all our existing and future clients.





Tags for this post: announce features integration jira pivotal tracker release screenshots 


New major release of Testlab

Meliora is more than happy to announce a new major release of Testlab – the most advanced and easy to deploy ALM/Test management solution for all enterprises – codenamed Razzle Dazzle. This version includes management of Milestones, much enhanced reporting and effort estimation features, to name a few.

Please read more about the new features below and as usual, all features are immediately live for our hosted customers. 


Milestones video


In previous Testlab versions, you had an option to use versions and environments to organize your testing efforts and manage your project goals. This mostly works fine, but there are situations where tracking with only (usually technical) versions can get cumbersome.

Testlab now features Milestones – project reference points which give your project the needed structure for tracking and for easier handling of assets. In software projects, these are typically software releases, sprints, product branches, parts of your software platform or something else that you aim to track your project in.

Some notable features for Milestone management:

  • Specification assets (user stories, requirements, …) can be targeted to a milestone. This makes separation of different parts of your project easier inside the Testlab project.
  • Similarly, test cases and issues may be targeted to a milestone.
  • A new major view has been added to manage your milestones. This view also offers an always up to date view on how the testing of a milestone is progressing:
    • Statistics on how the design of milestone’s specifications is progressing,
    • view on the latest execution results of test cases assigned to the milestone and
    • the list of issues targeted to the milestone.
  • Milestones can be optionally set up to inherit other milestones: When inherited, the milestone’s specification, test cases and issues can be seen to affect the (possibly) later milestones. This is especially handy when you are managing the testing efforts of your complex platform-oriented software product. For example, you might have customer specific variants of your product as separate milestones which are set up to inherit the “base platform milestone” in your project. This way for example, the requirements from the base platform’s specification are inherited to all your customer specific milestones.
  • A new dashboard widget has been added which allows you to track the progress of a selected milestone.
  • Reports have been added with possibility to filter content per milestone.
  • A new custom field type has been added with milestone selection.
  • Search function will return matching milestones by identifier and title.
  • Coverage can be reported against a milestone.
  • Integration related Plugins (Confluence, Jenkins CI) have been updated to support passing the Milestone information back and forth.

Keep in mind that using Milestones is totally optional. You can start your project as before and add milestones later when you need them. Using them is recommended though, as the milestone view offers a handy view of your testing progress.


New report templates

Reporting in Testlab has been revised with a number of new report templates. In addition to the new reports presented below, a number of existing report templates have gained new filtering and configuration possibilities.

  • Results for run tests – A template to report testing results for run test cases. The template can be configured to pick and filter the relevant test cases from your project and includes a grouping of your choice and, in addition, a detailed list of each executed test.
  • Testing effort estimation – The report includes all test cases matching the entered filtering criteria, with the average, minimum, maximum and latest durations for the execution of the test cases. The times calculated help you estimate the effort it could take to execute the set of test cases reported.
  • Issue grouping – Used to get an overall view of your project’s issues summed and grouped against two attributes of your choice. By configuring this report properly you can get important insight into the state of your project’s issues.
  • Issues in time – A time-series graph of issues summed up according to a chosen field. The report is generated for a chosen time period and includes all issues matching the entered filtering criteria. This report helps you summarize how the issues in your project have progressed in time, observed against an issue field of your choice.
  • Test case grouping – Similar to the Issue grouping report but calculates and reports against the test cases of your project.
  • Test cases in time – Similar to the Issues in time report but calculates and reports against the test cases of your project.
  • Requirement grouping – Similar to the Issue and Test case grouping reports but calculates and reports against the requirements of your project.
  • Requirements in time – Similar to the Issues and Test cases in time reports but calculates and reports against the requirements of your project.


Exporting reports video

Excel and OpenOffice export of reports

All PDF-rendered reports can now be exported as

  • MS Excel documents, making it possible to easily analyze your data further in the spreadsheet domain, and
  • OpenOffice Writer documents, allowing you to format the report further in your word processing software.


Testing effort estimation video

Testing effort estimation

To this day, Testlab has been recording the execution time when you execute your test cases. Utilizing this information, new features have been added to help estimate your future testing effort.

  • In Execution planning, a new column “Average time taken” has been added which calculates and shows the average time it has taken previously when this test case has been executed. These values are used to calculate the total time it would take to execute the set of tests currently in the test set editor. This way, you can estimate the total average time it would take to execute a set of hand picked test cases.
  • In Test execution, we’ve added a new “Time taken” column to indicate how long it took to execute the test in the test run. The summary of a test run also includes a total time it has taken to execute the tests in the selected test run.
  • Milestone’s testing progress in the Milestones view shows the total time it has taken to execute test cases in the milestone.
  • Testlab captures the time it takes to execute a test case automatically when you execute tests. The time of a single test case execution can be edited in the Test execution view, which allows you to correct the statistics if the time taken was incorrectly captured.
  • A new “Testing effort estimation” report has been added to report out further statistics for execution of test cases.
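The estimation logic described above can be sketched as follows. This is a simplification for illustration; Testlab computes the averages from its recorded execution times.

```python
from statistics import mean

def effort_estimate(durations_by_case):
    # Sum of per-test-case average execution times (seconds), mirroring
    # the "Average time taken" column described above. Test cases with
    # no execution history contribute zero.
    return sum(mean(d) for d in durations_by_case.values() if d)

total = effort_estimate({
    "Login":    [120, 90, 150],   # three earlier executions
    "Checkout": [300, 420],
    "New case": [],               # never executed yet
})
```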


Copying projects

Previously, you had an option to configure a single “Template” project with a set of information and settings which would be copied to all new projects. Now, you have an option to select a project from which you would like to copy the content from and in addition, pick the information which is copied. This way, when you are adding a new project, you can always copy the content of your choice from some other project to make the starting of your new project as easy as possible!


Copying old revisions of assets

When you edit ready-approved assets (test cases or requirements), Testlab automatically creates new revisions in the database. Old revisions are accessible by selecting an old revision from the top right corner of the applicable editing view. Now, if you need to roll back changes or take a previous version of a test case back into use, you can copy the old revision’s content as a new asset in your project. Just select the asset and revision of your choice and select “Copy as new…” from the menu, and Testlab will copy the selected revision as a new asset.


Sincerely yours,

Meliora team


During World War I, and to a lesser extent in World War II, the seas of the world saw some of the wildest and most psychedelic ship painting designs ever. This is known as dazzle camouflage, which works by making it difficult to estimate the camouflaged ship’s range, speed and heading. The evidence for the success of dazzle camouflage, also known as razzle dazzle, was at best mixed.

If you missed this little part of our history, dazzle your neighbours and go and educate yourself with a Stuff you missed in history class podcast of the topic.





Tags for this post: announce features product release reporting 


Integrating with Apache JMeter

Apache JMeter is a popular tool for load and functional testing and for measuring performance. In this article we will give you hands-on examples on how to integrate your JMeter tests to Meliora Testlab.

Apache JMeter in brief

Apache JMeter is a tool with which you can design load testing and functional testing scripts. Originally, JMeter was designed for testing web applications but it has since expanded to different kinds of load and functional testing. The scripts can be executed to collect performance statistics and testing results.

JMeter offers a desktop application with which the scripts can be designed and run. JMeter can also be used from different kinds of build environments (such as Maven, Gradle, Ant, …) from which running the tests can be automated. JMeter’s web site has a good set of documentation on how it should be used.


Typical usage scenario for JMeter

A common scenario for using JMeter is some kind of load testing or smoke testing setup where JMeter is scripted to make a load of HTTP requests to a web application. Response times, request durations and possible errors are logged and analyzed later for defects. Interpreting performance reports and analyzing metrics is usually done by people, as automatically determining whether some metric should be considered a failure is often hard.

Keep in mind, that JMeter can be used against various kinds of backends other than HTTP servers, but we won’t get into that in this article.


Automating load testing with assertions

The difficulty in automating load testing scenarios comes from the fact that performance metrics are often ambiguous. For automation, each test run by JMeter must produce a distinct result indicating whether the test passes or not. Assertions can be added to the JMeter script to tackle this problem.

Assertions are basically the criteria used to decide whether a sample recorded by JMeter indicates a failure. For example, an assertion might be set up to check that a request to your web application is executed in under some specified duration (i.e. your application is “fast enough”). Or, another assertion might check that the response code from your application is always correct (for example, 200 OK). JMeter supports a number of different kinds of assertions to design your script with.

When your load testing script is set up with proper assertions, it suits automation well: it can be run automatically, periodically or in any way you prefer to produce passing and failing test results which can be pushed to your test management tool for analysis. There is a good set of documentation available online on how to use assertions in JMeter.


Integration to Meliora Testlab

Meliora Testlab has a Jenkins CI plugin which enables you to push test results and open up issues according to the test results of your automated tests. When JMeter scripted tests are run in a Jenkins job, you can push the results of your load testing criteria to your Testlab project!

The technical scenario of this is illustrated in the picture below.


You need your JMeter script (plan). This is designed with the JMeter tool and should include the needed assertions (in the picture: Duration and ResponseCode) to determine whether the tests pass. A Jenkins job should be set up to run your tests and translate the JMeter-produced log file to xUnit-compatible testing results, which are then pushed to your Testlab project as test case results. Each JMeter test (in this case Front page.Duration and Front page.ResponseCode) is mapped to a test case in your Testlab project, which gets results posted when the Jenkins job is executed.


Example setup

In this chapter, we give you a hands-on example of how to set up a Jenkins job to push testing results to your Testlab project. To make things easy, download the testlab_jmeter_example.zip file, which includes all the files and assets mentioned below.

Creating a build

You need some kind of build (Maven, Gradle, Ant, …) to execute your JMeter tests with. In this example we are going to use Gradle, as it offers an easy-to-use JMeter plugin for running the tests. There are tons of options for running JMeter scripts, but using a build plugin is often the easiest way.

1. Download and install Gradle if needed

Go to www.gradle.org and download the latest Gradle binary. Install it to your system path as instructed so that you can run gradle commands.

2. Create build.gradle file

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'jmeter'

buildscript {
    repositories {
        mavenCentral() // repository hosting the plugin artifact (assumed)
    }
    dependencies {
        classpath "com.github.kulya:jmeter-gradle-plugin:1.3.1-2.6"
    }
}

As we are going to run all the tests with the plugin’s default settings, this is all we need. The build file just registers the “jmeter” plugin from the repository configured.

3. Create src directory and needed artifacts

For the JMeter plugin to work, create a src/test/jmeter directory and drop in a jmeter.properties file, which is needed for running the actual JMeter tool. This file is easy to obtain by downloading JMeter and copying the default jmeter.properties from the tool to this directory.

Creating a JMeter plan

When your Gradle build is set up as instructed, you can run the JMeter tool easily by changing to your build directory and running the command

# gradle jmeterEditor

This downloads all the needed artifacts and launches the graphical user interface for designing JMeter plans.

To make things easy, you can use the MyPlan.jmx provided in the zip package. The script is really simple: it has a single HTTP Request Sampler (named Front page) set up to make a request to the http://localhost:9090 address, with two assertions:

  • A Duration assertion to check that the time to make the request does not exceed 5 milliseconds. For the sake of this example, this assertion should fail, as the request probably takes longer than this.
  • A ResponseCode assertion to check that the response code from the server is 200 (OK). This should pass as long as there is a web server running on port 9090 (we’ll come to this later).
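In spirit, the two assertions boil down to simple pass/fail checks. The following Python sketch is purely illustrative: the names and thresholds mirror the example plan, the sample data is hypothetical, and this is not JMeter’s internal logic.

```python
# Illustrative pass/fail logic of the two assertions in MyPlan.jmx.
# The sample data below is hypothetical; JMeter evaluates these internally.

def duration_assertion(elapsed_ms, max_ms=5):
    """Passes when the request completed within max_ms milliseconds."""
    return elapsed_ms <= max_ms

def response_code_assertion(code, expected="200"):
    """Passes when the HTTP response code matches the expected one."""
    return str(code) == expected

sample = {"elapsed_ms": 152, "code": 200}  # a typical local request
print(duration_assertion(sample["elapsed_ms"]))   # False - fails as expected
print(response_code_assertion(sample["code"]))    # True - passes
```

This mirrors why the first assertion is expected to fail in the example: a real request rarely completes in under 5 milliseconds.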

It is recommended to give your Samplers and Assertions sensible names, as you refer to these names directly later when mapping the test results to your Testlab test cases.

The created plan(s) should be saved to the src/test/jmeter directory we created earlier, as Gradle’s JMeter plugin automatically executes all plans from this directory.


Setting up a Jenkins job

1. Install Jenkins

If you don’t happen to have a Jenkins CI server available, setting one up locally couldn’t be easier. Download the latest release to a directory and run it with 

# java -jar jenkins.war --httpPort=9090

Wait a bit, and Jenkins should be accessible from http://localhost:9090 with your web browser. 

The JMeter plan we went through earlier made a request to http://localhost:9090. When you run Jenkins with the command above, JMeter will fetch the front page of your Jenkins CI server when the tests are run. If you prefer to use some other Jenkins installation, you might want to edit the MyPlan.jmx provided to point to that address.

2. Install needed Jenkins plugins

Go to Manage Jenkins > Manage Plugins > Available and install

  • Gradle Plugin
  • Meliora Testlab Plugin
  • Performance Plugin
  • xUnit Plugin

2.1 Configure plugins

Go to Manage Jenkins > Configure System > Gradle and add a new Gradle installation for your locally installed Gradle. 

3. Create a job

Add new “free-style software project” job to your Jenkins and configure it as follows:

3.1 Add build step: Execute shell

Add a new “Execute shell” typed build step to copy the contents of the Gradle project you set up earlier to the job’s workspace. This is needed as the project is not in a version control repository. Set up the step, for example:


.. or something else that will make your Gradle project available in the Jenkins job’s workspace.

Note: The files should be copied so that the root of the workspace contains the build.gradle file for launching the build.

3.2 Add build step: Invoke Gradle script

Select your locally installed Gradle Version and enter “clean jmeterRun” in the Tasks field. This will run the “gradle clean jmeterRun” command for your Gradle project, which will clean up the workspace and execute the JMeter plan.


3.3 Add post-build action: Publish Performance test result report (optional)

Jenkins CI’s Performance plugin provides you with trend reports on how your JMeter tests have been run. This plugin is not required for Testlab’s integration, but it provides handy performance metrics in your Jenkins job view. To set up the action, click “Add a new report”, select JMeter and set the Report files as “**/jmeter-report/*.xml”:


Other settings can be left at their defaults, or you can configure them to your liking.

3.4 Add post-build action: Publish xUnit test result report

Testlab’s Jenkins plugin needs the test results to be available in the so-called xUnit format. In addition, this will generate test result trend graphs in your Jenkins job view. Add a post-build action to publish the test results resolved from JMeter assertions as follows, by selecting a “Custom Tool”:


Note: The jmeter_to_xunit.xsl custom stylesheet is mandatory. It translates JMeter’s log files to the xUnit format. The .xsl file is located in the jmeterproject directory in the zip file and will be available in the Jenkins workspace root if the project is copied there as set up earlier.

3.5 Add post-build action: Publish test results to Testlab

The above plugins will set up the workspace, execute the JMeter tests, publish the needed reports to the Jenkins job view and translate the JMeter log file(s) to xUnit format. What is left is to push the test results to Testlab. For this, add a “Publish test results to Testlab” post-build action and configure it as follows:


For the sake of simplicity, we will be using the “Demo” project of your Testlab. Make sure to configure the “Company ID” and “Testlab API key” fields to match your Testlab environment. The Test case mapping field is set to “Automated”, which is by default configured as a custom field in the “Demo” project.

If you haven’t yet configured an API key for your Testlab, log on to your Testlab as a company administrator and configure one from Testlab > Manage company … > API keys. See Testlab’s help manual for more details.

Note: Your Testlab edition must be one with access to the API functions. If you cannot see the API keys tab in your Manage company view and wish to proceed, please contact us and we’ll get it sorted out.


Mapping JMeter tests to test cases in Testlab

For the Jenkins plugin to be able to record the test results to your Testlab project, your project must contain matching test cases. As explained in the plugin documentation, your project in Testlab must have a custom field set up which is used to map the incoming test results. In the “Demo” project, this field is already set up (called “Automated”). 

Every assertion in JMeter’s test plan will record a distinct test result when run. In the simple plan provided, we have a single HTTP Request Sampler named “Front page”. This Sampler is tied to two assertions (named “Duration” and “ResponseCode”) which check whether the request was done properly. When translated to xUnit format, these test results will get identified as <Sampler name>.<Assertion name>, for example:

  • Front page/Duration will be identified as: Front page.Duration and
  • Front page/ResponseCode will be identified as: Front page.ResponseCode
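The sketch below mimics this naming rule in Python, assuming the XML-format JTL log JMeter writes. The jmeter_to_xunit.xsl stylesheet does the real translation; the log fragment here is a shortened, hypothetical example.

```python
# Sketch of the <Sampler name>.<Assertion name> naming rule applied when
# translating a JMeter JTL log to xUnit results. The JTL fragment is a
# hypothetical example, not a verbatim JMeter log.
import xml.etree.ElementTree as ET

JTL_SAMPLE = """
<testResults version="1.2">
  <httpSample t="152" lb="Front page" rc="200" s="false">
    <assertionResult>
      <name>Duration</name>
      <failure>true</failure>
    </assertionResult>
    <assertionResult>
      <name>ResponseCode</name>
      <failure>false</failure>
    </assertionResult>
  </httpSample>
</testResults>
"""

def jtl_to_results(jtl_xml):
    """Map each assertion to a '<Sampler>.<Assertion>' test result."""
    results = {}
    root = ET.fromstring(jtl_xml)
    for sample in root:
        sampler = sample.get("lb")  # sampler label, e.g. "Front page"
        for ar in sample.findall("assertionResult"):
            name = "%s.%s" % (sampler, ar.findtext("name"))
            failed = ar.findtext("failure") == "true"
            results[name] = "FAIL" if failed else "PASS"
    return results

print(jtl_to_results(JTL_SAMPLE))
# {'Front page.Duration': 'FAIL', 'Front page.ResponseCode': 'PASS'}
```

These identifiers are exactly the values you enter into the mapping custom field of the corresponding Testlab test cases.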

To map these test results to test cases in the “Demo” project,

1. Add test cases for JMeter assertions

Log on to Testlab’s “Demo” project, go to Test case design and

  • add a new Test category called “Load tests”, and to this category,
  • add a new test case “Front page speed”, set the Automated field to “Front page.Duration” and Approve the test case as ready and
  • add a new test case “Front page response code”, set the Automated field to “Front page.ResponseCode” and Approve the test case as ready.

Now we have two test cases for which the “Test case mapping field” we set up earlier (“Automated”) contains the JMeter assertions’ identifiers.


Running JMeter tests

What is left to do is to run the actual tests. Go to your Jenkins job view and click “Build now”. A new build should be scheduled, executed and completed – probably as FAILED. This is because the JMeter plan has the 5 millisecond assertion, which is expected to fail the job.


Viewing the results in Testlab

Log on to Testlab’s “Demo” project and select the Test execution view. If everything went correctly, you should now have a new test run titled “jmeter run” in your project:


As expected, the Front page speed test case reports as failed and the Front page response code test case reports as passed.

As we configured the publisher to open up issues for failed tests, we should also have an issue present. Change to the Issues view and verify that an issue has been opened:



Viewing the results in Jenkins CI

The matching results are present in your Jenkins job view. Open the job view in your Jenkins:


The view holds the trend graphs from the plugins we set up earlier: “Responding time” and “Percentage of errors” from Performance plugin and “Test result Trend” from xUnit plugin. 

To see the results of the assertions, click “Latest Test Result”:


The results show that the Front page.Duration test failed and the Front page.ResponseCode test passed.


Links referred

 http://jmeter.apache.org/  Apache JMeter home page
 http://jmeter.apache.org/usermanual/index.html  Apache JMeter user manual
 http://jmeter.apache.org/usermanual/component_reference.html#assertions  Apache JMeter assertion reference
 http://blazemeter.com/blog/how-use-jmeter-assertions-3-easy-steps  BlazeMeter: How to Use JMeter Assertions in 3 Easy Steps
 https://www.melioratestlab.com/wp-content/uploads/2014/09/testlab_jmeter_example.zip  Needed assets for running the example
 https://github.com/kulya/jmeter-gradle-plugin  Gradle JMeter plugin
 http://www.gradle.org/  Gradle home page
 http://mirrors.jenkins-ci.org/war/latest/jenkins.war  Latest Jenkins CI release
 https://www.melioratestlab.com/wp-content/uploads/2014/09/jmeter_to_xunit.xsl_.txt  XSL-file to translate JMeter JTL file to xUnit format  
 https://wiki.jenkins-ci.org/display/JENKINS/Meliora+Testlab+Plugin  Meliora Testlab Jenkins Plugin documentation 





Tags for this post: example integration jenkins load testing usage 


How to: Work with bugs and other issues

A tester’s role in a project is often seen in two major areas: building confidence that the software fulfills its needs and, on the other hand, finding the parts where it does not, or might not. Just pointing a finger at where the problem lies is seldom enough: the tester needs to tell how the problem was encountered and why the found behaviour is a problem. The same goes for enhancement ideas. This post is about how to work with bugs (what we call defects in this post) and other findings (we call all the findings issues), from writing them to closing them.


Using time wisely

Author encountering a bug

While small-scale projects can cope with findings written on post-it notes, excel sheets or such, efficient handling of large amounts of issues this way is next to impossible. The real reason for having a more refined way of working with issues is to enable the project to react to findings as needed. Some issues need a fast response, some are best listed for future development but not forgotten. When you work right with issues, people in the project find the right ones to work on at a given time, and also understand what the issue was written about.

As software projects are complex entities, it is hard to achieve such a thing as a perfect defect description, but descriptions need to be pretty darn good to be helpful. If a tester writes a report that the receiver does not understand, it will be sent back or, even worse, disregarded. That said, getting the best bang for the buck for the tester’s time does not always mean writing an essay. If the tester sits next to the developer, just a descriptive name for the issue might be enough. Then the tester just must remember what was meant by the description! The same kind of logic goes for defect workflows. On some projects workflows are not needed to guide the work, but on some it really makes sense for an issue to go through a certain flow. There is no single truth. This article lists some of the things you should consider. You pick what you need.


Describing the finding

The first thing to do after finding a defect is not always to write it down in the issue management system as fast as possible. Has the thing been reported earlier? If not, the next thing is to find out more about what happened. Can you replicate the behaviour? Does the same thing happen elsewhere in the system under test? What was the actual cause of the issue? If you did a long and complex operation during testing but can replicate the issue with a more compact set of actions, you should describe the more compact one.

After you have an understanding of the issue, you should write (a pretty darn good) description of it. The minimum is to write down the steps you took. If you found out more about the behaviour, like how in similar conditions it does not happen, write that down too. If the issue can be found by executing a test case, do not rely on the reader reading the test case. Write down what is relevant to causing the issue. Attached pictures are often a great way of showing what went wrong, and attached videos a great way of showing what you did to cause it. They are really fast to record and a good video shows exactly what needs to be shown. Use them. https://www.melioratestlab.com/taking-screenshots-and-capturing-video/

Consider adding an exact timestamp of when you tested the issue. The developers might want to dig into the log files – or even better, attach the log files yourself. It is not always clear whether something is a defect. If there is any room for debate, also write down why you think the behaviour is a defect.

Besides writing a good description, you should also classify your defects. This helps the people in the project find the most important ones for them from the defect mass. Consider the following fields and the motives for using them:


  • Priority: Not all issues are as important at the moment.
  • Severity: Tells about the effect of the issue, which can differ from the priority. Then again, in most projects severity and priority go hand in hand, in which case it is easier to use just one field.
  • Assignment: Who is expected to take the next step in handling the issue.
  • Type: Is the issue a defect, an enhancement idea, a task or something else.
  • Version detected: What version was tested when the issue was found.
  • Environment: On what environment the issue was found.
  • Application area: What application area the issue affects.
  • Resolution: For closed issues, the reason why the defect was closed.


Add to your project the fields that are meaningful for you to classify defects with and hide the ones that are not relevant. If you are not sure whether you benefit from a classification field, do not use the field. Use tags instead. You can add any tags to issues and later find the issues or report on them. Easy.


Issue written – what next?

Now that you have written an error report, did you find the issue by conducting an existing test? If not, should there be a test that would find that given defect? After the issue has been corrected you probably get to verify the fix, but if the issue seems likely to re-emerge, write a test case for it.


Writing an issue begins its life cycle. At the bare minimum a defect has two stages – open while you work on it and closed when you have done what you wanted to do – but most of the time the project benefits from having more stages in the workflow. The benefits mainly come from easier management of large amounts of findings: it helps to know what needs to be done next, and by whom. Having information on what state the issues are in, the project is able to spot bottlenecks more easily. Generally, the bigger the project, the more benefit there is to be gotten from having more statuses. That said, if the statuses do not give you more relevant information about the project, do not collect them. Simpler is better.

  • New: Sometimes it makes sense to have the issues verified, by someone other than the initial reporter, before passing them on for further handling. New distinguishes these from Open issues.
  • Open: Defects that have been found, but for which no decision on how the situation will be fixed has been made yet.
  • Assigned: The defect is assigned to be fixed.
  • Reopened: The delivered fix did not correct the problem or the problem occurred again later. Sometimes it makes sense to distinguish reopened issues from open ones to see how common it is for problems to re-emerge.
  • Fixed: Issues are marked fixed when the mentioned issue has been corrected.
  • Ready for testing: After an issue has been fixed, this status can be used to mark the ones that are ready for testing.
  • Postponed: Distinguishes those open issues that you are not planning to work on for a while.
  • Closed: After the finding has been confirmed to no longer be relevant – basically, after the fix has been tested – the issue can be closed.

When working with defects and changing their statuses, it is important to comment on the issues when something relevant changes. Basically, if a comment adds information about the found issue, it probably should be added. The main idea behind this is that it is later possible to come back to the discussion, so you don’t need to guess why something was or wasn’t done about the issue. It is especially important to write down how the change will be fixed (when there is room for doubt), and finally how the fix has been implemented so that the tester knows how to re-test the issue. If the developer can explain how the issue was fixed, it helps the tester find other possible issues that may have arisen when the fix was applied.


Digging in the defect mass

So now that you have implemented this kind of classification for working with issues, what could you learn from the statistics? First off, even with sophisticated rating and categorization, the real situation can easily hide behind the numbers. It is more important to react to each individual issue correctly than to rate and categorize issues and only later react to statistics. That said, in complex projects the statistics, or numbers, help you understand what is going on in the project and help you focus on what should be done.

The two most important ways to categorize defects are priority and status. Thus a report showing open issues per priority, grouped by status, is a very good starting point for looking at the current issue situation. Most of the time you handle defects differently from other issues, so you would pick the defects into one report and the other types of issues into another. Now, this kind of report might show you, for example, that there is one critical defect assigned, 3 high priority defects open and 10 normal priority defects fixed. The critical and high priority defects you would probably want to go through individually, at least to make sure that they get fixed as soon as they should so they do not hinder other activities, and for the fixed ones you would look at whether something needs to be done to get them re-tested. If at some point you see some category growing, you know whom to ask questions. For example, a high number of assigned defects would indicate a bottleneck in development, and prolonged numbers in “ready for testing” a bottleneck in testing.

Another generally easy-to-use report is the trend report of open issues by status, or issue development over time. As the report shows how many open issues there have been at any given time, you’ll see the trend – whether you can close issues at the same pace as you open them.
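As a rough illustration of what such a trend report computes, the following Python sketch (with purely hypothetical issue data) counts how many issues were open on each day:

```python
# Minimal sketch of an open-issue trend: count issues open on each day
# to see whether issues are closed at the same pace they are opened.
# The issue data is hypothetical.
from datetime import date, timedelta

issues = [  # (opened, closed or None if still open)
    (date(2014, 9, 1), date(2014, 9, 3)),
    (date(2014, 9, 2), None),
    (date(2014, 9, 2), date(2014, 9, 5)),
]

def open_counts(issues, start, days):
    """Return the number of open issues for each day in the range."""
    counts = []
    for i in range(days):
        day = start + timedelta(days=i)
        n = sum(1 for opened, closed in issues
                if opened <= day and (closed is None or closed > day))
        counts.append(n)
    return counts

print(open_counts(issues, date(2014, 9, 1), 5))
# [1, 3, 2, 2, 1]
```

A flat or falling series suggests issues are being closed as fast as they are found; a rising one suggests the opposite.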

This was just a scratch on the surface of working with defects. If you have questions or would like to point something out, feel free to comment.

Happy hunting!



Tags for this post: best practices issues testing usage 


Testlab – Flannan Isles released

Summer is here! We’re happy to announce a new feature release of Testlab: Flannan Isles. The version has a number of new features and user interface enhancements described in detail below. As expected, all features are immediately available for all our hosted users. Enjoy! 



Tagging assets

Ever had a situation where you’d like to pin some data in your system of choice to find it later on, or group it up with similar data? This is now possible in Testlab, as it implements tagging of assets: you can pin your assets with keywords of your choice.

  • Tag requirements, test cases, test sets, test runs, issues and reports
  • Easy tagging of single assets
  • Tag multiple assets by selecting your assets from a tree or, tag multiple issues using issue listing filtering
  • Easy to use project tag cloud – click a tag to search content by your tag. Or search manually by entering a search word in format “tag:your tag”.
  • Filter tree content by searching with “tag:your tag”
  • Reporting and coverage analysis now support filtering with tags

Tagging assets is especially useful in situations, where you have a need to group some of your assets for later use. For example, you might want to tag all your requirements targeted for future release for easier management of your next release.



Direct editing of requirements and test cases

The views for managing the central assets of Testlab, requirements and test cases, have been enhanced with a snappier direct editing mode.

In previous Testlab versions there was a separate viewing mode for assets and a transition to an editing mode. From now on, the forms and controls for requirement and test case editing are always editable with just a click, for everyone with the needed permissions.

In addition, the form controls now have a clearer disabled look to indicate what is editable and what is not.



Control and monitor automated tests

The Testlab dashboard now features a new widget, Jenkins Jobs, which enables you to monitor and trigger jobs on your Jenkins CI server.

  • View your jobs’ build trend, latest build result and build number directly on your Testlab dashboard
  • Launch builds directly from the widget
  • Open up the related views on your browser from your Jenkins server with just a click
  • Pick the jobs you prefer – even from multiple different Jenkins servers if needed. Up to four different Jenkins servers can be controlled from a single dashboard.
  • Easy UI – if you are familiar with Jenkins’ UI you’ll feel at home with this one
  • Integrates directly from your browser utilizing the Jenkins’ Meliora Testlab plugin at your Jenkins server


Reporting enhancements

The reports of Testlab have been enhanced with the following additions:

  • New filtering options added for
    • Listing of requirements: covered & assignee
    • Listing of test cases: status & priority & assignee
    • Listing of issues: resolution & environment & related test case & resolved in version
  • The Issues per priority report now includes a sub report listing all issues counted in the totals
  • Similarly, the execution status of test cases report now includes a list of the related test cases


Regenerating requirement identifiers

Requirement management now has a menu feature to regenerate IDs. During design, this can be used to easily regenerate all IDs in a folder or even for the entire project. The identifiers will be regenerated in 1, 1.1, 1.1.1, 1.1.2, 1.1.3, … format, prefixed with the same prefix as the selected folder. If IDs are regenerated for the whole project, the identifiers will be prefixed with the project’s key.
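The hierarchical numbering scheme itself works like the sketch below. This is a hypothetical Python illustration of the 1, 1.1, 1.1.1, … format only: Testlab does this internally, and the folder/project prefix described above is omitted here. The sample tree is invented.

```python
# Illustration of the 1, 1.1, 1.1.1, ... numbering scheme applied to a
# requirement tree. Hypothetical sketch; prefixes are omitted.
def number_tree(nodes, prefix=""):
    """Assign hierarchical identifiers to a (name, children) tree."""
    ids = []
    for i, (name, children) in enumerate(nodes, start=1):
        ident = f"{prefix}.{i}" if prefix else f"{i}"
        ids.append((ident, name))
        ids.extend(number_tree(children, ident))  # recurse into children
    return ids

tree = [("Login", [("Password reset", []), ("2FA", [])]), ("Reports", [])]
print(number_tree(tree))
# [('1', 'Login'), ('1.1', 'Password reset'), ('1.2', '2FA'), ('2', 'Reports')]
```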

Keep in mind that regenerating IDs will overwrite and possibly change the current IDs – so use this feature sparingly.


Sincerely yours,

Meliora team


The Flannan Isles are a small island group close to Scotland, west of the Isle of Lewis. In December 1900, the three men occupying the lighthouse vanished, leaving behind an unfinished meal and a mystery that’s never been conclusively solved.

The only sign of anything amiss in the lighthouse was an overturned chair by the kitchen table. No sign of the three men was found, either inside the lighthouse or anywhere on the island. Many rumours about their disappearance surfaced, from murder to a sea serpent eating the men to foreign spies abducting them, but no conclusive evidence was ever found in the investigations.

Photo by Marc Calhoun from geograph.co.uk.



Tags for this post: announce features jenkins release reporting video 


Taking screenshots and capturing video

This article introduces an easy way to capture and annotate screenshots during testing. We show you a couple of easy ways to use the screen capturing and recording tool Monosnap.

The latest Testlab release brings you a built-in integration with Monosnap, a handy screen capturing tool with the possibility of annotating screenshots before uploading. Testlab supports the Monosnap desktop clients for Windows and Mac OS X operating systems. You are of course free to use any screen capturing tool you prefer, but we feel Monosnap really stands out from the crowd feature-wise and in ease of use.


Why take screenshots or record video

When you are testing software on your workstation, taking screenshots is a great way of documenting issues. A picture is worth a thousand words, right? For example, when an issue such as a defect is encountered during testing, capturing a screenshot, annotating the capture by highlighting the issue in an exact way and uploading it to Testlab usually tells the team members very well what went wrong. If the capturing tool allows you to annotate the shot, it’s perfect – the amount of textual description you need to enter for the defect is typically much less when you can mark and highlight the relevant parts of the screenshot.

The benefits of using screenshots in issue management are quite self-evident, but screenshots and recorded screen captures can be quite beneficial in requirement management too. For example, when you are documenting new features on existing user interfaces, taking a screenshot and annotating it properly is a great addition to documenting your requirements. The same applies to test cases: if a test case is testing a complex user interface, a well-annotated screenshot or two can be a great help for a tester when testing.


Monosnap introduced

Monosnap is a collaboration tool for taking screenshots, sharing files and recording video from your desktop. The tool is available for multiple platforms (such as a Google Chrome extension, and for iPhone and iPad), but here we talk about the desktop-installable clients for Microsoft Windows and Mac OS X operating systems as they can be integrated and used seamlessly with Testlab.

When installed, Monosnap runs as a desktop application and is accessible in a way depending on your operating system. For Mac OS X, the tool is available in your desktop’s menu bar as an icon. Similarly, in Windows, the tool is available in your system tray and, if you prefer, as a hovering hotspot on your desktop.

For capturing screenshots the basic way of working with Monosnap is as follows:

  1. You capture an area of your desktop by selecting “Capture Area” from Monosnap’s menu or pressing the appropriate keyboard shortcut.
  2. A Monosnap window appears with the captured area shown. The window has functions to annotate the capture: for example, drawing different shapes on it and writing text on the capture.
  3. When you are happy with the capture you can upload it to a service of your choice or save the capture on your disk.

For capturing video, you

  1. select “Record Video” from monosnap’s menu or press the appropriate keyboard shortcut.
  2. Monosnap’s recording frame appears. Move and resize this frame to the area of your desktop which you would like to record as a video capture. You also have options to record video from your workstation’s web cam and record audio from your microphone if you prefer.
  3. To start recording you press the Rec button. You can annotate the video during recording by drawing different shapes on it. When you have recorded your video you press Rec button again to stop the capture.
  4. When recorded, the video is encoded to MP4 format, which might take a few seconds depending on your workstation. A window appears with the encoded video in it, which you can preview before uploading. You can then upload the captured video to a service of your choice or access the encoded video file on your disk. 


Using Monosnap with Testlab

To use Monosnap with Testlab you have two options: take screen captures with Monosnap and upload them manually to Testlab by dragging and dropping, or integrate Monosnap with Testlab’s WebDAV interface, which allows you to upload captures to Testlab with the click of a button.

Uploading manually

When uploading manually, no pre-configuration is needed. You can use Monosnap in the way you prefer and, when you have a capture ready, upload it to Testlab in the same way you would upload a regular file attachment. Monosnap makes this quite easy as it features a “Drag bar” on the right-hand side of the capture window. From this, you can just grab and drag the capture onto your Testlab browser window and attach it to the open asset just by dropping.

If dragging and dropping is not possible for some reason, as a workaround you can of course save the capture on your disk and upload it to Testlab in the usual way.

To see how it actually works play the video below:




WebDAV integration

Monosnap is great in that it supports uploading captures to a service of your choice with a click of a button, and Testlab can act as the WebDAV storage into which Monosnap pushes the captures. When configured, you can just press Monosnap’s Upload button and the capture is automatically uploaded to Testlab and attached to the asset open in your Testlab browser window.

To make use of this feature some pre-configuration is needed:

  1. Open up Monosnap’s menu and select “Preferences…” or “Settings…”. Monosnap’s settings window opens up.
  2. Select “General” tab and configure the following:
    • After screenshot: Open Monosnap editor
    • After upload: Do not copy
    • Open in browser: no
    • Short links: no
  3. Select the “Account / WebDAV” view and configure the following:

    For Mac OS X:

    • Host: https://COMPANY.melioratestlab.com/api/attachment/user
      Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using hosted Testlab at mycompany.melioratestlab.com, enter “https://mycompany.melioratestlab.com/api/attachment/user” in this field. For on-premise installations, set this field to match the protocol, host name and port of your server, followed by the /api/attachment/user context.
    • Port: Leave as blank (shows as gray “80”)
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Folder: Leave as blank (shows as gray “/var/www/webdav”)
    • Base URL: Leave as blank (shows as gray “”)
      Click the “Make default” button to make the configured WebDAV service the default upload service of Monosnap. When set, the Upload button always uses this service by default.


      For Microsoft Windows:

    • Host: COMPANY.melioratestlab.com
    • Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using Testlab from mycompany.melioratestlab.com, enter “mycompany.melioratestlab.com” in this field.
    • Port: HTTPS or HTTP port of your Testlab server – if you are using hosted Testlab enter 443
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Directory: /api/attachment/user
    • Base URL: Leave as blank
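Under the hood, this kind of WebDAV upload amounts to an authenticated HTTP PUT of the capture file to the configured endpoint. The sketch below illustrates the idea in Python; the exact URL layout (whether the file name is appended to the /api/attachment/user context) and the use of basic authentication are assumptions for illustration, not a description of Testlab’s actual protocol.

```python
# Sketch of what a WebDAV-style capture upload looks like: an authenticated
# HTTP PUT of the file to the attachment endpoint. The host, URL layout and
# auth scheme are illustrative assumptions, not Testlab's documented API.
import base64
from urllib import request

def build_upload_request(host: str, user: str, password: str,
                         filename: str, data: bytes) -> request.Request:
    """Build a PUT request carrying the capture as the request body."""
    # Assumption: the capture's file name is appended to the context path.
    url = f"https://{host}/api/attachment/user/{filename}"
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = request.Request(url, data=data, method="PUT")
    req.add_header("Authorization", f"Basic {credentials}")
    return req

req = build_upload_request("mycompany.melioratestlab.com",
                           "tester", "secret", "capture.png", b"\x89PNG...")
print(req.get_method(), req.full_url)
# A real upload would then pass the request to urllib.request.urlopen(req);
# here we only build and inspect it.
```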

The pre-configuration is documented in detail in the “Screenshots and recording” section of Testlab’s integrated help manual.

Keep in mind that the pre-configuration step needs to be done only once. Once you’ve configured Monosnap to upload captures to Testlab it just works – no need to configure it again later.

Where is the capture uploaded to

When captures are uploaded via Testlab’s WebDAV interface they are automatically attached to the asset currently open in your Testlab browser window. So when uploading, make sure you have an asset (a requirement, a test case or an issue) open in your Testlab window so that a file can be attached to it. Note that if, for example, your Testlab user account does not have the permissions needed to attach files to assets, the upload will silently fail.

To see WebDAV integrated Monosnap in action play the video below:




Advantages gained

Having easy-to-use screen capture tools makes documenting easier and speeds up work on multiple levels: documenting issues and other assets is faster, and the people dealing with the documented assets get a clearer understanding of the issue at hand.


Tags for this post: example features screenshots usage video 


Exploratory testing with Testlab

In this article we introduce you to the recently released features that enable a more streamlined workflow for exploratory testing.

Exploratory testing is an approach to testing where the tester or a team of testers ‘explores’ the system under test and, during the testing, generates and documents good test cases to be run. In more academic terms, it is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.

Compared to scripted testing – where test cases and scenarios are pre-planned before execution – exploratory testing is often seen as more free and flexible. Each of these methodologies has its own benefits and drawbacks, and in reality all testing is usually somewhere in between the two. We won’t go into methodological detail in this article as we focus on how to do the actual test execution in an exploratory way with Testlab. We can conclude, though, that exploratory testing is particularly suitable when requirements for the system under test are incomplete or there is a lack of time.


Pre-planning exploratory test cases

As said, most testing approaches fall somewhere between a fully pre-scripted and a fully exploratory one. It is often worth considering whether pre-planning the test cases in some way would be beneficial. If the system under test is not a total black box – that is, some knowledge or even specifications are available – it may be wise to add so-called stubs for your test cases in the pre-planning phase. Pre-planning test case stubs can give you better insight into testing coverage because in pre-planning you have the option to bind the test cases to requirements. We’ll discuss using requirements in exploratory testing in more detail later.

For example, one approach might be to add just the test cases you think you need to cover some or all areas of the system, without the actual execution steps. The execution steps, preconditions and expected results would then be filled out in exploratory fashion during the testing. Alternatively, you might plan the preconditions and expected results and leave just the execution steps for the testers.

Keep in mind that pre-planning test cases does not and should not prevent your testers from adding totally new test cases during the testing. Additionally, consider whether pre-planning might affect your testers’ way of testing. Sometimes this is not desirable, so take into account the experience level of your testers and how the different pre-planning models fit your overall testing approach and workflow.


Exploratory testing session

Exploratory testing is not an exact testing methodology per se. In reality there are many testing methods, such as session-based testing or pair testing, which are exploratory in nature. As Testlab is methodology agnostic and can be used with various testing methods, in this article we cover all of them by simply establishing that the testing must be done in a testing session. The testing method itself can be any method you wish to use, but the actual test execution must be done in a session which optionally specifies

  • the system or target under test (such as a software version),
  • the environment in which the testing is executed (such as a production or staging environment) and
  • a timeframe in which the testing must be executed (a starting date and/or a deadline).

To execute test cases in a testing session add a new test run to your project in Testlab. Go to the Test runs view and click the Add test run… button to add a new blank testing session.

New testing session as a test run

When the needed fields are set you have the option to just Save the test run for later execution, or to Save and Start… the added test run immediately. The test run is added to the project as blank, meaning it does not have any test cases bound to it yet. We want to start testing right away, so we click the Save and Start… button.


Executing tests

The set of functionality available during execution is a good match for an exploratory testing session. Test execution in Testlab enables you to

  • pick a test case for execution,
  • record testing results for test cases and their steps,
  • add issues such as found defects,
  • add comments to test cases and for executed test case steps,
  • add existing test cases to be executed to the run,
  • remove test cases from the run,
  • re-order test cases in the run to a desired order,
  • add new test cases to the test case hierarchy and pick them for execution,
  • edit existing test cases and their steps and
  • cut and copy test cases around in your test case hierarchy.

The actual user interface while executing looks like this:


The left-hand side of the window has the same test case hierarchy tree that is used to manipulate the hierarchy of test cases in the test planning view. It enables you to add new categories and test cases and move them around in your hierarchy. The hierarchy tree may be hidden – you can show (and hide) it by clicking the resizing bar of the tree panel. The top right shows the basic details of the test run you are executing, and the list below it shows the test cases picked for execution in this testing session.

The panel below the list of test cases holds the details of a single test case. When no test cases are available for execution, the panel disables itself (as shown in the shot above) and lists all issues from the project. This is especially handy when testing for regression or re-testing – it makes it easy to reopen closed findings and retest resolved issues.

The bottom toolbar of buttons enables you to edit the current test case, add an issue and record results for test cases. The “Finish”, “Abort” and “Stop” buttons should be used to end the current testing session. Keep in mind that finishing, aborting and stopping testing each have their own meaning, which we will come to later in this article.


Adding and editing test cases during a session

When exploring, it is essential to be able to document the steps for later execution if an issue is found. This makes scripted regression testing easier later on. Also, if your testing approach aims to document test cases for later use by exploring the system, you must be able to easily add them during execution.

If you have existing test cases which you would like to pick for execution during a session, you can drag them from the test hierarchy tree onto the list of test cases. Additionally, you can add new test cases by selecting New > Test case on the test category you want to add the new test case to. Picking test cases and adding new ones via inline editing is demonstrated in the following video:




Editing existing test cases is similar to adding them: just press the Edit button in the bottom bar to switch the view to editing mode. The edits are made in the same fashion as when adding.


Ending the session

When you have executed the tests you want, you have three options:

Finish, abort or stop

It is important to understand the difference, which comes from the fact that each executed session is always part of a test run. If you wish to continue executing tests in the same test run at a later time, you must Stop the session. This way the test run can be continued normally later on.

If you conclude that the test run is ready and you do not wish to continue it anymore, you should Finish it. When you do, the test run is marked as finished and no testing sessions can be started on it anymore. Note though that if you later discard a result of a test case from the run, the run is reset back to the Started state and is again executable.

Aborted test runs are considered discarded and cannot be continued later. So, if for some reason you think that the test run is no longer valid and should be discarded, you can press the Abort run button.
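The three ways of ending a session can be summarised as a small state machine. The sketch below is illustrative only – the state and action names mirror this article, not any actual Testlab API:

```python
# Illustrative state machine for the test run lifecycle described above.
# State and action names mirror the article, not Testlab's internals.
class TestRun:
    def __init__(self):
        self.state = "Started"

    def stop(self):      # pause the session; the run can be continued later
        assert self.state == "Started"
        self.state = "Stopped"

    def finish(self):    # conclude the run; no new sessions can be started
        assert self.state in ("Started", "Stopped")
        self.state = "Finished"

    def abort(self):     # discard the run entirely; it cannot be continued
        assert self.state in ("Started", "Stopped")
        self.state = "Aborted"

    def discard_result(self):
        # Discarding a test case result from a finished run resets the run
        # back to the Started state, making it executable again.
        if self.state == "Finished":
            self.state = "Started"

run = TestRun()
run.stop()            # continue later ...
run.finish()          # ... then conclude the run
run.discard_result()
print(run.state)      # back to "Started", executable again
```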


Asset workflows and user roles in exploratory testing

Requirements, test cases and issues have an asset workflow tied to them via the project’s workflow setting. This means that each asset has states it can be in (In design, Ready for review, Ready, …) and actions which can be executed on it (Approve, Reject, …). In exploratory testing, a complex workflow for the project’s test cases is usually not desirable. For example, a workflow which requires review of test cases by another party makes no sense when testers should be able to add, edit and execute test cases inline during testing.

That said, if you are using the default workflows, it is recommended to use the “No review” workflow for your projects.


No review workflow


If you execute test cases which are not yet approved as ready, Testlab tries to automatically approve them on behalf of the user. This means that if the test case’s workflow allows it (and the user has the needed permissions), the test case is automatically marked as approved during the session. This way, using more complex workflows in a project with an exploratory testing approach may work if the transitions between the test case’s states are suitable. Still, as the testers must be able to add and edit test cases during execution, a review-based workflow offers little value here.

The asset workflows’ actions are also tied to user roles. For testers to be able to document test cases during execution, the tester users should also be granted the TESTDESIGNER role. This ensures that the users have the permissions needed to add and edit the test cases they work on.


Using requirements in exploratory testing approach

When test cases are designed in exploratory testing sessions, they are added without any links to requirements. In Testlab, testing coverage is reported against the system’s requirements: in testing parlance, a test case verifies the requirement it is linked to when the test case is executed and passes.

It is often recommended to bind the added test cases to requirements at a later stage. This way you can easily report what has actually been covered by testing. It should be noted that the requirements we talk about here don’t have to be fully documented business requirements for this to work. For example, if you would just like to know which parts of the system have been covered, you might add the system’s parts as the project’s requirements and bind the appropriate test cases to them. This way a glance at Testlab’s coverage view gives you insight into which parts of the system have been tested successfully.

Better yet, if you pre-planned your test cases in some form (see above), you might consider adding a requirement hierarchy too and linking your test case stubs to these requirements. This gives you insight into your testing coverage straight away when the testing starts.
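In code terms, the coverage logic described above boils down to: a requirement counts as verified when at least one test case linked to it has been executed with a passing result. A toy sketch (the data layout is invented for illustration):

```python
# Toy coverage calculation: a requirement is verified when at least one
# linked test case has a passing result. Data layout is hypothetical.
links = {                      # requirement -> linked test cases
    "Login":   ["TC-1", "TC-2"],
    "Search":  ["TC-3"],
    "Reports": [],             # no test cases bound yet
}
results = {"TC-1": "PASS", "TC-2": "FAIL", "TC-3": "NOT RUN"}

verified = [req for req, tcs in links.items()
            if any(results.get(tc) == "PASS" for tc in tcs)]
coverage = len(verified) / len(links)
print(verified, f"{coverage:.0%}")   # ['Login'] 33%
```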



In this article we talked about the new test execution features of Testlab that enable you to execute your tests using exploratory approaches. We went through the advantages of pre-planning some of your testing and of using requirements in your exploratory testing, talked about testing sessions and noted how Testlab’s workflows and user roles should be used in exploratory testing.


Tags for this post: example exploratory features testing usage 


New features: Enhanced reporting, exploratory testing and easy screenshots

A new quarterly feature release of Testlab (codenamed Last Resort) is out! This update gives you enhanced reporting, better support for exploratory testing, easier screen and video capturing and a bunch of other enhancements. Read on.



Enhanced reporting

Reporting in Testlab is now much better than before. You can now preconfigure report parameters and publish your reports so that they can be launched with a single click. Reports can be saved to three different scopes: to a project, to all projects or to personal use only.

  • New report view added for adding and launching reports
  • Preconfigure report parameters and publish your reports with a descriptive name (for example “Critical open issues of BETA” or “Last month testing activity”)
  • Save reports to your project, to all your projects or as a private report just for you.
  • Launch your reports with a single click
  • Time-bound reports such as progress reports can be configured with relative dates. For example, you can configure a testing progress report to always cover the period from the start of the current month to the current day. Generating a monthly progress report this way is just a click away.
  • New report type added: Testing activity. This reports a summary of all testing activities performed during a time frame or for a set of versions including executed tests and updated assets such as issues, requirements or test cases.
  • Listing reports now include all configured custom fields and comments.
  • All reports now take into account the project’s time zone and user’s locale for date and time formatting.
  • Testlab’s search function also looks up published reports.
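As an aside, a relative period like “start of the current month to the current day” simply resolves against the date the report is generated. A minimal sketch of the idea (an illustration, not Testlab’s implementation):

```python
# How a relative report period like "start of the current month to today"
# resolves at generation time. Illustrative only.
from datetime import date

def current_month_to_date(today: date) -> tuple[date, date]:
    """Return (first day of today's month, today)."""
    return today.replace(day=1), today

start, end = current_month_to_date(date(2014, 3, 21))
print(start.isoformat(), "->", end.isoformat())  # 2014-03-01 -> 2014-03-21
```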



Exploratory testing

Pre-planning your testing and test cases is not always possible or desirable. Exploratory testing is an approach to testing where the tester or team of testers ‘explores’ the system under test and during the testing generates and documents good test cases to be run.

This type of testing is now well supported in Testlab. It is possible to add, remove and edit test cases during execution. It is also possible to add notes or comments to test cases during execution. This can enhance communication between testers and even between different testing sessions. 

  • Add test cases to the test run during a testing session.
  • Remove test cases during a testing session.
  • Edit existing test cases and their steps during execution.
  • Comment test cases during execution and have access to comments entered for test cases’ steps on previous runs.
  • Execute test cases in any order by jumping between sessions’ test cases.

All new functions are implemented in a way that makes them available to all testing approaches.



Capturing screenshots and video

Testlab integrates with Monosnap, a powerful screenshot tool, allowing you to easily capture and annotate screenshots. You can also record a screencast or video. Testlab offers an interface to Monosnap so that you can add the content to Testlab with a click of the Upload button. Alternatively, you can just drag and drop captured content from Monosnap to easily attach it to a Testlab asset.

Captured content is uploaded securely, directly to Testlab, and stored as a file attachment on your project asset.

  • Capture screenshots and annotate them – a great way to highlight any issues.
  • Easy and fast.
  • Record video and upload MP4 encoded screencast to Testlab.
  • Integration supports the desktop client of Monosnap for Windows and Mac OS X.


Other enhancements

In addition to the major features described above, the version includes enhancements such as:

  • Test cases can be easily executed directly from the test hierarchy tree context menu. In addition, the issue view enables you to easily run the test case the issue is related to – handy when you just want to re-test and verify whether the issue is really fixed.
  • Attached images are shown directly in the UI by hovering over them with your mouse cursor.
  • When adding a new test run, the dialog now features a “Save and start…” button to start executing the new test run immediately.
  • Major performance enhancements in situations where the project has a huge number of executed tests or test runs.


Sincerely yours,

Meliora team

“On every British nuclear submarine, there is a safe. Inside that safe is another safe. And inside that safe is a handwritten letter from the British Prime Minister, to be opened only if the country has been decimated by nuclear war.” (This American Life, episode 399).

The United Kingdom’s nuclear doctrine is unique in that each Prime Minister decides the orders given to the nuclear submarines in the event that the British government has been incapacitated by a nuclear strike. These orders are handwritten by the Prime Minister, placed in a safe on each submarine and destroyed unopened when the Prime Minister leaves office. What action each Prime Minister has ordered is only ever known to the outgoing Prime Minister. If the orders were ever carried out, the action taken would be the last official act of Her Majesty’s Government.



Tags for this post: announce exploratory features release reporting screenshots 


Heartbleed: Testlab not affected


There has been a lot of discussion lately about the serious security vulnerability CVE-2014-0160, commonly called Heartbleed.

The issue resides in OpenSSL, a commonly used cryptographic software library, and could potentially compromise sensitive data such as user credentials and cryptographic keys on affected servers.

We would like to inform that Meliora Testlab service is not affected.

Transport confidentiality is and has been guaranteed and the communicated data successfully encrypted, so users of Testlab are not required to change their passwords because of the Heartbleed vulnerability. We would like to remind you, though, that it is good practice to change your password regularly and to make sure your password is strong enough.



Tags for this post: product security 


CeBIT 2014 was great

Meliora Ltd participated in CeBIT 2014 – Europe’s premier tech trade show.
We had a great opportunity to present Meliora Testlab to people and made a great number of interesting contacts.

Today, well-designed and engineered, business-driven IT solutions are becoming a key distinguishing and competitive factor for a company’s business. This message was generally well received by the people we talked with.


Meliora team

Register today for Meliora Testlab – if you haven’t already. Then check our demo project to get an overview of Testlab usage!
Testlab helps you to get a simple but efficient test process in place.




Choosing the test management tool

Choosing the best option for yourself is rarely easy. The problem is similar whether you are buying jeans or choosing software for your company – the salesman tells you the brand they represent is the best option and probably recommends the more expensive choices. The price often reflects the quality but does not guarantee it. If you want the best bang for your buck, you need to know about the product you are purchasing. In the case of a test management tool, the choice can be made easier with some preparation. This article is about what you should keep in mind in order to choose the best tool for your purposes.

Different approaches for choosing the tool

One option you have is to outsource the problem. This can be a good approach, especially if QA tools are far from your organization’s field of expertise. You will easily find QA companies, or companies specialized in selling software, that will do the research for you. If you use this approach, remember to be a little skeptical about the advice you get: not all research conducted this way is without ulterior motives. Companies recommending solutions might have a monetary interest in recommending a specific solution, or they might simply have more experience with one tool than with the others and thus recommend it even if it is not the best one for you.

Another thing you should think about early on is the approach to choosing the tool and taking it into use. Usually you’ll get the most benefit from your tool by using a single tool with long-term benefits in mind, but consider piloting the test management tool in a real project before making the final decision. This way you will get real feedback from your own professionals and have the possibility to withdraw from using the tool if it doesn’t suit your needs.

Another article, about deploying a test tool using a pilot project, is in the works – stay tuned.


Tool as a service or installed on your own server?

Do you already know whether you want the tool installed on your own premises, or do you know you want it as a care-free service? Usually the reason for installing the tool on your own premises is a concern about security. Another is that not all SaaS solutions offer the possibility to integrate with your other systems. As a rule of thumb, if you do not have a reason to host the solution on your own premises, you will have it easier as a service. The most important benefits of SaaS installations are:

  • No need to install or maintain the software. You can use that time for productive work.
  • You always have the latest version.
  • No need for long commitments. Buy what you need.

When you choose to sign up for a service, make sure you have a way to export the data relevant to you in a format you can use later on. Optimally this means some easily usable text-based format such as CSV or XML. To be on the safe side, contact your vendor and ask whether you can get all your data out if you decide to stop using the service. Even after the project is over, you might need the data: for example, when developing a new version of the software you might want to revise previous requirements or user stories rather than start from scratch, and perhaps review the found defects so as not to make the same mistakes again.
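A quick sanity check for an export is simply whether the data can be read back programmatically for import elsewhere. A small roundtrip with Python’s csv module (the column names are invented for the example):

```python
# Sanity-check an exported data set: CSV written by one tool should be
# machine-readable for import into another. Column names are made up.
import csv
import io

export = io.StringIO()
writer = csv.writer(export)
writer.writerow(["id", "title", "status"])
writer.writerow(["REQ-1", "User can log in", "Approved"])
writer.writerow(["DEF-7", "Crash on empty search", "Closed"])

# Reading it back row by row proves the format is usable downstream.
rows = list(csv.DictReader(io.StringIO(export.getvalue())))
print(rows[0]["title"])   # User can log in
```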


Features to consider when choosing a test management tool


What features do you value?

Test management tool comparison sheet

When purchasing just about anything – including those jeans or a test management tool – the main point is that it should suit your needs. When choosing a tool, a commonly used practice is to first list your requirements for the tool and then see how the tools fare against them; the idea is that the tool should support your way of working.

A good practice is to look at the tool’s features with open eyes, meaning that you should be open to features you originally weren’t looking for. Sometimes you may find features that really help you even though you didn’t list them beforehand, or the tool may lack a certain feature but offer another good way to accomplish what you wanted the feature for in the first place. Making a really strict rating system for the evaluation will probably guide you towards features you are already familiar with, without taking into account that tools develop new features over time. Then again, you should value the features you really benefit from and give less attention to features that seem nice but that you are not sure you really need.

That said, I have included an evaluation sheet here to help you in your evaluation efforts. Evaluating many tools usually takes time, and this sheet also helps you compare the different features of the tools. When giving points to different features, you can give higher points to solutions that are more usable. For example, if your organization needs to import data often, you will value the ability to do it as easily as possible without having to install additional plugins or software. In the end, you don’t need to choose the one that got the most points if your gut feeling says otherwise.
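The scoring in such an evaluation sheet is essentially a weighted sum per tool. A minimal sketch of the arithmetic – the features, weights and points below are all invented for the example:

```python
# Weighted scoring as typically used in an evaluation sheet.
# Features, weights and scores are invented placeholders.
weights = {"requirement mgmt": 3, "integrations": 2,
           "reporting": 3, "import/export": 1}
scores = {
    "Tool A": {"requirement mgmt": 4, "integrations": 2,
               "reporting": 5, "import/export": 3},
    "Tool B": {"requirement mgmt": 3, "integrations": 5,
               "reporting": 3, "import/export": 4},
}

def total(tool: str) -> int:
    """Weighted total: sum of weight * score over all features."""
    return sum(weights[f] * scores[tool][f] for f in weights)

ranking = sorted(scores, key=total, reverse=True)
for tool in ranking:
    print(tool, total(tool))
```

As the article notes, the top score is guidance, not the decision – weight the features you actually benefit from.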

You should also prepare for future needs. For example, if you have not used formal requirements in conjunction with testing, you might want to ensure that your tool supports requirement management. It is also best to take into account the general use and maintenance of the tool. If the tool requires the installation of client software or plugins, do those support your possibly heterogeneous workstation environment? If installations are needed, how do you deliver them to users’ workstations?

One possibly important feature group, closely tied to your way of working, is the capability for integrations. Which other tools do you use in your development, and which of those should be tied to testing? Generally you benefit most from the integrations that support your daily work. For example, if your developers use continuous integration, you might want an integration between the tools so that your testers can see which unit tests there are and their run status at any given moment. This helps people work together and thus saves precious time.

The architecture of the tool does not affect you directly, but you should look at how often the tool is updated. Tools with a steady update history are more likely to have updates in the future as well.


What do the tools cost in the end?


Cumulative yearly costs for different pricing models

One prime factor in our buying decision is of course the overall cost of the service. When we buy jeans, we see pretty accurately the price we are going to pay. With development-oriented software, the final price is a bit more complicated. In addition to the actual cost of the tool, we have to take into account the hosting costs and how much time people need to spend on maintenance.

Different tools have different pricing structures. If we pay for named users, we need considerably more of them than floating licenses, depending on the way of working in your organization (typically, one floating license per three named users is usually enough). If you buy software as a service, the yearly costs are simply twelve times the monthly fee. For on-premise installations, the best thing to do is to approximate the yearly costs including hosting, maintenance and updates. Approximate how much time the tool’s maintenance requires and take that into account too. If you buy the software outright, also take the maintenance fees into account.

Think about what the value of a minimum up-front investment is for you. With monthly-fee pricing you can stop using the tool or downgrade the licenses when you need to. Then again, by committing to a longer subscription you might get a discount.
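To make the comparison concrete, the sketch below computes rough cumulative yearly costs under the three models discussed. All prices, the one-in-three floating ratio and the maintenance rate are invented placeholders, and on-premise hosting and admin time are deliberately left out:

```python
# Rough yearly cost comparison for the pricing models discussed above.
# All figures are invented placeholders; hosting and admin time for
# on-premise installations are not included here.
team_size = 30

# SaaS: twelve times the monthly per-user fee.
saas_monthly_per_user = 20
saas_yearly = 12 * saas_monthly_per_user * team_size

# Named licenses: one license per user, plus a yearly maintenance fee.
named_license = 300
maintenance_rate = 0.20
named_yearly = team_size * named_license * (1 + maintenance_rate)

# Floating licenses: roughly one license per three named users.
floating_license = 750
floating_count = team_size // 3
floating_yearly = floating_count * floating_license * (1 + maintenance_rate)

print(saas_yearly, named_yearly, floating_yearly)
```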


What goes in comes out – and the format matters

All testing tools offer some kind of reporting system, but they are not equal. Make sure you get the information you need reported. A good reporting system is easy to use and gives you the information you need in a format that is easy to understand. Some tools have a reporting system that is comprehensive but requires a lot of work to set up, and the results may easily be misinterpreted. Then again, some tools let you see the basic information but do not really tie it together into meaningful reports that help you make decisions. OK, you might know which cases failed and passed on the last run, but what is the big picture – for example, how thoroughly have you tested a certain area of your software project? Or you might know the number of issues, but not the rate at which defects are closed.


Avoid shotgun marriages with the tool

You should also think of some sort of exit strategy. What happens when you want to change the tool or pause its usage? Will you get your data out in a format that can be used later? If not, it will be harder for you to change the tool should you grow unsatisfied with the service you get. The problem is almost the same for SaaS and installed services: if you don’t get the data in a usable format – which today means a .csv file – you cannot import it straight into another tool.


What do you need from your vendor, and can the vendor deliver it?

A working tool by itself is never enough – the support and service level your vendor can deliver is just as important. If you need support, will you get help from the vendor? And if you do, what is the procedure? With some vendors, getting real help with technical problems is notoriously hard, as you need to fight through different tiers of customer support before getting to someone who can really help. And that takes time. Then again, if you go for a hosted service, you will need technical support less as the tool is maintained by the vendor. Some companies also offer a more consultative touch to their support: how should you use your requirements effectively to support testing? What sort of workflow should you use in a certain project? This can be a lifesaver for companies where not all the people involved are experts, as you get real-world expert answers when you need them at a reasonable cost.

If you need training, consultation and support, you should value what the vendor is capable of offering. Can the vendor, or someone else you trust, provide you with that? Do you have needs to customize the tool now or possibly later? For example, if you would like to integrate the tool with your own third-party product – can the vendor deliver that, and if so, at what kind of price? The best vendors can support you in all your needs and still don’t cost you an arm and a leg.


Try before you buy

In evaluation, it is essential that you have the possibility to try out the tool in a real-world scenario before making the decision. That is the only way to be sure the tool works as you expect. This can sometimes be a bit tricky, as some features reveal how they really work only after the tool has been used in a real project. Many tools offer a trial period, which is good, but testing the software in a real-world scenario often requires more time. If you can, consider piloting the chosen tool with a project or two before committing yourself to big investments.




Migrating from using Excel

This article gives you some pointers on how to migrate your current Excel-based QA and testing process to Meliora Testlab.

If you’ve been working in the IT development field, you’ve probably been in a project which is managed by hand with spreadsheets, most likely Excel. We’ve all been there. In this article we go through some challenges in migrating your current process to a modern test management tool and give concrete advice on importing your data.


Typical spreadsheet based testing scenario

To describe a typical manual test management scenario with spreadsheets let’s start off with the artifacts involved.


Artifacts in typical spreadsheet based testing scenario


To design an application or a service, some kind of (requirement) specification must be made. It is quite common for a specification to be written as a separate document or, better yet, composed of requirements such as use cases, user stories or business requirements in a separate requirement management system.

When the application is implemented, there must be a way to verify that the implementation works in the way it was designed to. The specification is used to design a test plan, which typically consists of test cases and steps to execute during testing. These steps are often described in a spreadsheet, which makes up the test plan for the testing.

In addition, the results of the testing must be tracked and reported somehow. Typically the spreadsheet based test plan is written in a way that it can be copied and filled out to form a test report of the testing.


Simple diagram on using Excel


The above diagram describes a common process for working in the spreadsheet based environment. Analysts designing the application are responsible for writing down the specification document. This document is then stored somewhere and studied by the persons responsible for testing.

A person responsible for managing the testing, the Test Manager in the diagram above, then familiarizes him/herself with the specification and writes down how the application should be tested to verify that it works in the way the specification describes. A spreadsheet-based test plan is written, stored somewhere and delivered to the testers doing the actual testing.

When a tester goes through the steps and tests the application, he/she copies and fills out the test report as a spreadsheet document. The results of the tests are filled in and comments about the defects encountered are written down.

During a testing round it is quite common to encounter issues, such as defects. These should be fixed, so the tester sends the test report to the developers responsible for fixing them. As the issues are resolved, the developer informs the manager. Fixed issues should be re-tested, so the process is cyclic: the test plan (and possibly the specification) is updated and a new re-testing round is done.

There are variations to this, of course. But typically, some kind of test plan is written as a spreadsheet, stored somewhere and used as a reporting template for interested parties.



The scenario above poses some challenges which are often accentuated with the size of the project.

  • The management requires a lot of dialog and direct communication between the parties. For example, if a tester should only be testing some part of the test plan, communicating what should be tested to the tester is laborious.
  • Keeping the different assets and documents in sync is tedious. If and when the specification changes, it is often hard to know which test cases should be updated.
  • Linking the different assets together is often hard or impossible. For example, it is often hard to know what should actually be re-tested for a defect found during testing.
  • Document management and versioning take a lot of work. How can you be sure that all the parties involved are reading the same versions of the documents? In addition, it is quite common for documents to go stale over time, so trusting that your test plan is up to date is difficult.
  • Getting to know the latest status of the testing is tough with a large collection of testing reports and other documents.
  • Controlling access to the documents and assets is usually hard. When documents are sent around and delivered to third parties over time, it is practically impossible to control who has access to what.
  • Handling the documents takes a lot of work. Typically, the more people you have working, the more time you waste passing documents around and keeping them up to date.


Doing things better

To ease testing-related work and address the challenges mentioned above, migrating your process to a centralized test management tool is preferred. Because all your testing-related assets are stored centrally, less effort goes into passing documents around and keeping them up to date. It drives collaboration, as these tools usually offer features for assignments, commenting and sharing content. With a centralized data store, reporting is a breeze, with up-to-date views on your current testing and issue status.


Simple diagram on using Testlab


The above diagram shows a typical scenario where a similar testing process is handled with a centralized test management tool, Meliora Testlab. A project in Testlab includes all quality-related assets of a single entity under testing, such as a software project, a system or an application. It provides a clear separation between different projects in your company.

Analysts are responsible for compiling the features, needs and requirements of the application and creating user stories, use cases, business requirements or whatever other types of assets fit your software process. These requirements are verified by test cases, which are added with execution steps. The test cases are then linked to the requirements they verify – achieving full traceability from application features through the test cases and the test runs in which they are executed, all the way to the issues found during testing.


Migrating spreadsheet based data to Testlab

Migrating existing quality-related assets from spreadsheets to your Testlab project is easy: you can import requirements, test cases and issues.

To make the import as easy as possible, please download the simplified template files below.

Requirements (.xls, Excel)
Requirements (.csv)
Test cases (.xls, Excel)
Test cases (.csv)
Issues (.xls, Excel)
Issues (.csv)

The CSV files are exported as UTF-8 with ; as the field separator and " as the quote character. The files are importable into Testlab as they are, without any further configuration, and work with format auto-detection.

Keep in mind that the files are simplified to contain only the most common fields of the assets. A detailed description of the supported format and how to use the import features in Testlab can be found here.


How do I import the data

To import data, download the template files above for the assets you want to import. Familiarize yourself with the format from the links above and fill out the files with the data you want to import.

Alternatively, if you already have your data in spreadsheet format, adjust it so that it conforms to the format Testlab supports.

When you are happy with the data, export it in CSV (Comma-Separated Values) format. Some guidance can be found in the following links for Excel and LibreOffice/OpenOffice.

Make sure you export the file in a format Testlab supports: a safe choice is to export the file in the UTF-8 or ISO-8859-1 character set, use a semicolon (;) as the field separator and a double quote (") as the text delimiter character.
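If you are preparing the import file programmatically, Python’s standard csv module can produce a file in exactly this format. This is a minimal sketch – the column names and values are illustrative only, so match them against the template files above:

```python
import csv

# Hypothetical test case rows -- check the template files for the exact
# column names Testlab expects.
test_cases = [
    {"Category": "Login", "Title": "Valid login", "Priority": "High"},
    {"Category": "Login", "Title": "Invalid password", "Priority": "Normal"},
]

# UTF-8, semicolon as the field separator, double quote as the text delimiter.
with open("testcases.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["Category", "Title", "Priority"],
        delimiter=";",
        quotechar='"',
        quoting=csv.QUOTE_ALL,
    )
    writer.writeheader()
    writer.writerows(test_cases)
```

A file produced this way should then import with Testlab’s format auto-detection without further configuration.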

Log into the project you would like to import the assets into. From the Testlab / Import menu, select the type of asset you would like to import and run the import tool for your CSV-formatted file. You have the option to run the import in “Dry run” mode first, which just validates your import file for errors or warnings. The process output shows you how the import will be done and ends with a summary of any warnings encountered. Once you are happy with the result, run the actual import by unchecking the “Dry run” option.

Keep in mind that for the assets to show up in your view, you might have to refresh the hierarchy tree you’ve imported the assets into.


I only have test cases

You only have test cases described in a spreadsheet but no requirements defined? This poses no problem, as a good strategy is to first import your test cases to Testlab and later create the needed requirements in your project. You can then bind the imported test cases to these requirements easily by dragging and dropping.

What is gained with this strategy is good visibility into your testing status later on. For example, let’s assume you have a set of user interface related acceptance tests in your project but no related requirements described. You would just like visibility into how the different views of your application’s user interface are tested. You might then create a matching requirement in your Testlab project for each application view and bind the appropriate test cases to these requirements (views). After this, the Coverage view of your project (and other requirement-related reports) in Testlab always shows the testing status and issues matched to the views of your application. Voila!


Automated testing

If you’ve automated a part of your testing, you might have quite a few unit (or other automated) tests whose status you would like to follow. When pushing automated test results to your Testlab project, you should have a matching test case in Testlab for each of your tests. Using the import is a handy way of creating these Testlab-side test cases: if you just export a list of your unit tests to a text file, it is quite trivial to form a suitable CSV file with the template described above, which can be used to import the test case stubs to Testlab.
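As a sketch of this, the snippet below turns a plain list of unit test names into an importable CSV of test case stubs. The test identifiers and column names are hypothetical – check the template files for the exact fields:

```python
import csv

# Hypothetical unit test identifiers, as they might appear in a text file
# dumped from your test framework (one identifier per line).
test_names = [
    "com.example.tests.LoginTest.testValidLogin",
    "com.example.tests.LoginTest.testInvalidPassword",
    "com.example.tests.CartTest.testAddItem",
]

with open("testcase_stubs.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter=";", quotechar='"', quoting=csv.QUOTE_ALL)
    writer.writerow(["Category", "Title"])  # illustrative column names
    for name in test_names:
        # Use the package/class part as the category, the method as the title.
        category, _, title = name.rpartition(".")
        writer.writerow([category, title])
```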


Advantages gained

There you have it. By moving your QA process from a highly labour-intensive, spreadsheet-based process to a modern quality management solution, you drive collaboration, save time and gain better insight into the quality, testing and issue status of your projects. For any questions or advice, please contact Meliora – our team of experts is more than happy to help you.




New feature release with importing and exporting

We are proud to announce a new major release of Testlab (Henrietta Lacks). This update brings you major new features such as importing and exporting data from spreadsheets, quick searching of assets in your project, enhancements to our plugins and much more!



Importing and exporting data

For easy adoption, Testlab now offers an import feature for your existing test cases, requirements or issues in CSV format, using office tools such as Microsoft Excel. Data from your projects can also be exported in the same format.

The feature supports importing requirements and their hierarchy, test categories, test cases including steps and issues. We also support importing values for custom fields, file attachments and comments.


Searching assets

Enjoy a fast and thorough search function to find the assets you are interested in within your project. Type in a keyword and you’ll get an organized result list of matching assets for easy access.

The search is implemented as an always-accessible search field in your Testlab toolbar. The use of this field should be familiar to anyone who has used an embedded search, such as the Mac’s Spotlight.


Coverage enhancements

The coverage view is essential for knowing what has been tested in your application – how your design has been verified. We added some features to coverage reporting, such as:

  • Filtering to features which have testing results in your project. This helps filter out all the features you never intended to be tested in the version or run you’re reporting against.
  • Content popups for related assets: by hovering your mouse over coverage-related test cases and issues, you’ll get an instant popup describing the hovered asset.
  • An added “Not in test run” status as a reported state in the results bar. This tells you if there are tests still to be scheduled for the feature.



Drag and drop files

Attaching files to your assets in Testlab could not be easier. You can now drag and drop files onto your browser to automatically upload and attach them. You can even drop multiple files to upload them all at once (the feature requires drop zone support and currently works in Chrome, Firefox, Safari 6+ and IE10+).



JIRA plugin enhancements

Our two-way JIRA plugin has been enhanced with the following features:

  • The issue types which are synchronized to Testlab can now be chosen. This way you can, for example, synchronize only defects and leave new features unsynchronized.
  • Fields which are not automatically mapped to Testlab’s fields can now be configured to map to Testlab’s custom fields. This way you can synchronize custom field content or synchronize a missing JIRA field (such as an estimate) to a custom field of your choice in Testlab.


Session management UI

Testlab now offers a user interface in which administrators can see all active Testlab sessions. This way they can transparently see how many licenses are in use and optionally terminate sessions from their Testlab.


Other enhancements

In addition to the above, the version features numerous other enhancements. Some of the most notable include:

  • Jenkins plugin class-level mapping: test cases in Testlab can now be mapped to Jenkins’ run tests at parent level. For example, a test case mapped with “com.example.tests.ui” in Testlab will be set as failed if any test in Jenkins with an identifier starting with “com.example.tests.ui” fails.
  • Reordering test case steps: the steps of a test case can now be reordered by dragging and dropping.
  • Adding issues for a test run’s test cases: a new icon button has been added to the test runs panel which allows you to easily add a new issue for a test case in an existing test run. For example, this way you can add issues to the test run later if some were missed during execution.
  • More quick links: the user interface now contains more quick links between assets. For example, the test run panel’s “Added issues” counts can be clicked to open up the related assets in a popup window.
  • Requirement class icons: different requirement classes (user stories, use cases etc.) now feature different icons. This way the different types of requirements are easily distinguishable in the user interface.
  • Rich text editor tab / de-tab: pressing tab or shift+tab in the rich text editor now indents and outdents text for easier editing.
  • Better keyboard navigation: keyboard navigation in the user interface has been improved, especially in the hierarchy trees.
  • Opening attachments inline: attached files with browser-supported content types are now opened inline directly in the browser window instead of forcing the document to be saved to disk.
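The class-level mapping mentioned in the first bullet can be sketched as a simple prefix match. This is a hypothetical illustration of the rule, not the plugin’s actual code:

```python
# Map a Testlab test case to Jenkins results at parent level: the mapped
# test case fails if any Jenkins test under the mapped prefix fails.
def mapped_result(mapping: str, jenkins_results: dict) -> str:
    matched = {
        test_id: result
        for test_id, result in jenkins_results.items()
        if test_id == mapping or test_id.startswith(mapping + ".")
    }
    return "FAILED" if "FAILED" in matched.values() else "PASSED"

# Illustrative Jenkins results keyed by test identifier.
results = {
    "com.example.tests.ui.LoginTest": "PASSED",
    "com.example.tests.ui.MenuTest": "FAILED",
    "com.example.tests.api.AuthTest": "PASSED",
}
```

Here a Testlab test case mapped to “com.example.tests.ui” would be set as failed, while one mapped to “com.example.tests.api” would pass.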


Sincerely yours,

Meliora team

Henrietta Lacks (1920 – 1951) was an African-American woman who died of cervical cancer in 1951. Little did she know that her fatal tumors would hold a key to numerous medical advancements (polio vaccines and chemotherapy, for example). Her biopsied cells were found to thrive outside her body and were used to create the first immortal cell line for medical research, so her cells are still alive today – over 60 years after her passing. Her cells were used to create the so-called HeLa cell line for scientific research – currently the oldest and most commonly used human cell line. They were also the first human cells successfully cloned, in 1955.

Henrietta’s cells have been mailed to scientists around the globe for research, and some 20 tons of her cells have been grown. Henrietta is still living today, around the globe and bigger than ever. Find out more by listening to Radiolab’s great short story about the subject!



Tags for this post: announce features jenkins jira release video 


Introduction video

Today, we’re happy to bring you a brief introduction screencast about Testlab. The video will introduce you to the central concepts of Testlab and how they are presented in the user interface. We will take a glance at

  • requirements management,
  • test case design,
  • execution planning and test runs,
  • issue management and
  • test coverage.

Keep in mind that this introduction skips some central features of Testlab, such as reporting, but should give you some insight into the use of Testlab. To view the introduction, please click below.


Introduction to Meliora Testlab


Tags for this post: demo example screencast usage video 


Brand new Demo project available

We are happy to publish a renewed version of the Demo project for you all to familiarize yourselves with Testlab. You can find the project by logging into your Testlab and choosing the project “Demo”.


Boris has a problem

The new Demo project tells the story of Boris Buyer, who has an online book store. He’s selling some books in his web store but has lately noticed declining sales. He decides to develop his web store further with the aid of his brother-in-law, who owns an IT company specializing in web design. They hatch a plan to develop Boris’ online business further over a few agile development milestones.

The “Demo” project has all the requirements, user stories, test cases, test sets, test runs, testing results and issues to make it easier for you to grasp the concepts of Testlab. The project starts on January 1st, 2013, and has one milestone version completed and happily installed in production, a second milestone completed but still in testing, and a third milestone for which the planning has just started.

The timeline diagram above presents a rough sketch of the “Demo” project’s progress, showing the phases of the project at the top and the relevant assets found in Testlab at the bottom. The purple highlight on the timeline marks the moment in time the project is currently in.


Great, where can I read more?

A detailed description, a story and hints on what to look for in Testlab can be found at


All new registrants will get this link in the welcome message they receive when signing up.


Just to note – all existing customers and registrants have had their old Demo project renamed to “Old demo”, so any changes you’ve made to your previous demo project aren’t lost.


Tags for this post: demo example usage 


Quality is not a feature

Quality is a central part of a software development process. It shouldn’t be seen as an expense, because higher quality correlates with lower development costs and faster delivery times. All this requires a vision of quality and good practices – not the all-too-common “we’ll find the issues at the end of the project and fix them” strategy.

Quality adds value, not expenses

Software projects that overrun their budget and miss their deadline are not necessarily in trouble before testing has started. A high number of issues found at this stage usually adds to the problems, and these issues are very expensive to fix.

A study by Capers Jones in the United States found that the average development cost of a function point is $1,000, and maintenance adds another $1,000. For software of higher than average quality, the average development cost per function point is $700 and maintenance $500. This adds up to a 40% saving over the software’s lifetime [Capers Jones: Applied Software Measurement: Global Analysis of Productivity and Quality, McGraw-Hill Professional 2008].

For a software project costing $400,000, this could mean a saving of $160,000. Should you improve your return on investment – by improving your quality?
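A quick back-of-the-envelope check of these figures:

```python
# Average cost per function point (development + maintenance) vs. the
# higher-than-average-quality figures cited from Capers Jones.
avg_cost = 1000 + 1000
hq_cost = 700 + 500
saving = 1 - hq_cost / avg_cost
print(f"Lifetime saving: {saving:.0%}")                    # 40%
print(f"On a $400,000 project: ${400_000 * saving:,.0f}")  # $160,000
```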




Testlab integrates to Confluence and Jenkins, “Toynbee Tile” released

Meliora is pleased to announce Testlab “Toynbee Tile” with integration plugins to Atlassian Confluence and Jenkins CI included. Please read more about the new features below.

E-mail notifications

The version includes fully configurable e-mail notice schemes. A scheme allows you to configure the rules and events for which Testlab will send your users notifications via e-mail.

E-mails are neatly formatted as HTML messages with asset details included. Each e-mail message also includes a handy button for jumping to Testlab and opening the related asset.

In addition to the notification e-mails, all requirement, test case and issue related e-mails are now sent with the new template when the “Send as E-mail” function in Testlab is used.


Notice schemes and e-mailing rules

Testlab allows you to design your own schemes for e-mail notifications. These schemes are shared across all projects in your company and can be re-used where applicable.

A scheme is a collection of asset-related rules which define the events that trigger the sending of messages and the receivers of these messages.

We’ve included a set of default schemes in Testlab to make the use of e-mail schemes as easy as possible for you.



Atlassian Confluence integration 

Toynbee Tile brings with it support for an Atlassian Confluence plugin. The plugin includes macros for your Confluence site which allow you to include

  • lists of requirements,
  • lists of test cases and/or
  • lists of issues

to your Confluence pages.

The macros are fully configurable, with options to pick the relevant assets from your Testlab projects, and also render an easy-to-use “Open in Testlab” action for each asset for an easy transition from Confluence to Testlab.


Jenkins CI integration 

Jenkins CI is a popular open-source continuous integration server. A typical Jenkins build often includes automated tests, which might be, for example, unit tests or integration tests.

The Meliora Testlab Jenkins plugin allows you to publish the results of automated tests to your Testlab project. The plugin pushes the results to Testlab by creating a test run with the automated tests mapped to Testlab’s test cases.

The plugin allows you to automatically open issues in Testlab if tests in your Jenkins build fail. Issues can also be automatically assigned to a user in your Testlab project.

If you are using Jenkins for your development, this makes Testlab a great choice for your test management efforts.


In addition to the major changes described above, some notable changes include:

Project’s time zone

Each project now has a default time zone which is used to format, for example, the dates and times in e-mail messages sent from Testlab. This information will also be used in reporting in the future.

Snappier user interface

Toynbee Tile includes an improved drawing strategy for the browser client. This should make the user interface snappier than before in everyday use.

Improved printing

When printing out individual assets (requirements, test cases or issues) from Testlab, we now use the same layout as is used in e-mails.

That’s it for now! We hope the integrations to Confluence and Jenkins make the use of Testlab even easier for you in your enterprise. All plugins will be released to their respective repositories (Atlassian Marketplace, Jenkins update center) as soon as possible, but in the meantime all plugins are available to our customers by request.

Sincerely yours,

Meliora team

Toynbee tile
The Toynbee tiles are tiles of unknown origin embedded in the asphalt of streets in numerous major cities in the United States and South America. All tiles contain some variation of the message “TOYNBEE IDEA, IN MOVIE 2001, RESURRECT DEAD, ON PLANET JUPITER”. Since the 1980s, several hundred tiles have been discovered.

What? Yes, someone or some group of people has been embedding this message in the streets of major cities for 30 years now. Why – nobody knows exactly, but some theories exist, and there is a nice documentary about this intriguing subject for all interested parties.


Tags for this post: announce confluence features jenkins plugin product release 


Test requirements to the rescue!

Bridging the gap between customer needs and design is often difficult. This blog post addresses some of the issues involved and introduces the concept of a “test requirement” to help you tackle this problem.

Too often the project’s outcome is not what the customer would have wanted. There are numerous reasons for this, but two major ones can be seen over and over again:

1) Traditional (V-model) projects relying on pre-written business requirements run into misunderstandings when designing test cases from these requirements.

2) Agile projects tend to have issues with the visibility of quality.

Test requirements help the customer and the supplier speak the same language about what is needed of the software, while at the same time gaining a better understanding of the software’s quality.


Common problems with the V-model projects

Before going into what these test requirements actually are, let’s review some commonly encountered issues in testing projects, starting with the V-model. First off, very often the requirements are too vague to specify exactly how the software works and how it can be tested. This is true even when serious effort is put into writing the requirements, and it is understandable: it is not cost-efficient, if even possible, to write requirements so good that they describe unambiguously what the customer needs. On bigger projects these requirements also tend to change, which increases the expenses of the project.

For a test designer, a changed requirement can be a nightmare if the information exists only in somebody’s e-mail or meeting minutes. As the supplier of the software project wants to maximize profits, its interest is usually to fulfill the requirements but not much more. This often leads to a situation where the customer’s testers report bugs because the software does not work as the customer wants, but the supplier does not want to fix them for the fixed price, as the requirement does not directly specify how the feature should work.


Developing and testing using agile methods

If V-model projects aren’t always that rosy, agile testing is not completely without its problems either: the customer is expected to allocate time to the project so that the implementation is always going in the right direction. It is accepted that the project may make some wild implementation guesses or experiments that are later scrapped and rewritten, as the project’s implementation as a whole can still be very effective. This holds true if the customer steers the project in real time, but often the project learns about the customer’s needs later than would be optimal.

If the project is a bigger one and has many testers, having the test team test the right things at the right time is essential, and visibility into what has actually been tested allows the test manager to accomplish that. The lack of this visibility is sometimes a problem in agile projects. The problem is often worsened if and when people on the project change while the documentation of the implementation is kept to a minimum.


The definition of test requirement

Ok, I’ve stated some often-seen problems in testing and development, and supposedly I should have some ideas on how to reduce them with test requirements – but what are these test requirements? In short, they describe what will be tested, but not how it will be tested. What if you are not using requirements as such, but, say, user stories, you might ask? It does not change a thing: if you can use something, a user story for example, as a basis of design, you can use it as a source of a test requirement too. For the sake of simplicity I will call all these sources requirements for the rest of this blog post.

A test requirement is short – it should describe one relatively independent area of the requirement it is based on. It is agile – it can change or be rejected if needed. A test requirement can carry additional information such as a review status or importance. All of this can be utilized in decision making and reporting.


The “how” of test requirements

Now we know a little about what test requirements are, but how do we use them? First off, there are numerous correct ways; you can pick the practices that best fit your development environment. The essential thing to remember when working with test requirements is to start working on them as early as possible – you can start as soon as you have the requirement. Just split the requirement into the pieces you plan to test. You can include assumptions about how it will work; sometimes your assumptions will be wrong, but it doesn’t matter. As the scope of a test requirement is small, they are super fast to review later. More often the test requirements act as a valuable communication tool and increase the understanding between the customer and the supplier.

I mentioned reviewing. Test requirements should be reviewed by both the customer and the supplier. It is an extra step, yes, but a fast and efficient one. It can happen in any phase of the project, but the sooner the better. The point is that after the review, both parties will have a better understanding of the upcoming solution.


Benefits of using test requirements

There are also other ways to work with requirements and cooperate between the involved parties, so what are the benefits of using test requirements?

  • It is simple. Splitting the requirements is an easy task and communication with these even simpler. ”Is this what you want?” Yes/No. ”Can it be done?” Yes/No/Yes, but it will be expensive/Depends (So yes, you will not always get the definitive answer right away about the solution, but having the answer for some helps already a lot).
  • It is effective. For every detail you define before development you have a fair chance to save time from unwanted design, implementation, testing, fixing and building.
  • Splitting the requirements down into small pieces makes talking about them easy, and using a format which defines how they will be tested is intuitive. Having this level of detail in the actual requirements is not good, as writing such requirements would be too time consuming and would probably lead to suboptimal design. It is better to talk about the requirements with the development team. Waiting until the development team has implemented the general requirements is not good either, as the outcome likely isn’t optimal for the customer.
  • Defining test requirements adds a visible and measurable layer that can be used to track the development and quality of the project.
  • Using test requirements is a good way of involving the test team early on and using what they are good at – predicting what might go wrong. An experienced tester recognizes the possible weak points of the design. When you have test requirements ready, designing the actual test cases is easier. This is good, as that phase of the project tends to be busier, and now you have already done part of the work.
  • Another advantage is the added precision when discussing found defects. When a defect is linked to a precise, reviewed test requirement, it is easier for the tester to point out why the defect has been reported.


Working with test requirements using Meliora Testlab

Meliora Testlab has been designed to utilize the concept of test requirements and gives the user a powerful toolset to design and report on test requirements and plan test cases from them. Testlab also contains strong collaboration features which enable a project to define either stricter or looser workflows – whatever suits the company. Testlab’s automatic notification features make sure people get notified when they should act. This makes the manager’s work easier.




Meliora Testlab “Lonely Cicada” released

We are proud to announce a new version of Meliora Testlab “Lonely Cicada”. This version brings you many usability enhancements and notably, a possibility to integrate your Testlab to your own central authentication user registry.

The version is live immediately and all the features and changes described below are now available! 

Centralized authentication 

Lonely Cicada implements support for an external authentication source with SSO (single sign-on or, more appropriately, enterprise reduced sign-on). This allows your Testlab to be integrated with your organization’s own user registry, for example Active Directory. This way password authentication is performed against your centralized credentials, reducing the time spent at the login prompt.

This feature is available for all Enterprise Suite installations. You can read more about the feature here.


Streamlined requirement management

This version implements a set of changes to requirement management functionality which integrates requirement design and test case design more deeply.

  • Test case hierarchy is integrated into the requirement design view. This allows easier transitions between requirements and test cases.
  • Verifying relations can be drag’n’dropped directly between the trees.
  • Test case edit can be initiated directly from requirement design view and adding a new verifying test case is now easier via the requirement design view.
  • Requirement tree now holds a “Show verifying test cases” checkbox which allows you to show the requirements’ verifying test cases in the requirement tree when preferred.
  • Comments have been moved from a separate tab directly into the details view to make them more accessible.
  • “Search:”-filter in requirement tree now searches the content of custom fields.
  • When adding a new requirement, it can be added directly in the desired workflow status. Previously, the asset was always created in the initial status and a separate edit was required to approve the requirement.

We believe these enhancements make working with requirements easier for all our users.


Quick links

The Testlab user interface now holds a set of so-called quick links which allow you to quickly move to the referenced asset.

  • The coverage view tree is now interactive, so the actual coverage data can be drilled down to individual assets. You can now click the test results and calculated summaries to open up a floating panel allowing access to the included assets.
  • Clicking the quick link icons in
    • the test set editor in execution planning,
    • the test run details in the test runs view, and
    • the issue listing and issue form
    takes you directly to the clicked asset.

Persisted UI settings

Listings and grids in Testlab allow you to pick the columns and sort and group the content as you wish. This is now enhanced by saving your chosen settings to your browser, so the views open up as you left them in your last session without the need to set them up again.



Issue management enhancements

For issue management, we’ve:

  • Added a button to the issue listing which allows you to add the test case related to the issue to your work set. This makes it easier for you to pick the test cases to your next test run via the issues view.
  • Added a “severity” field for issues. If your project’s issue management process requires a separate severity for issues in addition to the priority field, just enable this field in your project’s settings and you are good to go.
  • Added “Trivial” level for priority.
  • The title bar of the issue listing now shows the number of issues matching the current filter.
  • Added quick links to the related assets such as issue’s test case and test run (see above).


Switching projects

When working with Testlab you are always connected to a specific project of yours. Previously, switching between your projects required you to log in to Testlab again. The new version allows you to quickly switch between the projects you have access to by selecting the project from the Testlab > Change project menu.


Other usability enhancements

In addition to the above, some notable enhancements include:

  • Test case design:
    • Comments have been moved directly to the details tab. This makes the comments more accessible for users.
    • “Search:”-filter in test case tree now searches the content of custom fields.
    • Test cases can now be added directly to the workflow status desired. Previously, a separate edit was required for example to approve the test case.
    • Test case related reports can now be generated via context menu of the test case tree.
  • Execution planning:
    • The listing of test sets is no longer grouped by default, to make the view more accessible.
    • Test set editor now shows the priority of added test cases.
    • The test set tree now has a hover tooltip showing the description of the hovered test set.
  • Test runs:
    • Test runs can now be permanently deleted. A “testrun.delete” permission is required, which is by default granted to all TESTMANAGERS (and of course, administrator users).
    • Test run listing is now grouped by version by default.
  • File attachments:
    • Streamlined attachment controls: Attachments are now listed inline and removed by clicking the cross-symbol after the attachment.
    • The attachment history now shows the file size of each added attachment.
  • Permissions:
    • The “user.add” permission now also grants the permission to edit users.
    • All users in the PROJECTMANAGER role now have the permission to add users, but can only grant access to the project they are currently logged in to.
  • Custom fields:
    • Added a new custom field type which allows you to select a user.

We would like to thank you for all your feedback and we hope this version makes the use of Testlab even more productive than before.

Sincerely yours,

Meliora team

Lonely Cicada? There is a genus of cicada in North America (Magicicada) which spend most of their 13- or 17-year lives underground. After 13 or 17 years, mature cicada nymphs emerge at any given locality, synchronously and in tremendous numbers. After such a prolonged developmental phase, the adults are active for only about 4 to 6 weeks. Within two months of the original emergence, the life cycle is complete: the eggs have been laid and the adult cicadas are gone for another 13 or 17 years.

Time after time something disturbs this cycle for individual cicadas, and a single cicada can emerge from its underground safe haven out of this 13- or 17-year cycle and enter the brave new world without the company of any of its kind. If something should define lonely, we think this is it.

Happen to be on the east coast of North America? There is an emergence of cicadas happening right now, and you can actually build a sensor to track cicadas at your location. Or just track them via an online map.


Tags for this post: announce features release 


Testauspäivät 2013

A testing conference for testers, other developers and test/project managers. Two tracks of talks and one for exercises.

We are there today – if you see us, let’s have a talk :)



Easier authentication for Meliora Testlab

Are your users affected by password fatigue from entering different username and password combinations? Do you want to reduce IT costs through a lower number of IT help desk calls?

Meliora Testlab has support for an external authentication source with SSO (single sign-on or, more appropriately, enterprise reduced sign-on). This means that your users log in to Meliora Testlab with their most used username and password – and at the same time reduce the time spent at the login prompt.



The most valuable features of Testlab’s new authentication provider are:

  • Integration with your existing Active Directory
  • Support for various other user databases and authentication methods such as LDAP, DBMS, RADIUS, or X.509 certificates
  • Improved security, as user passwords are never exposed outside your company network
  • The ability to enforce existing uniform enterprise authentication policies


Example case

Your organization uses Microsoft Active Directory as its authentication source and Meliora Testlab as a SaaS service. Meliora provides you with a simple Testlab Authenticator module (the green module in the picture below) that is responsible for authenticating your users against Active Directory and providing identification data securely to Testlab.

A technical overview of the components and interactions between them is shown in the following picture.


When an unauthenticated user accesses Testlab, the user’s browser gets automatically redirected to your Testlab Authenticator. If needed, the Authenticator asks for user credentials, validates them against Active Directory and creates a service ticket which is passed to Testlab via redirection. Testlab then validates the ticket with your Authenticator and, if it is valid, grants access to Testlab.

The lifetime of the Authenticator ticket is configurable and is 10 hours by default. This means that if the same user logs in to Testlab again during this time, access is granted automatically without the need to enter the credentials again.


Supported technologies

For SSO, we support using the standard SAML 2.0 WebSSO profile or alternatively, CAS.
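To make the ticket validation step concrete, here is a minimal sketch of how a service could parse a CAS 2.0 serviceValidate response. This is only an illustration of the public CAS protocol, not Testlab’s actual implementation; the sample XML and the namespace follow the CAS 2.0 specification.

```python
import xml.etree.ElementTree as ET

# Namespace used by the CAS 2.0 protocol responses.
CAS_NS = "{http://www.yale.edu/tp/cas}"

def parse_cas_validation_response(xml_text):
    """Parse a CAS /serviceValidate XML response and return the
    authenticated username, or None if the ticket was rejected.

    After the service receives the ticket via redirect, it calls the
    Authenticator's /serviceValidate endpoint and parses the body
    with a function like this one.
    """
    root = ET.fromstring(xml_text)
    success = root.find(f"{CAS_NS}authenticationSuccess")
    if success is None:
        # Invalid or expired ticket: the response carries an
        # authenticationFailure element instead.
        return None
    user = success.find(f"{CAS_NS}user")
    return user.text if user is not None else None

# Example responses in the shape defined by the CAS 2.0 protocol:
OK = """<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">
  <cas:authenticationSuccess><cas:user>jdoe</cas:user></cas:authenticationSuccess>
</cas:serviceResponse>"""

FAIL = """<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">
  <cas:authenticationFailure code="INVALID_TICKET">Ticket not recognized</cas:authenticationFailure>
</cas:serviceResponse>"""
```

Note that validation happens server-to-server: the user’s browser only carries the opaque ticket, never the credentials.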


Security considerations

There are many security issues to consider when implementing an authentication solution. Meliora has made every effort to create an excellent solution for secure authentication.

Here are some highlights:

  • User credentials never leave the company’s intranet
  • For CAS, a single simple and standard web application component is installed in the company’s network (the Testlab Authenticator, TA)
  • The solution is based on tried and well-tested technologies (the SAML 2.0 WebSSO standard or CAS)
  • No direct calls are made from the Internet to the company’s directory server (Active Directory or another authentication source)

Please, contact us for more information. We are happy to tell you more details about the solution.

Make your life easier and get super-easy authentication to your Testlab!

Meliora team


Tags for this post: announce features plugin product security 


New version of Testlab released [Pitch drop]

Meliora is pleased to announce an update for Meliora Testlab codenamed “Pitch drop”. This version brings you a number of enhancements and optimizations and is available immediately for our hosted version customers. The version includes many features requested by our clients and we would like to thank you for all the feedback!

Jira integration

The version offers features for integrating your Testlab with Atlassian Jira issue tracker. Implementation includes a true two-way synchronization module which enables you to synchronize the issues in real time between your Testlab and Jira deployments. This way you can continue to use your established method of working with issues with Jira and be more productive with requirement and test case planning and test execution with Testlab.

This feature is available for all Enterprise Suite installations. You can read more about the feature from here.


Real time project events

To keep you up to date with events in the project you are currently logged in to, Testlab delivers real-time events. The events are presented as floating notifications in the top right corner of the user interface. Clicking a notification opens up the associated asset in your Testlab.

In addition to regular project events, such as changes in the project assets, all assignments are delivered as sticky notifications. This means that when an asset is assigned to you, the notification sticks to your user interface until you click it to open up the asset or dismiss it.


Notice board

A new dashboard widget is offered to easily communicate with the users in your testing project.

A project notice board offers a place to store your shared project assets and publish announcements. Announcements can be targeted at specific user roles and optionally tagged as important, which delivers the announcement to all users in a pop-up window on their next login. By default, test managers and project managers can manage the content of a project’s notice board, but new permission keys are included to control who can publish content. In addition, all administrator users of your company can publish company-wide announcements which are shown to users in all projects in your company.


Easier re-testing

Testlab’s test case hierarchy tree is enhanced with a new set of filters which make it easier to highlight the test cases for re-testing.

The filters make it easy to pick all non-passed test cases from the current test case hierarchy as a whole, for a specific version or for a single test run. When re-testing previously tested features, use these filters in the execution planning view to easily add test runs for test cases that previously failed, were blocked or were not tested.

Combine the above with the use of your personal work set in execution planning, and adding test runs for re-testing couldn’t be easier!


Streamlined issue editing

The editing window for issues has a new streamlined layout which makes commenting on issues easier.


Commenting enhancements

When comments are published, their content is now automatically linkified, so all external links are rendered as clickable links. In addition to this, comments can now be deleted.
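Linkifying comment text like this is typically done with a regular-expression substitution. The following is only a rough sketch of the idea, assuming plain http(s) URLs, and is not Testlab’s actual code:

```python
import re

# Match an http(s) URL up to the next whitespace or tag-opening character.
URL_RE = re.compile(r"(https?://[^\s<]+)")

def linkify(text):
    """Wrap plain http(s) URLs in the comment text with anchor tags
    so they render as clickable links."""
    return URL_RE.sub(r'<a href="\1">\1</a>', text)
```

A real implementation would also need to HTML-escape the surrounding text first, so that user input cannot inject markup.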


Performance improvements

In addition to the features described above, Pitch drop includes some magic under the hood to make the use of Testlab even snappier than before, with network transport enhancements and more intelligent on-demand refresh logic for asset hierarchies.

Sincerely yours,

Meliora team

Pitch drop? In 1927 Professor Parnell heated a sample of pitch and poured it into a glass funnel with a sealed stem. Three years were allowed for the pitch to settle, and in 1930 the sealed stem was cut. From that date on, the pitch has slowly dripped out of the funnel – so slowly that now, over 80 years later, the ninth drop is only just forming. In all the years that the pitch has been dripping, no-one has ever seen a drop fall. You can try your luck here.


Tags for this post: features jira product release 


Hello World [@Testlab]


3 minute guide to running your first test case with Testlab

So, you just registered and you want to run tests? This intro guides you to create and run your first test case.


You will need:

  • your own Testlab site: for example https://yourcompany.melioratestlab.com, and
  • a user account for Testlab (with company administrator privileges)

Register if needed here: https://www.melioratestlab.com/sign-up


Step 0: Create a project

Log in to your Testlab and choose Testlab –> Manage Projects…

Click the green plus sign in the lower left-hand corner to create a new project.

Fill in the details: for now, choose

  • name: Hello world,
  • project key: HLO and
  • Workflow: No review

See the online manual if you want to check the details of the different workflows and/or fields.

Click Save to create the project, then log out and log in to your new project.


Step 1: Create a requirement

All test cases should be based on requirements, so we’ll define a requirement for the domain under test.

Choose the Requirements view from the left panel and, from the context menu, New –> Requirement…

You can use keyboard shortcut Ctrl+N to do the same.

Enter requirement data:

  • requirement name: World should be greeted  and
  • description: As a world, I should be greeted.
  • (optionally other attributes as you see fit)

In this case, the requirement summarizes in business terms what should be achieved.

Save the requirement by pressing the shortcut key Ctrl+S or by clicking the Save button.

Reviewing a defined requirement is an important step and is normally done as a separate phase by the client or product owner. For now, review your requirement and, when satisfied, mark it ready by choosing Edit (shortcut Ctrl+E) and Mark as ready.



Step 2: Create a test case

You want to plan *how* to test your requirement, so we’ll create a test case to describe exactly how we are going to test that the requirement is fulfilled.

Choose the Test case design view from the left panel and use either the context menu or the keyboard shortcut Ctrl+Shift+N to create a new test category. Enter the category name:

  • category name: human interface

Then you are ready to create a new test case. Press Ctrl+N (or use the context menu again) to add a test case with the following data:

  • name: Listen up!
  • steps tab: add a step by choosing add step…. Enter Listen carefully and You hear hello. You can easily create more test case steps by using the tab key to move to the next field in the step editor.

Save the test case by clicking Save (Ctrl+S). Next, review your test case and, when you are satisfied, mark it ready by choosing Edit (Ctrl+E) and Mark as ready.

Choose the Test coverage view to link your newly created test case to your requirement. You can do this by dragging the test case (lower left corner) to the requirement (upper left corner). Now the test case is linked to the requirement. Basically this link means that requirement A should be tested with test case B. Normally every requirement should have one or more test cases that *verify* that the requirement is fulfilled correctly.

You should immediately see the effect of linking the test case to the requirement on the right side of the screen. Requirement REQ1 now has one test case (but no results yet, though – we’ll get there in 30 seconds).



Step 3: Make a test run

Choose the Execution planning view from the left panel and, from the top left, choose your personal working set Work set.

Now you can add the test case you want to execute to the lower right-hand corner (labeled as test cases for test set Work set). You can do this by expanding the lower left-side tree with the test cases and dragging the test cases to the test case list at the lower right side.

You could also drag whole folders to the test case list: this would add all test cases in the folder to the test case list.

When you have dragged the Listen up! test case to the list, click the button Add test run…

This will open a pop-up window and create a test run with all the checked test cases so that they can be executed. Enter a test run name:

  • Title: Sprint 1 run

and click Save. There is now a new test run in the upper right side with the status Not started.

You could also have entered some other relevant data, like the expected completion date for this test run, if you were planning the testing rounds well in advance.




Step 4: Run test(s)

We are ready to go: choose the Test runs view and click the just-created test run named Sprint 1 run on the upper right side. If there were dozens of test cases, you could select a subset of test cases to run now, but we are going to select them all! Just select all test cases from the lower right and click Run selected… to run the selected tests.

This will open a test execution window. Each test case’s data will be presented to you: after clicking through the test case steps you have to choose either

  • Pass (test case executed successfully) or
  • Fail (test case did not run successfully)

 for the test case result. Other (normally rarer) options would be

  • Blocked (test case cannot be run at the moment) or
  • Skip (skip running the test case: no result is recorded for this test case)

If there was a problem with the test case execution and the expected result was not achieved, it would be good practice to *always* add an issue by clicking the Add issue… button.

For now, just press Pass to record a successful result to your test case.

Voilà, you have completed your first test run and recorded the results to Testlab!



Bonus step (4+1)

To see where your project stands in regards to testing progress, choose the Test coverage view again. The right-hand side will show you the project’s testing status in relation to the project requirements. You should see that your single business requirement is well covered and has passed all tests (=Green!).







Happy testing :)


Tags for this post: usage 


Testlab integrates with Atlassian Jira

We are happy to announce an integration module for integrating your Testlab with Atlassian Jira issue tracker. We’ve been working hard to bring you a true two-way synchronization module which enables you to synchronize the issues in real time between your Testlab and Jira deployments.


Simple: this module enables you to seamlessly integrate Jira issues with Testlab’s test cases and other test management data. If your organization already has an established method for working with issues, with this plugin you can continue to handle issues like you are used to *and* be more productive with test case planning and test execution with Testlab!

How it works

The module is implemented as a plugin at Jira side and as an integration interface for the plugin at Testlab side. The following diagram describes the synchronization principle at high level.

Both peers have an event listener which publishes changes in issues in real time. This means that when an issue is updated or commented on, or a file is attached to it in Jira, the change is immediately pushed to Testlab, and vice versa. As said, we currently sync issue details, comments and attached files.

In addition to the real-time push, a scheduled synchronization job is triggered every 15 minutes. This means that Jira pushes all issue updates since the last synchronization point to Testlab. Testlab processes the data and replies with a push of its issue changes in a similar fashion. This allows us to synchronize the issues between the systems even if the real-time push fails for some reason, for example due to network problems. We allow you to choose a conflict resolution strategy for when overlapping updates occur: always honor the latest data, the Jira data or Testlab’s data.
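The conflict resolution strategies could be sketched roughly as follows. This is a hypothetical illustration only – the `updated` field name and the `strategy` values are assumptions made for the sketch, not the synchronization module’s real code:

```python
from datetime import datetime

def resolve_conflict(jira_issue, testlab_issue, strategy="latest"):
    """Pick which copy of an issue wins when both sides changed it
    between synchronization points.

    `jira_issue` and `testlab_issue` are dicts carrying a hypothetical
    `updated` datetime field for this sketch.
    """
    if strategy == "jira":       # always honor the Jira data
        return jira_issue
    if strategy == "testlab":    # always honor Testlab's data
        return testlab_issue
    # "latest": honor whichever side was modified most recently
    return max(jira_issue, testlab_issue, key=lambda issue: issue["updated"])

a = {"source": "jira", "updated": datetime(2013, 5, 1, 12, 0)}
b = {"source": "testlab", "updated": datetime(2013, 5, 1, 12, 30)}
```

Whatever strategy is chosen, it only matters for the rare window where both systems edited the same issue between two synchronization points; non-overlapping updates are simply merged.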

The actual data between Jira and Testlab is pushed through a secured, key-authenticated network channel. Just make sure your Jira allows secured HTTPS to make the push from Testlab to Jira secure.

Great! How do I get it?

The integration module is available to all our enterprise suite customers.


Tags for this post: announce features jira plugin product 


Testlab Yamaguchi is *ready* – in case you missed it

We are pleased to announce the official launch of Meliora Testlab, codenamed Yamaguchi, for the following browsers:

  • Internet Explorer (version 8 or later)
  • Mozilla Firefox
  • Apple Safari
  • Google Chrome

Testlab makes it easier than ever to test your software – and enables you to improve your customer satisfaction.

For any organization it is essential that their quality management process is consistent, repeatable and effective. This leads to improved application quality and more effective and cost efficient software project implementation.

Meliora Testlab gives the test team a centralized tool for managing requirements, test cases, defects and testing project tasks. As a modern browser based test management solution, Meliora Testlab provides better visibility to your test management process and assets ensuring the delivery of high quality software.

So… sign up and try it now

And note – if you have a hosted version of Meliora Testlab – congratulations, you have already been upgraded!

As a launch celebration we are currently offering all new customers two user licences for the price of one for three months, as long as you sign up before the end of April. Benefit from this offer now and enjoy four very affordable months of Testlab when combined with the 14-day trial period for all sign-ups!


Look forward

We as a team have worked quite hard for a few months to bring you this release – thank you for your feedback – and there are some very interesting features coming out in the future. A notable feature for enterprise users is integration with Atlassian’s Jira. In addition to this, we are looking into a Jenkins plugin which would automatically feed the results of automated tests from your CI environment to Testlab.

We will use this blog as a communication medium and we will keep you updated.


Sincerely yours,

Meliora team

Yamaguchi, you ask? Well, we like to pick codenames for our releases from something interesting, and in this case it comes from the unimaginable story of Tsutomu Yamaguchi. As the guys from Radiolab put it, “On the morning of August 6th, 1945, Tsutomu Yamaguchi was in Hiroshima on a work trip. He was walking to the office when the first atomic bomb was dropped about a mile away. He survived, and eventually managed to get himself onto a train back to his hometown … Nagasaki.” Have a blast and listen to Radiolab’s great podcast of his story after you sign up and familiarize yourself with Testlab.


Tags for this post: product release 


Press release: Meliora Testlab

Meliora launches Testlab – a hosted enterprise-grade quality and test management tool to improve your quality.

For an organization it is essential that its quality management process is consistent, repeatable and effective. This leads to improved application quality and more effective and cost-efficient software project implementation. Meliora Testlab gives the test team a centralized tool for managing requirements, test cases, defects and testing project tasks. As a modern browser-based test management solution, Meliora Testlab provides better visibility into your test management process and assets, ensuring the delivery of high-quality software.

Meliora Ltd, a company founded to provide commercial services in the quality management field, today announced the launch of www.melioratestlab.com, a hosted, browser-based test management tool for managing your application lifecycle, including requirement, test case and issue management features. With its concurrent licensing model, Testlab is well suited for a team of any size: a small team working on a single product or a large enterprise.

By combining requirement, test case and issue management into a single product, the quality process is more transparent and easier to manage. It drives collaboration among development teams and their members, quality functions, business analysts and project overseers. Operating from one test management platform standardizes requirement definition and management, release and build management, test planning and scheduling, issue tracking and reporting – all with complete traceability.

As a hosted solution Meliora Testlab is easy to access and effortless to set up. You’ll get your Testlab running in minutes with a 14-day free no-obligations trial by visiting www.melioratestlab.com. For enterprise customers with specific needs, an on-site deployment is offered. Additional quality- and Testlab-centric services are available, including Testlab customer support, training and fast-track start packages to get you on your way in the best way possible.

With Meliora’s customer-service-oriented team, continuous agile development practices and competitive pricing, Meliora Testlab is a low-risk and affordable investment to increase your product quality and revenue.

For more information on Meliora Testlab, please visit https://www.melioratestlab.com

About Meliora

Meliora’s mission is to provide best-of-class quality management tools to help you improve your quality. Meliora Ltd was founded in 2012 and is located in Jyväskylä, Finland.



Meliora Testlab is open for everyone

Our new web-based and hosted service, Meliora Testlab, is now open for everyone.

Efficiency in the testing process and customer satisfaction are essential to any modern enterprise. Requirement management, test management or quality improvement – Meliora Testlab is the tool for you.

You can register in two minutes at www.melioratestlab.com. By filling in a simple form you will have your own server (for example https://mycompany.melioratestlab.com) ready immediately. The service will be absolutely free and fully functional for 30 days.

We are extremely interested in your feedback. The upper right corner has an envelope widget that enables you to send your feedback to us right away.

Meliora team



Tags for this post: product 
