Posts tagged with: features

“features” tagged posts relate to the features of Meliora Testlab.


5.10.2019

Testlab – Automaton release

A new Testlab version long in the making has been released, codenamed Automaton. This major new release includes industry-leading features for managing and analyzing your test automation results, integration to MantisBT and various smaller enhancements and new features.

With the upgrade to Testlab – Automaton, you have an option to migrate your existing (automated) test results to the new way of working. You should read more about this option in the post published earlier.

 

New test automation view for your automation results

Business challenges related to interpreting the results of automated tests are obvious. More often than not, tests are created by developers with varying motives, the number of tests is large, and traceability to the specification is often lacking. Testlab – Automaton includes features to make sense of these results and to gain business benefits beyond regression or smoke testing – which, in practice, is often all that automated tests are used for.

In Automaton, a new main view called “Test automation” is introduced. This view holds the automated test runs and details of the results in them. The view also holds a workbench for mapping the results to tests via automation rules.

Scoping your automation results for more insight

The new automation features include a workbench for which you can create rules on how your test automation should be mapped to your project. This makes it possible to get insight with the correct level of detail on your mass of automation results.

The rules in your automation rule set can be

  • “Add and map” rules, which automatically create the needed test cases in your project and map the incoming results to them,
  • “Map” rules, which map the incoming results to existing test cases,
  • “Ignore” rules, which exclude a set of tests from the results, and
  • “Map to test case” rules, which target a specific test case (or cases) directly with your results.

Using the rules, you can define which parts of the incoming automation result tree target which tests in your project. Having a correct level of detail to your tests is crucial for understanding the situation of your testing efforts and for easy management of your testing.
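To make the idea concrete, here is a minimal sketch of how such a rule set could behave. It assumes prefix matching on test identifiers and first-matching-rule-wins semantics; the function and rule names are illustrative only, not Testlab’s actual implementation.

```python
# Hypothetical sketch of applying an automation rule set to incoming results.
# Rule names mirror the list above; the prefix-matching and first-match-wins
# logic are assumptions for illustration, not Testlab internals.

def apply_rules(rules, results):
    """Map incoming results {identifier: status} according to a rule list.

    Each rule is (rule_type, identifier_prefix, target). Returns the mapped
    results and a list of identifiers that need new test cases created.
    """
    mapped, needed = {}, []
    for identifier, status in results.items():
        for rule_type, prefix, target in rules:
            if not identifier.startswith(prefix):
                continue
            if rule_type == "ignore":
                break                      # drop the result entirely
            if rule_type == "add and map":
                needed.append(identifier)  # create a test case stub, then map
                mapped[identifier] = status
            elif rule_type == "map":
                mapped[identifier] = status   # map to an existing test case
            elif rule_type == "map to test case":
                mapped[target] = status       # direct target test case
            break                          # first matching rule wins
    return mapped, needed

results = {
    "suite/login/test_ok": "PASSED",
    "suite/login/test_fail": "FAILED",
    "suite/internal/test_build": "PASSED",
}
rules = [("ignore", "suite/internal", None),
         ("add and map", "suite/", None)]
mapped, needed = apply_rules(rules, results)
```

In this sketch the build-internal test is dropped by the “ignore” rule, while the two login tests are both mapped and flagged for test case creation.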

 

Jenkins plugin updates

The plugin used to publish test automation results from your Jenkins jobs has been updated. The update makes it possible to use the rule sets and sources provided by the new automation features from the plugin. When using Jenkins with Testlab – Automaton, updating the plugin is required.

Please familiarize yourself with the documentation of the plugin. Keep in mind that if you are still using the previous Lost Cosmonaut version of Testlab, you should continue using the previous version of the plugin as long as your Testlab installation is not updated.

 

 

 

MantisBT integration

Testlab – Automaton can be integrated with MantisBT for issue management. This makes it possible to manage your testing in Testlab and report the issues found in your MantisBT installation.

You can read more about MantisBT integration at the documentation page provided.

 

Thanking you for all your feedback,

Meliora team


Automaton

The ancient history of self-operating machines, automatons, is fascinating. In the modern computer age, having machines do things for us based on our instructions is a given. In earlier times, many automatons were designed to give the illusion that they operate under their own power: the word “automaton” is the Latinization of a Greek word meaning “acting of one’s own will”.

In the 13th century, Al-Jazari was among the first inventors to create human-like machines to serve their masters. He even designed an automatic hand-washing device where a human-like automaton would hand you soap and a towel as you wash your hands. Much earlier, the Greeks had designed various complex mechanical tools and machines which could be described as automatons. One of the more famous inventors is Heron of Alexandria (c. 10 AD – c. 70 AD), who devised various machines ranging from the first vending machine to steam-powered devices.

(Source: Wikipedia, picture of the book “About automata” by Hero of Alexandria, public domain)



Tags for this post: announce features product release screenshots 


26.9.2019

Getting started : Bringing automation test results to Testlab

This post introduces how to bring your automation results into Testlab. We also cover options and questions related to migrating from earlier Testlab versions to the new version, Testlab – Automaton. You can read more about your migration options below.

 

Why bother – what is the motivation behind bringing the results into Testlab?

Before bringing test results into Testlab, it is common to ask why to go through the trouble. The results most probably already exist somewhere – possibly in Jenkins, possibly in an HTML report on disk – in a readable format anyway. Why would you set up a system to bring them into a test management / ALM tool such as Testlab? The most common reasons are better findability, better usability, and the ability to connect test results to requirements and use cases. When testing information is visible to everyone, the project can work more efficiently and make better decisions. Combining the testing situation of manual and automated testing into one view also helps everyone understand the overall quality situation better. If your test results live only in a CI tool, you usually see the latest testing situation well, but you cannot see which requirements or user stories the tests cover, which they do not, or how the situation develops over time.

 

Screenshot of viewing test results in Testlab’s Test automation view

 

Bringing the test results to Testlab – what does it do?

If you are familiar with manual testing in Testlab, it helps to know that you will see similar test runs and similar test cases for automation as you do for manual testing, and you can use the same familiar reports to report automation results. This basic case is only the beginning, though: Testlab has a rule engine that lets you define how you wish to see and use the test results. Defining rules makes it easier to handle the results in the management tool – for example, to communicate the project situation.

 

Central concepts of test automation

Before we hop into the details of how importing test automation results works, let’s go through the central concepts and terms used in our tool and in this article.

Result file: The original result file that your test automation tool produced and from which you want to import results to Testlab.
Ruleset: The set of rules you use to process the result file so it can be utilized in Testlab. If you have multiple different kinds of automated test results, you can have multiple rulesets to handle them.
Test run: A container in Testlab where test results are saved – very much like for manual tests.
Imported results: Test results in your test run that have been processed by a ruleset from the result file produced by your test automation tool.
Source: A concept that groups together test results you wish to compare. A Jenkins job is one example of a useful source.
Rules: Individual rules in a ruleset that dictate how test results are imported from your result files.
Identifier: A unique identifier, distinct for each test case, consisting of the folder structure and name where the test resides in the result tree.
Required test cases: In Testlab, test results are always saved for a test case. When importing results, tests that do not yet have a test case in Testlab to hold their results are listed as required test cases. You can then create these test cases automatically.
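As an illustration of the identifier concept, an identifier can be thought of as the suite/folder path joined with the test name. The separator and exact format below are assumptions for this sketch, not Testlab’s exact syntax:

```python
# Illustrative only: an identifier is the path of suite folders plus the
# test name in the result tree. The "/" separator is an assumption made
# for this sketch, not necessarily Testlab's exact format.

def make_identifier(folders, test_name, sep="/"):
    return sep.join(list(folders) + [test_name])

ident = make_identifier(["Acceptance", "Login"], "Valid credentials")
# "Acceptance/Login/Valid credentials" – unique as long as no two tests
# share both the same folder path and the same name
```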

 

Importing my first test results

You probably have your automation tool in place already. Quite likely it is able to save test results in the xUnit XML format, which is a de facto standard for test automation tools. Testlab reads this format, as well as Robot Framework's output format and TAP result files. If your tool does not produce any of these formats, let us know and we’ll see if we can add support for it. As far as Testlab is concerned, the story starts from the point where you have run the tests with your automation tool and have the results in a file.
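If you are unsure what such a result file looks like, the xUnit/JUnit-style XML convention boils down to something like the hand-written sample below, parsed here with Python's standard library. The attribute names follow common JUnit XML convention; check your own tool's output for its exact attributes.

```python
import xml.etree.ElementTree as ET

# A minimal xUnit/JUnit-style result file: one suite, one passing and one
# failing test case. Attribute names follow the common convention only.
xml_text = """\
<testsuite name="LoginTests" tests="2" failures="1">
  <testcase classname="LoginTests" name="test_valid_login"/>
  <testcase classname="LoginTests" name="test_invalid_login">
    <failure message="expected error banner not shown"/>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(xml_text)
# A test case counts as failed if it contains a <failure> element.
results = {
    case.get("name"): ("FAILED" if case.find("failure") is not None else "PASSED")
    for case in suite.iter("testcase")
}
```

Any tool that can emit a file of this shape can feed its results into Testlab.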

When automating testing, the tests are often launched from a CI tool like Jenkins or using some other automated method. Jenkins can be configured to send the test results to Testlab automatically after the tests have been run, or the results can be sent using an API. You can also drag and drop result files into Testlab manually from your computer, so you can easily try out how it works.

 

Do it yourself!

We assume you have your result file at hand. If not, and you want to try how it works, grab one from the demo project that came with your Testlab (instructions a bit later). In Testlab, head to the “Test automation” view and select the “Automation rules” tab. This is where you define what Testlab does with your automation test results. Let’s start by creating a new ruleset for reading your automation test results: click the + in the “Rulesets” pane.
Add a ruleset

You can use this ruleset for reading this file and similar ones you might import to Testlab later, so name it accordingly. For now you can call it “My ruleset”, or “Mithrandir” if you wish. Another required field is the title of the test run that will be created by default with this ruleset. It can be overridden later, but if you do not give the run a name, this one is used. Give it a name now – for example, “My results”. You can also set defaults for milestone, version, and environment if you wish, and mark tags to be added to the test run. In addition, you can define whether issues should be created automatically when test runs contain failing tests, and how those issues are created. Once you save your changes, the ruleset is ready and you can add your rules to it.

The second step is to import your result file into Testlab. Find it on your computer and drag & drop it into the Testlab window. You should now see your test results in raw format, and you will probably recognize the test names and results. The test cases are not yet permanently saved to Testlab – you still have to decide how you wish to do that. If you don’t have your own result files but want to try things out, select a preloaded result file from the demo project. The results in the demo project are already loaded into Testlab, but you can still follow the same process. See the picture below.

 

Preloaded result file

The third phase is decision-making. Basically, you have two things to decide: which test cases you want to bring in, and how you want to see them in Testlab. Let’s start with the simplest case – you wish to bring in all test results and have one test case in Testlab per original test result. We’ll get to the other options, and the reasons you might want to use them, a bit later in this article. To add a rule that brings in these results, right-click over your test results (in the middle of the screen), keep the rule type as “Add and map” and click “Add a new rule”. This rule brings your results in and also creates the test cases automatically. It matches test cases whose identifier starts with the text in the “Value” box; changing that value dictates which test cases the rule picks.

Next, click “Apply rules”, because Testlab does not do this for you automatically. You’ll see the “Preview test run” on the right side of the screen. It shows the situation from Testlab’s point of view – what the names of the created test cases will be, the test results, and so on. Since you created one test case per result, the values on the right should look very familiar.

The fourth and final phase is confirming what you see and saving the results. Click “Save results…” in the bottom right corner. A window opens where you can enter details for the test run to be created. The values are prefilled from the ruleset you used, so you can just click “Save results” and have the results saved right away, or you can modify them first. The title of the test run is used to distinguish different test runs, so give it a recognizable name (or go with the one inherited from the ruleset if it fits). The source is prefilled with your name when importing results manually. Let’s leave the other values as they are for now – we’ll cover them later. Just click “Save results”.

 

How you see the results

Now you have the results in – great! The main place to peruse automated test results is the “Automated test runs” tab, which you should be seeing now. The top part lists the imported test runs and their relevant statistics. The result mappings bar shows light green for test cases you have mapped and grey for test cases you have chosen to ignore. This is what you want to see. If the bar shows pink, there are unmapped tests for which you probably want to add mapping rules. This can happen, for example, when you push results from an outside source with tight rules and somebody creates new tests that do not match those rules. For clarity: you don’t have to update the rules every time new test cases are pushed in – only when the existing rules do not catch them. So if you create new test cases with your automation tool, produce a result file and upload it to Testlab; if the identifiers of the new test cases start with the value you entered, your existing rule will work just fine.

You’ll also see the test cases and their results in other parts of the Testlab user interface. If you go to test case design, you’ll find the newly created automated test cases there. You can add additional information to these test cases, such as a description of the test case’s purpose, or use other fields to classify them. You can also define which requirements/user stories the test cases verify in the “Test coverage” view, which shows you the coverage status of your requirements. To try it out, just drag & drop a test case over a requirement to mark the test case as covering that requirement. All the powerful reports are available for sharing the quality situation of your automated testing as well.

 

Best practices

So now you have seen how to get test results in with a simple case. Sometimes this is exactly what you need and want, but it is also good to take heed of some best practices, especially if you are going to import results over a long period. When relevant test data is organized well, it is easier to see the quality situation.

 

General practices

Classify, classify, classify! What applies to manual testing is also important for automated testing – the more Testlab knows about the target of your testing, the more relevant the information you get out of it. And the more relevant the information, the better the decisions you can make. So if you have some sort of release schedule in use, create the releases as milestones in Testlab and mark those milestones to be used for test automation. This lets you see what, and how much, you tested in a certain milestone. The same goes for versions and environments.

Less is more: at first, it feels intuitive to bring all possible automated test results into Testlab – after all, it has powerful reports and radiators that show your test execution data nicely. But there is a reason not to bring in all test results as they are: we humans are bad at interpreting huge loads of data in one go. If you ran a thousand tests every half an hour, you would accumulate roughly 1.5 million results in a month. The detail of when a certain test failed in the past is rarely relevant in that mass. Testlab offers you a few ways to keep the data relevant:

  1. Ignore irrelevant results: This is the most straightforward way – if your result files contain tests that are irrelevant to you, ignore them completely so that their results never enter Testlab. This might make sense, for example, when your build runs test cases that are not relevant for your project.
  2. Aggregate test results: You might run a huge number of small test cases, each of which checks just a small detail. In the end, you are probably interested in which detail failed only when a test fails; when the tests pass, you are happy just knowing that the whole group ran successfully. For this case, Testlab lets you map many imported test results to one Testlab test case. The advantages are that you get a more approachable number of test cases in Testlab, it is easier to spot the area where something fails, and if you map requirements to the test cases, maintaining the mapping gets a lot easier. As the test case details record which imported test cases were run, you do not lose any information either. Also, if any of the imported test cases fail, the Testlab test case fails, so you will not miss any failures.
  3. Save results at relevant intervals: It is not always wise to keep all the test results in Testlab; it is enough to act when something fails and to know the overall situation. This holds true if you run automated tests, for example, many times a day – it is unlikely you will want to check the exact results of an individual past run anyway. To save results at relevant intervals, you can overwrite the test runs in Testlab: you still import the results every time the tests are run, but keep only the latest. You can control this via naming conventions, for example from the Jenkins plugin.
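The aggregation idea in point 2 can be sketched in a few lines: many imported results roll up into one test case, and the aggregate fails if any single result failed, so no failure is missed. The function below is illustrative only, not Testlab code.

```python
# Sketch of aggregating many imported test results into one test case
# status, mirroring the behaviour described in point 2 above.

def aggregate_status(statuses):
    """Combine individual result statuses into one test case status."""
    if any(s == "FAILED" for s in statuses):
        return "FAILED"
    return "PASSED"

# 40 small checkout tests roll up into a single "Checkout" test case;
# one failure is enough to fail the whole aggregate.
checkout_results = ["PASSED"] * 39 + ["FAILED"]
status = aggregate_status(checkout_results)
```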

 

Importing results from Robot framework

Testlab supports reading Robot Framework's own output format, which provides somewhat richer information than xUnit XML. It is therefore advisable to send the results in Robot Framework's own XML format instead of xUnit XML – this way you will see more about what kind of test was run.

While Jenkins typically runs the Robot tests, remember that you can also send the results directly to Testlab by dragging and dropping. This makes sense especially when you are developing the tests and want to share the progress with your team. Quite often the tests done using Robot Framework are written at a high level (as acceptance tests usually are), which means you probably want to see the results in Testlab as they are in Robot Framework. The suite structure is also usually well suited for Testlab. Bringing the results in is therefore easy – try using an “Add and map” rule for all results.

 

Automating manual system testing

When automating system testing, you might want to consider keeping the existing manual testing tree in place and putting your automated test cases in the same tree. In some cases, this makes reporting and managing requirement links easier. You can always make the distinction between the tests using the “type” field when needed.

 

Migrating from the previous versions of Testlab

Compared to earlier versions of Testlab, Testlab – Automaton brings significant changes to how automated tests are stored in the system and presented in the user interface.

In Testlab – Automaton, test cases and test runs are typed as either manual or automated. Results of manual tests are recorded in manual test runs and results of automated tests are stored in automated test runs. The user interface has separate views to manage and present these assets:

  • “Manual testing”: This view presents all manual test runs and offers an option to execute (manual) test cases. It is essentially the “Test runs” view of previous Testlab versions.
  • “Test automation”: This view shows all automated test runs and offers a mapping workbench for defining rules on how tests should be mapped when results are imported. This view is new in Testlab – Automaton.

You can read more about these features in the Help manual of Testlab.

 

Test case mapping identifiers

Earlier, test cases could be automated by defining a custom field with a value that maps to the identifier of your automated test (or part of it). In Automaton, a custom field is no longer necessary and all this happens under the hood. When upgrading to Automaton, you have the option to migrate all test cases that have a test identifier mapping in a custom field to the Automaton version’s “automated test case” type. The mapping identifier is moved from the custom field to the “Automation rule value” field of the migrated automated test case. For how and when to do this, and the pros and cons, please see the Migration FAQ below.

 

Existing test automation results

If you have pushed test results from outside sources such as Jenkins, you most likely have test runs in your projects holding results for these automated test cases. When upgrading to Automaton, you have the option to migrate all these test runs to the new model. The test runs are then converted to the automated type and their results are shown in the new Test automation view of Testlab. For how and when to do this, and the pros and cons, please see the Migration FAQ below.

 

Migration FAQ

a) I just want to continue using Testlab without doing anything – can I skip all this migration stuff?

Yes, you can.

The only thing to consider is that if you have automated tests in your projects (for example, you are using Jenkins to push results in), there might be some downsides to doing so. See question c.

 

b) I have only manual test cases in my projects. As far as I know, we are not using automation at this time. What should I do?

Great, you probably don’t have to do anything. We only recommend making sure that you are not, in fact, using automation. For example, if Jenkins is integrated with your Testlab for some projects, you most likely are using automation. You could also ask your administrator whether any automation features are in use. If it turns out you are using automation, please see question c for your options to proceed.

 

c) We have automated test cases with test identifiers in our project. What are our options?

Your options are as follows:

  • Continue using Testlab as you are without migrating any data
    If you plan to continue pushing automated test results to your project(s), we don’t recommend this approach. If you still wish to do so, there are some downsides:

    • New results pushed will most likely create new test cases for your automated tests. This is because mapping via custom field value is not supported anymore and the (default) ruleset will create new test case stubs for your tests. Your old results (and test cases linked to them) will exist as expected, but you will end up with duplicate test cases for the results added via the new test automation logic. If this is to be avoided, we recommend you migrate your test cases.
    • If the project has duplicate test cases for automated tests, care must be taken when reporting data to filter in relevant tests (old tests or new tests).
    • The (default) ruleset will create the full-depth test case tree for your automated test hierarchy. This means that if your current automated tests are scoped to the package level, the new test hierarchy will not match the old one, and the newly created tests will need some manual work to be organized into the same hierarchy as the old test cases.
  • Migrate your automated test cases to the new rule-based system
    This means that your current (ready-approved) test cases which have mapping identifiers in a custom field will be converted to automated (typed) test cases, and the mapping identifiers will be removed from the custom field. Keep in mind that if your Jenkins is configured to create new test cases for automated tests, you will need to update your ruleset after the migration to do so as well. By default, after the migration, the default ruleset will not create new test cases – it will only map results to your existing test cases via the “Automation rule value” field. This migration has the following benefits:

    • When pushing new results, the results will be mapped to the previously existing test cases in the project.
    • Reports will most likely work as they are. Only the type of the test case will be changed.
  • Migrate your automated test cases to the new rule-based system and migrate your existing test results
    As explained above, you have the option to migrate your test cases to the new system. Keep in mind, though, that this does not automatically convert the results of these tests to the new system – you also have the option to migrate your results.
    When you do so, all existing test runs containing only automated test cases are converted to automated test runs, while test runs containing manual tests are left unconverted. After the migration, manual test runs are found in the “Manual testing” view and automated test results in the “Test automation” view. This conversion migrates your data to the format of the new system, so the current and all future features tied to automated test runs also work with your old result data.

 

d) We prefer not to migrate anything but continue using Testlab. Is there still something I have to do for automation to work? 

The old way of mapping test identifiers via custom fields is no longer supported, and the API used to push results in has changed. Because of this, at a minimum:

  1. If you are using Jenkins, you must update your Testlab Jenkins plugin and reconfigure it. Various settings that lived on the Jenkins plugin side in earlier versions of Testlab have been moved to the settings of a ruleset. Please refer to Testlab’s documentation on how to manage rulesets and their settings.
  2. If you have some custom code pushing results via Testlab’s REST API, you most likely need to update your integration to work with Testlab’s rulesets.

Please see question c for any downsides you might have by skipping the migration of your existing automated test cases.

 

e) We are using Testlab as a Service and I would like to migrate. What do I do?

You should read these questions thoroughly and plan your migration with the administrative user (and persons responsible for the projects in your Testlab) and decide which projects you want to migrate in your installation. You will also have to check which custom field(s) you are using to map the automated tests.

When you know

  • which projects to migrate and
  • what is the title of the custom field used to map the automated tests in each project,

please submit a ticket to Meliora’s support and ask for the migration of your data. Please note that we will not automatically migrate any of your data and you need to submit a ticket for this to happen.

 

f) We have an on-premise installation of Testlab. How do we migrate the data when we upgrade to Testlab – Automaton?

You should read these questions thoroughly and plan your migration with the administrative user (and persons responsible for the projects in your Testlab) and decide which projects you want to migrate in your installation. You will also have to check which custom field(s) you are using to map the automated tests.

When you know

  • which projects to migrate and
  • what is the title of the custom field used to map the automated tests in each project,

you can use the migration scripts provided in the upgrade package to migrate each project. Information on how to (technically) run these scripts after the upgrade of the Testlab is included in the additional notes inside the upgrade package.

 

g) I have questions and need help deciding, what do I do?

Please contact Meliora’s support and just ask, we are happy to help!

 

Meliora Testlab Team



Tags for this post: automation features integration product release usage 


4.6.2019

Upcoming releases with Automation workbench

In this post, we will discuss our upcoming release schedule and offer a glimpse of the features in upcoming releases.

 

Challenges with test automation

Business challenges related to interpreting the results of automated tests are obvious. More often than not, tests are created by developers with varying motives, the number of tests is large, and traceability to the specification is often lacking. Features are needed to make sense of these results and to gain business benefits beyond regression or smoke testing – which, in practice, is often all that automated tests are used for.

If you are able to map and scope the tests efficiently, you

  1. have a better understanding of what is currently passing or failing in your system under testing,
  2. have your team collaborate better,
  3. ensure that key parties in your project can make educated decisions instead of being reactive and
  4. have a tool to really transition from manual to automated testing as existing test plans can be easily automated by mapping automated tests to existing test cases.

 

Automation in Testlab

In Testlab, results from automated tests can be mapped to test cases in your project. Working with your automated tests this way has benefits, as the results can be tracked and reported in exactly the same way as the manual tests in your current test plan. When results for automated tests are received (for example from a Jenkins job), a test run is added with the mapped test cases holding the appropriate results. The mapping logic is currently fixed and pre-defined: if the ID of an automated test starts with an ID bound to a test case, that test case receives the results of the automated test.

 

Upcoming Automation workbench

Future versions of Testlab will include a new Test automation view with a workbench aimed at managing the import and mapping of your automated tests. A screenshot of the current prototype can be seen below (click for a peek):


 

The workbench

  • allows you to define “sources” for your tests: a source is a new concept which lets you configure how the incoming results from that source are handled. You might want to think of a source as your Jenkins job, or your Jenkins installation as a whole if your jobs follow the same semantics and practices. A source is defined with “rules”, which define how the automated tests are mapped,
  • features a rule engine, which allows comprehensive mapping rules to be defined with
    • ignores (which tests are ignored),
    • creation rules (how test cases for tests are automatically created) and
    • mapping rules (such as “starts-with”, “ends-with”, and regular expressions, defining how the results for the tests are mapped to your test plan), and
  • allows you to
    • easily upload new results by dragging and dropping,
    • execute any rules as a dry run to see how the results would be mapped before the actual import,
    • create any test cases by hand – if preferred over the creation rules – and
    • use pre-built wizards which help you easily create the needed rules by suggesting appropriate rules for your source.
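To illustrate the three mapping-rule match modes named above, here is a small sketch of what “starts-with”, “ends-with” and regular-expression matching mean in practice. How the workbench evaluates rules internally is not public; the function below is an assumption made purely for illustration.

```python
import re

# Illustrative sketch of the three mapping-rule match modes against a
# test identifier. Not Testlab code - mode names only mirror the list above.

def matches(identifier, mode, value):
    if mode == "starts-with":
        return identifier.startswith(value)
    if mode == "ends-with":
        return identifier.endswith(value)
    if mode == "regex":
        return re.search(value, identifier) is not None
    raise ValueError(f"unknown mode: {mode}")

ident = "Acceptance/Login/test_valid_login"
matches(ident, "starts-with", "Acceptance/")   # matches the suite prefix
matches(ident, "ends-with", "_login")          # matches the name suffix
matches(ident, "regex", r"Login/test_\w+")     # matches a pattern anywhere
```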

We feel this is a unique concept in the industry to help you get better visibility into your automation efforts.

The prototype shown is work-in-progress and is due to be changed before release.

 

Upcoming releases

As our roadmap implies, the new major automation-related features are planned to be released in Q3 of our 2019 release cycle. The team will be hard at work on these new features, which means that the Q2 release will be a bug-fix release only. That said, we will be releasing maintenance releases in between, but the next feature release will be the version with the automation features in Q3/2019.

Meliora Testlab Team



Tags for this post: automation features integration product release usage 


11.4.2019

Testlab – Lost Cosmonaut release

The Meliora team is proud to announce a new version of Meliora Testlab codenamed Lost Cosmonaut. A major addition in this release is the new radiator for real-time tracking of your issues. In this version, you also might see a speed boost due to a new server protocol between the UI and the server. Read more about the changes below.

 

Issue situation radiator

Radiators are a new reporting-related concept, introduced in the earlier release, for viewing your project situation in an always up-to-date view. Lost Cosmonaut features a new ‘Issue situation’ radiator to give you a quick glance at how the issues in your project are currently being handled.

The radiator provides multiple different statistics to help you improve your issue handling work and to identify possible bottlenecks in your processes. A more detailed description of what statistics the radiator provides and how the radiator should be configured can be found in the help manual of Testlab.

 

Saveable table views

In earlier versions of Testlab you had the option to save the current filters in your table views. Lost Cosmonaut improves on this by introducing saveable table views.

In addition to filters, the function now saves the whole configuration of your table, including the visible columns, column widths, sorting and grouping settings. You also have an option to reset the view back to the original settings. Keep in mind, though, that when resetting the view you lose all your current settings in the table.

 

“Show non-inactive” assets in trees

A new control “Show non-inactive” has been implemented to trees representing test cases and requirements. By default, this checkbox is unchecked which hides all deprecated assets from the tree. To show all deprecated assets, check the control.

Keep in mind that this option also controls whether deprecated assets are shown in the corresponding table views (test cases or requirements). The control makes it easier to hide and show deprecated assets without using status-related filters to achieve the same.

 

Performance improvements via a new server protocol

A new server protocol, long in the making, is introduced for the UI to transfer data from the server. This should bring performance improvements and make the UI snappier in most places. Keep in mind, though, that the speedups gained depend highly on the amount of data in your project, the browser used and the speed of your network connection.

 

In addition to the above

In addition to numerous smaller changes and the major ones above,

  • the Milestones view has gained a “Hide completed” checkbox to hide already completed milestones,
  • milestones have a new start date field (currently an informative field used only in radiators),
  • asset tagging has been improved by adding a new “tag.manage” permission to control who can add completely new tags and remove existing tags,
  • during testing, when a new issue is added, all steps of the test case, instead of only the failed ones, are included in the description of the issue,
  • by using the search field in the top right toolbar you can now search for test cases verifying specific requirements (“verifying:REQ”) and
  • if the “unique ID” of test cases has been configured to be visible in the project, the ID is automatically shown in the views related to the execution of tests.

 

Thanking you for all your feedback,

Meliora team


Lost Cosmonaut

Yuri Gagarin became the first man in space in 1961. This was irrefutably a major feat for the whole of humankind. But at what price?

During the so-called Space Race, between 1955 and 1972, the Soviet Union and the United States competed over which of them would be the first to conquer space. As said, on April 12, 1961, the first recognized successful space flight with a person onboard was made. But theories exist saying that there were many incidents in the space programs that were kept secret, especially in the Soviet Union’s program. These theories stem from recordings made by amateur radio operators featuring cosmonauts on failed space missions before Gagarin.

(Source: HowStuffWorks, Wikipedia, illustration from Open Clip Art Library / public domain)



Tags for this post: announce features product release screenshots 


1.2.2019

Official support for Jenkins Pipelines

A continuous delivery pipeline is an automated process for delivering your software to your customers. It is the expression of steps which need to be taken to build your software from your version control system to a working and deployed state.

In Jenkins, Pipeline (with a capital P) provides a set of tools for modeling simple and complex pipelines in a domain-specific language (DSL). Most often this pipeline “script” is written to a Jenkinsfile stored inside your version control system. This way the Pipeline definition can be kept up-to-date as the software itself evolves. That said, Pipeline scripts can also be stored as-is in Pipeline-typed jobs in your Jenkins.

 

Meliora Testlab plugin for your Pipelines

Meliora provides a plugin to Jenkins which allows you to easily publish your automated testing results to your Testlab project.

Previously, it was possible to use the plugin in Pipeline scripts only by wrapping it in a traditional Jenkins job and triggering that job with a “build” step. A new version 1.16 of the plugin has been released with official support for using the plugin in Pipeline scripts: the plugin can now be used directly in your scripts with a ‘melioraTestlab’ step.

When the plugin is configured as traditional post-build action in a Jenkins job, the plugin settings are set by configuring the job and entering the appropriate setting values via the web UI. In Pipelines, the settings are included as parameters to the step keyword.

 

Simple Pipeline script example

The script below is an example of a simple Declarative Pipeline script with Meliora Testlab plugin configured in a minimal manner.

pipeline {
    agent any
    stages {
        stage('Build') {
            // ...
        }
        stage('Test') {
            // ...
        }
        stage('Deploy') {
            // ...
        }
    }
    post {
        always {
            junit '**/build/test-results/**/*.xml'
            melioraTestlab(
                projectKey: 'PRJX',
                testRunTitle: 'Automated tests',
                advancedSettings: [
                    companyId: 'mycompanyid',
                    apiKey: hudson.util.Secret.fromString('verysecretapikey'),
                    testCaseMappingField: 'Test class'
                ]
            )
        }
    }
}

The script builds, tests and deploys the software (with the steps omitted) and, as a post stage that always runs, publishes the generated test results to your PRJX project in Testlab by storing them in a test run titled ‘Automated tests’. Note that the advancedSettings block is optional: if you configure these values in the global settings of Jenkins, the plugin can use the global settings instead of the values set in the scripts.

 

Pipeline script example with all settings present

The example below holds all parameters supported (at the time of writing) by the melioraTestlab step.

pipeline {
    agent any
    stages {
        // ...
    }
    post {
        always {
            junit '**/build/test-results/**/*.xml'
            melioraTestlab(
                projectKey: 'PRJX',
                testRunTitle: 'Automated tests',
                comment: 'Jenkins build: ${BUILD_FULL_DISPLAY_NAME} ${BUILD_RESULT}, ${BUILD_URL}',
                milestone: 'M1',
                testTargetTitle: 'Version 1.0',
                testEnvironmentTitle: 'integration-env',
                tags: 'jenkins nightly',
                parameters: 'BROWSER, USERNAME',
                issuesSettings: [
                    mergeAsSingleIssue: true,
                    reopenExisting: true,
                    assignToUser: 'agentsmith'
                ],
                importTestCases: [
                    importTestCasesRootCategory: 'Imported/Jenkins'
                ],
                publishTap: [
                    tapTestsAsSteps: true,
                    tapFileNameInIdentifier: true,
                    tapTestNumberInIdentifier: false,
                    tapMappingPrefix: 'tap-'
                ],
                publishRobot: [
                    robotOutput: '**/output.xml',
                    robotCatenateParentKeywords: true
                ],
                advancedSettings: [
                    companyId: 'mycompanyid', // your companyId in SaaS/hosted service
                    apiKey: hudson.util.Secret.fromString('verysecretapikey'),
                    testCaseMappingField: 'Test class',
                    usingonpremise: [
                        // optional, use only for on-premise installations
                        onpremiseurl: 'http://testcompany:8080/'
                    ]
                ]
            )
        }
    }
}

If you wish to familiarize yourself with the meaning of each setting, please refer to the plugin documentation at https://plugins.jenkins.io/meliora-testlab.

 

(Pipeline image from Jenkins.io – CC BY.SA 4.0 license)



Tags for this post: automation best practices features jenkins plugin release usage 


10.1.2019

Testlab – Plumbbob release

New year, a new version of Testlab. We are glad to announce a new version of Meliora Testlab – Plumbbob. This release includes multiple new features including Radiators. A Radiator is a new reporting related concept which aims to help you track the progress of your testing in real-time. Please read more about the new features and changes in this release below.

 

Radiators

Radiators – a new reporting-related concept introduced in this release – are real-time views of your project data, refreshed at frequent intervals. Radiators differ from regular reports in the sense that regular reports are intended to be ‘printed on paper’, whereas radiators are intended for real-time tracking on a display screen.

A typical usage scenario for a radiator is a so-called feedback display your team might have in their break room. The Plumbbob release includes two radiators:

  • Testing feedback radiator, which shows the current situation of issues and testing in progress.
  • Multi-radiator, which allows you to configure multiple (single-)radiators to be shown in regular intervals.

More radiators for specific tracking purposes are to be added in future releases.

Note: A Radiator is protected by a password which is needed to display it. Opening up a radiator (or multiple radiators from a single browser) consumes a single license from your license pool. For the time being, Radiators are also usable in Self-service subscriptions – so feel free to try them out! 

 

Coverage filtering

The Test coverage view has gained a search-word based filtering function which can be used to filter the coverage grid down to relevant data. The filtering itself works the same way as in the requirement or test case trees. For example, in Test case coverage, filtering with “priority:high” shows coverage only for test cases with “high” priority.

 

Grid context menus

Most grids presenting data have row-specific controls, represented by small buttons shown at the end of the row when the mouse cursor hovers over it. In Plumbbob, you can also access these functions via a context menu shown when you right-click on the row.

 

 

Special pasting

When managing test cases you can copy test cases around your project using the left-hand side test case tree. A “Paste special…” function has been added to the tree, which enables you to select which content is copied for the selected test cases. For test cases, you can choose whether to copy the steps, attached files and linked requirements and, optionally, copy the test cases retaining their workflow status instead of resetting them to the “In design” status.

 

Assigning tests to individuals

To ease managing your testing, test cases in test runs can now be assigned to individual testers. When a test case to be run is assigned to a tester, Testlab automatically selects these test cases for testing when that tester continues his or her testing.

 

Categorized project events

When your team works in Testlab, events are shown in the top right corner of the UI. In Plumbbob, you have an option to hide these event messages and go through them later. All events received are stored and categorized behind a red indicator. When clicked, you can read through them and/or dismiss them permanently.

 

In addition to the above

In addition to numerous smaller changes and the major ones above,

  • the test case tree can now be filtered by the requirements the test cases are currently verifying: a search word such as “verifying:something” brings up test cases verifying matching requirements, matched against the requirements’ name and ID,
  • in Test execution, it is now easy to run test cases not yet approved as ready. Running these test cases requires appropriate permissions (permissions to run test cases and to change the test case status to ready),
  • reports which allow you to choose the fields listed now don’t render the listings at all if no fields are selected,
  • data imports – such as test cases, requirements, … – now add timeline events to the dashboard and
  • most grids and reports now have an option to filter assets by a “Milestone: No milestone” criterion.

 

Thanking you for all your feedback,

Meliora team


Plumbbob

At the Nevada Test Site in mid-1957, the United States conducted the biggest and longest series of nuclear tests held in the continental US up to that time. The operation was called Operation Plumbbob.

The scientists were worried about radiation spilling into the atmosphere. To get around this issue, the operation included nuclear blasts conducted underground, in deep boreholes. These were to become the world’s first underground nuclear tests.

In a test codenamed Pascal B, the team experimented on how air pressure affects the explosion and the spread of radiation. The borehole was welded shut with a 900 kilogram (2000 lb) steel cap. Then the A-bomb at the bottom of the shaft was detonated.

The steel cap over the detonation was blasted off at a speed of more than 240 000 kilometers per hour (66 km/s, 41 mi/s, 150 000 mph), which has been the topic of some discussion later on. As the cap was never found, it has been speculated that, since it easily exceeded the escape velocity of the Earth, the cap (or part of it) would be the first manmade object sent into space. This would beat the Sputnik launched in October 1957.

(Source: The Register, Wikipedia, photo by National Nuclear Security Administration / Nevada Site Office (Public Domain))



Tags for this post: announce features product release screenshots 


28.9.2018

Testlab – Circleville release

We’ve released a new version of Meliora Testlab – Circleville. Please read more about the new features and changes in this release below.

 

Restricting the visibility of assets
Choosing fields to be reported

In Circleville, when rendering reports, you have an option to choose the fields you wish to include on your report. This applies to most reports which feature a table or listing of some kind. In previous versions, the report templates included listings with a pre-defined set of fields present.

When choosing the fields you also have an option to arrange them in the desired order. The fields and their order are saved as you configure your report.

 

Configurable requirement classes

You now have an option to configure your own classes for requirements, if needed. When you configure one, you enter a title and choose an icon for it. This information is used to present the requirements in the UI. The classes can also be used in reporting.

By using customized classes you have an option to choose the custom fields which are specific to each class. See below.

 

Different custom fields for different types of assets

When configuring custom fields you now have an option to choose which type of issue or which class of requirement the field applies to. This way, for issues for example, you can have one set of fields for “defects” and a different set for “new features”. As said above, this applies to different classes of requirements too.

 

Help manual with inbuilt search

The help manual incorporated into Testlab now has an inbuilt search function. Searching the manual is easy: just enter a search term into the field in the lower left-hand corner. The contents index of the manual gets highlighted for pages with hits and, in addition, any hits on the currently open help page are highlighted in yellow.

 

In addition to the above
  • Workflow changes: Deprecated assets such as deprecated requirements or test cases cannot be edited anymore. To edit them, use appropriate action to transfer the deprecated assets back to design.
  • Workflow changes: By default, closed issues cannot be edited anymore. To edit closed issues a new permission “defect.edit.closed” must be granted to the user.
  • When test cases in a test set of the Planning view are hovered over, the details of the test cases are presented in a tooltip.
  • Table views of requirements and test cases now show the number of assets presented.
  • Links to open issues in Testlab can now be formatted to include the issue ID instead of the primary key.
  • Reporting: “List of issues” and “Issue grouping” report templates now support a new field “Requirements” which allows you to report the requirements linked to the issues via the test cases the issues are linked with.

 

Thanking you for all your feedback,

Meliora team


Circleville

A small town in Ohio US, Circleville, is best-known today as the host of the Circleville Pumpkin Show held to celebrate local agriculture since 1903. This picturesque town has a sinister history of its own, though.

A mystery still unsolved spans from sometime in 1976 to the late 90s, when local residents started receiving personal and threatening letters that included details of their private lives. Thousands of these letters, called the Circleville Letters, were sent to citizens and local city officials. The letters were written in block letters and sent by an anonymous sender.

Finally, a man thought to be responsible for the letters was apprehended in a case related to a few recipients of these letters. He was found guilty of attempted murder and sentenced to years in prison.

The letters kept on coming, though. The officials put the man in solitary confinement, which did not stop the letters, so they were certain this man could not be sending them. Much later, even a team of television producers working on a documentary received one. The letters kept coming until the late 90s and then suddenly stopped.

(Source: Gizmodo, Reddit, photo by Aaron Burden)



Tags for this post: announce features product release screenshots 


9.7.2018

Testlab – Globster release

We’ve released a new version of Meliora Testlab – Globster. Please read more about the new features and changes in this release below.

 

Restricting the visibility of assets

Your projects can now be set up with rules which restrict the visibility of certain assets in your project. This can be applied to requirements, test cases and issues, based on their workflow status or the values of customized fields.

For example: if you have third-party users in your project from which you wish to hide a set of requirements, you can define a rule which limits the visibility of these requirements to your own users in a certain user role.

 

Restricting the visibility of customized fields

Similarly to assets, for all custom fields it is now possible to choose the user roles for which the fields are visible. This makes it possible to hide some of the information in your assets from a certain group of users.

When the field is restricted for certain roles, only users in these roles have access to the information in this field. Please refer to the help manual of Testlab for more details on how the data is visible and/or hidden.

 

Rich-text custom field type

A new type of custom field has been added which enables you to add a long, richly formatted text field to your assets. This new type of field differs from other custom fields in that all rich-text typed custom fields are always presented on a separate tab in the design view.

 

More custom fields

The maximum number of custom fields per asset type has been increased from 10 to 150.

 

Updates to plugins

Jenkins-, Confluence-, and JIRA-plugins have been released with bugs fixed and minor enhancements. Please update accordingly.

 

In addition to the above
  • The Run Tests in … option in the test case menu now has a filtering picker for choosing the test run the tests should be executed in.
  • Selecting a report to be viewed is now easier in the UI, as the listing of reports is now configurable and easier to filter.
  • With the Execution history tab in the Test design view it is now possible to inspect the combined execution history of all of a test case’s revisions.
  • Reports can now also be generated in the Finnish language.

 

Thanking you for all your feedback,

Meliora team


Globster

A globster is an unidentified organic mass that washes up on the shoreline of an ocean or other body of water. The term was first coined by Ivan T. Sanderson for the so-called Tasmanian carcass found in 1960.

Globsters may present such a puzzling appearance that their nature remains controversial even after being officially identified by scientists. Some globsters lack bones or other recognisable structures, while others may have bones, tentacles, flippers, eyes, or other features that can help narrow down the possible species. The picture on the right is the “St. Augustine Monster” that washed ashore near St. Augustine, Florida, in 1896. It was first said to be the remains of a gigantic octopus, but a 1995 analysis concluded that the globster in question was a large mass of collagenous whale blubber matrix, likely from a sperm whale.

(Source: Wikipedia)



Tags for this post: announce features plugin product release screenshots 


13.4.2018

Testlab – Canis Majoris release

We’ve released a new version of Meliora Testlab – Canis Majoris. This version includes major additions to the REST interfaces of Testlab and several UI related enhancements. Please read more about the changes below.

 

REST API enhancements
REST API enhancements

The REST based integration API of Testlab has been enhanced with a number of new supported operations. With Canis Majoris, it is possible to add, update and remove

  • requirements,
  • test categories,
  • test cases and
  • issues.

The add operation maps to HTTP’s POST method, the update operation maps to the PUT method (and, in addition, supports partial updates) and the remove operation maps to the DELETE method. More thorough documentation for these operations can be found in your Testlab instance via the interactive API documentation (/api).
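As a sketch of this verb mapping, the snippet below builds requests for each operation. The base URL and the `/issues` resource path are illustrative placeholders, not documented endpoints; consult your instance's interactive API documentation (/api) for the real resources and authentication details:

```python
import json
import urllib.request

# Illustrative sketch only: the base URL and the /issues resource are assumed
# placeholders for whatever resources your Testlab instance exposes.
BASE = "https://mycompany.melioratestlab.com/api"

def request_for(operation, path, payload=None):
    """Build an HTTP request for an add/update/remove operation."""
    method = {"add": "POST", "update": "PUT", "remove": "DELETE"}[operation]
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE + path, data=data, method=method,
        headers={"Content-Type": "application/json"})

add_req = request_for("add", "/issues", {"title": "Login fails"})
update_req = request_for("update", "/issues/123", {"title": "Still fails"})
remove_req = request_for("remove", "/issues/123")
```

A request built this way would then be sent with `urllib.request.urlopen`, together with whatever authentication your instance requires.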

 

Test categories with fields

In previous versions of Testlab, test cases were categorized to simple folders with a name. In Canis Majoris, test categories have been enhanced to be full assets with

  • a rich-text description,
  • time stamps to track creation and updates,
  • a change history and
  • a possibility to add comments on them.

 

In addition to the above
  • Date and time formats are now handled more gracefully in the user interface by respecting the browser sent locale.
  • Test cases of a requirement can now be easily executed by choosing “Run tests…” from
    • the tree of requirements or
    • from the table view of requirements.
  • Similarly, the test cases linked to a requirement can be easily added to your work (test) set by choosing “Add to work set” from the table view of requirements.
  • The Test case listing report renders execution steps in an easy-to-read table.

 

Thanking you for all your feedback,

Meliora team


Canis Majoris

VY Canis Majoris (VY CMa) is one of the largest stars detected so far and is located 3900 light-years from Earth. Estimates of its size vary, but it is estimated to be 1400 to 2200 solar radii (a solar radius being the distance from the center of the Sun to its photosphere).

The size of this object is difficult to comprehend. It would take 1100 years travelling in a jet aircraft at 900 km/h to circle it once. It would also take over 7 000 000 000 (7 billion) Suns or 7 000 000 000 000 000 (7 quadrillion) Earths to fill VY Canis Majoris. There are also a few videos on YouTube which try to explain the size for you.

(Source: Wikipedia, A Sidewalk Astronomer blog)



Tags for this post: announce features integration product release screenshots 


5.3.2018

How to: Migrate from TestLink to Meliora Testlab

Some of our customers have moved from using TestLink to our tool, Meliora Testlab. We’ve been asked to make this transition easier, so we decided to document this migration path; this post describes how the migration works. Basically, the migration moves your important test data from TestLink to Testlab so you can continue working in your new tool.

 

When changing the tool, do I need to change the way I work?

This is a tough subject that deserves a post completely dedicated to it. To put it shortly, Testlab offers a lot of features that allow working in new ways, but most of the TestLink features are also in Testlab, so if there is no need to alter your way of working, you can do most of your work in the same way as before. The data in TestLink’s “Test specification” view can be seen in Testlab’s “Test case design”. Assigning test cases to be tested is, in its simple form, done in Testlab by picking the cases to a work set and then creating a test run, which tells what release/version is being tested. Test execution, again, is pretty much the same.

In a nutshell – how does this all work?

TestLink has an export feature that allows you to export your test case data as XML. We can transform this data into a format that can be imported directly into Testlab. When custom fields have been used in TestLink, the tool needs to be instructed how the custom fields map to Testlab fields. After the definition, a click of a button transforms the data into the desired format.

For your convenience, Meliora will do this transformation free of charge (*). For our On-premise customers we can also deliver the tool to allow doing the transformation yourself.

The migration has five distinct steps:

  1. Modifying Testlab to the format to be used (optional).
  2. Exporting test cases from TestLink test suites.
  3. Describing the data mappings (optional).
  4. Transforming the TestLink export XML to a CSV format that Testlab can read.
  5. Importing the test case data from the CSV.

The technical part behind these steps is pretty straightforward. Once you know how you want to migrate the data, the export-import will be just a few clicks for you.

*) Meliora reserves the right to decline the free service in cases where making the transformation would engage Meliora with an exceptional workload. This could happen if you have a huge number of projects to be migrated.
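To give a feel for what the transformation step does, here is a minimal sketch. The XML element names only loosely mimic a TestLink export, and the CSV columns are assumed placeholders; the actual tool from Meliora handles far more detail:

```python
import csv
import io
import xml.etree.ElementTree as ET

# The <testcase>/<summary>/<status> elements loosely mimic a TestLink export;
# the CSV columns ("Name", "Status", "Description") are assumed placeholders.
STATUS_MAP = {"Draft": "In design", "Obsolete": "Deprecated", "Final": "Ready"}

def transform(xml_text):
    """Read test cases from an export-like XML and emit an import-ready CSV."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Name", "Status", "Description"])
    for tc in root.iter("testcase"):
        status = STATUS_MAP.get(tc.findtext("status", "Draft"), "In design")
        writer.writerow([tc.get("name"), status, tc.findtext("summary", "")])
    return out.getvalue()

sample = """<testsuite name="Suite">
  <testcase name="Login works">
    <summary>Check login</summary><status>Final</status>
  </testcase>
</testsuite>"""
```

The interesting part is the status map and the field mapping, which is exactly what the “Describing the data mappings” step defines.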

 

What do I need to do before the migration?

Well, the only required thing is becoming a Meliora customer. Any edition of Testlab will do. A highly recommended thing to do is to plan how you want to test with Testlab in the future. As Testlab offers many features not present in TestLink that make testing easier, it might be wise to change the way testing is done at the same time as the tool is changed. For example, Testlab has automatic revisioning, history, built-in (optional) review and so on, so you might have made customizations to TestLink for these purposes. These customizations are probably not needed anymore, so you need to decide whether it is better to just ditch them or to continue using them. If you are in a hurry, fear not: you can simply import all custom data and ditch the unneeded parts later.

 

General considerations

The most important thing to ensure is that your company’s testing work is not interrupted more than it has to be. When you switch tools, it is not effective to continue using the old tool (in migrated projects), as you would be getting test case updates and test results in two tools for the same project. Thus it is best to decide on a timeslot for the switch and ensure everything goes smoothly when the time comes.

It is best to contact Meliora as you plan the migration and prepare a timeslot in case you want to keep the switch time to an absolute minimum. Just create a support ticket and Meliora will help you get the migration done smoothly.

Migration steps

 

Modifying the Testlab

Testlab, by default, has different fields and field values than TestLink. You can, if you want, do the migration without modifying Testlab; the fields will then be mapped with the defaults. In case you decide to do the modification, these are the things to consider:

Choosing / Modifying a workflow

In Testlab, the way statuses are used is controlled by workflows. Workflows allow logic behind changing statuses: which statuses can be reached from which status, which fields are mandatory in which state and who has the privilege to make changes. Testlab comes with two default workflows, called “simple” and “review”. The difference for test cases here is that the review workflow has a phase for review, which the simple one skips. Basically, you need to decide what statuses you wish to see in Testlab. The default status transformations are depicted in the following table:

TestLink Status    | Testlab Simple workflow | Testlab With review workflow
-------------------|-------------------------|-----------------------------
Draft              | In design               | In design
Ready for review   | Ready for review        | In design
Review in progress | Ready for review        | In design
Rework             | In design               | In design
Obsolete           | Deprecated              | Deprecated
Future             | Ready                   | Ready
Final              | Ready                   | Ready

You can add further data transformations to the migration in addition to simply changing statuses. For example, the “Testlab way” of handling test cases that are not yet runnable is to use a “Milestone” field to define when the test case is planned to be taken into use. More on this in the chapter “Describing the data mappings”.
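Such a transformation rule could be expressed, for instance, as a small mapping table. This is a hypothetical sketch following the Milestone example above, not the actual migration tool's format:

```python
# Hypothetical rule table: a TestLink status maps to a Testlab status plus,
# optionally, extra field values (here, a Milestone for not-yet-runnable cases).
RULES = {
    "Draft":    {"status": "In design"},
    "Obsolete": {"status": "Deprecated"},
    "Future":   {"status": "In design", "milestone": "Future"},
    "Final":    {"status": "Ready"},
}

def migrate_status(testlink_status):
    """Return the Testlab status and optional milestone for a TestLink status."""
    rule = RULES.get(testlink_status, {"status": "In design"})
    return rule["status"], rule.get("milestone")
```

Whether “Future” cases should simply become “Ready” (the default mapping) or be parked behind a milestone like this is exactly the kind of decision made in the mapping-description step.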

Modifying the Testlab project

In case you have used custom fields and wish to keep the data in them in the future as well, you need to configure your Testlab project to include those fields. You’ll find the instructions for that in Testlab’s manual – it’s a very simple operation.

Keep in mind that you also have an option to import data from custom fields into the description field. This makes sense when you do not want to lose the information but do not need to filter data in reports using custom field data.

If you are migrating data to multiple Testlab projects, you can make the common modifications once and then copy the modified project. This way you do not need to repeat the modifications for each project separately.

 

Exporting test cases from TestLink

In TestLink, you need to export the test cases as XML for each project you wish to migrate:

  • In TestLink, go to "Test specification"
  • Choose "Actions" -> "Export All Test Suites"
  • Make sure you have at least four boxes checked
  • Save your XML export file

 

Describing the data mapping

Before transforming the data, you can define how the TestLink data is to be transformed in the migration. On-premise customers who wish to do the transformation themselves do not need to do this, as they will enter this data into the tool themselves. Send the following information to Meliora:

  • Which custom fields are to be migrated. For each field, describe which Testlab field the data should be imported into.
    • Example 1) TestLink custom field "risk" values to Testlab field "risk"
    • Example 2) TestLink custom field "legacy link" appended to the end of the "description" field
  • How you want to change the data values
    • Example 1) TestLink custom field "risk" value "petty" to become "Low" in Testlab
    • Example 2) TestLink status "Future" to Testlab status "In design" + "Milestone" set to value "Future"
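One way to collect these decisions before sending them to Meliora is to write them down as a small declarative spec. The structure below simply mirrors the examples above; the notation is hypothetical and is not a format Meliora's tool consumes directly:

```python
# Hypothetical notation for the mapping decisions. Field and value
# names come from the examples above; everything else is illustrative.
MAPPING_SPEC = {
    # Which custom fields to migrate, and where their data should go.
    "custom_fields": {
        "risk": {"target": "risk"},                                  # Example 1
        "legacy link": {"target": "description", "mode": "append"},  # Example 2
    },
    # How individual data values change on the way.
    "value_changes": {
        ("risk", "petty"): {"risk": "Low"},            # Example 1
        ("status", "Future"): {"status": "In design",  # Example 2
                               "Milestone": "Future"},
    },
}
```

However you choose to write the mappings down, the important thing is that every migrated custom field has a target and every changed value has an explicit before/after pair.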

Transforming the XML to Testlab CSV

Here you have a few options:

  1. SaaS users can simply send the XML to Meliora, and Meliora will do the transformation and import the data into your Testlab project. Just wait for the confirmation and you can start using Testlab with the imported data!
  2. On-premise or SaaS users who want Meliora to do only the transformation can send the XML to Meliora, and Meliora will deliver a CSV that is ready to be imported into Testlab.
  3. On-premise users who want to do the transformation themselves should contact Meliora for details. Meliora will deliver the tool along with instructions for performing the transformation.
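To give a feel for what the transformation involves, here is a minimal sketch in Python. The XML sample mimics TestLink's export format in a simplified form, and the CSV column names are assumptions – the actual layout is defined by Meliora's transformation tool:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Simplified stand-in for a TestLink XML export. Real exports carry
# more elements (steps, keywords, requirements); the sample content
# here is invented.
SAMPLE_XML = """<testcases>
  <testcase internalid="42" name="Login with valid credentials">
    <summary>Verify a user can log in.</summary>
    <custom_fields>
      <custom_field><name>risk</name><value>petty</value></custom_field>
    </custom_fields>
  </testcase>
</testcases>"""

# Value-level transformations of the kind agreed with Meliora
# (see the "risk" examples above).
RISK_VALUES = {"petty": "Low", "harmful": "Medium", "catastrophic": "High"}

def transform(xml_text):
    """Turn a TestLink-style export into rows for a Testlab-style CSV."""
    rows = []
    for tc in ET.fromstring(xml_text).iter("testcase"):
        row = {
            "Name": tc.get("name"),
            "Description": (tc.findtext("summary") or "").strip(),
            "Risk": "",
        }
        # Map the migrated custom fields, translating values on the way.
        for cf in tc.iter("custom_field"):
            if cf.findtext("name") == "risk":
                value = cf.findtext("value")
                row["Risk"] = RISK_VALUES.get(value, value)
        rows.append(row)
    return rows

def to_csv(rows):
    """Serialize the transformed rows as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["Name", "Description", "Risk"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

A real migration also has to handle test suites, steps, and the status/milestone mappings described earlier, which is why contacting Meliora for the actual tool is the recommended route.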

Importing the data from CSV

The import itself is really just a few clicks:

  1. From Testlab menu choose Import -> Test cases
  2. Choose the csv file
  3. Try the import first with the "dry run" option on. This will show any errors or warnings – for example, if the project is missing a custom field that the CSV file tries to populate.
  4. Once you are satisfied with the dry run results, uncheck the dry run box to actually load the data into Testlab.
  5. Refresh the test case tree and start using your Testlab.

Final words

This document describes the basic setup of the migration process. As with any project more complicated than Hello World, there will be unknown factors. You might have in-house customizations in your TestLink, or you might want to include data from a completely different source. Fear not! Meliora can handle migrations that do not follow the ordinary path. Just contact our great support and we will work through the migration together!

Meliora team

 



Tags for this post: announce features product release reporting screenshots 

