Posts tagged with: integration


26.9.2019

Getting started: Bringing automation test results to Testlab

This post is an introduction to bringing your automation test results into the Testlab tool. We also cover options and questions relating to migration from earlier Testlab versions to the new version, Testlab – Automaton. You can read more about your migration options below.

 

Why bother – what is the motivation behind bringing the results into Testlab?

Before test results are brought to Testlab, it is common to discuss the reason for doing so – why go through all the trouble? The test results most probably already exist somewhere: possibly in Jenkins, possibly in an HTML report on disk – in any case, somewhere in a readable format. Why would you set up a system to bring the results into a test management / ALM tool like Testlab? The most common reasons are better findability and usability, and the ability to connect the test results to requirements and use cases. When the testing information is visible to everyone, the project can work more efficiently – and thus make better decisions. Combining the results of manual and automated testing into one view also helps you understand the overall quality situation better. If your test results live only in a CI tool, you usually see the latest testing situation well, but you cannot see which requirements or user stories the tests cover, which they do not, and how the situation develops over time.

 

Screenshot of viewing test results in Testlab’s Test automation view

 

Bringing the test results to Testlab – what does it do?

If you are familiar with manual testing in Testlab, it helps to know that you’ll see similar test runs and test cases for automated tests as you do for manual ones. You can use the same reports you are familiar with to report automation results as well. This is the basic case, and it is only the beginning. Testlab has a rule engine that allows you to define how you wish to see and use the test results. The reason for defining rules is that it makes the results easier to handle in the management tool – for example, when communicating the project situation.

 

Central concepts of test automation

Before we hop into the details of how this test automation importing works, let’s go through the central concepts and terms we use in our tool and in this article.

Result file: The original result file that your test automation tool produced and from which you want to import results to Testlab.
Ruleset: A set of rules you use to handle the result file so it can be utilized in Testlab. If you have multiple different kinds of automated test results, you can have multiple rulesets to handle them.
Test run: A container in Testlab where test results are saved – very much like for manual tests.
Imported results: Test results in your test run that have been handled by a ruleset from the result file your test automation tool produced.
Source: A concept that groups together test results you wish to compare. A Jenkins job is one example of a useful source.
Rules: Individual rules in a ruleset that dictate how test results are imported from your result files.
Identifier: A unique identifier, distinct for each test case, consisting of the folder structure and the name of the test in the tree.
Required test cases: In Testlab, test results are always saved for a test case. When importing results, if there is no test case in Testlab yet where a result could be saved, that test case is considered a required test case. You can then create these test cases automatically.

 

Importing my first test results

You probably have your automation tool in place already. Quite likely it is able to save the test results in xUnit XML format, which is a de facto standard for test automation tools. Testlab reads this format, as well as Robot Framework’s output and TAP result files. If the tool you use does not produce any of these formats, let us know and we’ll see if we can add support for it. So when it comes to Testlab, the story starts from the point where you have run the tests using your automation tool and have the results in a file.
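If you are curious what such a result file looks like, the sketch below builds a minimal xUnit-style XML file with Python’s standard library. The class and test names are invented for illustration only – in practice your automation tool writes this file for you.

```python
# Minimal sketch of an xUnit/JUnit-style result file, built with Python's
# standard library. The suite and test names here are made up; your
# automation tool normally produces this file for you.
import xml.etree.ElementTree as ET

suite = ET.Element("testsuite", name="com.example.LoginTests", tests="2", failures="1")

ET.SubElement(suite, "testcase", classname="com.example.LoginTests",
              name="testValidLogin", time="0.42")

failing = ET.SubElement(suite, "testcase", classname="com.example.LoginTests",
                        name="testInvalidPassword", time="0.13")
failure = ET.SubElement(failing, "failure", message="Expected error banner was not shown")
failure.text = "AssertionError: banner not found"

ET.ElementTree(suite).write("results.xml", encoding="utf-8", xml_declaration=True)
```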

When automating testing, the tests are often launched from a CI tool like Jenkins or by some other automated method. Jenkins can be configured to send the test results to Testlab automatically after the tests have been run, or the results can be sent using an API. You can also drag and drop result files into Testlab manually from your computer, so trying out how it works is pretty easy.

 

Do it yourself!

We assume you have your result file at hand. If not, and you want to try how this works, grab one from the demo project that came with your Testlab (instructions a bit later). In Testlab, head to the “Test automation” view and select the “Automation rules” tab. This is the place where you define what Testlab does with your automated test results. Let’s start by creating a new ruleset for reading your results. Do that by clicking the + in the “Rulesets” pane.
Add a ruleset

You can use this ruleset for reading this file and similar ones you might import to Testlab later, so name it accordingly. For now you can call it “My ruleset” or “Mithrandir” if you wish. Another required field is the title of the test run that will be created by default with this ruleset. It can be overridden later, but if you do not give the run a name, this one is used. Give it a name now – for example “My results”. You can also set defaults for Milestone, Version, and Environment if you wish, and choose tags to be added to the test run. You can also define whether issues should be created automatically when test runs contain failing tests, and how those issues are created. Once you save your changes, the ruleset is ready and you can add your rules to it.

The second step is to import your result file into Testlab. Find it on your computer and drag and drop it onto the Testlab window. You should now see your test results in raw format, and you probably recognize the test names and results. The test cases are not yet permanently saved to Testlab – you still have to decide how you wish to do that. If you don’t have your own result files but want to try out how this works, select a preloaded result file from the demo project. The results in the demo project are already loaded into Testlab, but you can still follow the same process. See the picture below.

 

Preloaded result file

The third phase is the decision making. Basically you have two things to decide: which test cases you want to bring in, and how you want to see them in Testlab. Let’s start with the simplest case – you wish to bring all test results in and have one test case in Testlab per original test result. We’ll get to the other options, and the reasons why you might want to use them, a bit later in this article. To add a rule that gets you these test results, right-click over your test results (in the middle of the screen), keep the rule type as “Add and map” and click “Add a new rule”. This rule brings your results in and also creates test cases automatically. It matches test cases whose identifier starts with the text in the “Value” box, so changing that value dictates which test cases this rule picks.
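To make the “starts with” matching concrete, here is a small sketch of the idea in Python. It is a simplified illustration of the concept only, not Testlab’s actual implementation, and the identifiers are made up.

```python
# Simplified illustration of an "Add and map" rule: every imported result whose
# identifier starts with the rule's value is picked up; everything else is left
# for other rules (or reported as unmapped). Not Testlab's actual code.
def apply_add_and_map(rule_value, imported_results):
    mapped, unmapped = [], []
    for result in imported_results:
        if result["identifier"].startswith(rule_value):
            mapped.append(result)
        else:
            unmapped.append(result)
    return mapped, unmapped

results = [
    {"identifier": "Suite.Login.Valid login", "status": "PASSED"},
    {"identifier": "Suite.Login.Invalid password", "status": "FAILED"},
    {"identifier": "Suite.Reports.Export", "status": "PASSED"},
]

mapped, unmapped = apply_add_and_map("Suite.Login", results)
print(len(mapped), "mapped,", len(unmapped), "unmapped")  # 2 mapped, 1 unmapped
```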

Next, you need to click “Apply rules”, because Testlab does not do that for you. You’ll then see the “Preview test run” on the right side of the screen. This part of the screen shows the situation from Testlab’s point of view – the names of the test cases to be created, the test results and so on. In this case, where you created one test case per result, the values on the right should look very familiar.

The fourth and final phase is confirming what you see and saving the results. Click “Save results…” in the bottom right corner. You get a window where you can enter details for the test run to be created. The values are prefilled from the ruleset you used, so you can just click save and have the results stored right away, or you can modify them if you wish. The title of the test run is used to distinguish different test runs, so give it a recognizable name (or go with the one inherited from the ruleset if it fits). The source is prefilled with your name when importing results manually. Let’s leave the other values as they are for now – we’ll cover them later. Just click “Save results”.

 

How you see the results

Now you have the results in – great! The main place to peruse automated test results is the “Automated test runs” tab, which you should be seeing now. The top part, which lists the imported test runs, shows their relevant statistics. Result mappings show light green for test cases you have mapped and grey for test cases you have chosen to ignore. This is what you want to see. If the bar shows pink, there are unmapped tests for which you probably want to add rules. This can happen, for example, when you push results from an outside source with tight rules and somebody creates new tests that do not match those rules. For clarity – you don’t have to update the rules every time new test results are pushed in, only when the existing rules do not catch them. So if you create new test cases with your automation tool, produce a result file and upload it to Testlab; if the identifiers of the new test cases start with the value you entered, your existing rule will work just fine.

You’ll also see the test cases and their results in other parts of the Testlab user interface. If you go to test case design, you’ll find the newly created automated test cases there. You can add additional information to these test cases, such as a description of the purpose of the test case or other fields to classify it. You can also define which requirements or user stories the test cases verify in the “Test coverage” view. This way you’ll see the coverage status for your requirements. To try it out, just drag and drop a test case onto a requirement to mark the test case as covering that requirement. You can also use all the powerful reports to share the quality situation of your automated testing.

 

Best practices

So now you have seen how to get test results in with a simple case. Sometimes this is exactly what you need and want, but it is also good to take heed of some best practices, especially if you are going to import results over a long period. When the relevant test data is organized well, it is easier to see the quality situation.

 

General practices

Classify, classify, classify! The same thing that applies to manual testing is also important for automated testing – the more Testlab knows about the target of your testing, the more relevant information you get out of it. And the more relevant the information, the better decisions you can make. So if you have some sort of release schedule in use, create the releases as milestones in Testlab and mark those milestones to be used for test automation. This allows you to see what and how much you tested in a certain milestone. The same goes for versions and environments.

Less is more: At first it feels intuitive to get all possible automated test results into Testlab – it has powerful reports and radiators that show your test execution data nicely. Still, there is a reason not to import all test results as they are: we humans are bad at interpreting really large amounts of data in one go. If you ran a thousand tests every half an hour, you would get about 1.5 million results in a month. The detail of when a certain test failed in the past is rarely relevant in that mass. Testlab offers you a few ways to keep the data relevant:

  1. Ignore irrelevant results: This is the most straightforward option – if your result files contain tests that are irrelevant to you, simply ignore them completely so that their results never end up in Testlab. This might make sense, for example, when your build runs test cases that are not relevant for your project.
  2. Aggregate test results: You might run a huge number of small test cases, each of which checks just a small detail. In the end, you are probably interested in which detail failed only when a test fails; when the tests pass, you are probably happy just knowing the whole group ran successfully. For this case, Testlab offers the possibility to map many imported test results to one Testlab test case. The advantage of doing this is that you get a more approachable number of test cases in Testlab, it is easier to spot the area where something fails, and if you map requirements to the test cases, maintaining the mapping gets a lot easier. As the test case details record which imported test cases were run, you do not lose any information either. Also, if any of the imported test cases fail, the Testlab test case fails, so you will not miss any failures.
  3. Save results at relevant intervals: It is not always wise to keep all test results in Testlab. It is enough to act when something fails and to know the overall situation. This holds true especially if you run automated tests many times a day – it is unlikely you will want to check the exact results of an individual run from the past. To save results at relevant intervals, you can overwrite test runs in Testlab: you still import the results every time the tests are run, but keep only the latest. You can control this with naming conventions, for example from the Jenkins plugin.

 

Importing results from Robot framework

Testlab supports reading Robot Framework’s own output format, which provides richer data than xUnit XML. It is therefore advisable to send the results in Robot Framework’s own XML format instead of xUnit XML. This way you get to see more information about what kind of test was run.

While Jenkins typically runs the Robot tests, remember that you can also send the results directly to Testlab by dragging and dropping. This makes sense especially when you are developing the tests and want to share progress with your team. Quite often the tests written with Robot Framework are described at a high level (as acceptance tests usually are). This means you probably want to see the results in Testlab as they are in Robot Framework. The suite structure is also usually well suited for Testlab usage, so bringing the results in is easy – try using “Add and map” for all results.
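If you run Robot Framework from a script, the sketch below shows one way to produce the output.xml file that Testlab can read. The test directory path is a placeholder; Robot Framework itself must be installed.

```python
# Run Robot Framework programmatically and produce its native output.xml.
# "tests/" is a placeholder path to your .robot suites
# (equivalent CLI: robot --outputdir results tests/).
from robot import run

run(
    "tests/",                 # directory containing your test suites
    outputdir="results",      # where output.xml, log.html and report.html go
    output="output.xml",      # this is the file to import into Testlab
)
```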

 

Automating manual system testing

When automating system testing, you might want to consider keeping the existing manual testing tree in place and putting your automated test cases in the same tree. In some cases this makes reporting and managing requirement links easier. You can always make the distinction between the tests using the “type” field when needed.

 

Migrating from the previous versions of Testlab

Compared to earlier versions of Testlab, Testlab – Automaton brings significant changes to how automated tests are stored in the system and presented in the user interface.

In Testlab – Automaton, test cases and test runs are typed as manual or automated. Results of manual tests are recorded in manual test runs and results of automated tests are stored in automated test runs. In the user interface, there are separate views to manage and present these assets:

  • “Manual testing”: This view presents all manual test runs and offers an option to execute (manual) test cases. It is essentially the “Test runs” view of previous Testlab versions.
  • “Test automation”: This view shows all automated test runs and offers a mapping workbench for defining rules on how the tests should be mapped when results are imported. This is a new view in Testlab – Automaton.

You can read more about these features in the Help manual of Testlab.

 

Test case mapping identifiers

Earlier, test cases could be automated by defining a custom field with a value that maps to the identifier of your automated test (or part of it). In Automaton, a custom field is no longer necessary and this mapping happens under the hood. When upgrading to Automaton, you have the option to migrate all test cases which have a test identifier mapping in a custom field to the Automaton version’s “automated test case” type. The mapping identifier is moved from the custom field to the “Automation rule value” field in the migrated automated test case. For the how and when, and the pros and cons of this, please see the Migration FAQ below.

 

Existing test automation results

If you have pushed test results from outside sources such as Jenkins, you most likely have test runs in your projects which hold results for these automated test cases. When upgrading to Automaton, you have the option to migrate all these test runs to the new model. The test runs are then converted to the automated type and their results are shown in the new Test automation view of Testlab. For the how and when, and the pros and cons of this, please see the Migration FAQ below.

 

Migration FAQ

a) I just want to continue using Testlab without doing anything. Can I skip all this migration stuff?

Yes, you can.

The only thing you should consider is that if you have automated tests in your projects (for example, you are using Jenkins to push results in), there may be some downsides to doing so. See question c.

 

b) I have only manual test cases in my projects. As far as I know, we are not using automation at this time. What should I do?

Great, you probably don’t have to do anything. The only thing we recommend is to make sure you are really not using automation. For example, if you know that Jenkins is integrated with your Testlab for some projects, you most likely are using automation. You could also ask your administrator whether any automation features are in use. If it turns out you are using automation, please see question c for your options.

 

c) We have automated test cases with test identifiers in our project. What are our options?

Your options are as follows:

  • Continue using Testlab as you are without migrating any data
    If you plan to continue pushing automated test results to your project(s), we don’t recommend this approach. If you still wish to do so, there are some downsides:

    • New results pushed will most likely create new test cases for your automated tests. This is because mapping via a custom field value is no longer supported and the (default) ruleset will create new test case stubs for your tests. Your old results (and the test cases linked to them) will remain as expected, but you will end up with duplicate test cases for the results added via the new test automation logic. If this is to be avoided, we recommend you migrate your test cases.
    • If the project has duplicate test cases for automated tests, care must be taken when reporting data to filter in the relevant tests (old tests or new tests).
    • The (default) ruleset will create the full-depth test case tree for your automated test hierarchy. This means that if your current automated tests are scoped to package level, the new test hierarchy will not match the old one, and the newly created tests will need some manual work to be organized into the same hierarchy as the old test cases.
  • Migrate your automated test cases to the new rule-based system
    This means that your current (ready-approved) test cases which have mapping identifiers in a custom field will be converted to automated (typed) test cases and the mapping identifiers will be removed from the custom field. Keep in mind that if your Jenkins is configured to create new test cases for automated tests, you will need to update your ruleset after the migration to do the same. By default, after the migration, the default ruleset will not create new test cases – it will only map results to your existing test cases via the “Automation rule value” field. This migration has the following benefits:

    • When pushing new results, the results will be mapped to the previously existing test cases in the project.
    • Reports will most likely work as they are. Only the type of the test case will be changed.
  • Migrate your automated test cases to the new rule-based system and migrate your existing test results
    As explained above, you have an option to migrate your test cases to the new system. Keep in mind, though, that this does not automatically convert the results of these tests to the new system. You also have an option to migrate your results.
    When you do so, all existing test runs which contain only automated test cases will be converted to automated test runs. All test runs which contain manual tests will be left unconverted. After the migration, manual test runs are found in the “Manual testing” view and automated test results in the “Test automation” view. This conversion has the benefit of migrating the data to the format of the new system, so that current and future features tied to automated test runs will also work with your old result data.

 

d) We prefer not to migrate anything but to continue using Testlab. Is there still something I have to do for automation to work?

The old way of mapping test identifiers via custom fields is no longer supported and the API through which results are pushed in has changed. Because of this, at a minimum:

  1. If you are using Jenkins, you must update your Testlab Jenkins plugin and reconfigure it. Various settings that used to live on the Jenkins plugin side have been moved to the settings of a ruleset. Please refer to Testlab’s documentation on how to manage rulesets and their settings.
  2. If you have custom code pushing results via Testlab’s REST API, you most likely need to update your integration to work with Testlab’s rulesets.

Please see question c for any downsides you might have by skipping the migration of your existing automated test cases.

 

e) We are using Testlab as a Service and I would like to migrate. What do I do?

You should read these questions thoroughly and plan your migration with the administrative user (and persons responsible for the projects in your Testlab) and decide which projects you want to migrate in your installation. You will also have to check which custom field(s) you are using to map the automated tests.

When you know

  • which projects to migrate and
  • what is the title of the custom field used to map the automated tests in each project,

please submit a ticket to Meliora’s support and ask for the migration of your data. Please note that we will not automatically migrate any of your data and you need to submit a ticket for this to happen.

 

f) We have an on-premise installation of Testlab. How do we migrate the data when we upgrade to Testlab – Automaton?

You should read these questions thoroughly and plan your migration with the administrative user (and persons responsible for the projects in your Testlab) and decide which projects you want to migrate in your installation. You will also have to check which custom field(s) you are using to map the automated tests.

When you know

  • which projects to migrate and
  • what is the title of the custom field used to map the automated tests in each project,

you can use the migration scripts provided in the upgrade package to migrate each project. Information on how to (technically) run these scripts after the upgrade of the Testlab is included in the additional notes inside the upgrade package.

 

g) I have questions and need help deciding, what do I do?

Please contact Meliora’s support and just ask, we are happy to help!

 

Meliora Testlab Team



Tags for this post: automation features integration product release usage 


4.6.2019

Upcoming releases with Automation workbench

In this post, we will discuss our upcoming release schedule and offer a glimpse of the features in upcoming releases.

 

Challenges with test automation

The business challenges related to interpreting the results of automated tests are obvious. More often than not, tests are created by developers with varying motives, the number of tests is large, and traceability to the specification is lacking. Features are needed to make sense of these results and gain business benefits beyond the scope of regression or smoke testing – which, in practice, is often what automated tests end up being used for.

If you are able to map and scope the tests efficiently, you

  1. have a better understanding of what is currently passing or failing in your system under testing,
  2. have your team collaborate better,
  3. ensure that key parties in your project can make educated decisions instead of being reactive and
  4. have a tool to really transition from manual to automated testing as existing test plans can be easily automated by mapping automated tests to existing test cases.

 

Automation in Testlab

In Testlab, results from automated tests can be mapped to test cases in your project. This way of working has benefits, as the results can be tracked and reported in exactly the same way as the manual tests in your current test plan. When results for automated tests are received (for example from a Jenkins job), a test run is added with the mapped test cases holding the appropriate results. The mapping logic is currently fixed: if the ID of the automated test starts with an ID bound to a test case, that test case receives the results of this automated test.

 

Upcoming Automation workbench

Future versions of Testlab will include a new Test automation view with a workbench aimed at managing the import and mapping of your automated tests. A screenshot of the current prototype can be seen below (click for a peek):

Issue situation radiator

 

The workbench

  • allows you to define “sources” for your tests: a source is a new concept which lets you configure how the incoming results from such a source are handled. You can think of a source as your “Jenkins job”, or as your Jenkins installation as a whole if your jobs follow the same semantics and practices. A source is defined with “rules”, which define how the automated tests are mapped,
  • features a rule engine, which allows comprehensive mapping rules to be defined with
    • ignores (which tests are ignored),
    • creation rules (how test cases for tests are automatically created) and
    • mapping rules (such as “starts-with”, “ends-with” and regular expressions, which define how the results for the tests are mapped to your test plan) and
  • allows you to
    • easily upload new results by dragging and dropping,
    • execute any rules as a dry run to see how the results would be mapped before the actual import,
    • create any test cases by hand – if preferred over the creation rules – and
    • use pre-built wizards which help you easily create the needed rules by suggesting appropriate rules for your source.

We feel this is a unique concept in the industry to help you get better visibility into your automation efforts.

The prototype shown is work-in-progress and is due to be changed before release.

 

Upcoming releases

As our roadmap implies, the new major automation-related features are planned to be released in Q3 of our 2019 release cycle. The team will be working hard on these new features, which means that the Q2 release will be a bug-fix release only. That said, we will be releasing maintenance releases in between, but the next feature release will be the version with the automation features in Q3/2019.

Meliora Testlab Team



Tags for this post: automation features integration product release usage 


4.6.2019

Changes to authentication with Atlassian plugins

When integrating Testlab with your Atlassian products such as JIRA, there are credentials involved on the Atlassian side. Previously, for example with JIRA, the integrations required you to create a user account in JIRA and configure the credentials (including the password) in your Testlab project. Atlassian has made some changes, and at least with JIRA Cloud, using passwords in this kind of scenario has been deprecated.

The documentation for setting up our integrations has been updated. In this post, we also provide instructions on what to do to get your integrations up and running.

 

Replacing passwords with API tokens

At least for some operations, the integrations rely on operations provided by JIRA’s REST APIs. The details may vary a bit depending on your JIRA version, but at least with the current JIRA Cloud you should replace all passwords with so-called API tokens. For the (JIRA) user you are integrating with, create an API token and use the token as the password on the Testlab side. For technical details, you can read more about JIRA’s authentication via the links provided at the end of this article.

When using JIRA Cloud, to create an API token,

  1. With the user account you are integrating with, log in to https://id.atlassian.com/manage/api-tokens
  2. Click Create API token
  3. Give your token a label and click Create
  4. Click Copy to clipboard and paste the token as a “password” when configuring the integration at Testlab side

That’s it. Your server-installed JIRA might also require the use of API tokens. If so, refer to your JIRA administrator or the JIRA documentation for help on how to do this.
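If you want to verify that a token works before configuring it in Testlab, a quick check like the one below can help. It is only a sketch: the base URL, email address and token are placeholders, and /rest/api/2/myself is a JIRA Cloud REST endpoint that simply returns the authenticated user.

```python
# Quick sanity check of a JIRA Cloud API token using basic authentication
# (email address + API token in place of a password). All values are placeholders.
import requests

JIRA_BASE_URL = "https://your-domain.atlassian.net"   # placeholder
EMAIL = "integration-user@example.com"                # placeholder
API_TOKEN = "your-api-token-here"                     # placeholder

response = requests.get(
    f"{JIRA_BASE_URL}/rest/api/2/myself",
    auth=(EMAIL, API_TOKEN),
)
response.raise_for_status()
print("Authenticated as:", response.json().get("displayName"))
```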

 

Resources



Tags for this post: integration jira security usage 


13.4.2018

Testlab – Canis Majoris release

We’ve released a new version of Meliora Testlab – Canis Majoris. This version includes major additions to Testlab’s REST interfaces and several UI-related enhancements. Please read more about the changes below.

 

REST API enhancements

The REST-based integration API of Testlab has been enhanced with a number of new supported operations. With Canis Majoris, it is possible to add, update and remove

  • requirements,
  • test categories,
  • test cases and
  • issues.

The add operation maps to HTTP’s POST method, the update operation maps to the PUT method (and, in addition, supports partial updates) and the remove operation maps to the DELETE method. More thorough documentation for these operations can be found in your Testlab instance via the interactive API documentation (/api).
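As a rough illustration of how such verb mappings are typically used from client code, the sketch below issues the three calls with Python’s requests library. The base URL, resource paths, credentials and payload fields are assumptions for illustration only – consult the interactive API documentation (/api) of your own Testlab instance for the exact endpoints and fields.

```python
# Illustration of add / update / remove operations mapped to POST / PUT / DELETE.
# URLs, paths, credentials and payload fields are placeholders; check your
# Testlab instance's interactive API documentation (/api) for the real ones.
import requests

BASE = "https://yourcompany.melioratestlab.com/api"   # placeholder
AUTH = ("api-user", "api-password")                   # placeholder credentials

# Add: POST a new asset
created = requests.post(f"{BASE}/requirements", auth=AUTH,
                        json={"name": "Example requirement"}).json()

# Update: PUT a partial update to the created asset
requests.put(f"{BASE}/requirements/{created['id']}", auth=AUTH,
             json={"description": "Updated description"})

# Remove: DELETE the asset
requests.delete(f"{BASE}/requirements/{created['id']}", auth=AUTH)
```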

 

Test categories with fields

In previous versions of Testlab, test cases were categorized into simple named folders. In Canis Majoris, test categories have been enhanced to be full assets with

  • a rich-text description,
  • time stamps to track creation and updates,
  • a change history and
  • a possibility to add comments on them.

 

In addition to the above:
  • Date and time formats are now handled more gracefully in the user interface by respecting the browser sent locale.
  • Test cases of a requirement can now be easily executed by choosing “Run tests…” from
    • the tree of requirements or
    • from the table view of requirements.
  • Similarly, the test cases linked to a requirement can be easily added to your work (test) set by choosing “Add to work set” from the table view of requirements.
  • The Test case listing report renders execution steps in an easy-to-read table.

 

Thanking you for all your feedback,

Meliora team


Canis Majoris

VY Canis Majoris (VY CMa) is one of the largest stars detected so far and is located 3900 light-years from Earth. The estimates of its size vary, but it is estimated to be 1400 to 2200 solar radii (the distance from the center of the Sun to its photosphere).

The size of this object is difficult to comprehend. It would take 1100 years travelling in a jet aircraft at 900 km/h to circle it once. It would also take over 7,000,000,000 (7 billion) Suns or 7,000,000,000,000,000 (7 quadrillion) Earths to fill VY Canis Majoris. There are also a few videos on YouTube which try to explain the size for you.

(Source: Wikipedia, A Sidewalk Astronomer blog)



Tags for this post: announce features integration product release screenshots 


12.4.2017

Testlab – Lilliput Sight released

Meliora is proud to announce the release of a new major Testlab version: Lilliput Sight. This release includes major new features such as asset templating and at the same time bundles various smaller enhancements and changes for easier use. The new features are described below.

 

Asset templating

 

Creating assets for your projects in a consistent way is often important. This release of Testlab includes support for templates which you can apply when adding new requirements, test cases or issues.

A template is basically a simple asset with some fields set with predefined values. As you are adding a new asset, you have an option for applying templates. When you apply a template, the asset is set with the values from the applied template. You can also apply multiple templates, so designing templates to suit the needs of your testing project is very flexible.

A set of new permissions (testcase.templates, requirement.templates, defect.templates) has been added to control who has access to using the templates. These permissions have been granted to all your roles which had the permission to edit the matching asset type.

 

Robot Framework output support

When pushing automated results to Testlab, we now have native support for Robot Framework’s custom output file. By supporting the native format, the results include detailed information about the keywords from the output, which are pushed to Testlab as the executed steps.

The support for Robot Framework’s output has also been added to the Jenkins CI plugin. With the plugin, it is easy to publish Robot Framework’s test results to your Testlab project in a detailed format.

 

Copying and pasting steps

The editor for test case steps now includes a clipboard. You can select steps (select multiple steps by holding your shift and ctrl/cmd keys appropriately), copy them to the clipboard and paste them as you wish. The clipboard also persists between test cases so you can also copy and paste steps from a test case to another.

 

 

Filtering with operators in grids

The text columns in Testlab’s grids now feature operator filters which allow you to filter data in the grid in a more specific manner. You can choose the operator the column is filtered with, such as “starts with”, “ends with”, … , and of course the familiar default “contains”.

With a large amount of data in your project, this makes it easier to filter in and find the relevant data.

 

 

Mark milestones, versions and environments as inactive

When managing milestones, versions, and environments for your project, you now can set these assets as active or inactive. For example, if a version is set as inactive, it is filtered out from relevant controls in the user interface. If your project has numerous versions, environments or milestones, keeping only the relevant ones active makes the use easier as the user interface is not littered with the non-relevant ones.

For versions and environments, the active flag is set in the Project management view. For milestones, the completed flag is respected: completed milestones are interpreted as inactive.

 

Usability related enhancements
  • Editing asset statuses in table views: You can now edit statuses of assets in table views – also in batch editing mode.
  • New custom field type – Link: A new type of custom field has been added which holds a single web link (such as http, https or mailto -link).
  • Support deletion of multiple selected assets: The context menu for requirements and test cases now includes “Delete selected” function to delete all assets chosen from the tree.
  • Delete key shortcut: The Delete key is now a keyboard shortcut for context menu’s Delete-function.
  • Execution history respects revision selection: The execution history tab for a test case previously showed a combined execution history for all revisions of the chosen test cases. This has been changed in a way that the tab respects the revision selection in a similar manner to the other tabs in the Test case design view. When you choose a revision, the list of results in the execution history tab is only for the chosen revision of the test case.
  • Custom fields hold more data: Earlier, custom fields were limited to a maximum of 255 characters. This has been extended and custom fields can now hold a maximum of 4096 characters.
  • Test cases already run in test runs can be run again: If the user holds the permission for discarding the result of an already run test case, you can now choose and execute test cases that already have a result (pass or fail) directly from the list of test cases in the Test execution view. Earlier, you needed to discard the results one by one for the tests you wished to run again.
  • Enhancements for presenting diagrams: The presentation view for relation and traceability diagrams has been improved – you can now zoom and pan the view more easily by dragging and by double clicking.
  • Copy link to clipboard: The popup dialog with a link to open up an asset from your Testlab has been added with a “Copy to clipboard” button. Clicking this button will copy the link directly to your clipboard.

 

Reporting enhancements
  • “Filter field & group values” option added for grouping reports: Requirement, test case, and issue grouping reports have been added with an option to apply the filter terms of the report to the values which are fetched via “Field to report” and “Group by”. For example, if you filter in a requirement grouping report for importance values “High” and “Critical”, choose to group the report by “Importance” and check the “Filter field & group values” option, the report rendered will not include reported groups for any other importance values than “High” and “Critical”.

 

Enhancements for importing and exporting
  • Export verified requirements for test cases: Test case export now includes a “verifies” field which includes the identifiers of requirements the test cases are set to verify.
  • Input file validation: The input file is now validated to hold the same number of columns in each read row as the header row of the file. When reading the file, if rows are encountered with an incorrect number of columns, an error is printed out, making it easier to track down any missing separator characters or such.

 

Thanking you for all your feedback,

Meliora team


Alice

Various disorientating neurological conditions that affect perception are known to exist. One of them is broadly categorized as Alice in Wonderland Syndrome, in which people experience size distortion of perceived objects. Lilliput Sight – or micropsia – is a condition in which objects are perceived to be smaller than they actually are in the real world.

The condition is surprisingly common: episodes of micropsia or macropsia (seeing objects larger than they really are) occur in 9% of adolescents. The author of Alice’s Adventures in Wonderland – Lewis Carroll – is speculated to have drawn inspiration for the book from his own experiences with micropsia. Carroll was a well-known migraine sufferer, which is one possible cause of these visual manifestations.

(Source: Wikipedia, The Atlantic, Image from Alice’s Adventures in Wonderland (1972 Josef Shaftel Productions))

 



Tags for this post: announce features integration jenkins plugin product release reporting 


30.1.2017

Testlab – Raining Animal released

Meliora is proud to announce a new version of Meliora Testlab – Raining Animal. This version brings in the concept of “indexes”, which enables you to more easily collaborate with others and copy assets between your projects.

Please read on for a more detailed description of the new features.

 

Custom fields for steps
Custom columns for steps

Execution steps of test cases can now be configured with custom columns. This allows you to customize the way you enter your test cases in your project.

Custom columns can be renamed, ordered in the order you want them to appear in your test case and typed with different kinds of data types.

 

Indexes
Collaborating with indexes

A new concept – Indexes – has been added which enables you to pick assets from different projects on your index and collaborate with them.

An index is basically a flat list of assets, such as requirements or test cases, from your projects. You can create as many indexes as you like and you can share them between users in your Testlab. All users who have access to your index can comment on it and edit it – this makes it easy to collaborate around a set of assets in your Testlab.

 

Copying assets between your projects

Each asset on your index is shown with the project it belongs to. When you select assets from your index, you have an option to paste the selected assets to your current Testlab project. This enables you to easily copy content from a project to another. 

 

SAML 2.0 Single Sign-On support

The authentication pipeline of Testlab now supports SAML 2.0 Single Sign-On (WebSSO profile). This makes it possible to use SAML 2.0 based user identity federation services, such as Microsoft’s ADFS, for user authentication.

The existing CAS based SSO is still supported but providing SAML 2.0 based federation offers more possibilities for integrating Testlab with your identity service of choice. You can read more about setting up the SSO from the documentation provided.

Better exports with XLS support

The data can now be exported directly to Excel in XLS format. CSV export is still available, but exporting data to Excel is now more straightforward.

Also, when exporting data from the table view, only the rows selected in the batch edit mode are exported. This makes it easier for you to hand pick the data when exporting.

In addition to the above and fixes under the hood,
  • the actions and statuses in workflows can now be rearranged by dragging and dropping,
  • stopping the testing session is made more straightforward by removing the buttons for aborting and finishing the run and
  • a new permission “testrun.setstatus” has been added to control who can change the status of a test run (for example mark the run as finished).
 

Meliora team


Throughout history, a rare meteorological phenomenon in which animals fall from the sky has been reported. There are reports of fish, frogs, toads, spiders, jellyfish and even worms coming raining down from the skies.

Curiously, the saying “raining cats and dogs” is not necessarily related to this phenomenon and is of unknown etymology. There are some other quite bizarre expressions for heavy rain, such as “chair legs” (Greek) and “husbands” (Colombian).

 

(Source: Wikipedia, Photo – public domain)



Tags for this post: announce features integration product release usage 


23.1.2016

New feature release: Oakville Blob

We are proud to announce a new major release of Meliora Testlab: Oakville Blob. A major feature included is a REST-based API for your integration needs. In addition, this release includes numerous enhancements and fixes. Read on.

 

Integration API

 

The Meliora Testlab Integration API offers REST/HTTP-based endpoints with data encapsulated in JSON format. The first iteration of the API offers endpoints for

  • fetching primary testing assets such as requirements, test cases and issues,
  • fetching file attachments and
  • pushing externally run testing results as test runs. 

We publish the API endpoints with Swagger, a live documentation tool, which enables you to make actual calls against the data of your Testlab installation with your browser. You can access this documentation today by selecting “Rest API …” from the Help menu in your Testlab. You can also read more about the API on the corresponding documentation page.
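To give a feel for what pushing externally run results from a script might look like, here is a rough sketch with Python’s requests library. The endpoint path, credentials and payload fields are assumptions for illustration only – the authoritative reference is the Swagger documentation in your own Testlab installation.

```python
# Rough sketch of pushing externally run test results to a Testlab project over
# a JSON/HTTP API. The URL, credentials and payload structure are placeholders;
# see the Swagger ("Rest API ...") documentation of your Testlab for the
# actual endpoint and fields.
import requests

BASE = "https://yourcompany.melioratestlab.com/api"   # placeholder
AUTH = ("api-user", "api-password")                   # placeholder

payload = {
    "project": "DEMO",                                # placeholder project key
    "testRunTitle": "Nightly regression",
    "results": [
        {"testCase": "Login with valid credentials", "result": "PASSED"},
        {"testCase": "Login with invalid password", "result": "FAILED"},
    ],
}

response = requests.post(f"{BASE}/testresults", auth=AUTH, json=payload)
response.raise_for_status()
print("Results pushed, HTTP", response.status_code)
```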

Tagging users in comments

 
 
Tagging team members to comments

When commenting on assets in your project, such as issues or test cases, you can now tag users from your project in your comments. Just include the ID, e-mail address or name of the team member prefixed with the @ character and this user automatically gets a notification about the comment. The notice scheme rules have also been updated to make it possible to send e-mail notifications to users who are tagged in the comments of assets. A new out-of-the-box notice scheme, “Commenting scheme”, has also been added for this.

See the attached video for an example of how this feature works.

 

Execution history of a test case

Sometimes it is beneficial to go through the results of earlier executions of a test case. To make this as easy as possible, the Test design view now has a new “Execution history” tab.

This new view lists all the times the chosen test case has been executed, with details similar to the listing of a test run’s items in the Test execution view. Each result can be expanded to show the detailed results of each step.

 

Notification enhancements

When significant things happen in your Testlab projects, notifications are generated for your team members and are always accessible from the notifications panel at the top. Oakville Blob provides numerous enhancements to these notifications, such as

  • the notifications panel has been moved to the top of the viewport and is now accessible from all Testlab views,
  • the notifications panel can now be kept open, meaning that the panel won’t automatically close when the mouse cursor moves away and
  • detailed documentation for all notification types has been added to the help manual.

In addition, the following new notification events have been added:

  • if an issue, a requirement or a test case assigned to you is commented on, you will be notified,
  • if a test case or one of its steps is commented on during testing, the creator of the test run will be notified,
  • if a step of a test assigned to you receives a comment during testing, you will be notified.

 

Targeted keyword search

The quick search functionality has been enhanced with a possibility to target the keyword to specific information in your project’s assets. By adding a fitting prefix: to your keyword, you can, for example, make the search engine search only the names of test cases. For example,

  • searching with “name:something” searches “something” from the names of all assets (for example, test cases, issues, …) or,
  • searching with “requirement.name:something” searches “something” from the names of requirements.

For a complete listing of possible targeting prefixes, see Testlab’s help manual.

 

Usability enhancements

Several usability related changes have been implemented.

  • Easy access to test case step comments: The Test execution view now has a new “Expand commented” button. Clicking this button expands all of the test run’s test cases which have step-related comments.
  • Rich text editing: The rich text editors of Testlab have been upgraded to a new major version of the editor with a simpler and clearer UI.
  • Keyboard access:
    • Commenting: keyboard shortcuts have been added for saving, canceling and adding a new comment (where applicable).
    • Step editing: Step editor used to edit the steps of a test case has been added with proper keyboard shortcuts for navigating and inserting new steps.
  • Test coverage view: 
    • Test case coverage has been added with a new column: “Covered requirements”. This column sums up and provides access to all covered requirements of listed test cases.
    • The test run selector has been changed to an auto-complete field for easier use with a lot of runs.

 

Reporting enhancements

Several reporting engine related changes have been implemented.

  • Hidden fields in report configuration: When configuring reports, fields configured as hidden are not anymore shown in the selectors listing project’s fields.
  • Sorting of saved reports: Saved reports are now sorted in a natural “related asset” – “title of the report” order.
  • Better color scheme: The colors rendered for some fields of assets are changed to make them more distinct from each other.
  • Test run’s details: When test runs are shown on reports, they are now always shown with the test run’s title, milestone, version and environment included.
  • Wrapped graph titles: When graphs on reports are rendered with very long titles, the titles are now wrapped on multiple lines.
  • Results of run tests report: The report now sums up the time taken to execute the tests.
  • Test case listing report: Created by field has been added.

 

Other changes

With the enhancements listed above, this feature release contains numerous smaller enhancements.

  • New custom field type – Unique ID: A new custom field type has been added which can be used to show a unique, non-editable numerical identifier for the asset.
  • Editing of test case step’s comments: The step related comments entered during the testing can now be edited from the Test execution view. This access is granted to users with “testrun.run” permission.
  • Confluence plugin: Listed assets can now be filtered with tags, and the test case listing macro has an option to filter out empty test categories.
  • Test run titles made unique: Testlab now prevents you from creating a test run with a title which is already present if the milestone, version and environment for these runs are the same. Also, when test runs are presented in the UI, they are now always shown with this information (milestone, version, …) included.
  • “Assigned to” tree filter: The former “Assigned to me” checkbox-type filter in the trees has been changed to a select list which allows you to also filter in assets assigned to other users.
  • File attachment management during testing: Controls have been added to add and remove attachments of the test case during testing.
  • Dynamic change project -menu: The Change project -selection in Testlab-menu is now dynamic – if a new project is added for you, the project will be visible in this menu right away.
  • Permission aware events: When the (upper right corner, floating) events are shown to users, the events are filtered against the set of the user’s permissions. Users are now only shown events they should be interested in.
  • Number of filtered test runs: The list of test runs in Test execution view now shows the number of runs filtered in.
  • UI framework: The underlying UI framework has been upgraded to a new major version with many rendering fixes for different browsers.

 

Thanking you for all your feedback,

Meliora team


Oakville

“On August 7, 1994 during a rainstorm, blobs of a translucent gelatinous substance, half the size of grains of rice each, fell at the farm home of Sunny Barclift. Shortly afterwards, Barclift’s mother, Dotty Hearn, had to go to hospital suffering from dizziness and nausea, and Barclift and a friend also suffered minor bouts of fatigue and nausea after handling the blobs. … Several attempts were made to identify the blobs, with Barclift initially asking her mother’s doctor to run tests on the substance at the hospital. Little obliged, and reported that it contained human white blood cells. Barclift also managed to persuade Mike Osweiler, of the Washington State Department of Ecology’s hazardous materials spill response unit, to examine the substance. Upon further examination by Osweiler’s staff, it was reported that the blobs contained cells with no nuclei, which Osweiler noted is something human white cells do have.”

(Source: WikipediaReddit, Map from Google Maps)

 



Tags for this post: announce features integration jenkins plugin product release reporting 


24.10.2015

Testlab Girus released

Meliora Testlab has evolved to a new major version – Girus. The release includes enhancements to reporting and multiple new integration possibilities.

Please read more about the new features and changes below. 

 

Automatic publishing of reports

Testlab provides an extensive set of reporting templates which you can pre-configure and save to your Testlab project. It used to be that to render the reports, you would go to Testlab’s “Reports” section and choose the preferred report for rendering.

Girus enhances reporting so that all saved reports can be scheduled for automatic publishing. Each report can be set with a

  • recurrence period (daily, weekly or monthly) at which the report is refreshed,
  • report type to render (PDF, XLS, ODF),
  • optionally, a list of e-mail addresses where to automatically send the report and
  • an option to web publish the report to a pre-determined web-address.

When configured so, the report is automatically rendered and sent to the interested parties and, optionally, made available at a relatively short URL.

 

Slack integration


Slack is a modern team communication and messaging app which makes it easy to set-up channels for messaging, share files and collaborate with your team in many ways.

Meliora Testlab now supports integrating with Slack’s webhook interfaces to post notifications about your Testlab assets to your Slack. The feature is implemented as part of Testlab’s notice schemes. This way, you can set up rules for the events which you want published to your targeted Slack #channel.

You can read more about the Slack integration via this link.

 

Webhooks

Webhooks are HTTP-protocol based callbacks which enable you to react to changes in your Testlab project. Webhooks can be used as a one-way API to integrate your own system with Testlab.

Similarly to the Slack integration introduced above, the Webhooks are implemented as a channel in Testlab’s notice schemes. This way you can set up rules to pick the relevant events you wish to have the notices of.

An example: let’s say you have an in-house ticketing system and you need to mark a ticket resolved each time an issue in Testlab is closed. With webhooks, you can implement a simple HTTP-based listener on your premises and set up a notification rule in Testlab to push an event to your listener every time an issue is closed. With a little programming, you can then mark the ticket in your own system as resolved. A sketch of such a listener is shown below.
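As a rough sketch only – the payload fields handled here are hypothetical, so check Testlab’s webhook documentation for the actual event format – a minimal listener could look like this:

```python
# Minimal HTTP listener sketch for receiving webhook notifications.
# The payload structure handled here is hypothetical; consult Testlab's
# webhook documentation for the actual format of the events.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")

        # Hypothetical check: react when an issue is closed.
        if event.get("type") == "issue" and event.get("status") == "CLOSED":
            print("Issue closed:", event.get("id"), "- mark ticket resolved here")

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```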

You can read more about the Webhooks via this link.

 

Maven plugin

Apache Maven is a common build automation tool used primarily for Java projects. In addition, Maven can be used to build projects written in C#, Ruby, Scala and other languages.

Meliora Testlab’s Maven plugin is an extension to Maven which makes it possible to publish xUnit-compatible testing results to your Testlab project. If you are familiar with Testlab’s Jenkins CI plugin, the Maven plugin provides similar features easily accessible from your Maven project’s POM.

You can read more about the Maven plugin via this link.

 

Automatic creation of test cases for automated tests

When JUnit/xUnit-compatible results of automated tests are pushed to Testlab (for example from Jenkins or the Maven plugin), the results are mapped and stored against the test cases of the project. For example, if you push a result for a test com.mycompany.junit.SomeTest, you should have a test case in your project with this identifier (or a prefix of it) as a value in the custom field set up as the test case mapping field. The mapping logic is best explained in the Jenkins plugin documentation.

To make pushing results as easy as possible, the plugins now have an option to automatically import test case stubs from the testing results themselves. This way, if a test case cannot be found from the project for some mapping identifier, the test case is automatically created in a configured test category.

 

Jenkins plugin: TAP support

Test Anything Protocol, or TAP, is a simple interface between testing modules in a test harness. Testing results are represented as simple text files (.tap) describing the steps taken. The Jenkins plugin of Testlab has been enhanced so that the results from .tap files produced by your job can be published to Testlab.

Each test step in your TAP result is mapped to a test case in Testlab via the mapping identifier, similarly to the xUnit-compatible way of working supported previously.

The support for TAP is best explained in the documentation of the Testlab’s Jenkins plugin.

 

Support for multiple JIRA projects in Webhook-based integration

The Webhooks-based JIRA integration now supports integrating multiple JIRA projects with a single Testlab project. When multiple projects are specified and an issue is added, the user gets to choose which JIRA project the issue should be added to.

Due to this change, it is now also possible to specify which JIRA project the Testlab project is integrated with. Previously, the project prefix and key had to match between the Testlab and JIRA projects.

 

Miscellaneous enhancements and changes
  • The Results of run tests report has been added with new options to help you report out your testing results:
    • Test execution date added as a supported field,
    • a “Group totals by” option added which sums up and reports totals for each group of the specified field,
    • sum total columns and rows added to the report and
    • the test category selection supports selecting multiple categories and test cases for filtering.
  • Updates on multiple underlying 3rd party libraries.
  • Bugs squished.

 

Sincerely yours,

Meliora team


Pithovirus

Pithovirus sibericum was discovered buried 30 m (98 ft) below the surface of late Pleistocene sediment in a 30 000-year-old sample of Siberian permafrost. It measures approximately 1.5 micrometers in length, making it the largest virus yet found. Although the virus is harmless to humans, its viability after being frozen for millennia has raised concerns that global climate change and tundra drilling operations could lead to previously undiscovered and potentially pathogenic viruses being unearthed.

A girus is a very large virus (gigantic virus). Many different giruses have been discovered since, and many are so large that they even have viruses of their own. The discovery of giruses has triggered some debate concerning their evolutionary origins, going so far as to suggest that giruses provide evidence of a fourth domain of life. Some even hypothesize that the cell nucleus of life as we know it originally evolved from a large DNA virus.

(Source: Wikipedia, Pithovirus sibericum photo by Pavel Hrdlička, Wikipedia)

 



Tags for this post: announce features integration jenkins plugin product release reporting 


24.11.2014

New JIRA integration and Pivotal Tracker support

The latest release of Testlab includes a bunch of enhancements for integrations:

  • A new integration method (based on JIRA’s WebHooks) has been added. This brings support for JIRA OnDemand.
  • The new JIRA integration method supports pushing JIRA’s issues as requirements / user stories to Testlab. This makes it possible to push stories from your JIRA Agile to Testlab for verification.
  • Brand new support has been added for the Pivotal Tracker agile project management tool. You can now easily manage and track your project tasks in Pivotal Tracker and at the same time test, verify and report your implementation in Testlab.

Read further for more details on these new exciting features.

 

Support for JIRA OnDemand

To this day, Testlab has supported JIRA integration with a best-in-class two-way strategy that allows issues to be freely edited in both systems. For this to work, JIRA has had to be installed with Testlab’s JIRA plugin, which provides the needed synchronization. Atlassian’s cloud-based offering, JIRA OnDemand, does not allow installing custom plugins, which means the two-way synchronized integration has been possible only with JIRA instances installed on the customer’s own servers.

The new JIRA integration strategy can be taken into use with just JIRA’s WebHooks. This requires no plugins and brings support for JIRA OnDemand instances. Keep in mind that the new, simpler integration strategy is also available for JIRA instances installed on your own servers.

 

One-way integration for JIRA’s issues and stories

With the new integration method, issues and stories are created in JIRA and can be pushed to Testlab. This works for issues / bugs as well as for requirements. For example, you can push stories from your JIRA Agile to Testlab as requirements, which makes the specification you design in JIRA part of the specification you aim to test in your Testlab project.

You can read more about the different integration strategies for Atlassian JIRA here.

 

Configuring the integrations

Screenshot of the Integrations tab in Testlab’s project management view

All new integrations, including the JIRA integration for issues and requirements and the Pivotal Tracker integration, can be taken into use by yourself. The project management view in Testlab has a new Integrations tab which allows you to configure the Testlab side of these integrations. Integrations may also require some preliminary setup to work; instructions for this are provided in the plugin-specific documentation.

 

 
Pivotal Tracker integration

Pivotal Tracker is an agile tool for software team project management and collaboration. The tool allows you to plan and track your project using stories.

Your Testlab project can be integrated with Pivotal Tracker to export assets from Testlab to Pivotal Tracker as stories and to push stories from Pivotal Tracker to your Testlab project as requirements.

Screenshot of the Pivotal Tracker integration

 
Importing Testlab assets to Pivotal Tracker

The integration works in a two-way manner. First, you have an option to import assets from your Testlab project to your Pivotal project as stories to be tracked. You can pull requirements, test runs, test sets, milestones, issues and even test cases from your Testlab project. When you do so, a new story is created in your Pivotal Tracker project. It is as easy as dragging an asset to your project’s Icebox in Pivotal.

 

 
Pushing stories from Pivotal Tracker to Testlab

You also have an option to push stories from your Pivotal Tracker project to your Testlab project. This way, you can easily

  • push your stories from Pivotal Tracker to your Testlab project’s specification as user stories to be verified and
  • push bugs from your Tracker to your Testlab project as issues.

Using Pivotal Tracker with Testlab is an excellent choice for project management. This way you can plan and track your project activities in Pivotal Tracker and at the same time test, verify and report your implementation in Testlab.

You can read more about setting up the Pivotal Tracker integration here.

 

To recap, the new features

  • bring you support for Pivotal Tracker, one of the best agile project management tools in the industry,
  • add support for Atlassian JIRA integration where issues from JIRA are pushed to Testlab as issues and/or requirements and
  • add support for JIRA OnDemand.

We hope these new features make the use of Testlab more productive for all our existing and future clients.

 

 

 



Tags for this post: announce features integration jira pivotal tracker release screenshots 


25.9.2014

Integrating with Apache JMeter

Apache JMeter is a popular tool for load and functional testing and for measuring performance. In this article we will give you hands-on examples on how to integrate your JMeter tests to Meliora Testlab.

Apache JMeter in brief

Apache JMeter is a tool with which you can design load testing and functional testing scripts. Originally, JMeter was designed for testing web applications but it has since expanded to be used for different kinds of load and functional testing. The scripts can be executed to collect performance statistics and testing results.

JMeter offers a desktop application with which the scripts can be designed and run. JMeter can also be used from different kinds of build environments (such as Maven, Gradle, Ant, …), from which running the tests can be automated. JMeter’s web site has a good set of documentation on how to use it.

 

Typical usage scenario for JMeter

A common scenario for using JMeter is some kind of load testing or smoke testing setup where JMeter is scripted to make a load of HTTP requests to a web application. Response times, request durations and possible errors are logged and analyzed later for defects. Interpreting performance reports and analyzing metrics is usually done by people, as automatically determining whether some metric should be considered a failure is often hard.

Keep in mind that JMeter can be used against various kinds of backends other than HTTP servers, but we won’t get into that in this article.

 

Automating load testing with assertions

The difficulty in automating load testing scenarios comes from the fact that performance metrics are often ambiguous. For automation, each test run by JMeter must produce a distinct result indicating whether the test passes or not. To tackle this problem, assertions can be added to the JMeter script.

Assertions are basically the criteria set to decide whether a sample recorded by JMeter indicates a failure or not. For example, an assertion might be set up to check that a request to your web application is executed in under some specified duration (i.e. your application is “fast enough”). Or, another assertion might check that the response code from your application is always correct (for example, 200 OK). JMeter supports a number of different kinds of assertions for you to design your script with.

When your load testing script is set up with proper assertions, it suits automation well: it can be run automatically, periodically or in any way you prefer to produce passing and failing test results, which can then be pushed to your test management tool for analysis. There is a good set of documentation available online on how to use assertions in JMeter.

 

Integration to Meliora Testlab

Meliora Testlab has a Jenkins CI plugin which enables you to push test results to Testlab and open up issues based on the results of your automated tests. When JMeter-scripted tests are run in a Jenkins job, you can push the results of your load testing criteria to your Testlab project!

The technical scenario of this is illustrated in the picture below.

Diagram of the JMeter, Jenkins and Testlab integration scenario

You need your JMeter script (plan). This is designed with the JMeter tool and should include the needed assertions (in the picture: Duration and ResponseCode) to determine whether the tests should pass or not. A Jenkins job should be set up to run your tests and translate the JMeter-produced log file to xUnit-compatible testing results, which are then pushed to your Testlab project as test case results. Each JMeter test (in this case Front page.Duration and Front page.ResponseCode) is mapped to a test case in your Testlab project, which gets results posted when the Jenkins job is executed.

 

Example setup

In this chapter, we give you a hands-on example of how to set up a Jenkins job to push testing results to your Testlab project. To make things easy, download the testlab_jmeter_example.zip file which includes all the files and assets mentioned below.

 
Creating a build

You need some kind of build (Maven, Gradle, Ant, …) to execute your JMeter tests with. In this example we are going to use Gradle as it offers an easy-to-use JMeter plugin for running the tests. There are many options for running JMeter scripts, but using a build plugin is often the easiest way.

1. Download and install Gradle if needed

Go to www.gradle.org and download the latest Gradle binary. Install it to your system path as instructed so that you can run gradle commands.

2. Create build.gradle file

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'jmeter'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        // registers the Gradle JMeter plugin used to run the JMeter plans
        classpath "com.github.kulya:jmeter-gradle-plugin:1.3.1-2.6"
    }
}

As we are going to run all the tests with the plugin’s default settings, this is all we need. The build file just registers the “jmeter” plugin from the repository provided.

3. Create src directory and needed artifacts

For the JMeter plugin to work, create a src/test/jmeter directory and drop in a jmeter.properties file, which is needed for running the actual JMeter tool. This jmeter.properties file is easy to obtain by downloading JMeter and copying the default jmeter.properties from the tool to this directory.

 
Creating a JMeter plan

When your Gradle build is set up as instructed you can run the JMeter tool easily by changing to your build directory and running the command

# gradle jmeterEditor

This downloads all the needed artifacts and launches the graphical user interface for designing JMeter plans.

To make things easy, you can use the MyPlan.jmx provided in the zip package. The script is really simple: it has a single HTTP Request Sampler (named Front page) set up to make a request to the http://localhost:9090 address with two assertions:

  • A Duration assertion to check that the time to make the request does not exceed 5 milliseconds. For the sake of this example, this assertion should fail as the request probably takes longer than this.
  • A ResponseCode assertion to check that the response code from the server is 200 (OK). This should pass as long as there is a web server running on port 9090 (we’ll come to this later).

It is recommended to give your Samplers and Assertions sensible names, as you refer directly to these names later when mapping the test results to your Testlab test cases.

The created plan(s) should be saved to the src/test/jmeter directory we created earlier, as Gradle’s JMeter plugin automatically executes all plans from this directory.
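To recap, the resulting project layout should look roughly like this (assuming the plan file is named MyPlan.jmx as in the provided example):

jmeterproject/
  build.gradle
  src/test/jmeter/
    jmeter.properties
    MyPlan.jmx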

 

Setting up a Jenkins job

1. Install Jenkins

If you don’t happen to have a Jenkins CI server available, setting one up locally couldn’t be easier. Download the latest release to a directory and run it with 

# java -jar jenkins.war --httpPort=9090

Wait a bit, and Jenkins should be accessible from http://localhost:9090 with your web browser. 

The JMeter plan we went through earlier made a request to http://localhost:9090. When you run your Jenkins with the command above, JMeter will fetch the front page of your Jenkins CI server when the tests are run. If you prefer to use some other Jenkins installation, you might want to edit the provided MyPlan.jmx to point to this other address.

2. Install needed Jenkins plugins

Go to Manage Jenkins > Manage Plugins > Available and install

  • Gradle Plugin
  • Meliora Testlab Plugin
  • Performance Plugin
  • xUnit Plugin

2.1 Configure plugins

Go to Manage Jenkins > Configure System > Gradle and add a new Gradle installation for your locally installed Gradle. 

3. Create a job

Add a new “free-style software project” job to your Jenkins and configure it as follows:

3.1 Add build step: Execute shell

Add a new “Execute shell” type build step to copy the contents of the Gradle project you set up earlier to the job’s workspace. This is needed as the project is not in a version control repository. Set up the step, for example, as follows:

Screenshot of the Execute shell build step configuration

… or something else that will make your Gradle project available in the Jenkins job’s workspace.

Note: The files should be copied so that the root of the workspace contains the build.gradle file for launching the build.

3.2 Add build step: Invoke Gradle script

Select your locally installed Gradle Version and enter “clean jmeterRun” in the Tasks field. This will run the “gradle clean jmeterRun” command for your Gradle project, which will clean up the workspace and execute the JMeter plan.

Screenshot of the Invoke Gradle script build step configuration
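If you wish to verify the build locally before running it from Jenkins, you can run the same tasks from your project directory:

# gradle clean jmeterRun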

3.3 Add post-build action: Publish Performance test result report (optional)

Jenkins CI’s Performance plugin provides you with trend reports on how your JMeter tests have been run. This plugin is not required for the Testlab integration, but it provides handy performance metrics in your Jenkins job view. To set up the action, click “Add a new report”, select JMeter and set the Report files to “**/jmeter-report/*.xml”:

Screenshot of the Publish Performance test result report configuration

Other settings can be left at their defaults, or you can configure them to your liking.

3.4 Add post-build action: Publish xUnit test result report

Testlab’s Jenkins plugin needs the test results to be available in the so-called xUnit format. In addition, this step will generate test result trend graphs in your Jenkins job view. Add a post-build action to publish the test results resolved from the JMeter assertions by selecting a “Custom Tool”, as follows:

Screenshot of the Publish xUnit test result report configuration

Note: The jmeter_to_xunit.xsl custom stylesheet is mandatory. It translates JMeter’s log files to the xUnit format. The .xsl file is located in the jmeterproject directory in the zip file and will be available in the Jenkins workspace root if the project is copied there as set up earlier.

3.5 Add post-build action: Publish test results to Testlab

The above plugins will set up the workspace, execute the JMeter tests, publish the needed reports to the Jenkins job view and translate the JMeter log file(s) to the xUnit format. What is left is to push the test results to Testlab. For this, add a “Publish test results to Testlab” post-build action and configure it as follows:

Screenshot of the Publish test results to Testlab post-build action configuration

For the sake of simplicity, we will be using the “Demo” project of your Testlab. Make sure to configure the “Company ID” and “Testlab API key” fields to match your Testlab environment. The Test case mapping field is set to “Automated”, which is configured by default as a custom field in the “Demo” project.

If you haven’t yet configured an API key for your Testlab, you should log on to your Testlab as a company administrator and configure one from Testlab > Manage company … > API keys. See Testlab’s help manual for more details.

Note: Your Testlab edition must be one with access to the API functions. If you cannot see the API keys tab in your Manage company view and wish to proceed, please contact us and we will get it sorted out.

 

Mapping JMeter tests to test cases in Testlab

For the Jenkins plugin to be able to record the test results to your Testlab project, your project must contain matching test cases. As explained in the plugin documentation, your project in Testlab must have a custom field set up which is used to map the incoming test results. In the “Demo” project this field is already set up (called “Automated”).

Every assertion in JMeter’s test plan will record a distinct test result when run. In the simple plan provided, we have a single HTTP Request Sampler named “Front page”. This Sampler is tied to two assertions (named “Duration” and “ResponseCode”) which check if the request was done properly. When translated to the xUnit format, these test results will get identified as <Sampler name>.<Assertion name>, for example:

  • Front page/Duration will be identified as: Front page.Duration and
  • Front page/ResponseCode will be identified as: Front page.ResponseCode
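Assuming the translation produces standard JUnit/xUnit-style XML, the translated result file could look roughly like the following sketch (the exact attributes emitted by jmeter_to_xunit.xsl may differ):

<testsuite name="jmeter" tests="2" failures="1">
  <testcase classname="Front page" name="Duration">
    <failure message="Duration exceeded the configured limit"/>
  </testcase>
  <testcase classname="Front page" name="ResponseCode"/>
</testsuite>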

To map these test results to test cases in the “Demo” project,

1. Add test cases for JMeter assertions

Log on to Testlab’s “Demo” project, go to Test case design and

  • add a new Test category called “Load tests”, and to this category,
  • add a new test case “Front page speed”, set the Automated field to “Front page.Duration” and Approve the test case as ready and
  • add a new test case “Front page response code”, set the Automated field to “Front page.ResponseCode” and Approve the test case as ready.

Now we have two test cases for which the “Test case mapping field” we set up earlier (“Automated”) contains the JMeter assertions’ identifiers.

 

Running JMeter tests

What is left to do is to run the actual tests. Go to your Jenkins job view and click “Build now”. A new build should be scheduled, executed and completed – probably as FAILED. This is because the JMeter plan has the 5 millisecond assertion which should fail the job as expected.

 

Viewing the results in Testlab

Log on to Testlab’s “Demo” project and select the Test execution view. If everything went correctly, you should now have a new test run titled “jmeter run” in your project:

Screenshot of the “jmeter run” test run in Testlab’s Test execution view

As expected, the Front page speed test case reports as failed and Front page response code test case reports as passed.

As we configured the publisher to open up issues for failed tests, we should also have an issue present. Change to the Issues view and verify that an issue has been opened up:

Screenshot of the opened issue in Testlab’s Issues view

 

Viewing the results in Jenkins CI

The matching results are present in your Jenkins job view. Open up the job view from your Jenkins:

Screenshot of the Jenkins job view

The view holds the trend graphs from the plugins we set up earlier: “Responding time” and “Percentage of errors” from the Performance plugin and “Test result Trend” from the xUnit plugin.

To see the results of the assertions, click “Latest Test Result”:

Screenshot of the latest test results in Jenkins

The results show that the Front page.Duration test failed and the Front page.ResponseCode test passed.

 

Links referred

 http://jmeter.apache.org/  Apache JMeter home page
 http://jmeter.apache.org/usermanual/index.html  Apache JMeter user manual
 http://jmeter.apache.org/usermanual/component_reference.html#assertions  Apache JMeter assertion reference
 http://blazemeter.com/blog/how-use-jmeter-assertions-3-easy-steps  BlazeMeter: How to Use JMeter Assertions in 3 Easy Steps
 https://www.melioratestlab.com/wp-content/uploads/2014/09/testlab_jmeter_example.zip  Needed assets for running the example
 https://github.com/kulya/jmeter-gradle-plugin  Gradle JMeter plugin
 http://www.gradle.org/  Gradle home page
 http://mirrors.jenkins-ci.org/war/latest/jenkins.war  Latest Jenkins CI release
 https://www.melioratestlab.com/wp-content/uploads/2014/09/jmeter_to_xunit.xsl_.txt  XSL-file to translate JMeter JTL file to xUnit format  
 https://wiki.jenkins-ci.org/display/JENKINS/Meliora+Testlab+Plugin  Meliora Testlab Jenkins Plugin documentation 

 

 

 



Tags for this post: example integration jenkins load testing usage 


 
 