13.7.2017

Testlab – Bookhouse Boy release

To celebrate the warmth of the summer months, we are proud to release a new feature version of Testlab: Bookhouse Boy.

 

Custom fields for projects

Just like requirements, test cases, and issues, the project asset can now also be defined with custom fields. You can find the field settings for your project asset from Testlab > Manage company … view.

When you customize the fields for your project asset, these field settings take effect in all your projects. For example, you can use these fields to track high-level project details inside Testlab, such as project deadlines, resourcing-related details or something else.

To report out these details, two new project report templates have been added:

  • “List of projects” which is basically a list of project details from your Testlab instance. The report filters in projects by the set criteria and lists out their details as a listing.
  • “Project grouping” which can be used to group project data by defined fields. This report includes a bar graph for the grouped data and a listing of projects matching the set criteria.

Please note: As these two reports expose details on the project level, the “report.all” permission does not grant access to them, so they might not show up with your existing permission set automatically. Adjust the roles you have in your projects to include “report.projectlistreport” and “report.projectgroupingreport” to gain access to these reports.

 

Comments for test results

Each test case result can now be given a distinct comment. Earlier, only the test case assets themselves and the steps of executed tests could be commented.

When you create test runs, test cases get bound as items in these runs. When you list the test cases of your scheduled runs in the Test execution view, the table now has a new column for an execution comment. Note that you can set the comment whenever you wish: you don’t have to execute the test and give it a result to include a comment, so you can also use this field for information entered before the actual tests are executed.

The comments of executed (or scheduled) tests are also shown in the test case’s execution history view, on related reports and in the test coverage view (for tests with a result set). For the Jenkins integration, any comments sent via the API are set to this field as well.

 

Report page sizing and orientation

Reports can now be rendered in different page sizes and in chosen orientation. The size of the page can be set as A4 or the larger A3 and the report can be laid out in portrait or in landscape orientation. Please experiment with these settings if you have trouble fitting all your data on your reports.

 

Reporting enhancements
  • Test case related reports now support filtering in test cases from test sets and test runs. If you filter in test cases by test runs, keep in mind that the latest execution status of each test case is then reported against the filtered-in runs.
  • The “Results for run tests” report now has an option to filter in the latest executed result for each test case. This way, if the same test case is included multiple times, only the latest executed result is left on the report.
  • A project can now be set with a logo which is included in each report. You can upload the logo of your choice by editing the project details.
  • Grouping reports can now be grouped also with custom field values.

 

Progress indication

The UI now includes automatic notification of server operations that might take some time to complete. Most operations expected to run for some time, such as data imports, also show the progress of the operation in this notification dialog.

 
Support for new file formats

Data can now be exported as modern Excel files (.xlsx). Reports can also be exported as .docx files for Microsoft Word.

 

Other changes
  • Changes to an asset’s tags are now stored and included in the change history of requirements, test cases, and issues.
  • The “number of defects found” column in the test execution view can now be used as a filter in the table.

 

Thanking you for all your feedback,

Meliora team


Bookhouse Boys

With Bookhouse Boy, we wish to celebrate and pay homage to the true visionaries behind Twin Peaks – one of the greatest television drama series ever created. After 25 years, a new season of this great show is currently running, offering new bizarre twists in the stories of its interesting characters, including the small town of Twin Peaks itself. With the excellent writing of Frost and the visionary directing of Lynch, the series is sure to leave its mark on the history of television.

All Testlab releases are code-named with some weird fact, theory, historical event or something else that should give some food for your imagination. As such, a code-name from Twin Peaks is more than fitting.

(Image from Twin Peaks (the original series) / Lynch/Frost Productions)



Tags for this post: announce features jenkins product release reporting screenshots 


29.6.2017

We are hiring!



12.4.2017

Testlab – Lilliput Sight released

Meliora is proud to announce the release of a new major Testlab version: Lilliput Sight. This release includes major new features such as asset templating and, at the same time, bundles various smaller enhancements and changes for easier use. The new features are described below.

 

Asset templating

Creating assets for your projects in a consistent way is often important. This release of Testlab includes support for templates which you can apply when adding new requirements, test cases or issues.

A template is basically a simple asset with some fields set with predefined values. As you are adding a new asset, you have an option for applying templates. When you apply a template, the asset is set with the values from the applied template. You can also apply multiple templates, so designing templates to suit the needs of your testing project is very flexible.

A set of new permissions (testcase.templates, requirement.templates, defect.templates) has been added to control who has access to using the templates. These permissions have been granted to all your roles which had the permission to edit the matching asset type.

 

Robot Framework output support

When pushing automated results to Testlab, we now have native support for Robot Framework’s custom output file. By supporting the native format, the results include detailed information on the keywords from the output, which are pushed to Testlab as the executed steps.

The support for Robot Framework’s output has also been added to the Jenkins CI plugin. With the plugin, it is easy to publish Robot Framework’s test results to your Testlab project in a detailed format.

 

Copying and pasting steps

The editor for test case steps now includes a clipboard. You can select steps (select multiple steps by holding your shift and ctrl/cmd keys appropriately), copy them to the clipboard and paste them as you wish. The clipboard also persists between test cases so you can also copy and paste steps from a test case to another.

 

 

Filtering with operators in grids

The text columns in Testlab’s grids now feature operator filters which allow you to filter in data from the grid in a more specific manner. You can choose the operator the column is filtered with, such as “starts with”, “ends with”, … , and of course the familiar default “contains”.

With a large amount of data in your project, this makes it easier to filter in and find the relevant data.

 

 

Mark milestones, versions and environments as inactive

When managing milestones, versions, and environments for your project, you can now set these assets as active or inactive. For example, if a version is set as inactive, it is filtered out from the relevant controls in the user interface. If your project has numerous versions, environments or milestones, keeping only the relevant ones active makes usage easier, as the user interface is not littered with non-relevant ones.

For versions and environments, the active flag is set in the Project management view. For milestones, the completed flag is respected: completed milestones are interpreted as inactive.

 

Usability related enhancements
  • Editing asset statuses in table views: You can now edit statuses of assets in table views – also in batch editing mode.
  • New custom field type – Link: A new type of custom field has been added which holds a single web link (such as an http, https or mailto link).
  • Support deletion of multiple selected assets: The context menu for requirements and test cases now includes “Delete selected” function to delete all assets chosen from the tree.
  • Delete key shortcut: The Delete key is now a keyboard shortcut for context menu’s Delete-function.
  • Execution history respects revision selection: The execution history tab for a test case previously showed a combined execution history for all revisions of the chosen test cases. This has been changed in a way that the tab respects the revision selection in a similar manner to the other tabs in the Test case design view. When you choose a revision, the list of results in the execution history tab is only for the chosen revision of the test case.
  • Custom fields hold more data: Earlier, custom fields were limited to a maximum of 255 characters. This has been extended and custom fields can now hold a maximum of 4096 characters.
  • Test cases already run in test runs can be run again: If you hold the permission for discarding the result of an already run test case, you can now choose and execute test cases that already have a result (pass or fail) directly from the list of test cases in the Test execution view. Earlier, you needed to discard the results one by one for the tests you wished to run again.
  • Enhancements for presenting diagrams: The presentation view for relation and traceability diagrams has been improved – you can now zoom and pan the view more easily by dragging and by double-clicking.
  • Copy link to clipboard: The popup dialog with a link to open up an asset from your Testlab has been added with a “Copy to clipboard” button. Clicking this button will copy the link directly to your clipboard.

 

Reporting enhancements
  • “Filter field & group values” option added for grouping reports: Requirement, test case, and issue grouping reports have been added with an option to apply the filter terms of the report to the values which are fetched via “Field to report” and “Group by”. For example, if you filter in a requirement grouping report for importance values “High” and “Critical”, choose to group the report by “Importance” and check the “Filter field & group values” option, the report rendered will not include reported groups for any other importance values than “High” and “Critical”.

 

Enhancements for importing and exporting
  • Export verified requirements for test cases: Test case export now includes a “verifies” field which includes the identifiers of requirements the test cases are set to verify.
  • Input file validation: The input file is now validated to hold the same number of columns in each row as the header row of the file. If rows with an incorrect number of columns are encountered while reading the file, an error is printed out, making it easier to track down any missing separator characters and the like.

 

Thanking you for all your feedback,

Meliora team


Alice

Various disorientating neurological conditions that affect perception are known to exist. One of them is broadly categorized as Alice in Wonderland Syndrome, in which people experience size distortion of perceived objects. Lilliput Sight – or micropsia – is a condition in which objects are perceived to be smaller than they actually are in the real world.

The condition is surprisingly common: episodes of micropsia or macropsia (seeing objects larger than they really are) occur in 9% of adolescents. The author of Alice’s Adventures in Wonderland – Lewis Carroll – is speculated to have drawn inspiration for the book from his own experiences of micropsia. Carroll was a well-known migraine sufferer, which is one possible cause of these visual manifestations.

(Sources: Wikipedia, The Atlantic; image from Alice’s Adventures in Wonderland (1972, Josef Shaftel Productions))

 



Tags for this post: announce features integration jenkins plugin product release reporting 


23.2.2017

Unifying manual and automated testing

Automating testing is a well-established practice for improving your testing processes. Manual testing with pre-defined steps is still surprisingly common, and especially during acceptance testing we still often put our trust in the good old tester. Unifying manual and automated testing in a transparent, easily managed and reported way is particularly important for organizations pursuing gains from testing automation.

 

All automated testing is not similar

The gains from testing automation are numerous: automated testing saves time, makes the tests easily repeatable and less error-prone, makes distributed testing possible and improves the coverage of the testing, to name a few. It should be noted, though, that not all automated testing is the same. For example, modern testing harnesses and tools make it possible to automate and execute complex UI-based acceptance tests while, at the same time, developers implement low-level unit tests. From the reporting standpoint, it is essential to be able to combine the testing results from all kinds of tests into a manageable and easily approachable view with the correct level of detail.

 

I don’t know what our automated tests do and what they cover

It is often the case that testers in the organization waste time on manual testing of features that are already covered by a good set of automated tests. This is because test managers don’t always know the details of the (often very technical) automated tests. The automated tests are not trusted, and the results from these tests are hard to combine into the overall status of the testing.

This problem is often complicated by the fact that many test management tools report the results of manual and automated tests separately. In the worst case scenario, the test manager must know how the automated tests work to be able to make a judgment on the coverage of the testing. 

 

Scoping the automated tests in your test plan

Because the nature of automated tests varies, it is important that the test management tool offers an easy way to scope and map the results of your automated tests to your test plan. It is often not preferred to report the status of each and every test case (especially in the case of low-level unit tests) because it makes it harder to get the overall picture of your testing status. It is still important to pay attention to the results of these tests so that failures get reported.

Let’s take an example on how the automated tests are mapped in Meliora Testlab.

In the example above is a simple hierarchy of functions (requirements) which are verified by test cases in the test plan:

  • UI / Login -function is verified by a manual test case “Login test case“,
  • UI / User mgmnt / Basic info and UI / User mgmnt / Credentials -functions are verified by a functional manual test case “Detail test case” and
  • Backend / Order mgmt -functions are verified by automated tests mapped to a test case “Order API test case” in the test plan.

Mapping is done by simply specifying the package identifier of the automated tests to a test case. When testing, the results of tests are always recorded to test cases:

  1. The login view and user management views of the application are tested manually by the testers and the results of these tests get recorded to test cases “Login test case” and “Detail test case“.
  2. The order management is tested automatically with results from automated tests “ourapp.tests.api.order.placeOrderTest” and “ourapp.tests.api.order.deliverOrderTest“. These automated tests are mapped to test case “Order API test case” via automated test package “ourapp.tests.api.order“.

The final result for the test case in step 2 is derived from the results of all automated tests under the package “ourapp.tests.api.order“. If one or more tests in this package fail, the test case will be marked as failed. If all tests pass, the test case is also marked as passed.

As automated tests are mapped via their package hierarchy, it is easy to fine-tune the level of detail at which you scope your automated tests to your test plan. In the above example, if it is deemed necessary to always report detailed results for order delivery related tests, the “ourapp.tests.api.order.deliverOrderTest” automated test can be mapped to a test case of its own in the test plan.
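The derivation rule described above can be sketched in a few lines. This is an illustration of the stated rule (any failure under the mapped package fails the test case; all passes mean a pass), not Testlab's internal implementation; the function and result names are assumptions.

```python
def derive_verdict(mapped_package: str, results: dict) -> str:
    """Derive a test case verdict from automated test results.

    results maps fully qualified automated test names to "PASSED" or
    "FAILED". Only tests under the mapped package count. Returns
    "FAILED" if any matching test failed, "PASSED" if all passed,
    "NO RESULT" if no test matched the package.
    """
    matching = [
        verdict for test_name, verdict in results.items()
        if test_name.startswith(mapped_package + ".")
    ]
    if not matching:
        return "NO RESULT"
    return "FAILED" if "FAILED" in matching else "PASSED"

# The example from the text: one failing test in the package fails the case.
automated_results = {
    "ourapp.tests.api.order.placeOrderTest": "PASSED",
    "ourapp.tests.api.order.deliverOrderTest": "FAILED",
}
print(derive_verdict("ourapp.tests.api.order", automated_results))  # FAILED
```

If both tests had passed, the mapped test case “Order API test case” would be marked passed instead.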

 

Automating existing manual tests

As testing automation has clear benefits to your testing process, it is preferable for the testing process, and the tools used to manage it, to support easy automation of existing manual tests. From the test management tool’s standpoint, the technique used to actually automate the test is not relevant; instead, it is important that the reporting and coverage analysis stay the same and that the results of these automated tests are easily pushed to the tool.

To continue on with the example above, let’s presume that login related manual tests (“Login test case“) are automated by using Selenium:

The test designers record and create the automated UI tests for the login view in a package “ourapp.tests.ui.login“. Now, the manual test case “Login test case” can easily be mapped to these tests with the identifier “ourapp.tests.ui.login“. The test cases themselves, the requirements, and their structure do not need any changes. When the Selenium-based tests are later run, their results determine the result of the test case “Login test case“. The reporting of the testing status stays the same, the structure of the test plan is the same, and the related reports remain easily approachable by the people already familiar with them.

 

Summary

Testing automation and manual testing are most often best used in combination. It is important that the tools used for test management support reporting on this kind of testing in as flexible a way as possible.

 

(Icons used in illustrations by Thijs & Vecteezy / Iconfinder)



Tags for this post: automation best practices example features product reporting usage 


30.1.2017

Testlab – Raining Animal released

Meliora is proud to announce a new version of Meliora Testlab – Raining Animal. This version brings in a concept of “Indexes” which enable you to more easily collaborate with others and copy assets between your projects.

Please read on for a more detailed description of the new features.

 

Custom columns for steps

Execution steps of test cases can now be configured with custom columns. This allows you to customize the way you enter your test cases in your project.

Custom columns can be renamed, ordered as you want them to appear in your test cases, and typed with different kinds of data types.

 

Indexes

A new concept – Indexes – has been added which enables you to pick assets from different projects on your index and collaborate with them.

An index is basically a flat list of assets, such as requirements or test cases, from your projects. You can create as many indexes as you like and share them between the users in your Testlab. All users who have access to an index can comment on it and edit it, which makes it easy to collaborate on a set of assets in your Testlab.

 

Copying assets between your projects

Each asset on your index is shown with the project it belongs to. When you select assets from your index, you have an option to paste the selected assets to your current Testlab project. This enables you to easily copy content from a project to another. 

 

SAML 2.0 Single Sign-On support

The authentication pipeline of Testlab now supports SAML 2.0 Single Sign-On (WebSSO profile). This makes it possible to use SAML 2.0 based user identity federation services, such as Microsoft’s ADFS, for user authentication.

The existing CAS-based SSO is still supported, but providing SAML 2.0 based federation offers more possibilities for integrating Testlab with your identity service of choice. You can read more about setting up SSO in the provided documentation.

Better exports with XLS support

The data can now be exported directly to Excel in XLS format. CSV export is still available, but exporting data to Excel is now more straightforward.

Also, when exporting data from the table view, only the rows selected in the batch edit mode are exported. This makes it easier for you to hand pick the data when exporting.

In addition to the above and fixes under the hood,
  • the actions and statuses in workflows can now be rearranged by dragging and dropping,
  • stopping the testing session is made more straightforward by removing the buttons for aborting and finishing the run and
  • a new permission “testrun.setstatus” has been added to control who can change the status of a test run (for example mark the run as finished).
 

Meliora team


Throughout history, a rare meteorological phenomenon in which animals fall from the sky has been reported. There are reports of fish, frogs, toads, spiders, jellyfish and even worms raining down from the skies.

Curiously, the saying “raining cats and dogs” is not necessarily related to this phenomenon and is of unknown etymology. There are some other quite bizarre expressions for heavy rain, such as “chair legs” (Greek) and “husbands” (Colombian).

 

(Source: Wikipedia, Photo – public domain)



Tags for this post: announce features integration product release usage 


15.11.2016

Testlab – Earthquake Light released

Meliora Testlab – Earthquake Light – has been released. In addition to a set of fixes, this release comes with a new advanced reporting mode which allows you to customize the criteria of the data on your reports in an intuitive manner. We’ve also integrated with Stripe to make subscription handling and credit card payments as easy as possible.

Please read on for a more detailed description of the new features.

 

Reporting criteria in advanced mode

Most of the report templates available have been added with an option to switch the criteria form to the new advanced mode. In this mode, the criteria for picking the data on the report can be specified as a set of rules the reporting engine will use when rendering the report.

Reports in advanced mode work in a similar manner to the earlier, so-called simple mode: you can save them, schedule them for publishing and so on.

 

Advanced operators

A rule in an advanced report consists of the targeted field, an operator that is executed to determine if the rule matches, and an optional value for the operator. The mode has been implemented with a full set of operators, allowing you to define a complex set of rules.

The operators available depend on the type of the field.

 

 

Boolean operators and sub-clauses

The criteria may be defined with sub-clauses. Each clause is set with a boolean operator (match all [and], match any [or], match none [not]) which declares how the list of rules in the clause should be interpreted.

This allows you to define a complex set of rules to pick the data you need on your report.
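The nested clause structure described above could be evaluated roughly as follows. This is a minimal sketch of the idea of boolean clauses over field rules; the rule shape, operator names and field names here are illustrative assumptions, not Testlab's internal criteria model.

```python
def matches(asset: dict, node: dict) -> bool:
    """Evaluate a criteria tree against one asset.

    A node is either a clause {"mode": "all"|"any"|"none", "rules": [...]}
    or a leaf rule {"field": ..., "op": ..., "value": ...}.
    """
    if "mode" in node:  # a clause: combine the results of its children
        results = [matches(asset, child) for child in node["rules"]]
        if node["mode"] == "all":
            return all(results)
        if node["mode"] == "any":
            return any(results)
        return not any(results)  # "none": match none of the rules
    # a leaf rule: compare the asset's field value with the rule's value
    field_value = asset.get(node["field"], "")
    if node["op"] == "contains":
        return node["value"] in field_value
    if node["op"] == "starts with":
        return field_value.startswith(node["value"])
    return field_value == node["value"]  # default: "equals"

# Match all: title contains "login" AND priority is High or Critical.
criteria = {
    "mode": "all",
    "rules": [
        {"field": "title", "op": "contains", "value": "login"},
        {"mode": "any", "rules": [
            {"field": "priority", "op": "equals", "value": "High"},
            {"field": "priority", "op": "equals", "value": "Critical"},
        ]},
    ],
}
print(matches({"title": "Verify login form", "priority": "High"}, criteria))  # True
```

The nested "any" clause here mirrors the sub-clause behaviour described above: the outer "all" clause only matches when every child, including the whole inner clause, matches.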

 
Stripe subscriptions for credit card billing

The hosted Meliora Testlab has been integrated with Stripe – a leading credit card payment gateway – to make subscription and credit card handling for you as easy as possible.

If any action is needed from existing customers, we will contact all customers directly for instructions.

 

Meliora team



For a very long time, people have occasionally reported seeing strange bright lights in the sky before, during or after an earthquake. For much of modern times, these reports were considered questionable, and skeptics say there is no substantial proof of the existence of such a phenomenon.

The lights reported are usually blueish and greenish dancing lights in the sky, somewhat comparable to the aurora borealis. Theories on the cause of the lights exist, such as electricity in certain rocks of the earth being activated and discharged under tectonic stress. It has even been suggested that the existence of the lights could be used to predict upcoming earthquakes.

Earthquake lights were most recently reported during the New Zealand earthquake of November 2016, and there are even videos circulating that document the event.

(Source: Wikipedia, National Geographic, Youtube, Photo from UC Berkeley Online Archive)

 



Tags for this post: announce features product release reporting usage 


13.7.2016

Testlab – Ghost Boy release

We are proud to announce a new version of Meliora Testlab – Ghost Boy – which brings in features such as rapid batch editing of requirements, test cases and issues. A more detailed description of the new features can be read below.

 

Inline table editing

The central assets of your Testlab projects – Requirements, test cases and issues – can now be inspected in a table view. You can choose a folder from your asset related tree and the table view will list all assets from this folder.

The data of the assets can be edited inline. This allows you to rapidly edit a set of assets and save your changes with a single click. Adding and editing assets can still be done in a similar fashion as in earlier versions, but the table view brings in a new alternative for rapid edits.

As the data is presented in a table, all regular table-related functions such as sorting, filtering and grouping are available, along with features such as exporting the set of assets to Excel, sending them via e-mail and printing.

 

 

Batch editing

Ever had a need to bump up the severity of a set of issues? Or to assign a batch of test cases to some user?

The new table view with inline editing features a batch edit mode which allows you to pick a set of assets for batch editing. Then, all edits made to some asset in the table will be replicated to all chosen assets. This way, it is easy to make batch edits to a large set of assets while you are designing.

 

 

Project’s users listing

Earlier, granting access and roles for users in projects was done in user management only: you chose a user and granted the needed roles for this user.

Ghost Boy brings in a new Users tab to the project management view which allows you to easily manage the roles of your users in a project-centric way. You can also filter, sort, export, e-mail and print the listing if needed.

 

Miscellaneous enhancements and changes

In addition to the new features listed above, this feature release contains smaller enhancements:

  • Publish now -button added for published reports: When you configure a report to be automatically published, you can now press a button to force a publish of this report. This makes it easier to set up automatically published reporting.
  • Test run selectors enhanced: The test run selectors in the test case tree and in the test coverage view have been changed to a filterable picker for better usability when there is a large number of runs in your project.
  • Tagging assets in table view: The new table views have a button which allows you to tag the chosen assets with tags. Earlier, the tagging in issues table worked in a way that it tagged all visible issues in the table. Now, the assets to be tagged can be chosen in the “batch edit mode” for more flexible usage.
  • Continue tour on the current view: The tour found in the Help menu of Testlab now starts & continues on the current view.
  • Results for run tests report: The report now allows you to set the time of day for starting time and ending time the results are reported from.

 

Meliora team


Ghost Boy

Think of a situation where you would be unable to move in any way, or to communicate or interact with the outside world. You would be fully aware and able to think – trapped in your own body with your thoughts. Or rather, you can think of this but never really comprehend what it must feel like.

Meet Martin Pistorius, a South African born in 1975, who fell into a coma in his early teens but eventually regained consciousness by the age of 19. He was still unable to move and spent years locked in his own body until one of his day carers noticed he might be able to respond to outside interaction. He has since recovered but still needs a speech computer to communicate.

You can hear more about this fascinating story in an Invisibilia podcast, or check out the TED talk Martin gave in 2015 at a TEDx event.

(Source: Wikipedia, Invisibilia podcast, TED talks, Brain vector designed by Freepik)

 



Tags for this post: announce features product release usage 


24.4.2016

Testlab – Dis Manibus released

The new release of Meliora Testlab – Dis Manibus – brings in long-awaited customization features, other minor features and fixes. The release makes it easier to manage your projects’ versions, environments and option values for different fields. A more detailed description of the new features can be read below.

 

Field option management

The option values for different fields can now be freely managed. Options can be managed for requirements, test cases and issues. For example, you can now customize test case priorities, requirement types, issue severities and so on.

To add new options or to edit or remove existing options, open up the project in Manage projects view and choose the appropriate tab for your asset. The option editor can be accessed from the Options column of the chosen field. Edited options are supported in all integrations, data importing and exporting, reporting and when copying projects. In addition, the REST API has been added with an endpoint for accessing the field meta data.

 

Version and Environment management

Adding new values for versions and environments in your project is easy by just entering a new value when needed. If you prefer to have a strict control on which versions and environments should be used in your project, Dis Manibus brings in a view to manage them by hand. The view has been added as a new tab to Manage project view.

You also have an option to control if the users in your project can themselves add new versions or not.

 

Saveable table filters

All tables in Testlab’s UI have been added with controls which enable you to save your current filter criteria for later use. The criteria can be named and saved to your project to be used by all users, or even globally to all your projects.

Miscellaneous enhancements

In addition to the new features listed above, this feature release contains small enhancements for executing tests:

  • Executing test case steps in free order: The steps of test cases can be executed in any order you prefer.
  • Discard result and run again -button: The table of test run’s test cases in Test runs view has been added with a new button which enables you to easily run an already run test case again. Clicking the button discards the current result of a test case, preserves the results for the test case’s steps and instantly opens up the window for running the selected test case.
  • Saved reports can be filtered and sorted: The listing of saved reports in Reports view has no filter controls to filter in reports. Finding reports is now easier if you have a number of reports in your project. The filter settings can also be saved to the server for later use.

 

Meliora team


Dis Manibus

“O U O S V A V V”, framed between the letters ‘D M’ (commonly understood as Dis Manibus, meaning “dedicated to the shades”), is a sequence of letters known as the Shugborough Inscription. The inscription is carved on an 18th-century monument in Staffordshire, England, and has been called one of the world’s top uncracked ciphertexts.

In recent decades there have been several proposals for possible solutions, though none of them satisfy the staff at Shugborough Hall. And of course, as with all good mysteries, it is hinted that Poussin, the author of the original painting which is expressed in the relief of the monument, was a member of the Priory of Sion. This would mean that the ciphertext may encode secrets related to the Priory, or the location of the Holy Grail.

(Source: Wikipedia, Photograph by Edward Wood)

 



Tags for this post: announce features product release usage 


23.1.2016

New feature release: Oakville Blob

We are proud to announce a new major release of Meliora Testlab: Oakville Blob. A major new feature is a REST-based API for your integration needs. In addition, this release includes numerous enhancements and fixes. Read on.

 

Integration API
 
Integration API

Meliora Testlab Integration API offers REST/HTTP-based endpoints with data encapsulated in JSON format. The first iteration of the API offers endpoints for

  • fetching primary testing assets such as requirements, test cases and issues,
  • fetching file attachments and
  • pushing externally run testing results as test runs. 

We publish the API endpoints with Swagger, a live documentation tool, which enables you to make actual calls against the data of your Testlab installation with your browser. You can access this documentation today by selecting the “Rest API …” from the Help menu in your Testlab. You can also read more about the API from the corresponding documentation page.
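As a sketch of what calling such an API might look like (the base URL, endpoint path and authentication mechanism below are illustrative assumptions; consult the Swagger documentation in your own Testlab for the actual endpoints):

```python
import base64
import json
import urllib.request

# Hypothetical base URL and path; the real endpoints are listed in the
# Swagger documentation available via Help > "Rest API ..." in Testlab.
BASE_URL = "https://mycompany.melioratestlab.com/api"

def build_request(path, api_user, api_password):
    """Build an authenticated GET request for a Testlab REST endpoint."""
    credentials = base64.b64encode(f"{api_user}:{api_password}".encode()).decode()
    request = urllib.request.Request(BASE_URL + path)
    request.add_header("Authorization", "Basic " + credentials)
    request.add_header("Accept", "application/json")
    return request

# Fetch the issues of a project (hypothetical endpoint path):
req = build_request("/projects/MYPROJECT/issues", "apiuser", "secret")
# issues = json.loads(urllib.request.urlopen(req).read())
```

The commented-out call at the end would perform the actual HTTP request; the JSON body then parses into plain Python structures.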

Tagging users in comments
 
 
Tagging team members to comments

When commenting on assets in your project, such as issues or test cases, you can now tag users from your project in your comments. Just include the ID, e-mail address or name of the team member prefixed with the @ character, and this user automatically gets a notification about the comment. The notice scheme rules have also been updated to make it possible to send e-mail notifications to users who are tagged in the comments of assets. Also, a new out-of-the-box notice scheme, “Commenting scheme”, has been added for this.
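The exact matching rules live inside Testlab, but the general idea of picking @-prefixed tags out of a comment can be sketched as follows (the regular expression here is an illustrative assumption, not Testlab’s actual implementation):

```python
import re

# Matches an @-prefixed e-mail address or a simple identifier token.
# Illustrative only; Testlab matches against user IDs, e-mails and names.
MENTION = re.compile(r"@([\w.\-]+@[\w.\-]+|[\w.\-]+)")

def extract_mentions(comment):
    """Return the @-tagged identifiers found in a comment."""
    return MENTION.findall(comment)

mentions = extract_mentions("Please review, @jdoe and @jane.doe@example.com")
```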

See the attached video for an example of how this feature works.

 

Execution history of a test case

Sometimes it is beneficial to go through a test case’s earlier execution results. To make this as easy as possible, the Test design view now has a new “Execution history” tab.

This new view lists all the times the chosen test case has been executed, with details similar to the listing of a test run’s items in the Test execution view. Each result can be expanded to show the detailed results of each step.

 

Notification enhancements

When significant things happen in your Testlab projects, your team members receive notifications, always accessible from the notifications panel at the top. Oakville Blob provides numerous enhancements to these notifications:

  • the notifications panel has been moved to the top of the viewport and is now accessible from all Testlab views,
  • the notifications panel can now be marked as kept open, meaning that the panel won’t automatically close when the mouse cursor moves away, and
  • detailed documentation for all notification types has been added to the help manual.

In addition, the following new notification events have been added:

  • if an issue, a requirement or a test case assigned to you is commented on, you will be notified,
  • if a test case or its step is commented on during testing, the creator of the test run will be notified, and
  • if a comment is added during testing to a step of a test assigned to you, you will be notified.

 

Targeted keyword search

The quick search functionality has been enhanced with the possibility to target the keyword at specific information in your project’s assets. By adding a fitting prefix (ending in a colon) to your keyword, you can make the search engine search, say, only the names of test cases. For example,

  • searching with “name:something” searches “something” from the names of all assets (for example, test cases, issues, …) or,
  • searching with “requirement.name:something” searches “something” from the names of requirements.

For a complete listing of possible targeting prefixes, see Testlab’s help manual.
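A hypothetical sketch of how such a prefixed query splits into a target and a keyword (Testlab’s actual query parsing may differ):

```python
def parse_query(query):
    """Split a quick-search query into (target, keyword).

    Everything before the first colon is treated as the targeting prefix;
    a query without a colon searches all information as before.
    """
    prefix, sep, keyword = query.partition(":")
    if sep:
        return prefix, keyword
    return None, query
```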

 

Usability enhancements

Several usability related changes have been implemented.

  • Easy access to test case step comments: The Test execution view now has a new “Expand commented” button. Clicking it expands all of the test run’s test cases which have step-related comments.
  • Rich text editing: The rich text editors of Testlab have been upgraded to a new major version with a simpler and clearer UI.
  • Keyboard access:
    • Commenting: keyboard shortcuts have been added for saving, canceling and adding a new comment (where applicable).
    • Step editing: The step editor used to edit the steps of a test case now has proper keyboard shortcuts for navigating and inserting new steps.
  • Test coverage view:
    • The test case coverage listing has a new column, “Covered requirements”, which sums up and provides access to all covered requirements of the listed test cases.
    • The test run selector has been changed to an auto-complete field for easier use with a lot of runs.

 

Reporting enhancements

Several reporting engine related changes have been implemented.

  • Hidden fields in report configuration: When configuring reports, fields configured as hidden are no longer shown in the selectors listing the project’s fields.
  • Sorting of saved reports: Saved reports are now sorted in a natural order: first by related asset, then by the title of the report.
  • Better color scheme: The colors rendered for some fields of assets are changed to make them more distinct from each other.
  • Test run’s details: When test runs are shown on reports, they are now always shown with the test run’s title, milestone, version and environment included.
  • Wrapped graph titles: When graphs on reports are rendered with very long titles, the titles are now wrapped on multiple lines.
  • Results of run tests report: The report now sums up the time taken to execute the tests.
  • Test case listing report: A “Created by” field has been added.

 

Other changes

In addition to the enhancements listed above, this feature release contains numerous smaller enhancements.

  • New custom field type – Unique ID: A new custom field type has been added which shows a unique, non-editable numerical identifier for the asset.
  • Editing of test case step comments: The step-related comments entered during testing can now be edited from the Test execution view. This access is granted to users with the “testrun.run” permission.
  • Confluence plugin: Listed assets can now be filtered with tags, and the test case listing macro has an option to filter out empty test categories.
  • Test run titles made unique: Testlab now prevents you from creating a test run with a title which already exists for the same milestone, version and environment. Also, when test runs are presented in the UI, they are now always shown with this information (milestone, version, …) included.
  • “Assigned to” tree filter: The former “Assigned to me” checkbox filter in the trees has been changed to a select list which allows you to filter in assets assigned to other users, too.
  • File attachment management during testing: Controls have been added for adding and removing attachments of the test case during testing.
  • Dynamic Change project menu: The Change project selection in the Testlab menu is now dynamic – if a new project is added for you, the project is visible in this menu right away.
  • Permission-aware events: When (the upper right corner floating) events are shown to users, the events are filtered against the user’s set of permissions, so users are only shown events they should be interested in.
  • Number of filtered test runs: The list of test runs in the Test execution view now shows the number of runs filtered in.
  • UI framework: The underlying UI framework has been upgraded to a new major version with many rendering fixes for different browsers.

 

Thanking you for all your feedback,

Meliora team


Oakville

“On August 7, 1994 during a rainstorm, blobs of a translucent gelatinous substance, half the size of grains of rice each, fell at the farm home of Sunny Barclift. Shortly afterwards, Barclift’s mother, Dotty Hearn, had to go to hospital suffering from dizziness and nausea, and Barclift and a friend also suffered minor bouts of fatigue and nausea after handling the blobs. … Several attempts were made to identify the blobs, with Barclift initially asking her mother’s doctor to run tests on the substance at the hospital. Little obliged, and reported that it contained human white blood cells. Barclift also managed to persuade Mike Osweiler, of the Washington State Department of Ecology’s hazardous materials spill response unit, to examine the substance. Upon further examination by Osweiler’s staff, it was reported that the blobs contained cells with no nuclei, which Osweiler noted is something human white cells do have.”

(Sources: Wikipedia and Reddit; map from Google Maps)

 



Tags for this post: announce features integration jenkins plugin product release reporting 


24.10.2015

Testlab Girus released

Meliora Testlab has evolved to a new major version – Girus. The release includes enhancements to reporting and multiple new integration possibilities.

Please read more about the new features and changes below. 

 

Automatic publishing of reports

Testlab provides an extensive set of reporting templates which you can pre-configure and save to your Testlab project. It used to be that to render the reports, you would go to Testlab’s “Reports” section and choose the preferred report for rendering.

Girus enhances reporting so that all saved reports can be scheduled for automatic publishing. Each report can be set with a

  • recurrence period (daily, weekly and monthly) for which the report is refreshed,
  • report type to render (PDF, XLS, ODF),
  • optionally, a list of e-mail addresses where to automatically send the report and
  • an option to web publish the report to a pre-determined web-address.

When configured so, the report will be automatically rendered and sent to interested parties and, optionally, made available at a relatively short URL.

 

Slack integration


Slack is a modern team communication and messaging app which makes it easy to set up channels for messaging, share files and collaborate with your team in many ways.

Meliora Testlab now supports integrating with Slack’s webhook interfaces to post notifications about your Testlab assets to your Slack. The feature is implemented as part of Testlab’s notice schemes. This way, you can set up rules for the events you prefer to be published to your targeted Slack #channel.
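For the curious, a Slack incoming webhook simply receives a JSON payload POSTed to a per-channel URL. The sketch below shows roughly what such a notification looks like on the wire; the webhook URL and message text are placeholders, and Testlab’s notice scheme composes the real messages for you:

```python
import json

# Placeholder URL; Slack issues a unique per-channel URL when you
# configure an incoming webhook.
WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

def build_notification(asset_type, asset_id, event):
    """Build the JSON body Slack's incoming webhook expects."""
    return json.dumps({"text": "Testlab: %s %s %s" % (asset_type, asset_id, event)})

body = build_notification("issue", "MYPROJECT-123", "was closed")
# To actually send it (requires network access):
# import urllib.request
# urllib.request.urlopen(urllib.request.Request(
#     WEBHOOK_URL, data=body.encode(),
#     headers={"Content-Type": "application/json"}))
```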

You can read more about the Slack integration via this link.

 

Webhooks

Webhooks are HTTP-based callbacks which enable you to react to changes in your Testlab project. Webhooks can be used as a one-way API to integrate your own system with Testlab.

Similarly to the Slack integration introduced above, the Webhooks are implemented as a channel in Testlab’s notice schemes. This way you can set up rules to pick the relevant events you wish to have the notices of.

An example: let’s say you have an in-house ticketing system and you need to mark a ticket resolved each time an issue in Testlab is closed. With Webhooks, you can implement a simple HTTP-based listener on your premises and set up a notification rule in Testlab to push an event to your listener every time an issue is closed. With a little programming, you can then mark the ticket in your own system as resolved.
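A minimal sketch of such a listener in Python; the payload field names used here (`type`, `issueId`) are hypothetical assumptions, so check the webhook documentation for the actual event format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event):
    """Decide what to do with a webhook event; field names are hypothetical."""
    if event.get("type") == "ISSUE_CLOSED":
        # React here, e.g. mark the corresponding ticket resolved
        # in your own ticketing system.
        return "resolve ticket for issue %s" % event.get("issueId")
    return "ignored"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print(handle_event(json.loads(self.rfile.read(length))))
        self.send_response(204)
        self.end_headers()

# To start the listener on your premises:
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```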

You can read more about the Webhooks via this link.

 

Maven plugin

Apache Maven is a common build automation tool used primarily for Java projects. In addition, Maven can be used to build projects written in C#, Ruby, Scala and other languages.

Meliora Testlab’s Maven Plugin is an extension to Maven which makes it possible to publish xUnit-compatible testing results to your Testlab project. If you are familiar with Testlab’s Jenkins CI plugin, the Maven one provides similar features easily accessible from your Maven project’s POM.

You can read more about the Maven plugin via this link.

 

Automatic creation of test cases for automated tests

When JUnit/xUnit-compatible testing results of automated tests are pushed to Testlab (for example from Jenkins, Maven plugin, …), the results are mapped and stored against the test cases of the project. For example, if you push a result for a test com.mycompany.junit.SomeTest, you should have a test case in your project with this identifier (or sub-prefix of this identifier) as a value in the custom field set up as the test case mapping field. The mapping logic is best explained in the Jenkins plugin documentation.

To make pushing results as easy as possible, the plugins now have an option to automatically import stubs of test cases from the testing results themselves. This way, if a test case cannot be found in the project for some mapping identifier, the test case is automatically created in a configured test category.
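As an illustration of the mapping idea (the authoritative description is in the Jenkins plugin documentation), a result identifier is matched against the test cases’ mapping field values, preferring the longest matching prefix:

```python
def find_test_case(result_id, mapping):
    """Match an automated result identifier to a test case.

    mapping: {value of the test case mapping field -> test case title}.
    The longest matching prefix wins; this mirrors the idea described in
    the Jenkins plugin documentation, not its exact implementation.
    """
    candidates = [prefix for prefix in mapping
                  if result_id == prefix or result_id.startswith(prefix + ".")]
    if not candidates:
        return None  # with auto-creation enabled, a stub test case is created
    return mapping[max(candidates, key=len)]

mapping = {
    "com.mycompany.junit": "JUnit tests",
    "com.mycompany.junit.SomeTest": "Some feature",
}
```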

 

Jenkins plugin: TAP support

Test Anything Protocol, or TAP, is a simple interface between testing modules in a test harness. Testing results are represented by simple text files (.tap) describing the steps taken. The Jenkins plugin of Testlab has been enhanced so that results from .tap files produced by your job can be published to Testlab.

Each test step in your TAP result is mapped to a test case in Testlab via the mapping identifier, similarly to the xUnit-compatible way of working supported previously.

The support for TAP is best explained in the documentation of Testlab’s Jenkins plugin.
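TAP files themselves are plain text, one `ok` / `not ok` line per test. A rough sketch of reading such results (the actual mapping of descriptions to Testlab test cases is done by the plugin):

```python
import re

# "ok"/"not ok" lines carry the result; the trailing description is used
# as the mapping identifier on the Testlab side.
TAP_LINE = re.compile(r"^(not )?ok\b\s*\d*\s*-?\s*(.*)$")

def parse_tap(text):
    """Parse TAP text into (passed, description) pairs."""
    results = []
    for line in text.splitlines():
        match = TAP_LINE.match(line)
        if match:
            results.append((match.group(1) is None, match.group(2).strip()))
    return results

sample = """1..2
ok 1 - com.mycompany.LoginTest
not ok 2 - com.mycompany.CheckoutTest
"""
```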

 

Support for multiple JIRA projects in Webhook-based integration

When integrating JIRA with Testlab using the Webhooks-based integration strategy, the integration now supports integrating multiple JIRA projects to a single Testlab project. When multiple projects are specified and an issue is added, the user gets to choose which JIRA project the issue should be added to.

Due to this change, it is now also possible to specify which JIRA project the Testlab project is integrated with. Previously, the prefix and the key of the projects had to match between the JIRA and Testlab project.

 

Miscellaneous enhancements and changes
  • The Results of run tests report has new options to help you report your testing results:
    • the test execution date has been added as a supported field,
    • a “Group totals by” option has been added which sums up and reports totals for each group of the specified field,
    • the report now includes columns and rows of sum totals, and
    • the test categories selection supports selecting multiple categories and test cases for filtering.
  • Updates on multiple underlying 3rd party libraries.
  • Bugs squished.

 

Sincerely yours,

Meliora team


Pithovirus

Pithovirus sibericum was discovered buried 30 m (98 ft) below the surface of late Pleistocene sediment in a 30 000-year-old sample of Siberian permafrost. It measures approximately 1.5 micrometers in length, making it the largest virus yet found. Although the virus is harmless to humans, its viability after being frozen for millennia has raised concerns that global climate change and tundra drilling operations could lead to previously undiscovered and potentially pathogenic viruses being unearthed.

A girus is a very large virus (gigantic virus). Many different giruses have been discovered since and many are so large that they even have their own viruses. The discovery of giruses has triggered some debate concerning the evolutionary origins of the giruses, going so far as to suggest that the giruses provide evidence of a fourth domain of life. Some even suggest a hypothesis that the cell nucleus of the life as we know it originally evolved from a large DNA virus.

(Source: Wikipedia, Pithovirus sibericum photo by Pavel Hrdlička, Wikipedia)

 



Tags for this post: announce features integration jenkins plugin product release reporting 


7.7.2015

Meliora Testlab – Polybius – released

The Meliora Testlab team is proud to announce a new version of Testlab – Polybius. In addition to smaller enhancements, many of the new features revolve around the parameterization of test cases. It is now possible to create test cases as templates and, when executing them, enter the parameters of the template to easily create variants of the test case.

Please read more about the new features and changes below. 

 

Test case parameters

 
Test case templating with parameters

Test cases can now be embedded with parameters. Parameters are basically tags entered into a test case’s description, preconditions, steps or expected end result. In the content text, parameters should be entered as

    ${PARAMETER}

When the test case is later given parameter values, the description shown to the tester will have the parameter tags replaced with the entered values.

Using parameters is an easy way to create many executable variants of a test case with similar description content. For example, if you have a set of test cases intended to be run with each different web browser, you can describe your test case with an appropriate ${BROWSER} tag. This way you can easily plan & execute this test case for different web browsers by entering fitting values for this parameter later on.
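Incidentally, Python’s standard `string.Template` uses the same `${NAME}` syntax, so it is handy for illustrating how the substitution works when a variant is created from a template:

```python
from string import Template

# A test case step written with parameter tags:
step = Template("Open the login page with ${BROWSER} and log in as ${USER}.")

# Entering parameter values for a variant replaces the tags:
rendered = step.substitute(BROWSER="Firefox", USER="tester1")
```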

 

Parameter typing

Testlab includes a management view for all project’s parameters. You can configure your parameters as selection lists, which restricts the values that can be selected for these parameters.

 

Planning and running tests with parameters

To create test case variants for parameterized test cases you can now enter values for the parameters in Execution planning view. The editor features a column showing all possible parameters for the test case and allows you to enter values for them. When a parameterized test case (template) is added to a test set and it is set with parameter values, the test case becomes a parameterized instance to be run.

Test execution view has similar controls where you can enter parameter values for test cases already added to a test run.

 

Issue management enhancements
  • Issues have a new field, “Found with parameters”, which always includes the parameters of the test case the issue was added for (if any).
  • The issue listing includes the same field as a column, which allows you to easily filter in issues with a specific parameter value.
  • When adding a new issue while executing a test case, the failed steps and their comments so far are now automatically copied to the issue’s description field.

 

Coverage and reporting with parameterized tests

The Test coverage view, with its coverage views via requirements and test cases, has been enhanced to include parameterized test cases. Each parameterized variant of a test case is now calculated as a single verifying test case in the coverage calculation. The view also shows the parameters the test cases were run with.

The reports also now include the test case parameters in appropriate places (Issue listing, requirement coverage, results of run tests and execution status of test cases report). Issue listing, results of run tests and execution status of test cases reports also have test case parameter values as filters which allows you to filter in specific results from your test runs.

 

Jenkins plugin changes

The Meliora Testlab Jenkins plugin now supports passing test case parameters from Jenkins’ environment variables. You can list the variables to be passed along with the results; they will be included as parameters on the Testlab side.

For example, test cases in your Testlab project might have a parameter titled ${BROWSER}. When running automated UI tests in Jenkins, enter the value “BROWSER” in this new setting and ensure that your Jenkins job sets an environment variable BROWSER to some sensible value. Running the job then sends the value of this environment variable as the ${BROWSER} test case parameter for all run tests.
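Conceptually, the plugin picks the listed environment variables from the job and attaches them as parameter values to every pushed result; a sketch of that idea (the variable names are the examples from the text above, not part of any real API):

```python
import os

# The single setting from the text: names of environment variables to pass.
PASSED_VARIABLES = ["BROWSER"]

def collect_parameters(environ):
    """Pick the listed variables from the job's environment."""
    return {name: environ[name] for name in PASSED_VARIABLES if name in environ}

# In a Jenkins job this would be os.environ; shown here with example values:
params = collect_parameters({"BROWSER": "chrome", "PATH": "/usr/bin"})
```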

 

Importing test runs

A CSV import of test runs has been added. Importing test runs supports importing test run details, test cases run in them and testing results of test cases and their steps.

 

Centralized authentication (CAS) improvements

Registration of new user accounts

Previously, when users authenticated via an external CAS source came to Testlab without an existing Testlab user account, they could not log on. This is because authorization to Testlab’s projects is done via Testlab user accounts: the needed roles and permissions must be granted to these accounts for access.

This version improves this so that when a user without a Testlab account is authenticated via CAS, a welcome screen is shown allowing them to register a personal Testlab account by entering a few details (e-mail address, full name). The Testlab user account is then automatically created and an e-mail message is sent to the company administrators notifying them about the new account.

Granting roles to new accounts (on-premise)

On-premise version of Testlab can be configured to automatically grant a set of roles in a set of projects when a new user account is registered via CAS.

Relaxing the SSL verification of the host name for ticket validations

When setting up CAS, the client can now be configured to skip the host name check of the SSL certificate when verifying CAS tickets. This may help when setting up CAS with self-signed certificates or with proxy configurations.

 

Sincerely yours,

Meliora team


Polybius

Fancy a game? Polybius is an arcade game which is said to have induced various psychological effects on players. The story describes players suffering from amnesia, night terrors, and a tendency to stop playing all video games. Around a month after its supposed release in 1981, Polybius is said to have disappeared without a trace. [Wikipedia]

In the span of a week, three children really did fall ill upon playing video games at arcades in the Portland area. Michael Lopez got a migraine, the first he’d ever had, from playing Tempest. Brian Mauro, a 12-year-old trying to set the world record for playing Asteroids for the longest time, fell ill after a 28-hour stint. And only a week later 18-year-old competitive gamer Jeff Dailey died due to a heart attack after chasing the world record in Berserk. One year later 19-year-old Peter Burkowski followed suit for the same reason playing the same game. [Eurogamer]

(Photo by DocAtRS [CC BY-SA 3.0], via Wikimedia Commons)

 

 



Tags for this post: announce features jenkins product release reporting video 


12.4.2015

New major release of Testlab – Eiserner Mann

With joy we announce a new version of Testlab – Eiserner Mann. This version brings new feature enhancements and, in addition, a more scalable way of loading data into the UI, which helps when you have projects with thousands of stories or test cases. All features are immediately live for our hosted customers.

Please read more about the new features and changes below. 

 

Test case coverage
Test case coverage

Testlab has always had a coverage analysis view where you can see how your stories and requirements are covered by testing. For projects which, for some reason, do not hold any stories or requirements, a new test case coverage view has been added. It works similarly to requirement coverage but reports the status of your testing against your test cases.

You have the same familiar controls for picking a milestone, a version or a test run to report the status of your testing against, or choose nothing to get all the latest results of your test cases against the whole test plan.

The new view is perfect for projects with only test cases and can also be used in combination with the requirement coverage view in any project. We encourage you to use stories and requirements to make your test plan more manageable, but even if you don’t, the new view offers you an easier way to get a grasp of the status of your testing.

 

Filtering report data with custom fields

The reporting engine of Testlab has been enhanced with a possibility to filter the data on your reports with your configured custom fields.

If your requirements, test cases and/or issues have custom fields, you can now use these fields to pick the data you wish to include on your reports. All fields are automatically included in the window where you configure your report.

 

New custom field types

Two new types of custom fields are now available to be used with your requirements, test cases and issues:

  • Issues: Allows you to select one or more issues to be linked to the asset. This way you can, for example, link issues to other issues, or use the linking in some other way you prefer.
  • Requirements: Allows you to select one or more requirements to be linked to the asset.

 

Requirement coverage report

A new report has been included which reports the coverage of testing against requirements by listing requirements and the related test cases verifying them. This report can be used to inspect the coverage of testing and get a good picture of its status. The report offers basically the same point of view on your testing as the Test coverage view of Testlab.

 

Generating external links to assets

The newly added link button makes it easy to generate a link which can be used to open the asset in Testlab. This feature can be found in the top right corner of applicable views as a button with a link symbol.

 

Loading data on demand

In projects with thousands and thousands of assets such as requirements and test cases, the earlier way of fetching and caching the project’s data in the UI was sometimes a hindrance. This version has a new strategy of loading data on demand, which makes the UI of really large projects more swift.

 
Deleting projects

If you wish to permanently delete a whole project from your Testlab, it is now possible from the UI. Keep in mind that deleting projects deletes all the data in them permanently, so use good care with this functionality.

 

Other miscellaneous changes

In addition to the above, the following changes are also available:

  • Notifications on your dashboard are now folded into groups to make the notification panel more usable when there are a lot of tasks assigned to you.
  • When creating requirements, the context menu in the tree of requirements now offers direct links to create different classes of requirements (stories, folders, etc).
  • Test cases can now be removed from test runs directly from the Test execution view. Earlier, this was only possible while running the actual test run.

 

Sincerely yours,

Meliora team


Der Eiserne Mann

Der Eiserne Mann (The Iron Man) is an old iron pillar partially buried in the ground in the German national forest of Naturpark Kottenforst-Ville, about two kilometers north-east of the village Dünstekoven. It is a roughly rectangular metal bar with about 1.47 m above ground and approximately 2.7 m below ground. The pillar is currently located at a meeting of trails which were built in the early 18th century through the formerly pathless forest area, but it is believed to have stood in another nearby location before that time.

A metallurgical investigation in the 1970s showed that the pillar is made of pig iron. After the long exposure to the weather, the iron man shows signs of weathering but there is remarkably little trace of rust. [Wikipedia]

 

 

 



Tags for this post: announce features product release reporting 


24.11.2014

New JIRA integration and Pivotal Tracker support

The latest release of Testlab includes a number of enhancements to integrations:

  • A new integration method (based on JIRA’s WebHooks) has been added. This brings support for JIRA OnDemand.
  • The new JIRA integration method supports pushing JIRA’s issues to Testlab as requirements / user stories. This makes it possible to push stories from your JIRA Agile to Testlab for verification.
  • Brand new support has been added for the Pivotal Tracker agile project management tool. You can now easily manage and track your project tasks in Pivotal Tracker and, at the same time, test, verify and report your implementation in Testlab.

Read further for more details on these new exciting features.

 

Support for JIRA OnDemand

To this day, Testlab has supported JIRA integration with a best-in-class 2-way strategy, with the possibility to freely edit issues in both end systems. For this to work, JIRA has had to be installed with Testlab’s JIRA plugin, which provides the needed synchronization techniques. Atlassian’s cloud-based offering, JIRA OnDemand, doesn’t allow installing custom plugins, which means that the 2-way synchronized integration has been possible only with JIRA instances installed on customers’ own servers.

The new JIRA integration strategy has been implemented so that it can be configured using just JIRA’s WebHooks. This requires no plugins and brings support for JIRA OnDemand instances. Keep in mind that the new, simpler integration strategy is also available for JIRA instances installed on your own servers.

 

One-way integration for JIRA’s issues and stories

The new integration method works so that issues and stories are created in JIRA and can be pushed to Testlab. This is possible for issues / bugs and also for requirements. For example, you can push stories from your JIRA Agile to Testlab as requirements, which makes it possible to include the specification you design in JIRA as part of the specification of the Testlab project you aim to test.

You can read more about the different integration strategies for Atlassian JIRA here.

 

Configuring the integrations


All new integrations, including the JIRA integration for issues and requirements and the Pivotal Tracker integration, are implemented in a way in which they can be taken into use by yourself. The project management view in Testlab has a new Integrations tab which allows you to configure the Testlab side of these integrations. Integrations may also require some preliminary setup to work; instructions for this are provided in the plugin-specific documentation.

 

 
Pivotal Tracker integration

Pivotal Tracker is an agile project management tool for software team project management and collaboration. The tool allows you to plan and track your project using stories.

Your Testlab project can be integrated with Pivotal Tracker to export assets from Testlab to Pivotal Tracker as stories and to push stories from Pivotal Tracker to your Testlab project as requirements.

pivotalkuva

 
Importing Testlab assets to Pivotal Tracker

pivotal_import_testlab

The integration works in a two-way manner. First, you have an option to import assets from your Testlab project to your Pivotal Tracker project as stories to be tracked. You can pull requirements, test runs, test sets, milestones, issues and even test cases from your Testlab project. When you do so, a new story is created in your Pivotal Tracker project. It is as easy as dragging an asset to your project’s Icebox in Pivotal.

 

 
Pushing stories from Pivotal Tracker to Testlab

pivotal_imported

You also have an option to push stories from your Pivotal Tracker project to your Testlab project. This way, you can easily

  • push your stories from your Pivotal Tracker to Testlab project’s specification as user stories to be verified and
  • push bugs from your Tracker to Testlab project as issues.

Using Pivotal Tracker with Testlab is an excellent choice for project management. This way you can plan and track your project activities in Pivotal Tracker and, at the same time, test, verify and report your implementation in Testlab.

You can read more about setting up the Pivotal Tracker integration here.

 

To recap, the new features

  • bring you support for Pivotal Tracker, one of the industry’s best agile project management tools,
  • add support for Atlassian JIRA integration where issues from JIRA are pushed to Testlab as issues and/or requirements and
  • add support for JIRA OnDemand.

We hope these new features make the use of Testlab more productive for all our existing and future clients.

 

 

 



Tags for this post: announce features integration jira pivotal tracker release screenshots 


20.10.2014

New major release of Testlab

Meliora is more than happy to announce a new major release of Testlab – the most advanced and easiest-to-deploy ALM/test management solution for enterprises – codenamed Razzle Dazzle. This version includes management of Milestones and much-enhanced reporting and effort estimation features, to name a few.

Please read more about the new features below and as usual, all features are immediately live for our hosted customers. 

 

Milestones video

 
Milestones

In previous Testlab versions, you had the option to use versions and environments to organize your testing efforts and manage your project goals. This mostly works fine, but there are situations where tracking against (usually technical) versions alone can get cumbersome.

Testlab now features Milestones – project reference points which give your project the structure needed for tracking and for easier handling of assets. In software projects, these are typically software releases, sprints, product branches, parts of your software platform or any other units you aim to track your project by.

Some notable features for Milestone management:

  • Specification assets (user stories, requirements, …) can be targeted to a milestone. This makes separation of different parts of your project easier inside the Testlab project.
  • Similarly, test cases and issues may be targeted to a milestone.
  • A new major view has been added to manage your milestones. This view also offers an always up to date view on how the testing of a milestone is progressing:
    • Statistics on how the design of milestone’s specifications is progressing,
    • view on the latest execution results of test cases assigned to the milestone and
    • the list of issues targeted to the milestone.
  • Milestones can be optionally set up to inherit other milestones: When inherited, the milestone’s specification, test cases and issues can be seen to affect the (possibly) later milestones. This is especially handy when you are managing the testing efforts of your complex platform-oriented software product. For example, you might have customer specific variants of your product as separate milestones which are set up to inherit the “base platform milestone” in your project. This way for example, the requirements from the base platform’s specification are inherited to all your customer specific milestones.
  • A new dashboard widget has been added which allows you to track the progress of a selected milestone.
  • Reports have been added with possibility to filter content per milestone.
  • A new custom field type has been added with milestone selection.
  • The search function returns matching milestones by identifier and title.
  • Coverage can be reported against a milestone.
  • Integration-related plugins (Confluence, Jenkins CI) have been updated to support passing milestone information back and forth.

Keep in mind that using Milestones is completely optional. You can start your project as before and add milestones later when you need them. Using them is recommended though, as the Milestones view offers a handy overview of your testing progress.

 

New report templates

Testlab’s reporting has been revised with a number of new report templates. In addition to the new reports presented below, many existing report templates have gained new filtering and configuration options.

  • Results for run tests (RunTestsReport) – A template to report testing results for run test cases. The template can be configured to pick and filter the relevant test cases from your project, and includes a grouping of your choice plus a detailed list of each executed test.
  • Testing effort estimation (EffortEstimationReport) – Includes all test cases matching the entered filtering criteria along with the average, minimum, maximum and latest execution durations. The calculated times help you estimate the effort it could take to execute the reported set of test cases.
  • Issue grouping (IssueGroupingReport) – Gives an overall view of your project’s issues summed and grouped against two attributes of your choice. Properly configured, this report gives important insight into the state of your project’s issues.
  • Issues in time (IssueFieldTimeseriesReport) – A time-series graph of issues summed up according to a chosen field. The report is generated for a chosen time period and includes all issues matching the entered filtering criteria. It helps you summarize how the issues in your project have progressed over time, observed against an issue field of your choice.
  • Test case grouping (TestCaseGroupingReport) – Similar to the Issue grouping report but calculates and reports against the test cases of your project.
  • Test cases in time (TestCaseFieldTimeseriesReport) – Similar to the Issues in time report but calculates and reports against the test cases of your project.
  • Requirement grouping (RequirementGroupingReport) – Similar to the Issue and Test case grouping reports but calculates and reports against the requirements of your project.
  • Requirements in time (RequirementFieldTimeseriesReport) – Similar to the Issues and Test cases in time reports but calculates and reports against the requirements of your project.

 

Exporting reports video

 
Excel and OpenOffice export of reports

All PDF-rendered reports can now be exported as

  • MS Excel documents, making it possible to easily analyze your data further in the spreadsheet domain, and
  • OpenOffice Writer documents, allowing you to format the report further in your word processing software.

 

Testing effort estimation video

 
Testing effort estimation

To this day, Testlab has recorded execution times when you execute your test cases. Utilizing this information, new features have been added to help estimate your future testing effort.

  • In Execution planning, a new “Average time taken” column calculates and shows the average time previous executions of the test case have taken. These values are used to calculate the total time it would take to execute the set of tests currently in the test set editor. This way, you can estimate the total average time it would take to execute a set of hand-picked test cases.
  • In Test execution, we’ve added a new “Time taken” column to indicate how long it took to execute the test in the test run. The summary of a test run also includes a total time it has taken to execute the tests in the selected test run.
  • Milestone’s testing progress in the Milestones view shows the total time it has taken to execute test cases in the milestone.
  • Testlab captures the time it takes to execute a test case automatically when you execute the tests. The time of a single test case execution can be edited in the Test execution view, which allows you to correct the statistics for a test case if the time taken to execute it was incorrectly captured.
  • A new “Testing effort estimation” report has been added to report out further statistics for execution of test cases.
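The arithmetic behind the averaging and summing described above can be sketched as follows. The test case names and recorded durations below are invented for illustration; this is not Testlab’s internal code.

```python
def average_duration(durations):
    """Average of recorded execution times (in seconds) for one test case."""
    return sum(durations) / len(durations)

# Recorded execution times per test case from earlier runs (seconds).
history = {
    "Login works": [120, 90, 150],          # averages to 120.0
    "Search returns results": [60, 80],     # averages to 70.0
}

# Total average time for executing this hand-picked set of test cases.
estimate = sum(average_duration(times) for times in history.values())
print(estimate)  # 190.0
```

This is the same idea as the “Average time taken” column: past durations per test case are averaged, and the averages are summed over the set of tests in the editor.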

 

Copying projects

Previously, you had an option to configure a single “Template” project with a set of information and settings which would be copied to all new projects. Now, you can select the project from which you would like to copy content and, in addition, pick which information is copied. This way, when adding a new project, you can always copy the content of your choice from some other project to make starting your new project as easy as possible!

 

Copying old revisions of assets

When you edit ready-approved assets (test cases or requirements), Testlab automatically creates new revisions in the database. Old revisions are accessible by selecting an old revision from the top right corner of the applicable editing view. Now, if you need to roll back changes or take a previous version of a test case back into use, you can copy the old revision’s content as a new asset in your project. Just select the asset and revision of your choice and select “Copy as new…” from the menu, and Testlab will copy the selected revision as a new asset.

 

Sincerely yours,

Meliora team


dazzlecamouflage

During World War I, and to a lesser extent in World War II, the seas of the world saw some of the wildest and most psychedelic ship painting designs ever. This is known as dazzle camouflage, which works by making it difficult to estimate the camouflaged ship’s range, speed and heading. The evidence for the success of dazzle camouflage, also known as razzle dazzle, was at best mixed.

If you missed this little part of our history, dazzle your neighbours and go educate yourself with the Stuff You Missed in History Class podcast episode on the topic.

 

 

 



Tags for this post: announce features product release reporting 


25.9.2014

Integrating with Apache JMeter

Apache JMeter is a popular tool for load and functional testing and for measuring performance. In this article we will give you hands-on examples on how to integrate your JMeter tests to Meliora Testlab.

Apache JMeter in brief

Apache JMeter is a tool with which you can design load testing and functional testing scripts. Originally, JMeter was designed for testing web applications, but it has since expanded to many other kinds of load and functional testing. The scripts can be executed to collect performance statistics and testing results.

JMeter offers a desktop application with which the scripts can be designed and run. JMeter can also be used from different kinds of build environments (such as Maven, Gradle, Ant, …), from which running the tests can be automated. JMeter’s web site has a good set of documentation on how it should be used.

 

Typical usage scenario for JMeter

A common scenario for using JMeter is some kind of load testing or smoke testing setup where JMeter is scripted to make a load of HTTP requests to a web application. Response times, request durations and possible errors are logged and analyzed later for defects. Interpreting performance reports and analyzing metrics is usually done by people, as automatically determining whether some metric should be considered a failure is often hard.

Keep in mind that JMeter can be used against various kinds of backends other than HTTP servers, but we won’t get into that in this article.

 

Automating load testing with assertions

The difficulty in automating load testing scenarios comes from the fact that performance metrics are often ambiguous. For automation, each test run by JMeter must produce a distinct result indicating whether the test passes or not. Assertions can be added to the JMeter script to tackle this problem.

Assertions are basically the criteria that decide whether a sample recorded by JMeter indicates a failure. For example, an assertion might be set up to check that a request to your web application is executed in under some specified duration (i.e. your application is “fast enough”). Or another assertion might check that the response code from your application is always correct (for example, 200 OK). JMeter supports a number of different kinds of assertions for you to design your script with.

When your load testing script is set up with proper assertions, it suits automation well: it can be run automatically, periodically or in any way you prefer to produce passing and failing test results which can be pushed to your test management tool for analysis. There is a good set of documentation available online on how to use assertions in JMeter.
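To illustrate how assertion verdicts become distinct pass/fail results, the sketch below reads a JMeter XML result log (.jtl) and derives a verdict per assertion. The element and attribute names (httpSample, lb, assertionResult, name, failure, error) follow JMeter’s XML log format; the sample log content itself is invented.

```python
import xml.etree.ElementTree as ET

# Invented fragment of a JMeter XML result log for illustration.
jtl = """<testResults version="1.2">
  <httpSample lb="Front page" t="42">
    <assertionResult><name>Duration</name><failure>true</failure><error>false</error></assertionResult>
    <assertionResult><name>ResponseCode</name><failure>false</failure><error>false</error></assertionResult>
  </httpSample>
</testResults>"""

results = {}
for sample in ET.fromstring(jtl).iter("httpSample"):
    for assertion in sample.iter("assertionResult"):
        # Each assertion produces its own verdict, keyed by sampler + assertion name.
        name = f'{sample.get("lb")}.{assertion.findtext("name")}'
        failed = (assertion.findtext("failure") == "true"
                  or assertion.findtext("error") == "true")
        results[name] = "FAIL" if failed else "PASS"

print(results["Front page.Duration"])      # FAIL
print(results["Front page.ResponseCode"])  # PASS
```

Each assertion thus yields an unambiguous pass/fail result per sampler, which is exactly what an automated pipeline needs.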

 

Integration to Meliora Testlab

Meliora Testlab has a Jenkins CI plugin which enables you to push test results and open up issues according to the test results of your automated tests. When JMeter scripted tests are run in a Jenkins job, you can push the results of your load testing criteria to your Testlab project!

The technical scenario of this is illustrated in the picture below.

jmeterkuva

You need your JMeter script (plan). This is designed with the JMeter tool and should include the needed assertions (in the picture: Duration and ResponseCode) to determine whether the tests should pass. A Jenkins job should be set up to run your tests and translate the JMeter-produced log file to xUnit-compatible testing results, which are then pushed to your Testlab project as test case results. Each JMeter test (in this case Front page.Duration and Front page.ResponseCode) is mapped to a test case in your Testlab project, which gets results posted when the Jenkins job is executed.

 

Example setup

In this chapter, we give you a hands-on example of how to set up a Jenkins job to push testing results to your Testlab project. To make things easy, download the testlab_jmeter_example.zip file which includes all the files and assets mentioned below.

 
Creating a build

You need some kind of build (Maven, Gradle, Ant, …) to execute your JMeter tests with. In this example we are going to use Gradle, as it offers an easy-to-use JMeter plugin for running the tests. There are many options for running JMeter scripts, but using a build plugin is often the easiest way.

1. Download and install Gradle if needed

Go to www.gradle.org and download the latest Gradle binary. Install it as instructed to your system path so that you can run gradle commands.

2. Create build.gradle file

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'jmeter'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.github.kulya:jmeter-gradle-plugin:1.3.1-2.6"
    }
}

As we are going to run all the tests with the plugin’s default settings, this is all we need. The build file just registers the “jmeter” plugin from the repository provided.

3. Create src directory and needed artifacts

For the JMeter plugin to work, create a src/test/jmeter directory and drop in a jmeter.properties file, which is needed for running the actual JMeter tool. This file is easy to obtain by downloading JMeter and copying the default jmeter.properties from the tool to this directory.

 
Creating a JMeter plan

When your Gradle build is set up as instructed you can run the JMeter tool easily by changing to your build directory and running the command

# gradle jmeterEditor

This downloads all the needed artifacts and launches the graphical user interface for designing JMeter plans.

To make things easy, you can use the MyPlan.jmx provided in the zip package. The script is really simple: it has a single HTTP Request Sampler (named Front page) set up to make a request to the http://localhost:9090 address, with two assertions:

  • a Duration assertion to check that the time to make the request does not exceed 5 milliseconds. For the sake of this example, this assertion should fail, as the request probably takes longer than this.
  • a ResponseCode assertion to check that the response code from the server is 200 (OK). This should pass as long as there is a web server running on port 9090 (we’ll come to this later).

It is recommended to give your Samplers and Assertions sensible names, as you refer directly to these names later when mapping the test results to your Testlab test cases.

The created plan(s) should be saved to the src/test/jmeter directory we created earlier, as Gradle’s JMeter plugin automatically executes all plans from this directory.

 

Setting up a Jenkins job

1. Install Jenkins

If you don’t happen to have a Jenkins CI server available, setting one up locally couldn’t be easier. Download the latest release to a directory and run it with

# java -jar jenkins.war --httpPort=9090

Wait a bit, and Jenkins should be accessible at http://localhost:9090 with your web browser.

The JMeter plan we went through earlier makes a request to http://localhost:9090. When you run Jenkins with the command above, JMeter will fetch the front page of your Jenkins CI server when the tests are run. If you prefer to use some other Jenkins installation, you might want to edit the provided MyPlan.jmx to point to this other address.

2. Install needed Jenkins plugins

Go to Manage Jenkins > Manage Plugins > Available and install

  • Gradle Plugin
  • Meliora Testlab Plugin
  • Performance Plugin
  • xUnit Plugin

2.1 Configure plugins

Go to Manage Jenkins > Configure System > Gradle and add a new Gradle installation for your locally installed Gradle. 

3. Create a job

Add new “free-style software project” job to your Jenkins and configure it as follows:

3.1 Add build step: Execute shell

Add a new “Execute shell” typed build step to copy the contents of the Gradle project you set up earlier to the job’s workspace. This is needed as the project is not in a version control repository. Set up the step, for example, as:

jmeter_executeshellplugin

… or something else that will make your Gradle project available in the Jenkins job’s workspace.

Note: The files should be copied so that the root of the workspace contains the build.gradle file for launching the build.

3.2 Add build step: Invoke Gradle script

Select your locally installed Gradle version and enter “clean jmeterRun” in the Tasks field. This will run the “gradle clean jmeterRun” command for your Gradle project, which will clean up the workspace and execute the JMeter plan.

jmeter_gradleplugin

3.3 Add post-build action: Publish Performance test result report (optional)

Jenkins CI’s Performance plugin provides trend reports on how your JMeter tests have been run. This plugin is not required for Testlab’s integration, but it provides handy performance metrics in your Jenkins job view. To set up the action, click “Add a new report”, select JMeter and set the Report files as “**/jmeter-report/*.xml”:

jmeter_performanceplugin

Other settings can be left at their defaults, or you can configure them to your liking.

3.4 Add post-build action: Publish xUnit test result report

Testlab’s Jenkins plugin needs the test results to be available in the so-called xUnit format. In addition, this will generate test result trend graphs in your Jenkins job view. Add a post-build action to publish the test results resolved from the JMeter assertions as follows, by selecting a “Custom Tool”:

jmeter_xunitplugin

Note: The jmeter_to_xunit.xsl custom stylesheet is mandatory. It translates JMeter’s log files to the xUnit format. The .xsl file is located in the jmeterproject directory in the zip file and will be available in the Jenkins workspace root if the project is copied there as set up earlier.

3.5 Add post-build action: Publish test results to Testlab

The above plugins set up the workspace, execute the JMeter tests, publish the needed reports to the Jenkins job view and translate the JMeter log file(s) to xUnit format. What is left is to push the test results to Testlab. For this, add a “Publish test results to Testlab” post-build action and configure it as follows:

jmeter_testlabplugin

For the sake of simplicity, we will be using the “Demo” project of your Testlab. Make sure to configure the “Company ID” and “Testlab API key” fields to match your Testlab environment. The Test case mapping field is set to “Automated”, which is by default configured as a custom field in the “Demo” project.

If you haven’t yet configured an API key for your Testlab, you should log on to your Testlab as a company administrator and configure one from Testlab > Manage company … > API keys. See Testlab’s help manual for more details.

Note: Your Testlab edition must be one with access to the API functions. If you cannot see the API keys tab in your Manage company view and wish to proceed, please contact us and we will get it sorted out.

 

Mapping JMeter tests to test cases in Testlab

For the Jenkins plugin to be able to record test results to your Testlab project, your project must contain matching test cases. As explained in the plugin documentation, your project in Testlab must have a custom field set up which is used to map the incoming test results. In the “Demo” project this field is already set up (called “Automated”).

jmeterplan

Every assertion in JMeter’s test plan records a distinct test result when run. In the simple plan provided, we have a single HTTP Request Sampler named “Front page”. This Sampler is tied to two assertions (named “Duration” and “ResponseCode”) which check whether the request was done properly. When translated to xUnit format, these test results are identified as <Sampler name>.<Assertion name>, for example:

  • Front page/Duration will be identified as: Front page.Duration and
  • Front page/ResponseCode will be identified as: Front page.ResponseCode
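As a sketch of how this naming convention could be carried into xUnit output, the snippet below builds a minimal xUnit-style test suite from assertion verdicts. This is similar in spirit to what the jmeter_to_xunit.xsl stylesheet does, but the exact element layout here is illustrative, not the stylesheet’s actual output.

```python
import xml.etree.ElementTree as ET

# Assertion verdicts keyed by (sampler, assertion); values are "passed" flags.
# Invented data matching the example plan.
verdicts = {
    ("Front page", "Duration"): False,      # assertion failed
    ("Front page", "ResponseCode"): True,   # assertion passed
}

suite = ET.Element("testsuite", name="jmeter", tests=str(len(verdicts)))
for (sampler, assertion), passed in verdicts.items():
    # Test case name follows the <Sampler name>.<Assertion name> convention.
    case = ET.SubElement(suite, "testcase", name=f"{sampler}.{assertion}")
    if not passed:
        ET.SubElement(case, "failure", message="assertion failed")

xunit = ET.tostring(suite, encoding="unicode")
print(xunit)
```

The resulting testcase names (Front page.Duration, Front page.ResponseCode) are what gets matched against the “Automated” custom field in Testlab.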

To map these test results to test cases in the “Demo” project,

1. Add test cases for JMeter assertions

Log on to Testlab’s “Demo” project, go to Test case design and

  • add a new Test category called “Load tests”, and to this category,
  • add a new test case “Front page speed”, set the Automated field to “Front page.Duration” and Approve the test case as ready and
  • add a new test case “Front page response code”, set the Automated field to “Front page.ResponseCode” and Approve the test case as ready.

Now we have two test cases for which the “Test case mapping field” we set up earlier (“Automated”) contains the JMeter assertions’ identifiers.

 

Running JMeter tests

What is left to do is to run the actual tests. Go to your Jenkins job view and click “Build now”. A new build should be scheduled, executed and completed – probably as FAILED. This is because the JMeter plan has the 5-millisecond assertion, which should fail the job as expected.

 

Viewing the results in Testlab

Log on to Testlab’s “Demo” project and select the Test execution view. If everything went correctly, you should now have a new test run titled “jmeter run” in your project:

jmeter_testrun

As expected, the Front page speed test case reports as failed and Front page response code test case reports as passed.

As we configured the publisher to open up issues for failed tests we should also have an issue present. Change to Issues view and verify, that an issue has been opened up:

jmeter_defect

 

Viewing the results in Jenkins CI

The matching results are present in your Jenkins job view. Open up the job view from your Jenkins:

jmeter_jenkinsview

The view holds the trend graphs from the plugins we set up earlier: “Responding time” and “Percentage of errors” from the Performance plugin and “Test result Trend” from the xUnit plugin.

To see the results of the assertions, click “Latest Test Result”:

jmeter_jenkinsresults

The results show that the Front page.Duration test failed and the Front page.ResponseCode test passed.

 

Links referred

 http://jmeter.apache.org/  Apache JMeter home page
 http://jmeter.apache.org/usermanual/index.html  Apache JMeter user manual
 http://jmeter.apache.org/usermanual/component_reference.html#assertions  Apache JMeter assertion reference
 http://blazemeter.com/blog/how-use-jmeter-assertions-3-easy-steps  BlazeMeter: How to Use JMeter Assertions in 3 Easy Steps
 https://www.melioratestlab.com/wp-content/uploads/2014/09/testlab_jmeter_example.zip  Needed assets for running the example
 https://github.com/kulya/jmeter-gradle-plugin  Gradle JMeter plugin
 http://www.gradle.org/  Gradle home page
 http://mirrors.jenkins-ci.org/war/latest/jenkins.war  Latest Jenkins CI release
 https://www.melioratestlab.com/wp-content/uploads/2014/09/jmeter_to_xunit.xsl_.txt  XSL-file to translate JMeter JTL file to xUnit format  
 https://wiki.jenkins-ci.org/display/JENKINS/Meliora+Testlab+Plugin  Meliora Testlab Jenkins Plugin documentation 

 

 

 



Tags for this post: example integration jenkins load testing usage 


9.9.2014

How to: Work with bugs and other issues

The tester’s role in a project is often seen in two major areas: building confidence that the software fulfills its needs and, on the other hand, finding the parts where it does not, or might not. Just pointing a finger at where the problem lies is seldom enough: the tester needs to tell how the problem was encountered and why the found behaviour is a problem. The same goes for enhancement ideas. This post is about how to work with bugs (which we call defects in this post) and other findings (we call all findings issues), from writing them to closing them.

 

Using time wisely

Author encountering a bug

While small-scale projects can cope with findings written on post-it notes, Excel sheets or the like, efficient handling of large amounts of issues this way is next to impossible. The real reason for having a more refined way of working with issues is to enable the project to react to findings as needed. Some issues need a fast response; some are best listed for future development, but not forgotten. When you work right with issues, people in the project find the right ones to work with at a given time, and also understand what each issue was written about.

As software projects are complex entities, it is hard to achieve such a thing as a perfect defect description, but descriptions need to be pretty darn good to be helpful. If a tester writes a report that the receiver does not understand, it will be sent back or, even worse, disregarded. That said, getting the best bang for the buck for the tester’s time does not always mean writing an essay. If the tester sits next to the developer, just a descriptive name for the issue might be enough. Then the tester just has to remember what was meant by the description! The same kind of logic goes for defect workflows. In some projects workflows are not needed to guide the work, but in others it really makes sense for an issue to go through a certain flow. There is no single truth. This article tells some of the things you should consider; you pick what you need.

 

Describing the finding

The first thing to do after finding a defect is not always to write it down in the error management system as fast as possible. Has the thing been reported earlier? If not, the next step is to find out more about what happened. Can you replicate the behaviour? Does the same thing happen elsewhere in the system under test? What was the actual cause of the issue? If you performed a long and complex operation during testing, but you can replicate the issue with a more compact set of actions, you should describe the more compact one.

After you understand the issue, you should write a (pretty darn good) description of it. The minimum is to write down the steps you took. If you found out more about the behaviour, such as how it does not happen in similar conditions, write that down too. If the issue can be found by executing a test case, do not rely on the reader reading the test case: write down what is relevant to causing the issue. Attached pictures are often a great way of showing what went wrong, and attached videos a great way of showing what you did to cause it. They are really fast to record, and a good video shows exactly what needs to be shown. Use them: https://www.melioratestlab.com/taking-screenshots-and-capturing-video/

Consider adding an exact timestamp of when you encountered the issue. The developers might want to dig into the log files – or even better, attach the log files yourself. It is not always clear whether something is a defect. If there is any room for debate, also write down why you think the behaviour is a defect.

Besides writing a good description, you should also classify your defects. This helps the people in the project find the most important ones for them from the defect mass. Consider the following:

tags

  • Priority – Not all issues are as important at the moment.
  • Severity – Severity tells about the effect of the issue, which can differ from the priority. Then again, in most projects severity and priority go hand in hand, in which case it is easier to use just one field.
  • Assignment – Who is expected to take the next step in handling the issue.
  • Type – Is the issue a defect, an enhancement idea, a task or something else.
  • Version detected – What version was tested when the issue was found.
  • Environment – On what environment the issue was found.
  • Application area – What application area the issue affects.
  • Resolution – For closed issues, the reason why the defect was closed.

 

Add to your project the fields that are meaningful for you to classify defects with, and hide the ones that are not relevant. If you are not sure whether you benefit from a classification field, do not use the field; use tags. You can add any tags to issues and later find the issues or report on them. Easy.

 

Issue written – what next?

Now that you have written an error report: did you find the issue by conducting an existing test? If not, should there be a test that would find that given defect? After the issue has been corrected, you probably get to verify the fix, but if the issue seems likely to re-emerge, write a test case for it.

defectflow

Writing an issue begins its life cycle. At bare minimum a defect has two stages – open while you work on it, and closed when you have done what you wanted to do – but most of the time the project benefits from having more stages in the workflow. Mainly the benefits come from easier management of large amounts of findings: it helps to know what needs to be done next, and by whom. With information on what state the issues are in, the project can distinguish bottlenecks more easily. Generally, the bigger the project, the more benefit there is to be gained from having more statuses. That said, if the statuses do not give you more relevant information about the project, do not collect them. Simpler is better.

Status Motive
New Sometimes it makes sense to have issues verified, by someone other than the initial reporter, before they are passed on for further handling. New distinguishes these from Open issues.
Open Defects that have been found, but for which no decision on how to fix them has been made yet.
Assigned The defect is assigned to be fixed.
Reopened The delivered fix did not correct the problem, or the problem occurred again later. Sometimes it makes sense to distinguish reopened issues from open ones to see how common it is for problems to re-emerge.
Fixed Issues are marked fixed when the reported problem has been corrected.
Ready for testing After the issue has been fixed, this status can be used to mark issues that are ready for re-testing.
Postponed Distinguishes the open issues that you are not planning to work on for a while.
Closed Once the finding has been confirmed to no longer be relevant, the issue can be closed. Typically this happens after the fix has been tested.
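The statuses above can be sketched as a simple state machine. Note that the transitions below are an assumption based on the table – adapt them to your own project's workflow:

```python
# Hypothetical sketch of the defect workflow above as a state machine.
# The allowed transitions are assumptions inferred from the status table.
ALLOWED_TRANSITIONS = {
    "New": {"Open", "Closed"},             # verified (or rejected) by someone else
    "Open": {"Assigned", "Postponed", "Closed"},
    "Assigned": {"Fixed", "Postponed"},
    "Fixed": {"Ready for testing"},
    "Ready for testing": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
    "Postponed": {"Open"},
    "Closed": {"Reopened"},                # the problem re-emerged
}

def transition(current: str, new: str) -> str:
    """Move an issue to a new status, enforcing the workflow."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

# Walk one issue through a typical happy path.
status = "New"
for step in ("Open", "Assigned", "Fixed", "Ready for testing", "Closed"):
    status = transition(status, step)
print(status)  # Closed
```

The point of encoding the workflow like this is the same as in the tool: illegal jumps (say, straight from New to Fixed) are rejected, so every issue passes through the states the team has agreed on.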

When working with defects and changing their statuses, it is important to comment on the issues whenever something relevant changes. As a rule of thumb, if a comment adds information about the found issue, it probably should be added. The main idea is that it is later possible to come back to the discussion, so you don’t need to guess why something was or wasn’t done about the issue. It is especially important to write down how the issue will be fixed (when there is room for doubt) and, finally, how the fix was implemented, so that the tester knows how to re-test the issue. If the developer can explain how the issue was fixed, it helps the tester find other issues that may have arisen when the fix was applied.

 

Digging in the defect mass

So now that you have implemented this kind of classification for working with issues, what can you learn from the statistics? First off, even with sophisticated rating and categorization, the real situation can easily hide behind the numbers. It is more important to react to each individual issue correctly than to rate and categorize issues and only react to statistics later. That said, in complex projects the statistics help you understand what is going on in the project and where its focus should be.

The two most important ways to categorize defects are priority and status. Thus a report showing open issues per priority, grouped by status, is a very good starting point for looking at the current issue situation. Most of the time you handle defects differently from other issues, so you would pick the defects into one report and the other issue types into another. This kind of report might show you, for example, that there is one critical defect assigned, 3 high-priority defects open and 10 normal-priority defects fixed. The critical and high-priority defects you would probably want to go through individually, at the very least to make sure they get fixed as soon as they should so they do not hinder other activities; for the fixed ones you would check whether something needs to be done to get them ready for re-testing. If at some point you see some category growing, you know whom to ask questions. For example, a high number of assigned defects would indicate a bottleneck in development, and prolonged numbers in “ready for testing” a bottleneck in testing.
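Such a per-priority, per-status breakdown is essentially a count over two fields. A minimal sketch, with made-up issue data standing in for whatever your tracker's reporting interface returns:

```python
from collections import Counter

# Hypothetical issue data; in practice this comes from your issue tracker.
issues = [
    {"type": "defect", "status": "Assigned", "priority": "Critical"},
    {"type": "defect", "status": "Open", "priority": "High"},
    {"type": "defect", "status": "Open", "priority": "High"},
    {"type": "defect", "status": "Open", "priority": "High"},
    {"type": "defect", "status": "Fixed", "priority": "Normal"},
    {"type": "enhancement", "status": "Open", "priority": "Normal"},
]

# Count defects per (status, priority), leaving other issue types
# for a separate report as suggested above.
counts = Counter(
    (i["status"], i["priority"]) for i in issues if i["type"] == "defect"
)
for (status, priority), n in sorted(counts.items()):
    print(f"{status:10} {priority:8} {n}")
```

With the sample data this yields one critical defect assigned, three high-priority defects open and one normal-priority defect fixed – the kind of snapshot the report described above gives you at a glance.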

Another generally useful report is the trend report of open issues by status, or of the issues’ development over time. As the report shows how many open issues there have been at any given time, you’ll see the trend – whether you can close issues at the same pace you open them.
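Under the hood, such a trend is just a running balance of opened minus closed issues over time. A minimal sketch with made-up event data:

```python
# Hypothetical open/close events in date order: +1 when an issue is
# opened, -1 when it is closed.
events = [
    ("2014-07-01", +1), ("2014-07-02", +1), ("2014-07-03", -1),
    ("2014-07-04", +1), ("2014-07-05", +1), ("2014-07-06", -1),
]

open_count = 0
trend = []
for date, delta in events:
    open_count += delta
    trend.append((date, open_count))

# If the tail of this series keeps climbing, issues are being opened
# faster than they are being closed.
print(trend[-1])  # ('2014-07-06', 2)
```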

This only scratched the surface of working with defects. If you have questions or would like to point something out, feel free to comment.

Happy hunting!

Joonas



Tags for this post: best practices issues testing usage 


9.7.2014

Testlab – Flannan Isles released

Summer is here! We’re happy to announce a new feature release of Testlab: Flannan Isles. The version has a number of new features and user interface enhancements described in detail below. As expected, all features are immediately available for all our hosted users. Enjoy! 

 

[Video: tagging assets]

 
Tagging assets

Ever had a situation where you’d like to pin some data in your system of choice to find it later on, or group it with similar data? This is now possible in Testlab, which now implements tagging of assets: you can pin your assets with keywords of your choice.

  • Tag requirements, test cases, test sets, test runs, issues and reports
  • Easy tagging of single assets
  • Tag multiple assets by selecting them from a tree, or tag multiple issues using issue listing filtering
  • Easy-to-use project tag cloud – click a tag to search content by that tag, or search manually by entering a search word in the format “tag:your tag”.
  • Filter tree content by searching with “tag:your tag”
  • Reporting and coverage analysis now support filtering with tags

Tagging assets is especially useful in situations where you need to group some of your assets for later use. For example, you might want to tag all requirements targeted for a future release for easier management of your next release.

 

[Video: direct editing]

 
Direct editing of requirements and test cases

The views for managing the central assets of Testlab, requirements and test cases, have been enhanced with a snappier direct editing mode.

In previous Testlab versions there was a separate view of assets and a transition to an editing mode. From now on, the forms and controls for requirement and test case editing are always editable with just a click for everyone with the needed permissions.

In addition, the form controls now have a clearer disabled look to indicate what is editable and what is not.

 

[Video: control and monitor automated tests]

 
Control and monitor automated tests

The Testlab dashboard now features a new widget, Jenkins Jobs, which enables you to monitor and trigger jobs on your Jenkins CI server.

  • View your jobs’ build trend, latest build result and build number directly on your Testlab dashboard
  • Launch builds directly from the widget
  • Open up the related views on your browser from your Jenkins server with just a click
  • Pick the jobs you prefer – even from multiple different Jenkins servers if needed. Up to four different Jenkins servers can be controlled from a single dashboard.
  • Easy UI – if you are familiar with Jenkins’ UI, you’ll feel at home with this one
  • Integrates directly from your browser, utilizing the Meliora Testlab plugin on your Jenkins server

 

Reporting enhancements

The reports of Testlab have been enhanced with the following additions:

  • New filtering options added for
    • Listing of requirements: covered & assignee
    • Listing of test cases: status & priority & assignee
    • Listing of issues: resolution & environment & related test case & resolved in version
  • Issues per priority report has been added with a sub report listing all issues counted to the totals
  • Similarly, execution status of test cases now includes a list of related test cases

 

Regenerating requirement identifiers

Requirement management now has a menu feature to regenerate IDs. During design, this can be used to easily regenerate all IDs in a folder or even for the entire project. The identifiers are regenerated in 1, 1.1, 1.1.1, 1.1.2, 1.1.3, … format, prefixed with the same prefix as the selected folder. If IDs are regenerated for the whole project, the identifiers are prefixed with the project’s key.

Keep in mind that regenerating IDs will overwrite and possibly change the current IDs – so use this feature sparingly.
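To make the numbering scheme concrete, here is a small sketch of hierarchical ID generation in the 1, 1.1, 1.1.1, … style described above. The tree structure and the "REQ" prefix are made up for illustration – Testlab's own implementation is not shown here:

```python
# Hypothetical sketch of hierarchical ID regeneration.
# The tree data and the "REQ" prefix are illustrative only.

def generate_ids(nodes, prefix="REQ", parent=""):
    """Assign identifiers like REQ1, REQ1.1, REQ1.1.1 to a nested tree."""
    ids = {}
    for index, (name, children) in enumerate(nodes, start=1):
        number = f"{parent}.{index}" if parent else str(index)
        ids[f"{prefix}{number}"] = name
        ids.update(generate_ids(children, prefix, number))
    return ids

tree = [
    ("Login", [("Password reset", []), ("Two-factor auth", [])]),
    ("Reporting", []),
]
print(generate_ids(tree))
# {'REQ1': 'Login', 'REQ1.1': 'Password reset',
#  'REQ1.2': 'Two-factor auth', 'REQ2': 'Reporting'}
```

Note how the numbering encodes each requirement's position in the hierarchy – which is also exactly why regenerating can change existing IDs when items have been moved around.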

 

Sincerely yours,

Meliora team



The Flannan Isles are a small island group close to Scotland, west of the Isle of Lewis. In December 1900, the three men occupying the lighthouse vanished, leaving behind an unfinished meal and a mystery that’s never been conclusively solved.

The only sign of anything amiss in the lighthouse was an overturned chair by the kitchen table. No sign of the three men was found, either inside the lighthouse or anywhere on the island. Many rumours about their disappearance surfaced, from murder to a sea serpent eating the men to foreign spies abducting them, but no conclusive evidence was ever found in the investigations.

Photo by Marc Calhoun from geograph.co.uk.

 



Tags for this post: announce features jenkins release reporting video 


27.5.2014

Taking screenshots and capturing video

This article introduces an easy way to capture and annotate screenshots during testing. We show you a couple of easy ways to use Monosnap, a screen capturing and recording tool.

The latest Testlab release brings you built-in integration with Monosnap, a handy screen capturing tool with the ability to annotate screenshots before uploading. Testlab supports the desktop clients of Monosnap for the Windows and Mac OS X operating systems. You are of course free to use any screen capturing tool you prefer, but we feel Monosnap really stands out from the crowd feature-wise and in ease of use.

 

Why take screenshots or record video

When you are testing software on your workstation, taking screenshots is a great way of documenting issues. A picture is worth a thousand words, right? For example, when you encounter an issue such as a defect during testing, capturing a screenshot, annotating it to highlight the issue exactly and uploading it to Testlab usually tells the team members very well what went wrong. If the capturing tool allows you to annotate the shot, all the better – the amount of textual description you need to enter for the defect is typically much smaller when you can mark and highlight the relevant parts of the screenshot.

The benefits of using screenshots in issue management are quite self-evident, but screenshots and recorded screen captures can be quite beneficial in requirement management too. For example, when you are documenting new features on existing user interfaces, taking a screenshot and annotating it properly is a great addition to your requirement documentation. The same applies to test cases: if a test case tests a complex user interface, a well-annotated screenshot or two can be a great help for the tester.

 

Monosnap introduced

Monosnap is a collaboration tool for taking screenshots, sharing files and recording video from your desktop. The tool is available for multiple platforms (such as a Google Chrome extension and apps for iPhone and iPad), but here we discuss the installable desktop clients for Microsoft Windows and Mac OS X, as they can be integrated and used seamlessly with Testlab.

Once installed, Monosnap runs as a desktop application and is accessed in a way that depends on your operating system. On Mac OS X, the tool is available as an icon in your desktop’s menu bar. Similarly, on Windows, the tool is available in your system tray and, if you prefer, as a hovering hotspot on your desktop.

For capturing screenshots the basic way of working with Monosnap is as follows:

  1. You capture an area of your desktop by selecting “Capture Area” from Monosnap’s menu or pressing the appropriate keyboard shortcut.
  2. A Monosnap window appears with the captured area shown. The window has functions to annotate the capture: For example draw different shapes on it and write text on the capture.
  3. When you are happy with the capture you can upload it to a service of your choice or save the capture on your disk.

For capturing video, you

  1. select “Record Video” from Monosnap’s menu or press the appropriate keyboard shortcut.
  2. Monosnap’s recording frame appears. You move and resize this frame over the area of your desktop which you would like to record as a video capture. You also have options to record video from your workstation’s web cam and record audio from your microphone if you prefer.
  3. To start recording you press the Rec button. You can annotate the video during recording by drawing different shapes on it. When you have recorded your video, you press the Rec button again to stop the capture.
  4. When recorded, the video is encoded to MP4 format which, depending on your workstation, might take a few seconds. A window appears with the encoded video in it, which you can preview before uploading. You can then upload the captured video to a service of your choice or access the encoded video file on your disk.

 

Using Monosnap with Testlab

To use Monosnap with Testlab you have two options: take screen captures with Monosnap and upload them manually to Testlab by dragging and dropping, or integrate Monosnap with Testlab’s WebDAV interface, which allows you to upload captures to Testlab with the click of a button.

 
Uploading manually

When uploading manually, no pre-configuration is needed. You can use Monosnap in any way you prefer, and when you have a capture ready, upload it to Testlab the same way you would upload a regular file attachment. Monosnap makes this quite easy, as it features a “Drag bar” on the right-hand side of the capture window. From it, you can just grab and drag the capture onto your Testlab browser window and attach it to the open asset simply by dropping it.

If dragging and dropping is not possible for some reason, you can of course save the capture to your disk and upload it to Testlab as a regular attachment.

To see how it actually works play the video below:

[Video]

 

 

WebDAV integration

Monosnap is great in that it supports uploading captures to a service of your choice with the click of a button. This enables Testlab to act as WebDAV storage into which Monosnap can push the captures. When configured, you can just press Monosnap’s Upload button and the capture is automatically uploaded to Testlab and attached to the asset open in your Testlab browser window.

To make use of this feature some pre-configuration is needed:

  1. Open up Monosnap’s menu and select “Preferences…” or “Settings…”. Monosnap’s settings window opens up.
  2. Select “General” tab and configure the following:
    • After screenshot: Open Monosnap editor
    • After upload: Do not copy
    • Open in browser: no
    • Short links: no
  3. Select the “Account / WebDAV” view and configure the following:

    For Mac OS X:

    • Host: https://COMPANY.melioratestlab.com/api/attachment/user
      Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using hosted Testlab from mycompany.melioratestlab.com enter “https://mycompany.melioratestlab.com/api/attachment/user” to this field. For on-premise installations, set this field to match the protocol, host name and the port of your server to a /api/attachment/user context.
    • Port: Leave as blank (shows as gray “80”)
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Folder: Leave as blank (shows as gray “/var/www/webdav”)
    • Base URL: Leave as blank (shows as gray “http://127.0.0.1/webdav”)
      Click “Make default” button to make the configured WebDAV service as the default upload service of Monosnap. When set, the Upload button always uses this service by default.

       

      For Microsoft Windows:

    • Host: COMPANY.melioratestlab.com
    • Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using Testlab from mycompany.melioratestlab.com enter “mycompany.melioratestlab.com” to this field.
    • Port: HTTPS or HTTP port of your Testlab server – if you are using hosted Testlab enter 443
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Directory: /api/attachment/user
    • Base URL: Leave as blank

The preconfiguration is documented in detail in the “Screenshots and recording” section of the Testlab’s integrated help manual.

Keep in mind that the pre-configuration step needs to be done only once. Once you’ve configured Monosnap to upload captures to Testlab, it just works – no need to configure it again later.
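For the curious, what the WebDAV upload amounts to at the HTTP level is a PUT of the capture to the /api/attachment/user path configured above. The sketch below only constructs such a request without sending it; the host, credentials and file name are placeholders, not real values:

```python
import base64
import urllib.request

# Sketch of the WebDAV upload as a plain HTTP request.
# The /api/attachment/user path comes from the configuration above;
# the host, credentials and file contents are placeholder assumptions.
host = "mycompany.melioratestlab.com"
user, password = "tester", "secret"
png_bytes = b"\x89PNG..."  # the captured screenshot bytes

req = urllib.request.Request(
    url=f"https://{host}/api/attachment/user/capture.png",
    data=png_bytes,
    method="PUT",  # WebDAV stores files with HTTP PUT
)
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would perform the actual upload
```

This also explains the configuration fields: the host and port identify your Testlab server, the directory is the attachment context, and the user ID and password become the HTTP basic authentication credentials.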

Where is the capture uploaded to

When captures are uploaded via Testlab’s WebDAV interface, they are automatically attached to the asset which is currently open in your Testlab browser window. So when uploading, make sure you have an asset (a requirement, a test case or an issue) open in your Testlab window in such a way that a file can be attached to it. If, for example, your Testlab user account does not have the proper permissions to attach files to assets, the upload will just silently fail.

To see WebDAV integrated Monosnap in action play the video below:

[Video]

 

 

Advantages gained

Having easy-to-use screen capture tools makes documenting easier and speeds up work on multiple levels: documenting issues and other assets is faster, and people dealing with the documented assets gain a clearer understanding of the issue at hand.



Tags for this post: example features screenshots usage video 


25.4.2014

Exploratory testing with Testlab

In this article we introduce the recently released features enabling a more streamlined workflow for exploratory testing.

Exploratory testing is an approach to testing where the tester or team of testers ‘explores’ the system under test and during the testing generates and documents good test cases to be run. In a more academic way, it is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.

Compared to scripted testing – where test cases and scenarios are pre-planned before execution – exploratory testing is often seen as more free and flexible. Each of these methodologies has its own benefits and drawbacks, and in reality all testing is usually somewhere in between the two. We won’t go into methodological detail in this article, as we focus on how to do the actual test execution in an exploratory way with Testlab. We can conclude, though, that exploratory testing is particularly suitable if the requirements for the system under test are incomplete, or there is a lack of time.

 

Pre-planning exploratory test cases

As said, all testing approaches can usually be placed somewhere between a fully pre-scripted and a fully exploratory approach. It is often worth considering whether pre-planning the test cases in some way would be beneficial. If the system under test is not a total black box – meaning there is some knowledge or even specifications available – it might be wise to add so-called stubs for your test cases in the pre-planning phase. Pre-planning test case stubs may give you better insight into testing coverage, as in pre-planning you have the option to bind the test cases to requirements. We’ll discuss using requirements in exploratory testing in more detail later.

For example, one approach might be to add the test cases you think you need to cover some or all areas of the system, without the actual execution steps. The actual execution steps, preconditions and expected results would then be filled in, in exploratory fashion, during the testing. Alternatively, you might be able to plan the preconditions and expected results and just leave the actual execution steps to the testers.

Keep in mind that pre-planning test cases does not, and should not, prevent your testers from adding totally new test cases during testing. Additionally, you should consider whether pre-planning might affect your testers’ way of testing. Sometimes this is not desirable, and you should take into account the experience level of your testers and how the different pre-planning models fit into your overall testing approach and workflow.

 

Exploratory testing session

Exploratory testing is not an exact testing methodology per se. In reality, there are many testing methods, such as session-based testing or pair testing, which are exploratory in nature. As Testlab is methodology-agnostic and can be used with various testing methods, in this article we cover all of them by simply establishing that the testing must be done in a testing session. The testing method itself can be any method you wish, but the actual test execution must be done in a session which optionally specifies

  • the system or target under testing (such as a software version),
  • the environment in which the testing is executed (such as the production environment, a staging environment, …) and
  • a timeframe in which the testing must be executed (a starting date and/or a deadline).

To execute test cases in a testing session, add a new test run to your project in Testlab. Go to the Test runs view and click the Add test run… button to add a new, blank testing session.

New testing session as a test run

When the needed fields are set, you have the option to just Save the test run for later execution or to Save and Start… the added test run immediately. The test run is added to the project as blank, meaning it does not have any test cases bound to it yet. We want to start testing right away, so we click the Save and Start… button.

 

Executing tests

The set of functionality available while executing is a good match for an exploratory testing session. Test execution in Testlab enables you to

  • pick a test case for execution,
  • record testing results for test cases and their steps,
  • add issues such as found defects,
  • add comments to test cases and for executed test case steps,
  • add existing test cases to be executed to the run,
  • remove test cases from the run,
  • re-order test cases in the run to a desired order,
  • add new test cases to the test case hierarchy and pick them for execution,
  • edit existing test cases and their steps and
  • cut and copy test cases around in your test case hierarchy.

The actual user interface while executing looks like this:

[Screenshot: exploratory test run window]

The left-hand side of the window has the same test case hierarchy tree that is used to manipulate the hierarchy of test cases in the test planning view. It enables you to add new categories and test cases and move them around in your hierarchy. The hierarchy tree may be hidden – you can show (and hide) it by clicking the resizing bar of the tree panel. The top right shows the basic details of the test run you are executing, and the list below it shows the test cases picked for execution in this testing session.

The panel below the list of test cases holds the details of a single test case. When no test cases are available for execution, the panel disables itself (as shown in the shot above) and lists all issues from the project. This is especially handy when testing for regression or re-testing – it makes it easy to reopen closed findings and retest resolved issues.

The buttons in the bottom toolbar enable you to edit the current test case, add an issue and record results for test cases. The “Finish”, “Abort” and “Stop” buttons should be used to end the current testing session. Keep in mind that finishing, aborting and stopping each have their own meaning, which we will come to later in this article.

 

Adding and editing test cases during a session

When exploring, it is essential to be able to document the steps for later execution if an issue is found. This way, scripted regression testing is easier later on. Also, if your testing approach aims to document the test cases for later use by exploring the system, you must be able to add them easily during execution.

If you have existing test cases which you would like to pick for execution during a session, you can drag them from the test hierarchy tree onto the list of test cases. Additionally, you can add new test cases by selecting New > Test case on the test category you want to add the new test case to. Picking test cases and adding new ones via inline editing is demonstrated in the following video:

 

 

 

Editing existing test cases is similar to adding them. Just press the Edit button in the bottom bar to switch the view to editing mode. Edits are made in an identical fashion to adding.

 

Ending the session

When you have executed the tests you want, you have three options:

Finish, abort or stop

It is important to understand the difference, which comes from the fact that each executed session is always part of a test run. If you wish to continue executing tests in the same test run at a later time, you must Stop the session. This way the test run can be continued later normally.

If you conclude that the test run is done and you do not wish to continue it anymore, you should Finish it. When you do so, the test run is marked as finished and no testing sessions can be started on it anymore. Note, though, that if you later discard a result of a test case from the run, the run is reset back to the Started state and is executable again.

Aborted test runs are considered discarded and cannot be continued later. So, if for some reason you think the test run is no longer valid and should be discarded, you can press the Abort run button.

 

Asset workflows and user roles in exploratory testing

Requirements, test cases and issues have an asset workflow tied to them via the project’s workflow setting. This means that each asset has states it can be in (In design, Ready for review, Ready, …) and actions which can be executed on it (Approve, Reject, …). In exploratory testing, having a complex workflow for the project’s test cases is usually not desirable. For example, a workflow which requires a review of test cases by another party makes no sense when testers should be able to add, edit and execute test cases inline during testing.

That said, if using the default workflows, it is recommended to use the “No review” workflow for your projects.

 

No review workflow

 

If you execute test cases which are not yet approved as ready, Testlab tries to automatically approve them on behalf of the user. This means that if the test case’s workflow allows it (and the user has the needed permissions), the test case is automatically marked as approved during the session. This way, using more complex workflows in a project with an exploratory testing approach might work if the transitions between the test case’s states are suitable. That said, as the testers must be able to add and edit test cases during execution, a review-based workflow is of little use.

The asset workflows’ actions are also tied to user roles. For testers to be able to document test cases during execution, the tester users should also be granted the TESTDESIGNER role. This ensures that the users have the permissions needed to add and edit the test cases.

 

Using requirements in exploratory testing approach

When test cases are designed in exploratory testing sessions, they are added without any links to requirements. In Testlab, testing coverage is reported against the system’s requirements; in testing parlance, a test case verifies the requirement it is linked to when the test case is executed and passes.

It is often recommended to bind the added test cases to requirements at a later stage. This way you can easily report what has actually been covered by testing. It should be noted that the requirements we talk about here don’t have to be fully documented business requirements for this to work. For example, if you would just like to know which parts of the system have been covered, you might add the system’s parts as the project’s requirements and bind the appropriate test cases to them. This way, a glance at Testlab’s coverage view gives you insight into which parts of the system have been tested successfully.

Better yet, if you did pre-plan your test cases in some form (see above), you might consider adding a requirement hierarchy too and linking your test case stubs to those requirements. This would give you insight into your testing coverage right away when testing starts.

 

Summary

In this article we talked about the new test execution features of Testlab enabling you to execute your tests using exploratory testing approaches. We went through the advantages of pre-planning some of your testing, using requirements in your exploratory testing, talked about testing sessions and noted how Testlab’s workflows and user roles should be used in exploratory testing.



Tags for this post: example exploratory features testing usage 


25.4.2014

New features: Enhanced reporting, exploratory testing and easy screenshots

A new quarterly feature release of Testlab (codenamed Last Resort) is out! This update gives you enhanced reporting, better support for exploratory testing, easier screen and video capturing and a bunch of other enhancements. Read on.

 

[Video: reports]

 
Enhanced reporting

Reporting in Testlab is now much better than before. You can now preconfigure report parameters and publish your reports so that they can be launched with a single click. Reports can be saved to three different scopes: to a project, to all projects or to personal use only.

  • New report view added for adding and launching reports
  • Preconfigure report parameters and publish your reports with a descriptive name (for example “Critical open issues of BETA” or “Last month testing activity”)
  • Save reports to your project, to all your projects or as private report just for you.
  • Launch your reports with a single click
  • Time bound reports such as progress reports can be configured with relative dates. For example, you can configure a testing progress report to always be reported from start of the current month to current day. Generating a monthly progress report this way is just a click away.
  • New report type added: Testing activity. This report summarizes all testing activities performed during a time frame or for a set of versions, including executed tests and updated assets such as issues, requirements or test cases.
  • Listing reports now include all configured custom fields and comments.
  • All reports now take into account the project’s time zone and user’s locale for date and time formatting.
  • Testlab’s search function also looks up published reports.

 

[Video: exploratory testing]

 
Exploratory testing

Pre-planning your testing and test cases is not always possible or desirable. Exploratory testing is an approach to testing where the tester or team of testers ‘explores’ the system under test and during the testing generates and documents good test cases to be run.

This type of testing is now well supported in Testlab. It is possible to add, remove and edit test cases during execution. It is also possible to add notes or comments to test cases during execution, which can enhance communication between testers and even between different testing sessions. 

  • Add test cases to the test run during a testing session.
  • Remove test cases during a testing session.
  • Edit existing test cases and their steps during execution.
  • Comment test cases during execution and have access to comments entered for test cases’ steps on previous runs.
  • Execute test cases in any order by jumping between sessions’ test cases.

All new functions are implemented in a way that makes them available to all testing approaches.

 

[Video: verify, track and report]

 
Capturing screenshots and video

Testlab integrates with Monosnap, a powerful screenshot tool, allowing you to easily capture and annotate screenshots. You can also record a screencast or video. Testlab offers an interface to Monosnap so that you can add the content to Testlab with the click of an Upload button. Alternatively, you can just drag and drop captured content from Monosnap to easily attach it to a Testlab asset.

Captured content is uploaded securely, directly to Testlab, and stored as a file attachment to your project asset.

  • Capture screenshots and annotate them – a great way to highlight any issues.
  • Easy and fast.
  • Record video and upload an MP4-encoded screencast to Testlab.
  • The integration supports Monosnap’s desktop client for Windows and Mac OS X.

 

Other enhancements

In addition to major features described above, the version includes enhancements such as:

  • Test cases can be easily executed directly from the test hierarchy tree’s context menu. In addition, the issue view enables you to easily run the test case the issue is related to – handy when you just want to re-test and verify whether the issue is really fixed or not.
  • Attached images are shown directly in the UI by hovering over them with your mouse cursor.
  • When adding a new test run, the dialog now features a “Save and start …” button to start executing the new test run immediately.
  • Major performance enhancements in situations where a project has a huge number of executed tests or test runs.

 

Sincerely yours,

Meliora team


“On every British nuclear submarine, there is a safe. Inside that safe is another safe. And inside that safe is a handwritten letter from the British Prime Minister, to be opened only if the country has been decimated by nuclear war.” (This American Life, episode 399).

The United Kingdom’s nuclear doctrine is unique in that each Prime Minister decides the orders for the nuclear submarines in the event that the British government has been incapacitated by a nuclear strike. These orders are handwritten by the Prime Minister, placed in a safe on each submarine and destroyed unopened after the Prime Minister leaves office. What action each Prime Minister would have ordered is only ever known to the outgoing Prime Minister. If the orders were ever to be carried out, the action taken would be the last official act of Her Majesty’s Government.

 



Tags for this post: announce exploratory features release reporting screenshots 


11.4.2014

Heartbleed: Testlab not affected


There has been a lot of discussion lately about a serious security vulnerability, CVE-2014-0160, commonly called Heartbleed.

The issue resides in OpenSSL, a commonly used cryptographic software library, and can potentially compromise sensitive data such as user credentials and cryptographic keys on affected servers.

We would like to inform you that the Meliora Testlab service is not affected.

Transport confidentiality is, and has been, guaranteed, and communicated data has been successfully encrypted, so Testlab users are not required to change their passwords because of the Heartbleed vulnerability. We would like to remind you, though, that it is good practice to change your password regularly and to make sure your password is strong enough.

 



Tags for this post: product security 


19.3.2014

CeBIT 2014 was great

Meliora Ltd participated in CeBIT 2014 – Europe’s premier tech trade show.
We had a great opportunity to present Meliora Testlab and made a large number of interesting contacts.

Today, well-designed and engineered, business-driven IT solutions are becoming a key distinguishing and competitive factor for a company’s business. This message was generally well received by the people we spoke with.


Meliora team

Register for Meliora Testlab today – if you haven’t already. Then check out our demo project to get an overview of Testlab usage!
Testlab helps you put a simple but efficient test process in place.

 



4.2.2014

Choosing the test management tool

Choosing the best option for yourself is rarely easy. The problem is similar whether you are buying jeans or choosing software for your company – the salesman tells you the brand they represent is the best option and probably recommends the more expensive models. The price often reflects the quality, but does not guarantee it. If you want the best bang for your buck, you need to know the product you are purchasing. In the case of a test management tool, the choice can be made easier with some preparation. This article is about what you should keep in mind in order to choose the best tool for your purposes.

Different approaches for choosing the tool

One option you have is to outsource the problem. This can be a good approach, especially if QA tools are far from your organization’s field of expertise. You will easily find QA companies, or companies specialized in selling software, that will do the research for you. If you take this approach, remember to be a little skeptical about the advice you get: not all research conducted this way is free of ulterior motives. Companies recommending solutions to you might have a monetary interest in recommending a specific solution, or they might simply have more experience with one tool than with the others and thus recommend it even if it is not the best one for you.

Another thing you should think about early on is how to choose the tool and take it into use. Usually, you’ll get the most benefit from using a single tool with long-term benefits in mind, but consider piloting the test management tool in a real project before making the final decision. This way you will get real feedback from your own professionals and have the possibility to withdraw from using the tool if it doesn’t suit your needs.

Another article, about deploying a test tool using a pilot project, is in the works, so stay tuned.

 

Tool as a service, or installed on your own server?

Do you already know whether you want the tool installed on your own premises, or as a care-free service? Usually the reason for installing the tool on your own premises is a concern about security. Another is that not all SaaS solutions offer the possibility to integrate with your other systems. As a rule of thumb, if you do not have a reason to host the solution on your own premises, you will have it easier as a service. The most important benefits of SaaS installations are:

  • No need to install or maintain the software. You can use that time for productive work.
  • You always have the latest version.
  • No need for long commitments. Buy what you need.

When you choose to sign up for a service, make sure you have a way to export the data relevant to you in a format you can use later on. Optimally this means some easily usable text-based format such as CSV or XML. To be on the safe side, contact your vendor and ask whether you can get all your data out if you decide to stop using the service. Even after a project is over, you might need the data: for example, when developing a new version of the software you might want to revise previous requirements or user stories rather than start from scratch, and perhaps review the found defects so you don’t make the same mistakes again.

 

Features to consider when choosing a test management tool

 

What features do you value?

Test management tool comparison sheet

When purchasing just about anything, be it jeans or a test management tool, the main point is that it should suit your needs. When deciding on a tool, a commonly used practice is to first list your requirements for the tool and then see how the tools fare against those requirements; the idea is that the tool should support your way of working.

A good practice is to look at the tool’s features with open eyes, meaning that you should be open to features you originally weren’t looking for. Sometimes you might find features that really help you even though you didn’t list them beforehand, or the tool might lack a certain feature but offer another good way to accomplish what you wanted the feature for in the first place. Making a really strict rating system for the evaluation will probably guide you towards features you are already familiar with, without taking into account that tools develop new features over time. Then again, you should value the features you really benefit from and give less attention to features that seem nice but that you are not sure you really need.

The above said, I have included an evaluation sheet here to help with your evaluation efforts. Evaluating many tools usually takes time, and this sheet also helps you compare the tools’ different features. When scoring features, you can give higher points to solutions that are more usable. For example, if your organization needs to import data often, you will value the ability to do so as easily as possible, without needing to install additional plugins or software. In the end, you don’t need to choose the tool that got the most points if your gut feeling tells you otherwise.
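To illustrate how such an evaluation sheet works, here is a minimal weighted-scoring sketch. The feature names, weights and per-tool scores below are made-up examples, not data from the actual sheet:

```python
# Hypothetical weighted evaluation: each feature gets a weight (importance)
# and each tool a 1-5 score per feature; the tool's total is the weighted sum.
WEIGHTS = {"csv_import": 3, "requirements": 2, "integrations": 2, "reporting": 3}

def weighted_score(scores, weights=WEIGHTS):
    """Sum of weight * score over the features a tool was rated on."""
    return sum(weights[feature] * score for feature, score in scores.items())

tool_a = {"csv_import": 5, "requirements": 3, "integrations": 4, "reporting": 4}
tool_b = {"csv_import": 2, "requirements": 5, "integrations": 3, "reporting": 5}

print(weighted_score(tool_a))  # 3*5 + 2*3 + 2*4 + 3*4 = 41
print(weighted_score(tool_b))  # 3*2 + 2*5 + 2*3 + 3*5 = 37
```

As the article notes, the totals are a guide rather than a verdict: adjusting the weights to match what your organization really needs matters more than the raw scores.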

You should also prepare for future needs. For example, if you have not used formal requirements in conjunction with testing, you might still want to ensure that your tool supports requirement management. It is also best to take into account the general use and maintenance of the tool. If the tool requires installation of client software or plugins, do those support your possibly heterogeneous workstation environment? If installations are needed, how do you deliver them to users’ workstations?

One possibly important feature group, closely tied to your way of working, is the capability for integrations. Which other tools do you use in your development, and which of those should be tied to testing? Generally you benefit most from the integrations that support your daily work. For example, if your developers use continuous integration, you might want an integration between the tools so that your testers can see what unit tests there are and their run status at any given moment. This helps people work together and thus saves precious time.

The architecture of the tool does not affect you directly, but you should look at how often the tool is updated. Tools with a steady update history are more likely to have updates in the future as well.

 

What do the tools cost in the end?

 

Cumulative yearly costs for different pricing models

One prime factor in our buying decision is of course the overall cost of the service. When we buy jeans, we see pretty accurately the price we are going to pay. With development-oriented software the final price is a bit more complicated: in addition to the actual cost of the tool, we have to take into account the hosting costs and how much time people need to spend on maintenance.

Different tools have different pricing structures. If we pay for named users, we need a lot more of them than floating licenses, depending on the way of working in your organization (typically, one floating license per three named users should be enough). If you buy software as a service, the yearly costs are simply twelve times the monthly fee. For on-premise installations, the best thing to do is to approximate the yearly costs including hosting, maintenance and updates. Approximate how much time the tool’s maintenance requires and take that into account too. If you buy the software outright, also take the maintenance fees into account.
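The arithmetic above can be sketched in a few lines. All prices here are invented for illustration; only the twelve-months rule and the rough one-in-three named-to-floating ratio come from the text:

```python
# Illustrative yearly cost model for a SaaS subscription, plus the rule of
# thumb from the text: roughly one floating license per three named users.
def yearly_saas_cost(monthly_fee_per_license, licenses):
    """Yearly cost is simply twelve times the monthly fee per license."""
    return 12 * monthly_fee_per_license * licenses

def floating_licenses_needed(named_users, ratio=3):
    """Ceiling of named_users / ratio, without importing math.ceil."""
    return -(-named_users // ratio)

# Hypothetical example: 10 named users at 20 units/month vs. the
# equivalent floating-license count at a (made-up) higher unit price.
print(yearly_saas_cost(20, 10))                             # 2400
print(yearly_saas_cost(45, floating_licenses_needed(10)))   # 45 * 4 * 12 = 2160
```

Plugging in your vendor’s real prices (and adding hosting and maintenance time for on-premise options) gives the cumulative yearly figures the graph above compares.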

Think about what the value of a minimal up-front investment is for you. With monthly-fee pricing you can stop using the tool or downgrade the licenses when you need to. Then again, by committing to a longer subscription you might get a discount.

 

What goes in comes out – and the format matters

All testing tools offer some kind of reporting system, but they are not equal. Make sure you get the information you need reported. A good reporting system is easy to use and gives you the information you need in a format that is easy to understand. Some tools have a reporting system that is comprehensive but requires a lot of work to set up, and the results might be easily misinterpreted. Then again, some tools show you the basic information but do not really tie it together into meaningful reports that help you make decisions. Ok, you might know which cases failed and passed on the last run, but what is the big picture – for example, how thoroughly have you tested a certain area of your software project? Or you might know the number of issues, but not the rate at which defects are closed.

 

Avoid shotgun marriages with the tool

You should also think of some sort of exit strategy. What happens when you want to change the tool or pause its usage? Will you get your data out in a format that can be used later? If not, it will be harder for you to change the tool should you grow unsatisfied with the service you get. The problem is almost the same for SaaS and installed services: if you don’t get the data in a usable format, which today means a .csv file, you cannot import it straight into another tool.

 

What do you need from your vendor and can the vendor deliver you that ?

A working tool in itself is never enough – the support and service level your vendor can deliver are just as important. If you need support, will you get help from the vendor? And if you do, what is the procedure? With some vendors, getting real help for technical problems is notoriously hard, as you need to fight through different tiers of customer support before reaching someone who can really help. And that takes time. Then again, if you go for a hosted service, you will need technical support less, as the tool is maintained by the vendor. Some companies also offer a more consultative touch to their support: How should you use your requirements effectively to support testing? What sort of workflow should you use in a certain project? This can be a lifesaver for companies where not all the people involved are experts, as you get real-world expert answers when you need them at a reasonable cost.

If you need training, consultation and support, you should value what the vendor is capable of offering. Can the vendor, or someone else you trust, provide you that? Do you have needs to customize the tool now or possibly later? For example, if you would like to integrate the tool with your own third-party product – can the vendor deliver that, and if yes, at what kind of price? The best vendors can support you in all your needs, and still do not cost you an arm and a leg.

 

Try before you buy

In an evaluation, it is essential that you have the possibility to try out the tool in a real-world scenario before making the decision. That is the only way to be sure that the tool works as you expect it to. This can sometimes be a bit tricky, as some features reveal how they really work only after using the tool in a real project. Many tools offer a trial period to test the software, which is good, but testing the software in a real-world scenario often requires more time. If you can, consider piloting the chosen tool in a project or two before committing yourself to big investments.

 



3.2.2014

Migrating from using Excel

This article gives you some pointers on how to migrate your current Excel-based QA and testing process to Meliora Testlab.

If you’ve been working in the IT development field, you’ve probably been in a project which is managed by hand with spreadsheets, most likely Excel. We’ve all been there. In this article we go through some challenges in migrating your current process to a modern test management tool and give concrete advice on importing your data.

 

Typical spreadsheet based testing scenario

To describe a typical manual test management scenario with spreadsheets let’s start off with the artifacts involved.

 

Artifacts in typical spreadsheet based testing scenario

 

To design an application or a service, some kind of (requirement) specification must be made. It is quite common for a specification to be written as a separate document or better yet, composed of some kind of requirements such as use cases, user stories or business requirements in a separate requirement management system.

When the application is implemented there must be a way to verify that the implementation works in a way designed. The specification is used to design a test plan which typically consists of test cases and steps to execute during the testing. These steps are often described in a spreadsheet which makes up the test plan for the testing.

In addition, the results of the testing must be tracked and reported somehow. Typically the spreadsheet based test plan is written in a way that it can be copied and filled out to form a test report of the testing.

 

Simple diagram on using Excel

 

The above diagram describes a common process for working in the spreadsheet based environment. Analysts designing the application are responsible for writing down the specification document. This document is then stored somewhere and studied by the persons responsible for testing.

A person responsible for managing the testing – the Test Manager in the diagram above – then familiarizes him/herself with the specification and writes down how the application should be tested to verify that it works in the way the specification describes. A spreadsheet-based test plan is written, stored somewhere and delivered to the testers doing the actual testing.

When a tester is going through the steps and testing the application, he/she copies and fills out the test report as a spreadsheet document. The results of the tests are filled in, and comments about the defects encountered are written down.

During the testing round it is quite common to encounter issues, such as defects. These should be fixed, so the tester sends the test report on to the developers responsible for fixing the issues. As the issues are resolved, the developer informs the manager. Fixed issues should be re-tested, so the process is cyclic: the test plan (and possibly the specification) is updated and a new re-testing round is done.

There are variations to this, of course. But typically, some kind of test plan is written as a spreadsheet, stored somewhere and used as a reporting template for interested parties.

 

Challenges

The scenario above poses some challenges which are often accentuated with the size of the project.

  • The management requires a lot of dialog and direct communication between the parties. For example, if a tester should only be testing some part of the test plan, communicating what should be tested to the tester is laborious.
  • Keeping the different assets and documents in sync is tedious. If and when the specification changes, it is often hard to know which test cases should be updated.
  • Linking the different assets together is often hard or impossible. For example, it is often hard to know what should actually be re-tested for some defect found during the testing.
  • Document management and versioning take a lot of work. How can you be sure that all the parties involved are reading the same versions of the documents? In addition, it is quite common for documents to go sour over time, so trusting that your test plan is up to date is difficult.
  • Getting to know the latest status of the testing is tough with a large collection of testing reports and other documents.
  • Managing the documents and assets is usually hard. When the documents are sent around and delivered to 3rd parties over time, it is practically impossible to control who has access to what.
  • Handling the documents takes a lot of work. Typically, the more people you have working, the more time you waste passing documents around and keeping them up to date.

 

Doing things better

To ease testing-related work and address the challenges mentioned above, migrating your process to a centralized test management tool is preferred. Because all your testing-related assets are stored centrally, the effort of passing documents around and keeping them up to date is reduced. It drives collaboration, as these tools usually offer features for assignments, commenting and sharing content. And with a centralized data store, reporting is a breeze, with up-to-date views on your current testing and issue status.

 

Simple diagram on using Testlab

 

The above diagram shows a typical scenario when a similar testing process is handled with a centralized test management tool, Meliora Testlab. A project in Testlab includes all quality related assets of a single entity under testing, such as a software project, a system or an application. It represents a clear separation between different projects in your company.

Analysts are responsible for compiling the features, needs and requirements for the application and creating the user stories, use cases, business requirements or other types of assets which fit your software process. These requirements are verified by test cases, which are added with execution steps. The test cases are then linked to the requirements they verify – achieving full transparency from application features through test cases and the test runs in which they are executed, all the way to the issues found in testing.

 

Migrating spreadsheet based data to Testlab

Migrating existing quality-related assets from spreadsheets to your Testlab project is easy: you can import requirements, test cases and issues.

To make the import as easy as possible, please download the simplified template files below.

  • Requirements (.xls, Excel)
  • Requirements (.csv)
  • Test cases (.xls, Excel)
  • Test cases (.csv)
  • Issues (.xls, Excel)
  • Issues (.csv)

The CSV files are exported as UTF-8 with a semicolon (;) as the field separator and a double quote (") as the quote character. The files are importable into Testlab as they are, without any further configuration, and work with format auto-detection.

Keep in mind that the files are simplified to contain only the most common fields of the assets. A detailed description of the supported format and of how to use the import features in Testlab can be found here.

 

How do I import the data?

To import data, download the template files above for the assets you want to import. Make yourself familiar with the format from the links above and fill out the files with the data you want to import.

Alternatively, if you already have your data in spreadsheet format alter it properly so that it conforms to the format Testlab supports.

When you are happy with the data, export it in CSV (Comma-Separated Values) format. Some guidance can be found in the following links for Excel and LibreOffice/OpenOffice.

Make sure you export the file in a format Testlab supports: a safe choice is to export the file in the UTF-8 or ISO-8859-1 character set, use a semicolon (;) as the field separator and a double quote (") as the text delimiter character.
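If you are producing the CSV file programmatically rather than from a spreadsheet, the same format can be written with Python’s standard `csv` module. This is a minimal sketch: the column names and file name are illustrative examples, not Testlab’s exact template headers.

```python
# Write a CSV in the format described above: UTF-8 encoding,
# semicolon field separator, double quote as the text delimiter.
import csv

# Example rows; the "name"/"description" columns are hypothetical.
rows = [
    {"name": "Login works", "description": "Verify login with valid credentials"},
    {"name": "Logout works", "description": "Verify logout; session is closed"},
]

with open("testcases.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["name", "description"],
        delimiter=";",
        quotechar='"',
        quoting=csv.QUOTE_MINIMAL,  # quote only fields containing ; or "
    )
    writer.writeheader()
    writer.writerows(rows)
```

Note how the second description contains a semicolon, so the writer wraps it in double quotes automatically – exactly the delimiter/quote behavior the import expects.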

Log into the project you would like to import the assets into. From the Testlab / Import menu, select the type of asset you would like to import and run the import tool on your CSV-formatted file. You have the option to run the import in “Dry run” mode first, which just validates your import file for any errors or warnings. The process output shows you how the import is done and, in the end, gives you a summary of any warnings encountered. When you are happy with the result, run the actual import by unchecking the “Dry run” option.

Keep in mind that for the assets to show up in your view, you might have to refresh the hierarchy tree you’ve imported the assets into.

 

I only have test cases

Do you only have test cases described in a spreadsheet, with no requirements defined? This poses no problem: a good strategy is to first import your test cases to Testlab and later on create the needed requirements in your project. You can bind the imported test cases to these requirements easily by dragging and dropping.

What you gain with this strategy is good visibility into your testing status later on. For example, let’s assume that you have a set of user-interface-related acceptance tests in your project but no related requirements described. You would just like to have visibility into how the different views in your application’s user interface are tested. You might then create a matching requirement in your Testlab project for each application view and bind the appropriate test cases to these requirements (views). After this, the Coverage view of your project (and other requirement-related reports) in Testlab will always show the testing status and issues matched to the views of your application. Voila!

 

Automated testing

If you’ve automated a part of your testing, you might have quite a few unit (or other automated) tests whose status you would like to follow. When pushing automated test results to your Testlab project, you should have a matching test case in Testlab for each of your tests. For this, the import is a handy way of creating the Testlab-side test cases: if you export a list of your unit tests to a text file, it is quite trivial to form a suitable CSV file, using the template described above, which can be used to import the test case stubs to Testlab.
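As a sketch of that conversion, the snippet below turns a plain text list of test identifiers into a semicolon-separated CSV of stubs. The identifiers and the `name`/`description` columns are hypothetical examples, not Testlab’s exact template headers:

```python
# Convert a plain text list of unit test identifiers into CSV stub rows
# using the semicolon/double-quote format described earlier.
import csv
import io

# Hypothetical exported list of unit test identifiers, one per line.
unit_tests = """\
com.example.tests.ui.LoginTest
com.example.tests.ui.LogoutTest
com.example.tests.api.OrderTest
"""

out = io.StringIO()
writer = csv.writer(out, delimiter=";", quotechar='"')
writer.writerow(["name", "description"])  # illustrative header row
for ident in unit_tests.splitlines():
    writer.writerow([ident, "Automated test stub for " + ident])

print(out.getvalue())
```

The resulting text can be saved as a UTF-8 `.csv` file and fed to the import tool like any other test case file.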

 

Advantages gained

There you have it. By moving your QA process from a highly manual, labour-intensive, spreadsheet-based process to a modern quality management solution, you drive collaboration, save time and gain better insight into the quality, testing and issue status of your projects. For any questions or advice needed, please contact Meliora – our team of experts is more than happy to help you.

 



24.1.2014

New feature release with importing and exporting

We are proud to announce a new major release of Testlab (Henrietta Lacks). This update brings you new major features such as importing and exporting data from spreadsheets, quick searching of assets in your project, enhancements to our plugins and many more!

 


 
Importing and exporting data

For easy adoption, Testlab now offers an import feature for your existing test cases, requirements and issues in CSV format, using office tools such as Microsoft Excel. Data from your projects can also be exported in the same format.

The feature supports importing requirements and their hierarchy, test categories, test cases including steps and issues. We also support importing values for custom fields, file attachments and comments.


 
Searching assets

Enjoy a fast and thorough search function to find the assets you are interested in within your project. Type in a keyword and you’ll get an organized list of matching assets for easy access.

The search is implemented as an always-accessible search field in your Testlab toolbar. The field should feel familiar to anyone who has used an embedded search such as Mac’s Spotlight.


 
Coverage enhancements

The coverage view is essential for knowing what has been tested for your application – how your design has been verified. We added some features for coverage reporting such as:

  • Filtering in features which have testing results in your project. This helps filter out all the features you never intended to test in the version or run you’re reporting against.
  • Content popups for related assets: By hovering your mouse over coverage-related test cases and issues, you’ll get an instant popup describing the hovered asset.
  • Added “Not in test run” status as a reported state in the results bar. This tells you if there are tests still to be scheduled for the feature.

 


 
Drag and drop files

Attaching files to your assets in Testlab cannot get any easier. You can now drag and drop files onto your browser to automatically upload and attach them. You can even drop multiple files to upload them all at once (the feature requires drop zone support and currently works in Chrome, Firefox, Safari 6+ and IE10+).

 

  

JIRA plugin enhancements

Our two-way JIRA plugin has been enhanced with the following features:

  • Issue types which are synchronized to Testlab can now be chosen. This way you can, for example, synchronize only defects and leave new features unsynchronized.
  • The fields which are not automatically mapped to Testlab’s fields can now be configured to be mapped to Testlab’s custom fields. This way you can synchronize custom field content or synchronize a missing JIRA field (such as estimate) to a custom field of your choice in Testlab.

 

Session management UI

Testlab now offers a user interface where administrators can see all active Testlab sessions. This way they can transparently see how many licenses are in use and optionally terminate sessions in their Testlab.

 

Other enhancements

In addition to the above, the version features numerous other enhancements. Some of most notable include:

  • Jenkins plugin class-level mapping: Test cases in Testlab can now be mapped to Jenkins’ run tests at the parent level. For example, a test case mapped to “com.example.tests.ui” in Testlab will be set as failed if any test in Jenkins with an identifier starting with “com.example.tests.ui” fails.
  • Reordering test case steps: Steps of a test case can now be reordered by dragging and dropping.
  • Adding issues for a test run’s test cases: A new icon button has been added to the test runs panel which allows you to easily add a new issue for a test case in an existing test run. For example, this way you can add issues to the test run later on if some were missed during execution.
  • More quick links: The user interface now contains more quick links between assets. For example, the test run panel’s “Added issues” counts can be clicked to open up the related assets in a popup window.
  • Requirement class icons: Different requirement classes (user stories, use cases etc.) now feature different icons. This way the different types of requirements are easily distinguishable in the user interface.
  • Rich text editor tab / de-tab: Pressing tab or shift+tab in the rich text editor now indents and outdents text for easier editing.
  • Better keyboard navigation: Keyboard navigation in the user interface has been improved, especially in the hierarchy trees.
  • Opening attachments inline: Attached files with browser-supported content types are now opened inline, directly in the browser window, instead of forcing the document to be saved to disk.
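The class-level mapping in the first bullet boils down to a prefix match. Here is a rough, hypothetical sketch of that logic (the identifiers and status strings are illustrative, not the plugin’s actual data model):

```python
# Prefix-based mapping: a Testlab test case mapped to an identifier prefix
# is marked failed if any Jenkins test under that prefix failed.
def mapped_status(prefix, jenkins_results):
    """jenkins_results: dict of test identifier -> 'passed' / 'failed'."""
    matching = {t: s for t, s in jenkins_results.items() if t.startswith(prefix)}
    if not matching:
        return "not run"
    return "failed" if "failed" in matching.values() else "passed"

results = {
    "com.example.tests.ui.LoginTest": "passed",
    "com.example.tests.ui.MenuTest": "failed",
    "com.example.tests.api.OrderTest": "passed",
}

print(mapped_status("com.example.tests.ui", results))   # failed (MenuTest failed)
print(mapped_status("com.example.tests.api", results))  # passed
```

This is why a single Testlab test case can track a whole package or class of Jenkins tests: one failing child is enough to fail the mapped case.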

 

Sincerely yours,

Meliora team


Henrietta Lacks (1920 – 1951) was an African-American woman who died of cervical cancer in 1951. Little did she know that her fatal tumors would hold a key to numerous medical advancements (polio vaccines and chemotherapy, for example). Her biopsied cells were found to thrive outside her body and were used to create the first immortal cell line for medical research, so her cells are still alive today – over 60 years after her passing. Her cells were used to create the so-called HeLa cell line for scientific research – currently the oldest and most commonly used human cell line. They were also the first human cells successfully cloned, in 1955.

Henrietta’s cells have been mailed to scientists around the globe for research, and scientists have grown some 20 tons of her cells. Henrietta is still living today, around the globe and bigger than ever. Find out more by listening to Radiolab’s great short story about the subject!

 



Tags for this post: announce features jenkins jira release video 


4.12.2013

Introduction video

Today, we’re happy to bring you a brief introduction screencast about Testlab. The video introduces you to the central concepts of Testlab and how they are presented in the user interface. We will take a glance at

  • requirements management,
  • test case design,
  • execution planning and test runs,
  • issue management and
  • test coverage.

Keep in mind that this introduction skips some central features of Testlab, such as reporting, but it should give you some insight into the use of Testlab. To view the introduction, please click below.

 

Introduction to Meliora Testlab



Tags for this post: demo example screencast usage video 


18.11.2013

Brand new Demo project available

We are happy to publish a renewed version of the Demo project for you all to familiarize yourself with Testlab. You can find the project by logging in to your Testlab and choosing the project “Demo”.

 

Boris has a problem

The new Demo project tells a story of Boris Buyer who has an online book store. He’s selling some books in his web store but has lately been noticing declining sales. He decides to develop his web store further with the aid of his brother-in-law, who owns an IT company specializing in web design. They hatch up a plan to develop Boris’ online business further in a few agile development milestones.

The data in the “Demo” project has all the requirements, user stories, test cases, test sets, test runs, testing results and issues to make it easier for you to grasp the concepts of Testlab. The project starts on January 1st, 2013, and has one milestone version completed and happily installed in production, a second milestone completed but still in testing, and a third milestone for which the planning has just started.

The timeline diagram above presents a rough sketch of the “Demo” project’s progress, showing the phases of the project at the top and the relevant assets found in Testlab at the bottom. The purple highlight on the timeline marks the moment in time the project is currently in.

 

Great, where can I read more ?

A detailed description, a story and hints on what to look for in Testlab can be found at

https://www.melioratestlab.com/demo-project

All new registrants get this link in the welcome message sent when they sign up.

 

Just to note – all existing customers and registrants have had their old Demo project renamed to “Old demo”, so any changes you’ve made to your previous demo project aren’t lost.



Tags for this post: demo example usage 


18.10.2013

Quality is not a feature

Quality is a central part of a software development process. It shouldn’t be seen as an expense, because higher quality correlates with lower development costs and faster delivery times. All this requires a vision about quality and good practices – not the all too common “we’ll find the issues at the end of the project and fix them” strategy.

Quality adds value, not expenses

Software projects that overrun their budgets and miss their deadlines are not necessarily in trouble before testing starts. A high number of issues found at this stage usually adds to the problems and is very expensive to fix.

A study by Capers Jones in the United States found that the average development cost of a function point is $1000, and maintenance adds another $1000. Software of higher than average quality has an average development cost of $700 per function point and $500 for maintenance. This adds up to a 40% saving over the software’s lifetime [Capers Jones: Applied Software Measurement: Global Analysis of Productivity and Quality, McGraw-Hill Professional, 2008].

For a software project costing $400,000 this could mean a saving of $160,000. Should you improve your return on investment – by improving your quality?
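The arithmetic behind these figures is easy to verify:

```python
# Per-function-point lifetime costs quoted above (Capers Jones, 2008).
avg_lifetime = 1000 + 1000   # $/FP, average quality: development + maintenance
hq_lifetime = 700 + 500      # $/FP, higher quality: development + maintenance

# Relative saving over the software's lifetime.
saving = 1 - hq_lifetime / avg_lifetime
print(f"lifetime saving: {saving:.0%}")           # lifetime saving: 40%

# Applied to the example project budget from the text.
project_budget = 400_000
print(f"saved: ${project_budget * saving:,.0f}")  # saved: $160,000
```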





19.9.2013

Testlab integrates to Confluence and Jenkins, “Toynbee Tile” released

Meliora is pleased to announce Testlab “Toynbee Tile” with integration plugins to Atlassian Confluence and Jenkins CI included. Please read more about the new features below.

E-mail notifications

The version includes fully configurable e-mail notice schemes. A scheme allows you to configure the rules and events for which Testlab sends your users notifications via e-mail.

E-mails are neatly formatted as HTML messages with asset details included. Each message also includes a handy button for jumping to Testlab and opening up the related asset.

In addition to the notification e-mails, all requirement, test case and issue related e-mails are now sent with the new template when the “Send as E-mail” function in Testlab is used.

 

Notice schemes and e-mailing rules

Testlab allows you to design your own schemes for e-mail notifications. These schemes are shared across all projects in your company and can be re-used where applicable.

A scheme is a collection of asset-related rules which define the events that trigger the sending of messages and the receivers of these messages.

We’ve included a set of default schemes in Testlab to make the use of e-mail schemes as easy as possible for you.
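As a sketch of the idea, a notice scheme can be thought of as a set of rules mapping asset events to receivers. The field and rule names below are made up for illustration only – they are not Testlab’s actual configuration format:

```python
from dataclasses import dataclass, field

# Illustrative data model only: names here are hypothetical, chosen to
# show the shape of a notice scheme, not Testlab's real configuration.

@dataclass
class Rule:
    asset_type: str   # e.g. "issue", "requirement", "testcase"
    events: list      # events that trigger sending, e.g. ["created"]
    receivers: list   # who gets notified, e.g. ["assignee", "reporter"]

@dataclass
class NoticeScheme:
    name: str
    rules: list = field(default_factory=list)

    def receivers_for(self, asset_type, event):
        """Collect the receivers of every rule matching the given event."""
        found = set()
        for r in self.rules:
            if r.asset_type == asset_type and event in r.events:
                found.update(r.receivers)
        return found

scheme = NoticeScheme("Default", [
    Rule("issue", ["created", "status_changed"], ["assignee"]),
    Rule("issue", ["created"], ["reporter"]),
])
print(sorted(scheme.receivers_for("issue", "created")))  # ['assignee', 'reporter']
```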

 

 

Atlassian Confluence integration 

Toynbee Tile brings with it an Atlassian Confluence plugin. The plugin includes macros for your Confluence site which allow you to include

  • lists of requirements,
  • lists of test cases and/or
  • lists of issues

to your Confluence pages.

Macros are fully configurable, with options to pick the relevant assets from your Testlab projects, and also render an easy-to-use “Open in Testlab” action for each asset for an easy transition from Confluence to Testlab.

 

Jenkins CI integration 

Jenkins CI is a popular open-source continuous integration server. A typical Jenkins build often includes automated tests which might be for example unit tests or integration tests.

Meliora Testlab Jenkins plugin allows you to publish the results of the automated tests to your Testlab project. The plugin pushes the results to Testlab by creating a test run with automated tests mapped to Testlab’s test cases.

The plugin also allows you to automatically open up issues in Testlab if tests in your Jenkins build fail. Issues can be automatically assigned to a user in your Testlab project.

If you are using Jenkins for your development this makes Testlab a great choice for your test management efforts.
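Conceptually, the plugin’s publishing step maps each automated test result to a Testlab test case and bundles them into a test run. The sketch below illustrates that mapping with a JUnit-style report; the payload fields (and any endpoint you would post it to) are hypothetical, not Testlab’s actual API:

```python
import xml.etree.ElementTree as ET

# Hypothetical example: parse JUnit-style results and build a test run
# payload of the kind a CI -> test management bridge would push.
JUNIT_XML = """<testsuite name="store">
  <testcase classname="store.CartTest" name="testAdd"/>
  <testcase classname="store.CartTest" name="testRemove">
    <failure message="expected 0 items"/>
  </testcase>
</testsuite>"""

def junit_to_run(xml_text, run_name):
    """Map JUnit test cases to a run payload with PASS/FAIL results."""
    results = []
    for tc in ET.fromstring(xml_text).iter("testcase"):
        failed = tc.find("failure") is not None
        results.append({
            "testcase": f'{tc.get("classname")}.{tc.get("name")}',
            "result": "FAIL" if failed else "PASS",
        })
    return {"run": run_name, "results": results}

run = junit_to_run(JUNIT_XML, "Nightly build")
print(run["results"][1]["result"])  # FAIL
```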

 

In addition to the major changes described above, some notable changes include:

Project’s time zone

Each project now has a default time zone which is used, for example, to format dates and times in the e-mail messages sent from Testlab. This information will also be used in reporting in the future.

Snappier user interface

Toynbee Tile includes an improved drawing strategy for the browser client. This should make the user interface snappier than before in everyday use.

Improved printing

When printing out individual assets (requirements, test cases or issues) from Testlab, we now use the same layout as in e-mails.

That’s it for now! We hope the integrations to Confluence and Jenkins make the use of Testlab even easier for you in your enterprise. All plugins will be released to their respective repositories (Atlassian Marketplace, Jenkins update center) as soon as possible, but in the meantime all plugins are available to our customers by request.

Sincerely yours,

Meliora team



Toynbee tile? The Toynbee tiles are tiles of unknown origin embedded in the asphalt of streets in numerous major cities in the United States and South America. All tiles contain some variation of the message “TOYNBEE IDEA, IN MOVIE 2001, RESURRECT DEAD, ON PLANET JUPITER“. Since the 1980s, several hundred tiles have been discovered.

What? Yes, someone, or some group of people, has been embedding this message in the streets of major cities for 30 years now. Why? Nobody exactly knows, but some theories exist, and there is a nice documentary about this intriguing subject for all interested parties.



Tags for this post: announce confluence features jenkins plugin product release 


6.9.2013

Test requirements to the rescue!

Bridging the gap between customer needs and design is often difficult. This blog post addresses some of these issues and introduces you to the concept of a “test requirement” to help you tackle this problem.

Too often it happens that the project’s outcome is not what the customer would have wanted. There are numerous reasons for this, but two major reasons can be seen over and over again:

1) Traditional (V-model) projects relying on pre-written business requirements run into misunderstandings when designing test cases from these requirements.

2) Agile projects tend to have issues with the visibility of quality.

Test requirements help the customer and the supplier talk the same language about what is needed of the software, while at the same time giving both a better understanding of the software’s quality.

 

Common problems with the V-model projects

Before going into what these test requirements actually are, let’s review some commonly encountered issues in testing projects, starting with the V-model. First off, very often the requirements are too vague to specify exactly how the software works and how it can be tested. This is true even when serious effort is put into writing the requirements. It is understandable: it is not cost-efficient, if even possible, to write requirements so good that they describe unambiguously what the customer needs. Also, on bigger projects these requirements tend to change, which increases the expenses of the project.

For a test designer a changed requirement can be a nightmare if the information exists only in somebody’s e-mail or meeting minutes. As the supplier of the software project wants to maximize its profits, its interest is usually to fulfill the requirements but not much more. This often leads to a situation where the customer’s testers report bugs because the software does not work as the customer wants, but the supplier does not want to fix them for the fixed price, as the requirements do not directly specify how the feature should work.

 

Developing and testing using agile methods

If V-model projects aren’t always that rosy, agile testing is not completely without its problems either: the customer is expected to allocate time to the project so that the implementation is always going in the right direction. It is accepted that even if the project makes some wild implementation guesses or experiments that are later scrapped and rewritten, the project’s implementation as a whole can still be very effective. This stays true if the customer steers the project in real time in the right direction, but often the project learns about the customer’s needs later than would be optimal.

If the project is a bigger one and has many testers, having the test team test the right things at the right time is essential, and having visibility into what has actually been tested allows the test manager to accomplish that. The lack of this visibility is sometimes a problem on agile projects. The problem is often worsened if and when people on the project change while documentation of the implementation is kept to a minimum.

 

The definition of test requirement

Ok, I’ve stated some often-seen problems in testing and development, and supposedly I should have some ideas on how to reduce them with test requirements – but what are these test requirements? In short, they describe what will be tested, but not how it will be tested. What if you are not using requirements as such but, for example, user stories, you might ask? It does not change a thing – if you can use something, a user story for example, as a basis of design, you can use it as the source of a test requirement too. For the sake of simplicity I will call all these sources requirements for the rest of this blog post.

A test requirement is short – it should describe one relatively independent area of the requirement it is based on. It is agile – it can change, or it can be rejected if needed. For example, a requirement like “User can pay with a credit card” might be split into test requirements such as “Payment succeeds with a valid card” and “Payment is rejected with an expired card”. A test requirement can carry additional information such as a review status or importance. All of this can be utilized in decision making and reporting.

 

The “how” of test requirements

Now we know a little about what test requirements are, but how do we use them? First off, there are numerous correct ways; you can pick the practices that best fit your development environment. The essential thing to remember when working with test requirements is to start working on them as early as possible – you can start as soon as you have the requirement. Just split the requirement into the pieces you plan to test. You can include assumptions about how things will work. Sometimes your assumptions will be wrong, but it doesn’t matter: as the scope of a test requirement is small, they are super fast to review later. More often, the test requirements act as a valuable communication tool and increase the understanding between the customer and the supplier.

I mentioned reviewing. Test requirements should be reviewed by both the customer and the supplier. It is an extra step, yes, but a fast and efficient one. It can happen in any phase of the project, but the sooner the better. The point is that after the review both parties will have a better understanding of the upcoming solution.

 

Benefits of using test requirements

There are also other ways to work with requirements and to cooperate between the involved parties, so what are the benefits of using test requirements?

  • It is simple. Splitting the requirements is an easy task, and communicating with them is even simpler. ”Is this what you want?” Yes/No. ”Can it be done?” Yes/No/Yes, but it will be expensive/Depends. (So no, you will not always get the definitive answer right away, but having an answer for some items already helps a lot.)
  • It is effective. For every detail you define before development, you have a fair chance of saving time on unwanted design, implementation, testing, fixing and building.
  • Splitting the requirements down into small pieces makes talking about them easy, and using a format which defines how they will be tested is intuitive. Having this level of detail in the actual requirements is not good, as writing such requirements would be too time-consuming and would probably lead to suboptimal design. It is better to talk the requirements through with the development team. Waiting until the development team has implemented the general requirements is not good either, as the outcome likely isn’t optimal for the customer.
  • Defining test requirements adds a visible and measurable layer that can be used to track the development and quality of the project.
  • Using test requirements is a good way of involving the test team early on and using what they are good at – predicting what might go wrong. An experienced tester recognizes the possible weak points of a design. And when you have test requirements ready, designing the actual test cases is easier. This is good, as that phase of the project tends to be busier – and now you have already done part of the work.
  • Another advantage is the added precision when discussing found defects. When a defect is tied to a precise, reviewed test requirement, it is easier for the tester to point out why the defect has been reported.

 

Working with test requirements using Meliora Testlab

Meliora Testlab has been designed around the concept of test requirements and gives you a powerful toolset to design and report test requirements and to plan test cases from them. Testlab also contains strong collaboration features which enable projects to define either stricter or looser workflows – whatever suits the company. Testlab’s automatic notification features make sure people get notified when they should act. This makes the manager’s work easier.

 



25.6.2013

Meliora Testlab “Lonely Cicada” released

We are proud to announce a new version of Meliora Testlab “Lonely Cicada”. This version brings you many usability enhancements and notably, a possibility to integrate your Testlab to your own central authentication user registry.

The version is live immediately and all the features and changes described below are now available! 

Centralized authentication 

Lonely Cicada implements support for an external authentication source with SSO (single sign-on or, more appropriately, enterprise reduced sign-on). This allows your Testlab to be integrated with your organization’s own user registry, for example Active Directory. This way, password authentication is made against your centralized credentials, reducing the time spent at the login prompt.

This feature is available for all Enterprise Suite installations. You can read more about the feature here.

 

Streamlined requirement management

This version implements a set of changes to requirement management functionality which integrates requirement design and test case design more deeply.

  • The test case hierarchy is integrated into the requirement design view. This allows easier transitions between requirements and test cases.
  • Verifying relations can be dragged and dropped directly between the trees.
  • Test case editing can be initiated directly from the requirement design view, and adding a new verifying test case is now easier via this view.
  • The requirement tree now holds a “Show verifying test cases” checkbox which, when preferred, shows the requirements’ verifying test cases in the requirement tree.
  • Comments have been moved from a separate tab directly to the details view to make them more accessible.
  • The “Search:” filter in the requirement tree now searches the content of custom fields.
  • When adding a new requirement, it can be added directly in the desired workflow status. Previously, the asset was always created in the initial status, and a separate edit was required to approve the requirement.

We believe these enhancements make working with requirements easier for all our users.

 

Quick links

The Testlab user interface now holds a set of so-called quick links which allow you to quickly move to the referred asset.

  • The coverage view’s tree is now interactive, so the actual coverage data can be drilled down to individual assets. You can click test results and calculated summaries to open up a floating panel giving access to the included assets.
  • Clicking the quick link icons in
    • the test set editor in execution planning,
    • test run details in the test runs view, and
    • the issue listing and issue form
    takes you directly to the clicked asset.
 

Persisted UI settings

Listings and grids in Testlab allow you to pick the columns and to sort and group the content as you wish. This is now enhanced: the settings you’ve chosen are saved in your browser. This way, the views open up as you set them up in your last session, without the need to reset them again.

 

 

Issue management enhancements

For issue management, we’ve:

  • Added a button to the issue listing which adds the test case related to the issue to your work set. This makes it easier to pick test cases for your next test run via the issues view.
  • Added a “severity” field for issues. If your project’s issue management process requires a separate severity in addition to the priority field, just enable this field in your project’s settings and you are good to go.
  • Added a “Trivial” level for priority.
  • The title bar of the issue listing now shows the count of issues matching the filter.
  • Added quick links to related assets such as the issue’s test case and test run (see above).

 

Switching projects

When working with Testlab you are always connected to a specific project. Previously, switching between your projects required you to log in again. The new version allows you to quickly switch between the projects you have access to by selecting the project from the Testlab > Change project menu.

 

Other usability enhancements

In addition to the above, some notable enhancements include:

  • Test case design:
    • Comments have been moved directly to the details tab. This makes the comments more accessible.
    • The “Search:” filter in the test case tree now searches the content of custom fields.
    • Test cases can now be added directly in the desired workflow status. Previously, a separate edit was required, for example to approve the test case.
    • Test case related reports can now be generated via context menu of the test case tree.
  • Execution planning:
    • The listing of test sets is no longer grouped by default, to make the view more accessible.
    • Test set editor now shows the priority of added test cases.
    • The test set tree now has a hover tooltip showing the description of the hovered test set.
  • Test runs:
    • Test runs can now be permanently deleted. A “testrun.delete” permission is required, which is by default granted to all TESTMANAGERS (and of course, administrator users).
    • Test run listing is now grouped by version by default.
  • File attachments:
    • Streamlined attachment controls: Attachments are now listed inline and removed by clicking the cross-symbol after the attachment.
    • Attachment history now shows the file size of the added attachment.
  • Permissions:
    • “user.add” permission now grants the editing permission for users.
    • All users in PROJECTMANAGER roles now have the permission to add users, but can only grant access to the project they are currently logged in to.
  • Custom fields:
    • Added a new custom field type which allows you to select a user.

We would like to thank you for all your feedback and we hope this version makes the use of Testlab even more productive than before.

Sincerely yours,

Meliora team


Lonely Cicada? There is a genus of cicadas in North America (Magicicada) which spend most of their 13- or 17-year lives underground. After 13 or 17 years, mature cicada nymphs emerge at any given locality, synchronously and in tremendous numbers. After such a prolonged developmental phase, the adults are active for about 4 to 6 weeks. Within two months of the original emergence, the life cycle is complete, the eggs have been laid and the adult cicadas are gone for another 13 or 17 years.

Time after time something disturbs this cycle for individual cicadas, and a single cicada can emerge from its underground safe haven out of sync with the 17-year cycle, entering the brave new world without the company of any of its kind. If something should define lonely, we think this is it.

Happen to be on the east coast of North America? There is an emergence of cicadas happening right now, and you can actually build a sensor to track cicadas at your location. Or just track them via an online map.



Tags for this post: announce features release 


4.6.2013

Testauspäivät 2013

A testing conference for testers, other developers and test/project managers. Two tracks of talks and one for exercises.

We are there today – if you see us, let’s have a talk :)



31.5.2013

Easier authentication for Meliora Testlab

Are your users affected by password fatigue from entering different username and password combinations? Do you want to reduce IT costs through fewer help desk calls?

Meliora Testlab supports an external authentication source with SSO (single sign-on or, more appropriately, enterprise reduced sign-on). This means that your users log in to Meliora Testlab with their most used username and password – and at the same time spend less time at the login prompt.

 

Benefits

The most valuable features of Testlab’s new authentication provider are:

  • Integration with your existing Active Directory
  • Support for various other user databases and authentication methods such as LDAP, DBMS, RADIUS, or X.509 certificates
  • Improved security, as user passwords are not exposed outside your company network
  • The ability to enforce existing uniform enterprise authentication policies

 

Example case

Your organization uses Microsoft Active Directory as its authentication source and Meliora Testlab as a SaaS service. Meliora provides you with a simple Testlab Authenticator module (the green module in the picture below) that is responsible for authenticating your users against Active Directory and providing identification data securely to Testlab.

A technical overview of the components and interactions between them is shown in the following picture.

 

When an unauthenticated user accesses Testlab, the user’s browser is automatically redirected to your Testlab Authenticator. If needed, the Authenticator asks for user credentials, validates them against Active Directory and creates a service ticket which is passed to Testlab via redirection. Testlab then validates the ticket against your Authenticator and, if it is valid, grants access to Testlab.

The lifetime of the Authenticator ticket is configurable and is 10 hours by default. This means that if the same user logs in to Testlab again during this time, access is granted automatically without the need to enter the credentials again.
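For the CAS case, the validation step described above can be sketched as follows. The Authenticator address and service URL below are hypothetical examples; the /serviceValidate endpoint and the XML response format come from the standard CAS 2.0 protocol, not from Testlab itself:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical deployment addresses for illustration only.
AUTHENTICATOR_URL = "https://sso.example.com/cas"
SERVICE_URL = "https://yourcompany.melioratestlab.com/"

def validation_url(ticket):
    """Build the CAS 2.0 URL the service would call to validate a ticket."""
    q = urlencode({"service": SERVICE_URL, "ticket": ticket})
    return f"{AUTHENTICATOR_URL}/serviceValidate?{q}"

def parse_validation_response(xml_text):
    """Extract the authenticated username from a CAS 2.0 XML response."""
    ns = {"cas": "http://www.yale.edu/tp/cas"}
    root = ET.fromstring(xml_text)
    user = root.find("cas:authenticationSuccess/cas:user", ns)
    return user.text if user is not None else None

# Example of a successful CAS 2.0 validation response.
SAMPLE = """<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">
  <cas:authenticationSuccess><cas:user>boris</cas:user></cas:authenticationSuccess>
</cas:serviceResponse>"""

print(validation_url("ST-12345"))
print(parse_validation_response(SAMPLE))  # boris
```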

 

Supported technologies

For SSO, we support using the standard SAML 2.0 WebSSO profile or alternatively, CAS.

 

Security considerations

There are many security issues to consider when implementing an authentication solution. Meliora has made every effort to create an excellent solution for secure authentication.

Here are some highlights:

  • User credentials never leave the company’s intranet
  • For CAS, a single simple and standard web application component is installed in the company’s network (Testlab Authenticator, TA)
  • The solution is based on tried and well-tested technologies (the SAML 2.0 WebSSO standard or CAS)
  • No direct calls from the Internet to the company’s directory server (Active Directory or another authentication source)

Please contact us for more information. We are happy to tell you more about the solution.

Make your life easier and get super-easy authentication to your Testlab!

Regards,
Meliora team



Tags for this post: announce features plugin product security 


26.4.2013

New version of Testlab released [Pitch drop]

Meliora is pleased to announce an update for Meliora Testlab codenamed “Pitch drop”. This version brings you a number of enhancements and optimizations and is available immediately for our hosted version customers. The version includes many features requested by our clients and we would like to thank you for all the feedback!

Jira integration

The version offers features for integrating your Testlab with the Atlassian Jira issue tracker. The implementation includes a true two-way synchronization module which enables you to synchronize issues in real time between your Testlab and Jira deployments. This way, you can continue to use your established way of working with issues in Jira while being more productive with requirement and test case planning and test execution in Testlab.

This feature is available for all Enterprise Suite installations. You can read more about the feature here.

 

Real time project events

To keep you up to date with events in the project you are currently logged in to, Testlab delivers real-time events. The events are presented as floating notifications at the top right corner of the user interface. Clicking a notification opens up the associated asset in your Testlab.

In addition to regular project events, such as changes to project assets, all assignments are delivered as sticky notifications. This means that when an asset is assigned to you, the notification sticks to your user interface until you click it to open up the asset or dismiss the notification.

 

Notice board

A new dashboard widget is offered to easily communicate with the users in your testing project.

A project notice board offers a place to store your shared project assets and publish announcements. Announcements can be targeted to specific user roles and optionally tagged as important, which delivers the announcement to all users in a pop-up window on their next login. By default, test managers and project managers can manage the content of the project’s notice board, but new permission keys are included to control who can publish content. In addition, all administrator users of your company can publish company-wide announcements which are shown to users in all projects in your company.

 

Easier re-testing

Testlab’s test case hierarchy tree is enhanced with a new set of filters which make it easier to highlight the test cases for re-testing.

The filters make it easy to pick all non-passed test cases from the current test case hierarchy as a whole, for a specific version, or for a single test run. When re-testing previously tested features, use these filters in the execution planning view to easily add test runs for test cases previously failed, blocked or not tested.

Combine the above with the use of your personal work set in execution planning and adding test runs for re-testing couldn’t be easier!

 

Streamlined issue editing

The editing window for issues has a new streamlined layout which makes commenting on issues easier.

 

Commenting enhancements

When comments are published, the content is now automatically linkified so that all external links are rendered as clickable links. In addition, comments can now be deleted.

 

Performance improvements

In addition to the features described above, Pitch drop includes some magic under the hood to make the use of Testlab even snappier than before, with network transport enhancements and more intelligent on-demand refresh logic for asset hierarchies.

Sincerely yours,

Meliora team


Pitch drop? In 1927 Professor Parnell heated a sample of pitch and poured it into a glass funnel with a sealed stem. Three years were allowed for the pitch to settle, and in 1930 the sealed stem was cut. From that date on the pitch has slowly dripped out of the funnel – so slowly that now, 80 years later, the ninth drop is only just forming. In the 80 years that the pitch has been dripping, no one has ever seen a drop fall. You can try your luck here.



Tags for this post: features jira product release 


5.4.2013

Hello World [@Testlab]

 

3 minute guide to running your first test case with Testlab

So, you just registered and you want to run tests? This intro guides you to create and run your first test case.

Prerequisites:

  • your own Testlab site: for example https://yourcompany.melioratestlab.com
  • you have a user account for Testlab (with company administrator privileges)

Register if needed here: https://www.melioratestlab.com/sign-up

 

Step 0: Create a project

Log in to your Testlab and choose Testlab –> Manage Projects…

Click the green plus sign in the lower left-hand corner to create a new project.

Fill in details: for now, choose

  • name: Hello world,
  • project key: HLO and
  • Workflow: No review

See the online manual if you want to check the details of the different workflows and/or fields.

Click Save to create the project, then log out and log back in to your new project.

 

Step 1: Create a requirement

All test cases should be based on requirements, so we’ll define a requirement for the domain under test.

Choose the Requirements view from the left panel and, from the context menu, New –> Requirement…

You can use keyboard shortcut Ctrl+N to do the same.

Enter requirement data:

  • requirement name: World should be greeted and
  • description: As a world, I should be greeted.
  • (optionally other attributes as you see fit)

In this case, the requirement summarizes in business terms what should be achieved.

Save the requirement by pressing the shortcut Ctrl+S or by clicking the Save button.

Reviewing a defined requirement is an important step and is normally done as a separate phase by a client or product owner. For now, review your requirement and, when satisfied, mark it ready by choosing Edit (shortcut Ctrl+E) and Mark as ready.

 

 

Step 2: Create a test case

You want to plan *how* to test your requirement. We’ll create a test case to describe exactly how we are going to verify that the requirement is fulfilled.

Choose the Test case design view from the left panel and use either the context menu or the keyboard shortcut Ctrl+Shift+N to create a new test category. Enter the category name:

  • category name: human interface

Then you are ready to create a new test case. Press Ctrl+N (or use the context menu again) to add a test case with the following data:

  • name: Listen up!
  • tab Steps: add a step by choosing Add step…. Enter Listen carefully and You hear hello. You can easily create more test case steps by using the Tab key to move to the next field in the step editor.

Save the test case by clicking Save (Ctrl+S). Next, review your test case and, when you are satisfied, mark it ready by choosing Edit (Ctrl+E) and Mark as ready.

Choose the Test coverage view to link your newly created test case to your requirement. You can do this by dragging the test case (lower left corner) onto the requirement (upper left corner). Now the test case is linked to the requirement. Basically, this link means that requirement A should be tested with test case B. Normally, every requirement should have one or more test cases that *verify* that the requirement is fulfilled correctly.

You should immediately see the effect of linking the test case to the requirement on the right side of the screen. Requirement REQ1 now has one test case (but no results yet; we’ll get there in 30 seconds).

 

 

Step 3: Make a test run

Choose the Execution planning view from the left panel and, from the top left, choose your personal working set Work set.

Now you can add the test case you want to execute to the lower right-hand corner (labeled as test cases for test set Work set). You can do this by expanding the lower left-side tree of test cases and dragging the test cases to the test case list at the lower right.

You could also drag whole folders to the test case list: this would add all test cases in the folder to the list.

When you have dragged the Listen up! test case to the list, click the Add test run… button.

This will open a pop-up window and create a test run with all checked test cases so that they can be executed. Enter a test run name:

  • Title: Sprint 1 run

and click Save. There is now a new test run in the upper right side with the status Not started.

You could also have entered some other relevant data, such as an expected completion date for this test run, if you were planning the testing rounds well in advance.

 

 

 

Step 4: Run test(s)

We are ready to go: choose the Test runs view and click the just-created test run named Sprint 1 run on the upper right side. If there were dozens of test cases, you could select a subset of test cases to run now, but we are going to select them all! Just select all test cases from the lower right and click Run selected… to run the selected tests.

This will open a test execution window. The data for each test case will be presented to you: after clicking through the test case steps, you have to choose either

  • Pass (test case executed successfully) or
  • Fail (test case did not run successfully)

for the test case result. Other (normally rarer) options would be

  • Blocked (test case cannot be run at the moment) or
  • Skip (skip running of test case: there is no result recorded for this test case)

If there were a problem with the test case execution and the expected result was not achieved, it would be good practice to *always* add an issue by clicking the Add issue… button.

For now, just press Pass to record a successful result for your test case.

Voilà, you have completed your first test run and recorded the results to Testlab!

 

 

Bonus step (4+1)

To see where your project stands with regard to testing progress, choose the Test coverage view again. The right-hand side will show you the project’s testing status in relation to the project requirements. You should see that your single business requirement is well covered and has passed all tests (=Green!).
Happy testing :)



Tags for this post: usage 


28.3.2013

Testlab integrates with Atlassian Jira

We are happy to announce an integration module for integrating your Testlab with Atlassian Jira issue tracker. We’ve been working hard to bring you a true two-way synchronization module which enables you to synchronize the issues in real time between your Testlab and Jira deployments.

Advantages

Simple: this module enables you to seamlessly integrate Jira issues with Testlab’s test cases and other test management data. If your organization already has an established method for working with issues, with this plugin you can continue to handle issues like you are used to *and* be more productive with test case planning and test execution with Testlab!

How it works

The module is implemented as a plugin at Jira side and as an integration interface for the plugin at Testlab side. The following diagram describes the synchronization principle at high level.

Both peers have an event listener which publishes changes in issues in real time. This means that when an issue is updated, commented on, or a file is attached to it in Jira, the change is immediately pushed to Testlab and vice versa. As said, we currently sync issue details, comments and attached files.

In addition to the real-time push, a scheduled synchronization job runs every 15 minutes. Jira pushes all issue updates since the last synchronization point in time to Testlab; Testlab processes the data and replies with a push of its own issue changes in similar fashion. This allows us to synchronize the issues between the systems even if the real-time push fails for some reason, for example due to network problems. When overlapping updates occur, you can choose a conflict resolution strategy to always honor the latest data, the Jira data or Testlab’s data.
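The scheduled-sync idea above can be sketched in a few lines. Everything here is illustrative (function names and data shapes are hypothetical, not Testlab's or Jira's actual API): each peer selects the issues changed since the last synchronization point, and a configurable strategy resolves updates that overlapped on both sides.

```python
def changes_since(issues, last_sync):
    """Select issues modified after the last synchronization point."""
    return [issue for issue in issues if issue["updated"] > last_sync]

def resolve_conflict(local, remote, strategy="latest"):
    """Pick the winning version of an issue updated on both sides."""
    if strategy == "jira":
        return remote
    if strategy == "testlab":
        return local
    # "latest": honour whichever side was modified most recently
    return local if local["updated"] >= remote["updated"] else remote
```

The `updated` values just need to be comparable timestamps (e.g. Unix epoch seconds); the key point is that the scheduled job acts as a catch-up net under the real-time push.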

The actual data between Jira and Testlab is pushed through a secured and key-authenticated network channel. Just make sure your Jira allows secured HTTPS to make the push from Testlab to Jira secure.
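As a rough illustration of what "key authenticated" can mean in practice (the actual Testlab/Jira plugin protocol is not documented here, and these function names are hypothetical): one common pattern is to sign each pushed payload with a shared secret key, so the receiving peer can recompute the signature and reject anything that does not match.

```python
import hashlib
import hmac
import json

def sign_payload(payload: dict, secret: bytes) -> str:
    """Compute an HMAC-SHA256 signature over a canonical JSON body."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_payload(payload: dict, secret: bytes, signature: str) -> bool:
    """Constant-time check that the signature matches the payload."""
    return hmac.compare_digest(sign_payload(payload, secret), signature)
```

Combined with HTTPS for transport encryption, this gives both confidentiality and assurance that the push really came from the peer holding the key.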

Great! How do I get it?

The integration module is available to all our enterprise suite customers.



Tags for this post: announce features jira plugin product 


15.3.2013

Testlab Yamaguchi is *ready* – in case you missed it

We are pleased to announce the official launch of Meliora Testlab, codenamed Yamaguchi, for the following browsers:

  • Internet Explorer (version 8 or later)
  • Mozilla Firefox
  • Apple Safari
  • Google Chrome

Testlab makes it easier than ever to test your software – and enables you to improve your customer satisfaction.

For any organization it is essential that its quality management process is consistent, repeatable and effective. This leads to improved application quality and more effective and cost-efficient software project implementation.

Meliora Testlab gives the test team a centralized tool for managing requirements, test cases, defects and testing project tasks. As a modern browser based test management solution, Meliora Testlab provides better visibility to your test management process and assets ensuring the delivery of high quality software.

So… sign up and try it now

And note – if you have a hosted version of Meliora Testlab – congratulations, you have already been upgraded!

As a launch celebration we are currently offering all new customers two user licences for the price of one for three months, as long as you sign up before the end of April. Benefit from this offer now and enjoy four very affordable months of Testlab when combined with the 14-day trial period for all sign-ups!

 

Look forward

We as a team have worked quite hard for a few months to bring you this release – thank you for your feedback – and there are some very interesting features coming in the future. One notable feature for enterprise users is integration with Atlassian’s Jira. In addition to this, we are looking into a Jenkins plugin which would automatically feed the results of automated tests from your CI environment into Testlab.

We will use this blog as a communication medium and we will keep you updated.

 

Sincerely yours,

Meliora team


Yamaguchi, you ask? Well, we like to pick codenames for our releases from something interesting, and in this case it comes from the unimaginable story of Tsutomu Yamaguchi. As the guys from Radiolab put it, “On the morning of August 6th, 1945, Tsutomu Yamaguchi was in Hiroshima on a work trip. He was walking to the office when the first atomic bomb was dropped about a mile away. He survived, and eventually managed to get himself onto a train back to his hometown … Nagasaki.” Have a blast and listen to Radiolab’s great podcast of his story after you sign up and familiarize yourself with Testlab.



Tags for this post: product release 


8.3.2013

Press release: Meliora Testlab

Meliora launches Testlab – a hosted enterprise-grade quality and test management tool to improve your quality.

For an organization it is essential that its quality management process is consistent, repeatable and effective. This leads to improved application quality and more effective and cost-efficient software project implementation. Meliora Testlab gives the test team a centralized tool for managing requirements, test cases, defects and testing project tasks. As a modern browser-based test management solution, Meliora Testlab provides better visibility into your test management process and assets, ensuring the delivery of high-quality software.

Meliora Ltd, a company founded to provide commercial services in the quality management field, today announced the launch of www.melioratestlab.com, a hosted browser-based test management tool for managing your application lifecycle, including requirement, test case and issue management features. With a concurrent licensing model, Testlab is well suited for a team of any size: a small team working on a single product or a large enterprise.

By combining requirement, test case and issue management into a single product, the quality process is more transparent and easier to manage. It drives collaboration among development teams and their members, quality functions, business analysts and project overseers. Operating from one test management platform standardizes requirement definition and management, release and build management, test planning and scheduling, issue tracking and reporting – all with complete traceability.

As a hosted solution, Meliora Testlab is easy to access and effortless to set up. You’ll get your Testlab running in minutes with a 14-day free no-obligations trial by visiting www.melioratestlab.com. For enterprise customers with specific needs, an on-site deployment is offered. Additional quality- and Testlab-centric services are available, including Testlab customer support, training and fast-track start packages to get you on your way in the best way possible.

With Meliora’s customer-service-oriented team, continuous agile development practices and competitive pricing, Meliora Testlab is a low-risk and affordable investment to increase your product quality and revenue.

For more information on Meliora Testlab, please visit https://www.melioratestlab.com

About Meliora

Meliora’s mission is to provide best-of-class quality management tools to help you improve your quality. Meliora Ltd was founded in 2012 and is located in Jyväskylä, Finland.



26.1.2013

Meliora Testlab is open for everyone

Our new web-based and hosted service, Meliora Testlab, is now open for everyone.

Efficiency in the testing process and customer satisfaction are essential to any modern enterprise. Requirement management, test management or quality improvement – Meliora Testlab is the tool for you.

You can register in two minutes at www.melioratestlab.com. By filling in a simple form you will have your own server (for example https://mycompany.melioratestlab.com) ready immediately. The service will be absolutely free and fully functional for 30 days.

We are extremely interested in your feedback. The upper right corner has an envelope widget that enables you to send your feedback to us right away.

Meliora team

 



Tags for this post: product 



