Posts tagged with: usage


23.2.2017

Unifying manual and automated testing

Test automation is a well-established way to gain benefits for your testing process. Manual testing with pre-defined steps is still surprisingly common, and especially during acceptance testing we still often put our trust in the good old tester. Unifying manual and automated testing in a transparent, easily managed and reported way is particularly important for organizations pursuing gains from test automation.

 

Not all automated testing is the same

The gains from test automation are numerous: automated testing saves time, makes tests easily repeatable and less error-prone, makes distributed testing possible and improves testing coverage, to name a few. It should be noted, though, that not all automated testing is the same. For example, modern testing harnesses and tools make it possible to automate and execute complex UI-based acceptance tests while, at the same time, developers implement low-level unit tests. From the reporting standpoint, it is essential to be able to combine the results from all kinds of tests into a manageable and easily approachable view with the correct level of detail.

 

I don’t know what our automated tests do and what they cover

It is often the case that testers in an organization waste time on manual testing of features that are already covered by a good set of automated tests. This is because the test managers don’t always know the details of the (often very technical) automated tests. The automated tests are not trusted, and the results from these tests are hard to combine into the overall status of testing.

This problem is often complicated by the fact that many test management tools report the results of manual and automated tests separately. In the worst-case scenario, the test manager must know how the automated tests work to be able to make a judgment on test coverage.

 

Scoping the automated tests in your test plan

Because the nature of automated tests varies, it is important that the test management tool offers an easy way to scope and map the results of your automated tests to your test plan. It is often not preferable to report the status of each and every automated test (especially low-level unit tests), because that makes it harder to get the overall picture of your testing status. The results of these tests still need attention, though, so that failures get reported.

Let’s take an example of how automated tests are mapped in Meliora Testlab.

In this example, a simple hierarchy of functions (requirements) is verified by test cases in the test plan:

  • the UI / Login function is verified by a manual test case, “Login test case”,
  • the UI / User mgmnt / Basic info and UI / User mgmnt / Credentials functions are verified by a manual test case, “Detail test case”, and
  • the Backend / Order mgmt functions are verified by automated tests mapped to a test case, “Order API test case”, in the test plan.

Mapping is done by simply specifying the package identifier of the automated tests for a test case. When testing, results are always recorded against test cases:

  1. The login view and user management views of the application are tested manually by the testers and the results of these tests get recorded to test cases “Login test case” and “Detail test case“.
  2. The order management is tested automatically with results from automated tests “ourapp.tests.api.order.placeOrderTest” and “ourapp.tests.api.order.deliverOrderTest“. These automated tests are mapped to test case “Order API test case” via automated test package “ourapp.tests.api.order“.

The final result for the test case in step 2 is derived from the results of all automated tests under the package “ourapp.tests.api.order“. If one or more tests in this package fail, the test case will be marked as failed. If all tests pass, the test case is also marked as passed.

Because automated tests are mapped via their package hierarchy, it is easy to fine-tune the level of detail at which your automated tests are scoped to your test plan. In the above example, if it is deemed necessary to always report detailed results for the order delivery related tests, the “ourapp.tests.api.order.deliverOrderTest” automated test can be mapped to a test case of its own in the test plan.
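As a rough sketch of what a mapped automated test might look like, below is a hypothetical JUnit test living in the “ourapp.tests.api.order” package from the example above. The package name is all Testlab needs for the mapping; the test content and names used here are assumptions for illustration only.

package ourapp.tests.api.order;

import org.junit.Assert;
import org.junit.Test;

// A hypothetical automated test in the mapped package "ourapp.tests.api.order".
// Its result, together with the results of the other tests in the same package
// (such as deliverOrderTest), rolls up into the "Order API test case" above.
// The class name follows the test name used in the article.
public class placeOrderTest {

    @Test
    public void orderIsAccepted() {
        // Call your order API here and verify the outcome; the check below is
        // only a placeholder for the real verification logic.
        boolean orderAccepted = true;
        Assert.assertTrue("Order was not accepted by the API", orderAccepted);
    }
}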

 

Automating existing manual tests

As test automation has clear benefits for your testing process, the process and the tools used to manage it should support easy automation of existing manual tests. From the test management tool’s standpoint, it is not relevant which technique is used to actually automate a test; what matters is that reporting and coverage analysis stay the same and that the results of the automated tests are easily pushed to the tool.

To continue with the example above, let’s presume that the login-related manual test (“Login test case”) is automated using Selenium:

The test designers record and create the automated UI tests for the login view in a package “ourapp.tests.ui.login”. Now the manual test case “Login test case” can easily be mapped to these tests with the identifier “ourapp.tests.ui.login”. The test cases themselves, the requirements and their structure do not need any changes. When the Selenium-based tests are run later on, their results determine the result of the test case “Login test case”. The reporting of the testing status stays the same, the structure of the test plan is the same, and the related reports remain familiar to the people already using them.
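As an illustration, a minimal Selenium WebDriver test for the login view could look roughly like the sketch below. The package name “ourapp.tests.ui.login” comes from the example above; the target URL and the element ids are assumptions and need to be adapted to the application under test.

package ourapp.tests.ui.login;

import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// A hypothetical Selenium-based automation of the manual "Login test case".
public class LoginTest {

    // Assumes a chromedriver binary is available on the test machine.
    private final WebDriver driver = new ChromeDriver();

    @Test
    public void userCanLogIn() {
        // The URL and element ids below are assumed for this sketch.
        driver.get("https://ourapp.example.com/login");
        driver.findElement(By.id("username")).sendKeys("testuser");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();

        // Assume a successful login shows a dashboard element.
        Assert.assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}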

 

Summary

Test automation and manual testing are most often best used in combination. It is important that the tools used for test management support reporting on this kind of combined testing in as flexible a way as possible.

 

(Icons used in illustrations by Thijs & Vecteezy / Iconfinder)



Tags for this post: automation best practices example features product reporting usage 


30.1.2017

Testlab – Raining Animal released

Meliora is proud to announce a new version of Meliora Testlab – Raining Animal. This version brings in the concept of “Indexes”, which enables you to collaborate with others more easily and copy assets between your projects.

Please read on for a more detailed description of the new features.

 

Custom columns for steps

Execution steps of test cases can now be configured with custom columns. This allows you to customize the way you enter your test cases in your project.

Custom columns can be renamed, arranged in the order you want them to appear in your test cases, and given different data types.

 

Indexes
Collaborating with indexes

A new concept – Indexes – has been added, which enables you to pick assets from different projects onto an index and collaborate on them.

An index is basically a flat list of assets, such as requirements or test cases, from your projects. You can create as many indexes as you like and share them between users in your Testlab. All users who have access to your index can comment on it and edit it – this makes it easy to collaborate on a set of assets in your Testlab.

 

Copying assets between your projects

Each asset on your index is shown with the project it belongs to. When you select assets from your index, you have an option to paste the selected assets to your current Testlab project. This enables you to easily copy content from one project to another.

 

SAML 2.0 Single Sign-On support

The authentication pipeline of Testlab now supports SAML 2.0 Single Sign-On (WebSSO profile). This makes it possible to use SAML 2.0 based identity federation services, such as Microsoft’s ADFS, for user authentication.

The existing CAS-based SSO is still supported, but SAML 2.0 based federation offers more possibilities for integrating Testlab with your identity service of choice. You can read more about setting up SSO in the documentation provided.

Better exports with XLS support

The data can now be exported directly to Excel in XLS format. CSV export is still available, but exporting data to Excel is now more straightforward.

Also, when exporting data from the table view, only the rows selected in the batch edit mode are exported. This makes it easier for you to hand pick the data when exporting.

In addition to the above and fixes under the hood,
  • the actions and statuses in workflows can now be rearranged by dragging and dropping,
  • stopping the testing session is made more straightforward by removing the buttons for aborting and finishing the run and
  • a new permission “testrun.setstatus” has been added to control who can change the status of a test run (for example mark the run as finished).
 

Meliora team


Throughout history, a rare meteorological phenomenon in which animals fall from the sky has been reported. There are reports of fish, frogs, toads, spiders, jellyfish and even worms raining down from the skies.

Curiously, the saying “raining cats and dogs” is not necessarily related to this phenomenon and is of unknown etymology. There are some other quite bizarre expressions for heavy rain, such as “chair legs” (Greek) and “husbands” (Colombian).

 

(Source: Wikipedia, Photo – public domain)



Tags for this post: announce features integration product release usage 


15.11.2016

Testlab – Earthquake Light released

Meliora Testlab – Earthquake Light – has been released. In addition to a set of fixes, this release comes with a new advanced reporting mode which allows you to customize the criteria for the data on your reports in an intuitive manner. We’ve also integrated with Stripe to make subscription handling and credit card payments as easy as possible.

Please read on for a more detailed description of the new features.

 

Reporting criteria in advanced mode

Most of the available report templates now have an option to switch the criteria form to the new advanced mode. In this mode, the criteria for picking the data onto the report can be specified as a set of rules the reporting engine uses when rendering the report.

Reports in advanced mode work in a similar manner to the earlier, so-called simple mode. You can save them, schedule them for publishing and so on.

 

Advanced operators
Various operators for each field

A rule on an advanced report consists of the targeted field, an operator that determines whether the rule matches, and an optional value for the operator. The mode comes with a full set of operators, allowing you to define complex sets of rules.

The operators available depend on the type of the field.

 

 

Boolean operators and sub-clauses

The criteria may be defined with sub-clauses. Each clause is set with a boolean operator (match all [and], match any [or], match none [not]) which declares how the list of rules in the clause should be interpreted.

This allows you to define a complex set of rules to pick the data you need on your report.

 
Stripe subscriptions for credit card billing

The hosted Meliora Testlab has been integrated with Stripe – a leading credit card payment gateway – to make subscription and credit card handling for you as easy as possible.

If any action is needed from existing customers, we will contact all customers directly for instructions.

 

Meliora team



For a very long time, people have occasionally reported seeing strange bright lights in the sky before, during or after an earthquake. For much of modern times these reports were considered questionable, and skeptics say there is no solid proof that such a phenomenon exists.

The reported lights are usually bluish or greenish dancing lights in the sky, somewhat comparable to the aurora borealis. Theories about their cause exist, such as electric charges in certain rocks being activated and discharged under tectonic stress. It has even been suggested that the lights could be used to predict upcoming earthquakes.

Earthquake lights were most recently reported during the New Zealand earthquake in November 2016, and there are even videos circulating that document the event.

(Source: Wikipedia, National Geographic, Youtube, Photo from UC Berkeley Online Archive)

 



Tags for this post: announce features product release reporting usage 


13.7.2016

Testlab – Ghost Boy release

We are proud to announce a new version of Meliora Testlab – Ghost Boy – which brings in features such as rapid batch editing of requirements, test cases and issues. A more detailed description of the new features can be read below.

 

Inline table editing

The central assets of your Testlab projects – requirements, test cases and issues – can now be inspected in a table view. You can choose a folder from the asset-related tree, and the table view will list all assets from this folder.

The data of the assets can be edited inline. This allows you to rapidly edit a set of assets and save your changes with a single click. Adding and editing assets still works in the same fashion as in earlier versions, but the table view brings in a new alternative for rapid edits.

As the data is presented in a table, all regular table-related functions such as sorting, filtering and grouping are available, in addition to features such as exporting the set of assets to Excel, sending them via e-mail and printing.

 

 

Batch editing

Ever had a need to bump up the severity of a set of issues? Or to assign a batch of test cases to some user?

The new table view with inline editing features a batch edit mode which allows you to pick a set of assets for batch editing. All edits made to any asset in the table are then replicated to all chosen assets. This way, it is easy to make batch edits to a large set of assets while you are designing.

 

 

Project’s users listing

Earlier, granting access and roles for users in projects was done in user management only: you chose a user and granted the needed roles for this user.

Ghost Boy brings a new Users tab to the project management view, which allows you to easily manage the roles of your users in a project-centric way. You can also filter, sort, export, e-mail and print the listing if needed.

 

Miscellaneous enhancements and changes

In addition to the new features listed above, this release contains a number of smaller enhancements:

  • “Publish now” button added for published reports: When you configure a report to be automatically published, you can now press a button to force a publish of the report. This makes it easier to set up automatically published reporting.
  • Test run selectors enhanced: The test run selectors in the test case tree and in the test coverage view have been changed to a filterable picker, which works better when there is a large number of runs in your project.
  • Tagging assets in table view: The new table views have a button which allows you to tag the chosen assets. Earlier, tagging in the issues table tagged all visible issues in the table. Now the assets to be tagged can be chosen in the batch edit mode for more flexible usage.
  • Continue tour on the current view: The tour found in the Help menu of Testlab now starts & continues on the current view.
  • “Results for run tests” report: The report now allows you to set the time of day for the starting and ending times the results are reported from.

 

Meliora team


Ghost Boy

Think of a situation where you would be unable to move in any way, or to communicate or interact with the outside world. You would be fully aware and able to think – trapped in your own body with your thoughts. Or rather, you can think about this but never really comprehend what it must feel like.

Meet Martin Pistorius, a South African born in 1975, who fell into a coma in his early teens but eventually regained consciousness by the age of 19. He was still unable to move and spent years locked in his own body until one of his day carers noticed he might be able to respond to outside interaction. He has since recovered but still needs a speech computer to communicate.

You can hear more about this fascinating story in an Invisibilia podcast or check out the talk Martin gave at a TEDx event in 2015.

(Source: Wikipedia, Invisibilia podcast, TED talks, Brain vector designed by Freepik)

 



Tags for this post: announce features product release usage 


24.4.2016

Testlab – Dis Manibus released

The new release of Meliora Testlab – Dis Manibus – brings in long-awaited customization features, other minor features and fixes. The release makes it easier to manage your projects’ versions, environments and option values for different fields. A more detailed description of the new features can be read below.

 

Field option management

The option values for different fields can now be freely managed. Options can be managed for requirements, test cases and issues. For example, you can now customize test case priorities, requirement types, issue severities and so on.

To add new options or to edit or remove existing ones, open up the project in the Manage projects view and choose the appropriate tab for your asset. The option editor can be accessed from the Options column of the chosen field. Edited options are supported in all integrations, data importing and exporting, reporting and when copying projects. In addition, the REST API now has an endpoint for accessing the field metadata.

 

Version and Environment management

Adding new values for versions and environments in your project is easy: just enter a new value when needed. If you prefer to have strict control over which versions and environments should be used in your project, Dis Manibus brings in a view to manage them by hand. The view has been added as a new tab to the Manage project view.

You also have an option to control whether the users in your project can add new versions themselves.

 

Saveable table filters

All tables in Testlab’s UI now have controls which enable you to save your current filter criteria for later use. The criteria can be named and saved to your project to be used by all users, or even globally for all your projects.

Miscellaneous enhancements

In addition to the new features listed above, this release contains smaller enhancements:

  • Executing test case steps in free order: The steps of test cases can now be executed in any order preferred.
  • “Discard result and run again” button: The table of a test run’s test cases in the Test runs view has a new button which enables you to easily re-run an already run test case. Clicking the button discards the current result of the test case, preserves the results of its steps and instantly opens up the window for running the selected test case.
  • Saved reports can be filtered and sorted: The listing of saved reports in the Reports view now has filter controls, so finding reports is easier if you have a number of reports in your project. The filter settings can also be saved to the server for later use.

 

Meliora team



“O U O S V A V V”, framed between the letters ‘D M’ – commonly taken to stand for Dis manibus, meaning “dedicated to the shades” – is a sequence of letters known as the Shugborough Inscription. The inscription is carved on an 18th-century monument in Staffordshire, England, and has been called one of the world’s top uncracked ciphertexts.

In recent decades there have been several proposals for possible solutions. None of the solutions have satisfied the staff at Shugborough Hall, though. And of course, as with all good mysteries, it is hinted that Poussin, the author of the original painting reproduced in the monument’s relief, was a member of the Priory of Sion. This would mean that the ciphertext may encode secrets related to the Priory, or the location of the Holy Grail.

(Source: Wikipedia, Photograph by Edward Wood)

 



Tags for this post: announce features product release usage 


25.9.2014

Integrating with Apache JMeter

Apache JMeter is a popular tool for load and functional testing and for measuring performance. In this article we give you hands-on examples of how to integrate your JMeter tests with Meliora Testlab.

Apache JMeter in brief

Apache JMeter is a tool with which you can design load testing and functional testing scripts. Originally JMeter was designed for testing web applications, but it has since expanded to different kinds of load and functional testing. The scripts can be executed to collect performance statistics and testing results.

JMeter offers a desktop application with which the scripts can be designed and run. JMeter can also be used from different kinds of build environments (such as Maven, Gradle, Ant, …), from which running the tests can be automated. JMeter’s web site has a good set of documentation on how to use it.

 

Typical usage scenario for JMeter

A common scenario for using JMeter is some kind of load testing or smoke testing setup where JMeter is scripted to make a load of HTTP requests to a web application. Response times, request durations and possible errors are logged and analyzed later for defects. Interpreting performance reports and analyzing metrics is usually done by people, as automatically determining whether some metric should be considered a failure is often hard.

Keep in mind that JMeter can be used against various kinds of backends other than HTTP servers, but we won’t get into that in this article.

 

Automating load testing with assertions

The difficulty in automating load testing scenarios comes from the fact that performance metrics are often ambiguous. For automation, each test run by JMeter must produce a distinct result indicating whether the test passes or not. To tackle this problem, assertions can be added to the JMeter script.

Assertions are basically the criteria set to decide whether a sample recorded by JMeter indicates a failure or not. For example, an assertion might check that a request to your web application executes in under some specified duration (i.e. your application is “fast enough”). Another assertion might check that the response code from your application is always correct (for example, 200 OK). JMeter supports a number of different kinds of assertions for you to design your script with.

When your load testing script is set up with proper assertions, it suits automation well: it can be run automatically, periodically or in any way you prefer to produce passing and failing test results which can be pushed to your test management tool for analysis. There is a good set of documentation available online on how to use assertions in JMeter.
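To make the idea of a distinct pass/fail verdict concrete, the sketch below expresses the same two checks as a plain JUnit test instead of JMeter assertions. It is not part of the JMeter setup described in this article, and the target URL and the 500 ms budget are assumptions; it only illustrates the kind of binary result the assertions produce.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

import org.junit.Assert;
import org.junit.Test;

// Illustration only: a JMeter-style "Duration" and "ResponseCode" check
// expressed as a plain JUnit test (requires Java 11+ for java.net.http).
public class FrontPageTest {

    @Test
    public void frontPageRespondsQuicklyWith200() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:9090")).GET().build();

        Instant start = Instant.now();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = Duration.between(start, Instant.now()).toMillis();

        // "ResponseCode": the server must answer 200 OK.
        Assert.assertEquals(200, response.statusCode());
        // "Duration": the request must complete within the assumed 500 ms budget.
        Assert.assertTrue("Request took " + elapsedMs + " ms", elapsedMs <= 500);
    }
}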

 

Integration to Meliora Testlab

Meliora Testlab has a Jenkins CI plugin which enables you to push test results to Testlab and open up issues according to the results of your automated tests. When JMeter-scripted tests are run in a Jenkins job, you can push the results of your load testing criteria to your Testlab project!

The technical scenario of this is described below.


You need your JMeter script (plan). This is designed with the JMeter tool and should include the needed assertions (in our example, Duration and ResponseCode) to determine whether the tests should pass or not. A Jenkins job should be set up to run your tests and translate the JMeter-produced log file into xUnit-compatible test results, which are then pushed to your Testlab project as test case results. Each JMeter test (in this case Front page.Duration and Front page.ResponseCode) is mapped to a test case in your Testlab project, which gets results posted to it when the Jenkins job is executed.

 

Example setup

In this chapter we give you a hands-on example of how to set up a Jenkins job to push testing results to your Testlab project. To make things easy, download the testlab_jmeter_example.zip file, which includes all the files and assets mentioned below.

 
Creating a build

You need some kind of build (Maven, Gradle, Ant, …) to execute your JMeter tests with. In this example we are going to use Gradle, as it offers an easy-to-use JMeter plugin for running the tests. There are plenty of options for running JMeter scripts, but using a build plugin is often the easiest way.

1. Download and install Gradle if needed

Go to www.gradle.org and download the latest Gradle binary. Install it as instructed to your system path so that you can run gradle commands.

2. Create build.gradle file

// Registers the JMeter Gradle plugin from Maven Central and applies it to the build.
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.github.kulya:jmeter-gradle-plugin:1.3.1-2.6"
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'jmeter'

As we are going to run all the tests with the plugin’s default settings, this is all we need. The build file just registers and applies the “jmeter” plugin from the repository provided.

3. Create src directory and needed artifacts

For the JMeter plugin to work, create a src/test/jmeter directory and drop in a jmeter.properties file, which is needed for running the actual JMeter tool. The file is easy to obtain by downloading JMeter and copying its default jmeter.properties to this directory.

 
Creating a JMeter plan

When your Gradle build is set up as instructed you can run the JMeter tool easily by changing to your build directory and running the command

# gradle jmeterEditor

This downloads all the needed artifacts and launches the graphical user interface for designing JMeter plans.

To make things easy, you can use the MyPlan.jmx provided in the zip package. The plan is really simple: it has a single HTTP Request Sampler (named Front page) set up to make a request to the http://localhost:9090 address, with two assertions:

  • a Duration assertion to check that the time to make the request does not exceed 5 milliseconds. For the sake of this example, this assertion should fail, as the request probably takes longer than this, and
  • a ResponseCode assertion to check that the response code from the server is 200 (OK). This should pass as long as there is a web server running on port 9090 (we’ll come to this later).

It is recommended to give your Samplers and Assertions sensible names, as you refer directly to these names later when mapping the test results to your Testlab test cases.

The created plan(s) should be saved to the src/test/jmeter directory we created earlier, as Gradle’s JMeter plugin automatically executes all plans from this directory.

 

Setting up a Jenkins job

1. Install Jenkins

If you don’t happen to have a Jenkins CI server available, setting one up locally couldn’t be easier. Download the latest release to a directory and run it with

# java -jar jenkins.war --httpPort=9090

Wait a bit, and Jenkins should be accessible from http://localhost:9090 with your web browser. 

The JMeter plan we went through earlier makes a request to http://localhost:9090. When you run Jenkins with the command above, JMeter will fetch the front page of your Jenkins CI server when the tests are run. If you prefer to use some other Jenkins installation, you might want to edit the provided MyPlan.jmx to point to that address.

2. Install needed Jenkins plugins

Go to Manage Jenkins > Manage Plugins > Available and install

  • Gradle Plugin
  • Meliora Testlab Plugin
  • Performance Plugin
  • xUnit Plugin

2.1 Configure plugins

Go to Manage Jenkins > Configure System > Gradle and add a new Gradle installation for your locally installed Gradle. 

3. Create a job

Add a new “free-style software project” job to your Jenkins and configure it as follows:

3.1 Add build step: Execute shell

Add a new “Execute shell” build step to copy the contents of the Gradle project you set up earlier into the job’s workspace (for example, with a simple cp command from a local directory). This is needed as the project is not in a version control repository.


Any other approach that makes your Gradle project available in the Jenkins job’s workspace works just as well.

Note: The files should be copied so that the root of the workspace contains the build.gradle file for launching the build.

3.2 Add build step: Invoke Gradle script

Select your locally installed Gradle version and enter “clean jmeterRun” in the Tasks field. This will run the “gradle clean jmeterRun” command for your Gradle project, which will clean up the workspace and execute the JMeter plan.


3.3 Add post-build action: Publish Performance test result report (optional)

Jenkins CI’s Performance plugin provides you with trend reports on how your JMeter tests have been running. This plugin is not required for the Testlab integration, but it provides handy performance metrics in your Jenkins job view. To set up the action, click “Add a new report”, select JMeter and set the Report files to “**/jmeter-report/*.xml”.


Other settings can be left at their defaults, or you can configure them to your liking.

3.4 Add post-build action: Publish xUnit test result report

Testlab’s Jenkins plugin needs the test results to be available in the so-called xUnit format. In addition, this action will generate test result trend graphs in your Jenkins job view. Add a post-build action to publish the test results resolved from the JMeter assertions by selecting a “Custom Tool”.


Note: The jmeter_to_xunit.xsl custom stylesheet is mandatory. It translates JMeter’s log files to the xUnit format. The .xsl file is located in the jmeterproject directory in the zip file and will be available in the Jenkins workspace root if the project is copied there as set up earlier.

3.5 Add post-build action: Publish test results to Testlab

The above steps will set up the workspace, execute the JMeter tests, publish the needed reports to the Jenkins job view and translate the JMeter log file(s) to xUnit format. What is left is to push the test results to Testlab. For this, add a “Publish test results to Testlab” post-build action and configure it as described below.


For the sake of simplicity, we will be using the “Demo” project of your Testlab. Make sure to configure the “Company ID” and “Testlab API key” fields to match your Testlab environment. The “Test case mapping field” is set to “Automated”, which is by default configured as a custom field in the “Demo” project.

If you haven’t yet configured an API key for your Testlab, log on to your Testlab as a company administrator and configure one from Testlab > Manage company … > API keys. See Testlab’s help manual for more details.

Note: Your Testlab edition must be one with access to the API functions. If you cannot see the API keys tab in your Manage company view and wish to proceed, please contact us and we will get it sorted out.

 

Mapping JMeter tests to test cases in Testlab

For the Jenkins plugin to be able to record the test results to your Testlab project, the project must contain matching test cases. As explained in the plugin documentation, your project in Testlab must have a custom field set up which is used to map the incoming test results. In the “Demo” project this field is already set up (called “Automated”).

Every assertion in JMeter’s test plan records a distinct test result when run. In the simple plan provided, we have a single HTTP Request Sampler named “Front page”. This Sampler is tied to two assertions (named “Duration” and “ResponseCode”) which check whether the request was done properly. When translated to the xUnit format, these test results get identified as <Sampler name>.<Assertion name>, for example:

  • Front page/Duration will be identified as: Front page.Duration and
  • Front page/ResponseCode will be identified as: Front page.ResponseCode

To map these test results to test cases in the “Demo” project,

1. Add test cases for JMeter assertions

Log on to Testlab’s “Demo” project, go to Test case design and

  • add a new Test category called “Load tests”, and to this category,
  • add a new test case “Front page speed”, set the Automated field to “Front page.Duration” and Approve the test case as ready and
  • add a new test case “Front page response code”, set the Automated field to “Front page.ResponseCode” and Approve the test case as ready.

Now we have two test cases for which the “Test case mapping field” we set up earlier (“Automated”) contains the JMeter assertions’ identifiers.

 

Running JMeter tests

What is left to do is to run the actual tests. Go to your Jenkins job view and click “Build now”. A new build should be scheduled, executed and completed – probably as FAILED. This is because the JMeter plan has the 5 millisecond assertion which should fail the job as expected.

 

Viewing the results in Testlab

Log on to Testlab’s “Demo” project and select the Test execution view. If everything went correctly, you should now have a new test run titled “jmeter run” in your project.


As expected, the Front page speed test case reports as failed and the Front page response code test case as passed.

As we configured the publisher to open up issues for failed tests, we should also have an issue present. Change to the Issues view and verify that an issue has been opened up.


 

Viewing the results in Jenkins CI

The matching results are also present in your Jenkins job view. Open up the job view in your Jenkins.


The view holds the trend graphs from the plugins we set up earlier: “Responding time” and “Percentage of errors” from the Performance plugin and “Test result Trend” from the xUnit plugin.

To see the results of the assertions, click “Latest Test Result”.


The results show that the Front page.Duration test failed and the Front page.ResponseCode test passed.

 

Links referred

 http://jmeter.apache.org/  Apache JMeter home page
 http://jmeter.apache.org/usermanual/index.html  Apache JMeter user manual
 http://jmeter.apache.org/usermanual/component_reference.html#assertions  Apache JMeter assertion reference
 http://blazemeter.com/blog/how-use-jmeter-assertions-3-easy-steps  BlazeMeter: How to Use JMeter Assertions in 3 Easy Steps
 https://www.melioratestlab.com/wp-content/uploads/2014/09/testlab_jmeter_example.zip  Needed assets for running the example
 https://github.com/kulya/jmeter-gradle-plugin  Gradle JMeter plugin
 http://www.gradle.org/  Gradle home page
 http://mirrors.jenkins-ci.org/war/latest/jenkins.war  Latest Jenkins CI release
 https://www.melioratestlab.com/wp-content/uploads/2014/09/jmeter_to_xunit.xsl_.txt  XSL-file to translate JMeter JTL file to xUnit format  
 https://wiki.jenkins-ci.org/display/JENKINS/Meliora+Testlab+Plugin  Meliora Testlab Jenkins Plugin documentation 

 

 

 



Tags for this post: example integration jenkins load testing usage 


9.9.2014

How to: Work with bugs and other issues

The tester’s role in a project is often seen in two major areas: building confidence that the software fulfills its needs and, on the other hand, finding the parts where it does not, or might not. Just pointing a finger at where the problem lies is seldom enough: the tester needs to tell how the problem was encountered and why the found behaviour is a problem. The same goes for enhancement ideas. This post is about how to work with bugs (which we call defects in this post) and other findings (we call all findings issues), from writing them up to closing them.

 

Using time wisely

Author encountering a bug

While small-scale projects can cope with findings written on post-it notes, Excel sheets or the like, efficient handling of large amounts of issues that way is next to impossible. The real reason for having a more refined way of working with issues is to enable the project to react to findings as needed. Some issues need a fast response; some are best listed for future development, but not forgotten. When you work with issues the right way, people in the project find the right ones to work on at a given time and also understand what each issue was written about.

As software projects are complex entities, it is hard to achieve such a thing as a perfect defect description, but descriptions need to be pretty darn good to be helpful. If a tester writes a report that the receiver does not understand, it will be sent back or, even worse, disregarded. That said, getting the best bang for the buck for the tester’s time does not always mean writing an essay. If the tester sits next to the developer, just a descriptive name for the issue might be enough. Then the tester just must remember what was meant by the description! The same kind of logic goes for defect workflows. On some projects workflows are not needed to guide the work, but on others it really makes sense for an issue to go through a certain flow. There is no single truth. This article lists some of the things you should consider; you pick what you need.

 

Describing the finding

The first thing to do after finding a defect is not always to write it down into the issue management system as fast as possible. Has the thing been reported earlier? If not, the next thing is to find out more about what happened. Can you replicate the behaviour? Does the same thing happen elsewhere in the system under test? What was the actual cause of the issue? If you performed a long and complex operation during testing, but can replicate the issue with a more compact set of actions, describe the more compact one.

After you have an understanding of the issue, you should write a (pretty darn good) description of it. The minimum is to write down the steps you took. If you found out more about the behaviour, such as how it does not happen in similar conditions, write that down too. If the issue can be found by executing a test case, do not rely on the reader reading the test case; write down what is relevant to causing the issue. Attached pictures are often a great way of showing what went wrong, and attached videos a great way of showing what you did to cause it. They are really fast to record, and a good video shows exactly what needs to be shown. Use them: https://www.melioratestlab.com/taking-screenshots-and-capturing-video/

Consider adding an exact timestamp of when you encountered the issue. The developers might want to dig into the log files – or even better, attach the log files yourself. It is not always clear whether something is a defect; if there is any room for debate, also write down why you think the behaviour is a defect.

Besides writing a good description, you should also classify your defects. This helps the people in the project find the ones most important to them in the defect mass. Consider the following fields:


  • Priority: Not all issues are equally important at the moment.
  • Severity: Tells about the effect of the issue, which can differ from its priority. Then again, in most projects severity and priority go hand in hand, in which case it is easier to use just one field.
  • Assignment: Who is expected to take the next step in handling the issue.
  • Type: Whether the issue is a defect, an enhancement idea, a task or something else.
  • Version detected: Which version was being tested when the issue was found.
  • Environment: In which environment the issue was found.
  • Application area: Which application area the issue affects.
  • Resolution: For closed issues, the reason why the issue was closed.

 

Add the fields that are meaningful for classifying defects to your project and hide the ones that are not relevant. If you are not sure whether you benefit from a classification field, do not use the field – use tags. You can add any tags to issues and later use them to find the issues or to report on them. Easy.

 

Issue written – what next?

Now that you have written a defect report, did you find the defect by conducting an existing test? If not, should there be a test that would find that given defect? After the issue has been corrected you probably get to verify the fix, but if the issue seems likely to re-emerge, write a test case for it.


Writing an issue begins its life cycle. At a bare minimum a defect has two stages – open while you work on it and closed when you have done what you wanted to do – but most of the time the project benefits from having more stages in the workflow. The benefits mainly come from easier management of large amounts of findings: it helps to know what needs to be done next, and by whom. With information on what state the issues are in, the project is able to spot bottlenecks more easily. Generally, the bigger the project, the more benefit there is from having more statuses. That said, if the statuses do not give you more relevant information about the project, do not collect them. Simpler is better. Commonly used statuses include:

  • New: Sometimes it makes sense to have issues verified by someone other than the initial reporter before they go into further handling. New distinguishes these from Open issues.
  • Open: Defects that have been found, but for which no decision on how the situation will be fixed has been made yet.
  • Assigned: The defect is assigned to be fixed.
  • Reopened: The delivered fix did not correct the problem, or the problem occurred again later. Sometimes it makes sense to distinguish reopened issues from open ones to see how common it is for problems to re-emerge.
  • Fixed: Issues are marked fixed when the reported problem has been corrected.
  • Ready for testing: After an issue has been fixed, this status can be used to mark the ones that are ready for testing.
  • Postponed: Distinguishes the open issues that you are not planning to work on for a while.
  • Closed: After a finding has been confirmed to no longer be relevant, the issue can be closed. Basically, a fixed issue is tested and then closed.

When working with defects and changing their statuses, it is important to comment on the issues when something relevant changes. Basically, if a comment adds information about the found issue, it probably should be added. The main idea is that it is later possible to come back to the discussion, so you don’t need to guess why something was or wasn’t done about the issue. It is especially important to write down how the issue will be fixed (when there is room for doubt), and finally how the fix has been implemented, so that the tester knows how to re-test the issue. If the developer can explain how the issue was fixed, it helps the tester find other possible issues that have arisen when the fix was applied.

 

Digging into the defect mass

So now that you have implemented this kind of classification for working with issues, what could you learn from the statistics? First off, even with sophisticated rating and categorization the real situation can easily hide behind the numbers. It is more important to react to each individual issue correctly than to rate and categorize issues and only later react to statistics. That said, in complex projects the statistics, or numbers, help you understand what is going on in the project and help you focus on what should be done.

The two most important ways to categorize defects are priority and status. Thus a report showing open issues per priority, grouped by status, is a very good starting point for looking at the current issue situation. Most of the time you handle defects differently from other issues, so you would pick the defects onto one report and the other types of issues onto another. Now, this kind of report might show, for example, that there is one critical defect assigned, 3 high priority defects open and 10 normal priority defects fixed. You would probably want to go through the critical and high priority defects individually, at least to make sure they get fixed as soon as they should so that they do not hinder other activities, and for the fixed ones you would check whether something needs to be done to get them ready for re-testing. If at some point you see some category growing, you know whom to ask questions. For example, a high number of assigned defects would indicate a bottleneck in development, and prolonged numbers in “ready for testing” a bottleneck in testing.

Another generally easy-to-use report is the trend report of open issues by status, or the issues’ development over time. As the report shows how many open issues there have been at any given time, you’ll see the trend – whether you can close issues at the same pace as you open them.

This just scratched the surface of working with defects. If you have questions or would like to point something out, feel free to comment.

Happy hunting!

Joonas



Tags for this post: best practices issues testing usage 


27.5.2014

Taking screenshots and capturing video

This article introduces an easy way to capture and annotate screenshots during testing. We show you a couple of easy ways to use the screen capturing and recording tool Monosnap.

The latest Testlab release brings you built-in integration with Monosnap, a handy screen capturing tool with the possibility of annotating screenshots before uploading. Testlab supports the Monosnap desktop clients for Windows and Mac OS X. You are of course free to use any screen capturing tool you prefer, but we feel Monosnap really stands out from the crowd feature-wise and in ease of use.

 

Why take screenshots or record video

When you are testing software on your workstation, taking screenshots is a great way of documenting issues. A picture is worth a thousand words, right? For example, when an issue such as a defect is encountered during testing, capturing a screenshot, annotating it to highlight the issue precisely and uploading it to Testlab usually tells the team members very well what went wrong. If the capturing tool allows you to annotate the shot, even better – the amount of textual description you need to enter for the defect is typically much smaller when you can mark and highlight the relevant parts of the screenshot.

The benefits of using screenshots in issue management are quite self-evident, but screenshots and recorded screen captures can be quite beneficial in requirement management too. For example, when you are documenting new features on existing user interfaces, taking a screenshot and annotating it properly is a great addition to documenting your requirements. The same applies to test cases: if a test case tests a complex user interface, a well-annotated screenshot or two can be a great help for the tester.

 

Monosnap introduced

Monosnap is a collaboration tool for taking screenshots, sharing files and recording video from your desktop. The tool is available for multiple platforms (such as a Google Chrome extension, iPhone and iPad), but here we talk about the installable desktop clients for Microsoft Windows and Mac OS X, as they can be integrated and used seamlessly with Testlab.

When installed and running, Monosnap is a desktop application accessible in a way that depends on your operating system. On Mac OS X, the tool is available as an icon in your desktop’s menu bar. Similarly, on Windows, the tool is available in the so-called system tray and, if you prefer, as a hovering hotspot on your desktop.

For capturing screenshots the basic way of working with Monosnap is as follows:

  1. You capture an area of your desktop by selecting “Capture Area” from Monosnap’s menu or pressing the appropriate keyboard shortcut.
  2. A Monosnap window appears with the captured area shown. The window has functions to annotate the capture: for example, drawing different shapes on it and writing text on it.
  3. When you are happy with the capture you can upload it to a service of your choice or save the capture on your disk.

For capturing video, you

  1. Select “Record Video” from Monosnap’s menu or press the appropriate keyboard shortcut.
  2. Monosnap’s recording frame appears. Move and resize this frame over the area of your desktop you would like to record. You also have options to record video from your workstation’s web cam and audio from your microphone if you prefer.
  3. To start recording, press the Rec button. You can annotate the video during recording by drawing different shapes on it. When you have recorded your video, press the Rec button again to stop the capture.
  4. When recorded, the video is encoded to MP4 format, which, depending on your workstation, might take a few seconds. A window appears with the encoded video, which you can preview before uploading. You can then upload the captured video to a service of your choice or access the encoded video file on your disk.

 

Using Monosnap with Testlab

To use Monosnap with Testlab you have two options: take screen captures with Monosnap and upload them manually to Testlab by dragging and dropping, or integrate Monosnap with Testlab’s WebDAV interface, which allows you to upload captures to Testlab with a click of a button.

 
Uploading manually

When uploading manually, no pre-configuration is needed. You can use Monosnap in the way you prefer and, when you have a capture ready, upload it to Testlab in the same way you would upload a regular file attachment. Keep in mind, though, that Monosnap makes this quite easy as it features a “drag bar” on the right-hand side of the capture window. From this, you can just grab and drag the capture onto your Testlab browser window and attach it to the open asset just by dropping it.

If dragging and dropping is not possible for some reason, you can of course save the capture on your disk and upload it to Testlab in the regular way as a workaround.

To see how it actually works, play the video below:


 

 

WebDAV integration

Monosnap is great in that it supports uploading captures with a click of a button to a service of your choice. This enables Testlab to act as a WebDAV storage into which Monosnap can push the captures. When configured, you can just push Monosnap’s Upload button and the capture is automatically uploaded to Testlab and attached to the asset open in your Testlab browser window.

To make use of this feature some pre-configuration is needed:

  1. Open up Monosnap’s menu and select “Preferences…” or “Settings…”. Monosnap’s settings window opens up.
  2. Select “General” tab and configure the following:
    • After screenshot: Open Monosnap editor
    • After upload: Do not copy
    • Open in browser: no
    • Short links: no
  3. Select the “Account / WebDAV” view and configure the following:

    For Mac OS X:

    • Host: https://COMPANY.melioratestlab.com/api/attachment/user
      Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using hosted Testlab from mycompany.melioratestlab.com enter “https://mycompany.melioratestlab.com/api/attachment/user” to this field. For on-premise installations, set this field to match the protocol, host name and the port of your server to a /api/attachment/user context.
    • Port: Leave as blank (shows as gray “80”)
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Folder: Leave as blank (shows as gray “/var/www/webdav”)
    • Base URL: Leave as blank (shows as gray “http://127.0.0.1/webdav”)
      Click “Make default” button to make the configured WebDAV service as the default upload service of Monosnap. When set, the Upload button always uses this service by default.

       

      For Microsoft Windows:

    • Host: COMPANY.melioratestlab.com
    • Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using Testlab from mycompany.melioratestlab.com enter “mycompany.melioratestlab.com” to this field.
    • Port: HTTPS or HTTP port of your Testlab server – if you are using hosted Testlab enter 443
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Directory: /api/attachment/user
    • Base URL: Leave as blank

The preconfiguration is documented in detail in the “Screenshots and recording” section of the Testlab’s integrated help manual.

Keep in mind that the pre-configuration needs to be done only once. Once you’ve configured Monosnap to upload captures to Testlab, it just works – no need to configure it again later.

Where is the capture uploaded to

When captures are uploaded via Testlab’s WebDAV interface, they are automatically attached to the asset currently open in your Testlab browser window. So when uploading, make sure you have an asset (a requirement, a test case or an issue) open in your Testlab window in such a way that a file can be attached to it. If, for example, your Testlab user account does not have proper permissions to attach files to assets, the upload will just silently fail.

To see the WebDAV-integrated Monosnap in action, play the video below:


 

 

Advantages gained

Having easy-to-use screen capture tools makes documenting easier and speeds up work on multiple levels: documenting issues and other assets is faster, and the people dealing with the documented assets have a clearer understanding of the issue at hand.



Tags for this post: example features screenshots usage video 


25.4.2014

Exploratory testing with Testlab

In this article we introduce you to the recently released features enabling a more streamlined workflow for exploratory testing.

Exploratory testing is an approach to testing where the tester or team of testers ‘explores’ the system under test and, during testing, generates and documents good test cases to be run. In more academic terms, it is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.

Compared to scripted testing – where test cases and scenarios are pre-planned before execution – exploratory testing is often seen as freer and more flexible. Each of these methodologies has its own benefits and drawbacks and, in reality, all testing is usually something in between the two. We won’t go into methodological detail in this article, as we focus on how to do the actual test execution in an exploratory way with Testlab. We can conclude, though, that exploratory testing is particularly suitable if the requirements for the system under test are incomplete, or there is a lack of time.

 

Pre-planning exploratory test cases

As said, all testing approaches can usually be placed somewhere between a fully pre-scripted and a fully exploratory approach. It is often recommended to consider whether pre-planning the test cases in some way would be beneficial. If the system under test is not a total black box, meaning there is some knowledge or even specifications available, it might be wise to add so-called stubs for your test cases in the pre-planning phase. Pre-planning test case stubs might give you better insight into testing coverage, as in pre-planning you have the option to bind the test cases to requirements. We’ll discuss using requirements in exploratory testing in more detail later.

For example, one approach might be to just add the test cases you think you might need to cover some or all areas of the system, without the actual execution steps. The execution steps, preconditions and expected results would then be filled in in an exploratory fashion during testing. Alternatively, you might be able to plan the preconditions and expected results and just leave the actual execution steps to the testers.

Keep in mind that pre-planning test cases does not and should not prevent your testers from adding totally new test cases during testing. Additionally, you should consider whether pre-planning might affect your testers’ way of testing. Sometimes this is not desirable, and you should take into account the experience level of your testers and how the different pre-planning models fit into your overall testing approach and workflow.

 

Exploratory testing session

Exploratory testing is not an exact testing methodology per se. In reality, there are many testing methods, such as session-based testing or pair testing, which are exploratory in a way. As Testlab is methodology-agnostic and can be used with various testing methods, in this article we treat all these methods alike by just establishing that the testing must be done in a testing session. The testing method itself can be any method you wish to use, but the actual test execution must be done in a session which optionally specifies

  • the system or target under testing (such as a software version),
  • environment in which the testing is executed in (such as production environment, staging environment, …) and
  • sets a timeframe in which the testing must be executed in (a starting date and/or a deadline).
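As a rough illustration only – the type and field names below are hypothetical and do not reflect Testlab’s actual data model – such a session could be represented as a simple record holding these optional attributes:

    import java.time.LocalDate;
    import java.util.Optional;

    // Hypothetical model of an exploratory testing session (not Testlab's data model).
    public record TestingSession(
            Optional<String> targetVersion,      // the system or target under test
            Optional<String> environment,        // e.g. "staging" or "production"
            Optional<LocalDate> startDate,       // optional start of the timeframe
            Optional<LocalDate> deadline) {      // optional deadline

        public static void main(String[] args) {
            TestingSession session = new TestingSession(
                    Optional.of("2.4.0"),
                    Optional.of("staging"),
                    Optional.of(LocalDate.now()),
                    Optional.empty());           // no deadline set
            System.out.println("Testing " + session.targetVersion().orElse("unspecified target")
                    + " in " + session.environment().orElse("unspecified environment"));
        }
    }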

To execute test cases in a testing session, add a new test run to your project in Testlab. Go to the Test runs view and click the Add test run… button to add a new blank testing session.

New testing session as a test run

When the needed fields are set, you have the option to just Save the test run for later execution or to Save and Start… the added test run immediately. The test run is added to the project as blank, meaning it does not have any test cases bound to it yet. We want to start testing right away, so we click the Save and Start… button.

 

Executing tests

The set of functionality available while executing tests is a good match for an exploratory testing session. As said, test execution in Testlab enables you to

  • pick a test case for execution,
  • record testing results for test cases and their steps,
  • add issues such as found defects,
  • add comments to test cases and to executed test case steps,
  • add existing test cases to the run for execution,
  • remove test cases from the run,
  • re-order the test cases in the run to a desired order,
  • add new test cases to the test case hierarchy and pick them for execution,
  • edit existing test cases and their steps and
  • cut and copy test cases around in your test case hierarchy.

The actual user interface while executing looks like this:

The test run execution window in an exploratory testing session

The left-hand side of the window has the same test case hierarchy tree that is used to manipulate the hierarchy of test cases in the test planning view. It enables you to add new categories and test cases and move them around in your hierarchy. The hierarchy tree may be hidden – you can show (and hide) it by clicking the resizing bar of the tree panel. The top right shows you the basic details of the test run you are executing, and the list below it shows the test cases picked for execution in this testing session.

The panel below the list of test cases holds the details of a single test case. When no test cases are available for execution, the panel disables itself (as shown in the shot above) and lists all issues from the project. This is especially handy when testing for regression or re-testing – it makes it easy to reopen closed findings and retest resolved issues.

The buttons on the bottom toolbar enable you to edit the current test case, add an issue and record results for test cases. The “Finish”, “Abort” and “Stop” buttons should be used to end the current testing session. Keep in mind that finishing, aborting and stopping testing each have their own meaning, which we will come to later in this article.

 

Adding and editing test cases during a session

When exploring, it is essential to be able to document the steps for later execution if an issue is found. This way, scripted regression testing is easier later on. Also, if your testing approach aims to document the test cases for later use by exploring the system, you must be able to add them easily during execution.

If you have existing test cases which you would like to pick for execution during a session, you can drag them from the test hierarchy tree onto the list of test cases. Additionally, you can add new test cases by selecting New > Test case for the test category you want to add the new test case to. Picking test cases and adding new ones via inline editing is demonstrated in the following video:

 

 

 

Editing existing test cases is similar to adding them. You just press the Edit button on the bottom bar to switch the view to editing mode. The edits are then made in the same fashion as when adding.

 

Ending the session

When you have executed the tests you want, you have three options:

Finish, abort or stop

It is important to understand the difference, which comes from the fact that each executed session is always a part of a test run. If you wish to continue executing tests in the same test run at a later time, you must Stop the session. This way the test run can be continued later on normally.

If you conclude that the test run is ready and you do not wish to continue it anymore, you should Finish it. When you do so, the test run is marked as finished and no testing sessions can be started on it anymore. It should be noted, though, that if you later discard a result of a test case from the run, the run is reset back to the Started state and is executable again.

Aborted test runs are considered discarded and cannot be continued later on. So, if for some reason you think that the test run is not valid anymore and should be discarded, you can press the Abort run button.
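The semantics described above can be summed up as a small state machine. The sketch below is our own illustration of the described behaviour, not Testlab internals: stopping keeps the run open for further sessions, finishing closes it (until a result is discarded), and aborting discards it for good.

    // Illustrative state machine for ending a testing session (not Testlab internals).
    public class TestRunLifecycle {
        enum State { STARTED, FINISHED, ABORTED }

        State state = State.STARTED;

        void stop()   { /* session ends; the run stays STARTED and can be continued later */ }
        void finish() { state = State.FINISHED; }  // no further sessions can be started
        void abort()  { state = State.ABORTED; }   // the run is discarded and cannot be continued

        // Discarding a test case result from a finished run resets it back to STARTED.
        void discardResult() {
            if (state == State.FINISHED) {
                state = State.STARTED;
            }
        }

        public static void main(String[] args) {
            TestRunLifecycle run = new TestRunLifecycle();
            run.finish();
            run.discardResult();                   // the run is executable again
            System.out.println(run.state);         // prints STARTED
        }
    }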

 

Asset workflows and user roles in exploratory testing

Requirements, test cases and issues have an asset workflow tied to them via the project’s workflow setting. This means that each asset has states it can be in (In design, Ready for review, Ready, …) and actions which can be executed on it (Approve, Reject, …). In exploratory testing, having a complex workflow for the project’s test cases is usually not desirable. For example, having a workflow which requires a review of test cases by another party makes no sense when testers should be able to add, edit and execute test cases inline during testing.

That said, if using the default workflows, it is recommended to use the “No review” workflow for your projects.

 

No review workflow

 

If you execute test cases which are not yet approved as ready, Testlab tries to automatically approve them on behalf of the user. This means that if the test case’s workflow allows it (and the user has the needed permissions to do so), the test case is automatically marked as approved during the session. This way, using more complex workflows in a project with an exploratory testing approach might work if the transitions between the test case’s states are suitable. That said, as the testers must be able to add and edit test cases during execution, having a review-based workflow is of little use.

The asset workflows’ actions are also tied to user roles. For the testers to be able to document test cases during execution, the tester users should also be granted the TESTDESIGNER role. This ensures that the users have the permissions they need to add and edit test cases.
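To illustrate the idea, the sketch below models a workflow as a set of states with an approve transition and shows the kind of check described above: auto-approval succeeds only if the current state allows the transition and the user has a suitable role. The transition table and the use of the TESTDESIGNER role as the deciding permission are simplifying assumptions for the example, not Testlab’s actual workflow engine or permission model.

    import java.util.Map;
    import java.util.Set;

    // Simplified sketch of an asset workflow and the auto-approve check (not Testlab's API).
    public class WorkflowSketch {
        enum State { IN_DESIGN, READY_FOR_REVIEW, READY }

        // Hypothetical "no review" style workflow: Approve moves a test case straight to READY.
        static final Map<State, State> APPROVE = Map.of(State.IN_DESIGN, State.READY);

        // Can the test case be auto-approved on behalf of the user during a session?
        static boolean canAutoApprove(State current, Set<String> userRoles) {
            boolean transitionAllowed = APPROVE.containsKey(current);
            boolean hasPermission = userRoles.contains("TESTDESIGNER"); // assumed permission check
            return transitionAllowed && hasPermission;
        }

        public static void main(String[] args) {
            System.out.println(canAutoApprove(State.IN_DESIGN, Set.of("TESTER", "TESTDESIGNER"))); // true
            System.out.println(canAutoApprove(State.READY_FOR_REVIEW, Set.of("TESTER")));          // false
        }
    }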

 

Using requirements in exploratory testing approach

When designing test cases in exploratory testing sessions, the test cases are added without any links to requirements. In Testlab, testing coverage is reported against the system’s requirements and, in testing parlance, a test case verifies a requirement it is linked to when the test case is executed and passes.

It is often recommended to bind the added test cases to requirements at a later stage. This way you can easily report what has actually been covered by testing. It should be noted that the requirements we talk about here don’t have to be fully documented business requirements for this to work. For example, if you would just like to know which parts of the system have been covered, you might want to add the system’s parts as the project’s requirements and bind the appropriate test cases to them. This way, a glance at Testlab’s coverage view should give you insight into which parts of the system have been tested successfully.

Better yet, if you did pre-plan your test cases in some form (see above), you might consider adding a requirement hierarchy too and linking your test case stubs to these requirements. This would give you insight into your testing coverage straight away when the testing starts.
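The coverage rule itself is simple enough to express in a few lines. The sketch below is only an illustration of the rule stated above – a requirement counts as covered when at least one linked test case has been executed and passed – and not Testlab’s implementation; all names and data are hypothetical.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Illustrative coverage rule: a requirement is covered when a linked test case passes.
    public class CoverageSketch {
        record TestResult(String testCase, Set<String> linkedRequirements, boolean passed) {}

        static Set<String> coveredRequirements(List<TestResult> results) {
            return results.stream()
                    .filter(TestResult::passed)
                    .flatMap(r -> r.linkedRequirements().stream())
                    .collect(Collectors.toSet());
        }

        public static void main(String[] args) {
            List<TestResult> results = List.of(
                    new TestResult("Search test case", Set.of("UI / Search"), true),
                    new TestResult("Report test case", Set.of("UI / Reports"), false));
            System.out.println(coveredRequirements(results));  // prints [UI / Search]
        }
    }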

 

Summary

In this article we talked about the new test execution features of Testlab that enable you to execute your tests using exploratory testing approaches. We went through the advantages of pre-planning some of your testing and of using requirements in your exploratory testing, talked about testing sessions, and noted how Testlab’s workflows and user roles should be used in exploratory testing.



Tags for this post: example exploratory features testing usage 


4.12.2013

Introduction video

Today, we’re happy to bring you a brief introduction screencast about Testlab. The video will introduce you to the central concepts of Testlab and how they are presented in the user interface. We will take a glance at

  • requirements management,
  • test case design,
  • execution planning and test runs,
  • issue management and
  • test coverage.

Keep in mind that this introduction skips some central features of Testlab, such as reporting, but it should give you some insight into the use of Testlab. To view the introduction, please click below.

 

Introduction to Meliora Testlab



Tags for this post: demo example screencast usage video 


 
 