Posts tagged with: example


23.2.2017

Unifying manual and automated testing

Test automation is a well-established way to improve your testing process. Manual testing with pre-defined steps is still surprisingly common, and especially during acceptance testing we still often put our trust in the good old tester. Unifying manual and automated testing in a transparent, easily managed and reported way is particularly important for organizations pursuing gains from test automation.

 

Not all automated testing is alike

The gains from test automation are numerous: automated testing saves time, makes tests easily repeatable and less error-prone, makes distributed testing possible and improves test coverage, to name a few. It should be noted, though, that not all automated testing is the same. For example, modern testing harnesses and tools make it possible to automate and execute complex UI-based acceptance tests while, at the same time, developers implement low-level unit tests. From a reporting standpoint, it is essential to be able to combine the results from all kinds of tests into a manageable and easily approachable view with the correct level of detail.

 

I don’t know what our automated tests do and what they cover

It is often the case that testers in an organization waste time manually testing features that are already covered by a good set of automated tests. This happens because test managers don't always know the details of the (often very technical) automated tests. The automated tests are not trusted, and their results are hard to combine into the overall status of the testing.

This problem is often complicated by the fact that many test management tools report the results of manual and automated tests separately. In the worst-case scenario, the test manager must know how the automated tests work to be able to judge the coverage of the testing.

 

Scoping the automated tests in your test plan

Because the nature of automated tests varies, it is important that the test management tool offers an easy way to scope and map the results of your automated tests to your test plan. It is often not preferable to report the status of each and every automated test separately (especially in the case of low-level unit tests), because doing so makes it harder to get the overall picture of your testing status. Attention must still be paid to the results of these tests, though, so that failures get reported.

Let's take an example of how automated tests are mapped in Meliora Testlab.

In this example we have a simple hierarchy of functions (requirements) which are verified by test cases in the test plan:

  • the UI / Login function is verified by the manual test case "Login test case",
  • the UI / User mgmnt / Basic info and UI / User mgmnt / Credentials functions are verified by the manual test case "Detail test case", and
  • the Backend / Order mgmt functions are verified by automated tests mapped to the test case "Order API test case" in the test plan.

The mapping is done simply by specifying the package identifier of the automated tests on a test case. When testing, results are always recorded to test cases:

  1. The login view and user management views of the application are tested manually by the testers, and the results of these tests are recorded to the test cases "Login test case" and "Detail test case".
  2. Order management is tested automatically, with results coming from the automated tests "ourapp.tests.api.order.placeOrderTest" and "ourapp.tests.api.order.deliverOrderTest". These tests are mapped to the test case "Order API test case" via the automated test package "ourapp.tests.api.order".

The final result for the test case in step 2 is derived from the results of all automated tests under the package “ourapp.tests.api.order“. If one or more tests in this package fail, the test case will be marked as failed. If all tests pass, the test case is also marked as passed.
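To make the derivation rule concrete, here is a minimal Groovy sketch of the aggregation described above. It only illustrates the rule (it is not Testlab's actual implementation), and the result values are hypothetical:

// results of the automated tests under the mapped package "ourapp.tests.api.order"
def packageResults = [
    'ourapp.tests.api.order.placeOrderTest'  : 'PASSED',
    'ourapp.tests.api.order.deliverOrderTest': 'FAILED'
]

// the mapped test case passes only if every test in the package passes
def testCaseResult = packageResults.values().every { it == 'PASSED' } ? 'PASSED' : 'FAILED'
assert testCaseResult == 'FAILED'   // one failing test fails "Order API test case"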

Because automated tests are mapped via their package hierarchy, it is easy to fine-tune the level of detail at which automated tests are scoped to your test plan. In the above example, if it is deemed necessary to always report detailed results for order delivery related tests, the "ourapp.tests.api.order.deliverOrderTest" automated test can be mapped to its own test case in the test plan.

 

Automating existing manual tests

As test automation has clear benefits for your testing process, both the process and the tools used to manage it should support easy automation of existing manual tests. From the test management tool's standpoint it is not relevant which technique is used to actually automate a test; what matters is that reporting and coverage analysis stay the same and that the results of the automated tests are easily pushed to the tool.

To continue with the example above, let's presume that the login-related manual test ("Login test case") is automated using Selenium:

The test designers record and implement the automated UI tests for the login view in a package "ourapp.tests.ui.login". The manual test case "Login test case" can now be easily mapped to these tests with the identifier "ourapp.tests.ui.login". The test cases themselves, the requirements and their structure do not need any changes. When the Selenium-based tests are run later on, their results determine the result of the test case "Login test case". The reporting of the testing status stays the same, the structure of the test plan is the same, and the related reports remain familiar to the people who already use them.
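As a rough illustration, one of the Selenium-based tests recorded into the package "ourapp.tests.ui.login" might look something like the following Groovy sketch. It is written against the Selenium WebDriver and JUnit 4 APIs; the application URL, the element ids and the expected page content are hypothetical:

package ourapp.tests.ui.login

import org.junit.After
import org.junit.Before
import org.junit.Test
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.firefox.FirefoxDriver

class LoginTest {

    WebDriver driver

    @Before
    void setUp() {
        driver = new FirefoxDriver()   // any WebDriver implementation will do
    }

    @Test
    void loginWithValidCredentials() {
        driver.get('http://localhost:8080/ourapp/login')           // hypothetical URL
        driver.findElement(By.id('username')).sendKeys('tester')   // hypothetical element ids
        driver.findElement(By.id('password')).sendKeys('secret')
        driver.findElement(By.id('loginButton')).click()
        assert driver.pageSource.contains('Dashboard')             // expected outcome of the manual test
    }

    @After
    void tearDown() {
        driver.quit()
    }
}

When these tests run, their results roll up to "Login test case" in the same way the automated API tests did in the earlier example.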

 

Summary

Test automation and manual testing are most often best used in combination. It is important that the tools used for test management support reporting on this kind of testing in as flexible a way as possible.

 

(Icons used in illustrations by Thijs & Vecteezy / Iconfinder)



Tags for this post: automation best practices example features product reporting usage 


25.9.2014

Integrating with Apache JMeter

Apache JMeter is a popular tool for load and functional testing and for measuring performance. In this article we give you hands-on examples of how to integrate your JMeter tests with Meliora Testlab.

Apache JMeter in brief

Apache JMeter is a tool with which you can design load testing and functional testing scripts. Originally JMeter was designed for testing web applications, but it has since expanded to cover many other kinds of load and functional testing. The scripts can be executed to collect performance statistics and test results.

JMeter offers a desktop application with which the scripts can be designed and run. JMeter can also be used from different build environments (such as Maven, Gradle, Ant, …), from which running the tests can be automated. JMeter's web site has a good set of documentation on how to use it.

 

Typical usage scenario for JMeter

A common scenario for using JMeter is some kind of load testing or smoke testing setup where JMeter is scripted to generate a load of HTTP requests against a web application. Response times, request durations and possible errors are logged and analyzed later for defects. Interpreting performance reports and analyzing metrics is usually done by people, as automatically determining whether some metric should be considered a failure is often hard.

Keep in mind that JMeter can be used against various kinds of backends other than HTTP servers, but we won't get into that in this article.

 

Automating load testing with assertions

The difficulty in automating load testing scenarios comes from the fact that performance metrics are often ambiguous. For automation, each test run by JMeter must produce a distinct result indicating whether the test passes or not. Assertions can be added to the JMeter script to tackle this problem.

Assertions are basically the criteria set to decide whether a sample recorded by JMeter indicates a failure or not. For example, an assertion might be set up to check that a request to your web application completes in under some specified duration (i.e. your application is "fast enough"). Another assertion might check that the response code from your application is always correct (for example, 200 OK). JMeter supports a number of different kinds of assertions for you to design your script with.

When your load testing script is set up with proper assertions, it is well suited for automation: it can be run automatically, periodically or in any way you prefer to produce passing and failing test results, which can then be pushed to your test management tool for analysis. There is a good set of documentation available online on how to use assertions in JMeter.
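For example, the two checks mentioned above can be configured with JMeter's built-in Duration and Response Assertions in the GUI, or they can be expressed with a single scriptable JSR223 Assertion. The following Groovy sketch shows the JSR223 variant; the 200 ms limit is an arbitrary example value, and 'prev' (the sampler's SampleResult) and 'AssertionResult' are the standard variables JMeter provides to JSR223 assertion scripts:

// JSR223 Assertion (Groovy) attached to an HTTP sampler
if (prev.time > 200) {                      // elapsed time of the sample in milliseconds
    AssertionResult.failure = true
    AssertionResult.failureMessage = "Request took ${prev.time} ms, expected under 200 ms"
} else if (prev.responseCode != '200') {    // HTTP response code as a string
    AssertionResult.failure = true
    AssertionResult.failureMessage = "Unexpected response code ${prev.responseCode}"
}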

 

Integration to Meliora Testlab

Meliora Testlab has a Jenkins CI plugin which enables you to push test results to Testlab and open issues based on the results of your automated tests. When JMeter-scripted tests are run in a Jenkins job, you can push the results of your load testing criteria to your Testlab project!

The technical scenario of this is illustrated in the picture below.

[Image: JMeter, Jenkins and Testlab integration overview]

First, you need your JMeter script (plan). It is designed with the JMeter tool and should include the needed assertions (in the picture: Duration and ResponseCode) to determine whether the tests should pass or not. A Jenkins job should be set up to run your tests and translate the JMeter-produced log file into xUnit-compatible test results, which are then pushed to your Testlab project as test case results. Each JMeter test (in this case Front page.Duration and Front page.ResponseCode) is mapped to a test case in your Testlab project, which gets results posted to it when the Jenkins job is executed.

 

Example setup

In this chapter we give you a hands-on example of how to set up a Jenkins job to push test results to your Testlab project. To make things easy, download the testlab_jmeter_example.zip file, which includes all the files and assets mentioned below.

 
Creating a build

You need some kind of build (Maven, Gradle, Ant, …) to execute your JMeter tests with. In this example we are going to use Gradle, as it offers an easy-to-use JMeter plugin for running the tests. There are plenty of options for running JMeter scripts, but using a build plugin is often the easiest way.

1. Download and install Gradle if needed

Go to www.gradle.org and download the latest Gradle binary. Install it as instructed and add it to your system path so that you can run gradle commands.

2. Create build.gradle file

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'jmeter'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.github.kulya:jmeter-gradle-plugin:1.3.1-2.6"
    }
}

As we are going to run all the tests with the plugin's default settings, this is all we need. The build file just registers the "jmeter" plugin from the repository provided.

3. Create src directory and needed artifacts

For the JMeter plugin to work, create a src/test/jmeter directory and drop in a jmeter.properties file, which is needed for running the actual JMeter tool. This file is easy to obtain by downloading JMeter and copying its default jmeter.properties into this directory.

 
Creating a JMeter plan

When your Gradle build is set up as instructed you can run the JMeter tool easily by changing to your build directory and running the command

# gradle jmeterEditor

This downloads all the needed artifacts and launches the graphical user interface for designing JMeter plans.

To make things easy, you can use the MyPlan.jmx provided in the zip package. The script is really simple: it has a single HTTP Request Sampler (named Front page) set up to make a request to the http://localhost:9090 address, with two assertions:

  • a Duration assertion to check that the time to make the request does not exceed 5 milliseconds. For the sake of this example, this assertion should fail, as the request will probably take longer than that.
  • a ResponseCode assertion to check that the response code from the server is 200 (OK). This should pass as long as there is a web server running on port 9090 (we'll come to this later).

It is recommended to give your Samplers and Assertions sensible names, as you will refer to these names directly later when mapping the test results to your Testlab test cases.

The created plan(s) should be saved to the src/test/jmeter directory we created earlier, as Gradle's JMeter plugin automatically executes all plans found in this directory.

 

Setting up a Jenkins job

1. Install Jenkins

If you don't happen to have a Jenkins CI server available, setting one up locally couldn't be easier. Download the latest release to a directory and run it with

# java -jar jenkins.war --httpPort=9090

Wait a bit, and Jenkins should be accessible from http://localhost:9090 with your web browser. 

The JMeter plan we went through earlier makes a request to http://localhost:9090. When you run Jenkins with the command above, JMeter will fetch the front page of your Jenkins CI server when the tests are run. If you prefer to use some other Jenkins installation, you might want to edit the provided MyPlan.jmx to point to that address.

2. Install needed Jenkins plugins

Go to Manage Jenkins > Manage Plugins > Available and install

  • Gradle Plugin
  • Meliora Testlab Plugin
  • Performance Plugin
  • xUnit Plugin

2.1 Configure plugins

Go to Manage Jenkins > Configure System > Gradle and add a new Gradle installation for your locally installed Gradle. 

3. Create a job

Add a new "free-style software project" job to your Jenkins and configure it as follows:

3.1 Add build step: Execute shell

Add a new "Execute shell" build step to copy the contents of the Gradle project you set up earlier to the job's workspace. This is needed because the project is not in a version control repository. Set up the step, for example, as follows:

[Screenshot: "Execute shell" build step configuration]

… or anything else that makes your Gradle project available in the Jenkins job's workspace.

Note: The files should be copied so that the root of the workspace contains the build.gradle file for launching the build.

3.2 Add build step: Invoke Gradle script

Select your locally installed Gradle version and enter "clean jmeterRun" in the Tasks field. This will run the "gradle clean jmeterRun" command for your Gradle project, which will clean up the workspace and execute the JMeter plan.

[Screenshot: "Invoke Gradle script" build step configuration]

3.3 Add post-build action: Publish Performance test result report (optional)

Jenkins CI's Performance plugin provides trend reports on how your JMeter tests have performed. This plugin is not required for the Testlab integration, but it provides handy performance metrics in your Jenkins job view. To set up the action, click "Add a new report", select JMeter and set Report files to "**/jmeter-report/*.xml":

[Screenshot: Performance plugin post-build action configuration]

Other settings can be left at their defaults, or you can configure them to your liking.

3.4 Add post-build action: Publish xUnit test result report

Testlab's Jenkins plugin needs the test results to be available in the so-called xUnit format. In addition, this step will generate test result trend graphs in your Jenkins job view. Add a post-build action to publish the test results resolved from the JMeter assertions by selecting a "Custom Tool", as follows:

[Screenshot: xUnit post-build action configuration]

Note: The jmeter_to_xunit.xsl custom stylesheet is mandatory. It translates JMeter's log files to the xUnit format. The .xsl file is located in the jmeterproject directory in the zip file and will be available in the Jenkins workspace root if the project is copied there as set up earlier.

3.5 Add post-build action: Publish test results to Testlab

The above steps will set up the workspace, execute the JMeter tests, publish the needed reports to the Jenkins job view and translate the JMeter log file(s) to the xUnit format. What is left is to push the test results to Testlab. For this, add a "Publish test results to Testlab" post-build action and configure it as follows:

[Screenshot: "Publish test results to Testlab" post-build action configuration]

For the sake of simplicity, we will be using the "Demo" project of your Testlab. Make sure to configure the "Company ID" and "Testlab API key" fields to match your Testlab environment. The Test case mapping field is set to "Automated", which is configured by default as a custom field in the "Demo" project.

If you haven't yet configured an API key for your Testlab, log on to your Testlab as a company administrator and configure one from Testlab > Manage company … > API keys. See Testlab's help manual for more details.

Note: Your Testlab edition must be one with access to the API functions. If you cannot see the API keys tab in your Manage company view and wish to proceed, please contact us and we will get it sorted out.

 

Mapping JMeter tests to test cases in Testlab

For the Jenkins plugin to be able to record the test results to your Testlab project, the project must contain matching test cases. As explained in the plugin documentation, your project in Testlab must have a custom field set up which is used to map the incoming test results. In the "Demo" project this field is already set up (called "Automated").

[Screenshot: the JMeter test plan (MyPlan.jmx)]

Every assertion in JMeter's test plan will record a distinct test result when run. In the simple plan provided we have a single HTTP Request Sampler named "Front page". This Sampler is tied to two assertions (named "Duration" and "ResponseCode") which check whether the request was executed properly. When translated to the xUnit format, these test results are identified as <Sampler name>.<Assertion name>, for example:

  • Front page/Duration will be identified as: Front page.Duration and
  • Front page/ResponseCode will be identified as: Front page.ResponseCode

To map these test results to test cases in the “Demo” project,

1. Add test cases for JMeter assertions

Log on to Testlab’s “Demo” project, go to Test case design and

  • add a new Test category called “Load tests”, and to this category,
  • add a new test case “Front page speed”, set the Automated field to “Front page.Duration” and Approve the test case as ready and
  • add a new test case “Front page response code”, set the Automated field to “Front page.ResponseCode” and Approve the test case as ready.

Now we have two test cases whose "Test case mapping field" ("Automated"), set up earlier, contains the JMeter assertions' identifiers.

 

Running JMeter tests

What is left is to run the actual tests. Go to your Jenkins job view and click "Build now". A new build should be scheduled, executed and completed – probably as FAILED. This is expected, because the JMeter plan has the 5 millisecond assertion which should fail.

 

Viewing the results in Testlab

Log on to Testlab’s “Demo” project and select the Test execution view. If everything went correctly, you should now have a new test run titled “jmeter run” in your project:

[Screenshot: "jmeter run" test run in Testlab]

As expected, the "Front page speed" test case is reported as failed and the "Front page response code" test case as passed.

As we configured the publisher to open issues for failed tests, we should also have an issue present. Change to the Issues view and verify that an issue has been opened:

[Screenshot: the issue opened for the failed test]

 

Viewing the results in Jenkins CI

The matching results are present in your Jenkins job view. Open up the job view from your Jenkins:

[Screenshot: Jenkins job view with trend graphs]

The view holds the trend graphs from the plugins we set up earlier: "Responding time" and "Percentage of errors" from the Performance plugin and "Test result Trend" from the xUnit plugin.

To see the results of the assertions, click “Latest Test Result”:

[Screenshot: latest test result in Jenkins]

The results show that the Front page.Duration test failed and the Front page.ResponseCode test passed.

 

Links referred

  • Apache JMeter home page: http://jmeter.apache.org/
  • Apache JMeter user manual: http://jmeter.apache.org/usermanual/index.html
  • Apache JMeter assertion reference: http://jmeter.apache.org/usermanual/component_reference.html#assertions
  • BlazeMeter: How to Use JMeter Assertions in 3 Easy Steps: http://blazemeter.com/blog/how-use-jmeter-assertions-3-easy-steps
  • Assets needed for running the example: https://www.melioratestlab.com/wp-content/uploads/2014/09/testlab_jmeter_example.zip
  • Gradle JMeter plugin: https://github.com/kulya/jmeter-gradle-plugin
  • Gradle home page: http://www.gradle.org/
  • Latest Jenkins CI release: http://mirrors.jenkins-ci.org/war/latest/jenkins.war
  • XSL file to translate a JMeter JTL file to the xUnit format: https://www.melioratestlab.com/wp-content/uploads/2014/09/jmeter_to_xunit.xsl_.txt
  • Meliora Testlab Jenkins Plugin documentation: https://wiki.jenkins-ci.org/display/JENKINS/Meliora+Testlab+Plugin

 

 

 



Tags for this post: example integration jenkins load testing usage 


27.5.2014

Taking screenshots and capturing video

This article introduces an easy way to capture and annotate screenshots during testing. We show you a couple of ways to use the screen capturing and recording tool Monosnap.

The latest Testlab release brings you built-in integration with Monosnap, a handy screen capturing tool with the possibility of annotating screenshots before uploading. Testlab supports Monosnap's desktop clients for the Windows and Mac OS X operating systems. You are of course free to use any screen capturing tool you prefer, but we feel Monosnap really stands out from the crowd in features and ease of use.

 

Why take screenshots or record video

When you are testing software on your workstation, taking screenshots is a great way of documenting issues. A picture is worth a thousand words, right? For example, when an issue such as a defect is encountered, capturing a screenshot, annotating it to highlight the issue precisely and uploading it to Testlab usually tells the team members very well what went wrong. If the capturing tool allows you to annotate the shot, even better – the amount of textual description you need to enter for the defect is typically much smaller when you can mark and highlight the relevant parts of the screenshot.

The benefits of using screenshots in issue management are quite self-evident. But screenshots and recorded screen captures can be quite beneficial in requirements management too. For example, when you are documenting new features for existing user interfaces, taking a screenshot and annotating it properly is a great addition to your requirement documentation. The same applies to test cases: if a test case tests a complex user interface, a well-annotated screenshot or two can be a great help for the tester.

 

Monosnap introduced

Monosnap is a collaboration tool for taking screenshots, sharing files and recording video from your desktop. The tool is available for multiple platforms (such as a Google Chrome extension and apps for iPhone and iPad), but here we talk about the installable desktop clients for the Microsoft Windows and Mac OS X operating systems, as they can be integrated and used seamlessly with Testlab.

When installed and launched, Monosnap runs as a desktop application and is accessed in a way that depends on your operating system. On Mac OS X the tool is available as an icon in your menu bar. Similarly, on Windows the tool is available in the system tray and, if you prefer, as a hovering hotspot on your desktop.

For capturing screenshots the basic way of working with Monosnap is as follows:

  1. You capture an area of your desktop by selecting “Capture Area” from Monosnap’s menu or pressing the appropriate keyboard shortcut.
  2. A Monosnap window appears with the captured area shown. The window has functions for annotating the capture: for example, you can draw different shapes on it and write text over it.
  3. When you are happy with the capture you can upload it to a service of your choice or save the capture on your disk.

For capturing video, you

  1. select "Record Video" from Monosnap's menu or press the appropriate keyboard shortcut.
  2. Monosnap's recording frame appears. Move and resize this frame over the area of your desktop you would like to record as a video capture. You also have options to record video from your workstation's web cam and audio from your microphone if you prefer.
  3. To start recording, press the Rec button. You can annotate the video during recording by drawing different shapes on it. When you have recorded your video, press the Rec button again to stop the capture.
  4. When recorded, the video is encoded to MP4 format, which might take a few seconds depending on your workstation. A window appears with the encoded video in it, which you can preview before uploading. You can then upload the captured video to a service of your choice or access the encoded video file on your disk.

 

Using Monosnap with Testlab

To use Monosnap with Testlab you have two options: take screen captures with Monosnap and upload them manually to Testlab by dragging and dropping, or integrate Monosnap with Testlab's WebDAV interface, which allows you to upload captures to Testlab with the click of a button.

 
Uploading manually

When uploading manually, no pre-configuration is needed. You can use Monosnap any way you prefer, and when you have a capture ready, upload it to Testlab the same way you would upload a regular file attachment. Monosnap makes this quite easy as it features a "Drag bar" on the right-hand side of the capture window. From this bar you can simply grab and drag the capture onto your Testlab browser window and attach it to the open asset just by dropping it.

If dragging and dropping is not possible for some reason, you can of course save the capture to your disk and upload it to Testlab in the regular way.

To see how it actually works play the video below:

[Video: uploading a capture to Testlab by dragging and dropping]

 

 

WebDAV integration

Monosnap is great in that it can upload captures to a service of your choice with the click of a button. This enables Testlab to act as a WebDAV storage into which Monosnap can push the captures. When configured, you can just press Monosnap's Upload button and the capture is automatically uploaded to Testlab and attached to the asset open in your Testlab browser window.

To make use of this feature some pre-configuration is needed:

  1. Open up Monosnap’s menu and select “Preferences…” or “Settings…”. Monosnap’s settings window opens up.
  2. Select “General” tab and configure the following:
    • After screenshot: Open Monosnap editor
    • After upload: Do not copy
    • Open in browser: no
    • Short links: no
  3. Select the “Account / WebDAV” view and configure the following:

    For Mac OS X:

    • Host: https://COMPANY.melioratestlab.com/api/attachment/user
      Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using hosted Testlab from mycompany.melioratestlab.com enter “https://mycompany.melioratestlab.com/api/attachment/user” to this field. For on-premise installations, set this field to match the protocol, host name and the port of your server to a /api/attachment/user context.
    • Port: Leave as blank (shows as gray “80”)
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Folder: Leave as blank (shows as gray “/var/www/webdav”)
    • Base URL: Leave as blank (shows as gray “http://127.0.0.1/webdav”)
      Click the “Make default” button to make the configured WebDAV service the default upload service of Monosnap. When set, the Upload button always uses this service.

       

      For Microsoft Windows:

    • Host: COMPANY.melioratestlab.com
    • Note: Replace COMPANY with the virtual host of your own Testlab. For example, if you are using Testlab from mycompany.melioratestlab.com enter “mycompany.melioratestlab.com” to this field.
    • Port: HTTPS or HTTP port of your Testlab server – if you are using hosted Testlab enter 443
    • User: User ID of your Testlab account
    • Password: Password of your Testlab account
    • Directory: /api/attachment/user
    • Base URL: Leave as blank

The pre-configuration is documented in detail in the "Screenshots and recording" section of Testlab's integrated help manual.

Keep in mind that the pre-configuration needs to be done only once. Once you've configured Monosnap to upload captures to Testlab, it just works – no need to configure it again later.

Where is the capture uploaded to

When captures are uploaded via Testlab's WebDAV interface, they are automatically attached to the asset currently open in your Testlab browser window. So when uploading, make sure you have an asset (a requirement, a test case or an issue) open in your Testlab window in such a way that a file can be attached to it. If, for example, your Testlab user account does not have the permissions needed to attach files to assets, the upload will just silently fail.

To see the WebDAV-integrated Monosnap in action, play the video below:

[Video: uploading captures via the WebDAV integration]

 

 

Advantages gained

Having easy-to-use screen capture tools makes documenting easier and speeds up work on multiple levels: documenting issues and other assets is faster, and people dealing with the documented assets have a clearer understanding of the issue at hand.



Tags for this post: example features screenshots usage video 


25.4.2014

Exploratory testing with Testlab

In this article we introduce the recently released features that enable a more streamlined workflow for exploratory testing.

Exploratory testing is an approach where the tester or team of testers 'explores' the system under test and, during the testing, generates and documents good test cases to be run. In more academic terms, it is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.

Compared to scripted testing – where test cases and scenarios are pre-planned before execution – exploratory testing is often seen as freer and more flexible. Each of these methodologies has its own benefits and drawbacks, and in reality all testing usually falls somewhere between the two. We won't go into methodological detail in this article, as we focus on how to do the actual test execution in an exploratory way with Testlab. We can conclude, though, that exploratory testing is particularly suitable when the requirements for the system under test are incomplete or there is a lack of time.

 

Pre-planning exploratory test cases

As said, most testing approaches can be placed somewhere between a fully pre-scripted and a fully exploratory approach. It is often worth considering whether pre-planning the test cases in some way would be beneficial. If the system under test is not a total black box (meaning there is some knowledge or even specifications available), it might be wise to add so-called stubs for your test cases in a pre-planning phase. Pre-planning test case stubs may give you better insight into testing coverage, as during pre-planning you have the option to bind the test cases to requirements. We'll discuss using requirements in exploratory testing in more detail later.

For example, one approach might be to just add the test cases you think you need to cover some or all areas of the system, without the actual execution steps. The execution steps, preconditions and expected results would then be filled in, in exploratory fashion, during the testing. Alternatively, you might be able to plan the preconditions and expected results and leave just the execution steps for the testers.

Keep in mind that pre-planning test cases does not and should not prevent your testers from adding totally new test cases during the testing. You should also consider whether pre-planning might affect your testers' way of testing. Sometimes this is not desirable, so take into account the experience level of your testers and how the different pre-planning models fit into your overall testing approach and workflow.

 

Exploratory testing session

Exploratory testing is not an exact testing methodology per se. In reality there are many testing methods, such as session-based testing or pair testing, which are exploratory in nature. As Testlab is methodology-agnostic and can be used with various testing methods, in this article we cover all of them by simply establishing that the testing must be done in a testing session. The testing method itself can be any method you wish to use, but the actual test execution must be done in a session which optionally specifies

  • the system or target under test (such as a software version),
  • the environment in which the testing is executed (such as a production environment, staging environment, …) and
  • a timeframe in which the testing must be executed (a starting date and/or a deadline).

To execute test cases in a testing session, add a new test run to your project in Testlab. Go to the Test runs view and click the Add test run… button to add a new blank testing session.

New testing session as a test run

When the needed fields are set, you have the option to just Save the test run for later execution or to Save and Start… the added test run immediately. The test run is added to the project as blank, meaning it does not have any test cases bound to it yet. We want to start testing right away, so we click the Save and Start… button.

 

Executing tests

The functionality available during execution is a good match for an exploratory testing session. Test execution in Testlab enables you to

  • pick a test case for execution,
  • record testing results for test cases and their steps,
  • add issues such as found defects,
  • add comments to test cases and for executed test case steps,
  • add existing test cases to the run for execution,
  • remove test cases from the run,
  • re-order test cases in the run to a desired order,
  • add new test cases to the test case hierarchy and pick them for execution,
  • edit existing test cases and their steps and
  • cut and copy test cases around in your test case hierarchy.

The actual user interface while executing looks like this:

[Screenshot: test execution view]

The left-hand side of the window has the same test case hierarchy tree that is used to manage test cases in the test planning view. It enables you to add new categories and test cases and move them around in your hierarchy. The hierarchy tree may be hidden – you can show (and hide) it by clicking the resizing bar of the tree panel. The top right shows the basic details of the test run you are executing, and the list below it shows the test cases picked for execution in this testing session.

The panel below the list of test cases holds the details of a single test case. When no test cases are available for execution, the panel disables itself (as shown in the shot above) and lists all issues from the project. This is especially handy when testing for regression or re-testing – it makes it easy to reopen closed findings and retest resolved issues.

The toolbar of buttons at the bottom enables you to edit the current test case, add an issue and record results for test cases. The "Finish", "Abort" and "Stop" buttons are used to end the current testing session. Keep in mind that finishing, aborting and stopping each have their own meaning, which we will come to later in this article.

 

Adding and editing test cases during a session

When exploring, it is essential to be able to document the steps for later execution if an issue is found. This makes scripted regression testing easier later on. Also, if your testing approach aims to document test cases for later use by exploring the system, you must be able to add them easily during execution.

If you have existing test cases which you would like to pick for execution during a session, you can drag them from the test case hierarchy tree onto the list of test cases. Additionally, you can add new test cases by selecting New > Test case on the test category you want to add the new test case to. Picking test cases and adding new ones via inline editing is demonstrated in the following video:

 

 

 

Editing existing test cases is similar to adding them. You just press the Edit button in the bottom bar to switch the view to editing mode. The edits are made in the same fashion as when adding.

 

Ending the session

When you have executed the tests you want, you have three options:

Finish, abort or stop

It is important to understand the difference, which comes from the fact that each executed session is always part of a test run. If you wish to continue executing tests in the same test run at a later time, you must Stop the session. This way the test run can be continued normally later on.

If you conclude that the test run is complete and you do not wish to continue it anymore, you should Finish it. When you do so, the test run is marked as finished and no testing sessions can be started on it anymore. It should be noted, though, that if you later discard a test case result from the run, the run is reset back to the Started state and is executable again.

Aborted test runs are considered discarded and cannot be continued later on. So, if for some reason you think that the test run is no longer valid and should be discarded, you can press the Abort run button.

 

Asset workflows and user roles in exploratory testing

Requirements, test cases and issues have an asset workflow tied to them via the project's workflow setting. This means that each asset has states it can be in (In design, Ready for review, Ready, …) and actions which can be executed on it (Approve, Reject, …). In exploratory testing, having a complex workflow for the project's test cases is usually not desirable. For example, a workflow which requires test cases to be reviewed by another party makes no sense when testers should be able to add, edit and execute test cases inline during testing.

That said, if you are using the default workflows, it is recommended to use the "No review" workflow for your projects.

 

No review workflow

 

When executing test cases which are not yet approved as ready, Testlab tries to approve them automatically on behalf of the user. This means that if the test case's workflow allows it (and the user has the needed permissions), the test case is automatically marked as approved during the session. In this way, using a more complex workflow in a project with an exploratory testing approach might work if the transitions between the test case states are suitable. That said, as testers must be able to add and edit test cases during execution, a review-based workflow is of little use.

The asset workflows' actions are also tied to user roles. For testers to be able to document test cases during execution, the tester users should also be granted the TESTDESIGNER role. This ensures that the users have the permissions needed to add and edit test cases.

 

Using requirements in exploratory testing approach

When test cases are designed in exploratory testing sessions, they are added without any links to requirements. In Testlab, testing coverage is reported against the system's requirements and, in testing parlance, a test case verifies the requirement it is linked to when the test case is executed and passes.

It is often recommended to bind the added test cases to requirements at a later stage. This way you can easily report what has actually been covered by testing. It should be noted that the requirements we talk about here don't have to be fully documented business requirements for this to work. For example, if you would just like to know which parts of the system have been covered, you might add the system's parts as the project's requirements and bind the appropriate test cases to them. This way a glance at Testlab's coverage view should give you insight into which parts of the system have been tested successfully.

Better yet, if you pre-planned your test cases in some form (see above), you might consider adding a requirement hierarchy too and linking your test case stubs to these requirements. This would give you insight into your testing coverage right from the start of testing.

 

Summary

In this article we discussed the new test execution features of Testlab that enable you to execute your tests using exploratory testing approaches. We went through the advantages of pre-planning some of your testing and of using requirements in your exploratory testing, talked about testing sessions, and noted how Testlab's workflows and user roles should be used in exploratory testing.



Tags for this post: example exploratory features testing usage 


4.12.2013

Introduction video

Today we're happy to bring you a brief introduction screencast about Testlab. The video will introduce you to the central concepts of Testlab and how they are presented in the user interface. We will take a glance at

  • requirements management,
  • test case design,
  • execution planning and test runs,
  • issue management and
  • test coverage.

Keep in mind that this introduction skips some central features of Testlab, such as reporting, but it should give you some insight into the use of Testlab. To view the introduction, please click below.

 

Introduction to Meliora Testlab



Tags for this post: demo example screencast usage video 


18.11.2013

Brand new Demo project available

We are happy to publish a renewed version of the Demo project for you all to familiarize yourselves with Testlab. You can find the project by logging in to your Testlab and choosing the project "Demo".

 

Boris has a problem

The new Demo project tells the story of Boris Buyer, who has an online book store. He's selling some books in his web store but has lately noticed declining sales. He decides to develop his web store further with the aid of his brother-in-law, who owns an IT company specializing in web design. They hatch a plan to develop Boris' online business further over a few agile development milestones.

The "Demo" project contains all the requirements, user stories, test cases, test sets, test runs, testing results and issues to make it easier for you to grasp the concepts of Testlab. The project starts on January 1st, 2013, and has one milestone version completed and happily installed in production, a second milestone completed but still in testing, and a third milestone for which planning has just started.

[Image: "Demo" project timeline]

The timeline diagram above presents a rough sketch of the "Demo" project's progress, showing the phases of the project at the top and the relevant assets found in Testlab at the bottom. The purple highlight on the timeline marks the moment the project is currently at.

 

Great, where can I read more?

A detailed description, a story and hints for what to look in Testlab can be found at

https://www.melioratestlab.com/demo-project

All new registrants will get this link in the welcome message they receive when signing up.

 

Just to note: all existing customers and registrants have had their old Demo project renamed to "Old demo", so any changes you've made to your previous demo project aren't lost.



Tags for this post: demo example usage 


 
 