Posts tagged with: best practices


1.2.2019

Official support for Jenkins Pipelines

A continuous delivery pipeline is an automated process for delivering your software to your customers. It expresses the steps needed to build your software from your version control system into a working, deployed state.

In Jenkins, Pipeline (with a capital P) provides a set of tools for modeling simple and complex pipelines in a domain-specific language (DSL) syntax. Most often this pipeline “script” is written to a Jenkinsfile kept inside your version control system. This way the Pipeline definition can be kept up to date as the software itself evolves. That said, Pipeline scripts can also be stored as they are in Pipeline-typed jobs in your Jenkins.

Meliora Testlab plugin for your Pipelines

Meliora provides a plugin for Jenkins which allows you to easily publish your automated testing results to your Testlab project.

Previously, it was possible to use the plugin in Pipeline scripts only by wrapping it in a traditional Jenkins job and triggering that job with a “build” step. Version 1.16 of the plugin has now been released with official support for Pipeline scripts: the plugin can be used directly in your scripts with the ‘melioraTestlab’ step.
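For reference, the old workaround looked roughly like this – a sketch in which ‘publish-to-testlab’ is a hypothetical traditional job with the plugin configured as its post-build action:

stage('Publish results') {
    steps {
        // Trigger the traditional wrapper job; the Testlab plugin then runs
        // as that job's post-build action.
        build job: 'publish-to-testlab'
    }
}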

When the plugin is configured as a traditional post-build action in a Jenkins job, the plugin settings are entered via the job configuration in the web UI. In Pipelines, the settings are passed as parameters to the step keyword.

Simple Pipeline script example

The script below is an example of a simple Declarative Pipeline script with the Meliora Testlab plugin configured in a minimal manner.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // ...
            }
        }
        stage('Test') {
            steps {
                // ...
            }
        }
        stage('Deploy') {
            steps {
                // ...
            }
        }
    }
    post {
        always {
            junit '**/build/test-results/**/*.xml'
            melioraTestlab(
                projectKey: 'PRJX',
                testRunTitle: 'Automated tests',
                advancedSettings: [
                    companyId: 'mycompanyid',
                    apiKey: hudson.util.Secret.fromString('verysecretapikey'),
                    testCaseMappingField: 'Test class'
                ]
            )
        }
    }
}

The script builds, tests and deploys the software (with the actual steps omitted) and, in a post stage which always runs, publishes the generated test results to your PRJX project in Testlab by storing them in a test run titled ‘Automated tests’. Note that the advancedSettings block is optional: if you configure these values in the global settings of Jenkins, the plugin uses the global settings instead of the values set in the script.
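With those values configured globally, the step shrinks to a minimal form – a sketch:

melioraTestlab(
    projectKey: 'PRJX',
    testRunTitle: 'Automated tests'
    // advancedSettings omitted: the company ID, API key and test case
    // mapping field are read from the global configuration of Jenkins.
)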


Pipeline script example with all settings present

The example below lists all parameters supported (at the time of writing) by the melioraTestlab step.

pipeline {
    agent any
    stages {
        // ...
    }
    post {
        always {
            junit '**/build/test-results/**/*.xml'
            melioraTestlab(
                projectKey: 'PRJX',
                testRunTitle: 'Automated tests',
                comment: 'Jenkins build: ${BUILD_FULL_DISPLAY_NAME} ${BUILD_RESULT}, ${BUILD_URL}',
                milestone: 'M1',
                testTargetTitle: 'Version 1.0',
                testEnvironmentTitle: 'integration-env',
                tags: 'jenkins nightly',
                parameters: 'BROWSER, USERNAME',
                issuesSettings: [
                    mergeAsSingleIssue: true,
                    reopenExisting: true,
                    assignToUser: 'agentsmith'
                ],
                importTestCases: [
                    importTestCasesRootCategory: 'Imported/Jenkins'
                ],
                publishTap: [
                    tapTestsAsSteps: true,
                    tapFileNameInIdentifier: true,
                    tapTestNumberInIdentifier: false,
                    tapMappingPrefix: 'tap-'
                ],
                publishRobot: [
                    robotOutput: '**/output.xml',
                    robotCatenateParentKeywords: true
                ],
                advancedSettings: [
                    companyId: 'mycompanyid', // your companyId in SaaS/hosted service
                    apiKey: hudson.util.Secret.fromString('verysecretapikey'),
                    testCaseMappingField: 'Test class',
                    usingonpremise: [
                        // optional, use only for on-premise installations
                        onpremiseurl: 'http://testcompany:8080/'
                    ]
                ]
            )
        }
    }
}

If you wish to familiarize yourself with the meaning of each setting, please refer to the plugin documentation at https://plugins.jenkins.io/meliora-testlab.

(Pipeline image from Jenkins.io – CC BY-SA 4.0 license)



Tags for this post: automation best practices features jenkins plugin release usage 


23.2.2017

Unifying manual and automated testing

Automating tests has long been a practice used to gain benefits for your testing process. Manual testing with pre-defined steps is still surprisingly common, and especially during acceptance testing we still often put our trust in the good old tester. Unifying manual and automated testing in a transparent, easily managed and reported way is particularly important for organizations pursuing gains from test automation.

Not all automated testing is alike

The gains from test automation are numerous: automated testing saves time, makes tests easily repeatable and less error-prone, makes distributed testing possible and improves the coverage of the testing, to name a few. It should be noted, though, that not all automated testing is the same. For example, modern testing harnesses and tools make it possible to automate and execute complex UI-based acceptance tests while, at the same time, developers implement low-level unit tests. From the reporting standpoint, it is essential to be able to combine the results from all kinds of tests into a manageable and easily approachable view with the correct level of detail.

I don’t know what our automated tests do and what they cover

It is often the case that testers in the organization waste time manually testing features that are already covered by a good set of automated tests. This happens because test managers don’t always know the details of the (often very technical) automated tests. The automated tests are not trusted, and their results are hard to combine into the overall status of the testing.

This problem is often compounded by the fact that many test management tools report the results of manual and automated tests separately. In the worst case, the test manager must know how the automated tests work to be able to judge the coverage of the testing.

Scoping the automated tests in your test plan

Because the nature of automated tests varies, it is important that the test management tool offers an easy way to scope and map the results of your automated tests to your test plan. It is often not preferable to report the status of each and every test case (especially in the case of low-level unit tests) because that makes it harder to get the overall picture of your testing status. It is still important to pay attention to the results of these tests, so that failures in them get reported.

Let’s take an example of how automated tests are mapped in Meliora Testlab.

The example above shows a simple hierarchy of functions (requirements) which are verified by test cases in the test plan:

  • The UI / Login function is verified by a manual test case “Login test case”,
  • the UI / User mgmnt / Basic info and UI / User mgmnt / Credentials functions are verified by a functional manual test case “Detail test case”, and
  • the Backend / Order mgmt functions are verified by automated tests mapped to a test case “Order API test case” in the test plan.

Mapping is done by simply specifying the package identifier of the automated tests to a test case. When testing, the results of tests are always recorded to test cases:

  1. The login view and user management views of the application are tested manually by the testers and the results of these tests get recorded to test cases “Login test case” and “Detail test case“.
  2. The order management is tested automatically with results from automated tests “ourapp.tests.api.order.placeOrderTest” and “ourapp.tests.api.order.deliverOrderTest“. These automated tests are mapped to test case “Order API test case” via automated test package “ourapp.tests.api.order“.

The final result for the test case in step 2 is derived from the results of all automated tests under the package “ourapp.tests.api.order“. If one or more tests in this package fail, the test case will be marked as failed. If all tests pass, the test case is also marked as passed.
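As a sketch of this derivation rule – illustrative Groovy, not Testlab’s actual implementation:

// Illustrative sketch of the derivation rule above – not Testlab's actual code.
// A test case mapped to a package passes only if every automated test under
// that package passes; it fails if one or more of them fail.
def deriveResult(String mappedPackage, Map<String, Boolean> testResults) {
    def inScope = testResults.findAll { name, passed -> name.startsWith(mappedPackage + '.') }
    if (!inScope) return 'NO RESULT'
    inScope.values().every { it } ? 'PASSED' : 'FAILED'
}

// Example: "Order API test case" is mapped to the package 'ourapp.tests.api.order'.
def results = [
    'ourapp.tests.api.order.placeOrderTest'  : true,
    'ourapp.tests.api.order.deliverOrderTest': false
]
assert deriveResult('ourapp.tests.api.order', results) == 'FAILED'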

As automated tests are mapped via their package hierarchy, it is easy to fine-tune the level of detail at which you scope your automated tests to your test plan. In the example above, if it is deemed necessary to always report the detailed results of order delivery related tests, the “ourapp.tests.api.order.deliverOrderTest” automated test can be mapped to a test case of its own in the test plan.

Automating existing manual tests

As test automation has clear benefits for your testing process, the testing process and the tools used to manage it should support easy automation of existing manual tests. From the test management tool’s standpoint, it is not relevant which technique is used to actually automate the test; what matters is that the reporting and coverage analysis stay the same and that the results of these automated tests are easily pushed to the tool.

To continue with the example above, let’s presume that the login related manual tests (“Login test case”) are automated using Selenium:

The test designers record and create the automated UI tests for the login view in a package “ourapp.tests.ui.login”. Now, the manual test case “Login test case” can be easily mapped to these tests with the identifier “ourapp.tests.ui.login”. The test cases themselves, the requirements and their structure do not need any changes. When the Selenium-based tests are later run, their results determine the result for the test case “Login test case”. The reporting of the testing status stays the same, the structure of the test plan is the same, and the related reports stay easily approachable for the people formerly familiar with them.
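As an illustration, such a test could look like the following – a hypothetical Groovy/JUnit Selenium test whose class name, application URL and assertion are made up for this example; only the package identifier matters for the mapping:

package ourapp.tests.ui.login

import org.junit.Test
import org.openqa.selenium.By
import org.openqa.selenium.firefox.FirefoxDriver

class LoginTest {

    @Test
    void userCanLogIn() {
        // Hypothetical UI test: the URL, field names and the final
        // assertion are illustrative only.
        def driver = new FirefoxDriver()
        try {
            driver.get('http://ourapp.example.com/login')
            driver.findElement(By.name('username')).sendKeys('tester')
            driver.findElement(By.name('password')).sendKeys('secret')
            driver.findElement(By.name('login')).click()
            assert driver.title.contains('Dashboard')
        } finally {
            driver.quit()
        }
    }
}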


Summary

Test automation and manual testing are most often best used in combination. It is important that the tools used for test management support reporting on this kind of testing in as flexible a way as possible.

(Icons used in illustrations by Thijs & Vecteezy / Iconfinder)



Tags for this post: automation best practices example features product reporting usage 


9.9.2014

How to: Work with bugs and other issues

A tester’s role in the project is often seen in two major areas: building confidence that the software fulfills its needs and, on the other hand, finding the parts where it does not, or might not. Just pointing a finger at where the problem lies is seldom enough: the tester needs to tell how the problem was encountered and why the found behaviour is a problem. The same goes for enhancement ideas. This post is about how to work with bugs (which we call defects in this post) and other findings (we call all the findings issues), from writing them to closing them.

Using time wisely

(Image: the author encountering a bug)

While small-scale projects can cope with findings written on post-it notes, Excel sheets or such, efficient handling of large amounts of issues that way is next to impossible. The real reason for having a more refined way of working with issues is to enable the project to react to findings as needed. Some issues need a fast response; some are best listed for future development, but not forgotten. When you work right with issues, people in the project find the right ones to work with at a given time, and also understand what each issue was written about.

As software projects are complex entities, it is hard to achieve such a thing as a perfect defect description, but descriptions need to be pretty darn good to be helpful. If a tester writes a report that the receiver does not understand, it will be sent back or, even worse, disregarded. That said, getting the best bang for the buck for the tester’s time does not always mean writing an essay. If the tester sits next to the developer, just a descriptive name for the issue might be enough. Then the tester just has to remember what was meant by the description! The same kind of logic goes for defect workflows. On some projects workflows are not needed to guide the work, but on others it really makes sense for an issue to go through a certain flow. There is no single truth. This article tells some of the things you should consider. You pick what you need.

Describing the finding

The first thing to do after finding a defect is not always to write it down to the error management system as fast as possible. Has the thing been reported earlier? If not, the next thing is to find out more about what happened. Can you replicate the behaviour? Does the same thing happen elsewhere in the system under test? What was the actual cause of the issue? If you did a long and complex operation during the testing but can replicate the issue with a more compact set of actions, describe the more compact one.

After you have an understanding of the issue, you should write (a pretty darn good) description of it. The minimum is to write down the steps you took. If you found out more about the behaviour, like how it does not happen in similar conditions, write that down too. If the issue can be found by executing a test case, do not rely on the reader reading the test case; write down what is relevant to causing the issue. Attached pictures are often a great way of showing what went wrong, and attached videos a great way of showing what you did to cause it. They are really fast to record, and a good video shows exactly what needs to be shown. Use them. See https://www.melioratestlab.com/taking-screenshots-and-capturing-video/ for details.

Consider adding an exact time stamp of when you tested the issue. The developers might want to dig into the log files – or even better, attach the log files yourself. It is not always clear whether something is a defect. If there is any room for debate, also write down why you think the behaviour is a defect.

Besides writing a good description, you should also classify your defects. This helps the people in the project find the most important ones for them from the defect mass. Consider the following fields:


  • Priority – Not all issues are as important at the moment.
  • Severity – Tells about the effect of the issue, which can be different from the priority. Then again, in most projects severity and priority go hand in hand, in which case it is easier to use just one field.
  • Assignment – Who is expected to take the next step in handling the issue.
  • Type – Is the issue a defect, an enhancement idea, a task or something else.
  • Version detected – What version was tested when the issue was found.
  • Environment – On what environment the issue was found.
  • Application area – What application area the issue affects.
  • Resolution – For closed issues, the reason why the issue was closed.

Add the fields that are meaningful for you to classify defects to your project, and hide the ones that are not relevant. If you are not sure whether you benefit from a classification field, do not use the field. Use tags instead: you can add any tags to issues and later use them to find the issues or to report on them. Easy.

Issue written – what next?

Now that you have written an error report, did you find the issue by conducting an existing test? If not, should there be a test that would find that given defect? After the issue has been corrected you will probably get to verify the fix, but if the issue seems likely to re-emerge, write a test case for it.

(Image: an example defect workflow)

Writing an issue begins its life cycle. At a bare minimum a defect has two stages – open while you work on it and closed when you have done what you wanted to do – but most of the time the project benefits from having more stages in the workflow. The benefits mainly come from easier management of large amounts of findings: it helps to know what needs to be done next, and by whom. With information on what state the issues are in, the project can spot bottlenecks more easily. Generally, the bigger the project, the more benefit there is in having more statuses. That said, if the statuses do not give you more relevant information about the project, do not collect them. Simpler is better.

  • New – Sometimes it makes sense to have issues verified by someone other than the initial reporter before they are added for further handling. New distinguishes these from Open issues.
  • Open – Defects that have been found, but for which no decision on how the situation will be fixed has been made yet.
  • Assigned – The defect is assigned to be fixed.
  • Reopened – A delivered fix did not correct the problem, or the problem occurred again later. Sometimes it makes sense to distinguish reopened issues from open ones to see how common it is for problems to re-emerge.
  • Fixed – Issues are marked fixed when the described problem has been corrected.
  • Ready for testing – After an issue has been fixed, this status can be used to mark the ones that are ready for testing.
  • Postponed – Distinguishes those open issues that you are not planning to work on for a while.
  • Closed – After a finding has been confirmed to no longer be relevant, the issue can be closed. Typically an issue is closed after the fix has been tested.
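To make such a flow concrete, here is one way to express the statuses above as allowed transitions – an illustrative Groovy sketch, not how Testlab models workflows:

// Illustrative sketch: the workflow above expressed as allowed status transitions.
def transitions = [
    'New'              : ['Open', 'Closed'],
    'Open'             : ['Assigned', 'Postponed', 'Closed'],
    'Assigned'         : ['Fixed', 'Open'],
    'Fixed'            : ['Ready for testing'],
    'Ready for testing': ['Closed', 'Reopened'],
    'Reopened'         : ['Assigned'],
    'Postponed'        : ['Open'],
    'Closed'           : ['Reopened']
]

// Check whether a status change is allowed in this workflow.
boolean canMove(Map transitions, String from, String to) {
    transitions.getOrDefault(from, []).contains(to)
}

assert canMove(transitions, 'Fixed', 'Ready for testing')
assert !canMove(transitions, 'New', 'Assigned')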

When working with defects and changing their statuses, it is important to comment on the issues whenever something relevant changes. Basically, if a comment adds information about the found issue, it should probably be added. The main idea is that it is later possible to come back to the discussion, so you don’t need to guess why something was or wasn’t done about the issue. In particular, it is important to write down how the change will be fixed (when there is room for doubt), and finally how the fix was implemented, so that the tester knows how to re-test the issue. If the developer can explain how the issue was fixed, it helps the tester find other possible issues that may have arisen when the fix was applied.

Digging into the defect mass

So now that you have implemented this kind of classification for working with issues, what could you learn from the statistics? First off, even with sophisticated rating and categorization, the real situation can easily hide behind numbers. It is more important to react to each individual issue correctly than to rate and categorize issues and only later react to statistics. That said, in complex projects the statistics, or numbers, help you understand what is going on and where the project’s focus should be.

The two most important ways to categorize defects are priority and status. Thus a report showing open issues per priority, grouped by status, is a very good starting point for looking at the current issue situation. Most of the time you handle defects differently from other issues, so you would pick the defects into one report and the other issue types into another. This kind of report might show you, for example, that there is one critical defect assigned, 3 high priority defects open and 10 normal priority defects fixed. The critical and high priority defects you would probably want to go through individually, to at least make sure they get fixed as soon as they should so they do not hinder other activities; for the fixed ones you would check whether something needs to be done to enable re-testing. If at some point you see some category growing, you know whom to ask questions. For example, a high number of assigned defects would indicate a bottleneck in development, and prolonged numbers in “ready for testing” a bottleneck in testing.
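As an illustration of the idea behind such a report – a Groovy sketch over made-up issue data, not an actual Testlab report or API:

// Sketch: counting non-closed defects per (priority, status) from made-up data.
def issues = [
    [type: 'Defect',      priority: 'Critical', status: 'Assigned'],
    [type: 'Defect',      priority: 'High',     status: 'Open'],
    [type: 'Defect',      priority: 'High',     status: 'Open'],
    [type: 'Defect',      priority: 'Normal',   status: 'Fixed'],
    [type: 'Enhancement', priority: 'Low',      status: 'Open']   // reported separately
]

def counts = issues
    .findAll { it.type == 'Defect' && it.status != 'Closed' }
    .countBy { [it.priority, it.status] }

println counts   // e.g. [[Critical, Assigned]:1, [High, Open]:2, [Normal, Fixed]:1]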

Another generally easy-to-use report is the trend report of open issues by their statuses, i.e. the development of the issue counts over time. As the report shows how many open issues there have been at a given time, you’ll see the trend – whether you can close issues at the same pace you open them.

This only scratched the surface of working with defects. If you have questions or would like to point something out, feel free to comment.

Happy hunting!

Joonas



Tags for this post: best practices issues testing usage 

