Posts tagged with: testing


9.9.2014

How to: Work with bugs and other issues

The tester's role in a project is often seen in two major areas: building confidence that the software fulfils its needs and, on the other hand, finding the parts where it does not, or might not. Just pointing a finger at where the problem lies is seldom enough: the tester needs to tell how the problem was encountered and why the found behaviour is a problem. The same goes for enhancement ideas. This post is about how to work with bugs (which we call defects in this post) and other findings (we call all findings issues), from writing them up to closing them.

 

Using time wisely

Author encountering a bug

While small-scale projects can cope with findings written on post-it notes, Excel sheets or the like, efficient handling of large amounts of issues that way is next to impossible. The real reason for having a more refined way of working with issues is to enable the project to react to findings as needed. Some issues need a fast response, some are best listed for future development but not forgotten. When you work with issues the right way, people in the project find the right ones to work on at a given time, and they also understand what each issue was written about.

As software projects are complex entities, it is hard to achieve such a thing as a perfect defect description, but descriptions need to be pretty darn good to be helpful. If a tester writes a report that the receiver does not understand, it will be sent back or, even worse, disregarded. That said, getting the best bang for the buck out of the tester's time does not always mean writing an essay. If the tester sits next to the developer, just a descriptive name for the issue might be enough. The tester then just has to remember what was meant by that description! The same kind of logic applies to defect workflows. On some projects workflows are not needed to guide the work, but on others it really makes sense for an issue to go through a certain flow. There is no single truth. This article lists some of the things you should consider. You pick what you need.

 

Describing the finding

The first thing to do after finding a defect is not always to write it down in the issue management system as fast as possible. Has the thing been reported earlier? If not, the next step is to find out more about what happened. Can you replicate the behaviour? Does the same thing happen elsewhere in the system under test? What was the actual cause of the issue? If you did a long and complex operation during testing but can replicate the issue with a more compact set of actions, you should describe the more compact one.

Once you have an understanding of the issue, you should write a (pretty darn good) description of it. The minimum is to write down the steps you took. If you found out more about the behaviour, such as how it does not happen under similar conditions, write that down too. If the issue can be found by executing a test case, do not rely on the reader reading the test case. Write down what is relevant to causing the issue. Attached pictures are often a great way of showing what went wrong, and attached videos a great way of showing what you did to cause it. They are really fast to record, and a good video shows exactly what needs to be shown. Use them: https://www.melioratestlab.com/taking-screenshots-and-capturing-video/

Consider adding the exact timestamp of when you tested the issue. The developers might want to dig into the log files – or even better, attach the log files yourself. It is not always clear whether something is a defect. If there is any room for debate, also write down why you think the behaviour is a defect.

Besides writing a good description, you should also classify your defects. This helps the people in the project find the ones most important to them in the defect mass. Consider the following fields:

tags

Field – Motive
Priority – Not all issues are equally important at a given moment.
Severity – Severity describes the effect of the issue, which can differ from its priority. Then again, in most projects severity and priority go hand in hand, and then it is easier to use just one field.
Assignment – Who is expected to take the next step in handling the issue.
Type – Is the issue a defect, an enhancement idea, a task or something else.
Version detected – The version that was tested when the issue was found.
Environment – The environment in which the issue was found.
Application area – The application area the issue affects.
Resolution – For closed issues, the reason why the issue was closed.

 

Add the fields that are meaningful for classifying defects in your project and hide the ones that are not relevant. If you are not sure whether you benefit from a classification field, do not use the field – use tags instead. You can add any tags to issues and use them later to find the issues or to report on them. Easy.
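To make the classification concrete, here is a minimal sketch of what such an issue record could look like as a simple data structure. The field names mirror the table above but are purely illustrative – this is not Testlab's data model or API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical issue record mirroring the classification fields above.
# Optional fields are the ones you might hide or replace with tags.
@dataclass
class Issue:
    title: str
    description: str
    issue_type: str = "defect"          # defect / enhancement / task
    priority: str = "normal"            # how urgently it should be handled
    severity: Optional[str] = None      # effect of the issue; often folded into priority
    assignee: Optional[str] = None      # who takes the next step
    version_detected: Optional[str] = None
    environment: Optional[str] = None
    application_area: Optional[str] = None
    resolution: Optional[str] = None    # filled only when the issue is closed
    tags: list[str] = field(default_factory=list)

bug = Issue(
    title="Saving a report fails with an empty name",
    description="1. Open the report editor 2. Clear the name field 3. Click Save",
    priority="high",
    version_detected="2.1.0",
    tags=["reports", "regression-candidate"],
)
```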

 

Issue written – what next?

Now that you have written the report, consider: did you find the issue by running an existing test? If not, should there be a test that would catch this defect? After the issue has been corrected you will probably get to verify the fix, but if the issue seems likely to re-emerge, write a test case for it.

defectflow

Writing an issue begins its life cycle. At a bare minimum a defect has two stages – open while you work on it and closed when you have done what you wanted to do – but most of the time the project benefits from having more stages in the workflow. The benefits mainly come from easier management of large numbers of findings: it helps to know what needs to be done next, and by whom. With information about what state the issues are in, the project can spot bottlenecks more easily. Generally, the bigger the project, the more benefit there is in having more statuses. That said, if the statuses do not give you more relevant information about the project, do not collect them. Simpler is better.

Status – Motive
New – Sometimes it makes sense for someone other than the initial reporter to verify issues before passing them on for further handling. New distinguishes these from Open issues.
Open – Defects that have been found, but for which no decision has yet been made on how the situation will be fixed.
Assigned – The defect is assigned to be fixed.
Reopened – The delivered fix did not correct the problem, or the problem occurred again later. Sometimes it makes sense to distinguish reopened issues from open ones to see how common it is for problems to re-emerge.
Fixed – Issues are marked fixed when the reported problem has been corrected.
Ready for testing – After an issue has been fixed, this status can be used to mark the ones that are ready for testing.
Postponed – Distinguishes the open issues that you are not planning to work on for a while.
Closed – Once a finding has been confirmed to no longer be relevant, the issue can be closed. Typically this happens after the fix has been tested.
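To illustrate, the statuses above can be thought of as a small state machine with a set of allowed transitions. The following is a minimal sketch under assumed transition rules – one reasonable example, not Testlab's built-in workflow engine:

```python
# Hypothetical defect workflow: statuses mirror the table above, transitions
# are just one reasonable example of how a project might restrict them.
ALLOWED_TRANSITIONS = {
    "New":               {"Open", "Closed"},
    "Open":              {"Assigned", "Postponed", "Closed"},
    "Assigned":          {"Fixed", "Open"},
    "Fixed":             {"Ready for testing"},
    "Ready for testing": {"Closed", "Reopened"},
    "Reopened":          {"Assigned", "Postponed"},
    "Postponed":         {"Open", "Closed"},
    "Closed":            {"Reopened"},
}

def change_status(issue: dict, new_status: str, comment: str) -> None:
    """Move an issue to a new status, enforcing the workflow and requiring a comment."""
    current = issue["status"]
    if new_status not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move issue from {current!r} to {new_status!r}")
    issue["status"] = new_status
    issue.setdefault("comments", []).append(comment)

issue = {"title": "Saving a report fails with an empty name", "status": "New"}
change_status(issue, "Open", "Replicated on version 2.1.0, logs attached.")
change_status(issue, "Assigned", "Assigned to the reports team.")
```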

When working with defects and changing their statuses, it is important to comment on the issues when something relevant changes. Basically, if a comment adds information about the found issue, it probably should be added. The main idea is that it is later possible to come back to the discussion, so you don't need to guess why something was or wasn't done about the issue. It is especially important to write down how the issue will be fixed (when there is room for doubt), and finally how the fix was implemented so that the tester knows how to re-test the issue. If the developer can explain how the issue was fixed, it helps the tester find other issues that may have arisen when the fix was applied.

 

Digging into the defect mass

So now that you have implemented this kind of classification for working with issues, what could you learn from the statistics? First off, even with sophisticated rating and categorization, the real situation can easily hide behind the numbers. It is more important to react to each individual issue correctly than to rate and categorize issues and only react to statistics later. That said, in complex projects the statistics, or numbers, help you understand what is going on and help focus on what should be done in the project.

The two most important ways to categorize defects are priority and status. Thus a report showing open issues per priority, grouped by status, is a very good starting point for looking at the current issue situation. Most of the time you handle defects differently from other issues, so you would pick the defects for one report and the other issue types for another. Now, this kind of report might show you, for example, that there is one critical defect assigned, 3 high-priority defects open and 10 normal-priority defects fixed. The critical and high-priority defects you would probably want to go through individually to make sure, at the very least, that they get fixed as soon as they should so they do not hinder other activities; for the fixed ones you would check whether something needs to be done to get them ready for re-testing. If at some point you see some category growing, you know whom to ask questions. For example, a high number of assigned defects would indicate a bottleneck in development, and prolonged numbers in "ready for testing" a bottleneck in testing.
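As an illustration, such a report boils down to counting non-closed defects by status and priority. A minimal sketch with made-up data (not a Testlab API):

```python
from collections import Counter

# Hypothetical issue list; in practice this comes from your issue tracker.
issues = [
    {"type": "defect", "priority": "critical", "status": "Assigned"},
    {"type": "defect", "priority": "high",     "status": "Open"},
    {"type": "defect", "priority": "normal",   "status": "Fixed"},
    {"type": "enhancement", "priority": "low", "status": "Open"},
]

# Open issues per priority, grouped by status, defects only.
counts = Counter(
    (i["status"], i["priority"])
    for i in issues
    if i["type"] == "defect" and i["status"] != "Closed"
)
for (status, priority), n in sorted(counts.items()):
    print(f"{status:10} {priority:10} {n}")
```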

Another generally easy-to-use report is the trend report of open issues by status, or of issue development over time. As the report shows how many open issues there have been at any given time, you'll see the trend – whether you can close issues at the same pace you open them.
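Again purely as an illustration, the trend is just the running balance of opened versus closed issues over time. A minimal sketch with hypothetical events:

```python
from datetime import date

# Hypothetical open/close events for issues.
events = [
    (date(2014, 9, 1), "opened"), (date(2014, 9, 1), "opened"),
    (date(2014, 9, 2), "closed"), (date(2014, 9, 3), "opened"),
    (date(2014, 9, 4), "closed"), (date(2014, 9, 4), "closed"),
]

open_count = 0
for day in sorted({d for d, _ in events}):
    opened = sum(1 for d, kind in events if d == day and kind == "opened")
    closed = sum(1 for d, kind in events if d == day and kind == "closed")
    open_count += opened - closed
    print(f"{day}: {open_count} issues open")
```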

This only scratched the surface of working with defects. If you have questions or would like to point something out, feel free to comment.

Happy hunting!

Joonas



Tags for this post: best practices issues testing usage 


25.4.2014

Exploratory testing with Testlab

In this article we introduce you to recently released new features that enable a more streamlined workflow for exploratory testing.

Exploratory testing is an approach to testing where the tester or team of testers 'explores' the system under test and, during the testing, generates and documents good test cases to be run. In more academic terms, it is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.

Compared to scripted testing – where test cases and scenarios are pre-planned before execution – exploratory testing is often seen as freer and more flexible. Each of these methodologies has its own benefits and drawbacks, and in reality all testing is usually something in between the two. We won't go into methodological detail in this article, as we focus on how to do the actual test execution in an exploratory way with Testlab. We can conclude, though, that exploratory testing is particularly suitable if the requirements for the system under test are incomplete, or there is a lack of time.

 

Pre-planning exploratory test cases

As said, all testing approaches can usually be placed somewhere between a fully pre-scripted and a fully exploratory approach. It is often worth considering whether pre-planning the test cases in some way would be beneficial. If the system under test is not a total black box – meaning there is some knowledge or even specifications available – it might be wise to add so-called stubs for your test cases in the pre-planning phase. Pre-planning test case stubs might give you better insight into testing coverage, since in pre-planning you have the option to bind the test cases to requirements. We'll discuss using requirements in exploratory testing in more detail later.

For example, one approach might be to just add the test cases you think you need to cover some or all areas of the system, without the actual execution steps. The execution steps, preconditions and expected results would then be filled in an exploratory fashion during the testing. Alternatively, you might be able to plan the preconditions and expected results and just leave the actual execution steps to the testers.

Keep in mind that pre-planning test cases does not and should not prevent your testers from adding totally new test cases during the testing. Additionally, you should consider whether pre-planning might affect your testers' way of testing. Sometimes this is not desirable, so take into account the experience level of your testers and how the different pre-planning models fit into your testing approach and workflow overall.

 

Exploratory testing session

Exploratory testing is not an exact testing methodology per se. In reality, there are many testing methods, such as session-based testing or pair testing, which are exploratory in nature. As Testlab is methodology-agnostic and can be used with various testing methods, in this article we cover all of them by simply establishing that the testing must be done in a testing session. The testing method itself can be whichever you wish to use, but the actual test execution must be done in a session which optionally specifies

  • the system or target under test (such as a software version),
  • the environment in which the testing is executed (such as a production environment, a staging environment, …) and
  • the timeframe in which the testing must be executed (a starting date and/or a deadline).
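Purely as an illustration, such a session could be represented by a record like the following sketch; the fields mirror the list above and are hypothetical, not Testlab's actual data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical testing-session record: target, environment and timeframe
# are all optional, as described above.
@dataclass
class TestingSession:
    name: str
    target: Optional[str] = None        # e.g. software version under test
    environment: Optional[str] = None   # e.g. "staging"
    starts: Optional[date] = None
    deadline: Optional[date] = None

session = TestingSession(
    name="Exploratory session: reporting module",
    target="2.1.0-rc1",
    environment="staging",
    deadline=date(2014, 5, 2),
)
```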

To execute test cases in a testing session, add a new test run to your project in Testlab. Go to the Test runs view and click the Add test run… button to add a new blank testing session.

New testing session as a test run

When the needed fields are set, you have the option to just Save the test run for later execution or to Save and Start… the added test run immediately. The test run is added to the project as blank, meaning it does not have any test cases bound to it yet. We want to start testing right away, so we click the Save and Start… button.

 

Executing tests

The set of functionality available while executing is a good match for an exploratory testing session. Test execution in Testlab enables you to

  • pick a test case for execution,
  • record testing results for test cases and their steps,
  • add issues such as found defects,
  • add comments to test cases and for executed test case steps,
  • add existing test cases to be executed to the run,
  • remove test cases from the run,
  • re-order test cases in the run to a desired order,
  • add new test cases to the test case hierarchy and pick them for execution,
  • edit existing test cases and their steps and
  • cut and copy test cases around in your test case hierarchy.

The actual user interface while executing looks like this:

exploratory_runwindow

The left-hand side of the window has the same test case hierarchy tree that is used to manipulate the hierarchy of test cases in the test planning view. It enables you to add new categories and test cases and move them around in your hierarchy. The hierarchy tree may be hidden – you can show (and hide) it by clicking the resizing bar of the tree panel. The top right shows you the basic details of the test run you are executing, and the list below it shows the test cases picked for execution in this testing session.

The panel below the list of test cases holds the details of a single test case. When no test case is selected for execution, the panel disables itself (as shown in the shot above) and lists all issues from the project. This is especially handy when testing for regression or re-testing – it makes it easy to reopen closed findings and retest resolved issues.

The bottom toolbar of buttons enables you to edit the current test case, add an issue and record results for test cases. The "Finish", "Abort" and "Stop" buttons should be used to end the current testing session. Keep in mind that finishing, aborting and stopping testing each have their own meaning, which we will come to later in this article.

 

Adding and editing test cases during a session

When exploring, it is essential to be able to document the steps for later execution if an issue is found. This way, scripted regression testing is easier later on. Also, if your testing approach aims to document test cases for later use by exploring the system, you must be able to easily add them during execution.

If you have existing test cases which you would like to pick for execution during a session, you can drag them from the test hierarchy tree onto the list of test cases. Additionally, you can add new test cases by selecting New > Test case on the test category you want to add the new test case to. Picking test cases and adding new ones via inline editing is demonstrated in the following video:

 

 

 

Editing existing test cases is similar to adding them. You just press the Edit button in the bottom bar to switch the view to editing mode. The edits are made in an identical fashion to adding.

 

Ending the session

When you have executed the tests you want, you have three options:

Finish, abort or stop

It is important to understand the difference, which comes from the fact that each executed session is always part of a test run. If you wish to continue executing tests in the same test run at a later time, you must Stop the session. This way the test run can be continued later on normally.

If you conclude that the test run is complete and you do not wish to continue it anymore, you should Finish it. When you do so, the test run is marked as finished and no testing sessions can be started on it anymore. It should be noted, though, that if you later discard a result of a test case from the run, the run is reset back to the Started state and is again executable.

Aborted test runs are considered discarded and cannot be continued later on. So, if for some reason you think that the test run is no longer valid and should be discarded, you can press the Abort run button.

 

Asset workflows and user roles in exploratory testing

Requirements, test cases and issues have an asset workflow tied to them via the project's workflow setting. This means that each asset has states it can be in (In design, Ready for review, Ready, …) and actions which can be executed on it (Approve, Reject, …). In exploratory testing, having a complex workflow for the project's test cases is usually not desirable. For example, having a workflow which requires review of test cases by another party makes no sense when testers should be able to add, edit and execute test cases inline during testing.

That said, if you are using the default workflows, it is recommended to use the "No review" workflow for your projects.

 

No review workflow

 

If you execute test cases which have not yet been approved as ready, Testlab tries to automatically approve them on behalf of the user. This means that if the test case's workflow allows it (and the user has the needed permissions), the test case is automatically marked as approved during the session. This way, using more complex workflows in a project with an exploratory testing approach might work if the transitions between the test case's states are suitable. That said, as the testers must be able to add and edit test cases during execution, a review-based workflow is of little use here.

The asset workflows' actions are also tied to user roles. For the testers to be able to document test cases during execution, the tester users should also be granted the TESTDESIGNER role. This ensures that the users have the permissions needed to add and edit the test cases they need.

 

Using requirements in an exploratory testing approach

When test cases are designed in exploratory testing sessions, they are added without any links to requirements. In Testlab, testing coverage is reported against the system's requirements and, in testing parlance, a test case verifies a requirement it is linked to when the test case is executed and passes.

It is often recommended to bind the added test cases to requirements at a later stage. This way you can easily report what has actually been covered by testing. It should be noted that the requirements we talk about here don't have to be fully documented business requirements for this to work. For example, if you would just like to know which parts of the system have been covered, you might add the system's parts as the project's requirements and bind the appropriate test cases to them. This way a glance at Testlab's coverage view should give you insight into which parts of the system have been tested successfully.

Better yet, if you did pre-plan your test cases in some form (see above), you might consider adding a requirement hierarchy too and linking your test case stubs to these requirements. This would give you insight into your testing coverage straight away when the testing starts.
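To make the idea concrete, here is a simplified sketch of how coverage can be derived from requirement–test case links under the rule stated above; real tools, Testlab included, may weigh results differently, and the data here is made up:

```python
# A requirement counts as verified when at least one linked test case has been
# executed and passed, and none of its linked test cases have failed.
requirements = ["Login", "Reports", "User management"]
results = {  # requirement -> latest results of its linked test cases
    "Login":   ["passed", "passed"],
    "Reports": ["passed", "failed"],
    # "User management" has no linked test cases yet
}

verified = [
    req for req in requirements
    if results.get(req)
    and any(r == "passed" for r in results[req])
    and not any(r == "failed" for r in results[req])
]
print(f"{len(verified)}/{len(requirements)} requirements verified: {verified}")
```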

 

Summary

In this article we talked about the new test execution features of Testlab that enable you to execute your tests using exploratory testing approaches. We went through the advantages of pre-planning some of your testing and of using requirements in your exploratory testing, talked about testing sessions, and noted how Testlab's workflows and user roles should be used in exploratory testing.



Tags for this post: example exploratory features testing usage 


 
 