Automated testing takes advantage of computers’ ability to reliably conduct large numbers of simple tasks. Whereas manual testing requires humans to assess software functionalities and gather information from a user’s perspective, automated testing can go through large numbers of features and return a simple result on whether the function performed as expected or not.
The purpose of automated testing is to gather relevant information about the product at hand. This helps developers understand how the software is performing, what issues arise, and what improvements they can make.
However, the results from current automated tests offer limited information. Upon completion, testers and developers need to fall back on more rudimentary ways of extracting valuable information from the results. Automated testing falls short not only in providing valuable insights from the tests; the volume of unfiltered results it returns is also hard to interpret and communicate.
How can we make automated testing better? In addition to programming computers to work hard, we can also program them to work smart. To achieve this, we require an adequate toolset and a strategic approach to creating test plans.
What is automated testing and what is it good for?
As the name implies, automated testing has two components:
- Testing, as Michael Bolton describes it in his renowned piece ‘Testing vs Checking’:
“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.”
- Automation is a process in which a task is completed with minimal to no human supervision. This is achieved by developers and engineers writing a set of instructions for the software to follow.
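Put together, an automated test is simply a scripted check that runs without supervision and reports a pass or fail result. As a minimal sketch (the `slugify` function and its expected output are hypothetical examples, not taken from any specific product):

```python
# Minimal sketch of an automated check: a hypothetical function
# under test, and a check that runs with no human supervision
# and reduces the outcome to pass/fail.

def slugify(title: str) -> str:
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def check_slugify() -> str:
    """Automated check: compare actual output to expected output."""
    expected = "testing-vs-checking"
    actual = slugify("Testing vs Checking")
    return "pass" if actual == expected else "fail"

print(check_slugify())  # a test runner would collect this result automatically
```

A real suite would contain hundreds of such checks, each returning the same simple pass/fail signal described above.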
A typical Test Process Management Lifecycle, as detailed in the testing solution integrated by Oracle, consists of:
- defining test requirements,
- developing manual & automated test cases,
- documenting and tracking defects and
- creating reports.
Steps three and four from the list above are what interest us in terms of extracting the information we need. The knowledge extracted from the results is the end goal of testing, and it can be used to implement changes and improvements to the original product.
Documenting and tracking defects in a rigorous manner is crucial for accurately identifying the problems and issues brought up by the tests. Reporting must be done in such a way that results are presented in an intuitive and informative view with an adequate level of detail. If the defect tracking and report creation steps are not handled correctly, the results will be hard to interpret, especially by non-developer stakeholders such as project managers.
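One common way to keep defect tracking rigorous is to store each result as a structured record and derive the report from those records, so the summary stays accurate and readable for non-developer stakeholders. A small sketch, with illustrative field and test names that are not from any specific tool:

```python
# Sketch: test results as structured records, plus a one-line
# summary a project manager can read at a glance. The field
# names ("test", "status") and test names are illustrative.
from collections import Counter

results = [
    {"test": "login_with_valid_credentials", "status": "pass"},
    {"test": "login_with_expired_password", "status": "fail"},
    {"test": "logout_clears_session", "status": "pass"},
]

counts = Counter(r["status"] for r in results)
defects = [r["test"] for r in results if r["status"] == "fail"]

print(f"{counts['pass']} passed, {counts['fail']} failed")
print("open defects:", defects)
```

Because the defect list is derived directly from the recorded results, the report can never drift out of sync with what the tests actually found.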
So, is automated testing a silver bullet for learning and understanding more about the product? Kristen Aerbersold suggests that reporting, cleaning test data, and setting up and tearing down environments all need to be automated as well. Otherwise, each of these will become its own bottleneck in the new testing process and render the entire plan useless.
Improving automated testing
In a comprehensive report published by QA Intelligence in 2017, respondents were asked about what changes they are implementing in their testing processes in order to improve them. A few of them stood out, such as:
“Metrics, specifically in looking for insightful trends rather than seeking percentages and translating that new approach up to management.”
“Bridge the communication gap between the diverse function types”
“More focus on planning. Religiously used Impact matrices to make sure regression is not an issue.”
These insights point to some of the pain points that the people in charge of testing experience, such as gathering useful information, ensuring better communication across multiple domains, and adding clarity about which parts of the system are impacted.
In a perfect world, automated testing would include easy results interpretation and impact analysis, so that the following points would be true:
- All stakeholders would be able to interpret the data easily, without having to ask developers for clarifications.
- Following the automated tests, each impacted part of the tested system would be easily identifiable.
- The resulting test data would be classified in terms of relevancy and importance.
As mentioned earlier, to achieve this ideal scenario, the testing process would need a specific toolset and an improvement in test planning.
To achieve a clear set of results following automated testing, testers must create a well-defined plan with a specific set of functions that need to undergo testing.
Once a well-defined test plan is in place, the developers or testers must have access to a toolset that allows them to scope and map the results of the automated tests to their original plan. All results should be stored against an identifier whose status is set by the pass or fail status of the tests inside.
Meliora Testlab offers such a solution with a simple mapping process. A set of automated testing results is mapped to an initial set of requirements or specifications. Users perform the mapping by assigning the package identifier of the automated tests to a test case.
The test case’s status is determined by the results of the testing. If all tests in the package have passed, the test case is also marked as ‘pass’. However, if even one test has failed, the test case’s status will be set to ‘fail’. This draws the tester’s attention to inspect the failing function further and determine the root of the problem.
The mapping process in Meliora Testlab allows users to choose the level of detail that is mapped from the automated function testing to the test plan.
Making the most of the freed-up resources
Meliora’s Testlab mapping feature improves the efficiency of the testing process by producing easily interpretable and understandable results. Rather than going through all the results obtained from the testing and drawing conclusions, Meliora Testlab brings to your attention the functions which are passing or failing in your system, so developers can spend their time fixing issues and improving the product rather than analyzing test results.
Additionally, the increase in visibility and accessible information will bring all involved stakeholders to the same level of knowledge. Project Managers will no longer need to rely on second-hand information offered by the developers. They will understand the results themselves.
We must always keep in mind that testing is not about ticking the boxes or satisfying a mandatory requirement sent by a senior figure in the organization. Testing is about building a better product and ensuring a superior customer experience. By gathering higher quality information and lowering the work required at the same time, you will be able to invest your time into more value-focused activities.