You are probably familiar with Parkinson’s Law, either by name or from experience. It states: “Work expands so as to fill the time available for its completion.” We have all seen this play out on a project at some point, and without proper planning and tracking, the available time tends to expand right along with the work.
IT projects are particularly prone to this bloat of work and time. The Standish Group, famous for The CHAOS Chronicles, has tracked the success rate of IT projects over several decades and has found the following ratios of successful, failed, and challenged (completed, but not without difficulty and scope issues) projects:
Though challenged projects remain the largest category, the ratio of successful to failed projects has clearly improved over the years. One important reason is the progress IT project managers have made in accurately estimating the effort required, since that is the first stage of building a robust project plan.
In software projects, and especially in testing, effort estimation is particularly tricky. The chief reasons are:
- The number of test cases executed always exceeds the number planned: failed test cases must be re-executed, and additional regression testing of ‘in/around’ scenario test cases is needed to close problem reports.
- Data from similar projects or previous releases of the same project is not always dependable: as a product matures, the failure rate falls and the complexity of the remaining bugs rises. Even though the failure rate, and hence the count of re-executions, may be estimated fairly accurately from prior experience, there is no reliable way to estimate how long a complex bug fix will take before the code comes back for testing. Add to that the menace of irreproducible and camouflaged bugs that can only be found and re-tested in scenarios running a week or longer.
- Testing effort cannot preempt changes in requirements and design: the number of test cases depends on code coverage, the configurations and variations to be covered, and the functionality being delivered. Testing effort also differs for new functionality, modified functionality, and unchanged code. If functionality is added or modified, or new configurations are identified mid-project, the entire test plan is bound to be shaken up.
- Manpower requirements vary with the phase and complexity of testing: some testing needs dedicated resources, while other testing can be handled by less specialized resources shared across projects. Testing new features with complex setups demands more dedicated management and technical expertise, and monitoring may not be limited to ‘working hours’ (e.g., performance test areas such as stability, load, and comparison testing). Accurately mapping resources to projects by complexity, size, and duration, and then actually allocating those mapped resources, is not always possible.
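The first two points, re-execution of failed cases and a failure rate estimated from prior releases, can be combined into a rough execution-count estimate. The sketch below is purely illustrative: the function name, the regression factor for ‘in/around’ cases, and the cycle cap are assumptions of ours, not figures from the article.

```python
def estimate_executed_cases(planned, failure_rate, regression_factor=0.2, max_cycles=5):
    """Estimate total test case executions, assuming each failed case is
    re-executed along with a fraction of 'in/around' regression cases.

    All parameters are illustrative assumptions, not values from the article.
    """
    executed = float(planned)
    failing = planned * failure_rate   # expected failures in the first pass
    cycles = 0
    while failing >= 1 and cycles < max_cycles:
        # re-run the failed cases plus nearby regression cases
        rerun = failing * (1 + regression_factor)
        executed += rerun
        failing = rerun * failure_rate  # some re-runs fail again
        cycles += 1
    return round(executed)
```

With, say, 100 planned cases and a 10% failure rate, the estimate comes out above 100, which is the article’s point: executed cases always exceed planned ones.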
One method that successfully tackles these challenges considers both the test case authoring and execution phases across all test stages. The effort estimation formula below illustrates this:
(Fixed Effort = sum of the management and execution effort of dedicated testing resources)
From the above it follows that:
Total Effort = Sum of Test Efforts calculated for Each Test Stage (Functional, System, Integration, Regression, and Release Testing)
This modular method of effort estimation simplifies re-planning when it is needed. By listing all the stages and activities, it prevents under- or over-billing of effort and reduces misunderstandings about the scope of work. Further, organizations that divide their test effort into more or fewer test stages than those considered above can apply the method just as easily, as long as they can estimate the values of the variables in the equation for each test stage.
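The modular, per-stage summation can be sketched as follows. The stage names come from the article; the breakdown into authoring, execution, and fixed effort, and all the numbers, are illustrative assumptions only.

```python
def stage_effort(authoring_hours, execution_hours, fixed_hours):
    """Effort for one test stage: authoring + execution + fixed
    (management and dedicated-resource) effort, in person-hours."""
    return authoring_hours + execution_hours + fixed_hours

def total_effort(stage_estimates):
    """Total effort is the sum over all test stages, which keeps
    re-planning local: revise one stage, re-sum the rest unchanged."""
    return sum(stage_effort(*parts) for parts in stage_estimates.values())

# (authoring, execution, fixed) person-hours per stage -- made-up figures
estimates = {
    "Functional":  (40, 80, 16),
    "System":      (24, 60, 16),
    "Integration": (16, 40, 8),
    "Regression":  (8, 50, 8),
    "Release":     (8, 24, 8),
}
```

If requirements change mid-project, only the affected stage’s tuple is revised and the total is recomputed, which is what makes re-planning with this method cheap.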
Irrespective of the estimation method used, however, it is important to standardize testing effort calculations across test stages and, as much as possible, across projects. This provides a common approach to effort estimation and allows lessons from past projects to be applied. Sharing a common effort estimation framework with the customer brings better clarity on expectations and deliverables, and leads to faster agreement on the scope of work.
Although clichéd, there is much merit in the age-old dictum that if you fail to plan, you plan to fail, and organizations that repeatedly deliver projects on time and within budget pay it full heed. We hope the guidelines provided here help improve your ability to estimate the testing phases of your own projects.
About the Authors:
Aditya Lal is an Assistant Manager in Marketing at Aricent, and Ajay Garg is Director of Engineering for Aricent Product Engineering Solutions.