Many small and mid-sized software companies find it difficult to measure the performance of their employees. A test engineer's performance can easily be misjudged (over-rated or under-rated), which affects the growth of the organization as well as of the test leads, QA engineers, and testers.
Here are some common metrics used by software testers and managers to analyze productivity. Note that not all of these metrics apply to every project; some may add little value in certain contexts. I hope you find them useful.
Software Testing Metrics
1. Cost of finding a defect in testing (CFDT)
CFDT = Total effort spent on testing / No. of defects found in testing
Note: total effort spent on testing includes the time to create, review, rework, and execute the test cases and to record defects; it should not include the time spent fixing those defects.
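For example, if a team spends 200 hours on testing overall and finds 40 defects, CFDT = 200 / 40 = 5 hours per defect (the numbers here are illustrative).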
2. Test Case Adequacy:
This compares the number of test cases actually created with the number estimated, at the end of the test case preparation phase. It is calculated as
No. of actual test cases / No. of test cases estimated
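For example, if 450 test cases were actually created against an estimate of 500, adequacy = 450 / 500 = 0.9, i.e. 90% of the estimated test cases were produced.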
3. Test Case Effectiveness:
This measures the effectiveness of the test cases as the percentage of all detected defects that were found by executing the test cases (as opposed to being found outside them). It is calculated as
(No. of defects detected using test cases * 100) / Total no. of defects detected
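For example, if 80 out of 100 total defects were detected by executing the test cases, effectiveness = 80 * 100 / 100 = 80%; the other 20% were found outside the test cases (e.g. through ad-hoc or exploratory testing).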
4. Effort Variance:
Effort Variance can be calculated as
{(Actual Efforts - Estimated Efforts) / Estimated Efforts} * 100
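For example, if a phase estimated at 100 person-hours actually takes 120, effort variance = {(120 - 100) / 100} * 100 = 20%; a positive value indicates an overrun and a negative value an under-run.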
5. Schedule Variance:
It can be calculated as
{(Actual Duration - Estimated Duration) / Estimated Duration} * 100
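For example, if a test phase planned for 10 days actually takes 12, schedule variance = {(12 - 10) / 10} * 100 = 20%.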
6. Schedule Slippage:
Slippage is the amount of time a task has been delayed from its original baseline schedule, i.e. the difference between the scheduled start or finish date of a task and its baseline start or finish date. It is calculated as
{(Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date)} * 100
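For example, if a task was planned to run from day 0 to day 10 (a 10-day baseline) but actually ends on day 13, slippage = (13 - 10) / 10 * 100 = 30%.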
7. Rework Effort Ratio:
(Actual rework effort spent in that phase / Total actual effort spent in that phase) * 100
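For example, if 15 of the 150 hours spent in the test design phase went into reworking test cases after review comments, rework effort ratio = (15 / 150) * 100 = 10%.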
8. Review Effort Ratio:
(Actual review effort spent in that phase / Total actual effort spent in that phase) * 100
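Continuing the same example, if 12 of those 150 hours were spent on reviews, review effort ratio = (12 / 150) * 100 = 8%.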
9. Requirements Stability Index:
1 - (Total no. of changes / No. of initial requirements)
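For example, with 200 initial requirements and 30 changed requirements, RSI = 1 - (30 / 200) = 0.85; the closer the index is to 1, the more stable the requirements.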
10. Requirements Creep:
(Total no. of requirements added / No. of initial requirements) * 100
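For example, if 20 new requirements were added to an initial set of 200, requirements creep = (20 / 200) * 100 = 10%.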
11. Weighted Defect Density:
WDD = (5 * Count of fatal defects) + (3 * Count of major defects) + (1 * Count of minor defects)
Note: the weights 5, 3, and 1 correspond to the severities Fatal, Major, and Minor respectively.
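For example, with 2 fatal, 5 major, and 10 minor defects, WDD = (5 * 2) + (3 * 5) + (1 * 10) = 35.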
12) Test Coverage = Number of units (KLOC/FP) tested / Total size of the system (KLOC = thousand lines of code, FP = function points)
13) Number of tests per unit size = Number of test cases per KLOC/FP
14) Acceptance criteria tested = Number of acceptance criteria tested / Total number of acceptance criteria
15) Defects per size = Defects detected / system size
16) Test cost (in %) = Cost of testing / Total cost * 100
17) Cost to locate defect = Cost of testing / the number of defects located
18) Achieving Budget = Actual cost of testing / Budgeted cost of testing
19) Defects detected in testing = Defects detected in testing / total system defects
20) Defects detected in production = Defects detected in production/system size
21) Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100
22) Effectiveness of testing to business = Loss due to problems / total resources processed by the system.
23) System complaints = Number of third party complaints / number of transactions processed
24) Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10
25) Source Code Analysis = Number of source code statements changed / total number of tests.
26) Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation
27) Test Execution Productivity = No. of test cycles executed / Actual effort for testing
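Since all of the above are simple ratios, they are easy to script once the raw counts are exported from your test management or defect tracking tool. Below is a minimal Python sketch that computes a few of them; all the input values are made-up sample numbers, not data from any real project:

def percentage(numerator, denominator):
    # Return numerator/denominator as a percentage, guarding against division by zero.
    return (numerator / denominator) * 100 if denominator else 0.0

# --- Hypothetical sample inputs; replace with your own project's data ---
testing_effort_hours = 200          # create + review + rework + execute + log defects
defects_in_testing = 40
defects_after_delivery = 10         # acceptance defects found after delivery
actual_effort, estimated_effort = 120, 100
fatal, major, minor = 2, 5, 10      # defect counts by severity

# Metric 1: cost of finding a defect in testing (hours per defect)
cfdt = testing_effort_hours / defects_in_testing

# Metric 4: effort variance (%)
effort_variance = percentage(actual_effort - estimated_effort, estimated_effort)

# Metric 11: weighted defect density (weights: fatal=5, major=3, minor=1)
wdd = 5 * fatal + 3 * major + 1 * minor

# Metric 21: quality of testing (%)
quality_of_testing = percentage(
    defects_in_testing, defects_in_testing + defects_after_delivery
)

print(f"CFDT:               {cfdt:.1f} hours/defect")
print(f"Effort variance:    {effort_variance:.1f}%")
print(f"Weighted density:   {wdd}")
print(f"Quality of testing: {quality_of_testing:.1f}%")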