Let us consider a typical test report, the kind presented in a meeting for assessing the progress of testing, attended by key stakeholders and team members:
No. of test cases prepared: 1230
No. of test cases executed: 345
No. of test cases failed: 50
No. of bugs reported: 59
No. of requirements analyzed: 45
No. of requirements updated: 50
No. of transactions covered in performance testing: 24
No. of use cases tested: 233

Productivity:
No. of test cases prepared per person per hour: 5
No. of test cases executed per person per hour: 15
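As an aside, here is a minimal sketch of the arithmetic behind such "per person per hour" figures. The team sizes and hours below are hypothetical (the report does not state them), chosen only so the division works out to the reported rates:

```python
# Minimal sketch of the arithmetic behind "per person per hour" rates.
# The effort figures below are hypothetical, not taken from the report.

test_cases_prepared = 1230
test_cases_executed = 345

prep_people, prep_hours = 6, 41  # hypothetical: 6 testers x 41 hours = 246 person-hours
exec_people, exec_hours = 1, 23  # hypothetical: 1 tester  x 23 hours =  23 person-hours

prep_rate = test_cases_prepared / (prep_people * prep_hours)  # 1230 / 246 = 5.0
exec_rate = test_cases_executed / (exec_people * exec_hours)  #  345 / 23  = 15.0

print(f"Prepared per person per hour: {prep_rate:g}")  # 5
print(f"Executed per person per hour: {exec_rate:g}")  # 15
```

Notice how much the headline rate depends on effort figures that the report itself never shows.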
What do you see here?
Managers love numbers. Numbers give (seemingly) objective information; numbers quantify observations and help in making decisions (do they?). Numbers simplify things, and one can see trends in numbers.
You might have heard one or more of the above statements (mostly in review or progress meetings, right?). When it comes to testing, followers of the Factory approach to testing are comfortable simply counting things like test cases, requirements, use cases, bugs, and passed and failed test cases, and making decisions about the "quality" of the product from those counts.
Why is counting (without qualification) a bad idea in testing? What are the disadvantages of such a practice? Let us briefly take a look at a few things that are famously and frequently *counted*.
Count requirements (as in, "there are 350 requirements for this project")
Can we count them at all?
How do we count? Do we have a bulleted list of requirements? If not, what do we do?
How do we translate the given requirements into a "bulleted list"?
How do we account for information loss and interpretation errors while counting requirements?
Count test cases (as in, "the test team has written (or designed, or prepared) 450 test cases in the last week")
Test cases are test ideas. A test case is only a vague, incomplete, and shallow representation of the actual intellectual engagement that happens in a tester's mind at the time of test execution (Michael Bolton mentioned this in his recent Rapid Software Testing workshop at Hyderabad).
How can we count ideas?
Test cases can be counted in multiple ways, more often than not in ways that are "manipulative", so the count is likely to be misleading (see the sketch after this list).
When used for knowing or assessing testing progress, such counts are likely to mislead management.
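To see how the same testing can yield very different counts, here is a minimal sketch in Python with pytest (the free_shipping function and the data values are invented for illustration). The same test idea can honestly be reported as 1 test case or as 6, depending only on how it is written down:

```python
import pytest

def free_shipping(order_total):
    """Toy function under test (hypothetical): free shipping over 100."""
    return order_total > 100

# Counted one way: a single data-driven test -> "1 test case".
@pytest.mark.parametrize("total,expected", [
    (0, False), (99, False), (100, False),
    (101, True), (250, True), (-5, False),
])
def test_free_shipping_boundary(total, expected):
    assert free_shipping(total) == expected

# Counted another way: the same six checks written out long-hand
# -> "6 test cases". Same test idea, same coverage, six times the count.
def test_free_shipping_at_zero():     assert not free_shipping(0)
def test_free_shipping_just_below():  assert not free_shipping(99)
def test_free_shipping_at_boundary(): assert not free_shipping(100)
def test_free_shipping_just_above():  assert free_shipping(101)
def test_free_shipping_well_above():  assert free_shipping(250)
def test_free_shipping_negative():    assert not free_shipping(-5)
```

A team under pressure to show progress need only split (or merge) cases like this; the count changes, but the actual testing does not.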
Count bugs (as in, "we have discovered 45 bugs in this cycle of testing so far")
The story or background of a bug is more interesting and valuable than the count of bugs (this again I owe to Michael Bolton: "Tell me the story of this sev 1 bug" would be a more informative and revealing question than "How many sev 1 bugs have we uncovered so far?").
When tied to a tester's effectiveness, bug counts are likely to cause testers to manipulate the numbers (as in, "Tester 1 is a great tester because he always logs the maximum number of bugs").
Let us face a fact of life in software testing: there are certain things in testing that cannot be counted the way we count the number of cars in a parking lot, the number of patients who visited a dentist's clinic, or the number of students in a school.
Artifacts like test cases, requirements, and bugs are not countable things, and any attempt to count them can only lead to manipulation and ill-informed decisions.
Wait: are there any things at all in testing that we can count without losing the effectiveness and usefulness of the information that the count might reveal?