I am a Defect Prevention team member in our CMMI Maturity Level 5 company. We are trying to plot the Defect Removal Efficiency (DRE) metric in our projects. I have a few questions:
- Our DRE is based on phase-level defect injection and detection. If we go strictly by the definition of the testing/QA phase, defects should not be injected during testing, since it is a detection activity. If that assumption is wrong, some examples would help me understand how.
- Can a common DRE format be used for all kinds of projects, e.g. sustaining engineering, maintenance, development, and pure testing? Specifically, what do we do for testing or pure QA projects?
- DRE is scoped to defects that leak past QA and reach the customer. How can we address defects in other lifecycle phases, i.e. deployment and post-production activities?
These are very interesting questions. Here are my two cents' worth:
- Your assumption that the testing/QA phase is not a source of defects may or may not be a good one. Test cases can be a source of defects just as the product can. A test case that does not find any defects may in fact itself be defective. The purpose of a test case is to find defects in the product, so a defective test case can let undetected defects slip through or produce erroneous test results. On the other hand, testing can sometimes identify defects in the product that do not actually exist. I encountered this situation years ago in a small software development company when they hired a new tester. The company had no documented procedures or testing criteria, so the new tester did the best job he could under the circumstances and based his testing on the published User’s Manual. He proceeded to find tons of defects in the product. That got management’s attention until they realized that all of these “new” defects had been previously addressed. It turned out that the tester was unwittingly using an obsolete version of the User’s Manual, and that mistake invalidated all of his test results. Either of these situations would contaminate your DRE calculations. Therefore, I would be very careful about assuming that the testing/QA phase is not a source of defects, unless you have thoroughly peer-reviewed each test case and validated each test result to ensure that what I have described has not happened.
- In the context of the CMMI, QA usually means for most of us Lead Appraisers PPQA – conducting process audits and work product audits, not testing. So I am not clear on the distinction you are trying to make between testing and "pure QA". It appears that in your organization they may be two aspects of the same activity. Personally, I think that you could use a common DRE format across all types of projects. However, it doesn’t make sense to mix the data from the various projects, so you will have to be very careful to keep the data segregated; otherwise you won’t be able to draw the proper conclusions. For example, the DRE for new development will be different from that for sustaining engineering, different from that for a testing project, and so on.
- I think that you answered your own question in your statement of the scope. Just change the definition of the scope for your DRE. You could define the DRE scope on a phase-by-phase basis, which would let you measure it for deployment and post-production activities. You could also define a DRE for the entire project lifecycle.
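To make the phase-by-phase idea concrete, here is a minimal Python sketch of per-phase DRE under the usual definition (defects removed in a phase divided by all defects present in that phase, i.e. removed there plus escaped to later phases). The phase names and defect records are hypothetical, invented purely for illustration; your organization's lifecycle and defect log will differ.

```python
# Hypothetical lifecycle phases, in order (illustrative only).
PHASES = ["requirements", "design", "coding", "testing", "deployment", "post-production"]

# Made-up defect log: (injected_phase, detected_phase) per defect.
defects = [
    ("requirements", "requirements"),
    ("requirements", "testing"),
    ("design", "design"),
    ("design", "coding"),
    ("coding", "testing"),
    ("coding", "deployment"),
    ("testing", "post-production"),
]

def phase_dre(phase: str) -> float:
    """DRE for one phase: defects removed in the phase divided by all defects
    present in it (removed there plus those that escaped to later phases)."""
    idx = PHASES.index(phase)
    removed = sum(1 for inj, det in defects if det == phase)
    # A defect "escapes" this phase if it was injected at or before it
    # but only detected in a later phase.
    escaped = sum(1 for inj, det in defects
                  if PHASES.index(inj) <= idx < PHASES.index(det))
    total = removed + escaped
    return removed / total if total else 1.0

for p in PHASES:
    print(f"{p}: DRE = {phase_dre(p):.0%}")
```

With the sample data, the testing phase removes 2 defects while 2 escape past it, giving a testing DRE of 50%. The same function scores deployment and post-production, which addresses the scoping concern: nothing in the definition restricts it to pre-release phases.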