Showing posts with label traceability matrix. Show all posts

Tuesday, August 4, 2009

Requirements Traceability, To What Level?

It certainly makes sense to include relations between requirements and source code in the requirements traceability matrix. If requirements change, we have a direct view of the impact at the source level. But can this be managed and maintained, even with a tool? Is it worthwhile to put a huge effort into maintaining relations between requirements and the source code? Isn't it sufficient to define these relations at the level of design elements and user acceptance test cases? In that case, we have a means to verify that each requirement is covered by design and test. I believe this strongly reduces the risk of uncovered requirements in the final delivery, which is one of the main purposes of requirements traceability.

The answer is: you do whatever is necessary to meet your business goals and objectives. It depends on the criticality of the product or service you are delivering. If your product is highly complex and someone could lose their life over a missed requirement, then it is necessary to put a huge amount of effort into requirements traceability. Just consider the Space Shuttle program and the sheer number of requirements involved; the Space Shuttle software is as close to zero defects as you will ever find. At the other end of the spectrum, you would be justified in limiting the amount of effort spent on traceability.

Monday, July 27, 2009

Requirements Traceability Matrix Question

Is the Requirements Traceability Matrix (RTM) a configuration item (CI) or not? Do we need to maintain its version history?

This is a question you really have to answer for yourself by first addressing some more basic questions. How critical is the RTM to the success of your project, organization, and business? What do you use the RTM for? Tracing requirements? Verifying and validating requirements? Regression testing? Of what value is the change history of the RTM to you, the project, and the organization? If you identify strong business needs for the RTM, then the answer to the original question will become obvious.

From my perspective, the RTM is not a CI per se, as it is a tool for managing your requirements and not necessarily a product component. But it is a very important, if not essential, tool, and maintaining its change history could be necessary for project and organizational success.

Tuesday, October 28, 2008

Test Case Coverage & Review Effectiveness

Can anyone help me with the definitions of Test Case Coverage and Review Effectiveness? How is Review Effectiveness calculated for work products like an SRS, SDD & Test Spec?

To me, the term Test Case Coverage means a measure of how many of the product requirements are being tested by the defined test cases. It is the testers' job to define their test cases based on the requirements, and the goal should be 100% coverage or close to it. Please bear in mind that 100% coverage is not the same as exhaustive testing: exhaustive testing means testing every path through the software using all possible values.

The Requirements Traceability Matrix is what you would use to determine Test Case Coverage. You should be able to map every test case to the requirements and vice versa. Once you have done that, you will be able to determine whether one or more requirements are not covered by any test case. So a simple ratio of the number of requirements not being tested to the total number of requirements measures the coverage gap; subtracting that ratio from one gives you a Test Case Coverage measure.

Review Effectiveness is a bit harder to define. When you hold a review, say for the SRS, you will have a total number of defects discovered, and you also know the number of pages in the document. So, to first order, you could divide the total number of defects by the number of pages and derive a defect density measure for the document.

But that number by itself doesn't tell you a whole lot. You need a good understanding of the expected number of defects per page coming out of a review, and that expectation comes from your historical data. Even then, this still doesn't yield an effectiveness measure: you could be finding lots of minor defects in a review while the major ones slip through.

So you also have to look for defects that were missed by the review and surfaced in downstream activities such as testing and other reviews. You could additionally group defects into major and minor categories.

You should be asking several questions that would lead you to an effectiveness measure:
1. In what review did you find the defect?
2. In what activity was the defect inserted?
3. What review should have caught the defect but didn’t?
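Putting those questions into numbers, one plausible sketch is below. The function names and the sample figures (24 defects in a 40-page SRS review, 6 escapes found downstream) are hypothetical, purely to show the arithmetic:

```python
def defect_density(defects_found, pages):
    """Defects per page for a reviewed document (first-order measure)."""
    return defects_found / pages

def review_effectiveness(found_in_review, escaped_downstream):
    """Percentage of the defects attributable to a work product that the
    review actually caught, once downstream escapes are counted."""
    total = found_in_review + escaped_downstream
    return found_in_review / total * 100 if total else 0.0

# Example: an SRS review finds 24 defects across 40 pages; 6 more
# SRS defects later escape into testing.
print(f"{defect_density(24, 40):.2f} defects/page")      # 0.60
print(f"{review_effectiveness(24, 6):.0f}% effective")   # 80%
```

Comparing the density against your historical per-page baseline, and the effectiveness against earlier reviews, is what turns these raw numbers into something actionable.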