Sunday, September 11, 2011

Is This a Valid Performance Model?

Is a reliability growth model considered to be a valid PPM in the CMMI?


Asked out of context, without reference to what you actually do in your organization, this question does not have much meaning.  The correct answer is both yes and no.  Please remember what High Maturity is all about.  You begin by setting your business goals and objectives and use them to derive your Quality and Process Performance Objectives (QPPOs).  These QPPOs in turn lead you to the proper measures, process performance baselines (PPBs), and process performance models (PPMs) that your organization needs to quantitatively and statistically manage its work.  So, if reliability growth is a critical process or sub-process, and you have sufficient data to analyze to determine that you have a stable and capable process, then a reliability growth model might be considered a valid PPM.
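To make the "stable and capable" test concrete, here is a minimal sketch, assuming Python and an illustrative defect-density measure, of an XmR (individuals and moving-range) control chart check for sub-process stability. The data values are invented for illustration; only your own data can tell you whether a sub-process is actually stable.

```python
# Minimal sketch: testing a sub-process measure for stability with an
# XmR (individuals and moving-range) control chart before admitting it
# to a process performance baseline. All data values are illustrative.

def xmr_limits(observations):
    """Compute individuals-chart control limits from an XmR chart."""
    mean = sum(observations) / len(observations)
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2, where d2 = 1.128 is the bias constant for n = 2
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Defect density (defects/KLOC) from a series of design inspections
densities = [4.1, 3.8, 4.5, 4.0, 3.6, 4.3, 4.2, 3.9]
mean, lcl, ucl = xmr_limits(densities)
signals = [x for x in densities if not lcl <= x <= ucl]
print(f"mean={mean:.2f}, limits=({lcl:.2f}, {ucl:.2f}), signals={signals}")
```

If the measure shows out-of-control signals, you do not yet have the stable sub-process that a valid PPM must be built on.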

But simply selecting models without performing the analysis I just sketched out is incorrect, and you will not be able to demonstrate that your organization is a High Maturity organization.


Thanks for the detail. I just happened to notice that in CMMI v1.3 High Maturity, the "reliability growth model" that was given as an example in OPP SP 1.5 (CMMI v1.2) has been deleted. Does this mean that a reliability growth model will not be accepted in CMMI v1.3? Or that the reliability growth model is not acceptable to the experts? Or is it only good if you use CMMI v1.2 and not CMMI v1.3?

As CMMI v1.3 is an improvement, and its practices were carefully analyzed by the SEI and outside experts, is it advisable to use the reliability growth model given in CMMI v1.2? Or is there any chance that CMMI v1.3 will include the reliability growth model as an example?



Apparently there is some misunderstanding of my answer above.  Whether the CMMI contains the reliability growth model as an example or not is irrelevant to whether or not it is a good model.  Your organization has to mathematically analyze its data, business objectives, QPPOs, PPBs, and PPMs to determine if there is a need for using a reliability growth model.  Do the following analysis:
  1. Describe the reliability growth model in probabilistic terms (one way to do this is sketched after this list).
  2. Define the critical sub-processes (those that must be consistently and correctly followed every time) that can be managed using the reliability growth model.
  3. Define how a project manager uses the reliability growth model in the context of his or her projects to predict performance, perform "what-if" analysis, and predict QPPO achievement.
  4. Provide an equation or show by other means how the stable sub-processes that you have identified in your processes contribute to the reliability growth model.
  5. List the other models that are used in conjunction with the reliability growth model and explain why it has statistical relevance.
Once you have performed this analysis you will have enough information to answer this question yourself.
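To make step 1 concrete: one common way (by no means the only valid one) to state a reliability growth model in probabilistic terms is the Goel-Okumoto model, which treats the cumulative number of defects found by test time t as a non-homogeneous Poisson process with mean value function m(t) = a(1 - e^(-bt)), where a is the expected total number of defects and b is the per-defect detection rate. The sketch below fits this model to made-up test data and also hints at the kind of "what-if" use described in step 3.

```python
# Minimal sketch of step 1: a reliability growth model stated in
# probabilistic terms. The Goel-Okumoto model treats cumulative
# defects found by test time t as a non-homogeneous Poisson process
# with mean value function m(t) = a * (1 - exp(-b * t)).
# All test data below is illustrative only.

import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Cumulative defects observed at the end of each test week (made up)
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_defects = np.array([12, 22, 30, 36, 41, 44, 46, 48], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=(60.0, 0.2))

# Step 3 style "what-if": predicted residual defects if testing
# stops after week 10, to compare against a QPPO threshold.
residual = a_hat - goel_okumoto(10.0, a_hat, b_hat)
print(f"estimated total defects a={a_hat:.1f}, detection rate b={b_hat:.3f}")
print(f"predicted residual defects after week 10: {residual:.1f}")
```

Whether this particular model is the right one for your organization is exactly what the five-step analysis above is meant to determine.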

Saturday, September 10, 2011

Audit Findings

My personal experience shows that when audits are planned monthly or at milestones, it is very difficult to take any proactive quality measures. Let's say that SQA is conducting a review at the end of the design phase, just before the milestone review, and during the audit they identify that a particular design option was selected without applying DAR. How can they close this type of reported non-compliance with evidence that the project team is fixing the issue? What I have seen is that sometimes the project team treats the non-compliance as an oversight, like any other mistake, and closes it by labeling it a lesson learned. As SQA, I know there is a chance that the same issue will occur again in the future, but apart from presenting the findings at the milestone review meeting, there is nothing more we can do. And the SQA group does not have insight into most of the organization's processes where this type of event occurs, so we cannot ensure that every project is following the process per the plan. So please shed some light on this topic: what type of postmortem can we do as a reactive response, and what type of proactive measures can we take?

From your description, it sounds like all SQA does is flag a problem, and then the Project Team declares what it is going to do and makes the final decision. In other words, SQA has no control over the non-compliance after identifying the problem. This is an incorrect implementation of SQA. The SQA or PPQA people are the “eyes and ears” of senior management, and if there is a disagreement between the Project Team and SQA about an audit finding, it must be escalated to Senior Management for resolution. The Project Team does not have the authority to declare that an audit finding has been correctly resolved. SQA has the responsibility and authority to decide if the non-compliance is being properly identified and worked. If SQA feels that the Project Team’s action to address the non-compliance is inadequate, then SQA should not accept the closure and should insist that the Project Team take appropriate corrective actions. If SQA meets resistance, then SQA should escalate the issue to top management for resolution. Resolution may involve doing nothing, training or re-training the people following the process, modifying the process, or some combination of these.
Hope this explanation helps.

PI SP 3.1 Confirm Readiness of Product Components for Integration

I need help with understanding this practice. Here is the situation: in our organization we have implemented MS Team System. This tool allows us to analyze the code from different perspectives. We have implemented peer reviews, and the code reviews allow us to verify that the code complies with the design specification. We have also implemented CM audits to check the identification of every configuration item. Consequently, I'm not certain we are fully aligned with this practice.

The purpose of PI SP 3.1 is to ensure that all of the components that you will be assembling are ready for assembly. For a purely software project, this practice is pretty easy and straightforward. At a minimum, you want to be certain that every module has been properly checked into your CM system, that every configuration unit has been unit tested, and that the external and internal interfaces have been examined to verify that they comply with the documented interface descriptions. It sounds like you might have most of these activities covered by MS Team System and your peer reviews. What I don’t see in your description is any activity associated with checking the interfaces against their descriptions. When you are integrating hardware and software, or have a large and complex software project with many different systems, this practice becomes more complicated.
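To make those minimum checks concrete, here is a sketch of what an automated readiness gate for this practice might look like. The Component record, its fields, and the interface-spec format are hypothetical, invented for this example; they are not part of CMMI or of MS Team System.

```python
# Hypothetical sketch of a PI SP 3.1 readiness gate for a
# software-only project. The record layout and check names are
# invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    checked_in: bool                # under CM control?
    unit_tests_passed: bool         # unit tested successfully?
    interfaces: dict = field(default_factory=dict)  # name -> actual signature

def readiness_problems(component, interface_spec):
    """Return the reasons a component is NOT ready (empty list = ready)."""
    problems = []
    if not component.checked_in:
        problems.append("not checked into the CM system")
    if not component.unit_tests_passed:
        problems.append("unit tests not passed")
    # Verify each interface against its documented description
    for iface, documented in interface_spec.items():
        actual = component.interfaces.get(iface)
        if actual != documented:
            problems.append(f"interface {iface!r}: expected {documented!r}, got {actual!r}")
    return problems

spec = {"get_rate": "get_rate(sensor_id: int) -> float"}
comp = Component("telemetry", checked_in=True, unit_tests_passed=True,
                 interfaces={"get_rate": "get_rate(sensor_id: str) -> float"})
for issue in readiness_problems(comp, spec):
    print(f"{comp.name}: {issue}")
```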
Hope this short explanation helps.

I have one more question. Should we run unit tests for every configuration unit? Is it possible to implement actions other than unit testing to comply with this best practice? I think that the Static Code Analysis in VS Team System checks the interfaces between components. In the peer reviews of code we check the interfaces against their descriptions as documented in the design specifications.

You are actually focusing on the wrong topic. Instead you should be seeking answers to these types of questions.
  1. What do your business goals and objectives tell you about the required quality level of products?
  2. What is the reason for performing unit tests? Or what are you trying to achieve by unit testing the code?
  3. Do your customer requirements and your business goals and objectives require a quality level that demands that you perform unit tests before creating a product build?
  4. What are your requirements for each configuration item before creating a build?
Answers to these questions will provide the answers to your questions.
Basically, your configuration audits exist so that you can determine whether all of the configuration items are ready to be assembled. Perhaps the Static Code Analysis in VS Team System is satisfactory; perhaps it is not. That is for you to decide based on the quality requirements for your product.

Hope this explanation helps, but there is no clear answer to your question without being able to spend some time with you and your organization to perform an in-depth analysis of your processes and procedures.

Traceability in Pure Testing Projects

I have a question about addressing requirements traceability for pure testing projects (understanding requirements -> writing manual test cases -> executing them). If the application is not developed by us, what information, other than module name, requirement ID, description, and manual test case ID, needs to be mapped?

There is no definitive answer to your question. The actual answer is up to you and your organization: you must decide what is necessary for your traceability. What does each testing project need to know about traceability? If you can answer that question, then you have the answer to yours as well. What you have listed sounds reasonable, but only you can determine whether it is complete or whether you need to add other elements.
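If it helps to see the shape of such a matrix, here is a minimal sketch of one possible record layout for a pure testing project, written in Python for concreteness. The extra fields (test result, build tested, defect IDs) are suggestions to consider, not requirements; whether they belong in your matrix is, again, your decision.

```python
# Sketch of one possible traceability record for a pure testing
# project. Fields beyond module/requirement/description/test case
# are suggestions, not CMMI requirements.

import csv

FIELDS = [
    "module",          # Module name
    "requirement_id",  # Requirement ID
    "description",     # Requirement description
    "test_case_id",    # Manual test case ID
    "test_result",     # pass / fail / blocked / not run
    "build_tested",    # application build or version under test
    "defect_ids",      # defects raised against this requirement
]

rows = [
    {"module": "Login", "requirement_id": "REQ-017",
     "description": "Lock account after 5 failed attempts",
     "test_case_id": "TC-104", "test_result": "fail",
     "build_tested": "3.2.1", "defect_ids": "DEF-88"},
]

with open("traceability_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```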