Sunday, September 11, 2011

Is This a Valid Performance Model?

Is a reliability growth model considered to be a valid PPM in the CMMI?


Asking this question out of context with what you do in the organization does not have a lot of meaning.  The correct answer is both yes and no.  Please remember what High Maturity is all about.  You begin by setting your business goals and objectives and use them to derive your Quality and Process Performance Objectives (QPPOs).  These QPPOs in turn lead you to the proper measures, process performance baselines (PPBs), and process performance models (PPMs) that your organization needs to quantitatively and statistically manage your work.  So, if reliability growth is a critical process or sub-process and you have sufficient data to analyze to determine that you have a stable and capable process, then a reliability growth model might be considered a valid PPM.

But simply selecting models without performing the analysis I just sketched is incorrect, and you will not be able to demonstrate that your organization is a High Maturity organization.


Thanks for the detail. I just happened to notice that in CMMI v1.3 High Maturity, the "reliability growth model" that was given as an example in OPP SP 1.5 of CMMI v1.2 has been deleted. Does this mean that a reliability growth model will not be accepted under CMMI v1.3? Or that the reliability growth model is not acceptable to the experts? Or is it only good if you use CMMI v1.2 and not CMMI v1.3?

As CMMI v1.3 is an improvement and the practices were carefully analyzed by the SEI and experts, is it advisable to use the reliability growth model given in CMMI v1.2? Or is there any chance that CMMI v1.3 will include the reliability growth model as an example?



Apparently there is some misunderstanding of my answer above.  Whether the CMMI contains the reliability growth model as an example or not is irrelevant to whether or not it is a good model.  Your organization has to mathematically analyze its data, business objectives, QPPOs, PPBs, and PPMs to determine if there is a need for using a reliability growth model.  Do the following analysis:
  1. Describe the reliability growth model in probabilistic terms.
  2. Define the critical sub-processes (those that must be consistently and correctly followed every time) that can be managed using the reliability growth model.
  3. Define how a project manager uses the reliability growth model in the context of his or her projects to predict performance, perform "what-if" analysis, and predict QPPO achievement.
  4. Provide an equation or show by other means how the stable sub-processes that you have identified in your processes contribute to the reliability growth model.
  5. List the other models that are used in conjunction with the reliability growth model and why each has statistical relevance.
Once you have performed this analysis you will have enough information to answer this question yourself.
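To make step 1 concrete: a reliability growth model is typically stated in probabilistic terms, for example the Goel-Okumoto model, whose mean value function m(t) = a(1 - e^(-bt)) gives the expected cumulative defects found by time t. The sketch below uses hypothetical weekly defect data and a crude grid search to fit the curve; a real analysis would use proper nonlinear regression with confidence intervals, so treat this only as an illustration of the idea.

```python
import math

# Hypothetical cumulative defect counts observed at the end of each test week.
weeks   = [1, 2, 3, 4, 5, 6, 7, 8]
defects = [12, 21, 28, 33, 37, 40, 42, 43]

def goel_okumoto(t, a, b):
    """Expected cumulative defects by time t: m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b):
    """Sum of squared errors between the model and the observed data."""
    return sum((goel_okumoto(t, a, b) - d) ** 2 for t, d in zip(weeks, defects))

# Crude grid search for (a, b); real work would use nonlinear least squares.
a_hat, b_hat = min(
    ((a, b) for a in range(40, 81) for b in (i / 100.0 for i in range(5, 100))),
    key=lambda p: sse(*p),
)

# a_hat estimates the total latent defects; the difference between it and
# the defects found so far is the predicted remaining-defect exposure.
remaining = a_hat - goel_okumoto(weeks[-1], a_hat, b_hat)
print(f"estimated total defects: {a_hat}, remaining after week 8: {remaining:.1f}")
```

A project manager could use the fitted curve for step 3's "what-if" analysis, e.g. asking how many more weeks of testing would be needed before the predicted remaining defects fall below a QPPO threshold.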

Saturday, September 10, 2011

Audit Findings

My personal experience shows that when audits are planned monthly or at milestones, it is very difficult to take any proactive quality measures. Let's say that SQA is conducting a review at the end of the design phase, just before the milestone review, and during the audit they identify that a particular design option was selected without applying DAR. How can they close this type of reported non-compliance with evidence that the project team is fixing the issue? What I have seen is that sometimes the project team treats the non-compliance as an oversight, like other types of mistakes, and closes it by labeling it as a lesson learned. As SQA, I know there is a chance that the same issue can occur again in the future, but apart from presenting the findings at the milestone review meeting, there is nothing we can do. And the SQA group does not have insight into most of the organization's processes where this type of event occurs, so we cannot ensure that every project is following the process per the plan. So please shed some light on this topic: what type of postmortem can we do as a reactive response, and what type of proactive measure can we take?

It sounds from your description like all SQA does is flag a problem, after which the project team declares what it is going to do and makes the final decision. In other words, SQA has no control over the non-compliance after identifying it. This is an incorrect implementation of SQA. The SQA or PPQA people are the “eyes and ears” of senior management, and if there is a disagreement between the Project Team and SQA about an audit finding, it must be escalated to Senior Management for resolution. The Project Team does not have the authority to declare that an audit finding has been correctly resolved. SQA has the responsibility and authority to decide if the non-compliance is being properly identified and worked. If SQA feels that the Project Team’s action to address the non-compliance is inadequate, then SQA should not accept the closure and should insist that the Project Team take appropriate corrective actions. If SQA meets resistance, then SQA should escalate the issue to top management for resolution. Resolution may involve doing nothing, training or re-training the people following the process, modifying the process, or some combination.
Hope this explanation helps.
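The escalation path described above amounts to a small workflow in which only SQA (or senior management) can close a finding. A minimal sketch, with state names and transitions that are purely illustrative and not taken from the CMMI:

```python
# Hypothetical non-compliance (NC) lifecycle reflecting the escalation path
# described above; the state names and transitions are illustrative only.
ALLOWED = {
    "OPEN":            {"ACTION_PROPOSED"},               # SQA raises the NC
    "ACTION_PROPOSED": {"CLOSED", "REJECTED"},            # SQA reviews the fix
    "REJECTED":        {"ACTION_PROPOSED", "ESCALATED"},  # rework or escalate
    "ESCALATED":       {"ACTION_PROPOSED", "CLOSED"},     # management decides
    "CLOSED":          set(),                             # terminal: SQA accepts
}

class NonCompliance:
    def __init__(self, finding):
        self.finding = finding
        self.state = "OPEN"
        self.history = ["OPEN"]

    def transition(self, new_state):
        # Only sanctioned transitions are allowed; the project team cannot
        # jump straight from OPEN to CLOSED on its own authority.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

nc = NonCompliance("Design option selected without applying DAR")
nc.transition("ACTION_PROPOSED")   # project team proposes a fix
nc.transition("REJECTED")          # SQA finds the fix inadequate
nc.transition("ESCALATED")         # SQA escalates to senior management
nc.transition("ACTION_PROPOSED")   # management directs corrective action
nc.transition("CLOSED")            # SQA accepts the resolution
```

The key design point is that "CLOSED" is reachable only through an SQA review or a management decision, which is exactly the authority question raised above.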

PI SP 3.1 Confirm Readiness of Product Components for Integration

I need help understanding this practice. Here is the situation: in our organization we have implemented MS Team System. This tool allows us to analyze the code from different perspectives. We have implemented peer reviews. The code reviews allow us to verify that the code complies with the design specification. We have also implemented CM audits to check the identification of every configuration item. Consequently, I'm not certain we are fully aligned with this practice.

The purpose of PI SP 3.1 is to ensure that all of the components that you will be assembling are ready for assembly. For purely a software project, this practice is pretty easy and straightforward. At a minimum, you want to be certain that every module has been properly checked into your CM system, that every configuration unit has been unit tested, and that the external and internal interfaces have been examined to verify that they comply with the documented interface descriptions. It sounds like you might have most of these activities covered by MS Team System and your peer reviews. What I don’t see in your description is any activity associated with checking the interfaces against their descriptions. When you are integrating hardware and software, or have a large and complex software project with many different systems, this practice becomes more complicated.
Hope this short explanation helps.
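The minimum checks described above (under CM control, unit tested, interfaces verified against their documented descriptions) can be captured as a simple readiness gate. This is only a sketch; the component records and field names are invented for illustration, not prescribed by the practice:

```python
# Hypothetical integration-readiness gate for PI SP 3.1: every component must
# be under CM control, unit tested, and have its interface verified against
# the documented interface description. Field names are illustrative.
components = [
    {"name": "parser",  "in_cm": True, "unit_tested": True,  "iface_verified": True},
    {"name": "codegen", "in_cm": True, "unit_tested": True,  "iface_verified": False},
    {"name": "runtime", "in_cm": True, "unit_tested": False, "iface_verified": True},
]

def readiness_report(comps):
    """Return {component name: list of unmet criteria}; empty list means ready."""
    criteria = ("in_cm", "unit_tested", "iface_verified")
    return {c["name"]: [k for k in criteria if not c[k]] for c in comps}

report = readiness_report(components)
not_ready = {name: gaps for name, gaps in report.items() if gaps}
print(not_ready)   # components blocking integration, and why
```

Whatever form your gate takes, the point of the practice is that integration does not start until every component passes every criterion your organization has defined.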

I have one more question. Should we run unit tests for every configuration unit? Is it possible to implement actions other than unit testing to comply with this best practice? I think that the Static Code Analysis in VS Team System checks the interfaces between components. In the peer reviews of code we check the interfaces against their descriptions documented in the design specifications.

You are actually focusing on the wrong topic. Instead, you should be seeking answers to these types of questions:
  1. What do your business goals and objectives tell you about the required quality level of products?
  2. What is the reason for performing unit tests? Or what are you trying to achieve by unit testing the code?
  3. Do your customer requirements and your business goals and objectives require a quality level that demands that you perform unit tests before creating a product build?
  4. What are your requirements for each configuration item before creating a build?
Answers to these questions will provide the answers to your questions.
Basically, your configuration audits are there in order for you to determine if all of the configuration items are ready to be assembled. Perhaps the Static Code Analysis in VS Team System is satisfactory, perhaps it is not. That is for you to decide based on the quality requirements for your product.

Hope this explanation helps, but there is no clear answer to your question without being able to spend some time with you and your organization to perform an in-depth analysis of your processes and procedures.

Traceability in Pure Testing Projects

I have a question about addressing requirements traceability for pure testing projects (Understanding requirements->Writing Manual Test cases->executing them). If the application is not developed by us, what information other than Module name, Requirement ID, description, and Manual Test Case ID needs to be mapped?

There is no definitive answer to your question. The actual answer is up to you and your organization to decide what is necessary for your traceability. What does each testing project need to know about traceability? If you can answer that question, then you have the answer to your question as well. What you have listed sounds reasonable, but only you can determine if it is complete or that you need to add other elements.
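For a pure testing project, the mapping you list can be held in a simple bidirectional traceability matrix. The sketch below uses invented IDs to show one useful analysis such a matrix enables: spotting requirements with no test coverage and test cases that trace to nothing.

```python
# Hypothetical traceability records for a pure testing project; the fields
# mirror the mapping mentioned above (module, requirement ID, test case ID).
trace = [
    {"module": "Login",   "req_id": "REQ-001", "tc_id": "TC-101"},
    {"module": "Login",   "req_id": "REQ-001", "tc_id": "TC-102"},
    {"module": "Billing", "req_id": "REQ-002", "tc_id": "TC-201"},
]
all_reqs  = {"REQ-001", "REQ-002", "REQ-003"}            # from the requirements spec
all_tests = {"TC-101", "TC-102", "TC-201", "TC-999"}     # from the test repository

covered_reqs = {row["req_id"] for row in trace}
mapped_tests = {row["tc_id"] for row in trace}

uncovered_reqs = all_reqs - covered_reqs    # requirements with no test case
orphan_tests   = all_tests - mapped_tests   # test cases tracing to nothing

print("uncovered requirements:", sorted(uncovered_reqs))
print("orphan test cases:", sorted(orphan_tests))
```

If gap reports like these would be useful to your testing projects, that is a good sign the elements you have listed are sufficient; if not, that tells you what to add.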

Sunday, February 6, 2011

Review Activity for a Short Term Project

Our organization will be going through a CMMI Maturity Level 2 appraisal in a couple of months. I have a PPQA question. Per the PPQA Process Area (PA), we require a review of the work products (content/template) and procedures required at Maturity Level 2 during the project life cycle. We have one project that is 3 months long. There are many work products that will be produced during the project development life cycle.
  • Requirements documents, such as the SRS, use cases, bidirectional traceability matrix, change log, etc.;
  • Plans for all the PAs, e.g. requirements management plan, project plan, configuration plan, etc;
  • Development artifacts, such as ERD, Code, UML diagrams, etc;
  • QC artifacts, such as test cases, test reports, etc.
  • Monitoring/controlling artifacts, such as Issue list, MoMs, Risks, etc.
How is it possible to review the work products for a 3-month project when we don't have a separate QA department and the stakeholders involved in development do the work product reviews one way or the other?

This same question holds true for reviewing procedures.

Of course, we review high priority documents, such as Project Plan, Use Cases, ERD, Application; but not all of them.

Can you help me understand what should be done for a short duration project, such that the PPQA PA requirements are met and we don't have to hire separate people just to fulfill the requirement?

The first thing that I would do is postpone your ML 2 SCAMPI A appraisal, as apparently you have a major risk to achieving ML 2 since PPQA does not appear to be in place in your organization. And even if you could put PPQA in place for a 3-month project between now and your appraisal, that may still not be enough time to demonstrate institutionalization, meaning that you have a repeatable process. Essentially you will have one project using PPQA, which is one data point, and it is not possible to determine institutionalization from one data point. Your organization will be at serious risk of not achieving ML 2.

Industry averages show that PPQA is 3–5% of your organization. You haven't told me how large your organization is, but if your organization is 25 people, then 1 person should be assigned to perform the PPQA practices.
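The 3–5% rule of thumb above translates directly into headcount. A tiny sketch (the percentages are the industry figures quoted above; the function name is invented):

```python
import math

def ppqa_headcount(org_size, low=0.03, high=0.05):
    """Apply the 3-5% industry rule of thumb, rounding up to whole people."""
    return math.ceil(org_size * low), math.ceil(org_size * high)

print(ppqa_headcount(25))   # → (1, 2): one person at the low end, two at the high
```

Rounding up matters: even a small organization needs at least one person wearing the PPQA hat, though it need not be a full-time role.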

I think that you are misunderstanding the difference between reviewing a work product and objectively evaluating a work product. It sounds like your project teams are already reviewing the work products. The role of PPQA is not to review the work products, but to audit the work products and processes to ensure that the work products follow the specified standards and are produced according to your documented processes.

I highly recommend that you, or someone you select in your organization, take a training class on how to perform PPQA. I cannot adequately explain how to perform PPQA and answer your specific questions in this blog. The person you select for the training needs to be taught how to conduct a work product audit, how to conduct a process audit, how to plan PPQA audits, how to communicate audit results, and how to track audit non-compliances to resolution. If you don’t already have this capability in house, it will take some time to develop it internally. And I strongly advise against using an external consultant to provide this service. PPQA is for the benefit of your organization and management. It is essentially the eyes and ears of your senior management. And an external consultant may be motivated by other considerations than your best business interests if asked to provide PPQA services.

Tuesday, January 11, 2011

Why Isn't the SEI Implementing the CMMI for Itself?

Why doesn't the SEI use its own model, the CMMI, for all of its different product development and services? Even for SEI projects and program management it is crucial, and they have customers the world over. If the SEI goes for a CMMI ML3 appraisal it will be great for the user community, and they can achieve their mission in a planned manner, right?

Do Lead Appraisers & SEI Partners feel that they can benefit if the SEI gets CMMI ML3 (defined Process)?

In such a case, who will appraise the SEI? (sorry for such a hypothetical Question)

As the SEI does not develop software, but delivers services, the CMMI-DEV doesn’t apply. That is why the SEI has not been previously appraised to the CMMI. However, the SEI is now implementing the CMMI-SVC for the services it delivers. This is a good thing and the SEI Partners are noticing some of the improvements. Obviously, by the SEI’s Conflict of Interest policy, a CMMI-SVC Lead Appraiser external to the SEI organization being appraised would have to lead the appraisal team.

Monday, November 1, 2010

Appraisals: Practice or Subpractice level?

For successful SCAMPI appraisals, is there any reason to prepare process compliance at the sub-practice level? Would appraisers be looking for evidence at that level?

This is a question answered by taking the 3-day Introduction to CMMI class and also by your Lead Appraiser. There are three CMMI components: Required, Expected, and Informative. An appraisal only covers the Required (Goals) and Expected (Practices) components. Your Lead Appraiser should also be providing some training or guidance on how to build the PIIDs, which contain the objective evidence for an appraisal. And the whole appraisal team is involved in reviewing the PIIDs during the Readiness Review to determine if the evidence is proper for a SCAMPI appraisal.

If a Lead Appraiser or the appraisal team is appraising you to the sub-practice level, they have gone too far. The SCAMPI method is only concerned with appraising the organization to the Goals and Practices.