Showing posts with label PPBs.

Sunday, October 9, 2011

Is a Prediction Model Required for Maturity Level 4

Is it necessary to have a prediction model for L4 and L5?

I collect data from different projects, put them in a 3-sigma band, and take care of outliers.  Then I compare the mean and standard deviation of the current PPB with those of the previous PPB.  I also check the trend coming out of this PPB against the LSL and USL set by the management team.  If the mean, SD, LCL, and UCL have decreased from the previous PPB, I update the management team and we review the projects on this basis.
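For illustration, the 3-sigma band described here can be sketched in a few lines of Python; the schedule-variance figures and spec limits below are purely hypothetical:

```python
import statistics

def ppb_summary(samples, lsl, usl):
    """Compute a simple 3-sigma process performance baseline.

    Returns the mean, standard deviation, 3-sigma control limits, any
    points falling outside the band (candidate outliers), and whether
    the band fits inside the management-set spec limits (LSL/USL).
    """
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)           # sample standard deviation
    ucl, lcl = mean + 3 * sd, mean - 3 * sd
    outliers = [x for x in samples if x < lcl or x > ucl]
    within_spec = lsl <= lcl and ucl <= usl  # band inside spec limits?
    return {"mean": mean, "sd": sd, "ucl": ucl, "lcl": lcl,
            "outliers": outliers, "within_spec": within_spec}

# Hypothetical schedule-variance data (%) from several projects
current = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.2]
print(ppb_summary(current, lsl=0.0, usl=10.0))
```

Comparing the returned mean, SD, and control limits against the same summary computed for the previous baseline gives exactly the period-over-period comparison described above.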

When I talk to someone in my circle of friends, they talk about regression, simulation, Monte Carlo, etc.  In my organization we need to show process performance against set targets, so why make life so complex with all of the above-mentioned methods?



From what I can glean from your question, you have developed a Process Performance Baseline (PPB).  But I have no idea why you have done this or what value you are getting from knowing this PPB.  What use is it to your organization?  How does it help you meet your Quality and Process Performance Objectives (QPPOs) and your business goals?

If you are not interested in fully implementing ML 4 or ML 5, then I suppose you don’t need anything else as long as you are deriving some sort of benefit from this PPB.  However, for a complete High Maturity implementation, you are expected to do the proper statistical analysis of data distributions, probability statistics, process engineering, etc. to derive appropriate PPBs and PPMs.  And to be a PPM instead of merely a forecast model and be of use for "what-if" analyses, the PPMs must contain controllable factors that have an impact on the outcome.

Also, if you want to implement ML 4 and/or ML 5, then you need to have established organizational QPPOs, a number of PPBs that support the QPPOs and can be used to evaluate the feasibility of achieving them, and a set of Process Performance Models (PPMs) that are derived from your historical data that are used in conjunction with the PPBs to predict each project’s ability to meet its QPPOs, as well as a number of other activities.

Therefore, if all that you have is one PPB and nothing else, you only have a partial implementation of OPP and still have a lot of work ahead of you before you can consider that you have implemented ML 4, let alone ML 5.



Sunday, September 11, 2011

Is This a Valid Performance Model?

Is a reliability growth model considered to be a valid PPM in the CMMI?


Asking this question out of context, without knowing what you do in your organization, does not have a lot of meaning.  The correct answer is both yes and no.  Please remember what High Maturity is all about.  You begin by setting your business goals and objectives and use them to derive your Quality and Process Performance Objectives (QPPOs).  These QPPOs in turn will lead you to the proper measures, process performance baselines (PPBs), and process performance models (PPMs) that your organization needs to quantitatively and statistically manage your work.

So, if reliability growth is a critical process or sub-process and you have sufficient data to analyze to determine that you have a stable and capable process, then a reliability growth model might be considered a valid PPM.

But just selecting models without performing the analysis I just sketched out is incorrect and you will not be able to demonstrate that your organization is a High Maturity organization.


Thanks for the detail. I just happened to see that in CMMI v1.3 High Maturity, the reliability growth model that was given as an example in OPP SP 1.5 (CMMI v1.2) has been deleted. Does this mean that the reliability growth model will not be accepted in CMMI v1.3? Or that the reliability growth model is not acceptable to the experts? Or is it only good if you use CMMI v1.2 and not CMMI v1.3?

As CMMI v1.3 is an improvement and the practices were carefully analyzed by the SEI and experts, is it advisable to use the reliability growth model given in CMMI v1.2? Or is there any chance that CMMI v1.3 will include the reliability growth model as an example?



Apparently there is some misunderstanding of my answer above.  Whether the CMMI contains the reliability growth model as an example or not is irrelevant to whether or not it is a good model.  Your organization has to mathematically analyze its data, business objectives, QPPOs, PPBs, and PPMs to determine if there is a need for using a reliability growth model.  Do the following analysis:
  1. Describe the reliability growth model in probabilistic terms.
  2. Define the critical sub-processes (those that must be consistently and correctly followed every time) that can be managed using the reliability growth model.
  3. Define how a project manager uses the reliability growth model in the context of his or her projects to predict performance, perform "what-if" analyses, and predict QPPO achievement.
  4. Provide an equation or show by other means how the stable sub-processes that you have identified in your processes contribute to the reliability growth model.
  5. List the other models that are used in conjunction with the reliability growth model and explain why the combination has statistical relevance.
Once you have performed this analysis you will have enough information to answer this question yourself.
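As an illustration of step 1 above, one common way to describe a reliability growth model in probabilistic terms is the Goel-Okumoto model, in which the expected number of defects found by test time t is a(1 − e^(−bt)). The sketch below uses purely hypothetical parameter values; in practice a and b would be fitted (e.g., by maximum likelihood) from the organization's historical defect-discovery data:

```python
import math

def expected_defects(t, a, b):
    """Goel-Okumoto reliability growth model: expected cumulative
    defects found by test time t, where a is the estimated total
    defect population and b the per-defect detection rate."""
    return a * (1.0 - math.exp(-b * t))

def remaining_defects(t, a, b):
    """Predicted defects still latent at time t."""
    return a - expected_defects(t, a, b)

# Hypothetical fitted parameters
a, b = 120.0, 0.05
for week in (4, 8, 16):
    print(week, round(expected_defects(week, a, b), 1),
          round(remaining_defects(week, a, b), 1))
```

The remaining-defects prediction is the kind of output a project manager could use for "what-if" analysis (step 3): for example, how many more weeks of testing are needed before the predicted latent defect count drops below a QPPO threshold.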

Monday, August 10, 2009

Auditor Directing a PPM

We are trying for CMMI ML 5 Ver 1.2, and our auditor has asked us to come up with a PPM for predicting the outcome of CAR and OID. Can you help me on how to go about it?

When you say auditor, I assume that you mean your Lead Appraiser (LA) is asking you for a PPM for predicting the outcome of Causal Analysis and Resolution (CAR) and Organizational Innovation and Deployment (OID). Is your Lead Appraiser an SEI-certified High Maturity Lead Appraiser? Has your organization identified the need for a Process Performance Model (PPM) to predict the CAR and OID outcomes, or is this solely a request from your LA? Your LA is not the person to tell you which PPMs you need. Do you have Quality and Process Performance Objectives (QPPOs) that require PPM(s) to predict CAR and OID outcomes? If the answer is no, then you don't need a PPM for CAR and OID.

What I find odd is that you do not mention a Process Performance Baseline (PPB) for CAR and OID. If you are going to define and develop a PPM, then you really need to develop the CAR and OID PPBs first before you can determine the PPMs. From your brief description, it sounds like your Lead Appraiser may have overstepped his boundaries in asking for the CAR and OID PPM.

Friday, April 10, 2009

PPM - Data Analysis

We are a CMMI ver 1.1 level 5 assessed software company and we are in the process of ver 1.2 assessment. As part of this, we are developing the Process Performance Models. We would like to get input on the following points.
  1. What kinds of statistical analyses are required on the data that is collected for PPM development?
  2. Do we need to perform a Gage R&R test on the data that is collected? Since we are a software company and not a manufacturing company, I am not too sure about the data collection that needs to happen at multiple instances by the same person on the same tool and by a different person on the same tool.
For example, to check for Repeatability, if I am considering the Schedule Variance of various feature developments as a measure, there might not be any difference in the schedule variance measured by different people, as long as the operational definition of the metric is clear.
Now, to check for Reproducibility, if the competency of the resource doing code development is considered as a measure, this could change from period to period, unlike in the manufacturing industry, where the activities are repetitive in nature. So the reproducibility will be minimal in this scenario.
If it is mandatory to perform Gage R&R analysis on the data, can you suggest the different areas where it can be applied and how the analysis can be performed? Please share your thoughts on this.

It sounds like the Process Performance Models (PPMs) were overlooked for your organization when it was appraised to CMMI v1.1 3 years ago. I would like to make several points regarding PPMs.
  1. The CMMI does not specify any required statistical analysis techniques for PPM development. Based on your data, your QPPOs, and your PPBs, the organization has to decide the proper analytic techniques to use. There is a wide variety available for use.
  2. What is the reason you are considering Repeatability and Reproducibility? Are you led to these items by your Quality and Process-Performance Objectives (QPPOs)?
  3. There is no CMMI requirement to use Gage R&R.
  4. It sounds to me that it would be a good idea for you and your organization to have someone facilitate a Measurement and Analysis workshop for you to properly identify your measures, PPBs, and PPMs.
  5. You are asking some questions that cannot be properly answered on this blog unless we are working directly with your organization and have some knowledge of your business.

Monday, August 4, 2008

Help Understanding Process Performance Models

To better understand the concept of a Process Performance Model (PPM), it helps to have a little background about both Process Performance Baselines (PPBs) and PPMs. Both are based on the historical process data that the organization has been collecting and analyzing. The CMMI definition of a PPB is a documented characterization of the actual results achieved by following a process, which is used as a benchmark for comparing actual process performance against expected process performance. In other words, the historical process data, when analyzed, will indicate how the process has behaved in the past and the expected range of the process performance. As an illustrative example, consider hurricanes. Based on over 100 years of weather data, we know where hurricanes usually form and where they travel, either up the east coast of the US or into the Caribbean and Gulf of Mexico.



These data define an envelope that contains the expected range of data based on historical information. By statistically analyzing these data we can compute the central line and a set of control limits for the type of control chart that is appropriate for the data being analyzed: p chart, np chart, c chart, or u chart.
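For instance, the limits for a c chart (defect counts per inspection unit) follow directly from its Poisson assumption: the center line is the mean count c̄, and the limits are c̄ ± 3√c̄. A minimal sketch, using hypothetical peer-review defect counts:

```python
import math

def c_chart_limits(defect_counts):
    """Center line and control limits for a c chart (defect counts per
    inspection unit). The c chart assumes Poisson-distributed counts,
    so the standard deviation is the square root of the mean count."""
    c_bar = sum(defect_counts) / len(defect_counts)
    sigma = math.sqrt(c_bar)
    ucl = c_bar + 3 * sigma
    lcl = max(0.0, c_bar - 3 * sigma)   # counts cannot go below zero
    return c_bar, lcl, ucl

# Hypothetical defects found per peer review, from historical data
counts = [4, 7, 3, 6, 5, 8, 4, 5, 6, 2]
center, lcl, ucl = c_chart_limits(counts)
print(center, lcl, ucl)
```

The p, np, and u charts follow the same pattern but use the binomial rather than the Poisson standard deviation, and the u chart normalizes by inspection-unit size.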

The CMMI definition of a PPM is a description of the relationships among attributes of a process and its work products that are developed from historical process-performance data and calibrated using collected process and product measures from the project and that are used to predict results to be achieved by following a process. So by further analyzing the historical data, it is possible to derive PPMs that can be used to predict future process performance based on current process performance. The model may be as simple as merely extrapolating a straight line, or as sophisticated as running Monte Carlo simulations of the process if your model is based on a number of independent variables.

Going back to the hurricane analogy, there are a number of predictive hurricane models that the weather service uses to predict the ground track of the eye and where it might make landfall. Whereas the hurricane PPB is based on the statistics of ground track data from past hurricanes, the hurricane PPMs use that information plus additional parameters such as speed, barometric pressure, other nearby weather systems, high-pressure areas, and fronts to predict the storm’s ground track. Every time a new set of storm data is collected, the meteorologists revise their predictions by running the data through the PPMs. In addition, as the hurricane example shows, you may have more than one PPM because of the inherent complexity of the process you are trying to model. Sometimes the models agree fairly closely, as in the example shown. Other times there may be a big variance.
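As a rough illustration of the Monte Carlo approach, the sketch below treats total cycle time as the sum of three sub-process durations, each drawn from a normal distribution whose mean and standard deviation would, in practice, come from that sub-process's PPB. All the numbers here are hypothetical:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_delivery(n_trials=10000, target=30.0):
    """Monte Carlo sketch of a PPM: draw each sub-process duration from
    its (hypothetical) PPB distribution, sum them per trial, and return
    the predicted mean cycle time plus the probability of meeting the
    target -- the kind of 'what-if' answer a PPM should provide."""
    totals = []
    for _ in range(n_trials):
        design = random.gauss(10.0, 2.0)    # PPB: mean 10 days, sd 2
        code = random.gauss(12.0, 3.0)      # PPB: mean 12 days, sd 3
        test = random.gauss(6.0, 1.5)       # PPB: mean 6 days, sd 1.5
        totals.append(design + code + test)
    p_meet = sum(t <= target for t in totals) / n_trials
    return statistics.mean(totals), p_meet

mean_days, p = simulate_delivery()
print(round(mean_days, 1), round(p, 2))
```

Changing a sub-process's mean or standard deviation and rerunning the simulation is the "what-if" analysis: it shows how improving one controllable factor shifts the probability of meeting the QPPO.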


So, to answer your question, the PPB is the control chart, central line and the two control limits UCL and LCL. When you start plotting the actual data on the control chart, you can use the PPM to predict if the current data will violate the control limits in the future.

To gain a better understanding of a PPM you have to also understand what is meant by dependent and independent variables. In mathematical terms, an independent variable is an input to a function and a dependent variable is an output of the function. Thus if we have a function f(x), then x is the independent variable, and f(x) is the dependent variable. The dependent variable depends on the independent variables; hence the names.

In the design of experiments, independent variables are those whose values are controlled or selected by the person experimenting (experimenter) to determine its relationship to an observed phenomenon (the dependent variable). In such an experiment, an attempt is made to find evidence that the values of the independent variable determine the values of the dependent variable (that which is being measured). The independent variable can be changed as required, and its values do not represent a problem requiring explanation in an analysis, but are taken simply as given. The dependent variable on the other hand, usually cannot be directly controlled.

Controlled variables are also important to identify in experiments. They are the variables that are kept constant to prevent them from influencing the effect of the independent variable on the dependent variable. Every experiment has controlled variables, and it is necessary not to change them, or the results of the experiment won't be valid.

In other words:

The independent variable answers the question "What do I change?"
The dependent variable answers the question "What do I observe?"
The controlled variable answers the question "What do I keep the same?"

Back to the hurricane analogy, the independent variable is the storm’s current position. The dependent variable is the predicted ground track. The controlled variables are the current storm attributes that are assumed to remain constant for the next period of time between observations.

Therefore, a PPM is a predictive model based on one or more independent variables that can be used to predict an outcome (dependent variable). It is a mathematical model built using empirical process performance data.

In order to build a PPM, you have to use statistical and other analytic techniques to examine the historical data to construct a model that you can use to predict future states. Initially the PPM may not be very accurate. But each time you use a process, you should incorporate the resulting process data into the model to improve its accuracy for its next use. A simple example is the estimation model used by a project manager. When first starting to use a project estimation model, the predicted results probably will not be very accurate. But by collecting the actual project results, comparing them against the estimates, and incorporating them into the model, its ability to predict accurately will improve over time.
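A minimal sketch of such an estimation model, fit by ordinary least squares to a hypothetical size/effort history and refit as new actuals arrive:

```python
def fit_effort_model(sizes, efforts):
    """Least-squares fit of effort = a + b * size. sizes and efforts
    are historical actuals; the fit is simply redone each time a new
    completed project's actuals are added to the history."""
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(efforts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(sizes, efforts))
    den = sum((x - mx) ** 2 for x in sizes)
    b = num / den
    a = my - b * mx
    return a, b

# Hypothetical history: (size in KLOC, effort in person-months)
sizes = [5.0, 8.0, 12.0, 20.0]
efforts = [11.0, 17.0, 25.0, 41.0]
a, b = fit_effort_model(sizes, efforts)
print(round(a + b * 10.0, 1))   # estimate for a new 10 KLOC project

# After that project completes, fold its actual effort back into the
# history and refit, so accuracy improves with each use.
a2, b2 = fit_effort_model(sizes + [10.0], efforts + [22.0])
```

Real PPMs usually involve more independent variables than size alone, but the calibrate-use-recalibrate cycle is the same.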

Tuesday, June 24, 2008

Definition of Complexity

While going through some SEI presentations, I came across the term Design Complexity, which was one of the contributing factors for Delivered Defect Density. As we have chosen this as one of the PPMs in our organization, I would like to know how to define "Design Complexity". I have been searching on Google for the definition, but have been unable to find it.

I see a problem right from the start in your approach. You have elected to use a term and build a Process Performance Model that has no definition within your organization. Don’t do a literature search to determine what Process Performance Baselines (PPBs) and Process Performance Models (PPMs) to build for your organization. You need to start from the basics. What are your organization’s Quality and Process Performance Objectives (QPPOs)? You need to determine these first. Then you analyze the data in your measurement repository to build PPBs and PPMs that support the QPPOs.

The next step is to use Measurement and Analysis (MA) to take the QPPOs, derive the information needs for the QPPOs, and then derive the measurement specifications that support the information needs and QPPOs. Once you have performed the MA process for your QPPOs, then you will be able to determine if you in fact need to look at Delivered Defect Density and/or Design Complexity. And keep in mind these two terms are just labels. The data you collect and analyze for Delivered Defect Density in one organization may be different from the data for Delivered Defect Density that are collected and analyzed in a different organization. The concepts are important, but going through the MA process will help you define what you need for your organization. Then you can apply whatever label makes sense for you. The MA process will also focus you on what data to analyze to build the PPBs and PPMs that will support your QPPOs.

Obviously Delivered Defect Density and Design Complexity are interesting things to look at, but do they support your organization’s QPPOs? If not, then there is no point in investigating them any further.

Now having said all that, I am surprised that you haven’t been able to locate any information on Design Complexity on the internet. In fact, here is an SEI link on this subject:
http://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html and a Wikipedia definition http://en.wikipedia.org/wiki/Cyclomatic_complexity

Tom McCabe has developed a whole collection of complexity measures, which you can find out more about here:
http://www.mccabe.com/
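For reference, McCabe's cyclomatic complexity is computed from a program's control-flow graph as V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe cyclomatic complexity V(G) = E - N + 2P for a
    control-flow graph with E edges, N nodes, and P connected
    components (P is 1 for a single function)."""
    return edges - nodes + 2 * components

# A function with one if/else has 4 nodes (decision, then-branch,
# else-branch, join) and 4 edges, giving V(G) = 2: two linearly
# independent paths through the code.
print(cyclomatic_complexity(edges=4, nodes=4))
```

V(G) also equals the number of decision points plus one for structured code, which is how most static-analysis tools actually compute it.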

Bottom line, work through the OPP and MA processes first, THEN look for help with definitions and building PPBs and PPMs. Don’t adopt someone else’s PPBs and PPMs just because these worked for them.

Thursday, April 10, 2008

Help! I am Totally Lost When Interpreting the QPM Specific Practices

QPM SP 1.1 states “Establish and maintain the project’s quality and process-performance objectives.” When you are starting to implement the Maturity Level 4 PAs, you won’t have a Process Capability Baseline; establishing one is ultimately the goal of ML 4 so the organization can then move to ML 5. One of the main thrusts of ML 4 is to analyze the process data looking for Special Causes of Variation. Once those have been analyzed, then it is possible to determine the process capability. For SP 1.1, the project’s quality and process-performance objectives (QPPOs) are set by management and are based, in part, on the organization’s objectives (OPP SP 1.3). The Process Performance Models (PPMs) are then used to determine if the QPPOs can be met. If not, the QPPOs should be appropriately adjusted.

QPM SP 1.2 states "Select the sub-processes that compose the project's defined process based on historical stability and capability data." One of the key words in this practice statement is "historical". Sources of historical stability and capability data include the organization’s process performance baselines (PPBs) and PPMs (OPP SP 1.4 and SP 1.5). And the intent of this practice is to tailor the organization’s standard processes (OSSP) so the project’s processes will support the project’s QPPOs defined in SP 1.1.

QPM SP 1.3 states “Select the sub-processes of the project’s defined process that will be statistically managed.” The model does not say that these sub-processes MUST be statistically managed, but that these WILL be statistically managed. And this practice focuses on more than just selecting sub-processes. It also focuses on identifying the attributes of each sub-process that will be used to statistically manage it.

People appear to get confused with Maturity Level 4 (ML 4) when trying to understand QPM independently from the rest of the model. You cannot do that. You have to consider OPP and QPM together when looking at ML 4. OPP provides the analysis of the historical data to build the PPBs and PPMs, which are then used by the projects to help them appropriately tailor the OSSP to what will meet the project’s QPPOs. QPM SP 1.2 uses the word "compose", which may contribute to some of the confusion. Since compose is not in the CMMI Glossary, the dictionary definition is applicable. The Webster definition of compose is “To form by putting together two or more things or parts; to put together; to make up; to fashion.” So for this practice, compose means going to the OSSP, selecting the processes and sub-processes that will meet the QPPOs, and then applying the necessary tailoring criteria. What this practice implies is that the OSSP may have several different processes for each Process Area (PA) so the project manager can choose the most appropriate one when composing the project's defined process.

Another ML 4 concept that may be the cause of confusion is the notion of Quantitative Management vs. Statistical Management. QPM SG 1 is all about quantitatively managing the project, which means the Project Manager must periodically review progress, performance, and risks using the PPMs to determine/predict if the project will meet its QPPOs. If not, then appropriate corrective actions must be taken to address the deficiencies in achieving the project’s QPPOs (QPM SP 1.4).

Statistical Management is covered by QPM SG 2. The intent is for the project to statistically manage those sub-processes that are critical to achieving the project’s QPPOs. There is only a small set of sub-processes that are critical to achieving the project’s QPPOs. If the organization were to statistically manage all processes, that would be insane. This approach would mean that the organization would need PPBs and PPMs for EVERY process, regardless of its importance to the QPPOs. And then the sheer overhead of collecting, analyzing, and reporting data on every process would most likely bring the organization to its knees. Work would come to a standstill because far too much time would be spent statistically managing each process and taking corrective actions that may not be necessary. That is why, unless you have an infinite budget and dedicated staff to perform these analyses, the model states to statistically manage SELECTED sub-processes.