Showing posts with label process performance baselines.

Sunday, October 9, 2011

Is a Prediction Model Required for Maturity Level 4

Is it necessary to have a prediction model for L4 and L5?

I collect data from different projects, place them in a 3-sigma band, and take care of outliers.  Then I compare the mean and standard deviation of the current Process Performance Baseline (PPB) with the previous PPB.  I also check the trend coming out of this PPB against the LSL and USL set by the management team.  If the mean, standard deviation, LCL, and UCL decrease from the previous PPB, I update the management team and we review the projects on that basis.

But when I talk to people in my circle of friends, they talk about regression, simulation, Monte Carlo, etc.  In my organization we need to show process performance against set targets, so why make life so complex with all of the above-mentioned methods?



What I can glean from your question is that you have developed a Process Performance Baseline (PPB).  But I have no idea why you have done this or what value you are getting from knowing this PPB.  What use is it to your organization?  How does it help you meet your Quality and Process Performance Objectives (QPPOs) and your business goals?

If you are not interested in fully implementing ML 4 or ML 5, then I suppose you don’t need anything else as long as you are deriving some sort of benefit from this PPB.  However, for a complete High Maturity implementation, you are expected to do the proper statistical analysis of data distributions, probability statistics, process engineering, etc. to derive appropriate PPBs and PPMs.  And to be a PPM, instead of merely a forecast model, and to be of use for "what-if" analyses, a PPM must contain controllable factors that have an impact on the outcome.
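To make that "what-if" point concrete, here is a minimal Monte Carlo sketch of a PPM with one controllable factor. Everything in it — the peer-review-rate factor, the coefficients, the distributions — is hypothetical and invented purely for illustration; a real PPM would be calibrated from your own historical data.

```python
import random

def simulate_defect_density(review_rate, n_trials=10_000, seed=7):
    """Toy Monte Carlo PPM: predicted delivered defect density as a
    function of one controllable factor (peer-review hours per KSLOC).
    All distributions and coefficients here are hypothetical."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        injected = max(0.0, rng.gauss(20.0, 4.0))   # defects injected per KSLOC
        detection = min(0.95, 0.04 * review_rate)   # fraction caught in review
        results.append(injected * (1.0 - detection))
    results.sort()
    return sum(results) / n_trials, results[int(0.9 * n_trials)]  # mean, 90th pct

# "What-if" analysis: turn the controllable knob and compare outcomes.
mean_10, p90_10 = simulate_defect_density(review_rate=10)
mean_20, p90_20 = simulate_defect_density(review_rate=20)
```

Because the review rate is a factor the project can actually change, the model supports the kind of what-if question a simple trend chart cannot: how much would doubling review effort move the predicted outcome?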

Also, if you want to implement ML 4 and/or ML 5, then you need to have established organizational QPPOs, a number of PPBs that support the QPPOs and can be used to evaluate the feasibility of achieving them, and a set of Process Performance Models (PPMs) that are derived from your historical data that are used in conjunction with the PPBs to predict each project’s ability to meet its QPPOs, as well as a number of other activities.

Therefore, if all that you have is one PPB and nothing else, you only have a partial implementation of OPP and still have a lot of work ahead of you before you can consider that you have implemented ML 4, let alone ML 5.



Monday, August 4, 2008

Help Understanding Process Performance Models

To better understand the concept of a Process Performance Model (PPM), it helps to have a little background about both Process Performance Baselines (PPBs) and PPMs. Both are based on the historical process data that the organization has been collecting and analyzing. The CMMI definition of a PPB is a documented characterization of the actual results achieved by following a process, which is used as a benchmark for comparing actual process performance against expected process performance. In other words, the historical process data, when analyzed, will indicate how the process has behaved in the past and the expected range of the process performance. As an illustrative example, consider hurricanes. Based on over 100 years of weather data, we know where hurricanes usually form and where they travel, either up the east coast of the US or into the Caribbean and Gulf of Mexico.



These data define an envelope that contains the expected range of data based on historical information. By statistically analyzing these data we can compute the central line and a set of control limits for the type of control chart that is appropriate for the data being analyzed: a p, np, c, or u chart for attribute data, or an individuals (XmR) or X̄-R chart for variables data.
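As a small illustration (using an individuals chart for variables data, with made-up sample values), the central line and 3-sigma limits can be computed like this:

```python
def xmr_limits(samples):
    """Central line and 3-sigma control limits for an individuals (XmR)
    chart, estimating sigma from the average moving range (d2 = 1.128)."""
    center = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# Hypothetical process data, e.g. review rate per work product.
lcl, cl, ucl = xmr_limits([9.2, 10.1, 9.8, 10.5, 9.9, 10.3, 9.7, 10.0])
```

The three numbers returned are exactly the "envelope" described above: the center line and the limits within which the process is expected to perform if only common causes of variation are present.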

The CMMI definition of PPM is a description of the relationships among attributes of a process and its work products that are developed from historical process-performance data and calibrated using collected process and product measures from the project and that are used to predict results to be achieved by following a process. So by further analyzing the historical data, it is possible to derive PPMs that can be used to predict future process performance based on current process performance. The model may be as simple as merely extrapolating a straight line, or as sophisticated as running Monte Carlo simulations of the process if your model is based on a number of independent variables.

Going back to the hurricane analogy, there are a number of predictive hurricane models that the weather service uses to predict the ground track of the eye and where it might make landfall. Whereas the hurricane PPB is based on the statistics of ground track data from past hurricanes, the hurricane PPMs use that information plus additional parameters such as speed, barometric pressure, other nearby weather systems, high pressures, and fronts to predict the storm’s ground track. Every time there is a new set of storm data collected, the meteorologists revise their predictions by running the data through the PPMs. In addition, like the hurricane example, you may have more than one PPM because of the inherent complexity of the process you are trying to model. Sometimes the models agree fairly closely, as in the example shown. Other times there may be a big variance.
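A sketch of the simplest kind of PPM mentioned above — a straight line fitted to historical data and extrapolated forward. The effort and defect numbers below are invented for illustration only:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line: the simplest possible prediction model."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical history: peer-review effort (hours) vs. defects found.
slope, intercept = fit_line([2.0, 4.0, 6.0, 8.0], [5.0, 9.0, 13.0, 17.0])

def predict(effort_hours):
    """Predict defects found for a planned amount of review effort."""
    return slope * effort_hours + intercept
```

Real process data will rarely fall this neatly on a line, which is exactly when the more sophisticated techniques (multiple regression, simulation) mentioned above earn their keep.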


So, to answer your question, the PPB is the control chart: the central line and the two control limits, the UCL and LCL. When you start plotting the actual data on the control chart, you can use the PPM to predict whether the current data will violate the control limits in the future.

To gain a better understanding of a PPM you have to also understand what is meant by dependent and independent variables. In mathematical terms, an independent variable is an input to a function and a dependent variable is an output of the function. Thus if we have a function f(x), then x is the independent variable, and f(x) is the dependent variable. The dependent variable depends on the independent variables; hence the names.

In the design of experiments, independent variables are those whose values are controlled or selected by the person experimenting (the experimenter) to determine their relationship to an observed phenomenon (the dependent variable). In such an experiment, an attempt is made to find evidence that the values of the independent variable determine the values of the dependent variable (that which is being measured). The independent variable can be changed as required, and its values do not represent a problem requiring explanation in an analysis, but are taken simply as given. The dependent variable, on the other hand, usually cannot be directly controlled.

Controlled variables are also important to identify in experiments. They are the variables that are kept constant to prevent them from influencing the effect of the independent variable on the dependent variable. Every experiment has controlled variables, and it is necessary not to change them, or the results of the experiment won't be valid.

In other words:

The independent variable answers the question "What do I change?"
The dependent variable answers the question "What do I observe?"
The controlled variable answers the question "What do I keep the same?"

Back to the hurricane analogy, the independent variable is the storm’s current position. The dependent variable is the predicted ground track. The controlled variables are the current storm attributes that are assumed to remain constant for the next period of time between observations.
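To make the variable roles concrete in code, here is a toy forecast function: a deliberately crude flat-earth dead-reckoning sketch, not a real hurricane model.

```python
import math

def predicted_position(position, speed_kts, heading_deg, hours=6.0):
    """Dependent variable (the forecast position) computed from the
    independent variable (current position), with the controlled
    variables (speed, heading) held constant over the interval.
    A crude flat-earth dead-reckoning sketch, not a real storm model."""
    lat, lon = position
    dist_nm = speed_kts * hours                        # nautical miles travelled
    dlat = dist_nm * math.cos(math.radians(heading_deg)) / 60.0
    dlon = dist_nm * math.sin(math.radians(heading_deg)) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Storm at 25N 80W moving due north at 10 kts: six hours later, ~1 degree north.
new_lat, new_lon = predicted_position((25.0, -80.0), speed_kts=10.0, heading_deg=0.0)
```

The signature itself tells the story: what you feed in each cycle is the independent variable, what you hold fixed between observations are the controlled variables, and what comes out is the dependent variable.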

Therefore, a PPM is a predictive model based on one or more independent variables that can be used to predict an outcome (dependent variable). It is a mathematical model built using empirical process performance data.

In order to build a PPM, you have to use statistical and other analytic techniques to examine the historical data and construct a model that you can use to predict future states. Initially the PPM may not be very accurate, but each time you use a process you should incorporate the resulting process data into the model to improve its accuracy for its next use. A simple example is the estimation model used by the project manager. When first starting to use a project estimation model, the predicted results probably will not be very accurate. But by collecting the actual project results and incorporating them into the model, the ability to predict accurately will improve over time.
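A minimal sketch of that recalibration idea, using an invented one-factor effort estimator (the 10 hours-per-point seed value is purely hypothetical):

```python
class EffortEstimator:
    """Toy estimation model: effort = size * productivity factor,
    where the factor is recalibrated from each completed project."""

    def __init__(self, hours_per_point=10.0):
        self.hours_per_point = hours_per_point   # initial guess, not data
        self._history = []

    def estimate(self, size_points):
        return size_points * self.hours_per_point

    def recalibrate(self, size_points, actual_hours):
        """Fold a completed project's actuals back into the model."""
        self._history.append(actual_hours / size_points)
        self.hours_per_point = sum(self._history) / len(self._history)

est = EffortEstimator()
first = est.estimate(20)       # prediction from the seed factor
est.recalibrate(20, 260.0)     # project actually took 13 hours per point
second = est.estimate(20)      # prediction now reflects observed performance
```

Each completed project tightens the model's grip on reality, which is exactly the feedback loop described above.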

Monday, June 30, 2008

What is the Difference Between PPB & PCB?

Please explain the difference between Process Performance Baseline (PPB) and Process Capability Baseline (PCB) as OPP PA talks only about establishment of process performance baselines.

This is a good, but very easy question to answer. First, you should ALWAYS check the CMMI Glossary first before asking a question. Many times you will find the answer to your question in the Glossary. Here are the Glossary definitions for these two terms. I think that these definitions clearly point out the differences.

Process Capability – The range of expected results that can be achieved by following a process.
Process Performance Baseline – A documented characterization of the actual results achieved by following a process, which is used as a benchmark for comparing actual process performance against expected process performance.

Thursday, April 10, 2008

Help! I am Totally Lost When Interpreting the QPM Specific Practices

QPM SP 1.1 states “Establish and maintain the project’s quality and process-performance objectives.” When you are starting to implement the Maturity Level 4 PAs you won’t yet have a Process Capability Baseline; establishing one is ultimately the goal of being at ML 4 so the organization can then move to ML 5. One of the main thrusts of ML 4 is to analyze the process data looking for special causes of variation. Once those have been analyzed, it is possible to determine the process capability. For SP 1.1, the project’s quality and process-performance objectives (QPPOs) are set by management and are based, in part, on the organization’s objectives (OPP SP 1.3). The Process Performance Models (PPMs) are then used to determine if the QPPOs can be met. If not, the QPPOs should be appropriately adjusted.
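One way to picture "using the PPMs to determine if the QPPOs can be met": if a PPM's prediction can be summarized as a normal distribution, the probability of meeting a not-to-exceed QPPO is a simple CDF lookup. The numbers below are invented for illustration:

```python
from statistics import NormalDist

def prob_meeting_qppo(predicted_mean, predicted_sd, target_max):
    """If a PPM's prediction for an outcome (e.g. delivered defect
    density) is summarized as a normal distribution, the chance of
    meeting a 'no more than target_max' QPPO is a CDF lookup."""
    return NormalDist(mu=predicted_mean, sigma=predicted_sd).cdf(target_max)

# Hypothetical: the model predicts 4.0 +/- 1.5; the QPPO says at most 6.0.
p = prob_meeting_qppo(4.0, 1.5, 6.0)
```

If the resulting probability is unacceptably low, that is the signal to adjust either the process composition or, as the practice says, the QPPOs themselves.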

QPM SP 1.2 states "Select the sub-processes that compose the project's defined process based on historical stability and capability data." One of the key words in this practice statement is "historical". Sources of historical stability and capability data include the organization’s process performance baselines (PPBs) and PPMs (OPP SP 1.4 and SP 1.5). The intent of this practice is to tailor the organization’s set of standard processes (OSSP) so that the project’s processes will support the project’s QPPOs defined in SP 1.1.

QPM SP 1.3 states “Select the sub-processes of the project’s defined process that will be statistically managed.” The model does not say that these sub-processes MUST be statistically managed, but that they WILL be statistically managed. And this practice focuses on more than just selecting sub-processes. It also focuses on identifying the attributes of each sub-process that will be used to statistically manage the sub-process.

People appear to get confused with Maturity Level 4 (ML 4) when trying to understand QPM independently from the rest of the model. You cannot do that. You have to consider OPP and QPM together when looking at ML 4. OPP provides the analysis of the historical data to build the PPBs and PPMs, which are then used by the projects to help them appropriately tailor the OSSP to meet the project’s QPPOs. QPM SP 1.2 uses the word "compose", which may contribute to some of the confusion. Since "compose" is not in the CMMI Glossary, the dictionary definition applies. The Webster definition of compose is “To form by putting together two or more things or parts; to put together; to make up; to fashion.” So for this practice, compose means going to the OSSP, selecting the processes and sub-processes that will meet the QPPOs, and then applying the necessary tailoring criteria. What this practice implies is that the OSSP may have several different processes for each Process Area (PA) so the project manager can choose the most appropriate one when composing the project's defined process.

Another ML 4 concept that may cause confusion is the notion of Quantitative Management vs. Statistical Management. QPM SG 1 is all about quantitatively managing the project, which means the Project Manager must periodically review progress, performance, and risks using the PPMs to determine/predict if the project will meet its QPPOs. If not, then appropriate corrective actions must be taken to address the deficiencies in achieving the project’s QPPOs (QPM SP 1.4). Statistical Management is covered by QPM SG 2. The intent is for the project to statistically manage those sub-processes that are critical to achieving the project’s QPPOs. There is only a small set of sub-processes that are critical to achieving the project’s QPPOs. If the organization were to statistically manage all processes, that would be insane. This approach would mean that the organization would need PPBs and PPMs for EVERY process, regardless of their importance to the QPPOs. And then the sheer overhead of collecting, analyzing, and reporting data on every process would most likely bring the organization to its knees. Work would come to a standstill because it would take far too much time to statistically manage each process and to take corrective actions that may not even be necessary. That is why, unless you have an infinite budget and dedicated staff to perform these analyses, the model states to statistically manage SELECTED sub-processes.