Saturday, May 31, 2008

What is a Project?

When starting down the road to process improvement and implementing the CMMI, one of the critical items that must be clearly defined up front is what constitutes a project. Maturity Level 2 is all about stabilizing the project and developing realistic and accurate estimation models. And the problem many organizations encounter is appropriately defining a project so they are not overburdened by required processes.

The Project Management Institute (PMI) defines a project as a temporary endeavor undertaken to create a unique product or service. The CMMI defines a project as a managed set of interrelated resources that delivers one or more products to a customer or end user. Both definitions are correct and complement each other. Along with these definitions, projects have the following characteristics:
  • goals or objectives
  • starting date and ending date
  • identified deliverables
  • identified resources and budget
  • schedules and milestones
  • plan (who, what, when, how)
  • project manager
  • sponsor
  • stakeholders (other affected groups)

There are at least three basic project types:

  • new development
  • maintenance (bug fixes and enhancements)
  • operations

Organizations with new development projects usually have no problem defining a project. With maintenance, however, the organization often starts by defining an individual product change as a project, and a single change (especially a bug fix) frequently does not have all of the project attributes listed above. Therefore it is extremely important for the process implementation team to sit down for several hours to a day with a CMMI expert and talk through how the organization has organized its work in order to arrive at a definition of project for the organization. And there could be several different project types.

Another interesting situation arises when the organization is responsible for operating something (power plant, control center, Space Shuttle, etc.) for a customer. Depending on the type of operations needed, the organization may be performing systems engineering tasks and activities as well as some software engineering tasks and activities. However, when you examine an individual systems engineering or software engineering task or activity, it may not satisfy the project attributes listed above and it may not cover the entire product lifecycle. These tasks and activities could be considered sub-projects. So, just like in the maintenance environment, the organization should meet with a CMMI expert to talk through how work is performed within the organization to determine the proper definition of a project so the CMMI can be correctly applied.

Tuesday, May 27, 2008

CMMI Glossary - An Overlooked Resource

The CMMI Glossary defines the basic terms used in the CMMI, especially those terms that have a specific meaning in the context of the CMMI. Many of the definitions are for multiple-word terms. If the Glossary does not contain a definition, then the common usage or other dictionary definition is applicable.

The SEI's Introduction to CMMI class spends some time stressing the importance of the Glossary and the need to consult it whenever a question arises.

Despite all this, the CMMI Glossary, next to the Generic Goals and Generic Practices section, is probably the most overlooked part of the model. People tend to remain focused on reading the Process Areas rather than consulting the Glossary for specific definitions, and they also tend to apply their own definitions and interpretations, which can lead to improper implementations. Plus it doesn't help that the Glossary is at the back of the book.

For example, the "CMMI lawyers" can start to argue that documented policies and procedures are not required because there are no explicit words in the CMMI stating documented policies or procedures. What the "lawyers" are missing is the special CMMI meaning of the phrase "establish and maintain." This phrase is emphasized in the Intro to CMMI class and is defined in the Glossary as formulate, document, and use. So in GP 2.1 where it states "establish and maintain an organizational policy for ..." that means to think about the wording of the policy, talk about it and achieve consensus, write it down or document it, and then use it.

So, the lesson here is that whenever you have a question about the CMMI, the first place you should go is the Glossary to see if the term or phrase in question has been defined in the context of the CMMI. There is a lot of good information contained in the Glossary. Granted, it is not complete, so you may not always find the definition you are looking for. If you disagree with a Glossary definition and/or wish to add a definition, you can always write a Change Request to the Glossary and submit it to the SEI for consideration.

Thursday, May 22, 2008

Selection of Processes/Subprocesses

Kindly give your comments on the following queries:
1. Say, in OPP an organization has selected some process/subprocess (SP 1.1) for performance analysis. A critical criterion for selection of a process or subprocess is its historical stability. To ensure this, does the organization need to use statistical techniques right from the beginning of the data collection for that process/subprocess?
2. In QPM, do the projects have to select the subprocess from the processes/subprocesses identified in OPP, or do they select based on their project-specific objectives?

Let us first consider the OPP SP 1.1 question. How is it possible to know that a process, or sub-process for that matter, is stable without having already performed some sort of statistical analysis on the historical data? I think the actual question you may be asking is at what point it makes sense to begin using statistical techniques. Obviously, OPP SP 1.1 is where you want to be when you reach Maturity Level 4. However, it does take some time and thought before arriving here. So, as an organization begins its ML 4 journey implementing OPP SP 1.1, it may NOT have enough historical data to select the processes or sub-processes. Therefore, the organization must spend some time collecting and analyzing the data before it is smart enough to know whether it has a stable process or sub-process, or one that can be stabilized. The stabilization of process execution, as reflected in the resulting process data, provides the opportunity to apply more sophisticated analytical techniques, including statistical methods. So it is not necessary to "use statistical techniques right from the beginning of the data collection;" rather, statistical techniques can be applied once the process is consistently executed, as reflected by the resulting data stability. Therefore, the organization has to devote the effort to data analysis to determine if OPP SP 1.1 can be implemented.

QPM works in conjunction with OPP. The project’s quality and process performance objectives are derived from the organization’s quality and process performance objectives. And the processes and sub-processes being quantitatively and statistically managed by the project are selected from those specified in OPP. If not, what would be the value of OPP SP 1.1? OPP sets up the infrastructure to allow the projects to perform QPM. However, that being said, a project may find that it has unique characteristics or customer objectives that warrant the statistical management of one or more subprocesses for which no process performance baseline has yet been established. In such a case, the project would select that subprocess for statistical management in QPM SG2, establish spec limits for the expected process performance, and use that as its foundation for quantitative management. Once sufficient process data have been accumulated by the project, the actual Upper Control Limit (UCL) and Lower Control Limit (LCL) can be established and used (and process stability and process capability can be verified or denied). Note that this information should be fed back to OPP for potential use on future projects.
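To make the UCL/LCL discussion concrete, here is a minimal sketch of how control limits might be derived from accumulated subprocess data using an individuals (XmR) chart. This is just one common statistical process control technique; the CMMI does not mandate a specific chart type, and the function name and data values below are hypothetical.

```python
def xmr_limits(samples):
    """Return (lcl, mean, ucl) for an individuals (XmR) control chart."""
    n = len(samples)
    mean = sum(samples) / n
    # Average moving range between consecutive observations
    mr_bar = sum(abs(samples[i] - samples[i - 1]) for i in range(1, n)) / (n - 1)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for subgroups of 2)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# e.g. defect-density observations from successive peer reviews (hypothetical)
data = [4.2, 3.8, 5.1, 4.6, 4.0, 4.4, 3.9, 4.8]
lcl, mean, ucl = xmr_limits(data)
print(f"LCL={lcl:.2f} mean={mean:.2f} UCL={ucl:.2f}")
```

Until enough project data exist to compute limits like these, the project would manage against the spec limits mentioned above and replace them with computed control limits once the data support it.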

Wednesday, May 21, 2008

Reviewing Process Status with Senior Management

When a manager reviews the work products of a process, is it not effectively the same as reviewing the process? The model can't expect status reports to senior management to have a list of 18 or 22 process areas to be reviewed with senior management on a regular basis. Project managers I work with could see no possible value in this and their managers/supervisors won't listen to it. And it's not because they don't want to support process improvement.

It sounds like you are having difficulty understanding the intent of GP 2.10 “Review the activities, status, and results of the process with higher level management and resolve issues.”

First off, I need to ask you why is a manager reviewing the process work products? The model doesn’t require that managers review process work products. However, it sounds like your processes may have this requirement.

GP 2.9 “Objectively evaluate adherence of the process against its process description, standards, and procedures, and address noncompliance” does call for objectively evaluating selected work products, but this is a PPQA responsibility. And the intent is to determine if the work product meets the requirements of your documented processes and applicable standards.

There is a huge difference between reviewing a process work product and reviewing the process. Just because a work product exists does not mean that the documented process was followed for producing the work product. For example, your design process states that you document the customer requirements first, then you derive the functional requirements from the customer requirements and create a functional requirements spec. Then you create a design based on the functional requirements and write a design specification. So you would expect to find these three documents in the project folder at the end of the project.

Scenario 1) The project follows the documented procedures and produces the documents in the order described and per the procedures.
Scenario 2) The project doesn’t follow the documented procedures and launches immediately into design, and when preparing to close out the project, they rapidly create these three documents.

The work products exist in both scenarios, but they have value in Scenario 1 and add no value in Scenario 2, where they could be viewed as a waste of time to create. In addition, for Scenario 2 the product documentation was written to match the design, when in fact the product was supposed to be designed per the written documentation.

So, simply reviewing a document will NOT be the same as reviewing the process.

Now to get back to GP 2.10, the intent of this GP is to provide higher management with appropriate visibility into the process. The reviews called for by GP 2.10 are for managers who provide the policy and overall guidance for the process, not for those who perform the direct day-to-day monitoring and controlling of the process (which is the intent of GP 2.8). The intent of GP 2.10 is to review the process results, which is typically an aggregate of the results across projects, not the detailed process results on each project. And yes, the expectation is that higher management reviews the process results for ALL Process Areas (PAs) in scope; 7 PAs for Maturity Level 2, 18 PAs for Maturity Level 3, 20 PAs for Maturity Level 4, and 22 PAs for Maturity Level 5. Why else would this be a Generic Practice? But keep in mind how you have implemented these PAs. Most likely your implementation has combined some PAs into a single process and you may also have multiple processes for a single PA. The intent of GP 2.10 is to review your processes. When you prepare for an appraisal you simply have to demonstrate that you have covered each PA with your process reviews.

From a practical standpoint, it is tedious to review the complete set of PAs in scope at a single sitting, especially for processes that do not occur often and may have nothing new to review. It also depends on the frequency of your process reviews. The model does not specify a frequency; it has to be set by the organization. You can also make these reviews event-driven. One approach that I have seen work at several clients is to hold quarterly process review meetings with senior management and only review a subset of the PAs at each review. There is an overall plan for reviewing the PAs; for example, it may take a year to complete the PA set, but each quarterly review only examines ¼ of the PAs. That eases the burden on everyone. Then if a process issue arises and that PA is not due to be reviewed for 9 months, a special event-driven review for that PA is held.

Tuesday, May 20, 2008

What is the Difference Between the SEPG and a PAT?

There can be several different groups responsible for process improvement within an organization. At the top level there is the need for someone to provide strategic direction for the process improvement efforts, removing obstacles to process improvement, providing funding for process improvement, and management oversight of the effort. This function is sometimes performed by the Management Steering Group (MSG).

The next level down is responsible for planning and executing the organization's process improvement initiatives. This role is what we usually see fulfilled by the Software Engineering Process Group (SEPG) or the Engineering Process Group (EPG). If there is something like the MSG, then the SEPG reports to the MSG and sometimes the SEPG chair is a member of the MSG.

The third level is responsible for establishing the process and process assets if they don't exist, or improving existing processes and process assets. This role is many times done by the Process Action Teams (PATs). Sometimes the PATs are permanent teams, or there is so much work for them to do that their life span is quite long. Ideally PATs should be small temporary teams consisting of experts that are assigned a specific process improvement task. The PAT activities may also include piloting a candidate process improvement, developing or updating the process training material, and deploying the processes and process assets to the organization, but these activities could also be done by the SEPG.

The SEPG should be orchestrating all of the process improvement efforts and charter the PATs with their assignments. The PATs report to the SEPG and many times the PAT lead is a member of the SEPG. SEPG and PAT membership should include people who are:
  • Genuinely interested in and motivated to improve the organization's processes.
  • Able to effectively communicate with their peers and management.
  • Respected by their peers and management and considered credible.
  • Experts in one or more of the organization's processes (engineering, project management, process management, and support).
  • Etc.

A good practice that I have seen in a number of organizations is, once the SEPG has been established and has been operating for a year, to rotate the SEPG membership among other interested parties in the organization, as well as to periodically change the SEPG Chair. This rotation of responsibilities allows others in the organization to participate and aids with greater buy-in throughout the organization.

This is only one way to structure the organization for process improvement. Many times the lines are blurred between the MSG/SEPG/PATs and one group may be performing all of the roles described above. Jeff Dalton's Ask the CMMI Appraiser blog describes 4 or 5 different SEPG configurations that work.

Monday, May 19, 2008

What Does GP 2.8 Monitor and Control the Process Mean?

In my experience as a Lead Appraiser I find that most organizations struggle with understanding the intent of this Generic Practice. I believe that part of the confusion with this practice is that people may read the practice title instead of the practice.

"Monitor and control the process against the plan for performing the process and take appropriate corrective action."

The intent of GP 2.8 is not just to monitor and control the process, as indicated by the title, but to directly monitor and control the process against the plan for the process (GP 2.2) as you PERFORM the process. Many organizations think that checking some process measures on a monthly or quarterly basis, after the fact, meets the intent of GP 2.8. The frequency of monitoring and controlling the process depends on how often you perform the process. Some processes occur weekly and others may occur only once or twice during the project's lifecycle. So, once again, the monitoring and controlling of the process must occur as you execute the process. Keep in mind that reporting the results of the monitoring and controlling effort can happen on a monthly or quarterly basis.
The other misconception about GP 2.8 is that you can read the practice and interpret the words to mean that you can monitor and control the process without using data. That is true if you just read the words of the practice title. However, the practice states that the monitoring and controlling of the process is based on the plan for performing the process. Since you are using a "baseline" for comparison, the implication is that you have some means for measuring and determining if there is a need for taking appropriate corrective action. In fact, the GP 2.8 discussion in the CMMI refers to the Measurement and Analysis (MA) Process Area for more information about measurements. So the intent of GP 2.8 is to use MA to define one or more appropriate process measures and indicators that will be used to measure the process' performance against its plan, as the process is being executed. The GP 2.8 sub-practices communicate the intent of this Generic Practice. Taking corrective action should not happen because an arbitrary threshold was exceeded; instead, the criteria for taking corrective action should be when the process requirements and/or objectives are not being met, process issues have been identified, or process progress deviates significantly from the plan for the process.

Friday, May 16, 2008

What is the Difference Between OPF SP 3.1 and SP 3.2?

There are some fine shades of distinction here between these two Specific Practices that can cause some confusion. Both Specific Practices concern deployment. OPF SP 3.1 covers the deployment of process assets and OPF SP 3.2 covers the deployment of the standard processes. The CMMI defines process asset as “Anything that the organization considers useful in attaining the goals of a process area.” And organizational process assets as “Artifacts that relate to describing, implementing, and improving processes.” In other words, process assets are those things that help or enable you to follow the process. Process assets include the policies, measurements, process descriptions, templates, checklists, etc. It is always best to consult the CMMI Glossary for definitions when you have interpretation questions about the CMMI. There is a lot of helpful information contained in those pages.

So, OPF SP 3.1 is concerned with deploying the process assets (new or changed) across the organization. For example, deploying a new or modified template keeping in mind that the associated process may not have changed, just the template. OPF 3.2 is concerned with deploying new or changed processes across the organization. For example, deploying a new or modified Peer Review process, keeping in mind that the associated process assets may not have changed. The model is splitting these two practices apart for clarity because it is possible, as my examples indicate, to perform them independently. Now, from a practical standpoint, most organizations do these two practices together. When beginning process improvement, organizations usually modify BOTH the process and the associated process assets at the same time. As the organization matures, they may be able to modify a process asset without a corresponding process change and vice versa.

And if you remember back to when you took the Intro to CMMI class, your instructor should have emphasized that there is no implied flow from one Specific Practice to the next. They can occur in any combination or order, just as long as the Specific Goals are satisfied. However, in the Engineering PAs there are certain practices that most everyone performs in a certain order.

Thursday, May 15, 2008

Process Documentation

There are two basic audiences for process documentation:
  1. Process Engineers (those who define and document the processes) and
  2. Practitioners (those who have to follow the processes).
The process engineers are very interested in the overall process architecture, inputs/outputs, interfaces, etc. In contrast, the practitioners simply want to know exactly what they have to do in order to get their jobs done. When you read the purpose of Organizational Process Definition (OPD), one of the key messages is establishing and maintaining a USABLE set of processes and process assets. Therefore, in order for your processes and process assets to be usable by the practitioners, it doesn’t help to give them all of the process architecture, inputs/outputs, interfaces, etc. that the process engineers need and want. The simplest approach that I have seen for the practitioners is to provide a “swim lane” process flow chart. Then it is very easy for the practitioner to see where they fit into the process. Also providing good, as well as bad, examples of how a template or checklist should be filled out is a good idea.

But keep in mind, that an approach that works in one organization for achieving “buy-in” may not work in another. You really need to work with the organization and jointly determine the best method for the organization. If most of the practitioners, for example, do not relate to visual process flows, then the “swim lane” approach may not work.

Wednesday, May 14, 2008

PPQA Audit Frequency

Organizations that have no history with performing Process and Product Quality Assurance (PPQA) audits usually ask me how often they should be auditing their processes and work products. The correct answer is "it depends," but that is usually not satisfactory. The frequency of audits depends upon the nature and severity of the quality issues associated with following the organization's processes. If there are minor findings or quality issues, then the audits don't have to occur very often, maybe only once a year. But if there are major findings or issues, then they should occur at a higher rate until the issues go away and the processes stabilize.

Yesterday Pat O'Toole posted a message on the CMMI discussion group that takes this approach one step further.

When consulting with a client on PPQA Pat suggests that PPQA use a "compliance scale" similar to that used in a SCAMPI appraisal: Fully Compliant, Largely Compliant, Partially Compliant, and Not Compliant.

This approach avoids the game playing of "just doing enough to get a 'Yes' in the audit." It also allows for a finer grading of compliance metrics and trends, and turns the audit feedback sessions into more of an internal consulting discussion than merely a "did we pass or not" exercise.

To "score" an audit, award 100 points for Fully Compliant, 75 points for Largely Compliant, 25 points for Partially Compliant, and 0 points for Not Compliant. Average the score over all of the audit items and you get the score for that particular audit.

You can average the scores of all PPQA audits conducted on a particular project to get the project-level compliance score. Hopefully you find that there is a positive correlation between projects with high compliance scores and the "success" of the project. (If there is a negative correlation you have serious cause for concern!)
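The scoring scheme above is simple enough to sketch in a few lines. This is just an illustration of the arithmetic Pat describes; the rating abbreviations and sample audit data are hypothetical.

```python
# Point values for the SCAMPI-style compliance scale described above
POINTS = {"FC": 100, "LC": 75, "PC": 25, "NC": 0}  # Fully/Largely/Partially/Not Compliant

def audit_score(ratings):
    """Average the point values of the ratings for one audit."""
    return sum(POINTS[r] for r in ratings) / len(ratings)

def project_score(audits):
    """Average the scores of all PPQA audits on a project."""
    return sum(audit_score(a) for a in audits) / len(audits)

# Two hypothetical audits on one project
audit1 = ["FC", "FC", "LC", "PC"]   # (100 + 100 + 75 + 25) / 4 = 75.0
audit2 = ["FC", "LC", "LC", "FC"]   # (100 + 75 + 75 + 100) / 4 = 87.5
print(project_score([audit1, audit2]))  # 81.25
```

The same averaging can be applied per audit item across projects and time, which is what makes the database (or spreadsheet) of item-level scores described below useful.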

I also recommend that you maintain a database (or Excel spreadsheet) with the audit items and their scores across projects and time. You can use the same scoring mechanism described above to show the average score for each audit item.

Audit items that average 90+ for 3 months are candidates for sampling - people appear to "get it" for these items. Audit items that average below some minimum threshold (60?) are probably candidates for reworking the process infrastructure - whatever you've provided isn't being used anyway, so perhaps it's time to give them something that they CAN use (and/or DO find value added).

Pat's quantitative approach makes it very clear which processes and/or projects need to be audited more frequently than others. So when a process or project scores above 90% (or so), you can reduce the audit frequency for that process or project. The default audit frequency needs to be set by the organization. Auditing once a month may be too frequent for some organizations and just right for others. The frequency should match the normal durations of your project lifecycles. Assuming a monthly frequency, if the audit score is 90+%, then the frequency for that audit can go to bi-monthly. If the next time that particular audit again achieves 90+%, the audit can go to a six-month cycle. If, on the other hand, the score drops below 90, then the audit frequency should drop back to the previous frequency. Now you have a variable audit frequency that you can tie directly to the audit results. Pretty cool!
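That variable-frequency rule can be expressed as a small ladder: a passing score moves the audit one step toward a more relaxed frequency, and a failing score moves it one step back. The 90-point threshold and the monthly/bi-monthly/six-month ladder come from the discussion above; the function name is illustrative.

```python
# Frequency ladder, from tightest to most relaxed
FREQUENCIES = ["monthly", "bi-monthly", "six-month"]

def next_frequency(current, score, threshold=90):
    """Relax the audit frequency one step on a passing score, tighten it otherwise."""
    i = FREQUENCIES.index(current)
    if score >= threshold:
        return FREQUENCIES[min(i + 1, len(FREQUENCIES) - 1)]
    return FREQUENCIES[max(i - 1, 0)]

print(next_frequency("monthly", 93))      # passing score: relax to bi-monthly
print(next_frequency("bi-monthly", 95))   # passing again: relax to six-month
print(next_frequency("six-month", 82))    # failing score: tighten to bi-monthly
```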

Tuesday, May 13, 2008

What is mandatory to have about a WBS? What must a top-level WBS contain?

The CMMI defines Work Breakdown Structure (WBS) as an arrangement of work elements and their relationship to each other and to the end product. What this definition means is that the WBS is basically a list of all of the tasks that must be performed from the start of the project to its conclusion. So typically the WBS includes the tasks associated with:

  1. Managing the project
  2. Managing the configuration(s) and configuration items
  3. Managing the requirements
  4. Managing the software engineering activities of requirements engineering, design, development, test, build, and delivery
  5. Managing the suppliers, if there are any
  6. And perhaps others that I may have overlooked

This list of tasks/activities applies to any type of development, from legacy waterfall to Agile; just the specific details will differ. Now, the WBS appears in only four locations in the CMMI-DEV: PP, PMC, IPM, and RSKM.

PP SP 1.1 addresses the top-level WBS that you use to estimate the scope of the project. The intent of this practice is not to develop a detailed WBS at this point, but to start with a somewhat “generic”, for lack of a better term, WBS that applies to all similar projects in the organization that the Project Manager can use to structure his or her initial estimation efforts. Then over the life of the project, as the PM and team gain knowledge about the project, the WBS becomes more detailed. In some organizations, there is confusion about the term WBS because if the organization is under contract to a customer, the contract may include a WBS. Depending on the size and scope of the project, the contract WBS may not be the same as the WBS for the project.

PMC Introductory Notes refers to the WBS in the context of tracking project progress per the project schedule or WBS.

IPM SP 1.4 sub-practice 7 refers to the WBS in the context of having very tight control of the initiation and completion of the tasks described in the WBS.

RSKM SP 2.1 Hint refers to the WBS as a source for identifying risks.

So the bottom line, is that the most guidance the CMMI gives for the WBS is in PP. There are no specific requirements for what a top-level WBS must contain. What you should do is structure the WBS based on the product architecture, application domain, and methodology. To keep the WBS at the right level, identify groups of related activities that are usually performed together. Each activity group should be defined in sufficient detail so it can be reasonably estimated, have responsibilities assigned, and placed on the project schedule. And finally each activity group should have its outputs/work products clearly defined. If you cannot define one or more work products for an activity group, then you probably have not grouped the activities correctly. And as you grow in knowledge and maturity in using the WBS, you may evolve your WBS so that the tasks and activities it contains are networked (predecessor/successor relationships) to enable critical path analysis.

Monday, May 12, 2008

CMMI Updates from the SEI

Last week I participated in the workshop at the Software Engineering Institute in Pittsburgh to develop questions for the Lead Appraiser certification exam that is planned to be administered for the first time in October 2008. To kick off the workshop, Mike Phillips, Program Manager of the CMMI, gave us the latest information on a number of topics that I want to summarize for you here.
  1. The purpose of this workshop was to help increase the professional aspect of the Lead Appraiser profession and it represents a maturation of the profession.
  2. The first opportunity to use the Lead Appraiser test will be at the Lead Appraiser Workshop in Vancouver, WA in October 2008. All Lead Appraisers must take and pass the exam and there will be a one-year window for taking the exam.
  3. The release of CMMI v1.3 will not be very long after the release of CMMI-SVC constellation, which is currently planned for March 2009. The SEI is trying to get it out sooner, possibly as early as January 2009.
  4. v1.3 will include a number of changes resulting from developing CMMI-ACQ and CMMI-SVC. There are two IPPD practices for ACQ, one in OPD and one in IPM, that are now mandatory. The SEI also wants to include the updated High Maturity material in this release. The intent is to bring all three constellations into a greater harmony. The plan is to release v1.3 by the end of 2009.
  5. The strategy for the Introduction to the CMMI class will also be changing somewhat. At some unspecified time in the future, the new class will consist of a 3-day generic course applicable to any constellation, and then 1-day supplemental classes for each constellation. The generic class is expected to be for everyone and the supplemental classes will be for appraisal team members only.
  6. Someone in the audience asked Mike Phillips if the SEI is going to consider hardware engineering as a separate discipline. His answer was that the SEI is not trying to differentiate hardware engineering at this time. They are backing away from discipline-specific distinctions.
  7. Mike Phillips said that there are ongoing discussions on future constellations. Possibly one for manufacturing and another for operations. But these constellations, if they were to materialize, are way off into the future.
  8. When v1.3 is released near December 2009, the SEI will issue three TRs, one for each constellation.
  9. v1.3 was approved as an idea by the SEI Steering Group one month ago. The next steps are in work, but it is still too early in the process to be definitive.

Friday, May 2, 2008

Query on ML 3 and ML 4

What is the significance of Maturity Level 3 and Maturity Level 4? And can you explain to me what is Integrated Project Management?

What broad questions! These questions really need a long, in-depth answer and are addressed very well in the Introduction to CMMI class. So first off I would suggest that you find an opportunity to take this class. Please visit the SEI website for more information about the class.

To briefly answer these two questions, the answer needs to address ML 2 as well. So I will start with some definitions from the CMMI book.
Process Area (PA) – a cluster of related practices in an area that, when implemented collectively, satisfy a set of goals considered important for making improvement in that area.
Maturity Level (ML) – degree of process improvement across a predefined set of process areas in which all goals in the set are attained. An ML is a defined evolutionary plateau for organization process improvement. Each ML matures an important subset of the organization’s processes, preparing it to move to the next ML.
Maturity Level 1: Initial – processes are usually ad hoc and chaotic. The organization usually does not provide a stable environment to support the process. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes.
Maturity Level 2: Managed – projects of the organization have ensured that processes are planned and executed in accordance with policy; the projects employ skilled people who have adequate resources to produce controlled outputs; involve relevant stakeholders; are monitored, controlled, and reviewed; and are evaluated for adherence to their process descriptions. The process discipline reflected by ML 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
Maturity Level 3: Defined – processes are well characterized and understood, and are described in standards, procedures, tools, and methods. The organization’s set of standard processes, which is the basis for ML 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined process by tailoring the organization’s set of standard processes according to tailoring guidelines.
Maturity Level 4: Quantitatively Managed – the organization and projects establish quantitative objectives for quality and process performance and use them as criteria in managing processes. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance is understood in statistical terms and is managed throughout the life of the processes.
Maturity Level 5: Optimizing – an organization continually improves its processes based on a quantitative understanding of the common causes of variation inherent in processes.

Given these definitions and explanations, one of the fundamental differences between ML 3 and ML 4 is this: at ML 3 the organization is learning how to use a standard set of processes, tailoring them to individual project needs, and collecting enough process data that Process Performance Baselines and Process Performance Models can be built. At ML 4 those baselines and models are used to quantitatively manage projects and statistically manage sub-processes to achieve the organization’s quality and process performance objectives.
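As an illustration of what "understood in statistical terms" can mean in practice, here is a minimal sketch of deriving a Process Performance Baseline from historical data: a mean and 3-sigma control limits for a measure such as defect density. The data values, the chosen measure, and the function names are all illustrative assumptions, not anything prescribed by the CMMI.

```python
# Sketch of a Process Performance Baseline (PPB). The defect-density
# figures (defects per KLOC) are hypothetical historical data from
# completed projects.
import statistics

defect_density = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7]

mean = statistics.mean(defect_density)
sigma = statistics.stdev(defect_density)  # sample standard deviation

baseline = {
    "mean": round(mean, 2),
    "upper_control_limit": round(mean + 3 * sigma, 2),
    "lower_control_limit": round(max(0.0, mean - 3 * sigma), 2),
}

def is_stable(observed: float) -> bool:
    """A new measurement is 'in control' if it falls within the
    baseline's control limits; an out-of-control value would trigger
    causal analysis at ML 4."""
    return (baseline["lower_control_limit"]
            <= observed
            <= baseline["upper_control_limit"])

print(baseline)
print(is_stable(2.5))
```

A real PPB would be built per sub-process, from far more data points, and revisited as the organization's processes change; this sketch only shows the basic statistical idea.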

To answer the second question, you first need to understand the Project Planning (PP), Project Monitoring and Control (PMC), and Integrated Project Management (IPM) PAs. PP and PMC are ML 2 PAs that address the basic project management practices of planning a project, creating a project plan, and using that plan to track and monitor the project. At ML 2, the organization is typically learning how to create accurate and realistic estimates by building estimation models. It takes time to refine these models, so an ML 2 organization is expected to frequently revise and re-baseline the project plan as the projects get smarter about estimation.

At ML 3, one of the project management expectations is that the project estimates are now accurate and realistic. So, rather than constantly updating the estimates to match the actuals as is done at ML 2, the Project Manager now manages the project to the estimates. This means the PM can fairly accurately predict, early in the lifecycle, whether or not the project will hit its downstream targets and can take appropriate corrective action to mitigate those risks.

The other differences between IPM and PP/PMC include establishing the project’s defined process by applying appropriate tailoring criteria to the organization’s standard processes, establishing the project’s work environment, integrating the various plans that comprise the project plan, managing the project using the integrated plans, and managing the project’s relevant stakeholders. In other words, IPM builds on the project management foundation established by PP and PMC.
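To make the estimation-model idea concrete, here is a minimal sketch of the kind of calibrated model an ML 2 organization might build: an ordinary least-squares fit of effort against size over historical project data. The data points, units (function points and person-hours), and function names are illustrative assumptions, not part of the CMMI.

```python
# Hedged sketch of a simple effort-estimation model calibrated from
# hypothetical historical projects: (size in function points, actual
# effort in person-hours).
history = [
    (120, 980), (200, 1650), (80, 700), (150, 1240), (260, 2100),
]

n = len(history)
sum_x = sum(s for s, _ in history)
sum_y = sum(e for _, e in history)
sum_xy = sum(s * e for s, e in history)
sum_xx = sum(s * s for s, _ in history)

# Least-squares line: effort = slope * size + intercept
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

def estimate_effort(size_fp: float) -> float:
    """Predicted effort for a new project of the given size."""
    return slope * size_fp + intercept

print(round(estimate_effort(180)))
```

As projects complete, their actuals feed back into `history` and the model is re-fit; that recalibration loop is what lets an organization move from frequent re-baselining at ML 2 toward managing to its estimates at ML 3.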

This is a lengthy explanation but only a surface treatment of these subjects. Again, I strongly recommend to anyone interested in this topic that you attend an offering of the Introduction to CMMI v1.2 class. You will go into these concepts in much greater detail and you will come out with a much better understanding of the model, PAs, and MLs than I can convey in this blog.

Thursday, May 1, 2008

Management Commitment

I am continually struck by the misunderstanding of management's role in process improvement in any size organization. Many times all the senior manager feels they have to do is issue a directive to achieve a Maturity Level by some date. That is the extent of their involvement; they expect the Maturity Level to simply happen because they said so. To make matters worse, this same senior manager does not understand why the people they put in charge of implementing the process improvement program can't make things happen faster, even while setting capricious and arbitrary deadlines without asking the process team whether the new dates are achievable.

Managers who are committed to correctly performing process improvement both "talk the talk" and "walk the walk". They understand the importance of:
  1. setting realistic process improvement goals
  2. providing the necessary support to the process team
  3. addressing process improvement challenges and removing obstacles
  4. instilling the process improvement mindset throughout the management structure
  5. encouraging process improvement suggestions
  6. being a process improvement advocate

Unless senior managers actively support their process improvement goals and expect the same behavior from the rest of management, those goals are at serious risk. Management has to become engaged, not simply expect that things will happen just because they spoke.