Friday, June 13, 2008

The CMMI, A Short History

Mike Konrad, who has been involved with both the CMM and CMMI at the SEI, recently posted a short history of the two models on the Yahoo CMMI discussion group. It is a great summary and deserves a home on this blog.

I will frame my answer from this perspective: I joined the SEI in February 1988 and contributed to the development of both the Software CMM (mostly to SW-CMM V1.0, less so to V1.1 and 2[A-C]) and CMMI (team lead or co-lead for all DEV versions; a team member for ACQ and now SVC).

Relative to the PPM (approximately 1988-1991), I led the very small empirical team that analyzed those questionnaires from 1988-1990; the team included Manuel Lombardero and Alyson Gabbard Wilson, both statisticians and graduate students at CMU at the time. We conducted various item analyses and sponsored an independent study (under the supervision of Jane Siegel) by AIR (the American Institutes for Research in the Behavioral Sciences). Those studies examined construct validity and internal consistency, but much less so predictive validity (whether retrospective or concurrent).

Where did the data for these analyses come from? Many of the early (pre-Software CMM) appraisals were conducted by the SEI, under the leadership of Watts Humphrey, Dave Kitson, and Tim Kasse in the early years, and those appraisals supplied the data for the analyses mentioned above.

Some of us (though not me) visited the part of IBM in Houston responsible for developing the onboard software for the Space Shuttle (now part of United Space Alliance). It was an eye-opener at the time as to what was possible in coordinating the development and maintenance of large software systems to produce high-quality operational software.

These studies and visits influenced the early content of the Software CMM, developed under the leadership of Bill Curtis, who had a simple but powerful vision: make the model behind the questionnaires explicit, easily accessible to the community, and change-request-based. That practice continued with CMMI.

Mike Bandor points to our online archive of early articles on CMMI - an excellent suggestion. Also, if you have a copy of the CMMI book handy ("CMMI [Second Edition]: Guidelines for Process Integration and Product Improvement"), you will find on pages 9-21 a series of brief "histories" of CMMI as told by Watts Humphrey, Mike Phillips, Bob Rassa, Roger Bate, and Bill Peterson - all of whom had a significant impact in guiding CMMI to what it is today (Watts less directly, but I include him because he helped start it all). Reading these will provide a good digest from multiple perspectives, though it omits the perspectives of other influential individuals, including Jack Ferguson.

In 1994-1995, some influential studies about the Software CMM were published (still available as technical reports at the SEI Web site): "After the Appraisal" (CMU/SEI-95-TR-009) and "Moving On Up" (CMU/SEI-95-TR-008) by Goldenson, Herbsleb, Zubrow, and Hayes. These studies did examine predictive validity, as well as enablers and disablers of successful process improvement. Personally, I consider these reports an important milestone in the maturation of our understanding of process improvement and the means by which to analyze it.

About CMMI:

In the early years (1998-2002), the approach to developing CMMI focused on properly integrating three related best-practice efforts: the Software CMM Version 2 Draft C, EIA 731, and the IPD CMM (as Mike Bandor points out). In those early years, we were striving hard (under Jack Ferguson's leadership) to bring together a variety of communities that had been separated by their different CMMs. Before V1.0 we did not quite have it right, but with V1.0 we finally seemed to have a version that could be common to multiple communities.

But the studies continued during those years. For example, a workshop on Technology Change Management reflected on the ML5 SW-CMM KPAs Technology Change Management and Process Change Management, and led us to reorganize those two KPAs in CMMI into the Organizational Innovation and Deployment process area that we have come to love :-) today. There were also at least two High Maturity Workshops (led by Mark Paulk and others) that brought together early adopters of the high maturity practices to try to better understand them.

In those early CMMI years, our empirical work looked at interpretation issues (arising from bringing multiple communities together) to try to ensure CMMI was usable and useful to a broad group - and, frankly, that we retained a large subset of early SW-CMM adopters. During this time, a number of us, especially Mike Phillips, took advantage of the fact that many conferences, workshops, and SPIN meetings would bring together practitioners who could influence or champion process improvement, and hosted forums and birds-of-a-feather events to elicit feedback on proposed directions for the next CMMI version. (A practice continued last year with our "Beyond V1.2" series.)

Two years ago, as part of our new focus on understanding high maturity, we again brought together expert practitioners, leaders, and champions of high maturity from a variety of organizations for insight and input on how the high maturity practices can be better implemented. Starting last March, Bob Stoddard and Dennis Goldenson have been leading what is planned to be a series of workshops looking at best practices in process performance modeling (invitation only in the initial stages, sorry).

Thus, while the focus of our studies and workshops has evolved somewhat over the years, they have continued. From our first focus on providing a diagnostically useful questionnaire (based on the PPM) to the current focus on better understanding high maturity, the work and support have grown. The models today embrace a larger set of business operations, now including acquisition and services, and reach across a larger region of the world.

But many of the principles we learned in the early days of the Software CMM persist to this day: models available online for free, a community-based change-request system to guide further evolution of the product suite, a diagnostic methodology that is a companion to the model, and engagement with and support of the broad community of practitioners through SPIN meetings, conferences, workshops, and the like (both directly by the SEI and through its many partners). And the empirical studies continue to this day, including the recent one on the effectiveness of systems engineering practices, two new ones on the state of the practice in measurement and analysis and in high maturity, and a new one on program performance.

Mike Konrad
