I came to know that these techniques (listed below) were used in the SEI's case studies for high maturity. Does it mean that ALL organisations are expected to use these when operating or appraised at ML 5? This looks quite exhaustive, please advise.
- ANOVA
- Chi-Square
- Regression
- Logistic Regression
- Dummy Variable Regression
- Bayesian Belief Network
- Designed Experiments
- Discrete Event Simulation
- Reliability Growth Modeling
- Response Surface Modeling
- Time Series Analysis
- Hypothesis Testing
- Logit
- Monte Carlo Simulation
- Optimization
No, neither the CMMI nor the SEI expects that all organizations must use these techniques. This is a list of example techniques. However, it is expected that High Maturity organizations use these types of techniques. The organization has to determine which technique(s) work best for them to understand the special and common causes of variation in their processes. The organization has to build Process Performance Baselines (PPBs) and Process Performance Models (PPMs), and to build these it has to analyze its historical data. That analysis can be done in many different ways, and the list of techniques above includes some of the typical data analysis methods.
So what would be very helpful for you is to either get some training on data analysis and statistical techniques or hire a statistician to help you with your High Maturity efforts. Then you will figure out which quantitative analysis technique(s) are appropriate for your data and your organization.
And, to quote Pat O'Toole: "High maturity is NOT just about statistical techniques. Rather, it is about performing your critical processes so consistently that the information to be gleaned from the use of these techniques contain more signal than noise. You can use the data streaming off the critical processes to detect abnormal performance, and to predict (in a statistical sense) future outcomes of interest."
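For readers who want something concrete, here is a minimal sketch, in Python with made-up defect-density numbers, of the kind of analysis that sits behind a simple process performance baseline: derive the baseline from historical data and flag observations that look like special-cause variation. This is only an illustration; a real PPB or PPM would be built by someone with statistical training, using your own data and tools.

```python
# Minimal sketch (hypothetical data): derive a simple process performance
# baseline from historical data and flag likely special-cause variation.
from statistics import mean, stdev

# Hypothetical defect densities (defects per KLOC) from past projects.
historical = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.0, 2.1]

baseline_mean = mean(historical)
baseline_sd = stdev(historical)

# Conventional 3-sigma control limits: points outside them are candidates
# for special-cause investigation; points inside reflect common-cause variation.
ucl = baseline_mean + 3 * baseline_sd
lcl = max(0.0, baseline_mean - 3 * baseline_sd)

new_observation = 3.4  # hypothetical measurement from a current project
if new_observation > ucl or new_observation < lcl:
    print(f"{new_observation} is outside [{lcl:.2f}, {ucl:.2f}]: investigate special cause")
else:
    print(f"{new_observation} is within [{lcl:.2f}, {ucl:.2f}]: common-cause variation")
```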
Organizations that have no history of performing Process and Product Quality Assurance (PPQA) audits usually ask me how often they should be auditing their processes and work products. The correct answer is "it depends," but that is usually not satisfactory. The frequency of audits depends upon the nature and severity of the quality issues associated with following the organization's processes. If there are only minor findings or quality issues, then the audits don't have to occur very often, maybe only once a year. But if there are major findings or issues, then audits should occur at a higher rate until the issues go away and the processes stabilize. Yesterday Pat O'Toole posted a message on the CMMI discussion group that takes this approach one step further. When consulting with a client on PPQA, Pat suggests that PPQA use a "compliance scale" similar to that used in a SCAMPI appraisal: Fully Compliant, Largely Compliant, Partially Compliant, and Not Compliant.
This approach avoids the game playing of "just doing enough to get a 'Yes' in the audit." It also allows for a finer grading of compliance metrics and trends. And it turns the audit feedback sessions into more of an internal consulting discussion than merely a "did we pass or not" exercise.
To "score" an audit, award 100 points for Fully Compliant, 75 points for Largely Compliant, 25 points for Partially Compliant, and 0 points for Not Compliant. Average the score over all of the audit items and you get the score for that particular audit.
You can average the scores of all PPQA audits conducted on a particular project to get the project-level compliance score. Hopefully you will find that there is a positive correlation between projects with high compliance scores and the "success" of those projects. (If there is a negative correlation, you have serious cause for concern!)
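Here is a minimal sketch, with made-up project data, of that roll-up and correlation check. It uses statistics.correlation, which is available in Python 3.10+; the "success" measure is just a stand-in for whatever your organization tracks.

```python
# Hypothetical illustration: average each project's audit scores, then check
# whether compliance correlates with a chosen project "success" measure.
from statistics import mean, correlation  # correlation requires Python 3.10+

project_audits = {
    "Project A": [92, 88, 95],
    "Project B": [60, 55, 70],
    "Project C": [80, 85, 78],
}
success_metric = {"Project A": 0.97, "Project B": 0.71, "Project C": 0.88}  # e.g., on-time delivery

compliance = {p: mean(scores) for p, scores in project_audits.items()}
projects = list(compliance)
r = correlation([compliance[p] for p in projects], [success_metric[p] for p in projects])
print(f"Correlation between compliance and success: {r:.2f}")
```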
I also recommend that you maintain a database (or Excel spreadsheet) with the audit items and their scores across projects and time. You can use the same scoring mechanism described above to show the average score for each audit item.
Audit items that average 90+ for three months are candidates for sampling - people appear to "get it" for these items. Audit items that average below some minimum threshold (60?) are probably candidates for reworking the process infrastructure - whatever you've provided isn't being used anyway, so perhaps it's time to give them something that they CAN use (and/or DO find value added).

Pat's quantitative approach makes it very clear which processes and/or projects need to be audited more frequently than others. So when a process or project scores above 90% (or so), you can reduce the audit frequency for that process or project. The default audit frequency needs to be set by the organization. Auditing once a month may be too frequent for some organizations and just right for others. The frequency should match the normal durations of your project lifecycles. Assuming a monthly frequency, if the audit score is 90+%, then the frequency for that audit can go to bi-monthly. If the next time that particular audit again achieves 90+%, the audit can go to a six-month cycle. If, on the other hand, the score drops below 90, then the audit frequency should drop back to the previous frequency. Now you have a variable audit frequency that you can tie directly to the audit results. Pretty cool!
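Here is one way that variable-frequency rule could be sketched in code. The frequency ladder and the 90-point threshold are just the illustrative defaults from above, not a prescription - your organization sets its own.

```python
# Sketch of the variable audit frequency idea: relax the interval when a
# process or project scores 90+ on an audit, and step back to the previous
# interval when it slips below the threshold.
FREQUENCIES = ["monthly", "bi-monthly", "every six months"]  # longest interval last

def next_frequency(current: str, score: float, threshold: float = 90.0) -> str:
    """Return the audit frequency to use after an audit with the given score."""
    idx = FREQUENCIES.index(current)
    if score >= threshold:
        idx = min(idx + 1, len(FREQUENCIES) - 1)   # relax: audit less often
    else:
        idx = max(idx - 1, 0)                      # tighten: drop back to the previous frequency
    return FREQUENCIES[idx]

print(next_frequency("monthly", 93))      # -> bi-monthly
print(next_frequency("bi-monthly", 95))   # -> every six months
print(next_frequency("bi-monthly", 72))   # -> monthly
```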
Well, the 2008 SEPG Conference is now a thing of the past. SEPG was a very informative conference this year, with many excellent presentations on a variety of topics. Highlights for me were Pat O'Toole's and Herb Weiner's presentation of the ATLAS results for the suggested changes to the High Maturity Process Areas, the keynote speeches on Wednesday by Major General Curtis M. Bedke, Commander of the Air Force Research Laboratory, and by Karthik and Guha Bala of Vicarious Visions on their use of the Team Software Process (TSP), and the Booz Allen Hamilton presentation on aiding Small Disadvantaged Businesses in achieving Maturity Level 2.

Major General Bedke spoke on the increasing importance of software to the Air Force's various applications. And it was a little scary, calling up images of Skynet, T2000, and "Judgement Day," since he talked about intelligent robots repairing themselves and learning from their mistakes.

The Bala brothers were energetic and fun to listen to. And I was very pleased to see that the gaming industry has matured enough that they realize the need for discipline in their processes. It apparently only took one bad experience in delivering a game for them to realize that they needed help, and TSP was their answer. Their presentation was a great segue after the Air Force, as the games they produce include Call of Duty, Guitar Hero, and Spiderman.

And of course there was the exhibit hall, with several new vendors at the conference this year. The biggest improvement at the 2008 SEPG was the SEI not requiring the attendees to wear the iTag anchors around our necks in order to track attendance at the various sessions. All you had to do this year was scan the bar code on your badge when you entered a session. And everyone had to revert to the tried and true method of exchanging business cards when meeting new people.