Wednesday, May 14, 2008

PPQA Audit Frequency

Organizations that have no history of performing Process and Product Quality Assurance (PPQA) audits often ask me how frequently they should be auditing their processes and work products. The correct answer is "it depends", but that is usually not satisfactory. The frequency of audits depends upon the nature and severity of the quality issues associated with following the organization's processes. If there are only minor findings or quality issues, then the audits don't have to occur very often, maybe only once a year. But if there are major findings or issues, then the audits should occur at a higher rate until the issues go away and the processes stabilize.

Yesterday Pat O'Toole posted a message on the CMMI discussion group that takes this approach one step further.

When consulting with a client on PPQA, Pat suggests that PPQA use a "compliance scale" similar to the one used in a SCAMPI appraisal: Fully Compliant, Largely Compliant, Partially Compliant, and Not Compliant.

This approach avoids the game playing of "just doing enough to get a 'Yes' in the audit." It also allows for finer-grained compliance metrics and trends, and it turns the audit feedback sessions into more of an internal consulting discussion than a mere "did we pass or not" exercise.

To "score" an audit, award 100 points for Fully Compliant, 75 points for Largely Compliant, 25 points for Partially Compliant, and 0 points for Not Compliant. Average the score over all of the audit items and you get the score for that particular audit.

You can average the scores of all PPQA audits conducted on a particular project to get the project-level compliance score. Hopefully you will find a positive correlation between projects with high compliance scores and the "success" of those projects. (If there is a negative correlation you have serious cause for concern!)
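
The project-level roll-up is just another average, and the correlation check is easy to run once you pair each project's compliance score with whatever "success" measure your organization already tracks (on-time delivery, defect density, customer satisfaction, and so on). A small sketch, assuming Python 3.10 or later for statistics.correlation; the names are again illustrative:

    from statistics import mean, correlation

    def project_compliance_score(audit_scores):
        """Average all PPQA audit scores recorded for one project."""
        return mean(audit_scores)

    def compliance_vs_success(compliance_scores, success_scores):
        """Pearson correlation across projects; a negative value is a
        serious cause for concern."""
        return correlation(compliance_scores, success_scores)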

I also recommend that you maintain a database (or Excel spreadsheet) with the audit items and their scores across projects and time. You can use the same scoring mechanism described above to show the average score for each audit item.

Audit items that average 90+ for 3 months are candidates for sampling - people appear to "get it" for these items. Audit items that average below some minimum threshold (60?) are probably candidates for reworking the process infrastructure - whatever you've provided isn't being used anyway, so perhaps it's time to give them something that they CAN use (and/or DO find value added).
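
If you do keep that database or spreadsheet, the per-item trend rules above are easy to encode. Here is one possible sketch; the 90 and 60 thresholds and the three-month window come from Pat's note, and everything else (names, structure) is my own illustration:

    from statistics import mean

    def classify_audit_item(monthly_averages, high=90, low=60, window=3):
        """monthly_averages: one average score per month, most recent first."""
        recent = monthly_averages[:window]
        if len(recent) == window and all(score >= high for score in recent):
            return "sampling candidate"      # people appear to "get it"
        if mean(monthly_averages) < low:
            return "rework the process infrastructure"
        return "keep auditing as usual"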


Pat's quantitative approach makes it very clear which processes and/or projects need to be audited more frequently than others. So when a process or project scores above 90% (or so), you can reduce the audit frequency for that process or project. The default audit frequency needs to be set by the organization. Auditing once a month may be too frequent for some organizations and just right for others; the frequency should match the normal durations of your project lifecycles. Assuming a monthly default, if an audit scores 90% or better, the frequency for that audit can stretch to every two months. If that particular audit again achieves 90% or better the next time around, it can move to a six-month cycle. If, on the other hand, the score drops below 90, the audit should step back to the previous, more frequent schedule. Now you have a variable audit frequency that you can tie directly to the audit results. Pretty cool!
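
Here is a small sketch of that variable-frequency rule. The monthly starting point and the 90-point threshold follow the discussion above; the rest is illustrative:

    FREQUENCIES = ["monthly", "every two months", "every six months"]

    def next_frequency(current, score, threshold=90):
        """Relax the schedule one step on a 90+ score, tighten it one step
        otherwise."""
        level = FREQUENCIES.index(current)
        if score >= threshold:
            level = min(level + 1, len(FREQUENCIES) - 1)
        else:
            level = max(level - 1, 0)
        return FREQUENCIES[level]

    # Two good audits in a row reach the six-month cycle; one bad result
    # steps back to every two months.
    freq = "monthly"
    for score in (95, 92, 70):
        freq = next_frequency(freq, score)
    print(freq)  # "every two months"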

1 comment:

Shahid said...

I appreciate the scoring mechanism specified (acknowledging Pat too) and the associated details. I am sure this or similar scoring mechanisms can be used to identify the frequency of QA or any internal ISO-related audits.

Shahid