Saturday, November 3, 2007

Becoming a black belt without Six Sigma

There are not many articles that really explain Deming's methodology well because, though he founded the Quality movement alongside Walter A. Shewhart, most of the "modern Quality Gurus" do not really understand his unusual blend of management, philosophy and statistics. That's why I'm so pleased to be able to read this article:

Becoming a Black Belt without Six Sigma

by William H. Goodenow

Quality Assurance Manager

One of W. Edwards Deming's major contributions to the accumulated body of knowledge dealing with experimental design is his differentiation between enumerative and analytic studies. The key difference between the two is that the former deals with static (snapshot) conditions, while the latter deals with dynamic (changing) conditions. It is because of this fundamental distinction that the use of design and analysis tools intended for static conditions (traditional statistical methods) can be both inappropriate and ill-advised for dynamic conditions.

There are other reasons for this concern. For example, most traditional methods of statistical analysis require that certain assumptions be met; the most common are:

  • Normality of the data (a unimodal, symmetric, bell-shaped distribution)
  • Equivalence of the variances (equal variability in the test or response data)
  • Constancy of the cause system (all non-controlled factors held constant)

These assumptions may or may not be met, or even tested for validity, before the analyses are performed--especially with the simplicity of using many of today's off-the-shelf statistical software packages.
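As a concrete illustration of checking one of these assumptions before analysis, the sketch below compares the sample variances of two hypothetical groups of response data (the numbers are invented for illustration). A formal test such as Levene's would normally be used; a crude variance ratio simply makes the point that the equal-variance assumption is something to examine, not take on faith. Note that the third assumption, constancy of the cause system, cannot be verified from summary statistics at all; it requires plotting the data in time order.

```python
from statistics import variance

# Hypothetical response data from two process settings
# (illustrative numbers only).
group_a = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
group_b = [10.4, 10.6, 10.3, 10.7, 10.5, 10.6]

# A crude equal-variance check: the ratio of sample variances.
# A ratio far from 1 warns that the assumption may not hold.
var_ratio = variance(group_a) / variance(group_b)
print(f"variance ratio: {var_ratio:.2f}")
```

Here the ratio is close to 1, so the equal-variance assumption looks plausible for these two groups; the point is that the check costs one line, yet is routinely skipped when software makes the downstream analysis too easy.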

In addition, Deming recognized that most symmetric (or composite) functions of a set of numbers almost always throw away a large portion of the actual information contained in the data. A traditional statistical test of significance is a symmetric function of the data. In contrast to this, a proper plot of data points will conserve the inherent information derived from both the comparison itself, and the manner in which the data were collected. For example, symmetric functions, such as the mean and standard deviation, can easily gloss over unknown and unplanned changes in the cause system over time, and, consequently, make a big and potentially misleading difference in the message the data are trying to convey for the purpose of providing a reliable prediction of future process behavior.
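Deming's point about symmetric functions can be demonstrated directly: two runs containing exactly the same measurements, one in a stable-looking order and one drifting steadily upward, are indistinguishable to the mean and standard deviation, because those functions are blind to the order of observation. The values below are illustrative (integers so the equality is exact).

```python
from statistics import mean, stdev

# Two "runs" containing exactly the same measurements, coded in
# tenths of a unit -- one in a stable random-looking order, one
# with a steady drift over time.
stable = [102, 98, 101, 99, 103, 97, 100, 100]
drifting = sorted(stable)  # same values, now trending upward in time

# Mean and standard deviation are symmetric functions of the data:
# they cannot tell the two runs apart.
same_mean = mean(stable) == mean(drifting)
same_sd = stdev(stable) == stdev(drifting)

# A run plot would show the drift immediately; here we approximate
# it by checking whether successive points only ever increase.
drift_visible = all(b >= a for a, b in zip(drifting, drifting[1:]))
print(same_mean, same_sd, drift_visible)  # → True True True
```

A plot of the data in time order preserves exactly the information the summary statistics throw away, which is why Deming insisted on it.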

In other words, both the "design" approach and the methods of analysis typically taught and used for industrial experimentation leave much to be desired from a reliability of prediction point-of-view.


Since the purpose of industrial experimentation is to improve a process and product's performance in the future, when some conditions may have changed, design efforts must provide for conducting the study over an appropriately wide range of conditions.

The actual determination of how wide to make this range is a critical part of the design stage of experimentation. If the range of conditions selected is too wide, the DOE team could falsely conclude that observed changes in the process will continue in the future when, in fact, these conditions may not be operationally feasible in the long run. If the range of conditions is too narrow, the team may miss important improvements that could result under a wider range of conditions. These errors are not quantifiable in analytic studies, and the reliability of any conclusions reached regarding future performance will be a function of how closely we follow Deming's design principles.

Other considerations include choosing the best variables for a study; handling background variables (conditions to be either held constant or varied in an appropriately controlled manner) and nuisance variables (unknowns that can be neither held constant nor varied in a controlled manner); and deciding on replication, methods of randomization, the Design Matrix itself, planned methods of statistical analysis, cost and schedule.
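To make the design decisions above concrete, here is a minimal sketch of building a two-level full-factorial design matrix with replication and a randomized run order. The factor names are hypothetical, and the fixed random seed is there only so the sketch is reproducible; randomizing the run order is the standard guard against drift in uncontrolled background conditions.

```python
import itertools
import random

# Three hypothetical factors at two coded levels (-1 = low, +1 = high).
factors = ["temperature", "pressure", "speed"]
levels = [-1, +1]
replicates = 2

# Every low/high combination across three factors: 2**3 = 8 base runs.
base_runs = list(itertools.product(levels, repeat=len(factors)))

# Replicate the whole matrix, then randomize the run order so that
# time-dependent nuisance variables are not confounded with factors.
design = base_runs * replicates
random.seed(42)  # fixed seed only for reproducibility of the sketch
random.shuffle(design)

for run_no, settings in enumerate(design, start=1):
    print(run_no, dict(zip(factors, settings)))
```

The range spanned by the -1 and +1 settings is where the article's warning applies: coding the levels is easy, but choosing how wide apart they sit in real units is the critical design judgment.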

There are still two considerations to be addressed:

What is the objective of the study? What background information do you already possess about the variables under study, in terms of their individual descriptive properties (mean, mode, median, standard deviation, the shape of their respective distributions, etc.) and the relationships between them?

Before establishing the objective, we must be as candid and complete as possible about the knowledge we already possess.

The problem with the "planners plan and doers do" mindset is the artificial separation it creates between the planning and execution steps of problem-solving. Deming recognized this tendency and much of his management and statistical thinking was directed at overcoming it. In Figure 1, we see how the idea of sequential learning not only applies to DOE, but how the process of stepwise experimentation, when properly used, can lead to the acquisition of profound knowledge.

To ensure that the design process adequately provides for the execution of an appropriate and reliable experiment, it will be helpful to actually use some sort of design checklist and worksheet.

The reasons

A clue to the reasons why traditional analysis methods leave much to be desired in an analytic study was given in our discussion of the problems with symmetric functions alluded to by Deming and his followers--such as losing information inherent in the process and data collection activity. As troublesome as this tendency is, it is not the worst of it. The fatal flaw shows up when future facts of life change sufficiently to render the original conclusions meaningless at best and clearly wrong at worst. No matter how big the F-ratio in the ANOVA table, it, like other tests of statistical significance, has no meaning if the conditions under which it was derived no longer exist in the same proportions as in the original study.

The alternatives

The best way to analyze data from an analytic study is to use the old stand-by methods of charting. This may include control charts of the SPC variety, variations of these or even the integration of SPC and DOE.

Figure 2 shows a slightly different way of diagramming the sequential application of the Scientific Method (Figure 1). Alternating uses of SPC and DOE can provide a concrete way of developing and exploring various process optimization hypotheses (DOE), and confirming the efficacy of these operational models in the future with carefully planned process control charts (SPC). As the difference between our theories and the facts narrows, knowledge grows.

The really important aspects of control chart analysis are that it takes into account the order in which the data were generated and collected, and that it does not require assumptions of normality, equivalence of the variances or constancy of cause systems, since the data in the experiment allow us to reject any or all of these hypotheses. It allows us to study the individual data points, groups of data points, and various patterns over time that may provide valuable clues as to why certain measurement points showed different results than others.
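A minimal sketch of a Shewhart-style individuals (X) chart shows how this works in practice: the control limits come from the average moving range between successive points, not from a normality assumption, and the time order of the data is preserved throughout. The data values below are illustrative.

```python
from statistics import mean

# Illustrative measurements in time order, with one disturbed point.
data = [10.1, 9.9, 10.2, 10.0, 9.8, 10.3, 10.1, 12.0, 10.0, 9.9]

# Moving ranges between successive points capture short-term variation.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]

center = mean(data)
mr_bar = mean(moving_ranges)
ucl = center + 2.66 * mr_bar  # 2.66 = 3/d2 for moving ranges of size 2
lcl = center - 2.66 * mr_bar

# Flag out-of-control points, preserving their position in time --
# exactly the information a symmetric summary statistic discards.
signals = [(i, x) for i, x in enumerate(data) if x > ucl or x < lcl]
print(signals)  # → [(7, 12.0)]
```

The chart not only detects the aberrant point but tells us *when* it occurred, which is the starting clue for investigating what changed in the cause system at that moment.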

It is this last benefit that forms a fundamental and extremely powerful basis for the analysis of any analytic study. Not only would we have been misled by traditional statistics, but a cursory examination of the control charts could likewise have kept us from seeing what the data were trying to tell us.

About the author:

William H. Goodenow has been an examiner with the Wisconsin Forward Award (WFA) program, Wisconsin's adaptation of the Malcolm Baldrige Quality Award.
