John didn’t look happy as he walked into my office. “We now have four consecutive failed lots. You’re the numbers guy; would you go take a look at the lot release results and tell me what’s going on with the process?” I replied, “I’ll need a few hours to look at the production history plot and the release test validation report, and then I’ll get back to you.” I imagine this exchange happens every day in manufacturing plants across many different industries. It is natural, at the start of any investigation, to review the historical process data and the error associated with the process measurement. Armed with these two pieces of information, it is often possible to identify root causes for process failures or, at the very least, focus the investigation in the right direction.
When William Cowper wrote, in 1785, that “Variety’s the very spice of life,” he lived at the very beginning of the Industrial Revolution. As industry gathered momentum in the early 1800s, it became clear that in industrial-scale manufacturing, variety is far less welcome. In fact, the very basis of a manufacturing process is the control and reduction of process variability, or variance.
Some degree of process variance, of course, is natural and unavoidable. Over the last few hundred years, professional and amateur statisticians alike have created a wealth of tools to describe and control it. Indeed, one might say that without variance there would be no need for a statistician in a typical manufacturing operation.
Although many tools exist to understand process variability and break it down into its components (e.g., process, part, and measurement variability), one of the more popular approaches was developed in the 1960s by General Motors. The method was later formalized by the Automotive Industry Action Group (AIAG) as the Gauge Repeatability and Reproducibility (R&R) study. Over the years the approach has been generalized beyond measurement variability to the study of process variability across many industries.
The popularity of this method is likely due to its relatively simple study design and the simple math used to estimate the total and component variances. In general, the study calls for replicate execution of the measurement process while varying the parts or items being measured. The study can be elaborated by introducing and balancing multiple operators (analysts), instruments, manufacturing sites, and so on. After the study is executed and the data analyzed, the end result is a set of variance estimates associated with each study component (i.e., instruments, operators, etc.), as well as the repeatability of the measurements and the total variability.
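To make the arithmetic concrete, here is a minimal sketch for the simplest balanced design: several parts, each measured the same number of times by a single operator. A one-way ANOVA splits the total variability into a repeatability (within-part) component and a part-to-part component. The data and function names are illustrative, not from any particular study.

```python
def variance_components(groups):
    """Estimate repeatability and part-to-part variance from a balanced
    one-factor Gauge R&R design: {part_id: [replicate measurements]}.

    Returns (repeatability_variance, part_to_part_variance).
    """
    k = len(groups)                           # number of parts
    n = len(next(iter(groups.values())))      # replicates per part (balanced)
    grand_mean = sum(sum(g) for g in groups.values()) / (k * n)

    # Within-part (repeatability) sum of squares and mean square
    ss_within = sum(sum((x - sum(g) / n) ** 2 for x in g)
                    for g in groups.values())
    ms_within = ss_within / (k * (n - 1))

    # Between-part sum of squares and mean square
    ss_between = n * sum((sum(g) / n - grand_mean) ** 2
                         for g in groups.values())
    ms_between = ss_between / (k - 1)

    # Method-of-moments estimates; a negative part variance is clipped to 0
    var_repeat = ms_within
    var_part = max(0.0, (ms_between - ms_within) / n)
    return var_repeat, var_part


# Hypothetical data: three parts, two replicate measurements each
measurements = {
    "part_A": [10.0, 10.2],
    "part_B": [11.0, 11.2],
    "part_C": [12.0, 12.2],
}
repeat_var, part_var = variance_components(measurements)
total_var = repeat_var + part_var
```

A fuller study crossing operators and instruments uses the same mean-square bookkeeping, just with more factors in the decomposition.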
If the Gauge R&R is the study of variance, then the process trend chart is its record. The simple act of trending process data provides a wealth of information. It may seem natural to do so routinely in 2013, but it was revolutionary in 1924. In the spring of that year, Dr. Walter Shewhart, while working for Western Electric, introduced the concept of the control chart. This simple time series plot made it possible both to track the manufacturing process and to foresee adverse trends. Shewhart went on to distill the manufacturing process into its simplest components: signal and noise. The noise was the total expected process variance, and the signal was the changes, spikes, and trends (adverse or favorable) in the process.
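Shewhart's idea reduces to a few lines of arithmetic. The sketch below, a hypothetical illustration rather than any standard library routine, builds an individuals chart: the center line is the process mean, sigma is estimated from the average moving range (MR-bar divided by the d2 constant 1.128), and any point beyond the ±3-sigma limits is flagged as signal rising above the noise.

```python
import statistics

def shewhart_limits(values, k=3.0):
    """Center line and +/- k-sigma limits for an individuals control chart.

    Sigma is estimated from the average moving range (MR-bar / 1.128,
    where 1.128 is the d2 constant for subgroups of size 2).
    """
    center = statistics.fmean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center - k * sigma, center, center + k * sigma

def signals(values):
    """Return (index, value) pairs outside the control limits --
    the 'signal' rising above the expected process 'noise'."""
    lcl, _, ucl = shewhart_limits(values)
    return [(i, x) for i, x in enumerate(values) if x < lcl or x > ucl]

# Hypothetical lot-release results with one out-of-control spike
lots = [10.0, 10.1, 9.9, 10.05, 10.0, 9.95, 10.1, 14.0, 10.0]
```

In practice the limits would be set from an in-control baseline period and then applied to new lots, rather than recomputed over data that already contains the excursion; the one-shot calculation above simply keeps the example short.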
This variance-based view of the process intrigued another statistician of that era, W. Edwards Deming. Deming’s focus in the 1930s was on the analysis of measurement errors in science, and he saw Shewhart’s work as complementary. The two worked together to create many of the current statistical process control (SPC) tools and strategies. The timing of this collaboration could not have been better, since their work supported the higher productivity needed for the US effort in World War II.
The basic framework of the Gauge R&R study and the simple control chart remain effective tools for process control. The simplicity of these tools and their demonstrated effectiveness are perhaps what has driven their wide adoption.
A view of process validation similar to that of Shewhart and Deming can also be taken. It may be argued that the goal of process validation is to demonstrate that the process behaves within its expected variation, or noise. Then, in principle, a Gauge R&R study provides the knowledge of the expected variation, and a trend chart can be used to demonstrate the validated state.
This definition of the validation process is at the heart of the US Food and Drug Administration’s 2011 Process Validation guidance. Through its dozen or so pages, the document reiterates the teachings of Shewhart and Deming from almost 100 years ago. The guidance calls for using process characterization and development data to understand process noise and signal. These should be distilled into “inter” and “intra” batch variances; in other words, process variance components. These in turn are used to define acceptance criteria and various process control strategies. Finally, introducing the “new” concept of continued process verification, the guidance calls for trending of process parameters to assure the validated state.
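The chain from variance components to acceptance criteria can be sketched in a few lines. The following is an illustrative simplification, not the guidance's prescribed method: replicate results per batch are split into intra- and inter-batch variances by the same one-way ANOVA bookkeeping as a Gauge R&R, and a simple ±3-sigma range around the grand mean stands in for an acceptance criterion (real guidance work would typically use tolerance intervals).

```python
import statistics

def batch_variances(batches):
    """Intra- and inter-batch variance from replicate results per batch
    (balanced design), via one-way ANOVA method-of-moments estimates."""
    k = len(batches)                    # number of batches
    n = len(batches[0])                 # replicates per batch (balanced)
    batch_means = [statistics.fmean(b) for b in batches]
    grand = statistics.fmean(batch_means)
    ms_within = sum(sum((x - m) ** 2 for x in b)
                    for b, m in zip(batches, batch_means)) / (k * (n - 1))
    ms_between = n * sum((m - grand) ** 2 for m in batch_means) / (k - 1)
    intra = ms_within
    inter = max(0.0, (ms_between - ms_within) / n)
    return intra, inter

def acceptance_range(batches, k_sigma=3.0):
    """A naive +/- 3-sigma acceptance range around the grand mean, using
    total variance = intra + inter. Only a sketch of the idea."""
    intra, inter = batch_variances(batches)
    total_sd = (intra + inter) ** 0.5
    grand = statistics.fmean(statistics.fmean(b) for b in batches)
    return grand - k_sigma * total_sd, grand + k_sigma * total_sd
```

Continued process verification then amounts to trending new batch results against such limits, exactly in the spirit of Shewhart's chart.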
Everything old is new again! So perhaps William Cowper’s saying rings true even when it comes to process validation, understanding, and control. Variation is the center of everything. It isn’t good or bad. It is simply there and must be measured, tracked, and controlled.