Peer Reviewed: Variation
Key Points Discussed
- There are two traditional ways to control the variability in process parameters: statistical process control (SPC) and engineering process control (EPC).
- Both of these approaches have much in common with respect to their objectives.
- There are differences, however, that determine the situations in which each approach should be applied.
- Applying either approach in the wrong situation will lead to less than optimal results and in many cases may actually increase variability in the process parameter.
- Understanding how the approaches differ will help ensure they are applied correctly.
When I first joined the ranks of employed engineers (back in the early 80s), I worked in a technical services organization where part of my job responsibilities was to use process control to improve the performance of the manufacturing processes at the site. My job was to find out from the chemists and other engineers what the perfect process should look like and then, as Captain Picard of the USS Enterprise would say, “make it so”. I remember one day sitting down with a chemist and asking her what was important about this part of the process. Her response was that the reactor needed to heat to 60C and then stay exactly at that temperature until the reaction was complete. I then worked on this until the temperature was “flat-lined,” in that you could see little difference between the target (set point) of 60C and the actual temperature in the reactor during the reaction. If you had asked me what I was doing, I would have described it as improving the control of the process. Indeed, at one point, that same chemist described my role as “making her life easier” because the improved control made it easier to see if the process was behaving normally or not. At the time I would have said I was using EPC to keep the controlled parameter (the reactor temperature) at its set point. EPC has been around for a long time, having started in the process industries (1). EPC is used to control the value of a process parameter (the controlled parameter) to a set point by manipulating the value of another (the manipulated parameter), as shown in Figure 1.
Figure 1: A Temperature Control System.
When the temperature is below the set point, the Hot Supply control and Hot Return block valves are opened so that hot liquid flows around the tank jacket and the tank temperature rises. When the temperature rises above the set point, the Cold Supply control and Cold Return block valves open. The heating/cooling liquid is pumped around the jacket of the reactor to enhance the heat transfer rate. The block valves have only two states, open or shut, and are used just to ensure that hot liquid is not returned to the cold supply system and cold liquid is not returned to the hot supply system. The control valves can vary their open position from 0% to 100%. The actual amount the control valves open depends on how far the tank temperature is from the set point. The manipulated parameters are the percent open positions of the Cold and Hot Supply control valves.
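As a rough illustration of the Figure 1 loop (my own sketch, not the actual control system described in the article), a proportional controller can split its output between the hot and cold supply valves. Every number below (the gain, the heat-transfer coefficients, the disturbance load) is invented for illustration only.

```python
# Illustrative sketch of the Figure 1 loop: a proportional controller splits
# its output between the Hot and Cold Supply control valves.
# All constants are hypothetical, chosen only to make the toy model stable.

def valve_positions(temp, set_point, gain=8.0):
    """Map the temperature error to (hot_valve_%, cold_valve_%)."""
    output = max(-100.0, min(100.0, gain * (set_point - temp)))
    if output >= 0.0:
        return output, 0.0      # too cold: open the Hot Supply control valve
    return 0.0, -output         # too hot: open the Cold Supply control valve

def simulate(steps=200, set_point=60.0, temp=20.0):
    """Crude first-order tank model with an exothermic disturbance mid-run."""
    for step in range(steps):
        hot, cold = valve_positions(temp, set_point)
        reaction_heat = 0.5 if step > 100 else 0.0   # reaction starts mid-run
        # Jacket heating/cooling, reaction heat, and ambient heat loss.
        temp += 0.02 * hot - 0.02 * cold + reaction_heat - 0.01 * (temp - 20.0)
    return temp
```

Even this toy version shows the EPC mentality: the controller reacts to every deviation, and a purely proportional controller settles near, but not exactly at, the set point.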
In the vernacular used today, my role would be described as one of reducing variation. However, during the time I was in this role, the idea of random variation never impacted what we did in any significant way. It obviously did not need to, since EPC was being used very successfully by many people to reduce variation. This state of affairs remained for several years until we began to hear rumors of a new approach to reducing variation called statistical process control. The use of SPC (and other statistical thinking approaches) was being credited with the turnaround in the Japanese economy and the much higher quality levels in goods that were being mass produced in Japan (2). Anecdotes about the high levels of quality of Japanese goods began to emerge (such as the one where the variation in the Japanese-made items was so small that measurement systems in the US could not detect it!). Eventually, we began to experiment with and then implement these ideas and to see benefits. The ideas behind SPC were much easier to appreciate from an implementation perspective, consisting mainly of plotting performance data on a specially constructed chart, called a control chart, and then reacting to the chart in some predefined ways. For example, Figure 2 shows a control chart we might construct to monitor the product potency of a process. So long as the potency stays within the control limits (the red lines) and has no trends over time, the process is considered stable, and we should not react to any particular result as if it were special.
Figure 2: Control Chart used in SPC.
If on the other hand, we see data falling outside the control limits, as at batches A and B in Figure 2, it indicates that something unusual has occurred. These batches should be investigated to see if the cause of the unusual variation can be found. If the potency is lower than expected (point A), we would attempt to eliminate the cause or at least reduce the risk that it happens again. If the potency is higher than expected (point B), we would attempt to see if a positive process change could be identified and then made part of the process so that all future potencies would be higher.
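To make the chart mechanics concrete, here is a minimal sketch (mine, not the article's calculation) of an individuals control chart like Figure 2. The sigma estimate uses the average moving range divided by the standard constant d2 = 1.128, the usual basis for individuals-chart limits.

```python
# Minimal individuals (I) control chart, in the spirit of Figure 2.
# Sigma is estimated from the average moving range using d2 = 1.128.

def control_limits(data):
    """Return (lower, upper) 3-sigma control limits for an individuals chart."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

def special_causes(data):
    """Indices of points outside the control limits (candidates to investigate)."""
    lcl, ucl = control_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]
```

A batch potency far below the others would be flagged, just like point A in Figure 2, while points inside the limits draw no reaction.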
Because of its apparent simplicity and because of the stories that began to circulate about the success of SPC, it caught the imagination of (at least some) management in the Western Hemisphere. Over the years, this has morphed into a situation where SPC has essentially become synonymous with variability reduction. Given the success of both methods, however, it is clear that both have a role to play in process control/variability reduction. It is also intuitive to anyone who has practiced both methods that they have similarities and differences and that there are situations where one approach is preferred over the other. In fact, there are situations where application of one approach is simply wrong. For example, I just had a new gas furnace installed in my home. It uses a sophisticated EPC control system to keep the temperature of my house within suitable limits. So far, it is working very well. Of course, what the control system is doing is keeping the variation of temperature within my house much smaller than the variations in temperature outside my house (due to changing weather) by adding heat when the temperature is below the set point. It is difficult to imagine an SPC control system being able to do this in any practical way.
Not too many people get the opportunity to practice both types of control on an ongoing basis, which makes it difficult for practitioners to see “both sides of the fence.” Most engineers (myself included) are taught EPC without much, if any, reference to random variation. Most statisticians are taught SPC without any reference to EPC. It is therefore easy to see how biases can creep in. I remember a conversation with a statistician comparing SPC and EPC. His opinion was that SPC is better because it is the only method that actually reduces variation! Clearly, many generations of engineers can refute this by showing case after case where EPC has reduced variation in a process. Given this risk of bias, it is very important to clearly understand how SPC and EPC are the same and how they are different. This understanding will ensure that these two excellent approaches are not misapplied. The rest of this article will discuss the similarities and differences between the two approaches.
Earlier, it was noted that SPC is easier to appreciate from an implementation perspective. However, the theoretical underpinnings of SPC are just as involved as those of EPC. Just try reading Shewhart’s original writings on the topic (3, 4). This can result in situations where people think SPC looks simple and so misapply it. Some such misapplications have been documented (5-7). For example, some people end up thinking that 3-sigma limits were chosen as the control chart limits because only about 0.3% of a normally distributed random variable falls outside this range. However, this was never part of Shewhart’s argument for 3-sigma limits. His argument is purely empirical: over a long time of using these charts, 3-sigma limits seem to strike the right economic balance between over-reacting and creating more variability, and under-reacting and missing opportunities to reduce variability (3). This situation is not limited to practitioners; in conversations with such practitioners, it is often clear that they learned these misconceptions from misinformed teachers. The point here is that practitioners of SPC (and EPC, or indeed any other skill) need to invest intellectual energy in understanding why something works – the theory behind the method.
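For reference, the 0.3% figure itself is easy to verify for a normal distribution (though, again, this probability was not Shewhart's justification for the limits):

```python
import math

# P(|Z| > 3) for a standard normal variable is erfc(3 / sqrt(2)) ~ 0.0027,
# i.e. roughly 0.3% of values fall outside 3-sigma limits.
p_outside = math.erfc(3.0 / math.sqrt(2.0))
```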
Similarities Between EPC and SPC
The first similarity is that they both recognize the notion of an ideal state, a state of control, for the process parameter being controlled. Shewhart (3) has given us a very good definition of process control:
“A phenomenon (process) will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.”
Because there is this idea of a controlled or stable state, it is possible to decide if the process parameter (temperature, potency) is being controlled adequately so that no control action is currently required.
Both approaches recognize the idea of capability. Once a process is controlled (a stable process), the performance can be assessed against the requirements. Because the process is stable, the data can be assembled into a summary view such as the histogram as shown in Figure 3.
Figure 3: Assessing the Capability of a Performance Measure.
The capability is then determined by comparing the summary view with the Lower and Upper Specification Limits (LSL and USL). This can be done visually as in Figure 3 or more quantitatively using some calculated capability index such as Cpk (7).
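As a quantitative illustration, the conventional Cpk formula is min(USL − mean, mean − LSL) / (3σ). The sketch below (mine, not from the article) uses the overall sample standard deviation as the sigma estimate, a simplification; a formal Cpk would typically use a within-subgroup sigma estimate.

```python
import statistics

def cpk(data, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
    Simplification: sigma is the overall sample standard deviation."""
    mean = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return min(usl - mean, mean - lsl) / (3 * sigma)
```

A centered process with wide specifications scores high; shifting the mean toward either specification limit drives Cpk down, mirroring the visual comparison in Figure 3.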
Both approaches also focus on economics. Shewhart (3) made this clear when he included the word economic in the title of his book. EPC assumes that the cost to create and operate a control system like the one in Figure 1 is more than offset by the gains from keeping the temperature close to the set point. Neither method promotes the idea of reducing variation without taking the cost of implementation into account. There is no point in spending $1,000 to save one dollar. In saying this, it should also be acknowledged that many times the advantage of reducing variation is hard to quantify, so it should be remembered that just because we cannot quantify the benefit, it does not mean there is no benefit.
Neither approach needs to know the causes of the variability at the beginning. In fact, EPC is never concerned with these causes. It is inherently assumed (based on process knowledge) that it would not be practical to reduce or eliminate the causes of the variability. For example, I could eliminate the need for a heating/cooling system in my home by finding a location where the natural variations in the weather are within my requirements and so no control is required. However, this is not a practical solution to variability reduction. In fact, the objective of the EPC controller is to make the process robust to sources of variability that cannot be eliminated economically. In the case of SPC, the whole point of the approach is to identify some of the causes of variability so they can be reduced. It is assumed that it will make economic sense to do this; had the causes been known at the beginning, they would have been addressed at the beginning. Of course, it is possible that the SPC approach might identify sources of variability that cannot be reduced economically. How we would deal with this is very situation dependent.
Both approaches reduce variability in the same way; that is to say, in both cases, the variability of one parameter is reduced by causing or reducing variation in another parameter. This is based on the notion that nature is causal. If you want to change something in one place, then you must make a change somewhere else! This is easy to see in the case of EPC by looking at Figure 1. The variability in the tank temperature is reduced by creating variation in the manipulated parameter(s) – the position of the control valves. It may not be quite so obvious in the case of SPC. Going back to the potency example in Figure 2, imagine that the batch corresponding to point A has just been completed, the potency has been plotted on the control chart, and a special cause investigation has been started. The investigation reveals that a valve closed more slowly than normal and an extra quantity of a reagent got into the reactor and caused the drop in potency. Further, the valve issue was caused by a gasket that had worn out prematurely. It is obvious that, if the worn gasket is not addressed, then the risk of future low potencies is high. So a change must be made! First, the worn gasket is replaced, a change to the process. Secondly, the reason for the premature failure is addressed, which might lead to changing how valve gaskets are selected, a change to a business process that supports the process.
Both approaches are based on feedback control as shown in Figure 4.
Figure 4: A Feedback Control System.
All feedback control works the same way. You start with an objective (keep the temperature at the set point, keep the process stable with no special causes). Then you compare actual performance with the objective; the difference is the (performance) gap. The controller then uses this gap as input to decide if a change is required. This change will cause the actual performance to change, and so the gap is impacted and the cycle is repeated. This should also remind people of the Deming PDCA (Plan/Do/Check/Act) loop (8) that is a feedback loop for process improvement.
Finally, both approaches recognize that a stable process with low variability is key to efficient process improvement. When the variability is low, the impact of changes on the process (both the intended impact and, just as importantly, the unintended impact) will be easier to see, and so the impact of the change will be assessed more quickly and with more certainty.
Thus, it can be seen that there are many similarities between EPC and SPC. However, there are at least three significant differences between them, and it is these differences that account for the different usage of each approach.
Differences Between EPC and SPC
The first major difference is that the EPC approach assumes that a lever can be found that can be adjusted in some economic way to reduce the variation of the controlled parameter. Without this lever (the heating and cooling control valves in Figure 1), EPC is a non-starter. SPC, on the other hand, does not require this assumption to be true. The classic applications of SPC, such as the application to potency in Figure 2, do not have such a lever. Essentially, the purpose of applying the SPC approach is to identify the levers and then modify them to reduce the variability in the potency. The worn gasket referred to earlier is an example of such an SPC “lever.”
The second major difference is the role of random variation. In SPC, the central attribute of the process parameters involved is that they are dominated by random variation. We would certainly expect to have random variability in a potency due to process variability and measurement error. Since it is well known from the Deming Funnel experiment that no control action should be taken if the data is pure random noise (9), the SPC controller has to be able to differentiate between pure random noise and cases where signals (special causes) are present in the data. This, of course, is the primary purpose of the control chart. The mentality of the SPC approach is that you should only take a control action if there is evidence that the data is not purely random. The SPC approach is very biased towards a “hands off” control approach. On the other hand, EPC tends to ignore random variation. The EPC mentality is based on the notion that all changes in the data are real changes and that the controller should react to them. When the temperature increases, it is assumed that whatever caused this to happen was not random. It is assumed that whatever has changed will continue to do so unless some action is taken. Of course, the amount by which the controller adjusts the manipulated parameter will depend on how far the controlled parameter is from the set point. The EPC controller, therefore, may make very small or insignificant changes in some cases. The EPC approach is very biased towards a “hands on” approach. This difference in mentality between the two approaches shows up clearly when the parameter to be controlled has significant random variation but also has significant non-random variation present. Examples of such situations abound in the pharmaceutical (and other) industries – controlling the weight of tablets, controlling the fill volume of vials, etc.
If the control system is designed with the SPC approach as the starting point, the controller will tend to under control and thus will make infrequent adjustments. On the other hand, if the control system is designed with the EPC approach as the starting point, the controller will tend to over control by making more frequent/larger adjustments. In both cases, this will lead to larger variability in the controlled parameter than would be obtained if the optimal adjustments were made (10, 11).
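The cost of over control can be demonstrated with a small simulation in the spirit of the Deming Funnel experiment (9). For a stable process whose deviations are pure noise, "compensating" for every deviation roughly doubles the variance: if each result is y_t = aim_t + e_t and the aim is adjusted by the full deviation, then y_t = e_t − e_(t−1), with variance 2σ². The simulation is my own sketch, not from the article.

```python
import random
import statistics

def funnel(n=20000, adjust=True, seed=1):
    """Stable process (pure noise around target 0). If adjust=True, the aim
    is 'corrected' by the full last deviation after every observation."""
    rng = random.Random(seed)
    aim, results = 0.0, []
    for _ in range(n):
        result = aim + rng.gauss(0.0, 1.0)   # deviation is pure random noise
        results.append(result)
        if adjust:
            aim -= result                    # tampering: react to every point
    return statistics.variance(results)
```

Leaving a stable process alone gives variance near 1; tampering gives variance near 2, exactly the penalty for reacting to noise as if it were signal.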
The presence or absence of significant random variation also impacts how capability is measured. For SPC applications, a capability measure is a statistical measure, and the value of the measure gives an indication of the probability that potency would be outside its specifications. For EPC, there is no consideration of random variation in the control of the parameter, and the measure of capability cannot involve a statistical calculation. We simply look and see if the process parameter is controlled inside the specifications, and, if it is, the system is considered capable of meeting its requirements. The data within a batch cannot be used to define a probability of being inside the specifications in any statistical sense. However, it is possible to look at the variation of the process parameter in a statistical sense if the variation is looked at across batches. For example, the variation of the minimum and maximum temperature during a reaction across batches will vary in a random way, and this random variation can be used to characterize the capability of the EPC controller across batches (12, 13).
Probably the most interesting difference between EPC and SPC lies in the cost of adjustment. EPC assumes that the cost of adjustment is insignificant compared to the benefits of variability reduction. Adjusting the control valves in Figure 1 is a trivial cost compared to the benefit of keeping the tank temperature close to the set point. SPC assumes the exact opposite. This aspect of SPC may not be obvious but is another reason why SPC is so biased against making changes unless you are quite sure that a change has occurred in the process parameter. Consider the point that is being made by SPC. Without SPC, management tends to react to any little change in a parameter as if it were a signal of a real change. They then order their team to figure out what has changed and get to the root cause when in fact nothing has changed. This can waste a lot of time and money and was a big reason why Deming (2) and others promoted statistical methods in general and SPC in particular. By using the control chart limits as a guide to whether or not a real change has occurred, SPC prevents over-reaction. Several years ago, I attended a monthly meeting where metrics for a certain operational area were reviewed. One of the metrics was monthly expenses for the area, and the manager, being educated on statistical thinking and SPC, had the financial team member plot the data on a control chart. The first month I attended the meeting, the monthly expenses were above the mean; however, because they were below the upper control limit, the manager, despite some concern from some team members, did not ask for an investigation. The second month they were slightly higher again. Still the manager did nothing. The third month they were slightly higher again. By now the team was getting really concerned at the lack of action, but the manager held tough. Finally, in the fourth month, the expenses fell and the team breathed a sigh of relief. This is SPC as it should be practiced!
The variation was simply random, and, if the manager had insisted on looking for a reason for the short term slight trend upwards, it would have been a waste of time. Worse still, the team in their zeal to find a root cause might have found a “phantom” root cause and made unnecessary changes that cost resources and could have made things worse!
SPC using control charts is basically a dead band control strategy. While the data stays within a dead band (the control limits), no action is warranted; the variation is simply random. Action is warranted once data appears outside the control limits. It can be shown that when the cost of adjustment is significant compared to the benefits of variability reduction, a dead band control strategy is preferred (11). In effect, the control strategy gives up some of the benefit of variability reduction by making fewer costly adjustments. EPC also recognizes the validity of this trade-off. For example, one of the downsides of EPC is that by making lots of adjustments, the control valves may wear much faster. In this case, a dead band control approach can be used to reduce the frequency of adjustments and reduce the wear on the control valve (1). This will increase the variability of the temperature, so the dead band strategy is valid only if this increase in temperature variability is offset by avoiding the loss that could occur if the control valve failed prematurely and the batch was significantly impacted.
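A toy simulation (my own, with invented numbers) illustrates the adjustment-frequency side of this trade-off: for a slowly drifting, noisy parameter, a dead band cuts the number of adjustments dramatically compared to adjusting on every observation, which amounts to tampering.

```python
import random
import statistics

def run(dead_band, n=5000, seed=2):
    """Noisy parameter with a slow deterministic drift (e.g. valve wear).
    An adjustment is made only when |deviation| exceeds the dead band.
    Returns (spread of deviations, number of adjustments). All numbers
    are illustrative."""
    rng = random.Random(seed)
    offset, drift = 0.0, 0.0
    deviations, adjustments = [], 0
    for _ in range(n):
        drift += 0.01                              # slow drift away from target
        y = drift + offset + rng.gauss(0.0, 1.0)   # deviation from set point
        deviations.append(y)
        if abs(y) > dead_band:                     # act only outside the band
            offset -= y                            # one costly compensating move
            adjustments += 1
    return statistics.pstdev(deviations), adjustments
```

With a dead band of zero the controller adjusts every single time; with a dead band of a few sigma it makes only the occasional adjustment needed to keep the drift in check, which is exactly the economy the dead band strategy buys.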
There are many similarities between EPC and SPC, and both have their place in our arsenal of weapons for reducing variability in our processes. However, there are definite differences between them, which mean that there are situations where one is preferred over the other. If random variability is not significant, a lever can be found to economically compensate for the variability (the heating/cooling valves), the cost of adjustment is low, or reducing the sources of variability is too costly, then an EPC approach is indicated. If, on the other hand, random variability is significant, the cost of adjustment is high, or reducing the sources of variability can be done economically, then an SPC approach is indicated. Using either approach inappropriately will lead to higher variability and increased costs.
It should be noted that there are certain situations where all the attributes for EPC are present except that the amount of random variability is significant. In this case, there is a third approach called statistical process adjustment (SPA) that can be used (11). However, that is a subject for a future paper.
- F.G. Shinskey, Process Control Systems – Application, Design, and Tuning, 3rd ed., McGraw-Hill, 1988, ISBN 0-07-056903-7.
- W.E. Deming, “On Some Statistical Aids Toward Economic Production,” Interfaces 5 (4), 1-15, 1975.
- W.A. Shewhart, Economic Control of Quality of Manufactured Product, ASQ 50th Anniversary Commemorative Reissue, D. Van Nostrand Company, Inc., 1980, ISBN 0-87389-076-0.
- W.A. Shewhart, Statistical Method from the Viewpoint of Quality Control, Dover Publications, New York, 1986, ISBN 0-486-65232-7.
- J.S. McConnell, B. Nunnally, and B. McGarvey, “The Dos and Don’ts of Control Charting – Part I,” Journal of Validation Technology 16 (1), 2010.
- J.S. McConnell, B. Nunnally, and B. McGarvey, “The Dos and Don’ts of Control Charting – Part II,” Journal of Validation Technology 17 (4), 2011.
- D.J. Wheeler, Advanced Topics in Statistical Process Control, SPC Press, Knoxville, Tennessee, 1995, ISBN 0-945320-45-0.
- W.E. Deming, Out of the Crisis, The Center for Advanced Engineering Study, M.I.T., Cambridge, Mass., 1986, ISBN 0-911379-01-0.
- J.S. McConnell, Analysis and Control of Variation, 4th ed., Delaware Books, 1987, ISBN 0-958-83242-0.
- J.F. MacGregor, “A Different View of the Funnel Experiment,” Journal of Quality Technology 22 (4), 255-259, 1990.
- E. Del Castillo, “Statistical Process Adjustment for Quality Control,” Wiley Series in Probability and Statistics, 2002.
- G. Mitchell, K. Abhivava, K. Griffiths, K. Seibert, and S. Sethuraman, “Unit Operations Characterization Using Historical Manufacturing Performance,” Industrial & Engineering Chemistry Research 47, 6612–6621, 2008.
- G. Mitchell, K. Griffiths, K. Seibert, and S. Sethuraman, “The Use of Routine Process Capability for the Determination of Process Parameter Criticality in Small-molecule API Synthesis,” Journal of Pharmaceutical Innovation 3, 105–112, 2008.
- K.L. Jensen, S.B. Vardeman, “Optimal Adjustment in the Presence of Deterministic Process Drift and Random Adjustment Error,” Technometrics 35 (4), 376-388, 1993.