The following is the product of a collaborative effort between two of IVT Network's highly esteemed event speakers, addressing questions frequently asked in the validation community.
How many lots or batches do I have to include in my process validation?
Process validation does not require a specific number of batches or lots. Many people believe they must use three lots to perform a process validation, but three is not necessarily adequate. Instead, focus on including as many sources of variation as you expect to encounter in commercial production: multiple raw material batches, multiple suppliers, multiple operators, shifts, equipment, etc. The number of lots required for your validation is then driven by how many lots it takes to capture those sources of variation.
Can I pool or combine data across multiple lots to get to the sample size I need?
It is generally not advisable to pool data across multiple lots in process validation. The goal is to be able to make a reliability and confidence level statement about each of the lots included in the validation, and if you combine the lots to get the sample size needed to make that statement, then you reduce the statistical power of the conclusion. If you must pool data across multiple lots, it can be done in the case of attribute (go/no-go) data, but not variables (numeric) data.
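To see why pooling weakens the per-lot claim, consider the zero-failure "success-run" relationship for attribute data (the lot and sample sizes below are illustrative, not prescribed): the reliability demonstrated at a given confidence is R = (1 − C)^(1/n), so it depends on the sample size behind each individual statement, not on the pooled total.

```python
def demonstrated_reliability(n, confidence=0.95):
    """Reliability demonstrated by n attribute samples with zero failures,
    via the success-run formula R = (1 - confidence)**(1/n)."""
    return (1.0 - confidence) ** (1.0 / n)

# Three lots of 100 samples each, all with zero failures (illustrative):
per_lot = demonstrated_reliability(100)   # statement about each lot: ~97%
pooled = demonstrated_reliability(300)    # statement about the pool: ~99%
# Pooling reaches ~99% reliability only for the combined population;
# each individual lot still supports only the weaker ~97% claim.
```

This is the quantitative version of the point above: combining lots buys a stronger statement about the pooled population at the cost of the per-lot conclusion.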
How can I link risk to sample size?
There are many methods for aligning risk to sample size in validation studies. One method is to assign a desired reliability and confidence level to each product characteristic or process operation, then select a sampling plan that will allow you to meet that reliability and confidence level assuming you pass the parameters of the sampling plan. For example, if a manufacturing process generates a weld of two crucial parts of a product, and the risk of the failure of the process to meet that characteristic’s process specification is high, you might assign a reliability of 99.7% and a confidence level of 95% to the weld process. Then, you would choose a sampling plan with an LTPD.05 = 0.3% because 100% - Reliability = LTPD. Assuming this is a variables plan and the process meets the Ppk and Pp for the sampling plan, that lot will pass and you’ll continue with the second lot and so on.
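For the attribute (c = 0) case, this relationship reduces to a simple formula: a lot passes only if all n samples conform, so n must satisfy (1 − LTPD)^n ≤ 1 − confidence. A minimal sketch (the function name is illustrative; variables plans would instead use Pp/Ppk acceptance criteria and typically need fewer samples):

```python
import math

def c0_sample_size(reliability, confidence):
    """Smallest n for a zero-failure (c = 0) attribute plan whose LTPD
    at the stated confidence equals 100% minus the required reliability."""
    ltpd = 1.0 - reliability  # e.g. 99.7% reliability -> LTPD = 0.3%
    # Require P(accept | p = ltpd) = (1 - ltpd)**n <= 1 - confidence
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - ltpd))

n = c0_sample_size(0.997, 0.95)  # the 99.7% reliability / 95% confidence weld example
```

The steep growth of n as reliability rises toward 100% is one practical reason high-risk characteristics are often measured with variables plans rather than attribute plans.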
Can I build each validation lot to be equal to the sample size I need?
Not recommended. Instead, it is highly recommended to build each validation lot to be equivalent to the lot size you plan to use in commercial manufacturing, then select the sample size in a representative way from each commercial-size lot produced for the validation study. Some argue for building at anticipated typical conditions (e.g., the same pace and process as commercial production); if you do, appropriate rationale should be provided. Limit builds, of course, are not intended for product going to the field, but even for those, strive to use the operators, processes, and equipment of normal production, run at the limit parameters.
What is LTPD and how does it affect validation?
Lot Tolerance Percent Defective (LTPD) is a measure of the percent defective a manufacturing process would have to be producing in order for a given sampling plan to routinely reject each lot. LTPD is associated with consumers’ risk, and is generally considered more crucial than AQL (see below). Patient risk is directly associated with LTPD and is the basis for choosing a sampling plan from a risk/reliability standpoint. For example, if the reliability required for a given process output is 99% and the confidence level is 95%, then sampling plans with LTPD.05 = 1% would be appropriate. For 99% reliability with a 90% confidence level, sampling plans with LTPD.10 = 1% would be appropriate.
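These relationships can be checked numerically. The sketch below (a simplified illustration, not a regulatory tool) finds the LTPD of an attribute plan (n, c) by searching for the percent defective at which the plan's probability of acceptance falls to the consumer's risk:

```python
import math

def accept_prob(n, c, p):
    """Binomial probability of observing at most c defectives in a
    random sample of n, when the true percent defective is p."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def ltpd(n, c, consumer_risk=0.05, tol=1e-7):
    """Percent defective at which plan (n, c) accepts lots with
    probability equal to consumer_risk (bisection search)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        if accept_prob(n, c, p) > consumer_risk:
            lo = p  # plan still accepts too often; defect rate must be higher
        else:
            hi = p
    return (lo + hi) / 2
```

For instance, a zero-acceptance plan with n = 299 has an LTPD.05 of about 1%, which is the 99% reliability / 95% confidence case described above.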
What is AQL and how does it affect validation?
Acceptable Quality Level (AQL) is a measure of the percent defective a manufacturing process would have to be producing in order for a given sampling plan to routinely accept each lot. AQL is associated with producers’ risk, and is not directly considered in the risk-based selection of a sampling plan. However, once the required reliability and confidence level have driven an LTPD that identifies a series of suitable sampling plans, AQL can be used to select from among them the plan that gives a good process the best chance of passing the validation.
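As an illustration of that selection step, compare two hypothetical attribute plans with roughly the same LTPD.05 of about 1% (the plan sizes are example values, not prescribed ones) and see which gives a good process the better chance of passing:

```python
import math

def accept_prob(n, c, p):
    """P(accept a lot) under plan (n, c): at most c defectives in a
    random sample of n, with true percent defective p (binomial)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

good_process = 0.001  # a good process running at 0.1% defective (illustrative)
pa_c0 = accept_prob(299, 0, good_process)  # zero-acceptance plan
pa_c1 = accept_prob(473, 1, good_process)  # one-acceptance plan, similar LTPD.05
# The c = 1 plan accepts the good process noticeably more often, i.e. it
# carries a lower producer's risk for roughly the same consumer protection.
```

Both plans offer the patient similar protection at 1% defective, but the larger, more forgiving plan is kinder to a genuinely good process, which is exactly the trade-off AQL helps you evaluate.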
Is it necessary to perform a normality test on my validation data? What if the normality test fails?
If you select a variables (numeric) sampling plan, then yes, it’s essential to determine whether the raw data come from a normal distribution. Variables sampling plans use the normal distribution as a model for the spread of the data relative to the specification limits. If the data do not fit a normal distribution, the capability of the manufacturing process to meet the specification becomes unpredictable. A common rule of thumb is to perform a normality assessment and conclude the distribution is normal when the test’s p-value is greater than 0.05; a p-value less than or equal to 0.05 leads to a conclusion of non-normality.

It’s okay not to be normal, and many manufacturing processes naturally produce other distributions (e.g., Weibull distributions are common with burst tests and lognormal distributions are common with tensile tests). If your validation data do not fit a normal distribution, identify the distribution they do fit and make sure it is a logical and appropriate fit for the data. Document your rationale, transform the data and the specification limits (e.g., using a Box-Cox power transformation) or choose a distribution-free method of analysis, and assess the capability of the transformed data. Alternatively, if the data fit a non-normal distribution (e.g., Weibull) that is a logical selection, compute capability directly using that distribution. Note that attribute data are exempt from the requirement to perform a normality assessment.
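A quick sketch of this workflow in Python, using SciPy's Shapiro-Wilk test and Box-Cox transform on simulated right-skewed data (the data and distribution parameters are invented for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated right-skewed measurements, loosely burst-test-like (illustrative)
data = rng.lognormal(mean=3.0, sigma=0.8, size=200)

stat, p = stats.shapiro(data)   # Shapiro-Wilk normality test
if p <= 0.05:                   # rule of thumb: p <= 0.05 -> conclude non-normal
    # Box-Cox requires strictly positive data; it returns the transformed
    # values and the fitted lambda (lambda near 0 acts like a log transform)
    transformed, lam = stats.boxcox(data)
    stat_t, p_t = stats.shapiro(transformed)  # re-test the transformed data
```

If the transformed data pass the normality test, capability can be assessed against the correspondingly transformed specification limits, per the approach described above.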