
Dealing with Discontinuities within the PV Lifecycle

Abstract

This article is a follow-up to a related paper entitled Remembering Ken Chapman: Points to Consider for 3-Stage PV (JVT Apr 2018). It examines the crucial but unaddressed issue of discontinuities within PV, arguing that they are not the exception but the norm, and should be treated accordingly. Several examples are included in support of the premise that preventable manufacturing and quality problems occur because of a myopic approach, allied to a ‘fear-of-data’ mindset, that fails to recognize this fact. The article introduces and illustrates the concept of ‘link data’ as the common denominator in responding to current deficiencies. It concludes on a positive and practical note, outlining a mechanism whereby root cause deficits in knowledge management and process understanding can be successfully eliminated, to the benefit of all.

Migrating to 21st Century PV provides major opportunities, as well as posing some perceived challenges[1], for pharmaceutical and biotech manufacturers and their CMOs. Proper interpretation of the term ‘continuum’ is fundamental to a successful transition, and the model outlined in the diagram[2] is aimed at delivering a comprehensive and comprehensible organisational response capable of withstanding regulatory scrutiny.

Key to success #1 is realising that, paradoxical as it may sound, discontinuities are a fact of life within and across the manufacturing continuum. This shouldn’t come as a major surprise, given that we are dealing primarily with batch processes, which are compartmentalised by design, with physical and chronological points of separation incorporated. Working with this reality will maximise process robustness and ensure product quality, unlike the traditional Swiss cheese approach, where errors and defects slip through the aligned holes undetected.

Two examples, by way of illustration. Consider first the discontinuity between the completion of one Unit Op and the commencement of another. Acceptance criteria for such transitions (duration, temperature, containment, waste streams etc.) are often unstated or assumed, and consequently unverified in the formal sense. Solution: register UO-to-UO transitions as operations or sub-operations and connect their link data to the related UOs.

Consider next the discontinuity between PPQ and CPV, validation groups being normally responsible for one and manufacturing or process monitoring departments for the other. As with Unit Ops, acceptance criteria for such transitions (controllability, resolution, ownership, review frequency etc.) are often unstated or assumed, and consequently unverified in the formal sense. Solution: register PPQ-to-CPV transitions and their acceptance criteria as data or metadata for process parameters.
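To make the idea concrete, the following minimal Python sketch shows one possible way of registering such transitions as items in their own right, with donor/receiver ownership and acceptance criteria attached. It is illustrative only: the field names, unit operations and criteria are hypothetical, not a prescribed schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class LinkDatum:
        """A single link-data element and its acceptance criterion."""
        name: str
        acceptance_criterion: str
        observed_value: Optional[str] = None   # populated, and formally verified, at execution

    @dataclass
    class Transition:
        """A registered discontinuity (UO-to-UO, PPQ-to-CPV, ...) treated as an operation/item."""
        link_from: str
        link_to: str
        donor: str        # technology-transfer style ownership, see below
        receiver: str
        link_data: List[LinkDatum] = field(default_factory=list)

    # Hypothetical registrations mirroring the two examples above.
    uo_to_uo = Transition(
        link_from="UO-110 Blending", link_to="UO-120 Compression",
        donor="Manufacturing", receiver="Manufacturing",
        link_data=[
            LinkDatum("Duration", "No more than 4 h between completion and commencement"),
            LinkDatum("Temperature", "15-25 degC during the hold"),
            LinkDatum("Containment", "Closed IBC, integrity verified"),
        ],
    )

    ppq_to_cpv = Transition(
        link_from="PPQ", link_to="CPV",
        donor="Validation", receiver="Process Monitoring",
        link_data=[
            LinkDatum("Controllability", "All CPPs within NOR across PPQ lots"),
            LinkDatum("Resolution", "Data captured at batch and intra-batch level"),
            LinkDatum("Review Frequency", "Monthly CPV review for the first year"),
        ],
    )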

Other blind spots, all based on project experience, include:

  • Inaccessibility of the necessary R&D or clinical expertise when assigning relative criticality to quality attributes.
  • Unavailability of process specifications when evaluating measurement uncertainty within metrology.
  • Omission of communications requirements from local controller specifications.
  • Misalignment between process risk assessment outputs and alarm management system inputs.
  • Unstated definition of the maximum permissible time between sampling of material and its subsequent analysis and reporting.
  • Unsynchronised change control between process descriptions and batch records.

All these examples share a common root cause (i.e. tunnel vision) and the majority would fail any meaningful data integrity assessment. The flipside is that they also enjoy a common solution, i.e. mandatory and ubiquitous declaration of interfaces/data/ownership as part of system and process design. In terms of ownership, technology transfer concepts can and should be applied, with donor and receiver responsibilities clearly delineated. 

The above examples span a range of disciplines and facets of manufacturing, and many of the proposed data elements will already have appeared in the appropriate system specifications or traceability matrices, albeit in fragmented format. In such cases, the objective of the initiative being outlined here is to ensure that link data essentials (i.e. the things that can hurt you and cause 483s or PAI failure) are taken into consideration consistently and systematically rather than randomly and arbitrarily. My own recommendation is that link data frameworks should share a common template/lexicon, along the lines of that shown below for the examples discussed. In populating such templates, donor/receiver SMEs should be consulted as to the range of link data required. Note that the examples are illustrative only, and complex situations are likely to result in larger datasets, often with two-way communication involved.

    ILLUSTRATIVE LINK DATA SCENARIOS & CRITERIA

    #   LINK FROM             LINK TO        LINK DATA
    1   Unit Op               Unit Op        Duration; Temperature; Containment
    2   PPQ                   CPV            Controllability; Resolution; Review Frequency
    3   R&D                   CQA            Safety Factor; Efficacy Factor; SQuIPP Category
    4   Parameter             Metrology      Measurement Units; Decimal Places; Process Tolerance
    5   Controller            DCS            Operating Ranges; Signal Format; Polling Frequency
    6   FMEA                  Alarm          Process Criticality; Process Range; Alarm Setting
    7   Sample                Analysis       Sample Size; Hold Temperature; Max Hold Time
    8   Process Description   Batch Record   Materials; Parameters; In-Process Controls
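For readers who prefer a structured rendering, the following minimal sketch (hypothetical field names and values) shows how the common template/lexicon above might be expressed as data, with scenario 6 (FMEA to Alarm) populated for illustration.

    # Shared lexicon: every registered link scenario declares the same set of fields.
    LINK_LEXICON = ("link_from", "link_to", "link_data",
                    "donor_owner", "receiver_owner", "two_way")

    fmea_to_alarm = {
        "link_from":      "FMEA",
        "link_to":        "Alarm",
        "link_data":      {"Process Criticality": "High",
                           "Process Range": "70-80 degC",           # invented values
                           "Alarm Setting": "Hi 79 degC / HiHi 81 degC"},
        "donor_owner":    "Quality Risk Management",                # hypothetical ownership
        "receiver_owner": "Automation",
        "two_way":        False,   # set True where two-way communication is involved
    }

    # Simple lexicon check at registration time.
    assert set(fmea_to_alarm) == set(LINK_LEXICON)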

Regarding general methodology, the traditional (Swiss cheese) approach relies heavily on Gap Analysis. By its nature this is an after-the-fact activity, one that seems to tolerate or even expect failure or inadequacy, contrary to the principles of QbD. My own preference is to pre-empt and eliminate as much of GA as possible, by elevating links to pivotal status and policing them accordingly across their lifecycle.

Links clearly cover a multitude, and they are all ‘synaptic’ in their way. They can be physical or functional, automated or manual, discrete or continuous, micro or macro, permanent or temporary. Depending on their complexity or significance, they can be treated as properties of existing items, as items in their own right, or as systems/subsystems with their own componentry. And the preceding and other examples all share three key properties: they are all real, they are all in relationships, and they are all amenable to quantification.[3]
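Staying with the data theme, this classification lends itself to a simple typed representation. The sketch below is an illustration only (the dimension names follow the text; the example link and its criterion are invented), not a definitive model.

    from dataclasses import dataclass
    from enum import Enum

    class Form(Enum):
        PHYSICAL = "physical"
        FUNCTIONAL = "functional"

    class Actuation(Enum):
        AUTOMATED = "automated"
        MANUAL = "manual"

    class Persistence(Enum):
        PERMANENT = "permanent"
        TEMPORARY = "temporary"

    @dataclass
    class Link:
        """A link treated as an item in its own right, classified along the dimensions above."""
        link_from: str
        link_to: str
        form: Form
        actuation: Actuation
        persistence: Persistence
        quantified_by: dict   # name -> measurable criterion; links are amenable to quantification

    # Example: the sample-to-analysis link from the table, with an invented hold criterion.
    sample_to_analysis = Link("Sample", "Analysis", Form.PHYSICAL, Actuation.MANUAL,
                              Persistence.TEMPORARY, {"Max Hold Time": "24 h at 2-8 degC"})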

Final remark on GA: the root cause of all gaps is the absence of a link.

Some Additional Discontinuities:

  • Inconsistent definitions and abbreviations across documents.
  • Conflicting CQA listings within CMC submission and VMP.
  • Unawareness of prevailing GMPs and/or their specifics.
  • Unsynchronised criticality conventions across disciplines.
  • Mismatch between Org Chart and current actuality.
  • Inaccurate calculation of residual risk for parameters.
  • Low detectability at points of process weakness or risk.

Key to success #2 is realising that the total lifecycle can be assembled and managed via a series of interconnected and overlapping stages and datasets. Using a transportation analogy, this involves the sharing of information between successive pairs of ‘carriages’ in the overall ‘train’. To illustrate, consider datasets 2 & 3 in the model shown earlier. Dataset #2 cross-references CQAs to their implicated Unit Ops, providing justification in matrix or tabular format. Dataset #3 in its turn extends these (and only these) Unit Ops with input material and process parameter commitments, and associated variabilities, in more detailed tabular format. This is a rigorous and disciplined procedure, competing as it does with ad hoc and unstructured process narratives. (It is also an adaptable and iterative process, accommodating knowledge acquired over the totality of the lifecycle.) Note that descriptive narratives still have a role to play, but with suitably designed support templates or appendices in place. The benefits of a data-driven approach, further illustrated below, should be self-evident, including precision, consistency, scalability and amenability to categorisation and reuse; and, lest we forget, efficiency, agility and flexibility.
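As a further illustration of the ‘carriage-to-carriage’ handover, the following sketch (all CQA, Unit Op and parameter names invented) shows dataset #2 driving dataset #3, so that detailed commitments are generated for the implicated Unit Ops, and only those.

    # Dataset #2: CQA -> implicated Unit Ops, with justification (matrix/tabular in practice).
    dataset_2 = {
        "Assay":       {"unit_ops": ["Blending", "Compression"],
                        "justification": "Content uniformity risk"},
        "Dissolution": {"unit_ops": ["Granulation", "Compression"],
                        "justification": "Granule porosity and tablet hardness impact"},
    }

    # Dataset #3: Unit Op -> input material and process parameter commitments.
    dataset_3 = {
        "Blending":    {"materials": ["API", "Excipient blend"],
                        "parameters": {"Blend Time": "10 +/- 2 min"}},
        "Granulation": {"materials": ["Binder solution"],
                        "parameters": {"Spray Rate": "80-120 g/min"}},
        "Compression": {"materials": ["Lubricated blend"],
                        "parameters": {"Main Compression Force": "8-12 kN"}},
        "Coating":     {"materials": ["Coating suspension"],
                        "parameters": {"Inlet Temperature": "55-65 degC"}},
    }

    def implicated_commitments(cqa: str) -> dict:
        """Extend the Unit Ops implicated by a CQA (and only those) with their commitments."""
        return {uo: dataset_3[uo] for uo in dataset_2[cqa]["unit_ops"]}

    print(implicated_commitments("Dissolution"))   # Granulation and Compression only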

Key to success #3 is realising that implementing concepts such as the above is a non-threatening, beneficial and team-based endeavour. It requires little by way of capital investment and delivers clearly defined, testable workflows and dataflows, resulting in maximal transparency and economy of compliance. In situations where this proves to be a difficult sell, be aware that it is also an unavoidable and positive consequence of 3-Stage PV. 

Discussion: It is important to note that data and documentation are complementary items and by no means mutually exclusive. The datasets that I have been advocating are certainly all amenable to printing and version control following QMS norms. The resultant document formats are highly structured, however, providing direct traceability to PV-related guidelines and standards on the one hand, and serving as executable protocol attachments, where appropriate, on the other. Finally, where traditional documents rely heavily on tables of contents, datasets rely more on indexes and ‘bills of material’ as their point of entry. And as indicated at the beginning, realistically speaking, you can’t have one without the other, i.e. data-driven documentation.

Benchmark: What I have been promoting is exactly in line with the concepts described by FDA’s Richard Friedman in Using QRM to Assure Lifecycle Process, Equipment, and Facility Improvement (JVT Dec 2015). His paper discusses how a mature quality system ‘assures a state of control throughout the product lifecycle by vigilantly managing manufacturing and quality risks’.  ‘If the system is working, process and facility vulnerabilities that lead to operational variation and substandard pharmaceutical quality will be detected, understood, and addressed’. The paper emphasises the key role of KM in ‘connecting the dots’, resulting in a QRM system that ‘relies on relevant current and accumulated knowledge to make the right decisions’.  

By way of response, my own view is that the extent to which a process is ‘known’ can be evaluated and demonstrated by applying sensitivity factors to variables (ref. my Ken Chapman paper for details on sensitivity). If process definition datasets are filtered such that the ‘sensitised’ variables associated with a given CQA are listed sequentially, then the following representation emerges, providing a transparent declaration of relative risk and process understanding on the one hand, and an indication of proximity to the CQA ‘summit’ on the other. The same information could be displayed for the entire set of CQAs in comparative bar chart format if preferred. This idea had been on my mind for some time, but it crystallised while I was watching a Tour de France hill climb on TV this summer, and it is as close to KM continuity as I can envisage.
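A minimal sketch of that representation follows, assuming sensitivity factors have already been assigned (all variable names and factors are invented). Sorting the sensitised variables for one CQA yields the ‘climb’, and the same data could feed a comparative bar chart across CQAs.

    # variable, associated CQA, assigned sensitivity factor (0 = insensitive, 1 = highly sensitive)
    sensitised_variables = [
        ("Blend Time",        "Assay",       0.4),
        ("Compression Force", "Assay",       0.7),
        ("Spray Rate",        "Dissolution", 0.9),
        ("Inlet Temperature", "Dissolution", 0.3),
        ("Main Force",        "Dissolution", 0.6),
    ]

    def climb_profile(cqa: str):
        """List a CQA's variables in ascending sensitivity order - the approach to the 'summit'."""
        linked = [(name, s) for name, c, s in sensitised_variables if c == cqa]
        return sorted(linked, key=lambda item: item[1])

    for name, s in climb_profile("Dissolution"):
        print(f"{name:<20} {'#' * round(s * 10)}  ({s:.1f})")   # crude text-mode bar chart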

Last word: In an ideal environment, the model that I have outlined would be the responsibility of a single or primary process owner, equivalent to a general practitioner in medical terms, enabled and empowered by a secure KM system that provides integrated, dynamic and distributed access to all data elements and their relationships. Realistically speaking, however, and bearing in mind the complexities of real-life manufacturing and its delivery, the model should be viewed as a dashboard or super-specification, paving the way for, and inviting examination of, deeper and established systems and datasets (e.g. raw materials, instruments, alarms, trends, personnel).

Postscript: In researching this article, I re-read The Archaeology of Knowledge by Michel Foucault (1971). Foucault is hard going, but his advice on KM is universal and succinct: ‘know your discontinuities’ and ‘mind the gap!’. I had gradually arrived at the same conclusion myself, based on my training and lessons learned over the course of a three-decade career as a validation professional. In any case, I hope that the ideas I have expressed will encourage readers to revisit and refine their existing validation and KM strategies. To reiterate, my recommendation is that MS&T systems and components be specified and designed, not in isolation, but with link data assigned and interconnected, and validation programs executed accordingly. Finally, the ‘interconnected’ bit appeals greatly to my engineering instincts. Given that items can link in both directions (i.e. send & receive), and enjoy multiple assignments if required, Richard Friedman’s suggestion that we connect the dots is no longer metaphorical: we have found ourselves a building block with which we can form clusters and articulate a network.
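By way of a closing sketch, and purely as an illustration of the ‘building block’ idea (item names are drawn from the earlier table, the pairings are hypothetical), bidirectional link assignments can be held as the edges of a simple network, from which clusters emerge directly:

    from collections import defaultdict

    # Link assignments as (sender, receiver) pairs - a handful of scenarios from the table.
    edges = [
        ("FMEA", "Alarm"), ("Parameter", "Metrology"), ("Parameter", "Alarm"),
        ("PPQ", "CPV"), ("Process Description", "Batch Record"),
    ]

    network = defaultdict(set)
    for sender, receiver in edges:
        network[sender].add(receiver)    # send direction
        network[receiver].add(sender)    # receive direction - items link both ways

    def cluster(start: str) -> set:
        """Collect every item reachable from 'start': one connected cluster of the network."""
        seen, stack = set(), [start]
        while stack:
            item = stack.pop()
            if item not in seen:
                seen.add(item)
                stack.extend(network[item] - seen)
        return seen

    print(cluster("Parameter"))   # {'Parameter', 'Metrology', 'Alarm', 'FMEA'}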

[•---(x)---•]

 

Footnotes

[1] Paraphrasing the regulatory position, it really should not be a ‘challenge’ to implement 21st Century PV, given that responsible companies have been designing robust processes and performing ongoing, vigilant monitoring all along. Furthermore, other successful industries - automobile, petroleum (strong users of PAT), semiconductor, aerospace - have been doing this for decades, so the concept is certainly not new. It therefore represents more a transformation than a challenge for the less advanced firms. Such companies might indeed find it difficult to move from a reactive QC mindset to a proactive, QA-driven operation based on iterative knowledge.

[2] For simplicity, GMP readiness, F&E qualification, methods validation, environmental monitoring, supplier qualification, and storage & shipping qualification are excluded from the diagram. These, along with risk assessment and change control, are implicit or the subject of parallel models. Feedback between each of the model’s stages is also assumed. It should be apparent from the graphic that the model itself is intrinsically feedforward.

[3] Note that mathematical links are equally real, with inputs, formulae, calculations (human/machine) and outputs often omitted from the validation process.

Abbreviations

A/M Auto/Manual

CMC Chemistry, Manufacturing & Controls

CMO Contract Manufacturing Organisation

CPV Continued Process Verification

CQA Critical Quality Attribute

CS Control Strategy

DCS Distributed Control System

F&E Facilities & Equipment

FMEA Failure Mode Effects Analysis

GMP Good Manufacturing Practice

GA Gap Analysis

ICH International Conference on Harmonisation

INST Instrument

JVT Journal of Validation Technology

KM Knowledge Management

MBR Master Batch Record

MS&T Manufacturing Science & Technology

NOR Normal Operating Range

PAI Pre-Approval Inspection

PAR Proven Acceptable Range

PAT Process Analytical Technology

PFD Process Flow Diagram

PPQ Process Performance Qualification

PV Process Validation

QA Quality Assurance

QbD Quality by Design

QC Quality Control

QMS Quality Management System

QRM Quality Risk Management

QTPP Quality Target Product Profile

R&D Research & Development

S&E Safety & Efficacy

SME Subject Matter Expert

SOP Standard Operating Procedure

SOV Source of Variability

SQuIPP Safety, Quality, Identity, Potency, Purity

UO Unit Operation

VMP Validation Master Plan



