New Paradigms for Process Validation

Both the United States and the European Union have recently evolved guidance on how to execute process validation (1, 2) with the prospect of a more appropriate life-cycle approach. It goes beyond the traditional three to five lots run at the center point of proposed ranges for operating parameters. New approaches leverage product design and process development information. They facilitate adapting the quality by design (QbD) paradigm to allow for a science- and risk-based selection of critical process parameters, key process indicators, and appropriate specification criteria. The number of runs for process performance qualification (PPQ) must be determined using a risk-based understanding and control of process variability.
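
To illustrate one way such a risk-based rationale can be made quantitative (a hypothetical sketch in Python, not a method endorsed at the forum), the short calculation below uses the zero-failure "success-run" relationship C = 1 - R**n to ask how many consecutive passing runs would support a given confidence C that the per-batch success probability is at least R. In practice this is only one input among many; the large run counts it yields for high assurance help explain why prior knowledge and stage 1 data must carry much of the burden.

import math

def ppq_runs_needed(reliability, confidence):
    # Smallest n such that 1 - reliability**n >= confidence (zero failures allowed).
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Example: demand more assurance where residual risk after stage 1 is higher.
for r, c in [(0.90, 0.90), (0.95, 0.90), (0.95, 0.95)]:
    print(f"reliability >= {r}, confidence {c}: {ppq_runs_needed(r, c)} passing runs")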

 

This approach allows for more comprehensive use of multiple data sources to strengthen process understanding. Once process performance qualification has been executed, a stage of continued process verification begins for ensuring that a qualified control strategy is sufficient and that the process remains in a state of control. Following an appropriate time frame, process verification can be reduced to standard continuous process monitoring levels for ensuring process robustness and stability.

PRODUCT FOCUS: BIOPHARMACEUTICALS
PROCESS FOCUS: MANUFACTURING
WHO SHOULD READ: PROCESS/PRODUCT DEVELOPMENT, MANUFACTURING, QA/QC
KEYWORDS: QUALIFICATION, QUALITY BY DESIGN (QBD), SCALE-UP, SCALE-DOWN, PRODUCT LIFE-CYCLE MANAGEMENT
LEVEL: INTERMEDIATE

The January 2013 CMC Strategy Forum in Washington, DC examined available regulatory guidances and attempted to answer certain remaining unanswered questions regarding implementation of the new process validation paradigm. Chaired by Rohin Mhatre (Biogen Idec) and Wassim Nashabeh (Genentech/Roche), the forum addressed such issues as

  • how much and what type of data can be used to define when a process is ready for qualification
  • how the required number of qualification runs is defined based on that knowledge
  • what parameters should be included in continued verification
  • how and when to move on to routine process monitoring.

The day encompassed two sessions, each comprising presentations followed by an interactive panel discussion. Moderators facilitated questions and comments from the audience.

 

Morning Presentations

 

The European Approach: Mats Welin, a senior expert at the Swedish Medical Products Agency, opened the session with an introductory talk titled “Process Validation: What to Put in the File — EU Perspective.” He said that validation and evaluation are essential to establishing the manufacturing process steps of biotechnology-derived products. Evaluation and validation data provide essential information on the reproducibility and robustness of the process steps, so they are important to guaranteeing consistency in product quality. Those data come from studies performed on product and process steps representative of the commercial process. That may cover a broad range of situations and experiments (e.g., full scale, pilot scale, laboratory scale, and scaled down), depending on the objectives of the studies carried out during development (e.g., consistency, viral safety evaluations, and process-related impurity clearance).

The Q5 and Q11 documents from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) address several important aspects or concepts relating to evaluation/validation for medicinal products containing biotechnology-derived proteins as active substance (3–8). But there has been no guidance at the EU level to cover other aspects such as process- and product-related impurity clearance (e.g., host cell proteins and DNA), column/membrane sanitization and lifetime, hold times, reprocessing, pooling of intermediates, and selection of batches to be included in evaluation/validation studies. All those elements contribute to good process and product understanding and are thus needed by assessors who evaluate market authorization applications. So it was considered important to fill this gap through a guideline on process validation.

By contrast with the FDA guidance on these matters, the EU guideline will focus more narrowly on data requirements for process validation and evaluation in marketing authorization applications or variations. It will take into account existing guidance and new concepts and will describe how to integrate them in the evaluation/validation approach. Drafting of this guideline is not yet finalized, but Welin discussed the current thinking and addressed issues of particular concern.

Stage 1 Process Design: The second speaker in the session was Robert Kuhn of Amgen Inc., who spoke on “Integration of Prior Knowledge, Small-Scale Studies, and Manufacturing Data for Efficient and Effective Process Design.” The purpose of stage 1 process design is to develop an effective commercial control strategy that consistently delivers product of required quality. Stage 1 includes developing an understanding of and designing appropriate controls for significant sources of process variation. Effective and efficient process design should incorporate knowledge from all relevant sources. Prior knowledge and risk management can be strongly leveraged throughout design efforts and can be used to minimize non–value-added work while driving toward an improved understanding of the process.

 

CMC Forum Series

 

The CMC Strategy Forum series provides a venue for biotechnology and biological product discussion. These meetings focus on relevant chemistry, manufacturing, and controls (CMC) issues throughout the lifecycle of such products and thereby foster collaborative technical and regulatory interaction. The forum committee strives to share information with regulatory agencies to assist them in merging good scientific and regulatory practices. Outcomes of the forum meetings are published in this peer-reviewed journal with the hope that they will help assure that biopharmaceutical products manufactured in a regulated environment will continue to be safe and efficacious. The CMC Strategy Forum is organized by CASSS, an International Separation Science Society (formerly the California Separation Science Society), and is supported by the US Food and Drug Administration (FDA).

Both early development data and platform knowledge can be used to identify likely sources of variability and facilitate the design of focused and purposeful process characterization studies. Manufacturing data — including some from similar products and processes — can provide valuable information on expected process variability and capability. Kuhn discussed the benefits and considerations for use and integration of relevant knowledge sources for efficient and effective design, including implications for execution of stage 1 and design of stage 2 process validation.

Scaling Down: The third speaker was Nathan McKnight of Genentech/Roche, who spoke on “Scale-Down Model Qualification and Use in Process Characterization.” Scale-down models are indispensable tools in development and characterization of biopharmaceutical manufacturing processes. During process characterization, such models enable evaluation of variability in input materials and parameters on a process to an extent that simply is not feasible at manufacturing scale. To be effective and credible, a scale-down model must be designed and executed (and ultimately demonstrated) as an appropriate representation of its manufacturing process. The ideal scenario is a model that reproduces manufacturing-scale behavior with a high degree of fidelity, which largely has been achieved for some “standard” unit operations. However, perfect replication of manufacturing-scale behavior is not a prerequisite for a model to generate accurate information to enable an understanding of manufacturing-scale behavior.

By definition, a scale-down model is an incomplete representation of a more complicated, expensive, and/or physically larger system. Such models cannot be expected to perfectly represent all aspects of a manufacturing process. Based on available scaling principles, a continuum in the simplicity of scaling down standard unit operations results in models designed to represent whole unit operations — or only certain aspects of them. Information generated by these models must be appropriately interpreted based on how closely they represent manufacturing-scale behavior (and which aspects thereof).

Taking the above factors into account, scale-down model qualification starts with justifying its design and function. Details of how to demonstrate a model as representative of manufacturing-scale (including selection of appropriate statistical tools) depend on what the model is intended to represent as well as practical considerations for generating a realistic data set (not necessarily an ideal one) from the manufacturing-scale system. Finally, measures can be taken at each stage of process validation (and in the control strategy) to mitigate uncertainties from use of scale-down models.
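
As a hedged illustration of the kind of statistical tool that might support such a demonstration (the data, equivalence margin, and function below are hypothetical, not from the forum), a two one-sided t-test (TOST) can assess whether a scale-down model's output agrees with manufacturing scale within a prejustified margin:

import numpy as np
from scipy import stats

def tost_equivalence(small, large, margin):
    # TOST p-value for the hypothesis |mean(small) - mean(large)| < margin.
    small, large = np.asarray(small, float), np.asarray(large, float)
    n1, n2 = len(small), len(large)
    diff = small.mean() - large.mean()
    sp2 = ((n1 - 1) * small.var(ddof=1) + (n2 - 1) * large.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)   # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)       # H0: diff >= +margin
    return max(p_lower, p_upper)                          # equivalence if < alpha

# Hypothetical step-yield data (%): scale-down runs vs manufacturing-scale lots.
small_scale = [88.1, 87.5, 89.0, 88.4, 87.9, 88.7]
full_scale = [88.9, 88.2, 87.8, 88.5]
print("TOST p-value:", round(tost_equivalence(small_scale, full_scale, margin=3.0), 4))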

 

Morning Discussion

 

Morning session presentations were followed by a roundtable discussion of specific questions posed to the presenters along with Christian Klock of Sanofi Pasteur.

When does stage 1 process design begin: in early preclinical development, after proof-of-concept studies, or as a commercial process is being defined? If most information justifying or supporting high risks comes from early development, then process design most likely has begun in early development. But it is difficult to define a “commercial process” in this case because significant input is likely to come from early studies and prior knowledge feeding into risk assessments, and that can influence how a commercial process will look. Even though knowledge from early development is extremely useful in supporting process design, “formal” stage 1 design usually begins as the actual commercial process is being defined. Comparability can bridge early development and commercial process definition/changes.

For certain molecules (e.g., monoclonal antibodies, MAbs) these days, stage 1 could be considered to occur before the product candidate is even discovered. This is primarily due to the high commonality across MAb manufacturing processes. The A-MAb case study, for example, includes a great deal of data based on prior knowledge, essentially defining many typical unit operations and their interactions with such products (9).

How should process and product knowledge for novel or nonplatform products be assessed during early stages of development? Can platforms help? Leveraging prior knowledge is valuable in the early stages of development before molecule-specific data become available. Such information forms a basis of both product and process understanding and thus aids in development strategy and risk management. The relevance of prior knowledge should be assessed for associated risks according to how it is being used, with qualitative knowledge used strategically and quantitative data used predictively. In early development stages, such knowledge is often used “directionally” to drive toward effective process design. Later, additional product-specific studies can refine and demonstrate a design’s effectiveness.

 

Sources of Information

 

Data from product design and selection: product “design intent,” quality attributes, and behavior

Product characterization and stability studies: product quality attributes and behavior

Early process design studies: process design intent, general process performance

Pilot and clinical manufacturing: impact of process on QAs, integrated process performance

Facility knowledge: equipment constraints, characteristics, and operating conditions

General scientific knowledge, engineering principles, and literature: foundational understanding and guidance

Modeling and simulation: insights dependent on type and resolution of model

Prior/platform knowledge: experience from similar products and processes

Process characterization: effect of input variables on process performance

Platform processes may have similar product-quality attributes (PQAs) and control requirements as well as process flow, operation, and attribute control points. Platforms often share raw materials and analytical methods, allowing companies to leverage data as much as possible with similar facilities and equipment. When applying either a platform or prior knowledge, consider the following: product and process similarity (design, operation, materials, equipment), control strategy requirements and analytical method parallelisms, available prior knowledge for providing robust conclusions, alignment with scientific and available product-specific knowledge, and data reliability and traceability.

As long as strengths and weaknesses are understood, information is of value regardless of where it comes from (see the “Sources” box). Prior knowledge of molecular structure is useful at early stages to highlight specific product variants to look for and target the types of analytical methods required to assess them. Although general assumptions can be made about class-specific attributes (e.g., MAb terminal heterogeneity), inevitably some molecules will not align with dogma. The value of general assumptions depends on the depth such knowledge can reach (how specific they are to molecular structure/function, as in what elements of glycoform structure do or do not affect Fc-receptor binding).

At what point do we accept that an attribute is noncritical even for a class-specific molecule? Regulators appear to be reluctant to allow such conclusions across the board, preferring to see justification in every case.

How can we best prioritize and document risks studied in process characterization and justify not studying some? Process characterization (PC) is but one source of knowledge that can be used to identify risks in stage 1 design. PC should focus on closing knowledge gaps, particularly for high risks. Even if high risks exist, significant knowledge may be available from other sources such as early design studies, platform knowledge, and general scientific understanding. A documented risk-assessment process should be used to assess process and product risks as part of planning. Those risks should be evaluated against reliable existing knowledge to develop the PC strategy.

Evaluation should assess all potential effects on relevant PQAs. The rationale for not studying operating parameters in PC should be documented in an appropriate risk-management strategy within the quality system and communicated in summary to regulators to justify the chosen PC strategy. Those parameters are typically controlled in a commercial manufacturing setting (at a lower visibility), but such a description is often missing in applications. That leaves doubts for regulators/reviewers if all relevant parameters are not covered by PC studies or other means.

Using a well-defined risk-management system, carrying out risk assessments with a number of cross-functional experts is the best way to capture knowledge and identify the “risk continuum” (as described in the A-MAb case study). This forms a structured conversation resulting in documented rationale. Product knowledge assessments (PKAs) have particular utility in identifying both what you know and don’t know — thus directing the PC studies needed for further understanding. Creating a defined “risk-scoring matrix” within the PKA allows not only for determining whether something has a particular risk associated with it, but also what level that risk is. However, relying solely on mathematics requires caution, and further justification of the numbers is often needed. Risk assessment can be in the eye of the beholder, and documentation should provide a clear understanding of the underlying thought process. The boundary of a risk assessment should be clearly defined before that assessment is conducted. For example, is scored risk based on severity to patients or only to the next processing step?
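
A minimal sketch of such a risk-scoring matrix follows (the parameters, scores, and level thresholds are invented for illustration and would need justification in a real assessment):

from dataclasses import dataclass

@dataclass
class RiskItem:
    parameter: str
    severity: int       # impact on patient or next step (1 low to 5 high)
    occurrence: int     # likelihood of excursion        (1 low to 5 high)
    detectability: int  # 1 = easily detected ... 5 = hard to detect

    @property
    def score(self) -> int:
        return self.severity * self.occurrence * self.detectability

    @property
    def level(self) -> str:
        # Hypothetical thresholds; a real PKA would justify these cut points.
        return "high" if self.score >= 45 else "medium" if self.score >= 20 else "low"

items = [
    RiskItem("column load density", 4, 3, 2),
    RiskItem("buffer pH", 3, 2, 1),
    RiskItem("harvest hold time", 5, 2, 4),
]
for it in sorted(items, key=lambda r: r.score, reverse=True):
    print(f"{it.parameter:22s} score={it.score:3d} -> {it.level} risk")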

The types of risks not studied can vary depending on

  • risk level (e.g., low risks identified in a risk assessment)
  • detectability of risk (high detectability can mitigate even high risks)
  • availability of substantial prior knowledge regarding the potentially low occurrence of a risk, regardless of its impact
  • potential to control a risk factor (the more controllable it is, the less need for studying it)
  • understanding of the applicability of prior knowledge to a particular molecule or process.

Certain risks (e.g., viral/microbial control) appear to require full study and “validation” (e.g., of hold times) regardless of prior knowledge. Modular platform knowledge (e.g., viral filtration) does provide a certain level of relief.

Which studies can enhance the success of process scale-up during late-stage clinical development and postapproval using small-scale models? In general, scale-up success is improved by designing small-scale studies that increase understanding of how certain manufacturing steps affect desired quality outputs. Wherever appropriate, companies can evaluate selected step(s) operating in worst-case and/or abnormal conditions based on high risks identified (e.g., cumulative hold times, spiking challenges) to support or demonstrate the robustness and capability of a process to deliver product of intended quality in such conditions (process robustness). Studies performed early on can use platform knowledge to identify worst-case conditions. Design of experiments (DoE) and/or small-scale studies can provide data on variables likely or unlikely to affect process performance, help you explore the impact of unit operations on PQAs (identification of critical control points), and assist in preliminary parameter classification (both high and low risk).
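
For illustration, the snippet below builds a small two-level fractional factorial screening design of the kind such studies often use (the factor names, ranges, and design resolution are hypothetical, not drawn from the forum):

import itertools

# Hypothetical factors for a chromatography step with low/high settings.
factors = {
    "load_density_g_per_L": (20, 40),
    "pH": (6.8, 7.4),
    "conductivity_mS_cm": (5, 15),
    "temperature_C": (18, 24),
}

names = list(factors)
runs = []
for a, b, c in itertools.product([-1, 1], repeat=3):
    d = a * b * c  # 2^(4-1) design generator: D = ABC (resolution IV)
    coded = dict(zip(names, (a, b, c, d)))
    actual = {n: factors[n][0] if v < 0 else factors[n][1] for n, v in coded.items()}
    runs.append(actual)

for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")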

Small-scale studies are particularly useful for evaluating the control needs of a process, especially the parameters that are likely to change with scale-up or that require tight manufacturing control. Process understanding gained from small-scale studies can help in troubleshooting and identifying potential improvement opportunities. To be useful, it is often unnecessary for such models to be absolutely, quantitatively predictive. Directional information on the impact of process parameters on PQAs can be valuable in process design and in troubleshooting. And small-scale models can be particularly useful in analyzing the impact of different lots or sources of raw materials; doing so at full scale is not usually feasible.

How can large-scale qualification and small-scale design data be effectively linked? How data from small-scale studies contributes to the overall validation package will depend on demonstrating that small-scale models appropriately represent the proposed commercial scales. These studies could leverage data requirements for process verification (e.g., reduced batches) and/or control strategies (e.g., alternative approach to end product testing) depending on the evidence provided to demonstrate step performance and the relevance of an experimental model with regard to a final process. However, data derived from commercial-scale batches should confirm results obtained from small-scale studies used to generate information in support of process validation.

Regardless of the final large-scale use of their data, scale-down models are indispensable for process optimization in development. They help companies evaluate material and parameter variability for characterization and for investigations and improvements postlicensure. By definition, such a model is an incomplete representation of a more complicated, expensive, and/or physically larger system. Some unit operations are more scalable than others. For example, chromatography is generally straightforward, whereas scaling harvest and centrifugation can present difficulties. Scale-down models can be designed based on two general concepts: miniaturization of full-scale unit operations or partial/worst-case models of specific properties (e.g., in shear studies). The elements of all small-scale studies should be described and justified as part of the overall qualification of a scale-down model.

The effectiveness of a model can be increased if you take the following into consideration:

  • match full-scale as much as possible and feasible
  • understand and/or control for differences between scales (e.g., materials of construction, use of different assays)
  • establish and accept scaling parameters and equipment limitations
  • continuously refine and improve scale-down models through development phases and after marketing as new data are gathered
  • base model qualification on a reasonably sized data set.

Is it necessary to demonstrate predicted “off-center” performance during qualification? The purpose of large-scale qualification is to demonstrate consistent performance in a manufacturing setting. It should take into account real-life sources of variability that small-scale studies may not always predict. PPQ at commercial scale is usually performed at set points — the way a normal commercial process is intended to run. Off-target runs at full scale can be high risk and costly should failures occur. “Center-line” performance (as performed at large scale) is never exactly centered although it provides some useful assessment of a process’s ability to accommodate common-cause variability. However, off-center studies for high-risk parameters performed at small scale can help companies understand their impacts and can support categorization of parameters — providing for a “characterization space.”

Although small-scale studies provide useful supporting data on expected process robustness, purposeful “off-center” manipulations are of limited value during qualification. It is difficult to select representative off-center conditions that will predict what will be experienced during routine manufacturing. Center-line performance data provide for more useful assessment of overall expected process performance and can be used to establish a baseline for evaluation during continued process verification (CPV). There may be a few targeted exceptions to the rule for highly critical process parameters that can be affected by scale or equipment (e.g., maximum hold times).

Should a model of a unit operation’s performance at full scale be verified before PPQ? Can a scale-down model be qualified through postlicensure process monitoring (stage 3)? Formal model qualification and verification typically require sufficient full-scale results for comparison. Analysis of model performance during commercial-scale runs can help you look for differences between full-scale and small-scale performance. Unexpected differences can illustrate gaps in knowledge and point to areas that need further study. Refining small-scale and pilot-scale procedures can remove identified differences wherever possible and practical. However, understanding the nature of those differences is important to driving appropriate actions regarding the model itself. Parameter effects at small scale could be magnified, attenuated, or not representative at all.

Magnified effects don’t necessarily lessen the utility of a small-scale model; in fact, they may be useful in studying impacts in more detail. Attenuated effects are more troublesome because factors may be considered irrelevant for study at full scale. An appropriately sized data set is needed for qualification. Both pivotal and PPQ campaign full-scale runs can be used for comparison. When platform models are used, multiple lots from different molecules can all help qualify a model.

Scaled-down models are tools often used for process design. Expectations for quantitative predictability should be lowered if design space is not claimed. However, scale-down model qualification should be performed for anything that will not be verified at commercial scale (e.g., viral clearance). Other examples include reagent clearance or postlicensure minor process changes, for which there is no plan to repeat PPQ, and models that are used to support high-risk nonconformance investigations. A model can be evaluated early on against pilot-/clinical-scale performance, often providing good assurance of relevance to the commercial scale. Use of predetermined acceptance criteria for qualification of scale-down models may increase the validity of those models.

PPQ at scale provides an overall assessment of design-phase effectiveness, and it should not require a separate preverification of small-scale model performance against commercial scale. However, that passes the burden of risk onto the PPQ runs themselves in case small-scale models have missed some important datum. Unit operations involving viral clearance, however, need to be qualified against commercial scale as part of the overall PPQ exercise. Qualification data from stage 3 are useful for providing stronger links back to small-scale results, especially if they will be used to justify changes in the future.

However, at full scale, there is often simply not enough variability to allow for a statistically valid qualification of data from small-scale models, in which parameters are varied to an extent not seen at larger scales. Significant deviations occurring at large scale that have been or could be investigated at small scale provide additional evidence about the utility of small-scale models. Thus, scale-down models can be continually refined during stage 3 to support troubleshooting of commercial, full-scale processes and postapproval process changes (when preplanned variability in certain parameters occurs outside the defined commercial process).

 

Afternoon Presentations

 

The afternoon session discussed the end of stage 2, continued verification, and beyond with session chairs Vijay Chiruvolu of Amgen, Inc. and Linda Ng of the FDA’s Center for Drug Evaluation and Research. Presenters highlighted opportunities and challenges faced in stages 2 and 3 of implementing the life-cycle approach to process validation. Both industry and regulatory perspectives were presented as speakers discussed science- and risk-based approaches to process performance qualification and continued process verification. Speakers also addressed how the level of process understanding from process design affects PPQ and CPV strategy. They focused too on the transition from stage 1 to stage 2 as well as on CPV strategies at the beginning of commercial manufacturing and during postlicensure changes (in processes, sites, and so on).

The first presentation was “Process Validation for Biotech Products: The Compliance Perspective” by Linda Ng of FDA CDER. She described regulatory requirements from previous and recent guidances.

After Stage 2: The second presentation was “Executing PPQ Runs and Demonstrating a High Degree of Assurance at the End of Stage 2” by Wendy Lambert of Abbott Laboratories. The new FDA process validation guidance emphasizes that manufacturers “must evaluate and demonstrate a sufficient understanding to provide a high degree of assurance in the commercial manufacturing process to justify commercial distribution of product.” To demonstrate that high degree of assurance, regulators recommend objective measurements such as statistical methods for demonstrating an appropriate level of control.

At the end of stage 2, practical limitations associated with biopharmaceutical development and early manufacturing necessitate going beyond statistics alone to achieve and demonstrate that high degree of assurance. Lambert also discussed possible approaches to overcome common challenges in knowledge management, industry terminology, and unexpected outcomes in PPQ.
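
One example of an objective statistical measure sometimes applied to PPQ data is a normal tolerance interval on a critical quality attribute compared against its specification; the sketch below (with invented lot data and Howe's approximation for the k factor) is illustrative only:

import numpy as np
from scipy import stats

def tolerance_interval(x, coverage=0.95, confidence=0.95):
    # Two-sided normal tolerance interval; k factor via Howe's approximation.
    x = np.asarray(x, float)
    n, nu = len(x), len(x) - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
    return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

purity = [98.4, 98.1, 98.6, 98.3, 98.5]  # hypothetical purity (%) from five PPQ lots
low, high = tolerance_interval(purity)
print(f"95/95 tolerance interval: {low:.2f} to {high:.2f} %  (example spec: >= 97.0 %)")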

Stage 3 and Beyond: The third presentation was “Developing the Control Strategy for Continued Verification: Enhanced Testing” by Graham Tulloch of Eli Lilly and Company. Historically, process validation has been a critical milestone in drug-substance commercialization, marking the gateway between process development and commercial manufacturing. Under the new paradigm, it has been expanded to include process design, process qualification, and CPV. However, effective and efficient strategies for developing a control strategy through those stages are still the subject of much discussion. Tulloch examined the process development continuum and described a science- and risk-based approach for evolving a control strategy from PPQ through an enhanced testing program into a CPV program.

The last speaker for the afternoon was Rick Schicho of Bristol-Myers Squibb, who presented “Continued Process Verification for a Legacy Product: From Site Transfer Through Post Marketing Changes.” He covered stage 2 and 3 activities for the transfer of an established product to a new manufacturing facility. The facility was designed and built with a highly automated and integrated system for monitoring and control, including electronic batch records and laboratory notebooks. The company also implemented an advanced analytics system that makes all key data available for near–real-time analysis in statistical process control charts that are automatically updated. Automation in this facility reduces variability in process control and operation, and the data analytics system makes information available for both routine statistical monitoring and ad hoc queries. Both systems enable rapid identification of process performance or operational changes and provide a robust database for qualification of process changes.

 

Global Steering Committee for These Forums

 

Siddharth J. Advant (ImClone)

John Dougherty (Eli Lilly and Company)

Christopher Joneckis (CBER, FDA)

Rohin Mhatre (Biogen Idec Inc.)

Anthony Mire-Sluis (Amgen, Inc.)

Wassim Nashabeh (Genentech, a Member of the Roche Group)

Anthony Ridgway (Health Canada)

Nadine Ritter (Biologics Consulting Group)

Mark Schenerman (MedImmune)

Keith Webber (CDER, FDA)

 

Afternoon Discussion

 

A panel discussion with questions and answers followed the afternoon presentations, with Ranjit Deshmukh (MedImmune), Wendy Lambert (Abbott Laboratories), Linda Ng (FDA CDER), Rick Schicho (Bristol-Myers Squibb), and Graham Tulloch (Eli Lilly).

How should process qualification strategies address globalization and the potential for unexpected occurrences during PPQ (e.g., type/number of “failures” that drive PPQ reassessment)? For globalization, one approach is to have two separate protocols delineating a traditional validation approach for most of the world, with more advanced approaches for the United States and certain other jurisdictions. The number of lots required might differ (no fewer than three outside the United States), enhanced justification might be necessary, and overall acceptance criteria can be stricter. Data evaluation and interpretation (integration of stage 1 data) may be more advanced, as can its formal link with stage 3 (CPV).

Base reassessments of PPQ at a minimum on three things: type of parameter failure (critical or key), cause of the failure (process-related or not), and the number of failures (high or low). Failures can indicate a lack of process control or consistency.

For stage 3, CPV, how can assessing the criticality of PQAs and process capabilities be used to determine required elements? A CPV plan should be justified using a risk- and knowledge-based control strategy assessment based on both product understanding (what is important to control) and process understanding (how controllable, how capable of identifying failure modes). This approach assesses overall risk, including what attribute testing is warranted, where risks are assessed, and the associated testing strategy (routine, periodic, or event driven).

However, when commercial-scale variability and capabilities are not well established at the beginning of CPV, the plan could default to testing performed for PPQ. At that point, a risk assessment is valuable for determining whether sufficient stage 1 and 2 data (and prior knowledge) can justify a reduced testing strategy. Many companies differentiate that as “stage 3A” with enhanced testing. Once sufficient at-scale data are available, a risk assessment may be used to continually measure and improve the effectiveness of the CPV plan (and overall control strategy) for maintaining a controlled and capable process (stage 3B).

How do we manage the postlicensure stage of a product life-cycle considering potential needs for facility transfer, process scale-up, and/or control system modifications? CPV provides an excellent opportunity to continue learning about both product and process, including expected and acceptable performance levels. Companies can use this approach to assess whether a control system performs appropriately or requires modification. CPV criteria should be adjusted if warranted to provide a useful and meaningful assessment of expected performance.

Criteria refined from an available data set are useful for establishing evaluation criteria for changes such as transfers, scale-ups, or other modifications. A process is in stage 3 when it has completed qualification in a given facility, so if that process transfers to a new facility it should probably return to stage 2 for some form of qualification at that facility.

Note that if a transferred process continues to run in the original facility too, then product made at the two plants may be in different stages of process verification, with the originator site having more data than the new one. In such a case, the program parameters need to be aligned, but if their limits are different, they would have to be compared regularly.

If a process undergoes a significant planned change, then it returns to stage 2, but only for the steps (and their control attributes) affected by the change. That requires a risk-based reevaluation of the process design and control strategy (a limited return to stage 1) and evaluation of the need for requalification and CPV reestablishment. The CPV stage may be attribute-dependent.

To reduce testing of any particular attribute after product licensure, consider the following:

  • Is an attribute controlled to very low levels (e.g., below the limit of quantitation for DNA or protein A) before or at the final control point? The method must be appropriate.
  • What is the level of redundant process capability?
  • Are there any negative effects of unit operation(s) downstream of the final control point?
  • Has an attribute changed during drug-substance storage or drug-product manufacturing/storage? If not, then testing that attribute may offer no “added value” and thus could be considered for removal from a test plan.
  • Are you tracking data on the occurrence of events over time?

In regard to the need for modifying qualification or verification strategy, you can use your quality system as a source to help you identify unforeseen variability: product complaints and adverse-event reporting, out-of-specification (OOS) results, stability studies, nonconformances (or deviations including process yield variation), and data from facility and equipment maintenance and calibration programs.

You can also use a statistical process control program to monitor a process and determine whether changes are required after product licensure. Near–real-time process monitoring can help, as can performing ad hoc queries to investigate statistical trends, shifts, or outliers regardless of statistical limits to identify areas and causes of variability. Potential changes identified for reducing process variability can be evaluated through a change-control system. That should help in determining whether additional process characterization and/or qualification is needed for monitoring attributes outside the quality system. However, facility transfer and process scale-up programs are rarely driven by a statistical process control (SPC) program. Ideally, technology transfer includes process evaluation, including SPC information. Resulting insight then could be used to develop a new process or improve an existing one. And finally, an enhanced testing program can be considered should unexpected events occur after PPQ or if unusual trends become obvious.
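
As a simple illustration of what such routine statistical monitoring might look like (simulated data; the rules shown are common SPC run rules, not a prescribed CPV standard):

import numpy as np

def monitor(values, baseline_mean, baseline_sd):
    # Individuals chart: flag points beyond 3-sigma limits and runs of eight
    # consecutive points on one side of the center line.
    ucl, lcl = baseline_mean + 3 * baseline_sd, baseline_mean - 3 * baseline_sd
    signals = [(i, "beyond 3-sigma limit") for i, v in enumerate(values) if v > ucl or v < lcl]
    side = np.sign(np.asarray(values) - baseline_mean)
    for i in range(len(values) - 7):
        if abs(side[i:i + 8].sum()) == 8:
            signals.append((i + 7, "8 consecutive points on one side of center"))
    return signals

np.random.seed(0)
baseline = np.random.normal(100, 2, 30)               # stage 2 / early stage 3 lots
new_lots = np.append(np.random.normal(100, 2, 10),    # routine lots ...
                     np.random.normal(104, 2, 10))    # ... then a simulated shift
for idx, rule in monitor(new_lots, baseline.mean(), baseline.std(ddof=1)):
    print(f"lot {idx + 1}: {rule}")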

What is the state of product and process monitoring for a well-characterized, well-understood product and process late in the product’s life cycle? The end state is a streamlined set of testing controls, including input (e.g., raw materials), in-process, and specification attributes designed to ensure that a process consistently delivers product of expected quality. Based on risk assessments, process understanding, and known process variabilities, a company could move tests from specifications to IPCs, occasional testing, and so on. Controls should reflect product quality requirements, historical process performance, and potential future failure modes (identified through risk assessment as well as prior and platform knowledge).

How should we use CPV to inform choices about continual improvement opportunities after licensure? CPV can inform continual improvement opportunities in several areas, such as improving process control to ensure consistent quality (including identification of low capability, drifts, and shifts). It can also highlight potential needs for adjusting CPV criteria to reflect current process and product understanding and to allow identification of true and meaningful performance changes through a better understanding of the relationships between input and output variables. CPV provides opportunities to reduce testing if a process is shown to be consistent and highly capable. A generated data set of expected and acceptable at-scale performance can support process changes. Increased process understanding from multivariate statistical modeling techniques can proactively identify and control shifts in process inputs to prevent detrimental effects on the manufacturing process or product quality.
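
A brief sketch of such multivariate monitoring follows, using principal component analysis with a Hotelling's T² statistic on simulated batch data (an illustrative approach that assumes scikit-learn is available; it is not a method specified by the speakers):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
historical = rng.normal(size=(50, 6))                 # 50 batches x 6 process inputs
new_batches = np.vstack([rng.normal(size=(5, 6)),
                         rng.normal(loc=2.0, size=(3, 6))])  # last 3 simulate a shift

pca = PCA(n_components=3).fit(historical)

def hotelling_t2(X):
    # T^2 from PCA scores scaled by the explained variance of each component.
    scores = pca.transform(X)
    return (scores ** 2 / pca.explained_variance_).sum(axis=1)

limit = np.percentile(hotelling_t2(historical), 99)   # empirical 99% action limit
for i, t2 in enumerate(hotelling_t2(new_batches), 1):
    flag = "investigate" if t2 > limit else "ok"
    print(f"batch {i}: T2 = {t2:.1f} ({flag})")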

What is the best approach to creating a CPV plan and program for legacy products/processes, for which we may not completely understand critical PQAs and ranges or CPPs but with which we have years of experience? Legacy processes should be well understood from a real-life perspective, with a wealth of data available to develop a CPV plan based on years of encountering factors that affect their variability (e.g., process capability analyses, nonconformances, product complaints, and so on). For most such products, CPV could start at stage 3B. Presuming that process performance data have been collected and are available (using appropriate technologies), you should have an excellent understanding of the state of control and capability of a legacy process. That understanding will highlight areas where the control strategy should be reconsidered. For example, a CPV plan should reevaluate whether appropriate testing is in place at the right points in a process and conducted frequently enough to ensure consistent product quality.

A CPV plan should describe how data are to be evaluated to detect potential shifts and unacceptable performance, as well as the expected actions to be taken within the quality system. Standard SPC techniques are typically used, with procedural rules linking statistically abnormal performance to a nonconformance and corrective and preventive action (NC/CAPA) system. For a legacy product without a mature process design and control strategy, a process risk assessment can be conducted. Knowledge gaps thus identified (as high risks) could be filled by conducting targeted studies or analyzing commercial data and supported by a well-designed CPV plan to show that product quality risk is low and that the commercial process works as it should.

About the Author

Author Details
Anthony Mire-Sluis is vice president of North America and Singapore for contract and product quality, Vijay Chiruvolu is director of corporate quality validation, and Bob Kuhn is director of process development at Amgen Inc. Brian Kelley is vice president of bioprocess development, and Nathan McKnight is a principal engineer in process development at Genentech (a member of the Roche Group). Linda Ng is a consumer safety officer in FDA CDER’s Office of Compliance. Reb Russell is Bloomsbury site head in biologics manufacturing and process development at Bristol-Myers Squibb Company. Chiang Syin is a supervisory safety officer in FDA’s Office of Compliance. Victor Vinci is vice president and chief scientific officer at Cook Pharmica LLC. Mats Welin is a senior expert at the Swedish Regulatory Authority, Medical Products Agency (MPA). Christian Klock is deputy director of regulatory affairs in quality and continuous improvement at Sanofi Pasteur. Ranjit Deshmukh is senior director of corporate manufacturing sciences and technology at MedImmune. Siddharth J. Advant is associate vice president of CMC project management at ImClone Systems Corporation.

REFERENCES

1.) CBER/CDER/CVM 2011. Guidance for Industry: Process Validation — General Principles and Practices, US Food and Drug Administration, Rockville.

 

2.) CHMP/CVMP. EMA/CHMP/CVMP/QWP/70278/2012-Rev1 2012. Draft Guideline on Process Validation, European Medicines Agency, London.

 

3.) ICH Q5A (R1) 1998. Viral Safety Evaluation of Biotechnology Products Derived from Cell Lines of Human or Animal Origin. US Fed. Reg. www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q5A_R1/Step4/Q5A_R1__Guideline.pdf 63:51074.

 

4.) ICH Q5B 1996. Analysis of the Expression Construct in Cells Used for Production of r-DNA Derived Protein Products. US Fed. Reg. www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q5B/Step4/Q5B_Guideline.pdf 61:7006.

 

5.) ICH Q5C 1996. Stability Testing of Biotechnological/Biological Products. US Fed. Reg. www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q5C/Step4/Q5C_Guideline.pdf 61:36466.

 

6.) ICH Q5D 1998. Derivation and Characterisation of Cell Substrates Used for Production of Biotechnological/Biological Products. US Fed. Reg. www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q5D/Step4/Q5D_Guideline.pdf 63:50244-50249.

 

7.) ICH Q5E 2005. Comparability of Biotechnological/Biological Products Subject to Changes in Their Manufacturing Process. US Fed. Reg. www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q5E/Step4/Q5E_Guideline.pdf 70:37861-37862.

 

8.) ICH Q11 2012. Development and Manufacture of Drug Substances (Chemical Entities and Biotechnological/Biological Entities). US Fed. Reg. www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q11/Q11_Step_4.pdf 77:69634-69635.

 

9.) CMC Biotech Working Group 2009. A-MAb: A Case Study in Bioprocess Development (version 2.1), International Society for Pharmaceutical Engineering, Tampa.