Analytical Life Cycle Management— The Coming Revolution

Article

The Column

Published: 08-13-2018
Volume 14
Issue 8
Pages: 2–7

Incognito looks to a paradigm shift.



A storm of three-letter acronyms (TLAs) is on the way, set to radically update the way in which analytical methods are developed, validated, and verified, with the pharmacopeial bodies and regulators at the heart of the storm.

Those of you working in the pharmaceutical industry may already be aware of some of these proposed changes; however, those who are not, but whose work may ultimately be influenced by the new ways of working, should take note: the momentum is gathering and the changes herald a new dawn in analytical measurement for product quality control and beyond.

Some of the TLAs that represent the new approach include:

ACS (Analytical Control Strategy): The ACS is a planned set of controls, derived from an understanding of the requirements for fitness for purpose of the reportable value, an understanding of the analytical procedure as a process, and the management of risk, all of which ensure the performance of the procedure and the quality of the reportable value, in alignment with the ATP, on an ongoing basis (1,2).

ATP (Analytical Target Profile): The ATP states the required quality of the results produced by a procedure in terms of the acceptable error in the measurement; in other words, it states the allowable target measurement uncertainty (TMU) associated with the reportable value. Because the ATP describes the quality attributes of the reportable value, it is applied during the procedure life cycle and connects all of its stages (3).

TMU (Target Measurement Uncertainty): Uncertainty is a more comprehensive term than the traditional term precision, which represents random errors; bias is the term traditionally used to represent systematic errors (accuracy). These terms (uncertainty and bias), when examined holistically, can be considered to represent the TMU associated with the reportable value generated by the procedure (2).

AQbD (OK, there are also some FLAs!) (Analytical Quality by Design): Quality by design (QbD) is a systematic approach to development that begins with predefined objectives and emphasizes understanding and control, based on sound science and quality risk management (3). QbD principles when applied to the development of analytical methods are known as analytical QbD (AQbD). The outcome of AQbD is a well characterized method that is fit for purpose, robust, and will consistently deliver the intended performance throughout its life cycle.

FMEA (Failure Mode and Effects Analysis): A step-by-step approach for identifying all possible failures in a process (here, the total analytical process). Failures are prioritized according to how serious their consequences are, how frequently they occur, and how easily they can be detected. The purpose of the FMEA is to take action to eliminate or reduce failures, starting with the highest priority factors, and to document current knowledge and actions regarding mitigating the risk of failure for the purpose of continuous improvement.

DoE (Design of Experiments): Sometimes also called experimental design, DoE is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect that variation. Using an optimum number of variable combinations (often set as high, medium, and low levels), the primary effects and secondary interactions of variables may be efficiently investigated and described. This approach is more time efficient and more powerful than the one factor at a time (OFAT) approach. Analysis of variance (ANOVA) is typically used to interpret the results of the DoE and to highlight the variables, and combinations of variables, that have the largest effect on uncertainty.

MODR (Method Operable Design Region): A multidimensional space derived from AQbD investigations, which defines those combinations of experimental variables that produce a valid measurement as defined by the ATP.

There are also a number of stimuli articles and pharmacopeial documents that you will need to become familiar with:

  • ICH Q9, Quality Risk Management (November 2005) (4)

  • ICH Q12, Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management (Draft November 2017, currently out to industry for comments) (5)

  • USP General Chapter <1210> “Statistical Tools for Procedure Validation” (6)

Stimuli articles in Pharmacopeial Forum:

  • Stimuli Article: Analytical Control Strategy (1)

  • Stimuli Article: Analytical Target Profile, Structure and Application Throughout the Analytical Lifecycle (3)

  • Proposed New USP General Chapter <1220> “The Analytical Procedure Lifecycle” (2)

 

 

There are several other relevant documents to be considered, however they are all referenced in the articles or guidance documents cited above.

The bottom line here is to adopt a risk management approach to analytical methods so that the fitness for purpose of a reportable value and the performance of the analytical procedure are assured on an ongoing basis. By first defining the required performance of the analytical method in its ultimate intended use (that is, assay of drug products for potency prior to release), QbD principles are applied to analytical method development, validation, and ongoing verification to ensure better method performance and control. The ability of the analytical method to deliver a fit for purpose result according to a predefined specification drives every stage of the analytical method development, validation, and ongoing performance verification.

Is your brain hurting already? Well, as this is an opinions column, let’s start with my opinion on the proposed paradigm shift and what impact the changes will have on the quality of the information that we produce. For that, I first refer you back to two of my previous articles (see references 7 and 8).

Well, in a nutshell, someone must have been listening, because essentially the principles of QbD are going to be used to help ensure that analytical measurements are made to within a specified level of measurement uncertainty throughout the lifetime of the method. In short, I’m very much in favour of this new paradigm. What I’m sure many of us will be more daunted about is the extra work involved in understanding the guidance and regulations, adopting new ways of working, and acquiring the knowledge and skills required to comply.

It’s taken me a long time to become familiar with the principles of the ACS and the documents that outline how we should develop and define the ATP, as well as to read all of the stimuli articles and ICH guidelines that interlock to define the new approaches. Add to this the ability to produce FMEA or Ishikawa analyses, the use of these to inform the statistical DoE and ANOVA that will assess the risk in my analysis, and the use of QbD principles to define ranges of key analytical variables that must be controlled in order to produce data that comply with the ATP.

A single Incognito column isn’t long enough to discuss everything that we need to consider and implement in order to produce an analytical method which follows an ACS or analytical life cycle management approach. What follows are some brief notes and comments from my own (albeit brief) experience, in the hope that they might help you to focus on the important challenges that may lie ahead.

 

Understanding the Concept and Defining the ATP

The concept of the ATP lies at the heart of the ACS. QbD principles are used to define the performance of the analytical determination in terms of the acceptable error in the measurement and consider all aspects of the TMU. The TMU encompasses the precision and accuracy (bias) factors that are considered during analytical method validation (Figure 1).

An example of an ATP may look something like this: An analytical procedure was developed to determine drug substance (Y) in film-coated tablets containing [major excipients or other significant ingredients] in the range from 80% to 120% of the specification value. The reported results should fall within ±3% of the true value at the 95% level of confidence.

One should note here that the required performance contains limits for both accuracy and precision of the analytical measurement and the required performance under these new constraints will avoid the “acceptance” of results that show both high bias and low precision (which is possible using traditional approaches to the assessment of analytical performance).
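To make the combined criterion concrete, here is a minimal Python sketch of one possible decision rule that rejects a method whose bias and imprecision together could push results outside the ±3% ATP limit. The rule, the figures, and the function name are my own illustrations, not anything prescribed by the pharmacopeial texts.

```python
def meets_atp(bias, sd, limit=3.0, k=1.96):
    """Illustrative decision rule (not from the pharmacopeial texts):
    require bias +/- k*sd (approximately 95% of single results) to sit
    entirely within the +/- limit stated in the ATP."""
    return abs(bias) + k * sd <= limit

# Low bias with modest imprecision passes the combined criterion...
print(meets_atp(0.5, 1.0))   # True
# ...but a larger bias with the same precision fails, even though each
# figure might look acceptable when judged in isolation
print(meets_atp(2.0, 1.0))   # False
```

The point of the rule is exactly the one made above: bias and precision are judged jointly, so a result cannot be "accepted" on the strength of one figure alone.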

In Figure 2 the light blue box area represents any combination of bias and precision estimates allowed under typical method validation criteria. The shaded area beneath the normal distribution curve shows the combinations of bias and precision allowed under the ATP described above.

Whilst it is possible to estimate method accuracy and precision using a few limited measurements, as is typical under current method validation guidelines, it is also possible to estimate the confidence interval of the measurement of both accuracy and precision such that the estimate of uncertainty is known. Statistical distributions (such as the t-distribution or chi-squared [χ²] distribution) can be used to estimate the confidence interval (I have used the 95% level of confidence above) from experimental data to define a range of either accuracy or precision that will be 95% certain to contain the true accuracy or precision of the procedure. The area defined by the combined intervals for both accuracy and precision will contain a defined percentage of “true values”. If the 95% level of confidence is used to generate both intervals, around 90% (0.95² = 0.9025) of all measurements in this area will contain the true value of the accuracy and precision of the determination. Hence this type of exercise can be used to derive the ATP statement according to the acceptable limits for the type of measurement being made. Figure 2 also shows a confidence interval range for a procedure with an accuracy confidence interval of −0.5 to +1.5% and a precision confidence interval of 0.0–0.5%.

One would need a reasonable grasp of basic statistics in order to determine the confidence intervals using the appropriate distribution; the statistical power of the model will increase with the number of degrees of freedom, that is, the number of results generated using the procedure that are used to determine the confidence intervals. Often the ATP will be generated at the end of the development process, as described further below.
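By way of illustration, here is a short Python sketch that computes those confidence intervals from a handful of made-up recovery results, using the t-distribution for the mean (bias) and the chi-squared distribution for the standard deviation (precision). The critical values are taken from standard tables for 5 degrees of freedom; the data are entirely hypothetical.

```python
from statistics import mean, stdev

# Six hypothetical recovery results (%) from an accuracy/precision study
results = [100.8, 99.6, 101.2, 100.1, 99.9, 100.6]
n, df = len(results), len(results) - 1

# Two-sided 95% critical values for df = 5, from standard tables
T_CRIT = 2.571                     # t(0.975, 5)
CHI2_LO, CHI2_HI = 0.831, 12.833   # chi2(0.025, 5), chi2(0.975, 5)

x_bar, s = mean(results), stdev(results)

# 95% confidence interval for the bias (mean recovery minus 100%)
half_width = T_CRIT * s / n ** 0.5
bias_ci = (x_bar - 100 - half_width, x_bar - 100 + half_width)

# 95% confidence interval for the true standard deviation (precision)
sd_ci = ((df * s ** 2 / CHI2_HI) ** 0.5, (df * s ** 2 / CHI2_LO) ** 0.5)

print(f"bias: {x_bar - 100:+.2f}%, 95% CI {bias_ci[0]:+.2f} to {bias_ci[1]:+.2f}%")
print(f"SD:   {s:.2f}%, 95% CI {sd_ci[0]:.2f} to {sd_ci[1]:.2f}%")
```

Note how wide the interval for the standard deviation is with only six results; this is exactly why more degrees of freedom sharpen the model.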

As with method validation, it is often the case that the ATP criteria may be further verified or amended after the following steps have been implemented and evaluated.

 

Initial Screening Studies

For the sake of brevity, I’ve assumed that we have decided to use a chromatographic technique for the determination; however, the initial stage in any QbD-based strategy would need to include an assessment of the desired method performance alongside the analyte and matrix properties to evaluate the most appropriate analytical technique. The need to consider the desired analytical performance as the fundamental driver for analytical development is critical to the QbD approach.

Initial screening, in which several combinations of columns, organic solvents, additives, eluent pH, and perhaps column temperature are screened against criteria such as number of peaks within the chromatogram and minimum resolution, may be familiar to many. The outputs may be visually assessed for suitability, chromatography optimization software may be used, or a statistical approach using a full factorial DoE with a restricted number of levels (settings for each variable) may be employed. If the separation requires a gradient, some initial experimentation to find the optimum gradient conditions may also be undertaken to arrive at a separation believed to have the basic characteristics that can be further developed into a fit for purpose analytical method.
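A screening grid of this kind is easy to enumerate in code. The Python sketch below lists the combinations for a hypothetical three-column, two-solvent, three-pH screen and applies an illustrative pass/fail criterion; the factor names and thresholds are invented for the example.

```python
from itertools import product

# Hypothetical screening grid: three columns, two organic solvents, three pHs
columns = ["C18-A", "C18-B", "phenyl"]
solvents = ["MeCN", "MeOH"]
eluent_ph = [2.8, 4.5, 6.8]

grid = list(product(columns, solvents, eluent_ph))
print(len(grid))   # 18 condition combinations to screen

def passes(n_peaks, min_resolution, n_required=15, rs_required=1.5):
    """Illustrative success criterion: enough peaks and adequate resolution."""
    return n_peaks >= n_required and min_resolution >= rs_required

# Applied to made-up results for two of the screened conditions
print(passes(16, 1.8))   # True
print(passes(16, 1.2))   # False
```

Even a modest grid like this one generates eighteen runs, which is why automated screening systems and software support are so helpful at this stage.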

The findings obtained during initial screening studies, especially when derived from DoE approaches, can be very useful in the primary risk assessment. Once again, a good working knowledge of applied statistics is necessary to conduct a DoE approach with ANOVA to interpret the results and highlight the important variables or combinations of variables from the initial experimentation.
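For those new to DoE, the following Python sketch shows the core calculation: estimating main effects from a two-level full factorial design. The response function is deliberately made up (and noise-free) so that the recovered effects are obvious; a real analysis would add ANOVA sums of squares and F-tests to judge significance.

```python
from itertools import product

def main_effect(runs, responses, idx):
    """Average response at the +1 level minus the average at the -1 level."""
    hi = [y for r, y in zip(runs, responses) if r[idx] == 1]
    lo = [y for r, y in zip(runs, responses) if r[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# 2^3 full factorial in coded units (-1/+1) for three hypothetical factors,
# for example pH, column temperature, and gradient slope
runs = list(product([-1, 1], repeat=3))

# Made-up, noise-free response: factor A matters most, C not at all,
# and there is a small A x B interaction
responses = [10 + 2 * a - 1 * b + 0.5 * a * b for a, b, c in runs]

effects = [main_effect(runs, responses, i) for i in range(3)]
print(effects)   # [4.0, -2.0, 0.0] -> A dominates, C is inert
```

The OFAT approach would need more runs to expose the same structure and would miss the A × B interaction entirely, which is the essential argument for factorial designs.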

Conducting a Primary Risk Assessment Using Factorial Analysis

The next stage in the process is to perform a factorial analysis of the method using cause and effect diagrams (Ishikawa or Fishbone diagrams) and to classify the relevant factors.

All factors within the high performance liquid chromatography (HPLC) analysis should be taken into account; this will cover sample preparation as well as the instrumental analysis. Each factor is considered (for example, the effect of pH, column packing variability, accuracy of volumetric eluent preparation), alongside the mode of failure that may result (irreproducible retention times, changes in selectivity) and the set point of the variable where known (eluent pH 2.8, 10 mM ammonium formate, 55% organic). An NCX code (Noise/Controllable/Experimental) is then assigned to each factor depending upon which of the factors may be mitigated through proper control (C = type of buffer salt used, for example), factors that are difficult to control and need measures to reduce their impact on measurement quality (N = pH adjustment accuracy, for example), and those that need to be investigated experimentally to assess their impact on the quality of the separation and therefore the quality of the data (X = column temperature, for example).
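In practice the factor register can be kept as a simple table. The Python sketch below shows one way to hold hypothetical factors with their failure modes, set points, and NCX codes, and to filter out the X-coded factors for experimental study and the N-coded factors for the failure mode analysis; all of the entries are illustrative.

```python
# Hypothetical factor register from the cause-and-effect (Ishikawa) analysis;
# each entry: factor -> (possible failure mode, set point, NCX code)
factors = {
    "buffer salt type":   ("selectivity change",          "ammonium formate", "C"),
    "pH adjustment":      ("irreproducible retention",    "pH 2.8",           "N"),
    "column temperature": ("selectivity/retention drift", "40 deg C",         "X"),
    "gradient slope":     ("resolution loss",             "2 %/min",          "X"),
    "column batch":       ("efficiency change",           "n/a",              "N"),
}

# X-coded factors go forward to the secondary screening DoE;
# N-coded factors need mitigation, assessed in the failure mode analysis
to_doe = sorted(name for name, (_, _, code) in factors.items() if code == "X")
to_fmea = sorted(name for name, (_, _, code) in factors.items() if code == "N")
print(to_doe)    # factors to investigate experimentally
print(to_fmea)   # factors needing control or mitigation measures
```

Keeping the register in a structured form like this makes the hand-off between the primary risk assessment, the secondary DoE, and the failure mode analysis traceable.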

 

Secondary Risk Assessment, FMEA, and MODR Definition

All factors assigned as category X are then evaluated in a further screening experiment (DoE 2), perhaps based on success criteria such as a minimum resolution for any peak pair. This DoE can then be used to establish the method operable design region (MODR). This is a well-documented approach to the application of AQbD and essentially investigates the range in which the combination of values of each critical variable will result in a fit for purpose measurement according to the ATP. This produces a “control space” (the MODR) in which the ranges of each critical variable may be defined.

All of the factors from risk assessment 1 that are assigned as category N are then considered using FMEA or a similar approach to identify all of the possible causes of failure of the analytical procedure. This process is informed by the DoE results from the secondary risk assessment and evaluates each factor in terms of severity (the magnitude of the effect on the quality of the analysis), occurrence (how likely the failure is to occur), and detectability (how easy it is to spot the potential problem should it occur). The “score” values of S, O, and D are well documented and are designed to produce clearly defined risk factors.

An example here may be the effect of the inter-batch variability of the column packing material:

  • The S factor effect may be a change in peak efficiency or the selectivity of the separation. A score of 5 may be assigned.

  • The O factor effect may be assigned as a failure of the manufacturer to implement proper batch control measures and may attract a score of 3.

  • The D factor effect may be assigned a score of 4 (which is high) as without proper control the issue may remain undetected.

Multiplying the three scores together gives a risk priority number (RPN) of 60. Once preventative or control measures are put in place within the analysis, the RPN score is then re-evaluated. Here, there is little the end user can do other than choose columns produced by reputable manufacturers; however, the detectability may be improved by implementing a system suitability test, which may reduce the detectability score to 1, resulting in an overall RPN of 15. This new number is then evaluated against set criteria to define what controls should be in place to ensure fit for purpose results on an ongoing basis. Typical criteria may be:

  • Low (RPN 1–35): sufficiently acceptable risk level; generally, a further reduction of the risk is not required.

  • Medium (RPN 36–59): acceptable risk level; however, some measures to further reduce the risk are desirable.

  • High (RPN 60 or more): unacceptable risk level; some measures to reduce the risk are required.
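The RPN arithmetic and banding above are easily captured in a few lines. The following Python sketch reproduces the worked column-batch example, using the criteria bands quoted in the text.

```python
def rpn(severity, occurrence, detectability):
    """Risk priority number: the product of the three FMEA scores."""
    return severity * occurrence * detectability

def risk_band(score):
    """Bands quoted in the text: Low 1-35, Medium 36-59, High 60 or more."""
    if score >= 60:
        return "High"
    return "Medium" if score >= 36 else "Low"

# Worked example from the text: inter-batch column packing variability
before = rpn(5, 3, 4)   # no control measures in place
after = rpn(5, 3, 1)    # system suitability test improves detectability
print(before, risk_band(before))   # 60 High
print(after, risk_band(after))     # 15 Low
```

A single mitigation, the system suitability test, moves the factor from the unacceptable band to one requiring no further risk reduction, which is the whole point of re-scoring after controls are applied.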

 

ATP Verification and Ongoing Control Strategies

The DoE 2 experiments may highlight critical variables or interactions between variables (such as an interdependence on method performance between mobile phase pH and gradient time [slope]), which may need to be further investigated using a final DoE experiment to modify the MODR. Furthermore, at this point several analyses might be undertaken to investigate the method bias and precision to ensure that the method performance can meet the criteria outlined in the ATP. If method performance cannot meet the required performance, the ATP criteria may need to be modified or further method improvements implemented. This may also involve altering the method control strategy, such as optimization of the bracketing interval for standards within the sequence to reduce the potential for bias, or setting tighter limits for the system suitability test.

At this point, a method validation according to ICH Q2 guidelines may also be undertaken to verify the method is fit for purpose in the more traditional sense, although the data generated in the previous experiments are likely to provide much of the necessary information.

The method performance will now be evaluated during its lifetime, and the use of control charts and other measures are required to indicate that the method performance is satisfactory over time and that trends in method performance are identified, understood, and controlled.
This may be achieved by longer term monitoring of control sample results, resolution, relative standard deviation (RSD) of system precision data, routine sample results, quality control sample data, and performance data measured against the ATP specification (both within and out of specification data should be included in this analysis). If the method is found to contain critical variables not identified during development and validation, or is found to be inconsistent, the method or control strategy may need to be updated to meet the ATP specification.
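As a simple illustration of such ongoing monitoring, the Python sketch below derives Shewhart-style 3-sigma control limits from a run of hypothetical control-sample results and flags a new result that falls outside them; the data and limits are invented for the example.

```python
from statistics import mean, stdev

# Hypothetical control-sample results (% of nominal) gathered over time
history = [99.8, 100.2, 100.5, 99.6, 100.1, 99.9, 100.4, 100.0]

centre = mean(history)
sigma = stdev(history)
lcl, ucl = centre - 3 * sigma, centre + 3 * sigma   # Shewhart 3-sigma limits

def out_of_control(value):
    """Flag a new control-sample result that falls outside the limits."""
    return not (lcl <= value <= ucl)

print(f"control limits: {lcl:.2f} to {ucl:.2f}")
print(out_of_control(100.3))   # False: within limits
print(out_of_control(102.5))   # True: investigate the method
```

In a real ACS the chart would also apply trending rules (runs above the centre line, drifts) so that deterioration is caught before a result actually breaches the limits.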

These changes should be carried out under the auspices of a change control process, which evaluates the results of the change against the TMU defined in the ATP and ensures that proper requalification is undertaken to assess the impact of the change. Risk assessment tools should be used to define the level of method requalification necessary to establish that the changes will result in improved method performance.

 

Summary

It is clear that QbD principles used for the design and implementation of an ACS change the way in which we will approach method development, validation, and in‑use performance monitoring in the future. Whilst current practice tends to focus on the verification of the analytical method “at that time”, the new approach is more concerned with the quality of the data produced (as opposed to the performance of the analytical system) over time, and wherever the method is being used.

Whilst I see this as a very positive step forward, I’m also concerned for those of us who work in laboratories without access to statisticians, who are not used to “six sigma”-type risk assessment and risk control paradigms, and who don’t have highly automated HPLC systems capable of switching columns or eluents to run the automated, often complex, series of experiments defined by DoE for subsequent ANOVA data analysis. These are big changes, a paradigm shift in fact, and anyone who is not aware of the requirements should use this introduction to start their journey.

The revolution is coming, it’s a change for good, but there will be pain before we can all see the brand-new dawn.

References

  1. Stimuli Article, “Analytical Control Strategy,” Pharmacopeial Forum 42(5) (2016).
  2. USP Proposed General Chapter <1220>, “The Analytical Procedure Lifecycle,” Pharmacopeial Forum 42(6) and 43(1) (2016).
  3. Stimuli Article, “Analytical Target Profile, Structure and Application Throughout the Analytical Lifecycle,” Pharmacopeial Forum 42(5) (2016).
  4. International Conference on Harmonization, ICH Q9, Quality Risk Management (ICH, Geneva, Switzerland, 2005).
  5. International Conference on Harmonization, ICH Q12, Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management (Draft, November 2017, currently out to industry for comments).
  6. General Chapter <1210>, “Statistical Tools for Procedure Validation,” in United States Pharmacopeia 41–National Formulary 36 (United States Pharmacopeial Convention, Rockville, Maryland, USA, 2018).
  7. Incognito, The Column 11(19), 14–16 (2015).
  8. Incognito, The Column 5(8), 11–15 (2009).

Contact author: Incognito
E-mail: kate.mosford@ubm.com
