Life Cycle Risk Assessment of HPLC Instruments


LCGC Europe

02-01-2015
Volume 28
Issue 2
Pages: 110-117

This instalment of “Questions of Quality” looks at problems with an operational liquid chromatograph to see if they can be picked up in the performance qualification (PQ) or prevented in the operational qualification (OQ).


What does risk assessment in the context of the life cycle of a high performance liquid chromatography (HPLC) instrument really mean? This instalment of "Questions of Quality" will look at problems with an operational liquid chromatograph to see if they can be picked up in the performance qualification (PQ) or prevented in the operational qualification (OQ). The relationships between the PQ, OQ, and design qualification (DQ) phases of the life cycle are also explored.

Regulated GxP laboratories must qualify their chromatographs to demonstrate that they are fit for purpose. A qualification process based on the 4Qs model is typically used to qualify liquid chromatographs. The 4Qs model, enshrined in United States Pharmacopeia (USP) <1058> (1), consists of four interlinked phases: design qualification (DQ), installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ). We will discuss life cycle risk assessment of high performance liquid chromatography (HPLC) instruments in the context of the 4Qs model. In this discussion, we will not consider chromatography data systems (CDS), but there is an underlying assumption that the instrument is controlled by this software.

We will look at life cycle risk assessment of HPLC instruments from the perspective of what can go wrong with a qualified liquid chromatograph during the operational phase (PQ phase). How can identification of problems here be used to help us manage and mitigate risk in other phases? From this perspective we will look at how system suitability tests, and their linkage between DQ and OQ, can mitigate some, but not all, of the instrument problems.

Perceptions of Risk Assessment

Over the past decade, risk management and risk assessment have become part of the pharmaceutical lexicon; they are the subject of ICH Q9, the guideline on quality risk management (2). However, what does this mean for regulators and the industry?

From the regulator's perspective, industry should undertake risk assessments to identify the most critical parts of an activity or process and focus mitigation efforts there. It is a means of putting scarce resources where they are most needed and of identifying improvements in quality risk management (3). Generally, from the industry's perspective it can be a means to justify doing less. We will explore some of these points in this column.

What Can Go Wrong?

What can go wrong with an operational HPLC system is the starting point for our discussion. This is illustrated in Figure 1, where a liquid chromatograph consists of four modules: pump, injector (autosampler), column oven, and detector. We have omitted the column from the figure because our aim is to look at an LC instrument's qualification rather than method performance. Underneath each module are listed the main failures that could occur. Note that this list is not exhaustive and some of the failures could be broken down further; however, to keep the discussion simple we have decided to look at the problem from a high-level perspective. Some failures cannot occur if the instrument in your laboratory does not have a particular feature: an isocratic pump, for example, cannot produce gradient errors.


Figure 1: HPLC instrument showing the possible failures for each module.
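To make the failure-mode inventory concrete, the sketch below expresses it as a simple data structure that a laboratory could extend. The modules follow Figure 1, but the individual failure modes listed are illustrative assumptions on our part, not a verbatim copy of the figure; Python is used purely for illustration.

```python
# Illustrative only: the failure modes below are typical HPLC examples and are
# our assumptions - the definitive list is the one in Figure 1/Table 1.
LC_FAILURE_MODES = {
    "pump": [
        "flow rate error",
        "leak",
        "gradient composition error",  # not applicable to an isocratic pump
    ],
    "autosampler": [
        "injection volume error",
        "injection precision failure",
        "sample carryover",
    ],
    "column oven": [
        "temperature error",
        "temperature instability",
    ],
    "detector": [
        "wavelength accuracy error",
        "excessive baseline noise or drift",
    ],
}

def applicable_failures(modules_fitted, has_gradient=True):
    """Keep only the failure modes for the modules actually fitted; drop the
    gradient error for isocratic systems, as noted in the text."""
    out = {}
    for module in modules_fitted:
        modes = list(LC_FAILURE_MODES.get(module, []))
        if module == "pump" and not has_gradient:
            modes.remove("gradient composition error")
        out[module] = modes
    return out

print(applicable_failures(["pump", "detector"], has_gradient=False))
```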

Now that we have listed the main failures, we need to consider the circumstances under which these might be detected in an OQ or PQ, as shown in Table 1. The PQ is broken down into three areas: the system suitability test (SST), the instrument detecting the problem itself, and a group called "extra SST" covering parameters that can be measured if the SST is designed to include them.


Table 1: Possible LC instrument failures and the ability to detect them in operational qualification (OQ) or performance qualification (PQ).

What is an Instrument Performance Qualification - Part 1?

As our discussion is focused on the operational phase of an instrument's life cycle, we also need to consider the PQ. One of the current ambiguities associated with USP <1058> relates to OQ and PQ: specifically, what should they contain and who has responsibility for them? Historically, before USP <1058> was first implemented in 2008, the 1987 US Food and Drug Administration (FDA) guidance for process validation (4) was adapted and applied to analytical instrument qualification. This guidance was interpreted in divergent ways by laboratories and service providers. This divergence is recognized in <1058>, which states in the information under its Table 1 (1):

Performing the activity is far more important than the phase under which the activity is performed.

And elsewhere:

When an instrument undergoes major repairs or modifications, relevant OQ and/or PQ tests should be repeated [so, in this context, OQ and PQ might be considered interchangeable].

However, USP <1058> also provides the following definitions of OQ and PQ:

Operational qualification is the documented collection of activities necessary to demonstrate that an instrument will function according to its operational specification in the selected environment.

Performance qualification is the documented collection of activities necessary to demonstrate that an instrument consistently performs according to the specifications defined by the user, and is appropriate for the intended use.

Therefore, although there is ambiguity (and from the experience of the authors quite a lot of uncertainty in laboratories), an OQ and a PQ serve completely different functions.

  • The OQ tests the instrument under standardized conditions so that the correct operation of the instrument in the laboratory can be confirmed against the DQ.

  • The PQ addresses the suitability of the instrument under actual conditions of use in between repetition of the OQ.

The potential role of SST in PQ has been discussed previously in this column (5). Part of the choice that some laboratories may make relates to the conditions under which an OQ or a PQ may be required to be repeated. In the opinion of the authors, an OQ and a PQ must be performed on any "new" instrument before it is used to generate GxP data. With instrumentation purchased for GxP use, the suitability of the instrument for the work it will perform is documented initially in the DQ. This defines the intended use of the instrument and, when the OQ is performed, shows why the instrument is fit for purpose.

A Different View of the 4Qs Model

Typically, the instrument qualification life cycle is depicted or implied as a linear model, especially in USP <1058> (1) where it is presented in a table. However, to fully understand the 4Qs model it is better if the first three phases are presented in the form of a V, as shown in Figure 2.


Figure 2: Depiction of the 4Qs model as a V.

In this simplified V model, it is much easier to see the relationship between the stages of the 4Qs. The OQ verifies the specification as outlined in the definition of OQ in USP <1058> (1) - provided whoever performs the OQ knows the content of the DQ or that the DQ has actually been written.

There is also an ongoing, dynamic requirement to manage the DQ. This is partly dependent on how the DQ is defined (for example, if it references a very specific pharmacopeia requirement or chapter, then each time the pharmacopeia is updated the DQ for the instrument will need to be reviewed). Instead, it is more efficient to address any high-level compliance requirements in the procedures that support the DQ and to limit the DQ requirements to instrument usage (remember that we are focusing only on the instrument).

This diagram also highlights that where an instrument has a major upgrade (because there is a wide divergence of opinion, this is left to an individual laboratory to define) or is used with new methods not previously considered, there is a need to review the DQ for suitability. Without this feedback loop, there is a risk that the instrument is not suitable for a new application. In addition, because of the relationship between the DQ and OQ, the review will also highlight if different set points need to be included in the OQ to test the range of use. In terms of our discussion of risk, the OQ can cover:

  • Qualification of a new instrument;

  • Requalification of an existing instrument following a defined time period typically linked with a preventative maintenance service;

  • Requalification following a major repair;

  • Extending the operating range of a qualified instrument because of an upgrade or new application that operates outside of the existing range;

  • Significant move of a qualified instrument with the justification for the extent of OQ testing documented in a risk assessment.

The OQ is intended to demonstrate that at a fixed time point the instrument operates to the specifications in the DQ and can therefore demonstrate that it meets the intended purpose. At this point it is worth repeating the statement from USP <1058> that routine analytical tests do not constitute OQ testing (1).

What is an Instrument Performance Qualification - Part 2?

Let us return to the OQ versus PQ discussion. The different roles of OQ and PQ need to be fulfilled and supported on an ongoing basis during the lifetime of the instrument and there are options for how this can be achieved. Summarizing from USP <1058> (1) for an HPLC instrument, PQ tests should:

  • Be based on user-defined specifications that demonstrate trouble-free instrument operation for the intended applications.

  • Verify the acceptable performance of the instrument for its intended use (parameters listed in Table 1 under Extra SST).

  • Typically be based on the applications of the instrument in your laboratory.

  • Be based on good science and reflect the general intended use of the instrument.

  • Be performed concurrently with the test samples (SST) to demonstrate that the instrument is performing suitably.

There is a direct relationship between the DQ and PQ because the latter needs to demonstrate that the ongoing instrument operation is consistent with the intended use requirements in the former.

One of the OQ versus PQ uncertainties relates to defining how often qualification activities should be performed during the life cycle of the instrument and what triggers qualification requirements. In the absence of black-and-white guidance (in USP <1058> [1], GAMP [6], or elsewhere), annual requalification is often performed for HPLC systems, and this ties the instrument maintenance to the qualification work. One consequence of this tie-in is the potential perception that a previously undetected fault may be corrected during maintenance and therefore never be detected or evaluated in the subsequent qualification testing.

In some instances, this has resulted in a discussion of "as-found" measurement (testing an instrument parameter before any adjustments are made), particularly where a laboratory uses the word "calibration" as a descriptive label for the qualification work. This question is at the heart of this article and of the consideration of how an instrument might fail and whether that failure is detected. But, before considering this further, it is fundamental that any regulated laboratory understands the planned maintenance work performed and exactly what was done.

Most HPLC maintenance work is associated with wear related to usage, and therefore the maintenance procedure defines replacement of consumable parts such as pump seals. Any other parts replaced (such as pump pistons following a visual inspection) are listed in the service report (typically because they are chargeable, but this depends on the contract). Firmware upgrades must be pre-approved before installation, with a risk assessment carried out using the available information from the manufacturer or service agent to determine the level and extent of requalification. In addition, any testing performed during preventative maintenance (PM), and the visibility of any test failure, should be included in the PM report, and the laboratory should act on this information. Typical tests might include leak tests or temperature tests, but this can depend on the instrument manufacturer or service provider. Always ensure that the work performed during a planned maintenance or instrument repair is fully documented and understood before being signed off.

With this in mind, the risk that any PM activity will correct a failure before it is detected is usually small. Therefore, requests for an as-found test on an instrument as complex as an HPLC system are meaningless, because there is visibility of what work is performed, what tests are done, and their outcomes. In fact, the only way to provide a 100% check would be to perform a pre-PM OQ, then the PM, then a post-PM OQ, which is not the smartest way to work!

Lean Sigma Versus Scientifically Sound SSTs

It is always good to challenge a business process, but to do so without full knowledge of the regulations is foolhardy. Sometimes this is where risk management falls into the Clint Eastwood category of "I'm feeling lucky". A facilitator will challenge what is done and will typically ask: "Where does it say that in the regulations?" Sometimes this can verge on an interrogation, and failure to identify a regulatory requirement can result in a task being discarded.

Take an assay where each injection takes 30 min and the system suitability test includes five replicate standard injections and a blank sample. Where does it explicitly say, in either the GMP regulations or the pharmacopoeias, that you need a blank? Nowhere. The result is that the blank injection can end up being discarded, saving a vital 30 min. High fives all around!

BUT.... Have you read the regulations? US GMP (21 CFR 211.160[a]) requires that all activities be scientifically sound (7); USP <1058> (1) requires a PQ test to be based on good science, as noted above. Is a blank injection of any value? More importantly, is it scientifically sound? A blank injection can determine the baseline flatness and noise level, and whether there is any carryover from the autosampler (see Table 1). Yes, you can drop the blank injection, but if problems were found before the samples were committed this would save much time later in laboratory investigations - but only if you have a blank sample in the SST.

Let us be clear here: if you are going to redesign the process, it is important to understand the risks you will carry when eliminating tasks. Saving 30 min for a single injection (a saving which in all probability is minimal, as most LC injections occur overnight) needs to be weighed against the time spent on laboratory investigations looking for an assignable cause if the run fails. Similar considerations apply to the inclusion of an approved and well characterized control sample, particularly for impurity characterization (11). It is not uncommon in a post-lean laboratory for chromatographic methods not to include a standard to serve as a comparison for the run. Given that chromatography is a comparative analytical technique, this could be seen as stupidity on stilts. The choice, and the risks you carry or mitigate, are yours. Are you feeling lucky?
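To put rough numbers on this trade-off, the sketch below compares the time saved by dropping the blank against the expected cost of the investigations it could have prevented. Every figure is a hypothetical assumption for illustration; substitute your own laboratory's run count, failure rate, and investigation effort.

```python
# Hypothetical figures only - none of these numbers come from the article.
blank_time_saved_min = 30.0       # one 30-min blank injection dropped per run
runs_per_year = 250               # assumed number of analytical runs per year
undetected_problem_rate = 0.02    # assumed fraction of runs with a problem a blank would catch
investigation_min = 2 * 8 * 60    # assumed two 8-h days per laboratory investigation

saved_h = blank_time_saved_min * runs_per_year / 60
expected_cost_h = undetected_problem_rate * runs_per_year * investigation_min / 60

print(f"Time saved by dropping the blank: {saved_h:.0f} h/year")         # 125 h/year
print(f"Expected investigation cost:      {expected_cost_h:.0f} h/year")  # 80 h/year
# Even where the raw hours favour the saving, a single failed batch or
# regulatory finding can dwarf both numbers - which is the point made above.
```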

Assessment of Ongoing Instrument Performance

Designed to satisfy pharmacopeia requirements such as USP <621> (8), SSTs play a pivotal role in documenting the performance of the chromatography system (at the analytical run level). A natural evolution of this is to consider how SSTs can support ongoing PQ requirements (after the initial PQ that moves the instrument into the operational phase). Previous consideration of this (5) identified that additional tests need to be added to those routinely defined in USP <621> (8). This requirement has not changed, but considering the ways an instrument might fail adds a different perspective. This can potentially be quite a painful process, because the implication is that in the post-lean laboratory there may be some failures that are not currently detected. At the heart of this article is the question: how might an instrument fail, and would that failure be detected in your laboratory?

In practice, this approach means that laboratories have to review the information shown in Table 1 (how an instrument might fail) against the SSTs they currently include in their chromatographic methods (for example, the SSTs defined in each of their analytical methods). One of the core strengths of this fundamental approach is that it moves the thinking within laboratory management away from lean and back towards scientifically sound. In particular, in an era where regulators such as the FDA are considering data integrity from a fraudulent practices perspective, your approach should be defensible from both a scientific soundness and a data integrity perspective. This thought process helps laboratories to identify potential gaps in their regulatory defence. So, instead of thinking purely from a theoretical perspective ("What would happen if..."), make the situation real: "If this happened, what would the potential performance impact be and how would we defend it in an audit?"
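One hedged way to make this review systematic is a simple gap analysis: map each failure mode from Table 1 to the SST elements that could detect it, then list the modes with no detector. The failure-mode and SST names below are illustrative assumptions, not the article's actual Table 1 entries.

```python
# SSTs currently defined in a (hypothetical) analytical method.
method_ssts = {"replicate injections", "resolution check", "tailing factor"}

# Which SST element could detect each failure mode (illustrative mapping).
detection_map = {
    "injection precision failure": {"replicate injections"},
    "sample carryover":            {"blank injection"},
    "flow rate error":             {"retention time window"},
    "wavelength accuracy error":   {"instrument self-test"},
}

# Failure modes whose detectors are absent from the method's SSTs.
gaps = [mode for mode, detectors in detection_map.items()
        if not detectors & method_ssts]
print("Undetected failure modes:", gaps)
# -> carryover, flow rate, and wavelength errors would go undetected here;
#    exactly the kind of gap the audit-defence question is meant to expose.
```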

The OQ column of Table 1 shows that the OQ should be designed to evaluate and detect all of these potential high-level failure modes. However, in operational use, some of the failure modes may not be detected. Detection is partly dependent on how the laboratory uses the instrument (for example, the instrument should initiate a wavelength diagnostic check when turned on, using well characterized lamp emission lines - so is the instrument ever turned off and on?). Other failure modes, such as those that might be detected by retention time differences (caused by flow rate or temperature errors, for example), are dependent on chromatography practices within the laboratory, such as formalizing acceptance criteria associated with peak identification windows, as sketched below.
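As an example of the second point, a peak identification window can be formalized as a simple acceptance check on retention time; an unexplained shift then becomes a prompt to look at flow rate or oven temperature. The 2% window below is an assumed criterion, not a pharmacopeial requirement.

```python
def rt_within_window(observed_min, expected_min, window_frac=0.02):
    """True if the observed retention time is within the acceptance window
    (here an assumed +/-2% of the expected value)."""
    return abs(observed_min - expected_min) <= window_frac * expected_min

print(rt_within_window(6.35, 6.20))  # False: a 2.4% shift - check flow rate and oven temperature
```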

Ultimately, ongoing monitoring of SSTs, including any additional requirements identified to help detect failure modes listed in Table 1, must be integrated into the controlling CDS software, so that it can be performed and trended in a semi-automated manner. Periodic review of this SST data would then support compliance with the requirements of FDA draft guidance on method validation (9) and, in part, be analogous to periodic review required for software.
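A minimal sketch of such trending is shown below, assuming that SST results (for example, the %RSD of replicate injections) can be exported from the CDS; the export step and the three-standard-deviation limits are our assumptions, not a feature of any named CDS.

```python
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Alert limits derived from an established baseline of SST results."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m + k * s

# Hypothetical %RSD values from previous, acceptable runs.
baseline = [0.42, 0.38, 0.45, 0.40, 0.44, 0.41, 0.39]
lo, hi = control_limits(baseline)

new_result = 1.60  # latest run's %RSD
print(f"Limits: {lo:.2f}-{hi:.2f}; excursion: {not (lo <= new_result <= hi)}")
# An excursion flags the run for review before it becomes a trend of failures.
```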

Another benefit of developing this approach is that it provides a risk framework that can be applied to support additional compliance decisions, such as moving from maintenance or qualification based on a fixed annual requirement, towards compliance models that could be based on usage. This has been successfully applied outside of the laboratory area (10), because the method-based risk assessment framework already considers failure modes and the detectability of potential failure.

Care also needs to be taken when injecting samples to evaluate the performance of a chromatograph, because recent FDA guidance (11) seeks to avoid the use of sample injections as a means of testing into compliance. Therefore, all such work needs to be covered by documented procedures and the generated data reviewed (11).

What Do I Do When...?

When an instrument fault is detected, the instrument must be removed from service (to prevent use by another analyst) and the laboratory procedure for considering the potential impact of the instrument failure on previous analytical results initiated. Typically, the instrument is repaired and an appropriate level of requalification work must be performed before it is returned to use.

Depending on the service contract for an instrument, minor repairs could be performed by laboratory personnel (for example, changing seals, check valves, or a lamp), while major repairs are taken on by the service agent. Regardless of who does the work, it is important that a consistent approach is applied to any requalification performed following repair and before the instrument is returned to use. Sometimes, minor repairs performed by the laboratory might have a different approval process from major repairs performed by the service agent, and this represents a potential risk. Inconsistency is always an area of focus during an audit.

A qualification test matrix, either in the laboratory's or the service agent's documentation, should be approved that defines and lists the instrument repairs performed and the requalification work required for the chromatograph. Without this, a laboratory might in principle be expected to perform a full instrument requalification if the pump seals are replaced, when a pump flow test is all that is required. If any repair work is performed that is not detailed in the requalification test matrix (which essentially pre-approves the qualification work), then the service agent must agree the requalification work with the customer. For major repairs it can be more efficient from a decision-making and workflow perspective to perform a full requalification, because the time to discuss, agree, and justify anything less can extend the instrument downtime.
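In code form, such a matrix is no more than a pre-approved lookup from repair to requalification tests, with anything unlisted escalated for agreement. The repair names and test lists below are illustrative assumptions, not a published standard.

```python
# Illustrative pre-approved requalification matrix (repair -> required tests).
REQUAL_MATRIX = {
    "pump seal replacement":     ["pump flow accuracy test"],
    "check valve replacement":   ["pump flow accuracy test", "gradient composition test"],
    "detector lamp change":      ["wavelength accuracy test", "baseline noise and drift test"],
    "autosampler needle change": ["injection precision test", "carryover test"],
}

def requal_tests(repair):
    """Return the pre-approved tests; unlisted repairs must be agreed
    case by case with the customer, as described above."""
    tests = REQUAL_MATRIX.get(repair)
    if tests is None:
        raise KeyError(f"'{repair}' is not in the matrix: agree requalification with the customer")
    return tests

print(requal_tests("pump seal replacement"))  # -> ['pump flow accuracy test']
```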

Operational Qualification

The definition of OQ from <1058> was presented earlier, and the opinion of the authors is that this definition does not require modification. The OQ phase provides a controlled way of testing the performance of the instrument against the intended use (for example, the "set points" needed to cover the intended range of use). Tests such as temperature and flow can be measured as metrology tests using an appropriately calibrated device, while other tests involve the use of a reference material and are more holistic in nature. Generally, wavelength accuracy is evaluated using a suitable reference material such as caffeine or holmium oxide in perchloric acid. This means that the wavelengths that can be evaluated depend on the availability of suitable reference materials, which can cause a problem for users of detectors that have to operate at 200 nm, 5 nm below the 205 nm absorbance maximum of caffeine. Here, a justification is required, which the supplier or service provider may be able to help write.
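A wavelength accuracy check of this kind reduces to a tolerance comparison against the reference material's known absorbance maxima. The 205 nm and 273 nm caffeine maxima are standard values, but the 2 nm tolerance below is an assumed acceptance criterion; use the limits in your own OQ protocol.

```python
CAFFEINE_MAXIMA_NM = (205.0, 273.0)  # standard caffeine UV absorbance maxima

def wavelength_ok(measured_nm, nominal_nm, tol_nm=2.0):
    """True if the measured absorbance maximum is within the assumed
    +/- tol_nm of the nominal reference value."""
    return abs(measured_nm - nominal_nm) <= tol_nm

print(wavelength_ok(206.1, CAFFEINE_MAXIMA_NM[0]))  # True: 1.1 nm from 205 nm
```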

The OQ tests the functional operation of the instrument under standard conditions, which should match the operational range of use. Some tests, such as injection precision, are included in the OQ, the PQ, and the ongoing instrument performance evaluation (SST). There is an important distinction to be made here, because some tests, such as injection precision and carryover, are application-specific. Therefore, the limits applied in the OQ relate to the standardized method used in the OQ, while injection precision limits tied to the analytical methods are best evaluated in the PQ and ongoing SSTs against the pharmacopeia requirements (8). Where a PQ is performed, the chromatography method used should be related to the methods and applications applied in the laboratory. Note that this would also feed back to the DQ.

Understanding the clear and distinct roles of the OQ and PQ relative to the instrument use is fundamental to good compliance practice and a robust instrument defence. Where a reference material is used, it should be traceable and appropriate for the intended use, with documented justification of its suitability. When a method that is new to the instrument is set up, this should be an automatic trigger to review the DQ-to-OQ link to decide if different set points are required.

Summary

Representing the life cycle process as a modified V model more clearly illustrates the relationship between the stages of the 4Qs model and makes it conceptually simpler to understand. In particular, it shows that the laboratory defines the usage of the instrument in the DQ stage and that this is tested at the OQ stage using standardized methods that could potentially be independent of the make and model of the instrument. The methods and range of operation define the set points that need to be considered so that the OQ tests the range of use. Therefore, if methods are changed or added, this becomes an automatic trigger to review the current DQ-to-OQ relationship: any different wavelengths, temperatures, or flow rates in the new methods trigger an update to the standardized qualification test set points in the OQ, so that the OQ is not static but dynamically configured to satisfy the ongoing OQ requirements of the laboratory. The protocol approval process needs to be appropriately structured (applying Lean Sigma principles) to support this more dynamic and responsive approach.

The detail of the risk assessment - considering how an instrument failure might be detected and/or defended - has to be worked through in the laboratory, against the actual SSTs and working practices currently used there. On the face of it, this approach may seem like a lot of work for little benefit. However, once the work is done, the ongoing management of the risk-based matrix is much simpler, and the benefits include a stronger compliance defence in the laboratory, both in terms of justifying the potential impact of an instrument failure on results and in terms of reducing risk, because the possibility of an undetected instrument failure has been significantly reduced. At a time when regulators across the globe are focusing on data integrity and exchanging audit risk information, this has to be a good thing.

Finally, by augmenting the SSTs defined in USP <621> (8), the SSTs can be used to support the demonstration of the ongoing consistent performance of the instrument (PQ) rather than being used to test into compliance (11).

Paul Smith is Global Strategic Compliance Program Manager at Agilent Technologies. After initially specializing in spectroscopy and the application of chemometrics to spectroscopic data, Paul developed his compliance expertise in a variety of quality and management roles during the 17 years he spent in the pharmaceutical industry. Paul worked as an independent consultant and university lecturer before moving into laboratory compliance consultancy and productivity roles.

"Questions of Quality" editor Bob McDowall is Director at R.D. McDowall Ltd, Bromley, Kent, UK. He is also a member of LCGC Europe's editorial advisory board. Direct correspondence about this column should be addressed to the editor-in-chief, Alasdair Matheson, at amatheson@advanstar.com

References

(1) United States Pharmacopeia, <1058>, Analytical Instrument Qualification.

(2) ICH Q9 Quality Risk Management, step 4, 2005 (www.ich.org).

(3) K. O'Donnell et al., PDA J. Pharm. Sci. and Tech. 66, 243–261 (2012).

(4) FDA Guidance for Industry: Guideline on General Principles of Process Validation, May 1987.

(5) L. Kaminski et al., LCGC Europe 24(8), 418–422 (2011).

(6) GAMP Good Practice Guide: A Risk-Based Approach to GxP Compliant Laboratory Computerized Systems, Second Edition (ISPE, Tampa, Florida, USA, 2012).

(7) Code of Federal Regulations, 21 CFR 211.160(a).

(8) United States Pharmacopeia, <621>, Chromatography.

(9) FDA Draft Guidance for Industry: Analytical Procedures and Methods Validation for Drugs and Biologics, Section VIII, Life Cycle Management of Analytical Procedures, February 2014.

(10) I.H. Afefy, Engineering 2, 863–873 (2010).

(11) Item 7, http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm124787.htm
