Comments on the GAMP Good Practice Guide for Validation of Laboratory Computerized Systems, Part 2

Article

LCGC Europe

1 September 2006
Volume 19
Issue 9
Pages: 462–468

The fundamental aim of any computerized system validation should be to define its intended use and then test it to demonstrate that it complies with specification.

In the first part of this article1 we discussed the GAMP Good Practice Guide (GPG) for the Validation of Laboratory Computerized Systems.2 We looked at the advantages offered by the System Implementation Life Cycle (SILC) in contrast to the complexity of the system classification proposed in the GPG.

In this part I'll look at the risk assessment methodology, the new US Pharmacopeia (USP) general chapter <1058>,3 which is based upon the AAPS analytical instrument qualification white paper,4 and suggest a way forward to unite the qualification of equipment with the validation of the controlling laboratory computers.

Risk Assessment Methodology

OK, if you managed to get this far after reading Part 1, we now have the finishing touch — the risk assessment methodology. The GAMP 4 guide5 uses a modified Failure Mode Effect Analysis (FMEA) risk assessment methodology, as outlined in its Appendix M3, and this has been adapted for laboratory systems in the GPG. Why this over-complex methodology was selected for laboratory systems is not discussed, although I suspect that the aim is consistency throughout the GAMP series of publications. The overall process flow for the risk assessment is shown in Figure 1: the first three steps are at the system level and the last two at the individual requirement level.

Figure 1: GAMP GPG risk management process.
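For readers who have not met FMEA before, its core calculation is simple to state even if the paperwork is not: each failure mode is scored for severity, occurrence and detectability, and the three scores are multiplied into a risk priority number (RPN). A minimal sketch in Python follows; the 1–10 scales are the classic FMEA convention and the example failure mode is my own illustration, not one taken from the GPG:

    # Classic FMEA scoring: each failure mode gets a 1-10 score for
    # severity (S), occurrence (O) and detection (D, where 10 means
    # hardest to detect); the risk priority number is their product.
    def rpn(severity: int, occurrence: int, detection: int) -> int:
        for score in (severity, occurrence, detection):
            if not 1 <= score <= 10:
                raise ValueError("FMEA scores run from 1 to 10")
        return severity * occurrence * detection

    # Hypothetical failure mode: wrong wavelength set by the control
    # software -- severe, rare and hard to spot.
    print(rpn(severity=9, occurrence=2, detection=8))  # -> 144

Multiply three judgement calls together for every failure mode of every requirement and you can see where the effort goes.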

FMEA was originally developed for risk assessment of new aeroplane designs in the late 1940s and has been adapted over time to encompass new designs and processes. However, as the majority of equipment and software used in laboratories is commercially available and purchased rather than built from scratch, why is this inappropriate methodology being applied?

Commercially available instruments and systems have already been tested by the vendors, which can be verified by audit. So why should a risk analysis methodology that is very effective for new designs and processes be foisted on laboratories using mainly commercial systems? There are alternative and simpler risk analysis approaches that can be used for the commercial off-the-shelf (COTS) and configurable COTS software applications used throughout laboratories. Two examples are:

  • Hazard analysis and critical control points (HACCP)

  • Functional risk assessment (FRA).

A detailed discussion of risk management is outside the scope of this column, but I have written a recent paper on the subject that some of you may find useful, as it compares the various methodologies available.6

The GPG uses a Boston grid for determining system impact, which is outlined in Appendix 1 of the document.2 However, because there are seven classes of laboratory instrumentation and five classes of business impact, this requires a 7 × 5 Boston grid. This overcomplicates the issue and is NOT easily manageable (Table 1). Moreover, because some systems can be classified in a number of laboratory categories, there is a possibility that the impact of a system will be underestimated.1

Table 1: GAMP GPG for laboratory systems - system impact table
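If you doubt that a 7 × 5 grid is hard to manage, count the cells. The sketch below is illustrative only; the category and impact labels are my placeholders, not the GPG's own:

    # Illustrative sketch of a 7 x 5 Boston grid: seven laboratory
    # instrument categories against five levels of business impact.
    # The labels are placeholder assumptions, not those in the GPG.
    categories = [f"lab_category_{i}" for i in range(1, 8)]       # 7 classes
    impacts = ["none", "minor", "moderate", "major", "critical"]  # 5 levels

    # Every combination needs a documented decision: 35 cells in all.
    grid = {(c, i): None for c in categories for i in impacts}
    print(len(grid))  # -> 35

And because a system that fits more than one laboratory category can land in more than one row, two assessors can quite legitimately place the same system in different cells.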

Testing Approach versus Intended Purpose

Throughout the GPG there appears to be an emphasis on managing regulatory risk. This is in contrast to the introductory statements of the GPG quoted at the start of this column. From my perspective this is wrong: the emphasis should be placed, first and foremost, on defining the intended purpose of the system and hence the functions of the instrument and software that are required. Only then will you be able to assess the risk for the system based on its intended functions.

The testing approach outlined in Section 10 (Qualification, Testing and Release) and Appendix 2 needs to be viewed critically. Section 10 notes that, for testing or verifying the operation of the system in the PQ against user requirements, the following are usually performed:

  • Verification of user SOPs

  • Capacity testing (as required)

  • Processes (between input and output)

  • Testing of the system's back-up and restore (as required)

  • Security

  • Actual application of the system in the production environment (e.g., sample analysis).

Appendix 2, covering testing priority, is a relatively short section that takes each requirement in the user requirements specification (URS) and assesses the risk likelihood (the likelihood or frequency of a fault) against the criticality of the requirement (the effect of the hazard) to classify the risk into one of three categories (1, 2 or 3). This risk classification is then plotted against the probability of detecting the fault to determine a high, medium or low priority of testing. A high risk classification coupled with a low likelihood of detection gives the highest testing priority.
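As a sketch of the two-step logic just described; the grid mappings below are my illustrative assumptions, because the GPG's actual tables are in its Appendix 2 and are not reproduced here:

    # Sketch of the Appendix 2 two-step prioritization described above.
    # The lookup mappings are illustrative assumptions, not the GPG's
    # actual grids.
    LEVELS = ("low", "medium", "high")

    def risk_class(likelihood: str, criticality: str) -> int:
        """Step 1: combine fault likelihood with requirement criticality
        into a risk classification (1 = highest risk, 3 = lowest)."""
        score = LEVELS.index(likelihood) + LEVELS.index(criticality)
        return {4: 1, 3: 1, 2: 2, 1: 3, 0: 3}[score]  # assumed mapping

    def test_priority(risk: int, detection: str) -> str:
        """Step 2: plot risk class against the probability of detecting
        the fault; high risk plus poor detectability tests first."""
        if risk == 1 and detection == "low":
            return "high"
        if risk == 3 and detection == "high":
            return "low"
        return "medium"

    # A critical requirement with a likely fault that is hard to detect:
    print(test_priority(risk_class("high", "high"), "low"))  # -> 'high'

Nothing is wrong with the logic itself; my objection is to running every URS requirement of every commercial system through it.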

This probably encapsulates the overall approach of the guide, in my view — a regulatory rationale rather than a business approach, in contrast to the stated aims of the guide in the introduction. Using this approach, I believe that you will be performing over-complex and over-detailed risk assessments forever for the commercial systems that constitute the majority of laboratory systems. What the writers of the GPG have forgotten is that the FDA has gone back to basics with its Part 11 interpretation.7 Remember that the GMP predicate rules (21 CFR 211, and ICH Q7A for active pharmaceutical ingredients) for equipment and computerized systems state:

§211.63 Equipment Design, Size and Location: Equipment used in the manufacture, processing, packing or holding of a drug product shall be of appropriate design, adequate size, and suitably located to facilitate operations for its intended use and for its cleaning and maintenance.8

ICH Q7A (GMP for active pharmaceutical ingredients) states in §5.42, within its §5.4 on computerized systems: Commercially available software that has been qualified does not require the same level of testing.9

The fundamental aim of any computerized system validation should be to define its intended use and then test it to demonstrate that it complies with specification. The risk assessment should focus the testing effort where it is needed most, but build on the testing that a vendor has already done, as the GPG notes on page 34.2 Where a vendor has tested the system in the way that you use it (either in system testing or in the OQ), why do you need to repeat this?

Cavalry to the Rescue? — AAPS Guide on Instrument Qualification

As usual, each professional group MUST have its own say in how things should be done. The American Association of Pharmaceutical Scientists (AAPS) is no exception and has produced a white paper titled "Qualification of analytical instruments for use in the pharmaceutical industry: a scientific approach".4 Of course, this is a different approach from GAMP's. However, on the bright side, the dishwashers bit the dust long before the final version of this publication!

In contrast to the GAMP GPG, which looks at laboratory equipment from the computer perspective, the AAPS document looks at the same issue from the equipment qualification perspective. The white paper defines three groups of instruments, with a user requirements specification needed to start the process (a minimal sketch of the groupings follows the list):

  • Group A instruments: Conformance to the specification is achieved visually with no further qualification required. Examples of this group are ovens, vortex mixers, magnetic stirrers and nitrogen evaporators.

  • Group B instruments: Conformance to specification is achieved according to the individual instrument's SOP. Installation of the instrument is relatively simple and causes of failure can be easily observed. Examples of instruments in this group are balances, IR spectrometers, pipettes, vacuum ovens and thermometers.

  • Group C instruments: Conformance to user requirements is highly method specific according to the guide. Installation can be complex and require specialist skills (e.g., the vendor). A full qualification is required for the following spectrometers: atomic absorption, flame absorption, ICP, MS, Raman, UV/vis and XRF.
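A minimal sketch of these groupings, using examples quoted from the white paper; the mapping structure and qualification wording are my own summary, not the AAPS document's:

    # Sketch of the three AAPS instrument groups described above.
    # Structure and qualification wording are my own summary.
    AAPS_GROUPS = {
        "A": {"qualification": "visual conformance check only",
              "examples": ["oven", "vortex mixer", "magnetic stirrer"]},
        "B": {"qualification": "conformance per the instrument's own SOP",
              "examples": ["balance", "IR spectrometer", "pipette"]},
        "C": {"qualification": "full qualification, complex installation",
              "examples": ["ICP", "MS", "Raman", "UV/vis", "XRF"]},
    }

    def required_effort(group: str) -> str:
        return AAPS_GROUPS[group]["qualification"]

    print(required_effort("C"))  # -> 'full qualification, complex installation'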

OK, this approach is simpler, but consideration of the computer aspects is limited to data storage, back-up and archive. Thus, this approach is rather simplistic from the computer validation perspective.

Furthermore, the definition of IQ, OQ and PQ is from the equipment qualification perspective (naturally), with operational release occurring after the OQ stage and the PQ intended to ensure continued performance of the instrument. This differs from the GAMP GPG, which uses the computer validation definitions of IQ, OQ and PQ, where PQ is end-user testing and operational release occurs after the end of the PQ phase.2 It is a great problem when two major publications cannot agree on terminology for the same subject.

However, the AAPS white paper is now the baseline document for the proposed new general chapter <1058> of USP XXIX,3 the draft of which was published for comment in Pharmacopeial Forum.10 This highlights the flawed approach of the GAMP GPG, because there is now a de facto differentiation between laboratory equipment qualification and computer system validation that will be incorporated in the USP.

So are we any further forward? Not really — we are just nibbling at the problem from a different perspective without solving it decisively. Consider the following issues, which are not fully covered by the AAPS guide yet will now be enshrined in a formal regulatory text:

  • The scope of the guidance and proposed USP chapter is limited only to commercial off-the-shelf analytical instrumentation and equipment.

  • The three instrument groups are described along with suggested testing approaches for each. However, in my view, the criteria for placing instruments in particular groups are not sufficiently defined.

  • Group C instruments cover a wide spectrum of complexity and risk, and may have very diverse requirements. There is no specific allowance made within the approach for custom developed applications such as macros commonly found when operating spectrometers.

  • The guide covers the initial qualification activities for analytical instruments but there is very little on the validation of the software that controls the instrument. There is little guidance on operational, maintenance and control activities following implementation such as access control, change control, configuration management and data back-up. How many spectrometers can you name that don't have computer-controlled equipment and data acquisition?

  • The proposed chapter uses the term "analytical instrument qualification" (AIQ) to describe the process of ensuring that an instrument is suitable for its intended application but the instrument is only a part of the whole computerized system. It is the computerized system that controls the whole — not the instrument.

Integrated Approach to Computer Validation AND Instrument Qualification

What we really need for any regulated laboratory is an integrated approach to the twin problems of instrument qualification and computer validation. As the GAMP GPG notes, the majority of laboratory and spectrometer systems come with some degree of computerization, from firmware to configurable off-the-shelf software.2 The application software controls the instrument, so if you qualify the instrument you will usually need the software to undertake many of the qualification tests, with the option of validating the software at the same time.

BUT... we look at the two issues separately.

Consider the AAPS analytical instrument qualification white paper4 and the GAMP laboratory GPG,2 the two examples we have looked at in this column. They look at different parts of the same overall problem and come up with two different approaches. No wonder, when we don't take a considered and holistic view of the whole problem.

For example, we use the same qualification terminology (IQ, OQ and PQ) for both instrument qualification and computer system validation, but the terms mean different things in each context.10 This fact is exemplified in the two guides. Confused? You should be. If you are not — then you have not understood the problem!

Therefore, we need to develop the following guidance as a minimum:

  • Integrated terminology covering both the qualification of the instrument and the validation of the software. This must ensure that the laboratory is not separated from the rest of the organization and that we do not create a profession of "Lablish" interpreters.

  • Simple classification of laboratory equipment software, based on the existing GAMP software categories, to be consistent with the rest of the organization. The laboratory is no more a unique part of a facility than production is.

  • Realistic life cycle(s), based on further development of the simple SILC outlined in the GPG, that reflect the different options we face in the laboratory: from COTS to configurable COTS and, where necessary, customization of an application.

  • Writing a specification or specifications to document both the instrument and the associated software functions. Figure 2 shows one way to integrate the two by considering the equipment operational requirements at both the modular and holistic levels, together with the software functions required; both are based on the way of working in a specific laboratory. The equipment qualification requirements for traceable reference standards can also be devised for input into the URS.10

  • Use of a simple but effective risk assessment methodology that reflects the fact that the majority of instruments and systems are commercial.

  • Integrated and practical approaches to combined equipment qualification and computer validation to test and demonstrate that the system does what it is intended to do.

Figure 2: Integrated approach to laboratory instrument qualification and validation (modified from C. Burgess, personal communication).

I can go on (and usually do) in more detail but the plain truth is that we don't have this holistic approach yet.

Summary

Qualification of laboratory equipment and validation of computerized laboratory systems are going in two different directions and lack an integrated approach. We need an approach that recognizes that the instrument is qualified through the controlling software, and that the software must be validated at the same time. This approach must also harmonize terminology and definitions. Until we have it, there will be confusion in this area.

References

1. R.D. McDowall, LCGC Eur., 19(5), 274–282 (2006).

2. GAMP Good Practice Guide: Validation of Laboratory Computerized Systems, International Society for Pharmaceutical Engineering, Tampa, Florida, USA (2005).

3. Proposed General Chapter <1058>, United States Pharmacopeia XXIX (2006).

4. S.K. Bansal et al., Qualification of Analytical Instruments for Use in the Pharmaceutical Industry: A Scientific Approach, American Association of Pharmaceutical Scientists (2004).

5. Good Automated Manufacturing Practice (GAMP) Guidelines, version 4, International Society for Pharmaceutical Engineering, Tampa, Florida, USA (2001).

6. R.D. McDowall, Quality Assurance Journal, 9, 196–227 (2005).

7. FDA Guidance for Industry on Part 11 Scope and Application (2003).

8. FDA Current Good Manufacturing Practice for Finished Pharmaceutical Products (21 CFR 211).

9. ICH Q7A Good Manufacturing Practice for Active Pharmaceutical Ingredients (2000).

10. Pharmacopeial Forum, <1058> Analytical Instrument Qualification (January 2005).

R.D. McDowall is principal at McDowall Consulting, Bromley, Kent, UK. He is also a member of the Editorial Advisory Board for LCGC Europe.
