The fundamental aim of any computerized system validation should be to define its intended use and then test it to demonstrate that it complies with specification.
In the first part of this article1 we discussed the GAMP Good Practice Guide (GPG) for the Validation of Laboratory Computerized Systems.2 We looked at the advantages offered by the System Implementation Life Cycle (SILC) in contrast to the complexity of the system classification proposed in the GPG.
In this part I'll look at the risk assessment methodology, the new US Pharmacopeia (USP) general chapter <1058>,3 which is based upon the AAPS analytical equipment qualification white paper,4 and suggest a way forward to unite the qualification of equipment with the validation of the controlling laboratory computers.
OK, if you managed to get this far after reading Part 1, we now have the finishing touch: the risk assessment methodology. GAMP 4 uses a modified Failure Mode Effect Analysis (FMEA) risk assessment methodology, as outlined in Appendix M3 of that guide.5 This has also been adapted for laboratory systems in the GPG. Why this over-complex methodology was selected for laboratory systems is not discussed, although I suspect that it is aimed at consistency throughout the GAMP series of publications. The overall process flow for the risk assessment is shown in Figure 1: the first three steps are at the system level and the last two at the individual requirement level.
Figure 1: GAMP GPG risk management process.
FMEA was originally developed for risk assessment of new aeroplane designs in the late 1940s and has been adapted over time to encompass new designs and processes. However, as the majority of equipment and software used in laboratories is commercially available and purchased, rather than built from scratch, why is this inappropriate methodology being applied?
Commercially available instruments and systems have already been tested by their vendors, which can be verified by audit. So why should a risk analysis methodology that is very effective for new designs and processes be dumped on or foisted upon laboratories using mainly commercial systems? There are alternative and simpler risk analysis approaches that can be used for the commercial off-the-shelf (COTS) and configurable COTS software applications used throughout laboratories.
A detailed discussion of risk management is outside the scope of this column, but I have recently written a paper on the subject that some of you may find useful, as it compares the various methodologies available.6
The GPG uses a Boston grid for determining system impact, which is outlined in Appendix 1 of the document.2 However, because there are seven classes of laboratory instrumentation and five classes of business impact, this requires a 7 × 5 Boston grid. This overcomplicates the issue and is NOT easily manageable (Table 1). Moreover, because some systems can be classified in a number of laboratory categories, there is a possibility that the impact of a system will be underestimated.1
Table 1: GAMP GPG for laboratory systems - system impact table
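To make the scale of the problem concrete, here is a minimal sketch of such a grid in Python; the class labels and impact ratings are hypothetical placeholders, not the GPG's actual categories. It shows both the maintenance burden of 35 cells and how a system that fits more than one instrument class can be assessed differently depending on the class chosen.

```python
# Hypothetical sketch of a 7 x 5 Boston grid for system impact.
# Labels and ratings are illustrative, not the GPG's actual categories.

INSTRUMENT_CLASSES = ["A", "B", "C", "D", "E", "F", "G"]         # 7 classes
BUSINESS_IMPACT = ["none", "low", "medium", "high", "critical"]  # 5 classes

# Every combination needs an agreed system-impact rating.
grid = {(i, b): "to be defined"
        for i in INSTRUMENT_CLASSES for b in BUSINESS_IMPACT}
print(len(grid))  # 35 cells to populate, review and keep consistent

# If the same system can be argued into class "C" or class "E",
# the assessed impact depends on which class the assessor picks:
grid[("C", "high")] = "major"
grid[("E", "high")] = "moderate"  # same system, lower assessed impact
```

Thirty-five cells are a lot of definitions to agree, document and apply consistently, which is exactly why a smaller grid is easier to manage.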
Throughout the GPG there appears to be an emphasis on managing regulatory risk. This is in contrast to the introductory statements of the GPG mentioned at the start of this column. From my perspective, this is wrong: emphasis should be placed first and foremost on defining the intended purpose of the system, and hence the functions of the instrument and software that are required. Only then will you be able to assess the risk for the system based on its intended functions.
The testing approach outlined in Section 10 (Qualification, Testing and Release) and Appendix 2 needs to be viewed critically. Section 10 notes that, when testing or verifying operation against the user requirements in the PQ, the following are usually performed:
Appendix 2, covering testing priority, is a relatively short section that takes each requirement in the URS and assesses risk likelihood (the likelihood or frequency of a fault) versus the criticality of the requirement (the effect of the hazard) to classify the risk into one of three categories (1, 2 or 3). This risk classification is then plotted against the probability of detection to determine a high, medium or low testing priority. A high-risk classification coupled with a low likelihood of detection gives the highest test priority.
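As a sketch of how this two-stage lookup works in practice, the following Python fragment classifies a single URS requirement; the three-point scales and both lookup tables are my illustrative assumptions, not the tables published in Appendix 2.

```python
# Two-stage test prioritization per requirement, as I read Appendix 2.
# Scales and table contents are illustrative assumptions only.

RISK_CLASS = {
    # (likelihood of a fault, criticality of the requirement) -> category 1-3
    # Category 1 is the highest risk.
    ("high", "high"): 1, ("high", "medium"): 1, ("high", "low"): 2,
    ("medium", "high"): 1, ("medium", "medium"): 2, ("medium", "low"): 3,
    ("low", "high"): 2, ("low", "medium"): 3, ("low", "low"): 3,
}

TEST_PRIORITY = {
    # (risk category, probability of detecting the fault) -> test priority
    (1, "low"): "high", (1, "medium"): "high", (1, "high"): "medium",
    (2, "low"): "high", (2, "medium"): "medium", (2, "high"): "low",
    (3, "low"): "medium", (3, "medium"): "low", (3, "high"): "low",
}

def test_priority(likelihood: str, criticality: str, detection: str) -> str:
    """Classify one URS requirement and return its testing priority."""
    risk_category = RISK_CLASS[(likelihood, criticality)]
    return TEST_PRIORITY[(risk_category, detection)]

# A critical requirement whose failure is hard to detect is tested first.
assert test_priority("high", "high", "low") == "high"
```

Multiply this little exercise by every requirement in the URS and you can see how much assessment effort the approach demands for a commercial system.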
In my view, this encapsulates the overall approach of the guide: a regulatory rationale rather than a business approach, in contrast to the stated aims in the introduction. Using this approach, I believe that you will be performing over-complex and over-detailed risk assessments forever for the commercial systems that constitute the majority of laboratory systems. What the writers of the GPG have forgotten is that the FDA has gone back to basics with its Part 11 interpretation.7 Remember that the GMP predicate rules (21 CFR 211 and ICH Q7A for active pharmaceutical ingredients) state for equipment and computerized systems:
§211.63 Equipment design, size, and location: "Equipment used in the manufacture, processing, packing, or holding of a drug product shall be of appropriate design, adequate size, and suitably located to facilitate operations for its intended use and for its cleaning and maintenance."8
ICH Q7A (GMP for active pharmaceutical ingredients) states in §5.42, in the section on computerized systems: "Commercially available software that has been qualified does not require the same level of testing."9
The fundamental aim of any computerized system validation should be to define the system's intended use and then test it to demonstrate that it complies with its specification. The risk assessment should focus the testing effort where it is needed most, but build on the testing that a vendor has already done, as the GPG notes on page 34.2 Where a vendor has tested the system in the way that you use it (either in system testing or in the OQ), why do you need to repeat this?
As usual in this world, each professional group MUST have its own say in how things should be done. The American Association of Pharmaceutical Scientists (AAPS) is no exception and has produced a white paper titled "Qualification of Analytical Instruments for Use in the Pharmaceutical Industry: A Scientific Approach".4 Of course, this is a different approach from GAMP. However, on the bright side, the dishwashers bit the dust long before the final version of this publication!
In contrast to the GAMP GPG, which looks at laboratory equipment from the computer perspective, the AAPS document looks at the same issue from the equipment qualification perspective. The AAPS white paper devises three classes of instruments, with a user requirements specification necessary to start the process.
OK, this approach is simpler, but consideration of the computer aspects is limited to data storage, back-up and archive. Thus, the approach is rather simplistic from the computer validation perspective.
Furthermore, the definition of IQ, OQ and PQ is from the equipment qualification perspective (naturally), with operational release occurring after the OQ stage and the PQ intended to ensure the continued performance of the instrument. This differs from the GAMP GPG, which uses the computer validation definitions of IQ, OQ and PQ, where PQ is end-user testing and operational release occurs at the end of the PQ phase.2 It is a great problem when two major publications cannot agree on terminology for the same subject.
However, the AAPS white paper is now the baseline document for the proposed new general chapter <1058> for USP XXIX,3 the draft of which was published for comment in Pharmacopeial Forum.10 This highlights the flawed approach of the GAMP GPG, because there is now a de facto differentiation between laboratory equipment qualification and computer system validation that will be incorporated in the USP.
So are we any further forward? Not really: we are just nibbling at the problem from a different perspective, without solving it decisively. Consider the following issues, which are not fully covered by the AAPS guide and will now be enshrined in a formal regulatory text:
What we really need for any regulated laboratory is an integrated approach to the twin problems of instrument qualification and computer validation. As the GAMP GPG notes, the majority of laboratory and spectrometer systems come with some degree of computerization, from firmware to configurable off-the-shelf software.2 The application software controls the instrument; if you qualify the instrument, you will usually need the software to undertake many of the qualification tests, with the option to validate the software at the same time.
BUT... we look at the two issues separately.
Consider the AAPS analytical instrument qualification white paper4 and the GAMP laboratory GPG2 as the two examples that we have looked at in this column. They look at different parts of the same overall problem and come up with two different approaches. No wonder, when we don't take a considered and holistic view of the whole problem.
For example, we use the same qualification terminology (IQ, OQ and PQ) for both instrument qualification and computer system validation, but the terms mean different things.10 This is exemplified in the two guides. Confused? You should be. If you are not, then you have not understood the problem!
Therefore, we need to develop the following guidance as a minimum:
Figure 2: Integrated approach to laboratory instrument qualification and validation (modified from C. Burgess, personal communication).
I can go on (and usually do) in more detail but the plain truth is that we don't have this holistic approach yet.
Qualification of laboratory equipment and validation of computerized laboratory systems are going in two different directions and lack an integrated approach. We need an approach that recognizes that the instrument is qualified through the controlling software, and that the software must be validated at the same time. This approach must also harmonize terminology and definitions. Until we have such an integrated approach, there will be confusion in this area.
1. R.D. McDowall, LCGC Eur., 19(5), 274–282 (2006).
2. GAMP Good Practice Guide: Validation of Laboratory Computerized Systems, International Society for Pharmaceutical Engineering (ISPE), Tampa, Florida, USA (2005).
3. United States Pharmacopeia XXIX (2006).
4. S.K. Bansal et al., Qualification of Analytical Instruments for Use in the Pharmaceutical Industry: A Scientific Approach, American Association of Pharmaceutical Scientists (2004).
5. Good Automated Manufacturing Practice (GAMP) Guide, Version 4, International Society for Pharmaceutical Engineering (ISPE), Tampa, Florida, USA (2001).
6. R.D. McDowall, Quality Assurance Journal, 9, 196–227 (2005).
7. FDA Guidance for Industry on Part 11 Scope and Application (2003).
8. Current Good Manufacturing Practice for Finished Pharmaceuticals, 21 CFR Part 211, FDA.
9. ICH Q7A, Good Manufacturing Practice for Active Pharmaceutical Ingredients (2000).
10. Pharmacopeial Forum, <1058> Analytical Equipment Qualification (January 2005).
R.D. McDowall is principal at McDowall Consulting, Bromley, Kent, UK. He is also a member of the Editorial Advisory Board for LCGC Europe.