Data Integrity in the Chromatography Laboratory, Part V: Second-Person Review


LCGC North America

August 1, 2018
Volume 36
Issue 8
Pages: 527–529

The series continues with a crucial scientific and regulatory necessity in ensuring reliable information: the second-person review.

The fifth article in this series discusses the second-person review process for regulated records and the risks to be addressed, such as reviewing all data in the run, including aborted or rejected data. The added review costs (and time) of hybrid records and paper logbooks are illustrated. Common failure modes for review are identified. Finally, some management behaviors that compound an already complex step are described along with their consequences.

This article is the fifth of six on data integrity in a regulated chromatography laboratory. The first article discussed sampling and sample preparation (1), and the second focused on preparing the instrument for analysis and acquiring data (2). These articles were followed by the third part, which examined the integration of acquired chromatograms (3), and the fourth installment, where we discussed the calculation of the reportable results (4). In this article, we consider the second-person review of all the data and records generated in the process.

Regulatory Requirements for a Second-Person Review

One of the keys to data integrity in a regulated laboratory is an effective second-person review, and its importance is reinforced by the Food and Drug Administration (FDA) and European Union Good Manufacturing Practice (EU GMP) regulations:

21 CFR 211.194(a) Laboratory records shall include complete data derived from all tests necessary to assure compliance with established specifications and standards, including examinations and assays, as follows:

 

(8) The initials or signature of a second person showing that the original records have been reviewed for accuracy, completeness, and compliance with established standards (5).

 

6.17. The tests performed should be recorded and the records should include at least the following data:

vii. Initials of the persons who verified the testing and the calculations, where appropriate (6).

In addition to being a regulatory requirement, the second-person review is the first line of defense for data integrity, because it is the initial opportunity to discover errors and omissions in documentation and execution. It is also important that the review be conducted by the person most qualified to assess the scientific merit of all activities performed in preparing the reportable result. Who is more qualified to review the dataset than an experienced scientist who knows the technical details of executing the analytical procedure?

Scope of Second-Person Review

Because a second-person review is a regulated activity, it must conform to the applicable regulations. One of the most important requirements is that the complete set of data must be reviewed, as noted in 21 CFR 211.194(a): "A complete record of all data secured in the course of each test, including all graphs, charts, and spectra from laboratory instrumentation" (5).

How do we interpret this requirement for a chromatographic analysis? The scope of this requirement is shown in Figure 1, and it encompasses all records from taking the sample to generating the reportable result. It consists of all records generated in sampling, preparation, instrumental analysis, data evaluation, and calculation of the reportable result, as presented in Parts I–IV of this article series (1–4). Figure 1 also shows the scope of complete data (5) or raw data (6), and the scope of the audit-trail review.


Figure 1: The scope of a second-person review of a chromatographic analysis (7).

However, some laboratories have struggled with the term "complete," defining it as only the data directly used to calculate the reportable result. This interpretation is wrong, because it excludes data that may have been generated but left out of the reportable result for various reasons. It is imperative that the reason for excluding data is scientifically justified and documented, and the reviewer must be given the excluded data to assess the scientific merit of that justification. See questions 2, 12, and 14 in the FDA's data integrity guidance (8) for more detail on this subject. In a recent warning letter, excluded data were not given to the reviewer, resulting in a regulatory citation (9). Transparency of data is a key part of ensuring integrity in regulated laboratory operations.

Some examples of excluded data that must be presented for second-person review include:

  • Aborted runs or aborted tests

  • Rejected runs that failed to meet suitability or acceptance criteria

  • Runs that are out of specification (OOS) or out of trend (OOT)

  • Runs that are voided for any reason (dropped injections, power failure) along with supporting evidence

  • Prior calculations of result values, if performed in external applications (for example, statistical programs or spreadsheets)

  • Original records containing errors that have been corrected and for which both the original and corrected printouts are being resubmitted for review.

Compiling the complete record of actions can be quite challenging, especially when multiple systems and hybrid records are involved. World Health Organization (WHO) guidance in clause 11.14 advises

"To ensure that the entire set of data is considered in the reported data, the review of original electronic data should include checks of all locations where data may have been stored . . ." (10).

This step requires a procedure for the review.

 

Procedure for a Second-Person Review

A standard operating procedure (SOP), coupled with effective training, is also required for a second-person review. An outline for such a procedure is shown in Figure 2, and includes the review of printouts from a hybrid system or even a summary from a chromatography data system (CDS) used with electronic signatures.


Figure 2: An outline for a second-person review SOP (7).

Associated with this SOP, it is recommended that each analytical procedure have a checklist to assist reviewers in identifying the complete record of testing, because the records will vary from method to method within a laboratory. Each checklist will need to be controlled and uniquely numbered (11). It may also be necessary to provide instructions on how to retrieve aborted or rejected runs from the CDS, and include them in the second-person review.
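As an illustration only, the sketch below (in Python, with entirely hypothetical field names; a real checklist would normally be a controlled paper or electronic document rather than code) shows one way such a uniquely numbered, method-specific checklist could be represented so that an incomplete review is easy to detect:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    record: str             # record to verify, for example "pipette calibration logbook"
    verified: bool = False  # set to True by the reviewer once the record has been checked
    comment: str = ""       # note any error, omission, or follow-up action

@dataclass
class ReviewChecklist:
    checklist_id: str       # controlled, unique identifier, for example "CHK-001 v3" (hypothetical)
    method: str             # analytical procedure the checklist belongs to
    items: list = field(default_factory=list)

    def outstanding(self):
        """Return the records that have not yet been verified by the reviewer."""
        return [item.record for item in self.items if not item.verified]

# Illustrative use: two records still awaiting review
checklist = ReviewChecklist("CHK-001 v3", "Assay of product X by HPLC",
                            [ChecklistItem("pipette calibration logbook"),
                             ChecklistItem("balance use logbook")])
print(checklist.outstanding())
```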

Who Should Be a Reviewer?

Selecting a reviewer is the single most critical part of the review process. Even with a robust checklist, the final quality of the review is wholly dependent on the person performing it, their knowledge and experience of the CDS functions, and the support they receive from management.

The experience and demeanor of the reviewer must be up to the task. Reviewers should understand the theory behind the method and have performed it for an extended period. They know the challenging steps and the places where errors can occur. They can look at the time taken for an analysis and tell when someone is rushing or struggling. This requirement for background and experience means that reviewers will often be more senior people within the organization.

In addition to the technical background, a reviewer must be detail-oriented. Reviewing requires a person to go to multiple sources of data (systems, notebooks, and so on) and meticulously check records for accuracy. When a pipette was used to dispense a sample, the reviewer will stop, find the pipette logbook, and verify that the pipette was in calibration on the date it was used in the assay. The reviewer will verify that dates, pipette identification numbers, and entered values are not only recorded, but are also reasonable and consistent. Good reviewers constantly evaluate what they see, noting anything that seems out of place or unusual. They are naturally curious. Many people possess the required academic background and experience, but the ability to focus on detail and the natural curiosity to evaluate and investigate data are what set the best reviewers apart from the rest of the analysts.

Is Management the Problem, Part I?

Even well-qualified reviewers cannot be effective without the support of their managers. Managers must ensure that adequate time and priority are given to perform thorough reviews, and they must support their findings when issues or changes must be addressed before test release. Nothing prevents adequate review like managers who complain about the time needed for a second-person review, especially when remediation of many systems involves only additional procedural controls that slow the review, rather than technical solutions that would speed up the review process.

Audit-Trail Review

One of the key aspects of the second-person review, as shown in Figure 1, is the review of audit-trail entries for all GMP-relevant changes and deletions (12). Because the laboratory records of an analysis must include complete data, as required by 21 CFR 211.194(a) (5), and the record must be reviewed by a second person, as specified in 21 CFR 211.194(a)(8) (5), it follows that the second-person verification must be a complete review of the complete record of testing. The challenge in the electronic, relational-database world of a CDS is this: What is the complete record?

In a paper world, the complete record was simpler to define, because there was much less information, and it was all in paper form. But even in the days of paper records, there had to be a list of papers that defined the complete record of testing. The move to electronic records seems more complex on the surface, but at its core it is the same exercise. For example, an analyst could make a calculation error, discover it, and recompute the reported result. On paper, the error was crossed out and changed following the documentation SOP. In a CDS, that same change is documented in an audit trail. So, the reviewer must look at the audit trail, or the review is inadequate. This same exercise of matching paper to electronic records must be done for each method. Once the records are matched, you will discover that some audit trails do not need to be reviewed every time a test result is released from the laboratory. For example, a login–logout audit trail seldom contains critical information (that is, information with a direct impact on the reportable result). This mapping exercise requires business and systems knowledge, and is best done by a team of people and documented for the benefit of all laboratory analysts executing the method. Simply allowing each reviewer to decide what to review is a sure recipe for inconsistency, errors, and omissions in released results.

What makes an audit trail "critical"? The simplest criterion is that the audit trail has two characteristics:

1. It is required by a predicate rule, and

2. It has a direct impact on a reportable result.

Examples include changes to calculations, changed (overridden) original data values, calculation factors, sample identities, and reference standard potencies. However, this statement assumes that the audit-trail entries of a CDS are understandable (13); if not, they fail the "legible" test of ALCOA+.
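As a purely illustrative sketch of the two criteria above (the field names and entry descriptions are assumptions, not the schema of any particular CDS), an audit-trail entry could be flagged for mandatory review as follows:

```python
from dataclasses import dataclass

@dataclass
class AuditTrailEntry:
    description: str                 # for example "calculation factor changed" (hypothetical)
    predicate_rule_required: bool    # criterion 1: required by a predicate rule
    affects_reportable_result: bool  # criterion 2: direct impact on the reportable result

def is_critical(entry: AuditTrailEntry) -> bool:
    # An entry is critical, and must be reviewed, only when both criteria are met
    return entry.predicate_rule_required and entry.affects_reportable_result

# Illustrative entries: the first must be reviewed, the second need not be
entries = [
    AuditTrailEntry("sample identity changed", True, True),
    AuditTrailEntry("user login", False, False),
]
for e in entries:
    print(f"{e.description}: {'review' if is_critical(e) else 'not critical'}")
```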

Focus on the Instrument Logbook

A previous article (14) focused on the instrument logbook, normally used to record maintenance and use in chronological order. Although it is simple to create logbooks to document the required information about the use and maintenance of instruments, they impose burdens on the organization. These burdens come into focus when performing a second-person review, as shown in Figure 2.

With interfaced electronic systems, such as a CDS and a laboratory information management system (LIMS) or an electronic laboratory notebook (ELN), the audit-trail review can be done electronically. Some systems flag for reviewers when changes have been made to an original record, so no time is wasted looking for a nonexistent audit-trail entry. Moving from the LIMS or ELN back to source data is simple, sometimes needing only a few keystrokes. Sample preparation, equipment, and solution records are linked from the test back to the detailed records of these activities. Electronic systems make access to information efficient and accurate.

In contrast, hybrid systems with poor security have an abundance of logbooks that make test review both time consuming and error-prone. Consider a scenario where an analytical procedure uses an instrument, a pipette, and an analytical balance, and the test data are recorded manually in a simple LIMS or ELN, as shown in Table I.

Notice the number of logbooks to be reviewed? Each book requires time to locate the correct record, read and assess the entry, and then return to the summary record. Errors or blank fields found during the review require the logbook to be returned to the offending person for correction and justification.

Contrast this process with electronic systems, where blank fields are flagged at the time of entry. Second-person review is one place where the expense of migrating to interfaced electronic systems returns a large dividend. It is worth pointing out that methods relying on several standalone electronic systems can have second-person review times exceeding the time needed to perform the method, excluding the added time to administer (assign and manage) the logbooks.

Unnecessary logbooks are also created by electronic systems with data integrity gaps. For example, a system that does not permit individual user accounts with passwords (a regulatory expectation) must be mitigated by creating a use logbook. The need for this logbook is avoided entirely by replacing such a system with one that uses individual accounts and passwords.

 

Recording the Review

It is necessary to leave evidence of a completed review, as shown in Figure 2. Sadly, many systems in use today provide no transaction to capture this evidence in an electronic format, forcing users to document their review in hybrid form. This process is inefficient, and the ability to record the review electronically should be a major consideration when purchasing a new system.

Hybrid or Electronic CDS?

With the use of external spreadsheets or other applications to support the business workflow, and the use of paper logbooks for equipment, organizations often find themselves with a hybrid record scenario. This represents the worst of all possible worlds for data life cycle management and second-person review. As mentioned above, the hybrid process for second-person review is cumbersome and time consuming; consequently, more effort (and time) is required for a thorough review of the complete data. This extra effort, required for every test, becomes a temptation: It is far easier to skip a few records (for example, the pipette calibration logbook) and save some time in the review. Then, add a rationalization: "After all, how often has it failed?" Hybrid records are faster to create than purely electronic records, but after their creation they carry a very heavy cost for the remaining portion of the data life cycle.

If the inefficiency and increased risk of missed errors is not enough, there is the regulatory reason to avoid hybrid records. WHO TRS 996 Annex 5 on page 203 states the World Health Organization's position (10): "In the hybrid approach, which is not the preferred approach, paper printouts of original electronic records from computerized systems may be useful as summary reports if the requirements for original electronic records are also met."

To rely upon these printed summaries of results for future decision-making, a second person would have to review the original electronic data and any relevant metadata, such as audit trails, to verify that the printed summary is representative of all results. This verification would then be documented, and the printout could be used for subsequent decision-making.
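A minimal sketch of that verification step, assuming purely for illustration that the reviewer can tabulate both the complete electronic result set and the values shown on the printed summary, reduces to confirming that nothing in the electronic record is missing from, or different on, the printout:

```python
# Hypothetical sample names and values, for illustration only
electronic_results = {"sample 1": 99.2, "sample 2": 98.7, "standard check": 100.1}
printed_summary = {"sample 1": 99.2, "sample 2": 98.7}

# Results present electronically but absent from the printed summary
missing = {k: v for k, v in electronic_results.items() if k not in printed_summary}
# Results present in both but with differing values
mismatched = {k: (printed_summary[k], v) for k, v in electronic_results.items()
              if k in printed_summary and printed_summary[k] != v}

if missing or mismatched:
    print("Printed summary is NOT representative:", missing, mismatched)
else:
    print("Printed summary matches the electronic record.")
```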

Given the hidden costs and the preferences of regulators, it is wise to do everything possible to shift processes to electronic records and to reduce paper–electronic links to the minimum possible number.

What Can Possibly Go Wrong?

If conducted correctly, a second-person review is a value-added activity for any laboratory, giving confidence in the analytical results and in any decisions taken with them. However, review failure is possible, and it can come from four primary sources:

Failure to provide the complete record of testing for review

The responsibility for this failure belongs mostly to laboratory analysts (9). However, a portion arises from management failing to define the complete record and to train people to provide it.

Failure to assess the complete record during review, including all associated audit trails, logbooks, and paper and electronic records

This failure mode belongs mostly to the reviewer, but a portion is caused by the lack of a checklist leading them through a complete review of all relevant records; providing that checklist is a management responsibility, so its absence is a management failure.

Failure to provide adequate time to perform a second-person review

This failure mode is mostly the responsibility of management, but a portion could be due to a lack of priority given by the reviewer (4).

Failure to assign qualified and trained people to the duty of second person review

This failure mode belongs squarely on the shoulders of management.

In total, two of the four primary failure modes belong to management, and management bears secondary responsibility for the other two. Yes, management has significant control over the quality of any second-person review, for better or worse.

Is Management the Problem, Part II?

As we can see above, management can adversely influence a second-person review. Are there other areas where management can affect a review for the worse? Here is a nonexhaustive list:

  • Inadequate recognition or reward for good reviewers (catching more errors is an indicator of good reviewer performance, not poor performance)

  • Inadequate time given for quality review and issue follow-up

  • Complaints about the time required to test, review, and release, with no effort to improve the process (for example, by moving to electronic systems)

  • Procedural controls implemented because they are quick and cheap, compounded by a failure to invest in superior and faster technical controls

Therefore, ensuring effective second-person reviews requires management to provide the time, the training, and the tools.

Summary

A second-person review is an essential scientific and regulatory requirement for ensuring data integrity. However, this process requires the right people and the right understanding of the scope of the data and records to be reviewed for completeness, consistency, and accuracy. Reviews can be hindered by managers who fail to invest in electronic systems with technical controls that speed up reviews; without working electronically, a good review can take longer than the time to perform the test. Management attitudes and lack of understanding can also degrade the quality of a second-person review.

In the final part of this series, we will look at analyst training and how to establish and maintain an open culture and metrics for monitoring chromatographic analyses.

References

(1) M.E. Newton and R.D. McDowall, LCGC North Am. 36(1), 46–51 (2018).

(2) M.E. Newton and R.D. McDowall, LCGC North Am. 36(4), 270–274 (2018).

(3) M.E. Newton and R.D. McDowall, LCGC North Am. 36(5), 330–335 (2018).

(4) M.E. Newton and R.D. McDowall, LCGC North Am. 36(7), 458–462 (2018).

(5) 21 CFR 211 Current Good Manufacturing Practice for Finished Pharmaceutical Products (FDA, Silver Spring, Maryland, 2008).

(6) EudraLex - Volume 4 Good Manufacturing Practice (GMP) Guidelines, Chapter 6 Quality Control. (European Commission, Brussels, Belgium, 2014).

(7) R.D. McDowall, Data Integrity and Data Governance: Practical Implementation in Regulated Laboratories (Royal Society of Chemistry, Cambridge, UK, 2018).

(8) U.S. Food and Drug Administration, Guidance for Industry Data Integrity and Compliance with cGMP (FDA, Silver Spring, Maryland, 2016).

(9) U.S. Food and Drug Administration, Warning Letter: IDT Australia Limited (Warning Letter 320-18-55) (FDA, Silver Spring, Maryland, 2018).

(10) WHO Technical Report Series No. 996, Annex 5, Guidance on Good Data and Records Management Practices (World Health Organization, Geneva, Switzerland, 2016).

(11) C. Burgess and R.D. McDowall, LCGC Europe 29(9), 498–504 (2016).

(12) EudraLex - Volume 4 Good Manufacturing Practice (GMP) Guidelines, Annex 11 Computerised Systems. (European Commission, Brussels, Belgium, 2011).

(13) R.D. McDowall, Spectroscopy 32(11), 24–27 (2017).

(14) R.D. McDowall, Spectroscopy 32(12), 8–12, 28 (2017).

Mark E. Newton is the principal at Heartland QA in Lebanon, Indiana. Direct correspondence to: mark@heartlandQA.com

R.D. McDowall is the director of RD McDowall Limited in the UK. Direct correspondence to: rdmcdowall@btconnect.com
