Understanding the Review Process in Ensuring Data Integrity

LCGC North America, December 2020
Volume 38
Issue 12
Pages: 681

A key step in ensuring data integrity is second-person review of data, and data sets must be locked during the review process. Monica Cahilly explains.

The concept of data integrity is straightforward. Ensuring data integrity in practice, however, is more complex, and involves many elements, such as the right corporate culture, good standard operating procedures, and the right laboratory setup. Second-person review of data is also a key step. To understand more, we spoke to Monica Cahilly, the president and founder of Green Mountain Quality Assurance, in a podcast interview. Cahilly has more than 25 years of consulting experience, with a specialized interest in and enthusiasm for data integrity assurance and data governance. A short excerpt of that interview is presented here.

What is the importance of having multiple reviewers to ensure data integrity?


Second-person review is a data-integrity requirement that goes back fundamentally to the inception of the GXP regulations. It takes into account the principle that humans are error prone. Because the work we are doing in GXP-regulated industries is so important and can potentially affect the lives of our customers (patients), it is important to ensure that we have a risk reduction strategy. Having a second person review what we do is an important element of that risk reduction strategy.

Historically, if we were filling out a laboratory worksheet or a laboratory notebook on paper, we would always have a second person review what we did: to check that there were no mistakes, that we had followed procedures or analytical methods correctly, and that we were properly trained. That reviewer would also look for anything that required additional investigation and corrective and preventive actions. Maybe we deviated from our procedures or our analytical methods, or maybe there was something odd in the data set. Perhaps, in some extreme cases, there might even be the potential that someone falsified the data.

Therefore, we need to have a second person reviewing data to look for these mistakes, whether they’re intentional or unintentional. In my experience, the vast majority of data integrity risk actually is not intentional. It isn’t that people come to work intending to do something incorrectly. I believe that most people come to work trying to do the best they can, but we’re all human beings and we all make mistakes. It’s really beneficial for both the individual and the larger entity that we have someone who looks over our work, and helps us catch those mistakes and correct them before they turn into something that could potentially affect one of our patients.

Why is it important to lock the data set of results while it is going through the review and approval process?

Locking data during a data process is one of the principles of good data process design. When you think about good data process design, one of the things I try to ask myself is, “How can I prevent risk, and where in my data process are opportunities to prevent risk?” Because I know that when I leverage prevention and controls that prevent risk, it makes my life a lot easier on the detection side. As an example scenario, imagine that someone is entering data, and, as they enter that data, it’s going from a chromatography data system (CDS) into a laboratory information management system (LIMS). Then the reviewer comes in, and starts looking at the data that are in the LIMS and comparing them back to the data that are in the CDS. Or the reviewer might be looking directly at the data in the CDS and going through all the different ways that the original analyst processed the data. If, as the reviewer is doing the review, they are locking the data behind them, it prevents additional changes from taking place later that would undermine the review. It creates a secure link between the signature and the data set.
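As a rough illustration of this idea, the Python sketch below (hypothetical code, not any particular CDS or LIMS feature; the class and field names are invented for this example) shows how locking a data set at the moment of review sign-off can both block later edits and bind the reviewer's signature to the exact data that were reviewed:

```python
import hashlib
import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResultSet:
    """A set of reportable results moving from the CDS into the LIMS (illustrative only)."""
    results: dict
    locked: bool = False
    review_signature: Optional[dict] = None

    def content_hash(self) -> str:
        # Hash of the exact data set contents at the moment of signing.
        return hashlib.sha256(
            json.dumps(self.results, sort_keys=True).encode()
        ).hexdigest()

    def lock_for_review(self, reviewer: str) -> None:
        # The reviewer "locks the data behind them": the signature records a hash
        # of the data set, creating a secure link between signature and data.
        self.review_signature = {"reviewer": reviewer, "data_hash": self.content_hash()}
        self.locked = True

    def update_result(self, name: str, value: float, user: str) -> None:
        # Ordinary edits are rejected once the set is locked, so no later
        # change can undermine the completed review.
        if self.locked:
            raise PermissionError(f"{user} cannot modify a locked data set")
        self.results[name] = value


# Example: once the reviewer signs and locks, an analyst's edit is refused.
rs = ResultSet(results={"assay_percent": 99.2})
rs.lock_for_review(reviewer="second.person")
# rs.update_result("assay_percent", 98.0, user="analyst")  # would raise PermissionError
```

Because the signature captures a hash of the data set, a later integrity check can recompute the hash and confirm that what was approved is still what is stored.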

Also, once data are locked so that people can no longer modify or delete them, there will be fewer entries in the audit trails. In other words, if, as an ordinary user, I am unable to modify, delete, or continue to create data, then no audit trail entries will be triggered by me after that locking process.

Locking processes are never 100% effective; there is always going to be someone who can, in fact, unlock data. However, locking data will reduce that residual risk to a much lower level, particularly when the locking is done concurrently with the review and approval processes.
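Continuing the same hypothetical sketch from above (again, invented names, not a specific vendor's API), the unlocking privilege can be limited to a small administrator role, and the unlock action itself can be written to the audit trail, so the residual risk that remains after locking is both small and visible:

```python
AUDIT_TRAIL: list = []  # simplified stand-in for a system audit trail


def unlock(result_set, user: str, role: str, reason: str) -> None:
    # Unlocking is never impossible, but it is restricted to a privileged role
    # and is itself recorded, so any change after review remains traceable.
    if role != "administrator":
        raise PermissionError(f"{user} ({role}) may not unlock reviewed data")
    result_set.locked = False
    AUDIT_TRAIL.append({"action": "unlock", "user": user, "reason": reason})
```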

This interview has been lightly edited for style and space.
