The LCGC Blog: Laboratory Accreditation is Not a Cloak of Infallibility


This blog is a collaboration between LCGC and the American Chemical Society Analytical Division Subdivision on Chromatography and Separations Chemistry.

Those working in a technical field are always trying to innovate and increase operational efficiency to produce the most value out of any given project. This requires making data-driven decisions to identify patterns and to forecast the potential implications of specific strategic moves. As with anything, these decisions are only as good as the information that they are predicated upon, and the data are only as good as the people and processes that generate them.

If your decision making relies on analytical chemistry, then you want to be confident that the measurements are an accurate representation of the matrix that is being analyzed, and that they are of “publication” quality. This is a major feather in one’s cap, especially when litigation is involved, because the state and federal court systems regard peer-reviewed data as the gold standard.

But how can you know for sure whether the analytical laboratory that you’ve selected is producing reliable data? Often, we are quick to assume that analytical data are sound and accurate based on an accreditation or certification held by the laboratory in question. These sorts of credentials are meant to indicate reliable data production, because a series of compliance trainings, proficiency tests, and validations is generally required to achieve them. Even so, they are not the broad-scale cloak of infallibility that many might think they are. In other words, just because a given laboratory carries a certain accreditation does not mean that the data it generates can be trusted without a deeper dive into its quality assurance and quality control (QA/QC) protocols.

From our experience in the energy, environmental, agricultural, and forensic sectors, one must not take accreditations at face value; instead, ask a bounty of questions about control and surrogate recovery measurements, calibrations, validations, and instrumental upkeep. Fortunately, the majority of this information is made available in a good laboratory’s QA/QC reports. Below are some suggestions regarding what to look for in QA/QC reports, so that you can secure confidence in the laboratory’s work and the data that it produces.

Control and surrogate measurements: The use of surrogate, “spiked,” or control measurements is a standard practice whereby laboratory operators analyze samples of known concentration to assess the accuracy and precision of their instruments’ response. For many inorganic constituents, such as arsenic, an accuracy window of 85–115% of the known concentration is an acceptable range. For organic constituents, such as certain volatile organic compounds (VOCs), this range can be a bit broader; however, one should evaluate it quite closely in a given laboratory’s QA/QC to ensure consistency and conformity to standards.
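
As a rough illustration (not drawn from any particular laboratory’s QA/QC report), the short Python sketch below computes the percent recovery of a hypothetical arsenic spike and flags it against an 85–115% acceptance window; the concentrations and window limits here are illustrative assumptions.

```python
# Minimal sketch (illustrative values, not from any laboratory's QA/QC report):
# compute the percent recovery of a spiked control sample and flag it against
# a hypothetical 85-115% acceptance window.

def percent_recovery(measured: float, spiked: float) -> float:
    """Percent recovery of a spiked (control) sample."""
    return 100.0 * measured / spiked

def within_window(recovery: float, low: float = 85.0, high: float = 115.0) -> bool:
    """True if the recovery falls inside the acceptance window."""
    return low <= recovery <= high

# Hypothetical arsenic spike: 10.0 ppb added, 9.2 ppb measured -> 92% recovery, acceptable.
rec = percent_recovery(measured=9.2, spiked=10.0)
print(f"Recovery: {rec:.1f}%, acceptable: {within_window(rec)}")
```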

As an example, we recently participated in an environmental assessment of an alleged case of surface-water contamination where the detection of several VOC contaminants was being evaluated within the context of federal drinking-water standards. In this particular case, the concentrations of benzene in several samples collected from the water body of interest were elevated above the Environmental Protection Agency’s 5 parts-per-billion Safe Drinking Water Act standard. However, the QA/QC report of the accredited laboratory that performed the analyses listed accuracy ranges of 50–200% as acceptable. This means that the instruments that performed the VOC analysis could yield data that are artificially low or high by a very significant amount, and that this is supposed to be acceptable for some seemingly arbitrary reason. From our perspective, this was absolutely incomprehensible when we considered the potential legal and financial implications of the reported data in this particularly litigious matter.

Calibrations: Accurate instrumental calibration may be the single most important aspect of reliable data generation in analytical chemistry. Proper calibration allows the operator of a given instrument to interpolate the concentration of a particular constituent based on the instrumental response to samples of known concentrations. This requires that one’s calibration standards and certified reference materials (CRMs) are dependable and have not expired, and that their serial dilution, for the preparation of a multipoint calibration curve, is performed accurately. If performed correctly, a multipoint calibration curve of five to seven points should yield a linear correlation coefficient (r) greater than 0.99. The incorporation of an internal standard is nearly always a best practice.

It’s important to keep in mind that an instrumental measurement is only reliable if the sample response signal falls within the range of the calibration curve. Attempting to quantify the concentration of a given analyte when extrapolating beyond the range of the calibration curve can lead to inaccurate measurements and unreliable data.
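
To make the preceding two points concrete, here is a minimal Python sketch, using made-up calibration points rather than real instrument data, that fits a multipoint calibration curve, reports its correlation coefficient, and declines to quantify any signal that falls outside the calibrated range.

```python
# Minimal sketch (made-up calibration points, not real instrument data): fit a
# five-point calibration curve, report the linear correlation coefficient (r),
# and refuse to quantify signals outside the calibrated range rather than extrapolate.
from typing import Optional

import numpy as np

# Hypothetical calibration standards: known concentrations (ppb) and instrument responses.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
resp = np.array([210.0, 1020.0, 2050.0, 10100.0, 20300.0])

slope, intercept = np.polyfit(conc, resp, 1)   # least-squares linear fit
r = np.corrcoef(conc, resp)[0, 1]              # linear correlation coefficient
print(f"r = {r:.4f}")                          # should exceed 0.99 for a usable curve

def quantify(signal: float) -> Optional[float]:
    """Interpolate a concentration only if the signal lies within the calibrated range."""
    if not (resp.min() <= signal <= resp.max()):
        return None  # out of range: dilute and re-analyze instead of extrapolating
    return (signal - intercept) / slope

print(quantify(5000.0))   # within range: interpolated concentration
print(quantify(45000.0))  # above the highest standard: None (no extrapolation)
```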

We’ve seen this phenomenon plague commercial laboratories in the cannabis/hemp sector, particularly with the analysis of concentrates and oil products. Depending on the state, laboratories operating in this space may not yet have a specific set of analytical recommendations to guide their QA/QC practices. A problem arises when analysts run only three calibration standards to create their calibration curve, which can then represent an artificially high (or low) calibration range. As a result, the analysis of a concentrated sample can present a signal that is beyond the calibration range. The software then does its best to extrapolate the concentration of the constituent of interest, and you end up with an inaccurate measurement.

A particularly egregious example of this was presented on LinkedIn. A cannabis business owner was proud to present their new concentrate product, which had a reported delta-9-tetrahydrocannabinol (THC) concentration of 1,045 mg/g. That’s right, it was 104.5% pure: a reality-defying innovation. We attempted to shine a light on this inaccuracy by inquiring about the QA/QC with the laboratory in question, but we all know how “collegial” discourse goes on the internet. Long story short, it turned out to be a calibration issue that wasn’t caught during data processing and reporting.

Furthermore, it is important to ask your laboratory of interest how frequently they calibrate their instruments. Instruments can fall out of calibration for a number of reasons, and it is important that these changes in performance are accounted for as they arise. Ideally, calibration should be performed on the same day that the real samples are analyzed.
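
One way a laboratory might operationalize such a check (the specifics below, including the ±10% tolerance, are illustrative assumptions rather than a prescribed standard) is to run a mid-level check standard alongside each day’s samples and recalibrate whenever that check drifts out of tolerance.

```python
# Minimal sketch (the +/-10% tolerance is an illustrative assumption, not a
# prescribed standard): decide whether a daily check standard indicates that
# the instrument has drifted and needs recalibration.

def needs_recalibration(measured: float, nominal: float, tolerance_pct: float = 10.0) -> bool:
    """True if the check standard deviates from its nominal value by more than the tolerance."""
    drift = 100.0 * abs(measured - nominal) / nominal
    return drift > tolerance_pct

# Hypothetical 50-ppb check standard measured at 43 ppb -> 14% drift -> recalibrate.
print(needs_recalibration(measured=43.0, nominal=50.0))
```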

Validations: Laboratories performing quantitative and qualitative analyses need to follow certain protocols for method validation. These are generally industry-specific but are similar in the types of measures they prescribe to demonstrate that a method is fit for use and can be expected to provide reliable results. In forensic chemical analysis, we often see this lacking; it appears that much of the standard method validation guidance was provided some years after the establishment of many forensic measurements, and that the industry has been slow to conform. This lack of conformity creates a great deal of uncertainty in measurement results.

As just one example, the Texas Department of Public Safety currently asserts that all of the headspace gas chromatography instruments it operates across the state are free of matrix effects during blood alcohol measurements, because this was evaluated on a couple of instruments in Austin. Many of these instruments are from different manufacturers than those in Austin and have different headspace sampling configurations. That is akin to assuming that your Honda Civic runs well because your friend’s Ford Fiesta is running well. This is not a reliable practice. Every instrument must be fully validated individually, using the prescribed procedures, to prove that it provides reliable results.

Instrumental upkeep: Analytical instruments need routine maintenance and often require major repairs after a significant amount of use. Laboratories should keep regular maintenance logs to document instrumental upkeep. Additionally, major alterations, such as the installation of new columns, require re-validation of the method according to guidelines. There should be documented re-validation of the instrument performance following major maintenance efforts.

Collectively, reviewing these QA/QC-related items can reveal a considerable amount about the veracity of the data being generated by a given laboratory. The technical fidelity of a laboratory’s QA/QC report is far more insightful than a simple recitation of standardized accreditations and certifications that may be held, which, as we’ve mentioned, should be taken with a grain of salt. All of this feeds back into the simple truth that the only thing worse than no data is bad data. Given our evolving world where correlations, projections, and ultimately, data-driven decision making are the standard, it is paramount that the information we rely upon is generated by a trustworthy source that exercises proper QA/QC procedures.

Kevin Schug

Kevin A. Schug is a Full Professor and the Shimadzu Distinguished Professor of Analytical Chemistry in the Department of Chemistry & Biochemistry at The University of Texas (UT) at Arlington. He is also a Partner in Medusa Analytical, LLC. Research in the Schug group at UT Arlington spans fundamental and applied areas of separation science, spectroscopy, and mass spectrometry. Schug was named the LCGC Emerging Leader in Chromatography in 2009 and the 2012 American Chemical Society Division of Analytical Chemistry Young Investigator in Separation Science. He is a fellow of both the U.T. Arlington and U.T. System-Wide Academies of Distinguished Teachers.

Zacariah L. Hildenbrand

Zacariah L. Hildenbrand is a partner of Medusa Analytical. He sits on the scientific advisory board of the Collaborative Laboratories for Environmental Analysis and Remediation (CLEAR), is a director of the Curtis Mathes Corporation (OTC:TLED) and is a research professor at the University of Texas at El Paso. Hildenbrand’s research has produced more than 60 peer-reviewed scientific journal articles and textbook chapters. He is regarded as an expert in point source attribution and has participated in some of the highest profile oil and gas contamination cases across the United States. Hildenbrand has also provided consultation for several private-sector clients on various water-treatment and hydrocarbon-capturing technologies.

This blog is a collaboration between LCGC and the American Chemical Society Analytical Division Subdivision on Chromatography and Separations Chemistry (ACS AD SCSC). The goals of the subdivision include

  • promoting chromatography and separations chemistry
  • organizing and sponsoring symposia on topics of interest to separations chemists
  • developing activities to promote the growth of separations science
  • increasing the professional status of, and the contacts between, separations scientists.

For more information about the subdivision, or to get involved, please visit https://acsanalytical.org/subdivisions/separations/.
