The lack of proper method validation has important implications in blood alcohol concentration (BAC) determinations. We examine this issue, addressing the importance of gas chromatography (GC) for BAC determination, why certain validation procedures are important, and why accreditation bodies need to step up their game.
Over the past several years, we have consulted in a variety of litigation surrounding various patent, environmental, and forensics cases. What we have learned is that laboratory accreditation is not a cloak of infallibility when it comes to assessing the veracity of different chemical measurements (1). With that said, there is nowhere we have encountered more deficiencies than in blood alcohol concentration (BAC) determinations performed by forensics laboratories.
These forensics laboratories are accredited for their capability to meet International Organization for Standardization (ISO) standard ISO/IEC 17025 for testing and calibration procedures. ISO/IEC 17025 is an international standard that specifies general requirements for the competence of testing and calibration laboratories. It is a standard that is meant to show, when followed, that the laboratory produces reliable results.
ISO/IEC 17025 is quite vague when it comes to the essentials associated with creating, testing, and controlling the quality of a method designed for a specific purpose, such as BAC determination. As a result, the American National Standards Institute (ANSI) and the American Academy of Forensic Sciences (AAFS) Standards Board (ASB) created a series of more specific guidance documents, one of which is ANSI/ASB Standard 036: Standard Practices for Method Validation in Forensic Toxicology (2). The document clearly states that, “This standard was developed to provide the minimum requirements for validating analytical methods in forensic toxicology laboratories” (emphasis added). Although the first edition of this standard was released in 2019, it was built from similar guidance that had been established in the field previously (3).
For those who have validated bioanalytical methods in the past, the procedures and tests prescribed by Standard 036 for method validation will look familiar. It describes establishing a validation plan depending on the scope of the method. It then describes specifically how to assess aspects of the method, such as bias and precision, calibration, carryover, interference studies, ionization suppression and enhancement (especially for methods based on liquid chromatography–mass spectrometry [LC–MS]), and limits of detection and quantitation. The guidance document also indicates when methods should be revalidated—when major changes are made to the method (for example, when a new analyte is added) or to the sample processing procedures, following major maintenance events, and when there is a software change. Most notably, it says that the method should be revalidated if the current method validation used by the laboratory does not include the validation experiments described in this document.
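The bias and precision experiments in such a validation plan reduce to simple statistics on replicate measurements of fortified controls. The following sketch (with hypothetical replicate values; the acceptance limits in any given laboratory's plan may differ) shows how those two figures of merit are typically computed:

```python
from statistics import mean, stdev

def bias_percent(measured, nominal):
    """Bias: percent deviation of the mean measured value from the nominal value."""
    return 100.0 * (mean(measured) - nominal) / nominal

def cv_percent(measured):
    """Precision: percent coefficient of variation (%CV) of the replicates."""
    return 100.0 * stdev(measured) / mean(measured)

# Hypothetical replicate results (g/dL) for a control fortified at 0.100 g/dL
replicates = [0.098, 0.101, 0.099, 0.102, 0.100]
print(f"bias = {bias_percent(replicates, 0.100):+.2f}%")
print(f"CV   = {cv_percent(replicates):.2f}%")
```

Standard 036 calls for these experiments at multiple concentrations across the working range, so in practice this calculation would be repeated for low, mid, and high fortified pools.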
We have rarely encountered a BAC method that has been properly validated according to Standard 036 guidelines, or even according to its predecessor guidelines. When a method is not properly validated, the results generated by the method should not be considered reliable. The various method validation steps, and ensuing quality control during routine operation, are meant to provide proof that the method is not subject to a wide variety of issues that would increase the uncertainty of the results. When these deficiencies are never assessed, the potential errors they introduce propagate through the analysis and culminate in a highly uncertain result. The accuracy of such a result cannot be trusted.
Most laboratories use aqueous calibration standards and controls during routine BAC analysis. To be clear, the calibration standards and most quality control standards are prepared in water, whereas the actual case samples involve a whole blood matrix. It is only valid to calibrate an instrument with aqueous standards if the laboratory has shown that accurate results can be achieved for whole blood samples when using the surrogate calibration matrix. Demonstrating this requires a series of procedures to assess accuracy, interference, and carryover by analyzing fortified (spiked) and blank whole blood matrices from a variety of different sources. Without these measurements, it is impossible to assess whether the mismatch between calibrator matrix and sample matrix causes issues with accuracy.
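The core of that demonstration is straightforward: fit a calibration line to the aqueous standards, then quantify fortified whole blood controls of known concentration against it and check the recovery. A minimal sketch, using entirely hypothetical response data:

```python
# Minimal sketch (hypothetical data): fit a calibration line to aqueous
# standards, then check recovery for a fortified whole blood control.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Aqueous calibrators: nominal BAC (g/dL) vs. detector response ratio
nominal = [0.02, 0.05, 0.10, 0.20, 0.30]
response = [0.041, 0.099, 0.203, 0.398, 0.601]
slope, intercept = fit_line(nominal, response)

# Whole blood control fortified at a known 0.100 g/dL, measured in replicate
for r in [0.195, 0.199, 0.197]:
    conc = (r - intercept) / slope
    recovery = 100.0 * conc / 0.100
    print(f"measured {conc:.3f} g/dL, recovery {recovery:.0f}%")
```

If recoveries for blood from multiple donor sources fall within the laboratory's acceptance criteria, the aqueous calibration can be defended; if not, the matrix mismatch is biasing the result.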
“Matrix effects” is a catch-all term for interferences caused by other constituents in the sample. Matrix effects can arise at many points: during headspace extraction, during injection, during separation, and during detection. It is not always easy to determine the cause without a good troubleshooting strategy.
Matrix effects can be manifested as extra signals in the chromatogram. Because flame ionization detection (FID) does not discriminate among the compounds it detects, some of these extra peaks could occur at the same time as the ethanol or internal standard peaks. You would never know how susceptible your method is to these interferences without running an interference study, in which blank blood matrices from a variety of sources are analyzed to confirm that there are no aberrant signals at the retention times of interest.
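In practice, such an interference study amounts to checking every peak detected in a blank-matrix chromatogram against a tolerance window around each target retention time. A small sketch, with assumed retention times and window width (both are hypothetical; real values come from the validated method):

```python
# Sketch of an interference check. The retention times and tolerance window
# below are assumed for illustration, not taken from any real method.
TARGETS = {"ethanol": 1.52, "n-propanol (IS)": 2.31}  # minutes
TOLERANCE = 0.05  # minutes

def interferences(blank_peaks):
    """Return (target, peak_rt) pairs where a blank-matrix peak overlaps a target."""
    hits = []
    for name, rt in TARGETS.items():
        for peak_rt in blank_peaks:
            if abs(peak_rt - rt) <= TOLERANCE:
                hits.append((name, peak_rt))
    return hits

# Peaks detected in a blank whole blood sample from one donor source
print(interferences([0.88, 1.54, 3.10]))  # flags the peak at 1.54 min
```

A clean study requires an empty hit list for blank matrices from every donor source tested; a single flagged peak means the method cannot distinguish ethanol from that interferent.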
Blank and fortified blood samples should also be included during batch analyses of case samples because instrument performance can change over time as instrumental components become dirty and worn. Without including these controls in the batch, it is impossible to tell whether instrumental performance deficiencies that cause matrix effects have developed since the last performed validation.
Proficiency tests alone are not sufficient to prove that a method is valid. Proficiency tests involve measuring BAC in previously analyzed case samples, often provided by an outside source and measured earlier on a (hopefully) validated instrument and method, to see whether the same results are obtained on the new instrument and method. One would like to assume that the previously determined values for these samples are accurate (and that they have not changed over time), but the reference results for proficiency samples have often also been measured on an improperly validated instrument. Although proficiency testing can be a step in method validation, it should not be considered enough to supplant measurement of blank and fortified whole blood samples as prescribed in Standard 036. In a fortified whole blood sample, one knows exactly how much ethanol is present. No such control exists with proficiency test samples.
When forensics laboratories report results for BAC, they include an uncertainty value that has been somehow assessed previously. This uncertainty value can represent the uncertainty of the measurement if all of the steps of the method were performed perfectly. However, every misstep will increase this uncertainty. Contamination during blood draws, improperly calibrated pipettes, coelutions caused by matrix effects, and any other inconsistencies that may be present in a measured case sample increase the uncertainty of the measurement. As the uncertainty increases, the accuracy of the measurement becomes more doubtful. Without proper validation and quality control, it is impossible to establish the presence or absence of these deleterious effects.
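A reported uncertainty is typically built from a budget of individual components combined in quadrature, in the style of GUM-type uncertainty budgets. The sketch below uses entirely hypothetical component values; the point is that any component left out of the budget (an unassessed matrix effect, a miscalibrated pipette) makes the reported interval smaller than the true one:

```python
from math import sqrt

# Hypothetical relative standard uncertainties (as fractions) for a BAC method
components = {
    "calibration": 0.010,
    "pipetting": 0.008,
    "repeatability": 0.012,
    "reference_material": 0.005,
}

# Combined standard uncertainty: root sum of squares of independent components
u_c = sqrt(sum(u ** 2 for u in components.values()))
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95% level)

bac = 0.085  # reported BAC in g/dL
print(f"combined relative uncertainty: {100 * u_c:.1f}%")
print(f"reported: {bac:.3f} \u00b1 {bac * U:.3f} g/dL (k = 2)")
```

Because the components add in quadrature, omitting even one meaningful source of error understates the expanded uncertainty, which is exactly the problem with reporting an uncertainty derived from an incomplete, unvalidated budget.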
Completing the steps required for proper method validation per Standard 036, and including proper quality control standards during batch analysis, are not onerous tasks. Within a week, a laboratory could remove an instrument from service and properly validate it. Many laboratories that we have encountered have been running BAC determinations for many years on instruments that have never been properly validated. They also fail to revalidate the method after major maintenance events. This deficiency casts doubt on the veracity of the reported measurements.
In our opinion, it is well past time for governing officials to address this gross deficiency in forensics testing. Accreditation bodies do not appear to closely monitor the methodology used for BAC determination in a forensics laboratory, which devalues accreditation. More importantly, the results of these measurements can massively compromise the livelihood of an individual under investigation. Why is more effort and attention not given to ensuring that a reliable measurement is performed? In the meantime, clients are forced to pay large sums of money to hire experts who can refute the measurement on the grounds described above. Alternatively, those who cannot afford this service may be unable to contest inaccurate BAC measurements, which can have a profound impact on their sentencing, and ultimately, their civil liberties. The negative implications of unjust prosecution and expensive litigation could be avoided if forensic laboratories brought their practices up to current industry standards.
(1) K.A. Schug and Z.L. Hildenbrand, Laboratory Accreditation is Not a Cloak of Infallibility. The LCGC Blog. https://www.chromatographyonline.com/view/the-lcgc-blog-laboratory-accreditation-is-not-a-cloak-of-infallibility (Accessed January 6, 2022).
(2) AAFS Standards Board, ANSI/ASB Standard 036, First Edition 2019. Standard Practices for Method Validation in Forensic Toxicology. https://www.aafs.org/sites/default/files/media/documents/036_Std_e1.pdf (Accessed May 18, 2022).
(3) Scientific Working Group for Forensic Toxicology (SWGTOX), J. Anal. Toxicol. 37(7), 452–474 (2013). https://doi.org/10.1093/jat/bkt054