Performance Characteristics Measured in Method Validation

LCGC North America

June 1, 2012
Volume 30
Issue 6
Pages: 526

Analytical method validation is the process of demonstrating that a method does what it is intended to do.

Analytical method validation (AMV) is sometimes referred to as "the process of providing documented evidence that a method does what it is intended to do." Laboratories in the pharmaceutical and other regulated industries must perform AMV to comply with regulations. Published guidelines (1,2) help laboratories understand expectations. Conducting AMV is also good science.

In AMV, several performance characteristics may be investigated, depending on the type of method and its intended use. These are summarized below.

Specificity is the ability to measure the analyte of interest accurately and specifically in the presence of other components. In drug assays, specificity accounts for the degree of interference from other active ingredients, excipients, impurities, degradation products, or matrices, and ensures that a chromatographic peak corresponds to a single component. Specificity can be demonstrated by the resolution between the peaks of interest. Modern chromatographic methods typically include a peak-purity test based on photodiode-array detection or mass spectrometry.
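
As an illustration, the following Python sketch computes resolution from retention times and baseline peak widths using the common relationship Rs = 2(tR2 - tR1)/(w1 + w2); the numerical values are assumed purely for the example and are not taken from any particular method.

# Sketch: chromatographic resolution between two adjacent peaks,
# using the baseline-width formula Rs = 2 * (tR2 - tR1) / (w1 + w2).
# Retention times and peak widths below are illustrative values.

def resolution(t_r1: float, t_r2: float, w1: float, w2: float) -> float:
    """Resolution from retention times (min) and baseline peak widths (min)."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

print(resolution(t_r1=5.20, t_r2=5.85, w1=0.30, w2=0.32))  # ~2.1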

Accuracy is the closeness of test results to the true value. For drug substances, accuracy is measured by comparing test results to the analysis of a standard reference material or to results from a second, well-characterized method. For drug products, accuracy is evaluated by analyzing synthetic mixtures (containing all excipient materials in the correct proportions) spiked with known quantities of analyte. Guidelines recommend that data be collected from a minimum of nine determinations over at least three concentration levels covering the specified range. The data should be reported as the percent recovery of the known, added amount, or as the difference between the mean and the true value with confidence intervals (such as ±1 SD). Acceptance criteria are defined by the end user but rarely fall outside 97–103% of the nominal value. Statistical significance can be assessed with a one-sample t-test.
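
As a rough sketch, the recovery calculation and one-sample t-test described above might be carried out as follows in Python (using SciPy); the spiked and measured amounts are invented for illustration and represent nine determinations over three concentration levels.

# Sketch: percent recovery for spiked samples and a one-sample t-test against 100%.
# The spiked and measured amounts are illustrative values only.
from statistics import mean, stdev
from scipy import stats

spiked   = [80.0, 80.0, 80.0, 100.0, 100.0, 100.0, 120.0, 120.0, 120.0]   # amount added (µg)
measured = [79.2, 80.5, 79.8,  99.1, 100.8, 100.3, 118.9, 121.2, 119.5]   # amount found (µg)

# Percent recovery of the known, added amount for each determination
recoveries = [100.0 * m / s for m, s in zip(measured, spiked)]
print(f"mean recovery = {mean(recoveries):.1f}% (SD = {stdev(recoveries):.2f}%)")

# One-sample t-test: does the mean recovery differ significantly from 100%?
t_stat, p_value = stats.ttest_1samp(recoveries, popmean=100.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")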

Precision measures the degree of agreement among test results when the method is applied repeatedly to multiple samplings of a homogeneous sample. Precision is commonly described in terms of repeatability, intermediate precision, and reproducibility. Repeatability is investigated by analyzing a minimum of nine determinations covering the specified range of the procedure, or a minimum of six determinations at 100% of the test concentration, using the same equipment and sample; results are reported as the percent relative standard deviation (%RSD). Intermediate precision refers to the agreement among results from a single laboratory despite potential variations in sample preparation, analysts, or equipment. Reproducibility refers to the agreement among results from different laboratories. Results are reported as %RSD, and the percent difference in mean values between analysts must be within specifications. Less than 2% RSD is often recommended, but less than 5% RSD can be acceptable for minor components.
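
A minimal Python sketch of the repeatability calculation, using six assumed replicate peak areas at the 100% level:

# Sketch: repeatability as percent relative standard deviation (%RSD)
# for six replicate injections at 100% of the test concentration.
# The peak areas are illustrative values only.
from statistics import mean, stdev

areas = [15234, 15310, 15188, 15275, 15262, 15301]  # replicate peak areas

rsd = 100.0 * stdev(areas) / mean(areas)  # sample SD divided by the mean, in percent
print(f"%RSD = {rsd:.2f}")  # <2% RSD is often recommended for main components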

The limit of detection (LOD) is the lowest concentration of an analyte in a sample that can be detected. The limit of quantitation (LOQ) is the lowest concentration of an analyte in a sample that can be quantified with acceptable precision and accuracy under the stated operational conditions of the method. In a chromatography laboratory, the most common way to determine both the LOD and the LOQ is to use signal-to-noise ratios (S/N), commonly 3:1 for the LOD and 10:1 for the LOQ. An appropriate number of samples must be analyzed at the limit to fully validate the method's performance there.
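
One practical approach, sketched below in Python, scales the concentration of a low-level standard by its measured S/N to estimate the concentrations corresponding to S/N of 3 and 10; this assumes an approximately linear response near the limit, and the standard concentration and S/N value shown are illustrative.

# Sketch: estimating LOD and LOQ from a low-level injection using signal-to-noise,
# assuming the response is roughly linear near the limit. Values are illustrative.

def limits_from_sn(conc: float, s_n: float) -> tuple[float, float]:
    """Scale an injected concentration to the levels giving S/N = 3 and S/N = 10."""
    lod = conc * 3.0 / s_n
    loq = conc * 10.0 / s_n
    return lod, loq

# e.g. a 0.05 µg/mL standard measured at S/N = 25
lod, loq = limits_from_sn(conc=0.05, s_n=25.0)
print(f"LOD ≈ {lod:.4f} µg/mL, LOQ ≈ {loq:.4f} µg/mL")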

Linearity is the ability of a method to provide results that are directly proportional to analyte concentration within a given range. Range is the interval between the upper and lower concentrations of an analyte that have been demonstrated to be determined with acceptable precision, accuracy, and linearity using the method. The range is normally expressed in the same units as the test results obtained by the method (for example, nanograms per milliliter). Guidelines specify that a minimum of five concentration levels be used to determine the range and linearity, along with certain minimum specified ranges depending on the type of method. Data to be reported generally include the equation of the calibration line, the coefficient of determination (r²), the residuals, and the calibration curve itself.
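
As a sketch, the regression statistics described above can be obtained with SciPy's linregress; the five concentration levels and responses below are assumed values for illustration.

# Sketch: linearity over five concentration levels by ordinary least squares.
# The concentrations (ng/mL) and peak areas are illustrative values only.
from scipy import stats

conc     = [50.0, 75.0, 100.0, 125.0, 150.0]          # ng/mL, five levels
response = [1010.0, 1523.0, 2018.0, 2534.0, 3029.0]   # peak areas

fit = stats.linregress(conc, response)
r_squared = fit.rvalue ** 2
residuals = [y - (fit.slope * x + fit.intercept) for x, y in zip(conc, response)]

print(f"y = {fit.slope:.3f}x + {fit.intercept:.1f}, r² = {r_squared:.4f}")
print("residuals:", [round(r, 1) for r in residuals])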

Robustness is a measure of a method's capacity to obtain comparable and acceptable results when perturbed by small but deliberate variations in procedural parameters; it provides an indication of the method's suitability and reliability during normal use. During a robustness study, method parameters (such as eluent composition, gradient, and detector settings) are intentionally varied to study the effects on analytical results. Common chromatography parameters used to measure and document robustness include the resolution of the critical peak pair (Rs), plate number (N) or peak width in gradient elution, retention time (tR), tailing factor (TF), peak area (or height), and concentration.
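
A simple way to tabulate such a study is sketched below in Python; the varied parameters, system-suitability responses, and acceptance limits are hypothetical and would be set by the requirements of the specific method.

# Sketch: tabulating a robustness study in which one parameter is varied at a time
# and key responses are checked against acceptance limits. All values are illustrative.

runs = [
    # (varied parameter, resolution Rs, tailing factor TF, retention time tR in min)
    ("nominal",           2.4, 1.1, 6.52),
    ("organic +2%",       2.2, 1.1, 6.18),
    ("organic -2%",       2.6, 1.2, 6.90),
    ("column temp +5 °C", 2.3, 1.1, 6.35),
    ("column temp -5 °C", 2.5, 1.2, 6.71),
]

LIMITS = {"Rs_min": 2.0, "TF_max": 1.5}  # hypothetical acceptance limits

for name, rs, tf, tr in runs:
    ok = rs >= LIMITS["Rs_min"] and tf <= LIMITS["TF_max"]
    print(f"{name:18s} Rs={rs:.1f} TF={tf:.1f} tR={tr:.2f}  {'pass' if ok else 'fail'}")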

References

(1) US Food and Drug Administration, Guidance for Industry: Analytical Procedures and Method Validation, Fed. Reg. 65(169), 52776–52777 (30 August 2000).

(2) International Conference on Harmonization, Harmonized Tripartite Guideline: Validation of Analytical Procedures: Text and Methodology, Q2(R1) (November 2005).
