Integration Problems

Oct 01, 2009
Volume 27, Issue 10, pp. 892–899

What About Peak Height?

Before the advent of modern data systems, there was regular discussion about whether peak height or peak area was a more accurate way to quantify a peak. With manually measured peaks, peak area was estimated by triangulating the peak between points on the baseline at the beginning and end of the peak and at its apex. This meant three measurements, plus the assumption that the peak is a triangle, when it more closely resembles a Gaussian distribution. Three measurements meant more error than the two required for the perpendicular height (apex and baseline), so it was argued that height was a better choice. Those arguments are somewhat moot today, because all data systems measure the peak as a series of slices extending from the baseline to each point along the peak. The height is the largest of these slices and the area is the sum of the slices. Most of the time, peak area will give better results because it inherently averages error across the peak, but area is not always the best choice.
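The slice picture described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's algorithm; the signal values, baseline, and sampling interval are all invented for the example.

```python
# Minimal sketch of how height and area come from the same digitized
# signal. All numbers are invented for illustration only.

# Detector readings at a fixed sampling interval (arbitrary units):
# a small peak sitting on a flat baseline of 2.0.
signal = [2.0, 2.1, 2.6, 4.0, 6.5, 8.0, 6.5, 4.0, 2.6, 2.1, 2.0]
baseline = 2.0   # assumed flat baseline under the peak
dt = 0.5         # sampling interval, in seconds

# Each "slice" is the baseline-corrected signal at one sample point.
slices = [s - baseline for s in signal]

peak_height = max(slices)     # the tallest slice
peak_area = sum(slices) * dt  # sum of the slices times the slice width

print(peak_height, peak_area)
```

Note that the area carries the units of the slice width (here, response × seconds), which is why area counts differ between data systems with different sampling rates.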

In Figure 1b, the peak heights of peaks 2 and 3 are shown by the vertical dashed lines. When two or more peaks overlap, as in Figure 1, peak height may be a better choice than peak area. If you were to draw the tail of peak 2 and the front of peak 3 to baseline, you would conclude that there is very little overlap at the centers of the peaks, where the peak height is measured. This would be expected to give less error than peak area, where there is known overlap.

On the other hand, for noisy peaks such as those in Figure 3a, peak height would be expected to have a much greater probability of picking a false peak maximum than peak area, which averages out the noise across the peak. So peak area would be the preferred technique here. With the smoothed signal, the two techniques would not be expected to differ as much.
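The effect of smoothing on a height measurement can be demonstrated with a toy example. The sketch below uses a unit-height Gaussian peak plus a deterministic alternating pattern standing in for baseline noise, and a simple boxcar (moving-average) smooth; the shapes, noise level, and window width are all assumptions made for illustration.

```python
import math

# Hypothetical noisy peak: a unit-height Gaussian plus a deterministic
# alternating +/-0.1 "noise" pattern standing in for baseline noise.
n = 101
true_signal = [math.exp(-((i - 50) / 10.0) ** 2) for i in range(n)]
noisy = [s + (0.1 if i % 2 == 0 else -0.1)
         for i, s in enumerate(true_signal)]

def boxcar_smooth(y, window=5):
    """Simple moving-average smooth; window should be odd."""
    half = window // 2
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

smoothed = boxcar_smooth(noisy, window=5)

# The raw maximum lands on a noise spike (1.0 + 0.1 = 1.1), while the
# smoothed maximum is much closer to the true height of 1.0.
print(max(noisy), max(smoothed))
```

The raw height overestimates the true peak by the full noise amplitude, while the smoothed height is off by only a fraction of it, which mirrors the argument above: height measured on a noisy trace is prone to picking a false maximum, and smoothing (like area summation) averages the noise down.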

The simplest way to determine the best integration technique is to run a set of known samples, such as during or before validation, and collect data for both peak height and peak area. Calculate the results using both techniques and use the one that gives the more accurate and precise results.
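Once replicate results for the known samples are in hand, the comparison is a short calculation. The numbers below are invented recoveries for one known sample, and `rsd_percent` is a hypothetical helper, not a function from any data system; bias against the known value measures accuracy and percent RSD measures precision.

```python
# Hypothetical replicate recoveries (% of nominal) for one known
# sample, quantified both ways. All numbers are invented.
area_results   = [99.8, 100.4, 99.5, 100.6, 99.9, 100.1]
height_results = [98.9, 101.7, 99.4, 100.9, 98.5, 101.2]

def mean(xs):
    return sum(xs) / len(xs)

def rsd_percent(xs):
    """Sample relative standard deviation, in percent (precision)."""
    m = mean(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var ** 0.5 / m * 100.0

for name, xs in (("area", area_results), ("height", height_results)):
    bias = mean(xs) - 100.0  # accuracy: deviation from the known value
    print(f"{name:6s} bias = {bias:+.2f}%  RSD = {rsd_percent(xs):.2f}%")
```

In this made-up data set, area wins on precision; with real samples, either technique might come out ahead, which is exactly why the comparison is worth running.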

Is Hand-Integration Legitimate?

Another common question that I get regarding integration is the complaint that goes something like, "My boss won't let me adjust integration baselines in the chromatogram. He insists that the data system must be set to integrate all peaks properly and that the regulatory agencies will not allow manual baseline adjustments." This may be a fine argument for well-behaved methods with well-resolved peaks and negligible baseline noise, for example, content uniformity or potency tests for drug substance or drug product. However, whenever trace analysis is involved, such as impurities methods, drugs in plasma, or pesticides in the environment, peak integration can be much more challenging. As the peak gets smaller, the signal-to-noise ratio (S/N) gets smaller and it is harder for the integrator to find the true baseline. This is shown for the large peak of Figure 2c, and it is easy to imagine that the difficulty increases as the peak size decreases and baseline noise and drift increase. In the laboratory I was most recently involved with, we had a process we called "peer review," where after the analyst had finished integrating the chromatograms, they were reviewed by someone else before going to the quality group, at which point the data were "locked down" and integration changes required additional documentation. In this system, I reviewed thousands of chromatograms from hundreds of batches of samples, many of which were at or near the lower limit of quantitation (LLOQ). Of these, my guess is that fewer than a dozen did not require some manual integration. While the data systems today do a good job of integration, they are not perfect.

To protect against frivolous reintegration, or the adjustment of baselines until the results match a desired outcome, the FDA has a regulation commonly referred to as "21 CFR Part 11," "Part 11," or simply the "Electronic Records; Electronic Signatures" rule (1). This regulation states that if manual integration is performed, four criteria must be met: the original "raw" data must be preserved for later inspection, the person who made the change must be identified, the time and date of the change must be noted, and the reason for the change must be recorded. Nearly every data system available today has an "audit trail" feature that complies with these regulations. A copy of the original data is archived. Because the user must sign on to the computer to use it, the username, date, and time are automatically stamped on each event. The software prompts for an explanation for each manual integration event. For example, the correction of the baseline in Figure 2c might be noted as "wrong baseline end." So the bottom line here is that the regulatory agencies anticipate that manual integration will be required and have set down rules to follow in such cases. If these rules are followed, you should not worry about negative regulatory action when you manually integrate a chromatogram to correct an error by the data system.
