As I write this instalment of "LC Troubleshooting", I have just completed teaching a series of liquid chromatography (LC) method development classes to pharmaceutical scientists in India. As a parting gift, my host gave me a copy of Thomas Friedman's The World is Flat.1 One central theme of this book is that the technology and skills for the science and information technology sectors are available around the world and are no longer the exclusive domain of the US and Western Europe. If I had any doubts before my trip about the technical expertise of the Indian pharmaceutical companies, I certainly have discarded them as a result of many conversations with Indian chromatographers over the last two weeks. I also reconfirmed my strong belief that LC problems have no respect for company or national boundaries. So this "LC Troubleshooting" column discusses some of the LC problems brought up by my Indian colleagues.
One question that often arose related to the determination of peak purity. In forced-degradation samples, impurity assays, assays of drugs in biological fluids, trace environmental analyses and other methods in which small peaks often occur in the presence of large ones, the purity of a given peak can be in question. This, of course, is the challenge of LC method development — to separate the compounds of interest from potentially interfering peaks. The problem is especially challenging in the case of stability-indicating methods, which require the reporting of all peaks ≥0.05% of the area of the active pharmaceutical ingredient (API).
There are detection tools that can help assess peak purity. One that is widely advertised is diode-array UV detection (DAD). The software algorithms for diode-array detectors are designed to measure changes in the UV spectrum across the peak and report them as some kind of peak-purity number. Although I have seen convincing scientific papers and application notes supporting the validity of this technique, I agree with most users that its dependability with real samples is sadly lacking, for several reasons. Spectral ratioing can work when the minor peak is large enough, but at <<1% area ratios the minor peak is hard to distinguish from the major one because of baseline noise, peak tailing and the likely similarity in the spectra of closely eluted compounds of similar molecular structure.
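To make the idea behind spectral ratioing concrete, here is a minimal sketch, not any vendor's algorithm: for a spectrally pure peak, the ratio of absorbances at two wavelengths stays essentially constant across the peak, so a drift in that ratio larger than the baseline noise can explain suggests a coeluted impurity. The wavelengths (254 nm and 280 nm), the threshold and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def ratio_purity_check(a_254, a_280, noise_sd, min_abs=0.005, tol=5.0):
    """Crude spectral-ratio purity screen (illustrative only).

    a_254, a_280 : absorbance traces at two wavelengths across one peak.
    noise_sd     : estimated baseline noise (absorbance units).
    For a spectrally pure peak the 254/280 ratio is constant; a point whose
    ratio deviates from the weighted mean by more than ``tol`` times its
    noise-based uncertainty suggests a coeluted impurity. A "pass" never
    proves purity -- it only fails to disprove it."""
    a_254 = np.asarray(a_254, dtype=float)
    a_280 = np.asarray(a_280, dtype=float)
    mask = (a_254 > min_abs) & (a_280 > min_abs)   # skip baseline-level points
    if mask.sum() < 3:
        return False                               # too little signal to judge
    ratio = a_254[mask] / a_280[mask]
    # first-order propagation of baseline noise into the ratio
    ratio_sd = ratio * noise_sd * np.sqrt(a_254[mask] ** -2 + a_280[mask] ** -2)
    mean_ratio = np.average(ratio, weights=1.0 / ratio_sd ** 2)
    return bool(np.max(np.abs(ratio - mean_ratio) / ratio_sd) > tol)

# Example: a 2% coeluted "impurity" with a different spectrum distorts the
# ratio on the tailing side of the main peak and is flagged.
t = np.linspace(-3, 3, 121)
main = np.exp(-t ** 2 / 2)
impurity = 0.02 * np.exp(-(t - 1.0) ** 2 / 2)
a254 = 0.8 * main + 0.1 * impurity
a280 = 0.4 * main + 0.8 * impurity
print(ratio_purity_check(a254, a280, noise_sd=1e-4))   # True: possibly impure
```

Even in this idealized form, the check illustrates the practical limit noted above: once the coeluted component is small enough that the ratio drift falls inside the noise, the screen passes regardless of whether the peak is truly pure.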
Mass spectrometry (MS) detection can be a powerful tool to help determine peak purity, but it is by no means a magic bullet. Again, background noise and spectral similarity or ion suppression can reduce the quality of information obtained from MS detection.
We must remember that, as much as we would like to think otherwise, it is impossible to prove that a given peak is pure — we can only prove that a peak is not pure. So we should perform enough experiments or measurements to give ourselves sufficient confidence from a scientific standpoint that no impurities are present. This can include trying additional mobile or stationary phases, different detectors or other analytical techniques. Our report of these studies should be sufficiently convincing that our fellow scientists and regulatory auditors come to the same conclusions as we do.
One of the goals in the development of stability-indicating and impurities methods is to show that the method is capable of separating "all" potential impurities, at least from the API. For method development purposes, degradants are generated by forced-degradation studies (also called purposeful degradation). To accomplish this, the API is typically exposed to acid, base, heat, light and oxidative conditions with a target of 10–20% degradation of the API. The common assumption is that <10% degradation will not produce sufficient levels of degradants to be useful, and that >20% will produce secondary degradation products that confuse the separation process. Often the API is stable to one or more of the degradation conditions, but usually a total of 10–20 degradants are generated by the various experiments.
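As a simple illustration of the 10–20% target, a quick check of forced-degradation assay results might look like the following sketch; the stress conditions and assay values are hypothetical.

```python
def percent_degradation(stressed_assay, initial_assay=100.0):
    """Percent of API lost under a stress condition, from assay results
    expressed on the same basis (for example, % of label claim)."""
    return 100.0 * (initial_assay - stressed_assay) / initial_assay

# Hypothetical assay results after each stress condition.
stressed = {"acid": 86.0, "base": 91.5, "heat": 99.2, "light": 83.0, "oxidation": 88.4}
for condition, assay in stressed.items():
    loss = percent_degradation(assay)
    status = "within 10-20% target" if 10.0 <= loss <= 20.0 else "outside target"
    print(f"{condition:9s} {loss:5.1f}% degraded  ({status})")
```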
Separation of the sample components generated by forced degradation can be a challenge, because many of the peaks are small and can be of similar structure. One measure of a successful separation is to account for all of the original API by adding the areas of all separated peaks detected at the wavelength used for the API and comparing the sum with the area equivalent of the original API sample. This is referred to as mass balance; most companies would like to see a mass balance of 95–105%, but lower figures might be encountered (a simple calculation is sketched at the end of this section).

At least two factors work against obtaining 100% mass balance. One is the assumption that all the degradants have the same detector response characteristics as the API. This, of course, is unlikely, especially if the API is measured at UV wavelengths greater than ≈220 nm, because fragmentation of a molecule is likely to change its UV absorbance characteristics. A second compromising factor is that in reversed-phase separations, polar fragments can be eluted at the column dead time t0 and, thus, can be lost in the initial baseline disturbance. The response of the t0 peak is notoriously inconsistent — replicate injections of the same sample can generate peak area reproducibility for retained peaks of ≤0.5% relative standard deviation (RSD), yet the t0 peak might visibly vary from one injection to the next. Retained peaks will have much more consistent areas than unretained ones, which is one reason why the US Pharmacopeia (USP) and other regulatory agencies suggest retention factors (k) of at least 2 for the first peak of interest.

As with the case of peak purity in the previous section, method development experiments should be performed to give you confidence that no unaccounted-for degradants or impurities are hiding in the t0 disturbance. Retention of these polar materials might be aided by a change in pH, the use of ion pairing, or a change to normal-phase or hydrophilic interaction chromatography (HILIC) techniques.
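As a concrete illustration of the mass-balance bookkeeping described above, here is a minimal sketch; the peak names and areas are hypothetical, and the calculation carries the usual (optimistic) assumption that every degradant has the same response factor as the API.

```python
def mass_balance(degraded_areas, unstressed_api_area, exclude=("t0",)):
    """Sum the peak areas in the force-degraded chromatogram (API plus
    degradants, all measured at the API wavelength) and express the total
    as a percentage of the unstressed API area. Assumes equal response
    factors for all peaks -- a key reason real figures fall short of 100%.
    Peaks named in ``exclude`` (for example an unreliable t0 disturbance)
    are left out of the sum."""
    total = sum(area for name, area in degraded_areas.items()
                if name not in exclude)
    return 100.0 * total / unstressed_api_area

# Hypothetical force-degraded sample: API plus three degradants (areas in
# arbitrary units), compared with an unstressed API area of 1000.
areas = {"API": 855.0, "deg-1": 62.0, "deg-2": 31.0, "deg-3": 18.0, "t0": 9.0}
print(f"mass balance = {mass_balance(areas, 1000.0):.1f}%")   # 96.6%
```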
A reluctance to make any changes to a validated or compendial method is common, and often well founded. However, there are times when such changes, at least on a temporary basis, are wise, if not mandatory. As a case in point, one of the course attendees expressed her concern over a USP method designed to test the purity of her raw material. She knew that the method missed one important impurity in the raw material she had purchased, because she had developed another LC method to quantify that impurity. Her frustration centred on her perception that, because the method was an official compendial method, it was the only acceptable method and had to be used. This perception, unfortunately, is widespread, and it is not true. In fact, the introduction to the guidance documents on the FDA's Center for Drug Evaluation and Research (CDER) website states, "An alternative approach can be used if such approach satisfies the requirements of the applicable statute, regulations, or both".2 My interpretation of this statement in the current context is that the alternate method is not only justified but mandatory. If you know that one method (the USP method in the present case) is flawed and you have a better method at your disposal, I think you are more subject to regulatory enforcement actions if you use the compendial method than if you use the alternate method. Of course, the alternate method must be properly validated before routine use.
The reluctance to modify methods that have been validated by your company can also be misplaced, if you know there are problems that compromise the quality of the results. As has been discussed in prior columns (for example, see reference 3), unless you are willing to change a method, at least on an exploratory basis, you might be unable to identify the root cause of method failure. Only when you determine the source of the problem and the modification necessary to correct it will you be able to determine if the method must be modified and revalidated before proceeding.
Many workers are hesitant to adjust the integration of chromatographic peaks after the data system has done its initial peak processing. They base this on a reluctance to modify raw data, fearing violation of the Code of Federal Regulations, 21 CFR Part 11, commonly referred to as the electronic records and electronic signatures rule. This is an over-restrictive interpretation of the regulations. 21 CFR Part 11 is designed to prevent arbitrary alteration of raw data. Reintegration following manual adjustment of baselines is a commonly accepted practice in trace analysis by LC, and even the most sophisticated integration algorithm is no match for the human eye, especially when the peaks are small and sit on a noisy or drifting baseline. All 21 CFR Part 11 requires is that proper controls are in place to provide an audit trail of any changes to the raw data. This is why you want to be using a Part 11-compliant data system. If you use such a system with the audit-trail features turned on, you will be required to identify the change you made and state why it was made, and your time- and date-stamped electronic signature will be recorded. This should satisfy Part 11 requirements as well as our personal goal of generating scientifically sound data.
Several attendees asked questions about how to appropriately integrate peaks that are not fully resolved from each other. The options include perpendicular drop to baseline, baseline-to-baseline, valley-to-valley, tangent skim and so forth. These choices apply to setting up the initial automatic integration parameters as well as to postrun reintegration of peaks, and different methods are appropriate for different cases. In my experience, the built-in integration algorithms do a pretty good job in most instances, so the default parameters are a good place to start; then make fine-tuning adjustments to accommodate the idiosyncrasies of your particular method. A thorough discussion of peak integration is beyond the scope of this month's column, but if you want more information, consult the operator's manual for your data system, the application-note section of the data system manufacturer's website or reference 4.
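To show what one of these choices amounts to, here is a simplified, hypothetical sketch of a perpendicular-drop integration of two partially resolved Gaussian peaks on a flat baseline; a real data system's integrator handles baseline drift, noise and peak detection far more carefully.

```python
import numpy as np
from scipy.signal import find_peaks

def _trapz(y, x):
    """Trapezoid-rule area (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def perpendicular_drop_areas(t, signal):
    """Split two partially resolved peaks by dropping a perpendicular at the
    valley between their apexes, then integrate each side with the trapezoid
    rule. Assumes a flat zero baseline and exactly two detected peaks -- a toy
    version of what a data system's integrator does automatically."""
    apexes, _ = find_peaks(signal, height=0.01 * signal.max())
    if len(apexes) != 2:
        raise ValueError("this sketch expects exactly two peaks")
    valley = apexes[0] + int(np.argmin(signal[apexes[0]:apexes[1] + 1]))
    area_1 = _trapz(signal[:valley + 1], t[:valley + 1])
    area_2 = _trapz(signal[valley:], t[valley:])
    return area_1, area_2

# Hypothetical chromatogram: a large peak at 5.0 min partially overlapped by
# a small peak at 5.5 min (both sigma = 0.1 min), on a flat baseline.
t = np.linspace(4.0, 7.0, 1501)
signal = (1.00 * np.exp(-(t - 5.0) ** 2 / (2 * 0.1 ** 2))
          + 0.05 * np.exp(-(t - 5.5) ** 2 / (2 * 0.1 ** 2)))
main_area, minor_area = perpendicular_drop_areas(t, signal)
print(f"main peak: {main_area:.4f}  minor peak: {minor_area:.4f}  "
      f"minor/main: {100 * minor_area / main_area:.2f}%")
```

Because the perpendicular drop assigns part of the large peak's tail to the small peak, it tends to overstate the minor-peak area; a tangent skim makes the opposite trade-off, which is why the appropriate choice depends on the peak-size ratio and the degree of overlap.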
As the book title says, the world is flat, and this applies to chromatography as much as to other fields. Our problems are the same around the world. But thanks to the extensive application and support libraries on equipment manufacturers' websites and worldwide access to experts and fellow users through web-based discussion groups such as Chromatography Forum,5 we all have equal access to solutions to our problems.
"LC Troubleshooting" editor John W. Dolan is vice-president of LC Reosurces, Walnut Creek, California, USA; and a member of the Editorial Advisory Board of LCGC Asia Pacific. Direct correspondence about this column to "LC Troubleshooting", LCGC Asia Pacific, Advanstar House, Park West, Sealand Road, Chester CH1 4RN, UK.
Readers can also direct questions to the Chromatography Forum at http://www.chromforum.com
1. T. Friedman, The World is Flat (Penguin Books, New York, USA, 2005).
2. www.fda.gov/cder/guidance/index.htm
3. J.W. Dolan, LCGC N. Am., 22(6), 524–528 (2004).
4. N. Dyson, Chromatographic Integration Methods, 2nd ed., RSC Chromatography Monographs (Royal Society of Chemistry, Cambridge, UK, 1998).
5. Chromatography Forum, http://www.chromforum.com