How Dare You! The Emotive Matter of Questioning Chromatographic Integration…
Incognito gets emotional.
I have just read Bob McDowall’s article on manual integration in the June issue of LCGC Europe with a large amount of déjà vu,1 having very recently been involved in a “situation” with a client involving this very subject.
To be honest, it feels like a long time since I read anything about integration in high performance liquid chromatography (HPLC) and I wonder why this is. Is it because we have largely overcome the issues associated with data integration? Have data systems improved to the point at which we have overcome major barriers? Or have we been so brain-washed by the regulators that we must avoid reintegration (especially manual integration) at all costs, which has led to a reluctance to apply legitimate adjustments to errors created by our data systems?
Whenever I talk to anyone about how they develop fit-for-purpose integration algorithms, ask why they do so much manual manipulation of integration settings (what McDowall calls “interventions”, such as changing settings within the algorithm from run to run or within a batch), or question why baselines have been repositioned or redrawn (“manual reintegration”), it always results in an emotive conversation, as if I’m accusing them of “cheating” merely by starting a conversation that questions their approach to peak area measurement.

The subject of integration is one that I think the majority of analytical chemists remain largely ignorant of, unless they are among those who need to sit in front of a data system every day, trudging through lots of standards, QC checks, sensitivity standards, system suitability runs and, of course, samples to ensure that the integration is “satisfactory” - a check which, by the way, is often only triggered by results that lie outside of specification (and not out of “trend”, which is a whole other column for the future). Why do we check the validity of the integration only under these circumstances? This is just one of many questions that I could ask you to consider at this point - and by “you” I mean everyone who processes chromatography data quantitatively, not just those working in the pharmaceutical industry.
If the answer to any of the questions above is no, then you may need to reconsider the validity of your approach and your documentation, especially where manual integration is being used.
Of course, I can’t really justify the above statement based on any actual regulatory guidelines. The United States Food and Drug Administration (FDA) and pharmacopeial guidelines really don’t give any useful guidance on reintegration and, considering the fuss that is rightly made about manual integration, there really isn’t any regulatory help for chromatographers. So the statement is based on personal experience and on information from the Form 483 observations issued by the FDA.2
I firmly believe in the importance of knowing how the numbers produced by your data system are actually generated, and this obviously includes the ways in which integration algorithms work. So if you are not sure about sampling rate (often called peak width), bunching (smoothing), peak threshold detection (threshold or slope sensitivity), minimum peak area (peak area reject), tangent skimming, exponential skimming, dropping perpendiculars, and all of the many variables associated with the mechanics of integration, then I suggest you learn: the book by Dyson from 1998 is still an excellent place to start.3
If you don’t know how your instrument deals with:
and if you can’t adjust your integration parameters to cope with these situations in a dynamic fashion, then you will fail to produce a robust data analysis method. I’ve yet to see a system or operator that can do a good job of producing an exponentially skimmed baseline via manual intervention - which obviously introduces an inconsistency between that data file and all the others in the batch. Consistency is the key to successful - and legally defensible - data analysis: consistent and appropriate integration is easy to defend, whereas inconsistent and inappropriate (illogical) integration very much is not.
It is true that modern chromatography data systems have very advanced algorithms for data smoothing and peak start and end detection, and several can analyze “training” chromatograms to decide on the correct settings for the smoothing algorithm, peak width, and slope sensitivity. However, bad chromatography is bad chromatography and the world’s most advanced data system will not be able to appropriately and consistently deal with the variations.
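Even the crudest form of smoothing illustrates the trade-off those settings manage. A boxcar (moving-average) “bunch” of the data points suppresses noise, but an over-wide window broadens and shortens genuine peaks until fused pairs merge entirely. The sketch below is an assumption for illustration - real data systems typically use more sophisticated filters such as Savitzky-Golay - and the `window` parameter plays the role of a bunching or peak-width setting:

```python
def boxcar_smooth(y, window):
    """Centered moving average of odd length `window`.

    Wider windows suppress noise but flatten and broaden real peaks,
    which is why a bunching/peak-width setting must match the
    narrowest peak of interest.
    """
    if window % 2 == 0:
        raise ValueError("window must be odd so the average stays centered")
    half = window // 2
    out = []
    for i in range(len(y)):
        lo = max(0, i - half)             # truncate the window at the edges
        hi = min(len(y), i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out
```

Run it over a single-point spike with a three-point window and the spike’s height drops by two thirds - exactly the behaviour that makes an inappropriate smoothing setting a quantitation problem rather than a cosmetic one.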
None of the chromatographic “issues” listed above can be overcome without skill and knowledge in chromatographic theory and practice. There are very few analyses with inappropriate integration, or batches of analyses that require hours of operator intervention, that could not benefit from some more development of the chromatography or indeed a little more thought on the sample preparation aspects. Improved specificity of solid-phase extraction protocols would be a particular favourite of mine to cite here.
Yes, we are all busy; yes, we all produce or know of methods that give convoluted chromatography, where the separation is only barely possible, or where there are so many matrix components that full baseline resolution onto a flat baseline is impossible. In these cases, ask yourself: are you using the correct analytical technique? Should you be using a chromatographic method at all if you require highly accurate and repeatable quantitation? Can you scientifically defend the accuracy and reproducibility of the numbers produced by this method with a clear conscience?
This is where I came in. My “discussion” with our client arose because I questioned whether a chromatographic separation with mass spectrometry (MS) detection and very little sample preparation was suitable for the analysis of a highly complex sample. I made this comment after looking at their in-house generated chromatography and pre-validation numbers and studying the baselines on their data system; as I said, they really weren’t happy. But someone has to ask the question, and if it’s not me, it may well be their regulator.
Could you answer your own regulator when questioned about some of the issues mentioned above? If by some chance your answer involves the excuses of time, throughput, or laboratory management, go and raise these issues with a superior - because none of these factors are scientifically defensible.
Contact author: Incognito