The LCGC Blog: Generic Methods – The Potluck Supper of Analytical Chemistry?


As we enter the Generalization phase of the industrialization of Analytical Science, we find ourselves striving for the generic in as many areas as possible.

Pareto is omnipotent; his law guides our efforts in method development for HPLC, GC, and sample preparation: the drive for 80% of all analyses to be covered by a single method, so that we can devote our “thinking time” to the 20% of “difficult” separations that don’t work with our generic protocols. In theory, this should mean we can spend 20% of our time doing high-throughput analysis and 80% of our time solving the issues with our more difficult separations.

It’s expected, it’s how industrialization works, it’s going to be OK, isn’t it?

But read on, and I’ll try to point out why the generic method may be taking us down a road that few of us want to travel.

I read recently of efforts (1) to produce a generic HPLC method for the pharmaceutical analysis of new chemical entities (NCEs) using a 0.05% (v/v) formic acid (ca. pH 3.0) and acetonitrile eluent, a C18 core-shell stationary phase (dp 2.7 μm, 50 mm × 2.1 mm), and a relatively steep gradient (30–100% B in 1 min) on a UHPLC system. The method, evaluated with a number of NCE test probes, produced a good separation with a peak capacity (Pc) of around 100.
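As a quick sanity check on what that headline number implies, peak capacity in gradient elution is commonly estimated as Pc ≈ 1 + tG/w, where tG is the gradient time and w the average (4σ) peak width. The sketch below back-calculates the average peak width implied by Pc ≈ 100 over a 1 min gradient; the numbers are my own illustration, not figures from the article.

```python
# Peak capacity estimate for gradient elution: Pc ≈ 1 + tG / w,
# with tG the gradient time and w the average (4-sigma) base peak width.

def peak_capacity(gradient_time_s: float, avg_peak_width_s: float) -> float:
    """Estimated peak capacity for a gradient of length tG with average width w."""
    return 1 + gradient_time_s / avg_peak_width_s

def implied_peak_width(gradient_time_s: float, pc: float) -> float:
    """Average peak width implied by a stated peak capacity (rearranged formula)."""
    return gradient_time_s / (pc - 1)

# Pc ≈ 100 over a 60 s gradient implies sub-second peaks:
print(round(implied_peak_width(60, 100), 2))  # → 0.61 (seconds)
```

Sub-second peak widths like this are why such methods demand low extra-column volume and fast detector sampling rates.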

The article is by a respected author and published on a highly respected website and it’s a good read. But that’s not where the story ends.

We need to consider various “pitfalls” of this approach and how they might be overcome, which are also covered in the article, but I’ll précis them here to illustrate my own point.

What if, in final use, the laboratory has not yet moved to UHPLC and is still using an HPLC system with limited pressure capability? Then we may need to move to a different column and flow rate combination; in this case the selection was a 50 mm × 3.0 mm column packed with 2.7 μm core-shell particles of the same stationary phase (naturally). Problem solved, and using one of the ubiquitous “method translators” available, we needn’t even get out the calculator: the software will work out the new flow rate, and perhaps the injection volume, that we require.
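The scaling rules such translators apply are straightforward. A minimal sketch, assuming the usual geometric relationships (flow scales with cross-sectional area, corrected by the particle-size ratio, to preserve linear velocity; injection volume scales with column volume); the starting flow rate and injection volume below are my own illustrative values, not taken from the article.

```python
# Sketch of the geometric scaling a gradient "method translator" applies
# when moving a method from column 1 to column 2.

def translate_method(flow_mL_min, inj_vol_uL,
                     d1_mm, L1_mm, dp1_um,
                     d2_mm, L2_mm, dp2_um):
    """Scale flow rate and injection volume between column geometries.

    Flow scales with cross-sectional area (d^2), times dp1/dp2 to stay at
    the same reduced velocity; injection volume scales with column volume
    (d^2 * L). Here both columns use 2.7 um particles, so the dp ratio is 1.
    """
    area_ratio = (d2_mm / d1_mm) ** 2
    new_flow = flow_mL_min * area_ratio * (dp1_um / dp2_um)
    new_inj = inj_vol_uL * area_ratio * (L2_mm / L1_mm)
    return new_flow, new_inj

# 50 x 2.1 mm at an assumed 0.6 mL/min and 1 uL injection,
# translated to the 50 x 3.0 mm column with the same 2.7 um particles:
flow, inj = translate_method(0.6, 1.0, 2.1, 50, 2.7, 3.0, 50, 2.7)
print(round(flow, 2), round(inj, 2))  # → 1.22 2.04
```

Gradient segment times would also be rescaled so that each segment delivers the same number of column volumes, which is why letting the software do the bookkeeping is so tempting.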

Of course, if the dwell volumes of the two systems are different, we may need to compensate for this; a quick measurement of the dwell volume of each system, entered into the translation software, will compensate and avoid any changes in retention or selectivity. If we don’t know the dwell volumes, or don’t know how to measure them, well, it’s a generic method, and we can probably change the gradient a little to make sure the separation is satisfactory. As the method has a very short run time, this shouldn’t take too long, and starting the gradient before we make the injection should compensate for the larger dwell volume of the HPLC versus the UHPLC system; most data systems allow this these days.
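The compensation itself is simple arithmetic: the extra dwell volume, divided by the flow rate, gives the time by which the gradient should be started ahead of the injection. A sketch, using dwell volumes I have assumed for illustration (they are not stated in the article):

```python
# Dwell-volume compensation when moving a gradient method from a
# low-dwell UHPLC to a higher-dwell HPLC system.

def gradient_delay_s(dwell_uhplc_mL, dwell_hplc_mL, flow_mL_min):
    """Seconds by which to start the gradient *before* injection on the
    HPLC system, so analytes experience the gradient at the same point
    as on the UHPLC system the method was developed on."""
    extra_dwell_mL = dwell_hplc_mL - dwell_uhplc_mL
    return extra_dwell_mL / flow_mL_min * 60.0

# Assumed dwell volumes: 0.4 mL (UHPLC) vs 1.0 mL (HPLC), at 1.22 mL/min:
print(round(gradient_delay_s(0.4, 1.0, 1.22), 1))  # → 29.5 (seconds)
```

Half a minute of unintended isocratic hold at the start of a one-minute gradient would visibly shift early-eluting peaks, which is why this correction matters so much for fast generic methods.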

It’s a nice approach. At pH 3 we can assume that we are far enough away from analyte pKa values to avoid retention time or resolution changes due to small changes in pH, and therefore we should have a reasonably robust method. Acetonitrile generally produces high peak capacities, has a low UV cut-off, and its lower viscosity helps keep back-pressures low. The formic acid at pH 3 will hopefully produce good analyte ionization efficiency in electrospray mass spectrometer sources, if MS detection is preferred.

If we need to perform stability-indicating analysis, in which we are separating structural analogues of the NCEs, we may be able to move to a longer, segmented gradient: a shallow initial stage to help with retention of hydrophilic analytes, a middle stage with a slope that allows the separation of analogues which may have similar logP (logD) values, and then a ballistic gradient to elute any highly retained matrix components. These gradients can often be kept below 10 minutes, and computer modelling can be used to help predict the optimum gradient conditions.
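To make the shape of such a gradient concrete, here is an illustrative segmented gradient table of the kind described. All times and %B values are hypothetical, chosen only to show the three segments; they are not conditions from the article.

```python
# Hypothetical segmented gradient for a stability-indicating method:
# shallow start, resolving mid-slope, ballistic flush, re-equilibration.
segmented_gradient = [
    (0.0,  5),   # start low %B: retain hydrophilic analytes
    (2.0, 20),   # shallow initial slope
    (7.0, 45),   # middle segment: resolve analogues with similar logP/logD
    (8.0, 95),   # ballistic ramp: elute highly retained matrix components
    (9.0, 95),   # short hold at high %B
    (9.5,  5),   # return to starting conditions / re-equilibrate
]

for time_min, percent_b in segmented_gradient:
    print(f"{time_min:>4.1f} min  {percent_b:>3d} %B")
```

Modelling software would then vary the segment end-points and slopes around a table like this to find the combination that maximizes critical-pair resolution within the run-time budget.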

To improve peak shape or provide better pH control in case we stray closer to the pKa of any of the analytes, the author suggests an aqueous eluent component of 20 mM ammonium formate at pH 3.7. We may expect retention and/or selectivity changes at this stage due to the higher pH and the change in ionic strength of the mobile phase.


If we are using UV detection at lower wavelengths, we may need to use 0.03% formic acid (v/v) in the acetonitrile and 0.05% formic acid (v/v) (aq) to balance the absorbance of the eluents and avoid baseline drift during the gradient methods.

For more complex separations, or NCEs with structurally very similar impurities or degradants, we might explore different column chemistries or analysis at high pH, which results in different ionization states for ionogenic compounds and usually improved peak shape for basic analytes.

Good approach? Yes, commendable in the sense that various approaches are suggested for overcoming different challenges in pharmaceutical development and one has a good underlying sense of when and why changes may be required. Generic? It’s highly debatable.

I would ask a simple question: if you don’t get a suitable separation, or worse, if you don’t realize the separation is unsuitable (because of analyte co-elution, for example), and you lack a good underlying sense of the principles of chromatography, where do you go next?

This is the crux of my argument – we need to be able to realize something has gone wrong and act upon it. The more generic we make things, the less we think about what non-optimal might look like, and the less we know what to do to correct it.

One needs to:

Recognize when peak shapes are affecting resolution or reproducible integration.

Recognize when retention time variability caused by small changes in eluent pH, when analyzing ionogenic compounds, may be leading to poor resolution in a chromatogram or poor quantitative results, and, more importantly, know what can be done about it.

Understand the links between sample diluent, injection volume, and pH control (for example, buffering) to avoid peak shape and retention time issues.

Understand how to optimize gradient conditions to separate analytes whose structures and/or physico-chemical properties are similar. Without knowing something of the nature of gradient HPLC, and the relationships we can use to predict gradient range and slope, we often revert to what I call “wing of bat and eye of newt” development. That is, change something in the eluent or method (the potion), give it a stir, and see what happens, without any idea of what is producing the change or what to do if it doesn’t work.

Develop a sense of when trying to optimize a method is futile and a radically different approach is required, such as a change in stationary phase chemistry or in the mode of chromatography.

Assess when shorter columns are not producing enough plates to deliver a separation, and when a longer chromatographic bed is needed so that efficiency (N) can be used to aid the separation where selectivity is not optimal.

While I absolutely applaud the move toward the generic (and, let’s face it, it’s going to continue whether I like it or not), and while I really like the article cited above and the author’s treatment of the various approaches taken to meet the demands of different types of analysis, it is very important that we don’t lose sight of the underlying scientific principles; that we learn to understand the risks of generic method approaches and work hard to improve our skills in recognizing when problems occur. Even better, that we can use our knowledge of chromatography to devise solutions to those problems and have enough strategies in our toolbox to do this effectively and with insight. These are the 20% of separations that Pareto tells us we will now have 80% of our time to fix. The problem is, given the throughput demands of the modern laboratory and the “black box” nature of instruments and chromatographic conditions, that 80% of time we are supposed to dedicate to recognizing and solving problems is actually no time at all!

I’ve often found that looking at any set of chromatographic conditions and the attributes of our sample and analytes (if we are lucky enough to know them) and asking “why are those conditions used, and what could go wrong?” can be a great help in increasing my knowledge of chromatography and the requirements of particular applications. Most important of all: if you don’t know, find out!

Being of the “generic generation” is not shameful; it’s a really exciting time in analytical chemistry, as long as you remain informed enough to decide whether what you are looking at falls into Pareto’s 80% category or his 20% category...



Tony Taylor is the technical director of Crawford Scientific and ChromAcademy. He comes from a pharmaceutical background and has many years of research and development experience in small-molecule analysis and bioanalysis using LC, GC, and hyphenated MS techniques. Taylor is actively involved in method development within the analytical services laboratory at Crawford Scientific and continues to conduct research in LC-MS and GC-MS methods for structural characterization. As the technical director of ChromAcademy, Taylor has spent the past 12 years as a trainer and developer of online education materials in analytical chemistry techniques.