The LCGC Blog: Don’t Fear the Automation

This blog is a collaboration between LCGC and the American Chemical Society Analytical Division Subdivision on Chromatography and Separations Chemistry.

This month’s blog is dedicated to Christopher J. Watson, co-author of the first reference. Chris was an amazing friend, scientist, father, and so much more, who was taken far too soon from this Earth.

With modern chromatography modeling software, is it time to hang up our laboratory coats? Is resistance to software solving separations futile?

We live in a world of ever-increasing automation, from the simple (hello, automatic coffee maker!) to the complex (I think I’ll just let the car do the driving... or at least the parking). Pervasive, almost intelligent machines have seemingly always been a part of the analytical chemist’s world. Are we ready to let programs and instruments take over the art of separations?

High Throughput on a 400 MHz Processor

Some of my favorite research time as a graduate student in Prof. Robert T. Kennedy’s laboratory was spent writing programs to automate and speed up data collection and processing. For the most part, these activities did not make it into my thesis, but I sincerely believe they were integral to my own experiments and accelerated others’ research as well. One particular program I worked on for many months (years?) was a means for rapid batch processing of the tens to thousands of electropherograms we were generating daily in the group, a testament to the high-speed capillary electrophoresis being carried out in Bob’s laboratory. However, having poor grad students perform peak integrations serially and manually created a painful bottleneck in converting raw data into useful knowledge. With some additional features, such as peak deconvolution and statistical moment analysis, we ultimately published this work as “High-throughput automated post-processing of separation data” (1). The automated analysis was rapidly adopted and allowed scientists not only to assess their data faster, but also to redirect their reclaimed time to more productive endeavors. I’m fairly certain that none of the students or post-docs felt they would be replaced by this program; it simply allowed them to focus on more interesting ideas and challenges. Indeed, the program is still used in the laboratory today and, based on citations, is employed in other research laboratories, which is no small feat for a program whose metrics were benchmarked on a 400 MHz Pentium II computer.
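
To make the idea concrete, here is a minimal sketch of that style of batch post-processing: find the peaks in each trace, then characterize them by statistical moments rather than manual integration. This is not the published program from reference (1); the file layout, folder name, and threshold values are assumptions for illustration only.

```python
# A minimal sketch (not the published program from reference 1) of batch
# peak finding plus statistical-moment analysis over a folder of traces.
# Assumption: each CSV holds two columns, migration time (s) and signal.
import glob
import numpy as np
from scipy.signal import find_peaks
from scipy.integrate import trapezoid

def moments(t, y):
    """Zeroth, first, and second central statistical moments of one peak."""
    m0 = trapezoid(y, t)                        # area
    m1 = trapezoid(t * y, t) / m0               # centroid (migration time)
    m2 = trapezoid((t - m1) ** 2 * y, t) / m0   # variance
    return m0, m1, m2

def process_trace(path, prominence=5.0, half_width=1.0):
    t, y = np.loadtxt(path, delimiter=",", unpack=True)
    y = y - np.median(y)                        # crude baseline correction
    apexes, _ = find_peaks(y, prominence=prominence)
    rows = []
    for i in apexes:
        sel = (t > t[i] - half_width) & (t < t[i] + half_width)
        area, centroid, variance = moments(t[sel], y[sel])
        rows.append({"file": path, "time": centroid, "area": area,
                     "plates": centroid ** 2 / variance})   # N = M1^2 / M2
    return rows

# Batch mode: every electropherogram in the folder, no manual integration.
results = [row for f in sorted(glob.glob("electropherograms/*.csv"))
           for row in process_trace(f)]
```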

All Aboard the In Silico Train

Upon joining Bristol Myers Squibb (BMS), I was introduced to some very cool software programs that aid liquid chromatography (LC) method development, mainly in the achiral reversed-phase LC (RPLC) domain: Molnar Institute’s DryLab, S-Matrix’s Fusion, and ACD’s LC Simulator, to name a few. With only a couple of empirical LC runs, these modeling programs could map out predicted separations in silico across a wide design space. With a click of a button, we could potentially let the program “choose the optimum” separation, which we would then quickly discard. Why? Mainly because the programs at the time did not know what we humans really wanted. For example, it was difficult to specify our needs, such as: only make gradient changes in this region; allow a step gradient here; keep the retention factors between 1 and 20; all while maintaining baseline resolution in under a 5-min runtime...with re-equilibration. Usually, we were faster at defining these boundary conditions in our heads, mapping them onto the program’s gradient, and seeing what popped out visually, followed by empirical verification. However, these programs are evolving rapidly and significantly, with: better optimal-design options; more dimensionality, moving from one dimension (for example, gradient) to two (such as add-on temperature) to three (for example, add in pH); more separation modes, so that non-reversed-phase separations can be modeled with the same facility as RPLC; and back-end robustness analysis, sometimes with pre-templated reports. Regardless, the human has always been the conductor of the in silico express train, picking the input packages, keeping them on the rails, and deciding on the final destination. Sometimes wholly iterative method development will arrive at the same place as the modeling, albeit at a slower pace. Sometimes, however, the modeling program can reveal better operating spaces, while your iterative work finds only a locally ideal space that is not globally optimal. In this case, the automation not only removes the labor-intensive grunt work but also yields a better end product that could save additional time down the road by being less prone to failure.
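
As a heavily simplified illustration of the modeling idea, the sketch below fits a linear solvent strength (LSS) retention model, ln k = ln kw − S·φ, to a couple of scouting runs per analyte and then scans a design space against human-defined boundary conditions (for example, keeping k between 1 and 20). The analytes, retention factors, and thresholds are invented; commercial packages model gradients, temperature, and pH with far more sophistication than this isocratic toy example.

```python
# A simplified, hypothetical illustration of retention modeling and an
# in silico design-space scan. All numbers below are invented.
import numpy as np

def fit_lss(phi, k):
    """Return (ln_kw, S) from isocratic scouting data: ln k = ln kw - S*phi."""
    slope, intercept = np.polyfit(phi, np.log(k), 1)
    return intercept, -slope

def k_at(phi, model):
    ln_kw, S = model
    return np.exp(ln_kw - S * phi)

# Two scouting runs at 30% and 50% organic for two hypothetical analytes.
phi_scout = np.array([0.30, 0.50])
scout_k = {"A": np.array([12.0, 2.5]), "B": np.array([15.0, 2.8])}
models = {name: fit_lss(phi_scout, k) for name, k in scout_k.items()}

# Scan: keep conditions where every k stays in 1-20 and the critical pair
# retains some minimum selectivity (a crude stand-in for resolution).
for phi in np.arange(0.20, 0.605, 0.01):
    ks = sorted(k_at(phi, m) for m in models.values())
    if ks[0] >= 1.0 and ks[-1] <= 20.0 and ks[1] / ks[0] >= 1.10:
        print(f"phi = {phi:.2f}  k = {ks[0]:.1f}, {ks[1]:.1f}  "
              f"alpha = {ks[1] / ks[0]:.2f}")
```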

Speed Through Screens

We need a starting point for our separation before we can start modeling it. We could grab our personal favorite C18–mobile-phase combination or use a generic platform method (as fellow columnist Tony Taylor puts it, the potluck supper of analytical chemistry) (2) to see if there’s a hit. Alternatively, we can use some sort of screening system, which could be as simple as a couple of columns and solvents, or quite exhaustive. Indeed, at BMS we have a comprehensive RPLC system capable of automatically running well over 100 conditions based on the MeDuSA paradigm (3). In these cases, automation results in single-button data acquisition with an overnight run. Although processing can be automated to some degree, at this stage data interpretation can be the primary bottleneck. Some efforts have been made in current commercial chromatography data systems (frequently as an add-on purchase) to facilitate this evaluation, but it is still up to the human to decide what “optimal” actually means. Do we prioritize peak count? Efficiency? Resolution? Tailing factors? Does the answer change if I tell you the sample has 20 components but only 10 are relevant? Similar to the modeling case, the scientist is accountable for picking the right samples to inject, determining the results’ relevance, and usually defining the final separation conditions. Additionally, in the absence of human–data engagement, potentially critical findings may be missed, such as a clean sample turning up a new unknown during screening (those pesky peak shoulders!). The bottom line is that my fellow scientists and I are extremely grateful we don’t have to serially churn through setting up all these conditions, which gives us the time back to delve deeper into the data or work on other, more rewarding activities.
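
One hypothetical way to picture the interpretation step is a weighted score over each screening hit, where the weights encode what “optimal” means for this particular sample. The result fields, weights, and example conditions below are assumptions for illustration; the point is that the human, not the screening system, chooses the weights.

```python
# A hypothetical scoring sketch for ranking screening hits; values invented.
from dataclasses import dataclass

@dataclass
class ScreenResult:
    condition: str          # column / mobile phase / pH combination
    peak_count: int         # peaks detected
    min_resolution: float   # resolution of the critical pair
    worst_tailing: float    # largest USP tailing factor observed

def score(r, expected_peaks=10, w_peaks=1.0, w_res=2.0, w_tail=0.5):
    """Higher is better: reward resolution, penalize missing peaks and tailing."""
    peak_term = -w_peaks * abs(expected_peaks - r.peak_count)
    res_term = w_res * min(r.min_resolution, 2.5)       # cap reward near baseline
    tail_term = -w_tail * max(r.worst_tailing - 1.2, 0.0)
    return peak_term + res_term + tail_term

screen = [
    ScreenResult("C18 / MeCN / pH 2.5", 10, 1.6, 1.4),
    ScreenResult("Phenyl-hexyl / MeOH / pH 6.8", 9, 2.3, 1.1),
    ScreenResult("Biphenyl / MeCN / pH 9.0", 10, 2.1, 1.2),
]
for r in sorted(screen, key=score, reverse=True):
    print(f"{score(r):6.2f}  {r.condition}")
```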

The Art of Chromatography

A more recent LC method automation system available today is feedback-optimized screening, such as ACD’s Autochrom. In this implementation, we let the software choose the best screening condition, but we also allow it to perform its own final optimization on that condition through an automated modeling exercise. Will this perfect marriage of screening and modeling result in the bench scientists hanging up their lab coats? I hope the previous paragraphs have already convinced you otherwise. Beyond those arguments, there are simply too many separation problems that require serious intervention beyond what current systems can handle. If my automated system relies on mass spectrometry to track peaks, what will it do when I need low-wavelength UV detection (I’m still waiting for the perfect volatile buffer with a low UV cutoff)? What if the optimal condition is missed because of chelation effects that I could mitigate with a mobile-phase additive? Buffer concentration? Ionic strength? Alternative ion pairs? Uncommon columns? On-column degradation? New technology? Lest we forget, the human is also in the driver’s seat for proper sample preparation (it’s more than just a separation!). We are both blessed and cursed to have so many selectivity levers in LC separations. There’s still plenty of opportunity for artistry by those who study the field. Let automation give you the freedom to explore the many less-common ways of doing your separations. You may just find the next big thing.

References

  1. J.G. Shackman, C.J. Watson, R.T. Kennedy, J. Chromatogr. A, 1040 (2004) 273–282.
  2. The LCGC Blog: http://www.chromatographyonline.com/lcgc-blog-generic-methods-potluck-supper-analytical-chemistry.
  3. B.D. Karcher, M.L. Davies, E.J. Delaney, J.J. Venit, Clin. Lab. Med., 27 (2007) 93–111.

Jonathan Shackman

Jonathan Shackman is an Associate Scientific Director in the Chemical Process Development department at Bristol Myers Squibb (BMS) and is based in New Jersey, USA. He earned his two B.S. degrees at the University of Arizona and his Ph.D. in Chemistry from the University of Michigan under the direction of Prof. Robert T. Kennedy. Prior to joining BMS, he held a National Research Council position at the National Institute of Standards and Technology (NIST) and was a professor of chemistry at Temple University in Philadelphia, PA. To date, he has authored more than 30 manuscripts and two book chapters. He has presented more than 30 oral or poster presentations and holds one patent in the field of separation science. Jonathan has proudly served on the executive board of the ACS Subdivision on Chromatography and Separations Chemistry (SCSC) for two terms.

This blog is a collaboration between LCGC and the American Chemical Society Analytical Division Subdivision on Chromatography and Separations Chemistry (ACS AD SCSC). The goals of the subdivision include:

  • promoting chromatography and separations chemistry
  • organizing and sponsoring symposia on topics of interest to separations chemists
  • developing activities to promote the growth of separations science
  • increasing the professional status of, and the contacts between, separations scientists

For more information about the subdivision, or to get involved, please visit https://acsanalytical.org/subdivisions/separations/

