Do Chromatographers Need More Automation?

Article

The Column, July 16, 2018, Volume 14, Issue 7

Incognito asks if chromatographers are behind the curve when it comes to automation.

Photo Credit: Golden Sikorka/Shutterstock.com

I’ve seen a real drive in recent years towards letting the scientists concentrate on science while the more menial tasks in the analytical laboratory are taken care of by staff employed by outsourcing or facilities management companies. This move is generally driven by two factors: the need to reduce costs and the requirement to be more productive.

In an ideal world, we would request an eluent and a chromatography system setup this afternoon and come into the laboratory tomorrow to find the system ready to go, already qualified by test injections of a generic system suitability standard. Because the outsourced staff start work earlier than the “scientists”, this is very much a reality within some of the larger research and development companies that I’ve visited. But is this really “ideal”?

Most folks wouldn’t need a kaizen burst or a spaghetti diagram from their local Six Sigma expert to tell them where the bottlenecks are in their processes, and the areas in which outsourcing may make a positive impact should be fairly obvious.

However, I can’t shake the notion that as an industry we are lagging behind in the degree of automation that is used to achieve cost reduction and improved throughput. By no means do I want to see the outsourcing company staff become unemployed, but by the same token I really do see a great opportunity to further automate many of our operations and to address the areas in which the real bottlenecks lie.

Most of you reading this article will be able to identify the efficiency pinch points in your laboratory, which I believe will include:   

  • Sample and eluent preparation     

  • Data processing

  • Data reporting

  • Instrument preparation 

  • Instrument (hardware) failures

  • Chemistry failures

  • Non-availability of equipment or reagents

  • Method development and validation

With the level of sophistication that is possible in automation (think of the complexity of some manufacturing plants for example), the power of modern data management systems and the interconnectedness of the Internet of Things (IoT), surely we must be able to overcome some of these challenges.

I wish I had some insightful answers for you, but frankly I have only questions and suggestions, and will need to leave it to the specialists to build the solutions. But surely I can’t be the only laboratory dweller who yearns for answers to the bottlenecks outlined above?

Let’s take a brief look at some of the issues that I believe we should have tackled by now, in the hope that someone somewhere can let us all know that there is hope, or maybe even that the problem has been solved and we just haven’t found out about it yet.

 

Eluent Preparation

I’d really like to know why I can’t dial up an eluent recipe and have it delivered in whatever volume is required, fully degassed. I appreciate that there are several discrete steps involved, such as weighing of additives, accurate volumetric measurement, and pH adjustment, but all of these are capable of automation, aren’t they? Whilst some laboratories will have a very diverse set of reagents, solvents, and additives, making it very tricky to store them all in reservoirs or silos ready for use, a large number use a fairly standard set of reagents, which would not require such a large number of containers. I also believe that automated eluent preparation would reduce the number of errors seen in manual eluent preparation. I’m aware that prep‑stations of this type have been attempted in the past, but I don’t see a ready source of this equipment and wonder what the issues were with implementation or commercialization.
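
As a thought experiment, the “recipe” half of this problem looks straightforward to express in software. Here is a minimal sketch, in Python, of how a declarative eluent recipe might be turned into weighing instructions for a hypothetical prep station; the recipe format, component names, and concentrations are illustrative assumptions, not any vendor’s interface.

```python
# A minimal sketch of the "dial up an eluent recipe" idea: turn a declarative
# recipe into the masses an automated prep station would need to weigh out.
# The recipe format and component list are hypothetical illustrations.

RECIPE = {
    "volume_L": 1.0,
    "components": [
        # (name, molar mass in g/mol, target concentration in mol/L)
        ("potassium phosphate monobasic", 136.09, 0.020),
        ("sodium 1-octanesulfonate", 216.27, 0.005),
    ],
    "adjust_pH_to": 2.5,  # assumes a downstream automated titration step
}

def weighing_instructions(recipe):
    """Convert molar concentrations into masses for the requested volume."""
    volume = recipe["volume_L"]
    for name, molar_mass, conc in recipe["components"]:
        grams = molar_mass * conc * volume
        print(f"Weigh {grams:.3f} g of {name} for {volume:.1f} L of eluent")
    print(f"Adjust to pH {recipe['adjust_pH_to']}, make to volume, and degas")

weighing_instructions(RECIPE)
```

The arithmetic is trivial; the hard parts (dispensing, mixing, pH adjustment, and cleaning between batches) are mechanical, which perhaps hints at where the commercial difficulty has been.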

Sample Preparation

I admit that in some industries I do see great advances in automated sample preparation, and there are some vendor companies that really excel at automating these processes. But if I take the pharmaceutical industry as an example, there are very few systems that I know of that can take tablets, sterile injectable solutions, or powders in at one end and deliver at the other a sample ready for injection into the chromatography system, either in a bulk format or with a just-in-time approach. Why is this? What are the barriers to implementation? I really can’t believe it’s a technology hurdle, because these automation companies can do pretty much anything with their robots, including weighing, accurate volumetric operations, centrifuging, solvent extraction, and solid‑phase extraction (SPE). I must admit I haven’t come across a solution for grinding tablets, but I’m sure someone will let me know if this is possible. So why is there so little automation of sample preparation in the pharmaceutical industry?

Instrument Preparation

Why do I need to “set up” the instrument? Why haven’t we come up with a solution for automated (robotic?) column changing (high performance liquid chromatography [HPLC] or gas chromatography [GC]) that can load an acquisition method and flush the instrument with the eluent that has been automatically prepared for me, then inject the generic system suitability standard to check that the system as a whole is performing to the required standards?

I know that various parts of what I have described are possible, but why have we not yet combined these into a viable solution?

In terms of telemetry, surely it would also be possible for the system to run a self‑check routine that not only verifies detector function (as is possible in most instruments today), but also tests for leaks and for susceptibility to leaks as the eluent viscosity changes and pressure increases, or estimates the likelihood of a column failing during an analysis based on the back-pressure history of that column.
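
The last of these checks needs nothing more exotic than the pressure trace the instrument already records. Below is a minimal sketch, assuming per-injection back-pressure readings are available from the data system; the limit, projection horizon, and flagging rule are invented for illustration and are not any manufacturer’s algorithm.

```python
# A sketch of a column health check based on back-pressure history:
# fit a linear trend and project whether a pressure limit will be exceeded.
import numpy as np

def column_health_check(pressures_bar, injections_ahead=50, limit_bar=500.0):
    """Flag a column whose projected back pressure exceeds limit_bar."""
    x = np.arange(len(pressures_bar))
    slope, intercept = np.polyfit(x, pressures_bar, 1)  # per-injection drift
    projected = intercept + slope * (len(pressures_bar) + injections_ahead)
    return {
        "drift_bar_per_injection": slope,
        "projected_pressure_bar": projected,
        "likely_to_fail": projected > limit_bar,
    }

# Example: a column whose back pressure creeps up by ~0.8 bar per injection
history = 400 + 0.8 * np.arange(120) + np.random.normal(0, 5, 120)
print(column_health_check(history))  # projected ~536 bar, so it is flagged
```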

 

Automated Troubleshooting

If an instrument fails, I would like it to do some more advanced diagnostics to tell me exactly why it failed. Where is the leak and why did it occur? Why is the sensitivity of the detector not what it should be? If there is pressure ripple, where is it coming from? Why won’t the flame ionization detector (FID) flame light, or why has it gone out? Why is the GC–mass spectrometry (MS) system not seeing any peaks?

If we were to reach advanced levels of automation, it should be possible to simply swap out the faulty component of an HPLC system (pump, degasser, detector) and get on with the job, rather than coming into the laboratory the next day only to be disappointed that the “batch has failed” overnight because of an instrument error. My pet hate, by the way, is the autosampler failing to recognize or pick up a vial; surely this should be a trivial fix?

In a similar fashion, with the IoT so much in focus these days, why is it not possible to discover an issue with my system via my smartphone and then, in conjunction with a fully automated (multiplexed, if you will) system, simply divert the eluent flow via another pump or detector within the matrix? The system would continue to collect data rather than leaving me fretting about all the remedial work I’m going to have to get through the next day.

Furthermore, when my problems are associated with “chemistry” issues, why haven’t we employed machine learning to better interpret and solve them? In troubleshooting classes we teach folks how to recognize “symptoms” related to baseline appearance, peak shape, changes in selectivity, and drifting retention times, and then relate them back to issues with the chemistry of the separation. Very little of this will never have been seen before, so why can we not harness the power of “big data” to associate pictures with causes and, at the very least, give suggestions for the likely cause, or perhaps even one day fix the issue on the fly? Making up a new batch of eluent or changing the column would fix many of the issues that I see. We are constantly sending data back to Microsoft so that they can improve our experience of the Windows operating system; is this really so far removed from what we are trying to achieve here in terms of harnessing the power of big data?
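
To make the “pictures with causes” idea concrete, here is a minimal sketch of a fault-suggestion classifier trained on simple chromatogram descriptors. The descriptors, fault labels, and synthetic training data are entirely hypothetical; a real system would learn from large numbers of annotated chromatograms rather than the toy distributions used here.

```python
# A toy "symptom to cause" classifier for chromatographic faults.
# Descriptors per chromatogram: [tailing factor, retention-time drift (%),
# baseline noise, plate-count ratio vs. a reference column].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical fault signatures: (typical descriptor values, spread)
faults = {
    "column_ageing":  ([2.0, 1.0, 0.02, 0.6], 0.2),
    "eluent_mix_up":  ([1.2, 8.0, 0.02, 0.9], 0.3),
    "detector_noise": ([1.1, 0.5, 0.30, 1.0], 0.05),
}

X, y = [], []
for label, (centre, spread) in faults.items():
    X.append(rng.normal(centre, spread, size=(200, 4)))  # synthetic examples
    y += [label] * 200
X = np.vstack(X)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new chromatogram showing tailing peaks and a low plate count:
print(model.predict([[1.9, 0.8, 0.03, 0.55]]))  # suggests "column_ageing"
```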

Again, I realize that some equipment manufacturers have begun to implement some ideas around remote diagnostics and telemetry, but what exists is still a long way short of the ideal described above and there isn’t anything that I’m aware of which can take a chromatogram (or trends over a number of chromatograms) and tell us what might be wrong with our separation.

Data Management

I realize that data processing systems are capable of a very high level of automation. Integration algorithms are very advanced, and the level of sophistication in the “custom calculations” that can be performed is high. Why then do I still see people using spreadsheets for calculating results or collating data into useable tables? Perhaps this says more about our adoption of the technology than about its availability.

However, if one of my quality control (QC) results is out of specification, my system suitability test fails, or the calibration function does not meet specification limits, it’s usually off to the spreadsheet and some head-scratching over whether there is enough evidence for me to scientifically and statistically justify that, actually, I have fit-for-purpose data. Can we not build algorithms that interrogate the data within some basic statistical framework to do this job for us? Do I have an outlier or a batch failure?
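
The statistics involved are hardly exotic. As one example of the kind of check a data system could run for us, here is a minimal sketch of a two-sided Grubbs’ test for a single outlier among replicate results; the data and significance level are illustrative only, and any real implementation would need to respect the outlier policy of your regulatory framework.

```python
# A two-sided Grubbs' test for a single suspect value in replicate results.
import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.05):
    """Return (suspect_value, is_outlier) under the two-sided Grubbs' test."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    deviations = np.abs(x - x.mean())
    g = deviations.max() / x.std(ddof=1)               # test statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)        # critical t-value
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[deviations.argmax()], g > g_crit

# Six replicate assay results (%), one of which looks suspect:
print(grubbs_test([99.8, 100.2, 99.9, 100.1, 100.0, 97.4]))  # (97.4, True)
```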

Most of us will have a protocol for method validation, and anyone doing method development work will be quite aware of the number and types of data we consider satisfactory to properly validate a method within the regulatory framework in which we operate. So why do I seem to have so many conversations about the data required, and the experiments needed to generate that data, in order to properly validate a method to, say, ICH Q2 standards? Why can I not simply load an autosampler with samples and standards, press the “ICH Q2” button on my chromatography data system (CDS), and come back some time later to find the data collected and collated? I know there will be challenges with producing data for intermediate precision and robustness, but surely we should be able to automate a design of experiments (DOE) program that varies the parameters according to a generated factorial design, carries out an analysis of variance (ANOVA), and shows me the quality by design (QbD)-type map of the method design and control spaces? Could an “instrument matrix” be used alongside automated eluent preparation equipment and column changers to produce satisfactory data on intermediate precision? One for debate, I feel.
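
At least the design generation and the effects arithmetic are trivial to automate. Below is a minimal sketch that builds a two-level full factorial design for three hypothetical robustness parameters and estimates their main effects on a response such as resolution; the factor ranges and response values are invented purely to show the mechanics an “ICH Q2 button” would wrap.

```python
# A sketch of automated DOE bookkeeping: a two-level full factorial design
# and main-effect estimates for three hypothetical method parameters.
from itertools import product
import numpy as np

factors = {
    "pH": (2.8, 3.2),           # (low, high) levels - illustrative only
    "temperature_C": (30, 40),
    "flow_mL_min": (0.9, 1.1),
}

# Every combination of coded low (-1) and high (+1) levels: 2^3 = 8 runs
design = np.array(list(product([-1, 1], repeat=len(factors))))

# Hypothetical resolution measured for each run, in design order
response = np.array([2.1, 2.0, 1.6, 1.5, 2.3, 2.2, 1.8, 1.7])

for i, name in enumerate(factors):
    # Main effect = mean response at high level minus mean at low level
    high = response[design[:, i] == 1].mean()
    low = response[design[:, i] == -1].mean()
    print(f"{name:>14}: effect on resolution = {high - low:+.2f}")
```

A full implementation would feed these runs to the autosampler and pump automatically, then extend the same bookkeeping to the ANOVA and the QbD design-space map described above.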

I know that some pieces of this puzzle have already been solved and that software and equipment are available to partially solve these challenges, but I still don’t see the “validate to ICH Q2” button in any data systems.

 

Method Development

I believe we are “almost” there in terms of analysts being able to put a sample or standard onto the autosampler and return to a fully developed method. But when I say “almost” there, I mean exactly that. Some laboratories are pretty close to full automation, as are some software and equipment combinations, but I don’t think we are “completely” there yet. The methods created are, in my humble opinion, sometimes more complex than they need to be, and a fully viable method is not “always” reached. Again, if you know differently, I’m sure readers would love to hear about it.

Further, whilst I refer to HPLC applications above, I don’t see any fully formed solutions for LC–MS (automated optimization) or for GC or GC–MS method development.

Why are we not as fully automated as we need to be? Perhaps the answer lies within the question: do we actually need this level of automation?

Is it truly easier or more cost-effective for outsourced (and therefore less expensive?) workers to perform some of our more menial tasks in the hours we are not present? If that is the case, why are these folks less expensive? Are they less qualified or less well trained? Is this a situation we can live with from a quality perspective? Are these people not whom we used to call technicians, and if so, why do we not have technicians anymore?

Whilst automation companies are fantastic at engineering and coding, do they truly understand our scientific requirements or applications enough to deliver a fully formed solution that truly meets our requirements? Is the investment required in getting someone “on the inside” trained and developed to properly integrate the automation solution the real barrier to adoption?

I apologize for the number of unanswered questions in this article; however, I’ve been struggling with these automation concepts for a long time and wanted to get all of my thoughts down on paper, in the hope that perhaps I can spark a debate on the “big picture” solutions and bring some of the existing pieces together to solve the bigger issues, if indeed they are issues!

 

Contact author: Incognito

E-mail: kate.mosford@ubm.com
