Comprehensive two-dimensional liquid chromatography (LC×LC) is a powerful technique for separating highly complex samples. However, the proliferation of this technique is hindered by a range of challenges, including the possible impact of the additional separation on the detection sensitivity, concerns that mobile phase incompatibility problems will limit the applicability, and the complexity of the system and associated method development costs. This article addresses these issues and describes how modern modulators and software tools are overcoming the barriers associated with this technique.
Bob W.J. Pirok (1,2) and Peter J. Schoenmakers (1) — (1) University of Amsterdam, van ‘t Hoff Institute for Molecular Sciences, Amsterdam, The Netherlands; (2) TI-COAST, Amsterdam, The Netherlands
Comprehensive two-dimensional liquid chromatography (LC×LC) is an extremely powerful technique that is becoming attractive for the analysis of complex samples because it (i) benefits from a greatly increased potential peak capacity relative to one-dimensional liquid chromatography (1D-LC), (ii) exposes two chemical properties of the sample if the two separation modes are sufficiently different, that is, “orthogonal”, (iii) allows highly complex samples to be separated, and (iv) allows separation modes to be hyphenated with detectors that would otherwise be incompatible, potentially providing additional sensitivity.
Despite all of these favourable traits, scientists and manufacturers struggle to bring the technique to the analytical laboratory in industry. The potential that LC×LC offers is often shrouded by a widespread perception that the technique is not sufficiently mature and that its application is complicated by a range of seemingly insurmountable challenges. The typical challenges include (i) insufficient sensitivity as a result of the additional dilution factor arising from the second-dimension separation, (ii) limited applicability because of (solvent) compatibility issues, and (iii), perhaps the largest of these hurdles, the impracticality resulting from the complexity of the instrumentation and method development.
As an alternative to LC×LC, a hyphenated LC–mass spectrometry (MS) system is often proposed. Indeed, MS is an extremely powerful technique that is indispensable for any modern analytical laboratory, and it is often the detector of choice in LC. Yet reliable deconvolution and quantification of chromatographic peaks require that (i) only a few analytes are introduced simultaneously and that these are present in similar concentrations, to avoid matrix and suppression effects, (ii) the analytes and mobile phase are compatible with the MS system, allowing detection and avoiding instrument contamination, and (iii) coeluting analytes yield sufficiently different MS or tandem MS spectra.
While it is true that application of LC×LC is not always straightforward, this is arguably also the case for MS. Ironically, as we will see in the following sections, LC×LC and MS may alleviate each other’s shortcomings when hyphenated.
In this article, we will discuss LC×LC as a powerful technique suitable for practical use in industry laboratories, and how modern modulators and software tools overcome difficulties commonly associated with the technique.
Principles of LC×LC
In LC×LC, two 1D-LC separations are combined using a modulation device, typically an eight-port or ten-port, two-position valve fitted with two sampling loops. For a given length of time, one loop is connected to and filled by the first-dimension (1D) effluent, while the contents of the other loop are injected into and separated on the second-dimension (2D) column; the roles of the two loops then alternate. The time between valve switches is referred to as the modulation time and equals the analysis time of the second dimension. For the LC×LC system to reach its full potential in terms of peak capacity, several requirements must be met.
First and foremost, the two separation mechanisms must be as different, or “orthogonal”, as possible. This can be accomplished by targeting different sample dimensions, for example, charge and hydrophobicity, in each dimension, through selection of the stationary phase and mobile phase with respect to the analyte mixture. While ideally the two mechanisms are statistically independent, there is almost always some correlation between the two separation dimensions and 100% orthogonality is not commonly achieved.
Secondly, to allow the first-dimension separation to be sampled and cross-separated by the second dimension in a timely manner, the latter must run as fast and efficiently as possible and, therefore, often utilizes ultrahigh-pressure LC (UHPLC) conditions. However, to avoid loss of information as a result of undersampling, the first dimension must be suitably slow to allow each peak to be sampled at least two or three times. Moreover, the sample loops must be able to store the fractions of the first-dimension effluent, but should not be so large as to deteriorate the second-dimension separation through injection effects. As a rule of thumb, the loop volume should be twice the modulation volume, but less than 15% of the second-dimension column dead volume.
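The cost of undersampling can be made concrete with the widely used correction factor of Davis, Stoll, and Carr, which discounts the theoretical LC×LC peak capacity (the product of the two individual peak capacities) according to the ratio of modulation time to first-dimension peak width. The following minimal Python sketch illustrates the idea; all numerical values are hypothetical.

```python
import math

def effective_peak_capacity(n1, n2, ts, sigma1):
    """Discount the theoretical LCxLC peak capacity (n1 * n2) for
    first-dimension undersampling using the correction factor
    <beta> = sqrt(1 + 0.21 * (ts / sigma1)**2), where ts is the
    modulation (sampling) time and sigma1 the standard deviation
    of a first-dimension peak (same time units)."""
    beta = math.sqrt(1 + 0.21 * (ts / sigma1) ** 2)
    return n1 * n2 / beta

# Hypothetical case: 1D peaks with sigma = 15 s sampled every 30 s
# (roughly two to three fractions per peak), n1 = 50, n2 = 20.
print(effective_peak_capacity(50, 20, ts=30.0, sigma1=15.0))  # ~737, not 1000
```

With these numbers, halving the sampling rate (ts = 60 s) would reduce the effective peak capacity to below 500, which is why the first-dimension separation is deliberately kept slow.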
Finally, the separation should be optimized to utilize the two-dimensional separation space as efficiently as possible. This involves extensive method development relative to 1D-LC, with the gradients in both dimensions (if applicable) being tailored to the sample.
Challenges Associated with LC×LC
LC×LC and Reduced Sensitivity of the Detector: The superior information potential of the mass spectrometer justifies the quest for its application wherever possible. Unfortunately, for the analysis of complex mixtures, the issues described above require us to regard the intensity axis of a mass spectrum with due circumspection. Schoenmakers and de Koster (1) described the mass spectrometer in this context as “the lame”, because several secondary effects disturb ionization. Conversely, LC is unable to provide detailed qualitative information (for example, structural information on the separated sample constituents) and was thus labelled “the blind”. It is, therefore, not surprising that the combination of the two techniques has been a great success. Thanks to the great synergy between the two techniques, LC–MS is now firmly established in many analytical laboratories. By providing the potential means to analyze each component individually, LC aids in circumventing many of the issues faced by mass spectrometrists. Yet, the prospect of LC×LC–MS has been met with scepticism.
This brings us to the first obstacle preventing the proliferation of LC×LC in practice in many laboratories: the addition of a second separation dimension introduces an additional dilution step. Analyte zones are presented to the MS system (or any other detector) in a significantly more dilute form in LC×LC in comparison with 1D-LC. Consequently, the task of the mass spectrometer to detect the analytes is made harder by inserting an additional LC stage in an LC–MS instrument to perform LC×LC–MS experiments. However, because the zones are better separated, accurate identification and quantification may become easier.
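The magnitude of this extra dilution can be estimated from Gaussian peak theory: in each dimension the dilution factor is approximately the peak volume divided by the injected volume, and the overall LC×LC dilution is roughly the product of the two. A minimal sketch, with purely illustrative volumes:

```python
import math

def dilution_factor(sigma_v, v_inj):
    """Dilution of a Gaussian peak: ratio of injected concentration to
    the concentration at the peak apex, DF = sigma_V * sqrt(2*pi) / V_inj
    (volumes in the same units, for example microlitres)."""
    return sigma_v * math.sqrt(2 * math.pi) / v_inj

# Illustrative volumes: a 1D peak of sigma = 50 uL from a 5-uL injection,
# with 40-uL fractions transferred onto a 2D column yielding peaks of sigma = 20 uL.
df1 = dilution_factor(sigma_v=50, v_inj=5)    # first-dimension dilution (~25x)
df2 = dilution_factor(sigma_v=20, v_inj=40)   # second-dimension dilution (~1.3x)
print(df1 * df2)  # overall dilution is roughly the product of the two
```

With these illustrative numbers the overall dilution is roughly 30-fold, compared with 25-fold for the first-dimension separation alone.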
Fortunately, active-modulation techniques provide means to alleviate this problem (2,3). Examples include active-solvent modulation (ASM [3]) and stationary-phase-assisted modulation (SPAM [2]). Contrary to passive modulation, where the 1D effluent is simply stored in its entirety in the sample loops of the modulation valve, SPAM involves the use of small cartridges (or “traps”) of stationary phase (for example, guard columns). After early reports by Oda et al. (2), the usefulness of the SPAM approach for LC×LC has been reported by several groups. Upon elution from the 1D column, the effluent passes through the traps and the analytes are retained. Because the volumes of these cartridges and the required tubing typically comprise only a few microlitres of dead volume, the analyte plug is essentially concentrated and most of the solvent is flushed away. This effect can be enhanced through the use of gradients in the second-dimension separation, which can elute the retained components in sharp bands. To ensure that the trap retains all analytes from the first-dimension effluent and releases them in the second-dimension mobile phase, the traps are typically packed with a material similar to the second-dimension stationary phase. Often a weak solvent is admixed with the first-dimension effluent to enhance analyte adsorption. Gargano et al. (4) reported quantitative data on the improvement in terms of dilution factors.
Compatibility Issues Associated With LC×LC: The second challenge that has confronted chromatographers working with LC×LC is that some of the most orthogonal combinations of retention mechanisms are difficult to implement because of compatibility issues. Typically, such issues concern a lack of miscibility of the first- and second-dimension mobile phases or the effect of the first-dimension solvent system (and injection solvent) on the second-dimension column. The classical example to illustrate this is the combination of hexane-based normal-phase with water-based reversed-phase LC, or the latter with an organic size-exclusion chromatography (SEC) separation. In all cases (normal-phase LC × reversed-phase LC, reversed-phase LC × normal-phase LC, SEC × reversed-phase LC, or reversed-phase LC × SEC), the combinations of two very different solvent systems are likely to result in performance-deteriorating effects such as demixing, viscous fingering, breakthrough phenomena, and adsorption.
Viscous fingering arises from significant differences in viscosity between the solvent systems used in the two dimensions. Breakthrough phenomena occur when the sample solvent is too strong for the analytes present, so that no retention is obtained in a section of the separation dimension. The opposite occurs with adsorption effects in SEC, when a weak injection solvent renders analytes undesirably retained, resulting in shifted retention times that can no longer be correlated with the analytes’ molecular weights.
The problem has sparked a series of studies that focus on either the modulation interface or the intermediate adaptation of the solvent systems. A good example of the first case is the vacuum modulator interface developed by Tian et al. (5), in which the solvent of the 1D effluent is evaporated from the sample loop. The analytes are deposited on the wall and redissolved in the 2D mobile phase after switching the modulation valve. Other examples focus on the use of temperature to achieve on-column focusing (6) and concomitant solvent switching (7).
Intermediate adaptation of the solvent system involves the use of active-modulation techniques combined with admixing diluents to achieve retention. This approach has also recently been applied to transform the sample analytes during modulation, yielding different kinds of information (particle size and molecular weight distributions) from a single experiment (8).
Practicality of LC×LC: All of the issues discussed so far have only increased the complexity of LC×LC, and this raises the valid question of whether the advantages are worth the intricacies and related costs. This brings us to the third obstacle to using LC×LC in practice. One common opinion is that an entirely new system must be acquired to perform LC×LC experiments. While contemporary complete systems do offer attractive features, a relatively minor investment may suffice to upgrade a 1D-LC system. A second (binary) pump is required for the second-dimension separation; this should ideally be a UHPLC pump to allow fast and efficient separations. Moreover, a modulation valve should be added to allow fraction transfer from the 1D to the 2D system. Assuming that the processing power of the detector allows sufficiently high sampling frequencies, the final requirement is suitable data-analysis software. The total cost of these hardware adjustments is minor in comparison with the cost of an MS system. Therefore, bringing the hardware into the laboratory does not necessarily require a large investment. Yet there is some truth to the statement that LC×LC is too expensive for the general analytical laboratory. The justification for this statement, however, is not related to the hardware: obtaining and implementing LC×LC hardware is not the real obstacle. The real obstacle is the time and effort required from the operator.
A competent liquid chromatographer can be expected to develop and apply 1D-LC methods. An LC×LC method is a product of two 1D-LC separations, so the knowledge of the operator must be expanded with a general understanding of LC×LC principles and techniques. Several groups have written guidelines to facilitate this and many applications have been described. Dwight Stoll has been particularly active in this respect (9–11). Recently, our group published a practical guide (12). The 1D-LC separations that are to be combined into an LC×LC method must be carefully matched in terms of selectivity, compatibility, orthogonality, and resolution. As a result, the operator will typically spend more than twice as long developing an acceptable LC×LC method as developing two 1D-LC methods. This dramatically increases the method development costs of LC×LC relative to 1D-LC and other analytical techniques. For new, complex samples, the method development time required may be in the order of months, which is too high a barrier if one is faced with an urgent analytical question.
Perhaps even more unsatisfactory is the realization that the resulting LC×LC method may in many ways be suboptimal. Take, for example, the separation of an industrial-surfactant mixture shown in Figure 1. The separation of the various series of analyte components has been achieved through a pragmatic approach. Such a result may be accepted in practice; in an industrial context suboptimal conditions are often tolerated, as long as the method is able to provide an answer to an incidental question. However, if the method is to be applied repeatedly to related samples, optimization is highly desirable. For example, in various parts of the LC×LC chromatogram the separation space is used inefficiently. This can be improved using variable second-dimension gradients. Moreover, the relatively long analysis time and broad first-dimension peaks leave room for improvement. The method can be further improved by using the information provided by each intermediate result.
To obtain acceptable analysis times and high peak capacities, additional efforts are required, further increasing the method development time. This is a major challenge: how can a technique that requires such large method development efforts for every new type of sample ever find its way into contemporary (industrial) laboratories?
To overcome this problem, we must resort to computer algorithms and retention theory. Chromatographic theory has yielded models for a number of retention mechanisms, including reversed-phase, normal-phase, ion-exchange, and hydrophilic-interaction liquid chromatography (HILIC). A relatively simple example is the linear solvent-strength (LSS) model, developed by Snyder (13), which can be used to describe the retention factor, k, in reversed-phase LC as a function of the organic-modifier fraction of the mobile phase, as shown in Figure 2.
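In the LSS model, ln k decreases linearly with the modifier fraction φ: ln k = ln k0 − Sφ. Two isocratic scouting runs therefore suffice to determine both parameters for an analyte, as in this minimal Python sketch (the retention factors used are hypothetical):

```python
import math

def fit_lss(phi1, k1, phi2, k2):
    """Fit the two-parameter LSS model ln k = ln k0 - S * phi from two
    isocratic scouting runs at modifier fractions phi1 and phi2."""
    S = (math.log(k1) - math.log(k2)) / (phi2 - phi1)
    ln_k0 = math.log(k1) + S * phi1
    return ln_k0, S

def predict_k(ln_k0, S, phi):
    """Predict the retention factor at any modifier fraction phi."""
    return math.exp(ln_k0 - S * phi)

# Hypothetical scouting data: k = 12.0 at 40% modifier, k = 1.5 at 60%.
ln_k0, S = fit_lss(0.40, 12.0, 0.60, 1.5)
print(predict_k(ln_k0, S, 0.50))  # interpolated retention factor at 50% (~4.2)
```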
Figure 2 shows that measuring the retention time at two different mobile phase compositions allows the linear retention model to be drawn (or fitted) and the retention factor at any mobile phase composition to be predicted. A limited number of (“scanning” or “scouting”) experiments will yield the retention parameters of all identified analytes and allow prediction of the separation at any mobile phase composition. By integrating the gradient equation, the scouting principle can be extended to gradient-elution experiments. A key realization is that this scouting principle can also be applied to LC×LC without increasing the number of experiments required. If two-parameter (linear) retention models are applied in both dimensions, two LC×LC experiments suffice to establish all parameters for all analytes tracked.
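For gradient elution, the retention time follows from the general gradient equation, in which the integral of dt/k(φ(t)) over the analyte's migration equals the column dead time t0. Once the LSS parameters are known, this is easily solved numerically. A simplified sketch, neglecting gradient deformation inside the column and assuming the gradient does not end before elution; all parameter values are hypothetical:

```python
import math

def gradient_tR(ln_k0, S, t0, phi_init, slope, t_dwell=0.0, dt=0.001):
    """Numerically solve the general gradient equation,
    integral of dt / k(phi(t)) = t0, for a linear gradient
    phi(t) = phi_init + slope * (t - t_dwell), with the LSS model
    k(phi) = exp(ln_k0 - S * phi). Times in minutes."""
    t, integral = 0.0, 0.0
    while integral < 1.0:  # the analyte elutes when the scaled integral reaches 1
        phi = phi_init + slope * max(t - t_dwell, 0.0)
        k = math.exp(ln_k0 - S * phi)
        integral += dt / (t0 * k)
        t += dt
    return t + t0  # retention time = migration time plus column dead time

# Hypothetical analyte (ln k0 = 6.64, S = 10.4) in a 5-95% gradient over
# 10 min with t0 = 1 min, so slope = 0.90/10 per minute.
print(gradient_tR(ln_k0=6.64, S=10.4, t0=1.0, phi_init=0.05, slope=0.09))  # ~7.5 min
```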
Using the retention parameters of all analytes, a computer algorithm can evaluate a vast number of possible LC×LC methods. The resulting separations can be assessed by one or more suitable quality descriptors, such as orthogonality, resolution, or analysis time. This is illustrated in the Pareto plot shown in Figure 3, which was generated using the PIOTR program (14). Such a plot depicts the values of two quality descriptors, for example, resolution score, orthogonality, or analysis time, against each other for all evaluated separations. The optimal methods are positioned on the so-called Pareto-optimal front. This rapid and efficient process means the analyst no longer needs to waste valuable time on “trial-and-error” experiments. Potentially, the method development times for LC×LC methods are reduced from several months to several days (12).
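The Pareto logic itself is simple: a candidate method is kept only if no other candidate is at least as good on every descriptor and better on one. The sketch below illustrates the generic concept with two descriptors and invented candidate methods; it is not the PIOTR implementation.

```python
def pareto_front(methods):
    """Return the Pareto-optimal methods, where each method is a tuple
    (analysis_time, resolution_score); analysis time is minimized and
    resolution score is maximized. A method is kept if no other candidate
    is both at least as fast and at least as well resolved."""
    front = []
    for t, r in methods:
        dominated = any(t2 <= t and r2 >= r and (t2, r2) != (t, r)
                        for t2, r2 in methods)
        if not dominated:
            front.append((t, r))
    return sorted(front)

# Invented candidate LCxLC methods evaluated in silico:
candidates = [(20, 1.2), (25, 1.8), (30, 1.7), (40, 2.1), (60, 2.2)]
print(pareto_front(candidates))  # (30, 1.7) is dominated by (25, 1.8)
```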
Analysts using LC×LC will still encounter common problems such as injection and transfer effects, but these are no more difficult than those commonly faced by liquid chromatographers.
Conclusions
Successful implementation of comprehensive two-dimensional liquid chromatography is ostensibly thwarted by three main challenges, and we have presented strategies to overcome these obstacles. First of all, stationary-phase-assisted modulation techniques can alleviate detector-sensitivity problems arising from the additional dilution factor. These active-modulation techniques can also be adapted to minimize effects from our second challenge, the compatibility between two dimensions with very different mobile phase systems. In the case of normal-phase × reversed-phase combinations, special modulators have been developed. Finally, the potentially cumbersome method development is arguably the most daunting of these challenges, because it entails additional time and, consequently, costs. Software tools that use algorithms based on retention theory allow this final challenge to be overcome. With a few guidelines and helpful tools, all good chromatographers can productively use LC×LC in practice.
References
Bob Pirok worked at Shell after obtaining his M.Sc. degree in 2014. He is now a PhD student in the MANIAC (Making ANalytically Incompatible Approaches Compatible) project at the University of Amsterdam. He has published on fast size-exclusion chromatography using core–shell particles, comprehensive integrated analysis of nanoparticles and the constituting macromolecules by HDC×SEC, and the application and optimization of LC×LC separations. Bob received a Shimadzu Young Scientist Award at HPLC2015 Beijing, delivered the Young Scientist Award Lecture at the SCM-8 meeting in Amsterdam in 2017, and received the Csaba Horváth Young Scientist Award at HPLC2017 Prague.
Peter Schoenmakers is professor of analytical chemistry at the University of Amsterdam. He investigates analytical separations, focusing on multidimensional liquid chromatography. He studied in Delft, The Netherlands (with Professor Leo de Galan) and in Boston, Massachusetts, USA (with Professor Barry Karger). He worked for Philips in Eindhoven (The Netherlands) and for Shell in Amsterdam and in Houston, Texas, USA. He was awarded the AJP Martin Medal in 2011, the John Knox medal in 2014, and the Csaba Horváth Award and the CASSS Award in 2015. In 2016 he obtained an Advanced Grant for Excellent Research from the European Research Council.