
What Matters Most for LC×LC Orthogonality? A Systematic Look at pH, Modes, and Modifiers
Key Takeaways
- Chromatographic mode and pH shifts significantly impact orthogonality, with RPLC×RPLC and HILIC/RPLC pairings showing high scores.
- Managing large datasets was challenging, requiring MS detection and manual peak tracking, but scripts helped minimize errors.
Discover how to optimize LC×LC orthogonality for organic micropollutant analysis, enhancing method development with a new open-source Python tool.
Orthogonality is often described as the cornerstone of effective two-dimensional liquid chromatography (LC×LC), but quantifying and optimizing it has remained a challenge. In a recent study, Soraya Chapel and her collaborators systematically dissected how chromatographic mode, stationary phase, mobile phase composition, and pH shape orthogonality outcomes when profiling diverse organic micropollutants. Their findings not only highlight the outsized influence of pH shifts and chromatographic pairing but also point to new strategies for making LC×LC method development more systematic, reproducible, and accessible through an open-source Python tool.
Which aspects of the LC×LC workflow—stationary phase selection, mobile phase composition, modulation strategy—had the greatest impact on orthogonality score improvement?
To influence the final orthogonality score, the parameters that matter most are those that directly affect separation selectivity, namely chromatographic mode, stationary phase chemistry, mobile phase composition, and pH. Although the modulation strategy has a huge impact on the overall quality of the 2D separation (for example, peak shape or transfer efficiency), it does not play a major role in orthogonality improvement and was therefore not considered in this part of the workflow.
In our study, the organic micropollutant (OMP) mixture we investigated was chemically very diverse and covered a wide polarity range. We tested the most widely used chromatographic modes for such analytes, hydrophilic interaction liquid chromatography (HILIC) and reversed-phase liquid chromatography (RPLC), spanning a variety of stationary phases, two organic modifiers, and three pH levels. Looking at orthogonality alone, separate from overall performance, the data showed that the highest scores were obtained either for RPLC×RPLC with the largest pH switch (pH 3 vs. 8) or for HILIC/RPLC pairings. Within RPLC×RPLC, pH had by far the strongest influence, with larger differences in pH between dimensions consistently yielding higher orthogonality scores. This is because pH shifts alter analyte ionization states, which in turn significantly affect retention behavior, often moving compounds into or out of overlapping regions and thereby changing how evenly they spread across the separation space.
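The ionization effect behind this pH dependence can be made concrete with the Henderson–Hasselbalch relationship. The sketch below is for illustration only (the pKa value and compound are hypothetical, not taken from the study): an acidic analyte goes from mostly neutral at pH 3 to almost fully ionized at pH 8, which drastically weakens its reversed-phase retention.

```python
def fraction_ionized_acid(pKa, pH):
    """Henderson-Hasselbalch: fraction of a monoprotic acid present in
    its deprotonated (ionized) form at a given mobile-phase pH."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# Hypothetical acidic micropollutant with pKa 4.5 (illustrative value)
at_pH3 = fraction_ionized_acid(4.5, 3.0)  # ~3% ionized: mostly neutral, well retained in RPLC
at_pH8 = fraction_ionized_acid(4.5, 8.0)  # >99.9% ionized: much weaker RPLC retention
```

A basic analyte shows the mirror-image behavior, so a pH 3 vs. pH 8 pair shifts acids and bases in opposite directions across the two dimensions.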
In short, chromatographic mode and pH had the strongest effect on orthogonality, followed by organic modifier, and finally, the stationary phase chemistry. Of course, these results cannot be generalized to all cases, since orthogonality is inherently sample dependent. For example, in the analysis of neutral compounds, pH switching would have no effect, while mixed-mode pairings and stationary phase chemistry might dominate instead. This is precisely why systematic studies like ours are necessary, to identify which parameters matter most for a given analyte set.
What challenges were encountered in collecting and processing retention time data for 176 compounds across 38 conditions, and how were these addressed?
Managing such a large dataset was one of the most demanding aspects of the study. We initially had to track 303 analytes across 38 different LC conditions, each with varying mobile phase compositions, stationary phases, and pH values. Several key pieces of information (retention times and peak widths) had to be extracted from the resulting chromatograms, which created multiple challenges.
First, mass spectrometry (MS) detection was indispensable. Tracking hundreds of compounds simultaneously would have been practically impossible, or at least tremendously cumbersome, using UV detection alone. Even with automated MS post-processing to generate extracted-ion chromatograms (EICs), the main challenge was manually confirming the retention times of all target compounds. This quickly became the most time-consuming part of the work: following 303 analytes across 38 conditions with replicates meant dealing with thousands of chromatograms. There was, unfortunately, no shortcut other than patience. We chose to investigate such a large dataset to provide a study as comprehensive as possible, but for future users, I would recommend limiting the tested conditions to those most likely to be informative, based on prior knowledge of the sample. Of course, the more conditions tested, the more insight one gains, but time and resources must be balanced. For example, starting with 50 compounds, 5 columns, 2 mobile phases, and 3 pH levels could already provide meaningful insights, though the choice ultimately depends on the application.
Another difficulty was that not all analytes were well retained or detected by MS under every condition, which sometimes resulted in missing values. This reduced our dataset from 303 to 176 compounds, as we decided to focus on compounds consistently detected across all conditions. With so many chromatograms to evaluate, the risk of human error in peak assignment or data handling was also significant, so patience and repeated checks were necessary. To mitigate this, we relied on duplicate analyses for retention time consistency and developed scripts to flag and automatically handle missing values, organize results across all 38 conditions, and reduce manual intervention. These scripts helped minimize error and bias while making the analysis more scalable. They are not yet part of the publicly available version of the tool, but will be included in future updates.
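Although the authors' scripts are not yet public, the missing-value flagging step they describe can be sketched with pandas. Everything below (table layout, condition names, compound names) is hypothetical, assuming a wide-format retention-time table with NaN marking a missed detection:

```python
import numpy as np
import pandas as pd

# Hypothetical wide-format table: one row per compound, one column per
# LC condition; NaN marks a compound not detected under that condition.
rt = pd.DataFrame(
    {
        "C18_pH3_MeOH": [1.2, 3.4, 5.6, np.nan],
        "C18_pH8_MeOH": [1.5, 3.1, np.nan, 2.2],
        "HILIC_pH3_ACN": [4.0, 2.8, 1.1, 3.3],
    },
    index=["atrazine", "caffeine", "diuron", "ibuprofen"],
)

# Flag compounds with at least one missing value across conditions...
incomplete = rt.index[rt.isna().any(axis=1)]

# ...and keep only those detected everywhere, mirroring the study's choice
# to restrict the dataset to consistently detected analytes.
complete = rt.dropna()
```

Scaling this to 303 compounds across 38 conditions is just a larger table; the flagging logic is unchanged.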
Although this study required a great deal of manual peak tracking, it is worth noting that this is really the only labor-intensive step of the process: once peak information is collected, the tool handles the rest in an automated and reproducible way. Reducing this burden further would undoubtedly benefit future users, but at present, there is no real shortcut, as careful peak assignment remains essential to generate reliable input for the tool. In future developments, automated peak tracking could help speed up this step and simplify data processing, although its feasibility remains to be explored.
How adaptable is the orthogonality scoring tool to other analyte classes and complex matrices outside of wastewater OMP profiling?
The tool was designed to be broadly applicable and is not tied to a specific analyte class. It only requires retention time (and optionally peak width) data, which makes it suitable for virtually any type of sample that can be analyzed by LC×LC. This means that our approach could be applied to areas as diverse as pharmaceutical screening, food analysis, natural product profiling, petrochemicals, or metabolomics.
A key strength is that the tool does not impose a fixed definition of orthogonality but instead integrates multiple descriptors that capture different aspects of separation complementarity. This makes it inherently flexible: depending on the sample type and research question, users can choose to emphasize certain metrics or rely on the consensus score as a default. For example, in metabolomics, where dense clustering is common, one might pay closer attention to distribution-based metrics, while in petrochemical analysis, coverage across the separation plane might be more relevant.
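To make "coverage across the separation plane" concrete, one widely used descriptor (a bin-counting approach in the spirit of Gilar-style orthogonality metrics; not necessarily part of the tool's exact metric set) divides the normalized retention plane into a grid and reports the fraction of occupied cells:

```python
import numpy as np

def bin_coverage(rt1, rt2, n_bins=5):
    """Fraction of occupied cells when the min-max-normalized 2D
    retention plane is divided into an n_bins x n_bins grid."""
    x = np.asarray(rt1, dtype=float)
    y = np.asarray(rt2, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    # Assign each compound to a grid cell (clip so rt = max lands in the last bin)
    ix = np.clip((x * n_bins).astype(int), 0, n_bins - 1)
    iy = np.clip((y * n_bins).astype(int), 0, n_bins - 1)
    return len(set(zip(ix, iy))) / n_bins**2

# Five well-spread compounds occupy 5 of 25 cells (0.2, the maximum
# possible for 5 compounds); tightly clustered compounds occupy fewer
# cells and score lower.
spread = bin_coverage([1, 2, 3, 4, 5], [1, 3, 5, 2, 4])
clustered = bin_coverage([1.0, 1.1, 1.2, 5.0, 5.1], [1.0, 1.1, 1.2, 5.0, 5.1])
```

Descriptors like this complement correlation-based metrics: coverage reacts to clustering and empty regions, while correlation reacts to the overall retention trend.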
In terms of computational design, how is the orthogonality score calculation in the Python tool structured to handle data heterogeneity and missing retention values?
The tool uses a pipeline approach. It begins with data import and preprocessing, where retention times are normalized to account for variations in scale between dimensions. Each orthogonality metric is then calculated independently using the available data. Missing values are flagged early in the process and can be addressed in several ways depending on user preference: excluded completely from the study, as we did in our work, where only compounds consistently detected across all separation conditions were included; retained but treated as blank entries, so that the absence of a single compound in a given pairing does not eliminate it from the dataset, which is particularly useful when only a few values are missing; or corrected by imputation and reimport if the missing entry is due to an error in the original data file. This ensures that the absence of a single compound in one condition does not disrupt the calculation of the overall score. Once all metrics are calculated, they are grouped based on statistical similarity, which prevents redundant descriptors from disproportionately influencing the result. Finally, the metrics are averaged across groups to yield a consensus orthogonality score. This design makes the tool resilient to heterogeneous input and ensures that the result reflects independent and complementary aspects of orthogonality rather than being dominated by any single metric or by missing values.
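The pipeline described above can be sketched in a few lines, under stated assumptions: three illustrative descriptors (two correlation-based, one coverage-based) and a simple greedy grouping of metrics whose values correlate strongly across candidate pairings. The actual tool's metric set and grouping statistics may differ.

```python
import numpy as np

def normalize(rt):
    """Min-max normalize retention times to [0, 1]."""
    rt = np.asarray(rt, dtype=float)
    return (rt - rt.min()) / (rt.max() - rt.min())

def decorrelation(x, y):
    """1 - |Pearson r|: weaker inter-dimension correlation = more orthogonal."""
    return 1.0 - abs(np.corrcoef(x, y)[0, 1])

def rank_decorrelation(x, y):
    """1 - |rank correlation| (Pearson r computed on ranks, Spearman-style)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return 1.0 - abs(np.corrcoef(rx, ry)[0, 1])

def bin_coverage(x, y, n_bins=4):
    """Fraction of occupied cells in an n_bins x n_bins retention grid."""
    ix = np.clip((x * n_bins).astype(int), 0, n_bins - 1)
    iy = np.clip((y * n_bins).astype(int), 0, n_bins - 1)
    return len(set(zip(ix, iy))) / n_bins**2

METRICS = [decorrelation, rank_decorrelation, bin_coverage]

def consensus_scores(pairings, corr_threshold=0.9):
    """One consensus orthogonality score per candidate 2D pairing.

    pairings: list of (rt1, rt2) retention-time arrays. Metrics whose
    values correlate strongly across the pairings (|r| > corr_threshold)
    are pooled into one group, so redundant descriptors are not
    double-counted; group means are then averaged into the final score.
    """
    # Metric-value matrix: rows = metrics, columns = pairings
    values = np.array(
        [[m(normalize(a), normalize(b)) for a, b in pairings] for m in METRICS]
    )
    # Greedy grouping of statistically similar metrics
    groups, assigned = [], set()
    for i in range(len(METRICS)):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, len(METRICS)):
            if j in assigned:
                continue
            if abs(np.corrcoef(values[i], values[j])[0, 1]) > corr_threshold:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    # Average within each group, then across groups
    group_means = np.array([values[g].mean(axis=0) for g in groups])
    return group_means.mean(axis=0)
```

Applied to, say, a strongly correlated pairing, a well-scattered one, and an anti-correlated one, the scattered pairing receives the highest consensus score. Missing-value handling would sit upstream, filtering or imputing the retention-time table before it reaches this pipeline.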
Is there anything else you’d like our readers to know about your research?
One of our main goals was to make the selection of suitable pairings during LC×LC method development more systematic, consistent, and less time-consuming. Orthogonality is a central concept in two-dimensional separations, but it has long been difficult to quantify in a way that is both comprehensive and easy to apply. By providing a practical, tool-based approach that rapidly identifies promising 2D combinations, implemented in a freely available Python-based tool, we aimed to lower the barrier to adopting LC×LC and to make condition screening more efficient and transparent.
Beyond wastewater analysis, we hope this approach will encourage researchers in many fields to explore 2D separations with greater confidence, knowing that orthogonality can now be evaluated more objectively and reproducibly. We also see this tool as a foundation that can evolve further, for example, by integrating additional metrics, automated peak tracking, and expanding options for real LC×LC data visualization. Feedback from users will be very valuable in shaping these future developments.
About the Interviewee
Soraya Chapel is a Marie Skłodowska-Curie postdoctoral fellow at the University of Orléans and is currently based at Kyushu University in Fukuoka, Japan, where she works on a collaborative project focused on the development of greener multidimensional chromatographic systems using supercritical CO₂ for the study of bioactive compounds in natural products. She previously held an ATER (temporary assistant professor) position at the University of Rouen, working on PFAS analysis in post-fire samples, and completed a postdoctoral fellowship at KU Leuven, focusing on innovative multidimensional liquid chromatography approaches for profiling organic micropollutants in environmental water samples. She earned her Ph.D. in analytical chemistry from Université Claude Bernard Lyon 1 (France) in 2021, where her thesis focused on developing sub-hour online comprehensive 2D-LC methods for the separation of complex peptide and protein samples.