News|Videos|September 10, 2025

Human Expertise and FAIR Data Standards in AI-Driven Chromatography Workflows

Fact checked by: John Chasse

High-quality, well-labeled data is essential. What steps are being taken—or should be taken—to create standardized, shared chromatographic datasets for AI training? Dave Abramowitz of Thermo Fisher Scientific shares his insights.

Dave Abramowitz, product lead for all mass spectrometry (MS) products at Thermo Fisher Scientific, discusses the evolving role of human expertise in chromatography workflows as artificial intelligence (AI) becomes increasingly integrated. He explores how experts remain critical for reasoning, data labeling, and ensuring the quality and traceability of the data that powers AI-driven analysis.

Abramowitz: Many labs today are building data interoperability and other FAIR data standards into their RFIs, RFPs, and RFQs for instrumentation and software. When we talk about FAIR, we mean Findable, Accessible, Interoperable, and Reusable; these principles are being propagated and communicated across the entire industry. There are efforts to standardize as much as possible, such as Allotrope, Pistoia, and AnIML, but many standards work well only in certain modalities, functionalities, and workflows, and perform poorly in others. Some vendors are instead building data catalogs on generalized ontologies, which preserves consistent labeling and the ability to build bridges to other ontologies and data standards. So we're not necessarily looking for the standards themselves; we're looking to build something that will actually link to the standards.
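The bridging approach Abramowitz describes can be sketched in a few lines: a catalog keyed on generic in-house terms, with a translation table out to external vocabularies. This is an illustrative sketch only; the per-standard term names below (and the `translate` helper) are hypothetical stand-ins, not actual Allotrope or AnIML identifiers.

```python
# Hypothetical bridge from a generic in-house ontology to external standards.
# All term names on the right-hand side are illustrative, not real identifiers.
GENERIC_TO_STANDARD = {
    "sample_id":        {"allotrope": "sample identifier", "animl": "SampleID"},
    "column_temp_c":    {"allotrope": "column temperature", "animl": "ColumnTemperature"},
    "retention_time_s": {"allotrope": "retention time",     "animl": "RetentionTime"},
}

def translate(record: dict, target: str) -> dict:
    """Relabel a generically keyed record using the target standard's terms.

    Keys with no known mapping are kept unchanged, so no metadata is
    silently dropped when a standard lacks an equivalent term.
    """
    out = {}
    for key, value in record.items():
        mapping = GENERIC_TO_STANDARD.get(key)
        out[mapping[target] if mapping and target in mapping else key] = value
    return out

run = {"sample_id": "S-042", "column_temp_c": 40.0, "operator": "jdoe"}
print(translate(run, "animl"))
# {'SampleID': 'S-042', 'ColumnTemperature': 40.0, 'operator': 'jdoe'}
```

The design point is the one in the quote: the lab's catalog is not committed to any single standard, and adding support for a new vocabulary means adding one column to the mapping table rather than relabeling the data.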
