To Validate or Not to Validate

Article

The Column

02-20-2014
Volume 10
Issue 3

Incognito investigates if changing the carrier gas really requires a full method revalidation.


I just googled “chromatography method revalidation” and, of the first 20 or so legitimate hits, only four specifically mentioned revalidation of gas chromatography (GC) methods. Maybe this is to be expected, as the top hits mostly relate to validation in a pharmaceutical context, where GC is typically used much less than high performance liquid chromatography (HPLC).

It’s not just the pharmaceutical industry that validates methods, and HPLC methods are not the only ones that need to be changed or updated occasionally. The helium crisis has abated somewhat, but helium still isn’t a long-term proposition for chromatographers. I would go as far as to say that it’s probably not even a medium-term option from about 2016 onwards, considering its price and availability in North America and Continental Europe. We are going to have to make the move to a different carrier gas, so ask yourself the question I have asked (and have been asked) numerous times over the past year: “Does changing carrier gas require a full method revalidation?” My automatic response is yes, it has to, doesn’t it? Am I the only one who sees train headlamps rapidly approaching down the track? Am I the only one preparing for the inevitable by scheduling many hours of revalidation work into the planner?

Here lies a problem that applies across the whole of the regulated chromatography industry. Ask the question “Does this change require a method revalidation?” and the stock response will be “That is a question to be settled between your QA/regulatory department and your regulator”. This nicely avoids anything remotely controversial or opinionated!

There are over 200 United States Pharmacopeia (USP) GC methods, and the allowed changes to GC methods, as defined by the USP [1] and the European Pharmacopeia (EP) [2], are summarized in Table 1. Can anyone spot any reference to changing the carrier gas? Is that a no? Good, we are all paying attention, let’s continue.

There is one additional parameter for which “allowable changes” are not defined by the EP but are defined as follows by the USP: “Oven Temperature Programme (GC): Adjustment of temperatures is permitted as stated above. When the specified temperature must be maintained or when the temperature must be changed from one value to another, an adjustment of up to ±20% is permitted.”

Do you know what this means? Has any reader ever asked the USP what this means? We presume that it means ±20% of the temperature programme rate value, for example, 20 °C/min has an allowable range of 16–24 °C/min. But who knows! I may just have opened a very large can of worms for myself!
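For anyone who wants to see the arithmetic, here is a minimal sketch of that interpretation (and I stress that it is only our interpretation): ±20% applied to each programmed ramp rate.

```python
# A minimal sketch, assuming the +/-20% applies to the programmed ramp rate itself
# (our reading of the USP wording, not an official interpretation).

def allowable_range(rate_c_per_min: float, tolerance: float = 0.20) -> tuple[float, float]:
    """Return the (low, high) ramp rates permitted by a +/-20% adjustment."""
    return rate_c_per_min * (1 - tolerance), rate_c_per_min * (1 + tolerance)

if __name__ == "__main__":
    for rate in (5.0, 10.0, 20.0):  # example ramp rates in degrees C/min
        low, high = allowable_range(rate)
        print(f"{rate:5.1f} C/min -> allowed {low:.1f} to {high:.1f} C/min")
```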

 

Time for some chromatography. It’s my personal empirical experience that changing carrier gases can lead to changes in selectivity, which is what I really worry about when optimizing chromatographic methods. I know that the caveat to the “allowable” changes in the Pharmacopeial methods is that the system suitability tests defined in the method monographs must be passed. Fine. But what happens if peaks invert and the resolution of the peak pair actually increases, yet you end up measuring the wrong thing? GC analytes with widely differing polarities in a temperature-programmed separation can do some very strange things when switching from helium to hydrogen carrier, which is why I get a little nervous about relying on system suitability test criteria when changing parameters that can affect chromatographic selectivity.
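Here is the kind of quick, informal check I would want to run alongside the system suitability test when comparing the old and new chromatograms: does any identified component change its elution position? The component names and retention times below are purely illustrative.

```python
# A minimal sketch: flag components whose elution order changes between the
# helium method and a candidate hydrogen (or nitrogen) method.
# All names and retention times are placeholders, not real data.

def elution_order(retention_times_min: dict[str, float]) -> list[str]:
    """Return component names sorted by retention time."""
    return sorted(retention_times_min, key=retention_times_min.get)

def flag_order_changes(old_rt: dict[str, float], new_rt: dict[str, float]) -> list[str]:
    """List components whose elution position differs between the two methods."""
    old_order, new_order = elution_order(old_rt), elution_order(new_rt)
    return [name for name in old_rt
            if old_order.index(name) != new_order.index(name)]

if __name__ == "__main__":
    helium_rt = {"impurity A": 4.2, "impurity B": 4.6, "API": 7.9}    # illustrative
    hydrogen_rt = {"impurity A": 3.1, "impurity B": 2.9, "API": 5.6}  # illustrative
    print("Elution order changed for:", flag_order_changes(helium_rt, hydrogen_rt))
```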

When changing from helium to hydrogen, one typically enters one of two scenarios: a) switch the gas, make whatever other adjustments are needed, and carry on working with a chromatogram that is identical to the one before; or b) take the opportunity to optimize the method and see how throughput might be improved by using reduced column dimensions or an increased carrier gas linear velocity.
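For scenario a), the usual trick is method translation: keep each temperature programme segment the same number of column hold-up times long, which means scaling the ramp rates by the ratio of the old to the new hold-up time. A minimal sketch follows, with illustrative column dimensions and linear velocities (dedicated method translation software does this more rigorously, and also rescales isothermal hold times and flow settings).

```python
# A minimal sketch of the hold-up-time scaling idea behind scenario a).
# Column length and linear velocities below are illustrative assumptions only.

def hold_up_time_min(length_m: float, linear_velocity_cm_s: float) -> float:
    """Column hold-up time t_M in minutes (t_M = L / u-bar)."""
    return (length_m * 100.0) / linear_velocity_cm_s / 60.0

def translated_ramp(rate_c_per_min: float, t_m_old: float, t_m_new: float) -> float:
    """Scale a ramp rate so each segment spans the same number of hold-up times."""
    return rate_c_per_min * (t_m_old / t_m_new)

if __name__ == "__main__":
    t_m_he = hold_up_time_min(30.0, 35.0)  # 30 m column, helium at 35 cm/s (assumed)
    t_m_h2 = hold_up_time_min(30.0, 50.0)  # same column, hydrogen at 50 cm/s (assumed)
    for rate in (5.0, 10.0, 20.0):
        new_rate = translated_ramp(rate, t_m_he, t_m_h2)
        print(f"{rate:5.1f} C/min with He -> {new_rate:.1f} C/min with H2")
```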

Either way, substantial changes have to be made to the method: to the temperature programme and the carrier gas flow rate in both cases, and to the column dimensions in the latter case. My concern is that these changes may be assumed to fall within the so-called “allowable limits”, with the guidelines misinterpreted and the changed method taken to be compliant. Or, even worse, the validation status of the method when switching gases may not even be considered. My personal recommendation is that you revalidate your method when switching carrier gas. This is where we get into the full versus partial revalidation argument, and although I’m tempted to refer you to your QA or regulatory department, I won’t. Revalidate the method in full. Seems quite clear-cut, right? Wrong.

Let’s take a look at some of the wording from USP method 467 for Residual Solvents, Procedures A, B, and C (widely used in some of the most heavily regulated industries): “The carrier gas is nitrogen or helium with a linear velocity of about 35 cm per second, and a split ratio of 1:5.” [3]

Wow, so we can use either carrier gas as long as the resolution between the acetonitrile and methylene chloride peaks is greater than one, and the signal-to-noise ratio for the 1,1,1-trichloroethane peak exceeds a certain value? Surely not?
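For reference, here is a minimal sketch of those two checks, using the pharmacopeial definitions as I understand them: Rs = 2(t2 - t1)/(W1 + W2) with baseline peak widths, and S/N = 2H/h. It is not the official data-system algorithm, and the retention times, widths, and noise values are placeholders; the monograph’s own acceptance limits apply.

```python
# A minimal sketch of the two system suitability checks quoted above.
# Retention times, peak widths, height, and noise are illustrative placeholders.

def usp_resolution(t1_min: float, t2_min: float, w1_min: float, w2_min: float) -> float:
    """Resolution, Rs = 2(t2 - t1) / (W1 + W2), using baseline peak widths."""
    return 2.0 * (t2_min - t1_min) / (w1_min + w2_min)

def usp_signal_to_noise(peak_height: float, peak_to_peak_noise: float) -> float:
    """Pharmacopeial signal-to-noise, S/N = 2H / h."""
    return 2.0 * peak_height / peak_to_peak_noise

if __name__ == "__main__":
    # Acetonitrile / methylene chloride pair (illustrative values)
    rs = usp_resolution(t1_min=6.2, t2_min=6.9, w1_min=0.30, w2_min=0.35)
    # 1,1,1-trichloroethane peak (illustrative values)
    sn = usp_signal_to_noise(peak_height=120.0, peak_to_peak_noise=8.0)
    print(f"Rs = {rs:.2f}  (criterion: greater than 1.0)")
    print(f"S/N = {sn:.1f}  (compare with the monograph limit)")
```

Of course, passing these two numbers is exactly what makes me nervous: they tell you nothing about whether the peaks you are resolving are still the ones you think they are.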

Does this indicate that changing carrier gas does not always require a method revalidation? Will nitrogen really have the efficiency at 35 cm/s over the range 40 °C to 240 °C to give a suitable separation?

Well, the answer is that I don’t know, as I haven’t tried it, but it does raise a very good point. I know that many of the methods in my own laboratory are “over-resolved”, meaning that I could easily lose some resolution before the quality of the analysis is compromised. Therefore, instead of switching to hydrogen, would it be possible to move to nitrogen as a carrier? Many of us may already have a nitrogen supply of the required purity (and if we don’t, we can fit gas scrubbers). The method changes required are much less radical than when switching to hydrogen, as the viscosities of nitrogen and helium, and hence the head pressures required, are much more comparable.

OK, so the flow rates that we operate at with helium may mean working well above the Van Deemter optimum velocity for nitrogen carrier, but will this really compromise our analyses? Just because either carrier is permitted does not mean that the choice can, or should, be made on ease of use alone. Has anyone validated that the results are fit for purpose using either carrier gas at 35 cm/s, or confirmed that there are no appreciable selectivity changes? I’d be very interested to hear if that was you, or if you are successfully using nitrogen as the carrier for your USP 467 analysis.
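In the meantime, to put some very rough numbers on the efficiency question, here is a sketch using the thin-film Golay equation, H(u) = 2Dm/u + f(k)·r²·u/Dm with f(k) = (1 + 6k + 11k²)/(24(1 + k)²). The diffusion coefficients, retention factor, and column radius are order-of-magnitude assumptions, not measured values, so treat the output as an illustration only.

```python
# A rough sketch, not a validated calculation: thin-film Golay equation used to
# compare helium and nitrogen at the monograph's 35 cm/s.
# Dm values, k, and column radius are illustrative assumptions (0.32 mm i.d. column).

import math

def f_k(k: float) -> float:
    """Mobile-phase mass transfer factor, f(k) = (1 + 6k + 11k^2) / (24(1 + k)^2)."""
    return (1 + 6 * k + 11 * k ** 2) / (24 * (1 + k) ** 2)

def golay_h(u_cm_s: float, dm_cm2_s: float, k: float, r_cm: float) -> float:
    """Plate height (cm), stationary-phase term neglected (thin film)."""
    return 2 * dm_cm2_s / u_cm_s + f_k(k) * r_cm ** 2 * u_cm_s / dm_cm2_s

def optimum_u(dm_cm2_s: float, k: float, r_cm: float) -> float:
    """Linear velocity (cm/s) at the plate-height minimum, u_opt = (Dm/r)*sqrt(2/f(k))."""
    return (dm_cm2_s / r_cm) * math.sqrt(2 / f_k(k))

if __name__ == "__main__":
    r, k, u = 0.016, 3.0, 35.0  # 0.32 mm i.d. column, k = 3, 35 cm/s (assumed)
    for gas, dm in (("helium", 0.3), ("nitrogen", 0.08)):  # rough Dm values, cm^2/s
        print(f"{gas:8s}: u_opt ~ {optimum_u(dm, k, r):4.1f} cm/s, "
              f"H at {u:.0f} cm/s ~ {golay_h(u, dm, k, r) * 1e4:.0f} um")
```

With these particular numbers, nitrogen’s optimum sits well below 35 cm/s and the plate-height penalty at 35 cm/s is noticeable, but real analytes, temperatures, and film thicknesses will move both curves, which is precisely why I would want to see the separation demonstrated rather than assumed.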

I thought I had reached a steadfast position regarding the transition of methods in our laboratory. Starting with the most used, gradually switch methods over time to hydrogen carrier gas. Avoid switching methods that use mass spectrometric detection for now - this will be the subject of an upcoming Incognito column. Take the opportunity to optimize the method and improve throughput. And then revalidate.

I bet I’m not alone in this general strategy, but now that I have had time to reflect, perhaps this is too much? Perhaps I should switch to nitrogen for some methods - and for my USP 467 residual solvents methods, I may not need to revalidate at all... right?

References

  1. General Chapter <621> “Chromatography,” in United States Pharmacopeia 36–National Formulary 31 (United States Pharmacopeial Convention, Rockville, MD, USA, 2012).
  2. European Pharmacopeia 8, Section 2.2.46 (2014).
  3. General Chapter <467> “Residual Solvents,” in United States Pharmacopeia 36–National Formulary 31 (United States Pharmacopeial Convention, Rockville, MD, USA, 2012).  