The Importance of Specifications for Method Transfer

Apr 27, 2016
Volume 12, Issue 9


Incognito shares his thoughts on the importance of specifications for method transfer in gas chromatography (GC).


I’m often unpleasantly surprised at the lack of essential detail, or just plain inaccuracy, in the way chromatographic methods are specified — often rendering a validation exercise pointless because of the large amount of subjectivity possible when implementing the method. My fear is that these “missing” instrument or method parameters may be simply invented or system defaults accepted without due consideration of the impact on the data produced — possibly resulting in unnecessary repeat analyses or, in the worst cases, results that are not fit for purpose. Certainly these “missing” values are often the cause of problems when transferring chromatographic methods between laboratories.

As we move towards the situation where there is less and less need to “get under the hood” of our instruments, it’s vital that through the method itself, we properly specify the instrument acquisition and data collection parameters.

I’ve recently been transferring several gas chromatography (GC) methods and thought it might be useful to show how we might properly specify a GC method, both as a template for the level of detail required in method specifications and to give readers an insight into all of the parameters required for GC method development and validation.

Our “ideal” GC method is shown below with my commentary on each section following:

Carrier Gas 

Carrier gas: Helium (99.999% purity or greater) 

Flow-rate: 1.00 mL/min (35 cm/s) 

Mode: Constant flow

Sample Introduction 

Injection volume: 1 µL 

Injection solvent: Dichloromethane (cite also grade and purity) 

Syringe size: 10 µL, cone tipped 

Autosampler routine: 

Wash needle: Solvent A (3 x 10 µL) pre-injection 

Sample priming: Aspirate sample (3 x 10 µL) and dispense to waste

Sample pumps: (3 x 10 µL) with 5 s viscosity delay 

Wash needle: Solvent B (3 x 10 µL) post-injection

Sample Inlet 

Mode: Splitless 

Temperature: 250 °C (no oven tracking) 

Pressure: Track column pressure (no pressure pulse) 

Splitless time: 35 s 

Split flow: 100 mL/min 

Split ratio: 101:1 

Split (gas) saver: 15 mL/min after 2 min

Liner: Single lower gooseneck 4 mm i.d. splitless liner, deactivated, containing 1 cm deep plug of deactivated quartz wool packing, positioned to wipe the needle tip

Column

Phase: 14% Cyanopropylphenyl methylpolysiloxane

Length: 30 m 

Internal diameter: 0.32 mm 

Film thickness: 0.25 µm 

Phase ratio: 320 

Upper temp. limits: 280 °C (isothermal)/300 °C (gradient)

Oven

Initial temp.: 40 °C 

Initial time: 1 min 

Ramp 1: 20 °C/min; temp. 1: 250 °C, hold 1: 5 min 

Ramp 2: 50 °C/min; temp. 2: 300 °C, hold 2: 2 min 

Equilibration time: 1 min

Detector

Type: Flame ionization detector 

Temperature: 300 °C 

Fuel gas: Hydrogen @ 30 mL/min 

Oxidizer gas: Air @ 400 mL/min 

Make-up gas: Nitrogen @ 35 mL/min 

Make-up mode: Constant flow 

Attenuation: Specify if required

Ideally the required carrier gas purity should be specified; I have also seen statements relating to the necessity to filter moisture, hydrocarbons, and oxygen from the carrier for optimum performance. The flow rate or linear velocity of the carrier should be specified rather than the column head pressure (or pressure drop across the column). The pressure drop across the column is a function of the carrier flow, oven temperature, and column dimensions, and any slight deviation in these parameters (a poorly calibrated oven thermocouple, a trimmed column) will require an alteration in the column pressure drop to achieve a given flow rate (or linear velocity). Further, the column dimensions and carrier gas type must be correctly entered into the instrument for the flow rate to be correctly calculated when using systems with computerized pneumatics — this is especially important after column maintenance where the column inlet has been trimmed to improve chromatographic performance. Specifying the mode of carrier operation is also important with modern computerized pneumatics. In this instance we have specified constant flow, which raises the carrier gas pressure as the column temperature increases to maintain a constant carrier gas flow rate into the detector. This mode has several advantages, such as increasing the signal-to-noise ratio of late-eluting peaks, decreasing overall analysis time, and producing flatter baseline profiles when using mass-flow-sensitive detectors.
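To make the pressure–flow relationship concrete, here is a rough sketch (my own, not part of the method) of the inlet pressure an instrument must set to deliver a target outlet flow, using the standard compressible laminar-flow relation for an open tubular column. The helium viscosity and ambient outlet pressure are assumed round numbers, which is exactly why temperature and column dimensions must be entered correctly:

```python
import math

# Compressible Hagen-Poiseuille relation for an open tubular column:
#   F_out = pi * d**4 * (p_in**2 - p_out**2) / (256 * eta * L * p_out)
# rearranged to give the absolute inlet pressure for a target outlet flow.

def required_inlet_pressure(flow_ml_min, length_m, id_mm, eta_pa_s,
                            p_out_pa=101_325.0):
    """Absolute inlet pressure (Pa) needed for a given outlet flow (mL/min)."""
    flow_m3_s = flow_ml_min * 1e-6 / 60.0   # mL/min -> m^3/s
    d = id_mm * 1e-3                        # mm -> m
    p_in_sq = (p_out_pa**2 +
               256.0 * eta_pa_s * length_m * p_out_pa * flow_m3_s /
               (math.pi * d**4))
    return math.sqrt(p_in_sq)

# Our 30 m x 0.32 mm column, helium at ~40 C (eta ~ 2.0e-5 Pa.s, assumed)
p_in = required_inlet_pressure(1.00, 30.0, 0.32, 2.0e-5)
head_kpa = (p_in - 101_325.0) / 1000.0      # gauge head pressure
print(f"head pressure ~ {head_kpa:.1f} kPa")
```

Trim a metre from the column or mis-enter the internal diameter and the pressure the instrument calculates — and hence the actual flow — shifts accordingly.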

It’s vital to specify both the sample volume and the solvent used. Apart from the obvious reasons for this, it’s important to be able to calculate the sample vapour volume produced within the liner to assess whether “back flash” — a liner overfill problem that can lead to carryover and insidious baseline artifacts — will occur. Many autosamplers use a fixed-volume syringe with a stepper motor to measure the sample volume (with a 10 µL syringe installed, one step will represent a 1 µL injection). It’s therefore important to specify the syringe size to avoid injecting the wrong volume. One step of the plunger drive motor will result in a 0.5 µL injection when a 5 µL syringe is installed, so check the installed syringe on the instrument as well! Specifying the cleaning regime for the needle is also useful to ensure minimal carryover — usually two separate solvent bottles are used, one pre-injection, the other post-injection. In addition, the requirements for priming the sample into the syringe and the plunger speed used to avoid cavitation with more viscous samples should be specified. These needle-washing parameters are very often omitted from methods — however, without them, the performance of the analysis may be irreproducible between instruments or operators.
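The back-flash check is a simple ideal-gas calculation. The sketch below is my own illustration, not part of the method: the solvent density, molar mass, inlet pressure, and liner length are assumed typical values:

```python
import math

R = 8.314  # J/(mol*K)

def vapour_volume_ul(inj_ul, density_g_ml, molar_mass_g_mol,
                     temp_c, press_pa):
    """Solvent vapour volume (uL) produced in the liner, via PV = nRT."""
    moles = inj_ul * 1e-3 * density_g_ml / molar_mass_g_mol  # uL -> g -> mol
    return moles * R * (temp_c + 273.15) / press_pa * 1e9    # m^3 -> uL

# 1 uL dichloromethane (rho ~ 1.33 g/mL, M = 84.93 g/mol) at a 250 C inlet
# and ~135 kPa absolute inlet pressure (assumed)
vap = vapour_volume_ul(1.0, 1.33, 84.93, 250.0, 135_000.0)

# Volume of a 4 mm i.d. liner, ~78.5 mm long (a common geometry, assumed)
liner_ul = math.pi * (4.0e-3 / 2) ** 2 * 78.5e-3 * 1e9

print(f"vapour ~ {vap:.0f} uL vs liner ~ {liner_ul:.0f} uL")
```

Under these assumptions a 1 µL injection expands to roughly half the liner volume and is safe; doubling the injection volume would approach overfill, and the back flash problem described above becomes likely.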

Sample inlet conditions are often incorrectly specified, which is problematic because they are arguably the most important in terms of analytical reproducibility and performance! The major variables are usually well specified, and the mode of injection, inlet temperature, and split ratio (for split injections) appear in most documents that I review. It’s important to remember that both the split flow and split ratio should be specified for clarity — remember that the split ratio is calculated as ((split flow + column flow)/column flow):1. However, I often see splitless injection conditions specified without a “splitless” time (also called “split on” time or “purge” time), which represents the time after injection at which the split valve is opened to allow lingering components to escape the inlet, leading to better solvent and early-eluting analyte peak shapes as well as reducing baseline rise during the analysis. It is vital that this parameter has been established for the method and is clearly specified. Further, it’s rare to see “gas saver” times and flows specified, but these are important because some instruments are capable of automatically reducing the split gas flow after the inlet has been flushed to save gas — why waste 100 mL of helium every minute for the length of your analysis when a much more moderate flow is possible? Further variables, such as inlet temperature (oven) tracking, pressure tracking, or the use of a pressure pulse when injecting larger sample volumes, may also be important and should be carefully specified. Remember that the inlet pressure is likely to increase during a programmed temperature analysis if “constant flow” mode is chosen. There are over 300 different liner designs currently available, and the liner volume, construction, deactivation, and packing can have a drastic effect on both the quantitative and qualitative nature of the resulting chromatogram. These parameters should be fully specified, and in some instances I have even seen manufacturer part numbers quoted to maintain absolute reproducibility of the method.
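The split-ratio arithmetic above can be written out as a one-line sketch, which also shows why the method’s 100 mL/min split flow and 1.00 mL/min column flow give the apparently odd 101:1 ratio:

```python
# Split ratio as defined in the text:
#   split ratio = (split flow + column flow) / column flow
def split_ratio(split_flow_ml_min, column_flow_ml_min):
    return (split_flow_ml_min + column_flow_ml_min) / column_flow_ml_min

# The method's values: 100 mL/min split flow, 1.00 mL/min column flow
print(f"{split_ratio(100.0, 1.00):.0f}:1")  # 101:1, matching the specification
```

Quoting both numbers, as the specification above does, removes any ambiguity about which convention the original developer used.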

The minimum requirement for specifying a GC column is the information required to purchase it. That is, the nature of the stationary phase, the column length, internal diameter, and film thickness. I have seen a multitude of methods in which the film thickness is omitted. It’s also good practice to specify the phase ratio in case the method needs to be adapted or changed, in which case the use of an equivalent column (say of lower internal diameter) can be easily specified by adjusting the film thickness to maintain the required resolution. Showing the correct upper temperature limits of the column is also useful for column conditioning purposes. Of course, when installing a column into a GC system, the exact column length is often unknown (even your new 30 m column is unlikely to be exactly 30 m long!), and in order for the instrument to correctly set a pressure to generate the required flow-rate, the column should be calibrated using the retention time of an unretained component. Most data systems will calculate the column length if the retention time of methane or air is known (take care that your column does not retain these species!).
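The phase-ratio bookkeeping is easy to sketch. The helper names below are my own, but the relation — β equals internal diameter over four times film thickness, in the same units — is the standard one, and it shows how to pick a film thickness for a narrower replacement column:

```python
# Phase ratio: beta = column i.d. (um) / (4 * film thickness (um))
def phase_ratio(id_um, film_um):
    return id_um / (4.0 * film_um)

def film_for_equivalent_beta(new_id_um, beta):
    """Film thickness (um) that keeps beta constant on a new i.d."""
    return new_id_um / (4.0 * beta)

beta = phase_ratio(320.0, 0.25)              # the method's column: beta = 320
new_film = film_for_equivalent_beta(250.0, beta)
print(f"beta = {beta:.0f}; a 0.25 mm column needs ~{new_film:.2f} um film")
```

In other words, moving this method to a 0.25 mm i.d. column while keeping β = 320 calls for a film of roughly 0.20 µm — close to a stock dimension, which is exactly why quoting β in the specification makes such substitutions straightforward.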

Oven temperature programmes are usually well specified, and the parameters are often shown in the form of a table for ease of understanding. One parameter that I often see omitted, however, is the “oven equilibration time”. This parameter accounts for the delay between the air inside the GC oven reaching the set temperature and the carrier gas within the column doing the same, a delay created by the thermal lag of the column walls. Not including an equilibration time of at least 30 s (in my experience) can lead to significant retention time variability, especially for earlier-eluting peaks. Obviously, the wider the column bore and the thicker the stationary phase film, the longer the equilibration time necessary.

The main detector operating parameters, such as the temperature and the flow rates of the fuel and oxidizer gases, are generally given — note that for most flame ionization detectors (FID) the optimum flame stoichiometry is approximately 10:1 oxidizer:fuel. However, I have seen a number of cases in which the make-up gas type or flow rate has not been specified. This is a major omission because both of these variables can affect the flame stoichiometry and hence the response, or sensitivity, of the FID. I also specify (usually as a caveat to the conditions) that the gases should be filtered, and the nature of the filter material — just to remind operators in the laboratory of the need to check the viability of the gas traps prior to analysis. Some modern instruments offer the option to ramp the make-up gas flow to “mimic” a constant carrier flow into the detector even though the instrument is operating in constant pressure mode. If this option is available, then the required operating mode should be clearly specified. Most modern detector and data system combinations are self-attenuating, and as such the specification of absolute attenuation is not required; however, if your detector type requires an absolute attenuation, it should also be clearly specified.

This imaginary specification is written entirely from a personal perspective and in the form that I personally like to see methods set out. Your own preferences or those of your employer may differ; however, unless we properly specify methods both on paper and within the data acquisition settings of our instruments, we will continue to have unnecessary difficulties with instrument and operator variability, especially during method transfer exercises. Next time you use a GC (or high performance liquid chromatography [HPLC]) method, have a look at the specification and ask yourself how many parameters are left unspecified and whether these “assumed defaults” may be the cause of any issues you have with your analysis. If you can influence the quality of your method specifications, lobby for an improvement — it can be your good deed for the day!
