
© 2022 MJH Life Sciences^{™} and Chromatography Online. All rights reserved.

*Incognito talks uncertainty.*

How many results did you report in the last week? How many of them were reported as X ± Y (units)?

If the answer is “none”, ask yourself why the estimation and reporting of measurement uncertainty is not important to you or your customers. I postulate that the real reasons may include unexamined assumptions; a poor understanding of the nature of chemical analysis; and a degree of laziness hiding behind the phrase “this is a validated method”.

I’ve argued this point several times and, to be honest, the relevance changes depending upon the audience and their particular “types” of analysis. Someone routinely generating or receiving “assay” results in which a label claim, for example, is being substantiated by analytical measurement is perhaps a little less bothered than someone generating clinical data to measure biological markers for disease state against a “limit” or to compare against a “normal range”. However, I’d argue that in most cases it’s actually important to be able to state the degree of measurement uncertainty, to assess the validity of the measurement and the possible range in which the actual (true) result might lie. So that tablet assay that states the amount of active ingredient to be 97 mg, with an “acceptable range” of 96.8–103.2 mg: does that batch pass or fail? Oh, right, you aren’t sure because you’re not aware of the degree of measurement uncertainty. But this has all been built into the determination of the acceptable range based on the method validation data, right? Well, has it? Did you check?
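To make the tablet assay above concrete, here is a minimal sketch of how the reported value would look with an expanded uncertainty attached. The combined standard uncertainty is an assumed figure for illustration only; the whole point is that the method paperwork rarely gives you one:

```python
# Hypothetical sketch: reporting an assay result with an expanded
# uncertainty at ~95% confidence. The combined standard uncertainty
# (u_c) is an ASSUMED value for illustration.
result = 97.0        # mg, reported assay value (from the example)
u_c = 0.9            # mg, assumed combined standard uncertainty
k = 2                # coverage factor for ~95% confidence
U = k * u_c          # expanded uncertainty

low, high = result - U, result + U
print(f"Report: {result:.1f} ± {U:.1f} mg (k = 2, ~95% confidence)")
print(f"Interval {low:.1f}–{high:.1f} mg vs. acceptance range 96.8–103.2 mg")
```

With these assumed numbers the interval (95.2–98.8 mg) straddles the lower acceptance limit, so you could not confidently declare a pass without knowing the real uncertainty.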

I often encounter situations in which methods that have been validated to, for example, ICH Q2(R1) standards are simply assumed to have acceptable performance characteristics. So ask yourself “How is systematic error (bias) measured in ICH method validation?”, and, for the method you just ran today or yesterday, what is its contribution to the measurement uncertainty? Also ask yourself “What is the random error associated with the method and how was it quantified during method validation?” Then try to determine how, under the auspices of this method validation, you would quote a ± figure to allow someone to understand the range “around” your reported result in which the true value might lie at the 95% level of confidence. Then, if you find yourself tempted to reply that you don’t need to do any of this because the nature of the validation ensures sufficient accuracy and precision, and the limits of acceptability for the measurand are based on those data, please go away and find out, even for one of your regular tests, if this is actually true.

Those of you who are working to standards such as ISO/IEC 17025 may be more familiar with the estimation of measurement uncertainty and also, perhaps, with reporting the range of the result at the 95% level of confidence.

For those of you who have got to this point and aren’t sure about the standards applicable in your work or how to estimate and present measurement uncertainty, I suggest you ask someone within your organization. Please do this even if you work in “research” or your methods are “look-see” or your work involves a large degree of “discovery”. All of these areas need some degree of rigour in the way in which data are derived and reported.

I’ve no intention of turning this article into a tutorial on measurement uncertainty; however, I do want to pose some questions to stimulate thought and take an alternative look at what is accepted as normal or fit-for-purpose.

In your work, where are the sources of error and can you estimate the contribution of those errors to the overall level of uncertainty associated with the result you produce? If at this point you think you don’t need to consider this because the method validation ensures all of these errors are taken care of, again please step out of your comfort zone to actually think about what you are doing, because I bet that you still can’t give a ± figure on the result you just generated! In my work, I can estimate that errors arise from:

• Me: the way I implement methods, and all of my bad habits and inaccuracies.

• My sample and my laboratory environment: the chemical nature of the sample (matrix effects) and the way it changes over time during transport, storage, and in various laboratory environments.

• My sample preparation: the degree of recovery from extraction, adsorption on filters, etc.

• My instrument: its calibration, and its long- and short-term drift.

• My data analysis: the way I perform the calibration, the repeatability of the way I integrate my chromatograms, and the way in which I perform the calculation to determine the result.

• Random effects: variation in instrument response, variation in mass or volumetric measurements, etc.

Let’s take a simple example of preparing a calibration solution. What error might be associated with this process and what might the contribution be to the overall “uncertainty budget” for the method?

• The purity of the reference material being used and the uncertainty associated with that purity.

• Measurement of the mass of standard material, which also includes the balance calibration and the precision of the balance.

• Making the solution to volume, the precision of the flask filling, temperature effects, and glassware grade/flask calibration.

Some folks will use a Fishbone (Ishikawa) diagram to represent these errors, estimate the uncertainty associated with each, and convert each into a standard uncertainty by applying an appropriate divisor for its assumed distribution. These standard uncertainties can then be combined, using the final calculation within your method as a “template”, to derive the combined uncertainty associated with your method via the “root sum of squares” method; a coverage factor is then applied to give the expanded uncertainty. Again, I’m not here to tell you how to apply these methods, only to point out that, as a good analytical chemist, you ought to be able to give the uncertainty associated with your method. If you are unfamiliar with terms such as measurand, coverage factor, standard uncertainty, and root sum of squares, you might want to look up a basic text on measurement uncertainty and find out how you might use them.
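For the calibration-solution example above, the “root sum of squares” combination can be sketched in a few lines. All component values below are assumed for illustration; concentration here is the multiplicative model c = m × P / V, so relative standard uncertainties combine directly:

```python
import math

# Illustrative uncertainty budget for a calibration solution,
# c = m * P / V. All numbers are ASSUMED for the sketch.
m, u_m = 100.0, 0.05      # mass of standard (mg) and its standard uncertainty
P, u_P = 0.999, 0.0006    # purity (mass fraction) and its standard uncertainty
V, u_V = 100.0, 0.07      # flask volume (mL) and its standard uncertainty

c = m * P / V             # concentration, mg/mL

# For a purely multiplicative model, relative standard uncertainties
# combine by root sum of squares:
u_c_rel = math.sqrt((u_m / m) ** 2 + (u_P / P) ** 2 + (u_V / V) ** 2)
u_c = c * u_c_rel         # combined standard uncertainty, mg/mL
U = 2 * u_c               # expanded uncertainty, coverage factor k = 2

print(f"c = {c:.4f} ± {U:.4f} mg/mL (k = 2)")
```

Note how the three contributions (weighing, purity, volume) are of similar size here, so no single term dominates the budget; in a real method one of them often does, which tells you where to spend effort.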

The sources and sizes of errors arising from the various instrumental aspects of our work are often the most difficult to ascertain, and here I use the ISO GUM (Guide to the Expression of Uncertainty in Measurement) documentation^{1} and associated materials (the short guide at http://www-1.ut.ee/katsekoda/GUM_examples/ is also very useful), as well as the Eurachem/CITAC guide.^{2}

Calibration is another interesting and often contentious aspect of our work that confounds our operations, with as many opinions on the right and wrong way to do things as there are folks within the laboratory! The questions that you need to be able to answer are:

• What is the correct range for the calibration?

• What spacing do I need for the calibrants within that range?

• What regression do I apply (linear/polynomial etc.) and how do I test that the correct regression equation is being generated (can I use statistical tests or charts to evaluate the goodness of fit)?

• Do I need to apply a weighting to the calibration to deal with measurements at the low or high end of my range?

• How do I treat the “origin” when building the calibration curve? “Include”, “don’t include”, and “ignore” are the typical options offered by data systems.

• What is the error associated with any measurement generated through interpolation of the response of an unknown sample?

All of these questions need to have solid answers that you can justify. If you can’t provide them, then how can you be sure you are performing the analysis correctly? Does the method specification state how to treat the origin? Does it give guidance on how to assess the fit or investigate bias in the instrument response? Does it allow a calculation of uncertainty? If not, ask yourself why not. A nice reference with which to begin your investigations into best practices for instrument response calibration is the UK Laboratory of the Government Chemist (LGC) publication *Preparation of Calibration Curves: A Guide to Best Practice*.^{3}
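The last question in the list above, the error of an interpolated result, has a well-known answer for an unweighted straight-line fit. A minimal sketch follows; the five calibration points and the unknown’s response are invented for illustration, and the standard-error formula is the classic one found in calibration texts such as the LGC guide:

```python
import numpy as np

# Hypothetical five-point calibration (values invented for illustration).
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0])      # concentration, ug/mL
y = np.array([10.2, 19.8, 40.5, 59.7, 80.1])  # instrument response

n = len(x)
slope, intercept = np.polyfit(x, y, 1)        # ordinary least-squares line
y_fit = slope * x + intercept
s_yx = np.sqrt(np.sum((y - y_fit) ** 2) / (n - 2))  # residual std deviation

# Interpolate an unknown from the mean of m replicate responses:
y0 = 35.0                                     # assumed mean response
m = 3
x0 = (y0 - intercept) / slope

# Standard error of the interpolated concentration:
Sxx = np.sum((x - x.mean()) ** 2)
s_x0 = (s_yx / slope) * np.sqrt(
    1 / m + 1 / n + (y0 - y.mean()) ** 2 / (slope ** 2 * Sxx)
)

print(f"x0 = {x0:.3f} ± {2 * s_x0:.3f} ug/mL (k = 2)")
```

Two design points are worth noticing: the residual standard deviation (not r²) drives the interpolation error, and the error grows as y0 moves away from the centroid of the calibration, which is why the correct range and spacing of calibrants matter.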

There is a good chance that you will be reeling at the number of unanswered questions I’ve included here, and at the range of what has been discussed in very general terms, but we should all be aware of the uncertainty within our methods. If your method is “validated”, so what? What can you then assume about the measurement? What is the ± associated with the data you just generated?

Everyone, no matter what situation you work in, should undertake a simple exercise to estimate the uncertainty in a method, at least once in their career. Even if this is done in a very “rough and ready” manner, it will really open your eyes to the sources of variation in your work and the relative contribution of each of these sources to the overall “range” that contains the true value of the measurand at the 95% level of statistical confidence. Go on - you know you want to!

**References**

(1) ISO/IEC Guide 98-3:2008, *Guide to the Expression of Uncertainty in Measurement*, ISO, Geneva (2008).

(2) S.L.R. Ellison and A. Williams, Eds., *Eurachem/CITAC Guide: Quantifying Uncertainty in Analytical Measurement*, Third Edition (2012), ISBN 978-0-948926-30-3. Available from www.eurachem.org

(3) LGC, *Preparation of Calibration Curves: A Guide to Best Practice*, September 2003. http://www.lgcgroup.com/our-science/national-measurement-institute/publications-and-resources/good-practice-guides/preparation-of-calibration-curves-a-guide-to-best/#.Vf6BKflViko