Analytical Method Validation in Proteomics and Peptidomics Studies

Article

LCGC North America

November 1, 2008
Volume 26
Issue 11
Pages: 1110–1117

While the "Validation Viewpoint" column has focused on conventional and recombinant pharmaceutical products, and at times, bioanalytical methods, we have just begun to think about method validation as it relates to -omics type studies.

While previous "Validation Viewpoint" columns have focused on conventional and recombinant pharmaceutical products and, at times, bioanalytical methods, we think it is also appropriate to address method validation as it relates to -omics-type studies. Over the last decade, there have been enormous advances in the technology for such methods, by which large numbers of analytes are identified and determined. These fields have matured to the extent that there are new journals focused on them. Yet most such publications and presentations make little to no mention of analytical method validation. It does not appear that the US Food and Drug Administration (FDA) or its European or Japanese counterparts have promulgated firm guidelines to be followed when performing method validation in any -omics studies, likely reflecting the fact that such studies are not yet widely included in regulatory filings. Today's journals are very protective of limited pages, and the quality control components expected of good scientific practice frequently are not included in publications. It is assumed that pipettes, thermometers, and pH meters have been calibrated. It is assumed that extraction recoveries and linear ranges have been established. In the good laboratory practices (GLP) and current good manufacturing practice (cGMP) worlds, it is expected that laboratories can withstand a rigorous audit covering method validation, very little of which is ever expected to be published, nor need it be. Good validation practice is established and is expected to be followed.

On the other hand, for -omics methods we are not at the state of maturity where such assumptions can be applied with confidence. The number of analytes is huge. Often there are no absolute reference standards for each compound. In many cases, the compounds detected are not of known structure and are quantitated in a relative way by comparing biological cohorts such as disease versus health, treated versus untreated, or cell line A versus cell line B. Given the large number of substances varying over many orders of magnitude, we cannot be totally sure that analytes are behaving independently of each other, or that their absolute or even relative concentrations are preserved by the sample preparation procedure. There is no single method or technology capable of a linear response over 10 orders of magnitude, given that four orders of magnitude is often described as optimistic. In fact, rarely is anything reported about linearity, lower limits of quantitation, or even the precision for various analytes when samples are repeated within-run, day-to-day, or between laboratories. It is also clear that many types of instrument platforms and software are used for -omics, and it is not even known how much difference that makes. No doubt these are challenges. It is about time that this area of bioanalytical chemistry received some attention so that collectively, government, academia, and industry can begin to establish validation standards. It is encouraging that the Human Proteome Organization (HUPO) and others are beginning to address this issue (1). We encourage the International Conference on Harmonization (ICH) and similar organizations devoted to analytical method validation to also give attention to this important, albeit difficult, topic in their workshops and guidelines (2–5).
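
As an illustration of the sort of minimal precision reporting we have in mind, the short Python sketch below computes within-run and between-day percent RSD for a single analyte. All peak areas in it are hypothetical placeholders, not data from any actual study:

    # Within-run and between-day precision for one analyte, the kind of
    # summary statistic rarely reported in -omics publications.
    # All peak areas are hypothetical placeholders.
    from statistics import mean, stdev

    def percent_rsd(values):
        """Percent relative standard deviation (coefficient of variation)."""
        return 100.0 * stdev(values) / mean(values)

    day1 = [10450.0, 10120.0, 10610.0]  # three injections, day 1
    day2 = [9980.0, 10830.0, 10300.0]   # three injections, day 2

    within_run = percent_rsd(day1)                       # repeatability, n = 3
    between_day = percent_rsd([mean(day1), mean(day2)])  # day-to-day, daily means

    print(f"Within-run RSD:  {within_run:.1f}%")
    print(f"Between-day RSD: {between_day:.1f}%")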

In addition to these challenges, there also are the related challenges of "valid structural assignments" and "biological validation." We say related because without reliable methods, it is likely that biological validation will be more noise than signal. No doubt we need good, reproducible numbers for known molecules, and proteomics has come under this scrutiny of late. The field clearly has disappointed relative to the expectations that were declared as the millennium turned. It is no doubt early, and approaches are still being refined, but that is no excuse to ignore basic principles of method validation.

The published scientific literature in an emerging field begins with ideas tossed out and instruments developed on speculation that are not tested rigorously at first blush. It is not desirable to inhibit this creative work by requiring the laborious effort and considerable expense of validating the innovation. After all, little is at stake with the data reported by such papers, which are typically academic. On the other hand, as time goes on, innovative tools become routine, and routine implies rigor in the hands of nonexperts, where the data are more important than how they were obtained. Lives and fortunes are now at issue.

This column was conceived in the hope that we might be able to shed some light on how and when analytical methods could or should be validated in the proteomics and peptidomics areas. These same suggestions apply to low molecular weight (MW) analytes, as in the metabolomics field, but in that case a greater percentage of the analytes are known and available as reference standards. We are working from the current USP and ICH guidelines on analytical method validation for conventional, low MW pharmaceuticals, as discussed in past "Validation Viewpoint" columns (6,7). What we are suggesting is that the proteomics–peptidomics communities-at-large might start considering how they could begin to institute method validation protocols or guidelines for future studies, no matter what the sample type or analytical method employed.

Biomarker Development and Measurement vs. Proteomics and Peptidomics Measurements

There is an existing literature that does discuss analytical method validation for successful biomarker development (8,9). Workshops and their resultant reviews present overviews of what should be done when developing biomarker assays for pharmacology, toxicology, or diagnosis. A key difference here is that only one analyte, or a small "panel" of analytes, is defined. More often now we hear the term "targeted proteomics," to distinguish such assays from global proteomics. In such cases, much of the rigor applied to validation of conventional clinical chemistry and the bioanalytical chemistry of drug substances can be employed.

In 2003, the American Association of Pharmaceutical Scientists (AAPS) sponsored a Clinical Ligand Assay Society Biomarkers Workshop in Salt Lake City, Utah (October 24–25). This workshop addressed major issues in biomarker research, with an emphasis on validation and implementation of biomarker assays from preclinical discovery of efficacy and toxicity through clinical and postmarketing implementation (10,11). In a more recent publication, a practical, iterative, "fit-for-purpose" approach to biomarker method development and validation was proposed, keeping in mind the intended use of the data and the attendant regulatory requirements associated with that use. These guidelines for biomarker research were not aimed at global -omics methods that might lead to biomarker discoveries in the future. In the -omics fields, the most generally applied methods are those involving multidimensional liquid chromatography (MDLC), multidimensional capillary electrophoresis (MDCE), two-dimensional gel electrophoresis (2DGE), variants of 2DGE (DIGE), preparative isoelectric focusing (IEF) (Rotofor)–reversed-phase LC, and other two-dimensional separations, almost always then followed by some form of mass spectrometry (MS): matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF)–MS, electrospray ionization (ESI)–MS, ESI–MS-MS, and others. These multidimensional separation approaches are begging for a validation approach. Other multianalyte proteomics–peptidomics studies involve tissue imaging with MS using MALDI, secondary ion mass spectrometry (SIMS), and desorption electrospray ionization (DESI) MS. These are too new to expect routine applications in the short term, and they are even more complex with respect to absolute quantitation because analytes are ionized as they are lifted off surfaces of variable structure.

Sampling Issues

One of the more pressing issues for -omics studies relates to sample accession protocols, including conditions for storage and shipping. In a recent "Validation Viewpoint" column by Kissinger and Kissinger (12), the issue of sampling was addressed, though not specifically for proteomics–peptidomics. Because such samples are incredibly complex, often having thousands of analytes that ideally need to be resolved, identified, and then quantitated, small differences in the donor's age, sex, weight, diet, health, and other characteristics all affect the nature of a given sample taken on a specific day, date, and time. The argument can be made that it is impossible to obtain the same sample for doing proteomics from day to day, even from nominally the same person, animal, or plant. The matrix for any one analyte includes all the other analytes, as well as major proteins that typically are removed to a large degree in sample preparation. It is, therefore, difficult to imagine a "representative" background matrix that can be counted on in method development.

Interlaboratory Studies

We have yet to see any successful interlaboratory collaborative study (ILCS, or round-robin) in global proteomics. It is our understanding that this has been attempted but that the results were not encouraging. The sample instability and instrument variations mentioned previously are no doubt crucial. There is, as yet, no National Institute of Standards and Technology (NIST) certified reference material for doing ILCS in proteomics–peptidomics (13). Virtually all reported proteomics–peptidomics studies have emanated from a single laboratory. We have not been able to find reports of split samples being transferred, along with the original analytical methods, to other, similarly equipped laboratories. It is rarely clear in -omics reports that data are repeatable even within a single laboratory over time, though it might be that such data simply were not reported. We speak here of quantitative data, not only the identification of analytes, which also can be challenging. While clearly more difficult than for small molecules, the goal of comparable quantitative data from laboratory to laboratory for -omics methods would certainly build more confidence in them.

In the Literature?

What, then, has appeared in the literature related to analytical method validation in proteomics–peptidomics? In many studies, but not all, the same analytical method has been applied to the same biofluid samples more than once. That is, some repeatability studies have appeared, but not involving other laboratories, operators, and instrumentation on the same samples. It is also the case that several biofluid or tissue samples from the same source have been analyzed in replicate, though the number of repeats rarely has been greater than two or three (11). Thus, repeatability is sometimes demonstrated, but rarely precision or robustness, and almost never with supporting statistical data.

Small molecule synthetic drug analytical method validation requires absolute quantitation of the analytes of interest for some method categories (6,7,14). In proteomics and peptidomics, absolute quantitation is a difficult business, as there are no authentic reference materials at known concentrations to use for most proteins and peptides. Though isotopically labeled proteins and peptides can be generated by, for example, stable isotope labeling by amino acids in cell culture (SILAC), and are available commercially as a service, this is a very tedious and expensive approach to quantitation. The vast majority of proteomics–peptidomics studies reported to date have not included absolute quantitation, not because the authors do not want to, but because they cannot. Relative quantitation, that is, the up- or down-regulation of specific proteins and peptides, is a generally accepted approach today to show changes from one sample group to another. This is no doubt a useful screening approach to discover which molecules should be explored in more depth with targeted assays that have been validated.
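
To make the arithmetic of relative quantitation concrete, the short Python sketch below reduces a stable-isotope (SILAC-style) peptide pair to heavy-to-light intensity ratios and a median log2 fold change. The intensities are hypothetical placeholders, and the sketch illustrates only the calculation, not any particular vendor's software:

    # Relative quantitation from heavy/light intensity ratios (SILAC-style).
    # All intensities are hypothetical placeholders.
    import math

    # Hypothetical (light, heavy) intensities for three peptides of one protein.
    peptide_pairs = [(8.2e5, 1.7e6), (4.1e5, 8.8e5), (6.6e5, 1.2e6)]

    ratios = [heavy / light for light, heavy in peptide_pairs]
    log2_fold_changes = [math.log2(r) for r in ratios]
    median_fc = sorted(log2_fold_changes)[len(log2_fold_changes) // 2]

    print(f"Median log2 fold change: {median_fc:.2f}")  # > 0 means up-regulated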

Though there are now commercial software packages that will allow for absolute protein, and perhaps peptide, quantitation, these do not appear to have been applied, as yet, to global proteomics or peptidomics studies (15,16). The standard requirements for analytical method validation include accuracy and precision, linearity of calibration, range, robustness, limit of detection (LOD), and lower limit of quantitation (LLOQ) (6,7,17). While we may never be able to match the depth of method validations routinely expected today for low MW pharmaceuticals and, to some extent, biopharmaceuticals, -omics methods can now start moving in that direction.

It is a long-held tradition that scientific studies not be published until they have been shown to be repeatable within a single laboratory. In most reported proteomics–peptidomics studies published today, this is either not the case or is not made evident. This lack of repeatability no doubt contributes to the skepticism about the value of proteomics and other -omics. One can argue that "it's still early, be patient" but that approach might not be persuasive to the funding sources needed to continue toward biological relevance. Therefore, what should proteomics and peptidomics laboratories be doing in the way of analytical method validation as the field matures?

Basic Method Statistics

If proteomics–peptidomics are to be of real value, then it will become necessary to perform absolute protein and peptide quantitation (18). Only in that way can numbers be compared from laboratory to laboratory around the world. Relative amounts are useful to uncover potential markers, but once these few are established, good biological validation will require good quantitation. If we cannot compare studies performed in one laboratory with those performed in another using the same or different analytical methods, then everyone will be forced to repeat comparative, relative quantitation studies, which is a redundancy of effort. Publications on traditional pharmaceuticals report absolute concentrations of analytes. Only by having absolute numbers can we then approach true method validation involving accuracy, precision of quantitation, linearity of calibration plots for absolute quantitation, range of linearity, intermediate precision, and ruggedness. Finally, conventional method validation requires a demonstration of limits of detection and quantitation.
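
For the few analytes where calibration standards do exist, these conventional statistics are straightforward to compute. The Python sketch below (Python 3.10 or later) fits a calibration line and estimates LOD and LLOQ from the residual standard deviation using the ICH Q2(R1) conventions of 3.3σ/slope and 10σ/slope (7); the concentrations and responses are hypothetical placeholders:

    # Calibration linearity plus LOD/LLOQ estimates per ICH Q2(R1):
    # LOD = 3.3*sigma/slope, LLOQ = 10*sigma/slope, where sigma is the
    # residual standard deviation. Data are hypothetical placeholders.
    from statistics import linear_regression, stdev  # Python 3.10+

    conc = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]          # ng/mL
    resp = [105.0, 198.0, 520.0, 1010.0, 2040.0, 4985.0]  # peak areas

    slope, intercept = linear_regression(conc, resp)
    residuals = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
    sigma = stdev(residuals)  # approximate (n-1 rather than n-2 denominator)

    print(f"Slope: {slope:.2f}, intercept: {intercept:.2f}")
    print(f"LOD  ~ {3.3 * sigma / slope:.2f} ng/mL")
    print(f"LLOQ ~ {10.0 * sigma / slope:.2f} ng/mL")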

Robustness

An assessment of robustness is required by ICH and FDA for all low MW pharmaceutical studies and submittals (6,7,19). Robustness asks how sensitive a method result is to small changes in the parameters of the method, including sample preparation. These typically include sensitivity to mobile phase pH and composition in high performance liquid chromatography (HPLC), column temperature, flow rate and gradient changes, and detector settings. Successful method transfer from instrument to instrument and laboratory to laboratory as demonstrated by intermediate precision is another component of building confidence in a method and its documentation. In our experience, the method transfer process is often where missing elements in the documentation come to light.
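
A robustness screen lends itself naturally to a small factorial design. The Python sketch below enumerates deliberate perturbations around nominal HPLC conditions; the parameter names and levels are hypothetical, and the instrument run itself is left as a placeholder:

    # A small full-factorial robustness screen around nominal HPLC settings.
    # Parameter names and levels are hypothetical placeholders.
    from itertools import product

    factors = {
        "mobile_phase_pH": [2.9, 3.0, 3.1],
        "column_temp_C":   [28, 30, 32],
        "flow_mL_min":     [0.95, 1.00, 1.05],
    }

    # Enumerate all 27 combinations; in practice each would be run and the
    # resulting retention time, area, and resolution compared to nominal.
    for levels in product(*factors.values()):
        settings = dict(zip(factors.keys(), levels))
        print(settings)  # placeholder for: run_method(settings); record results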

System Suitability

Perhaps, finally, we should mention system suitability samples and methods as a generally accepted part of method validation (20). System suitability is expected in any submittals to the FDA for INDs and NDAs. We have never seen a single publication in proteomics or peptidomics that even used the words "system suitability." It is, of course, possible that good method validation has been done in pharma–biopharma–medical laboratories, but that this has not been disclosed. System suitability is a way of demonstrating the quality control (QC) procedures that will then be used when samples are analyzed after method validation is completed. The system suitability sample and methods basically show that the instrumentation and method are performing as expected and are ready for samples. In an immature field where data are not subject to audit, system suitability or QC procedures are virtually never reported. Without such care, the likelihood of rushing to judgment on an incorrect biological conclusion is manifest. Biology has long been subject to premature assertions followed by corrections. But do we really need more of these in the future?
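
As a sketch of what a system suitability check could look like in practice, the Python fragment below tests the peak-area RSD of replicate injections of a reference standard against a preset acceptance criterion before any samples are run. The areas and the 2.0% limit are hypothetical placeholders:

    # System suitability: replicate injections of a reference standard must
    # meet a preset acceptance criterion before samples are analyzed.
    # Areas and the acceptance limit are hypothetical placeholders.
    from statistics import mean, stdev

    areas = [20310.0, 20150.0, 20480.0, 20220.0, 20390.0]  # 5 replicate injections
    rsd = 100.0 * stdev(areas) / mean(areas)

    MAX_RSD = 2.0  # e.g., peak-area RSD must be <= 2.0%
    verdict = "PASS" if rsd <= MAX_RSD else "FAIL: halt run"
    print(f"Peak-area RSD: {rsd:.2f}% -> {verdict}")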

Financial Considerations

It is time we discussed financing issues around validation studies for -omics methods. Pharmaceutical firms spend a huge amount of money every time they have to validate analytical methods for drugs they wish to bring to market. In an academic setting, a funding system based upon peer review and the limited time students have to stay on task make it very difficult to validate methods fully and, in fact, discourage doing so. The work is laborious, and some will conclude, "I didn't go to graduate school for this, I want to invent." This is a long-standing problem and we do not have a solution. It is fair to say, though, that many academics now rely on so-called "core facilities" that supply results as a service. These facilities are operated by professionals, and they should be given the resources to provide their clients with reliable data. Many in biology and medicine conclude that tools like chromatography and MS are highly trustworthy and can be operated by squirrels. This is very naive. Ronald Reagan is said to have stated, "Trust, but verify." By analogy, "Trust, but validate." In the -omics world, not every single analyte in a sample need be validated, rather just those that are changing to a significant degree under certain stressors and will be relied on for a conclusion. It should eventually be economical to validate only those that appear to be biomarkers or drug targets.

Conclusions

It is clear that we are of the opinion that -omics methods with no quantitative validation parameters are giving the field a bad name. Good analytical chemistry requires analytical method validation, according to some predetermined and community-accepted set of criteria. Without this, skeptics will question the validity of the conclusions drawn from all -omics data, as they should and must. Most debacles in science have arisen because the scientists involved did not pay strict attention to the limits of their methodologies or they cheated. Method validation studies establish those limits. Later it will be clear if they cheated.

At a minimum, there should be statistical treatment of all numbers, which implies a minimum number of repeated studies on the same samples (n ≥ 3) within a run and between days. There should be tables of supporting data, together with an indication of relative standard deviation (RSD), also reported as the coefficient of variation (CV) or percent RSD. In the -omics world there can be a range of approaches to build confidence. Given that detailed work on all the compounds may be impractical, selecting a small number, such as 10, over a range of MW and hydrophobicities clearly is an advance over no attempt at all. One might consider such a selection as "validation markers."
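
The Python sketch below illustrates what a minimal "validation markers" table might look like: percent RSD from n ≥ 3 replicates for a few analytes chosen to span a range of MW and hydrophobicity. All analyte names and peak areas are hypothetical placeholders:

    # A minimal "validation markers" table: mean and percent RSD from
    # within-run replicates for selected analytes spanning a range of MW
    # and hydrophobicity. Names and areas are hypothetical placeholders.
    from statistics import mean, stdev

    replicates = {  # analyte -> peak areas from three within-run replicates
        "peptide_A (1.2 kDa, hydrophilic)":  [5120.0, 5290.0, 5060.0],
        "peptide_B (2.8 kDa, intermediate)": [880.0, 910.0, 845.0],
        "protein_C (24 kDa, hydrophobic)":   [15210.0, 14880.0, 15500.0],
    }

    print(f"{'Analyte':38s} {'Mean':>8s} {'%RSD':>6s}")
    for name, areas in replicates.items():
        print(f"{name:38s} {mean(areas):8.0f} {100*stdev(areas)/mean(areas):6.1f}")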

Whenever one receives a manuscript to review for a journal, an important thing to look for is tables of data with appropriate statistics (see sidebar). Many journals will require that such information be included in supplementary material. This makes sense to save paper, but reviewers must still be able to see what was done. It is important that faculty convey to their students the expectation that numbers should be trustworthy. While few students collect data that are crucial to a $100M investment or the next step for a cancer patient or neonatal infant, they will down the road. At the end of the day, there is no difference to that cancer patient between a lack of integrity (willful neglect) and sloppiness.

The authors of this column are not in complete agreement. One of us (ISK) is insistent that all published numbers should be backed by thorough validation studies, while the other (PTK) feels that validation in the now-classical GLP/cGMP/ICH sense can be too much of a burden for academic papers where the intent is to innovate with instrumentation or methodology, not to develop methods for others to use for routine purposes. Unfortunately, it is often the case that these innovative papers imply they are fit for purpose when, in fact, they are just suggestive of what might be so down the road. Admitting this would go a long way toward balancing our two views. A similar situation occurs within pharma–biotech between discovery -omics and data to be filed with a regulatory agency. In discovery, we take some chances and limit the documentation that would be needed for an audit of our results. Recognizing the balance of risks versus costs (including time) is something we all must do. There is, after all, such a thing as a number that is too good. We do not need to know that a peptide concentration is 2.678 ng/mL when 3 ± 2 ng/mL might well be all we need. Too often, numbers in publications look like the former when in reality they are no better than the latter.

Michael E. Swartz, "Validation Viewpoint" co-editor, is Research Director at Synomics Pharmaceutical Services, Wareham, Massachusetts, and a member of LCGC's editorial advisory board.

Ira S. Krull, "Validation Viewpoint" co-editor, is an Associate Professor of Chemistry at Northeastern University, Boston, Massachusetts, and a member of LCGC's editorial advisory board.

References

(1) 2008 HUPO validation workshop for proteomics, http://www.hupo2008.nl/index.php?ID=251

(2) www.fixingproteomics.org/

(3) www.the-scientist.com/community/posts/list/129.page#360

(4) www.proteome-factory.de/plaintext/index.php

(5) www.rsc.org/ej/MB/2007/b705178f.pdf

(6) United States Pharmacopeia No. 31-NF 26, Chapter 1225, Validation of compendial methods (2008).

(7) International Conference on Harmonization, Harmonized Tripartite Guideline, Validation of Analytical Procedures, Text and Methodology, Q2 (R1), November 2005. See also: www.ICH.org

(8) J.W. Lee et al., Pharm. Res. 23(2), 312–328 (2006).

(9) J.W. Lee et al., Pharm. Res. 22(4), 499–511 (2005).

(10) www.aapspharmaceutica.com

(11) L. Martens and H. Hermjakob, Mol. BioSyst. 3, 518–522 (2007).

(12) C.B. Kissinger, P.T. Kissinger, M. Swartz, and I.S. Krull, LCGC 26(8), 708–711 (2008).

(13) www.nist.gov/

(14) I.S. Krull and M.E. Swartz, LCGC 23(1), 46 (2005).

(15) I.S. Krull and M.E. Swartz, LCGC 25(12), 1196–1201 (2007).

(16) J.C. Silva, M.V. Gorenstein, G.-Z. Li, and J.P.C. Vissers, Mol. Cell. Proteomics 5, 144–156 (2006).

(17) M. Swartz and I.S. Krull, LCGC 26(5), 460–462 (2008).

(18) I.S. Krull and M.E. Swartz, LCGC 23(1), 46 (2005).

(19) M.E. Swartz and I.S. Krull, LCGC 24(5), 480 (2006).

(20) M. Swartz and I.S. Krull, LCGC25(8), 718–724 (2007).
