ABSTRACT
Commercial research antibodies are crucial tools in modern cell biology and biochemistry. In the USA, some $2 billion a year are spent on them, but many are apparently not fit-for-purpose, and this may contribute to the ‘reproducibility crisis’ in biological sciences. Inadequate antibody validation and characterization, lack of user awareness, and occasional incompetence amongst suppliers have had immense scientific and personal costs. In this Opinion, I suggest some paths to make the use of these vital tools more successful. I have attempted to summarize and extend expert views from the literature to suggest that sustained routine efforts should be made in: (1) the validation of antibodies, (2) their identification, (3) communication and controls, (4) the training of potential users, (5) the transparency of original equipment manufacturer (OEM) marketing agreements, and (6) a more widespread use of recombinant antibodies (together denoted the ‘VICTOR’ approach).
Introduction
As we can now read genomes in hours, we may forget that studying proteins in cells with antibodies takes more time. I started making antibodies, against keratin, using toenails and a friendly white rabbit. Now one can select specific antibodies from vast synthetic display libraries (Simeon and Chen, 2018), and computationally design antibodies (Baran et al., 2017; Norn et al., 2017). Although we have come far down the intellectual river of affinity-binding reagents, we often find ourselves up-the-creek (i.e. in an awkward position) when we use commercial research antibodies (cr-Abs). We might not actually know what the cr-Ab we use binds to, or even what it actually is. And if that is clear, the next batch may differ, if it is even available (Baker, 2015a,b; Goodman, 2018)!
Most readers will likely use antibody-based methods to probe cellular machinery and will use cr-Abs ‘out of the catalogue’ (that is, not therapeutic or diagnostic reagents) to investigate issues such as the phosphorylation of a receptor, or the distribution or level of a protein in a cell or tissue. But cr-Abs can be among the most irritating of reagents to use – leading to a chant common amongst stressed-out laboratory scientists: “My antibody doesn't work!”. In fact, it seems that much of the current ‘reproducibility crisis’ afflicting biological sciences can be traced to cr-Abs (Baker, 2015b; Begley and Ellis, 2012; Couchman, 2009). Here, I aim to alert readers, especially those new to the field and the puzzled, to why there is and must be such a fuss about cr-Abs (Goodman, 2018). Many of my comments apply to all sources of research antibodies, but the scale of the cr-Ab problem is notably large. I summarize the extensive literature on the topic and offer the mnemonic ‘VICTOR’ to highlight the major issues of: (1) validation, (2) identification, (3) communication and controls, (4) training, (5) original equipment manufacturer (OEM) agreements, and (6) recombinant antibodies, to help win our fight with the cr-Abs.
The scale of the problem
In 2003, there were ∼10,000 (10^4) antibodies available (Michaud et al., 2003). At the time of writing (March 2018), some 3.8 million (3.8×10^6) cr-Abs have been described (Helsby et al., 2013, 2014). Multiple suppliers may offer the same reagent, variously labelled (see OEM, below) (Goodman, 2018; Voskuil, 2014), but this remains a startling number. Assuming a constant rate of increase, cr-Abs have emerged at an amazing rate approximating Moore's law (for components on micro-electronic circuits: doubling approximately every 12 months) (Moore, 1965). But après Moore, so-to-speak, there has come a deluge of literature reporting that all is less-than-well with cr-Abs, with disastrous, costly and frankly frightening consequences: bad cr-Abs lead to bad data, incorrect conclusions, and a cascade of dubious literature based on false premises (Andersson et al., 2017; Gilda et al., 2015; Prassas et al., 2014). Estimates imply that worldwide $365 million–$2 billion a year are spent buying cr-Abs (Bird, 2012; Bradbury and Plückthun, 2015b; Freedman et al., 2015) – and ‘a billion here, a billion there, and pretty soon you're talking about real money’ (Dirksen, 1961).
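As a back-of-the-envelope check on that growth rate (a sketch only, assuming the ∼10^4 antibodies of 2003 and the ∼3.8×10^6 of 2018 quoted above, and smooth exponential growth in between), the implied doubling time can be estimated as follows:

    import math

    # Growth-rate sketch using the counts quoted in the text
    # (assumptions: ~10,000 cr-Abs in 2003, ~3.8 million in 2018, exponential growth).
    n_2003 = 1.0e4
    n_2018 = 3.8e6
    years = 2018 - 2003

    doublings = math.log2(n_2018 / n_2003)   # ~8.6 doublings over 15 years
    doubling_time = years / doublings        # ~1.75 years per doubling

    print(f"{doublings:.1f} doublings in {years} years")
    print(f"implied doubling time: {doubling_time:.1f} years")

That works out to roughly one doubling every 21 months, somewhat slower than the 12-month cadence of Moore's original 1965 formulation, but certainly in the same spirit.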
In 2005, Clifford Saper, the Editor-in-Chief of the Journal of Comparative Neurology, noted that authors often incorrectly validated and characterized cr-Abs in immunohistochemistry (IHC), thus invalidating their research (Saper, 2005). Since then, there has been a bloom of technologies that allow rigorous cr-Ab characterization (Roncador et al., 2016). These include microcapillary digital western blots (WBs), high-density peptide- and protein-display arrays, coupled immunoprecipitation/mass spectrometry (IP/MS) and highly multiplexed IHC. However, a survey by the Global Biological Standards Institute (GBSI) suggested that only amongst well-established investigators did a majority even see a need to validate their cr-Abs (Freedman et al., 2016). Furthermore, not all cr-Ab suppliers have the best interests of researchers entirely at the focus of their operations: vials may lack active or indeed any reagent (Couchman, 2009) or may carry forged labels (Cyranoski, 2017), ELISA kits may be non-selective (Prassas and Diamandis, 2014), producer animal husbandry may be appallingly deficient (Lowe, 2016), and cr-Abs may bind molecules entirely divorced from the one advertised (Andersson et al., 2017; Gilda et al., 2015), to offer some examples.
In the past decade, efforts to reproduce many pivotal studies in oncology, on which drug developments were based, failed (Begley and Ellis, 2012; Freedman et al., 2015; Prinz et al., 2011), and these were data published in respected, well cited journals. This situation has been dubbed ‘the reproducibility crisis’ (Baker, 2015b; Begley and Ellis, 2012). Extrapolation of the effect of such work on the science funded by the US National Institutes of Health suggested that ∼$28 billion a year are, in effect, being directly wasted (Freedman et al., 2015). For that sum, even Everett Dirksen might sit up and take notice, and he's been dead for nigh on 50 years. Something is clearly awry. But how are cr-Abs linked to this crisis?
Antibody selectivity and sensitivity are sensitive flowers
The region on a target protein that an antibody recognizes, the epitope, is typically 5 to 12 amino acids in size, so the ‘epitope space’, the sum of all possible epitopes, is large even when ignoring the epitopes generated by conformation or post-translational modification (Berglund et al., 2008a). Individual mammals make more than 10^13 distinct antibody selectivities (see Box 1 for definitions) (Murphy et al., 2012), which are sensitive both to amino acid sequence and to shape (Getzoff et al., 1988). However, common biochemical techniques can easily perturb protein shape and alter epitopes, and so prevent the binding of a particular antibody. For instance, during a WB or paraffin IHC, some epitopes may be altered compared with those on the protein freshly translated on the ribosome. It is currently not possible to predict which epitopes will be altered, and case-by-case measurements are needed. For example, an antibody that selectively binds a native protein may ignore it entirely on a WB or in IHC, or vice versa, or both.
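To put a rough number on that linear ‘epitope space’ (a sketch only, counting unmodified linear peptides of 5 to 12 residues built from the 20 standard amino acids, and thus ignoring conformational and post-translationally modified epitopes, as above):

    # Number of possible unmodified linear epitopes of length 5 to 12,
    # built from the 20 standard amino acids (conformational and modified
    # epitopes are ignored, as in the text).
    AA = 20
    total = sum(AA ** length for length in range(5, 13))
    print(f"{total:.2e} possible linear sequences")   # ~4.3e15, dominated by the 12-mers

Even this deliberate underestimate dwarfs the ∼10^13 distinct selectivities available to an individual mammal, which is one way of seeing why any given antibody's selectivity is only ever defined relative to the screening context in which it was measured.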
Box 1. Definitions
Selectivity: the ability of a cr-Ab to bind the target of interest rather than other epitopes.
Sensitivity: the ability of a cr-Ab to select the target in increasingly diverse and/or diluted conditions.
Validation: proof of cr-Ab selectivity and sensitivity in a given experimental context (fit-for-purpose, ‘F4P’).
Identification: cr-Ab terminology giving unique name, target, origin and working concentration.
Characterization: identification and F4P validation.
Controls: well-defined samples constructed to prove positive and negative F4P characterization of a cr-Ab.
Training: teaching (fledgling) cr-Ab users how to identify, validate, use and report on cr-Abs.
OEM: Original Equipment Manufacturer – sometimes those who made the cr-Ab.
Recombinant: an entirely identifiable and reproducible engineered monoclonal antibody.
What is worse, experimental procedures may generate mimotopes, protein structures that resemble the true epitope so closely that they are recognized by presumably ‘specific’ antibodies (Geysen et al., 1986; Luzar et al., 2016). Mimotopes generate false positive signals in addition to, or instead of, signals from the epitope against which the cr-Ab has been raised (Kroening et al., 2012). Furthermore, a procedure may destroy an epitope on a weakly expressed target, but induce mimotopes on proteins present at high concentrations in your system, producing a positive signal that is irrelevant to the target of interest (Gilda et al., 2015). A ‘specific antibody’ binds to both its epitope and to various mimotopes, generally with lower affinity (Goodman, 2018). However, the crucial point to remember is that even highly selective antibodies are selective only in the context of the original screening process that defined their selectivity.
The selectivity of an antibody for its target is therefore related to the context in which the antibody is used (see Box 1 for definitions). In summary, an antibody may be highly selective, but, perhaps, not for the particular protein conformation that is created de novo by an experimental procedure. ‘The’ epitope may be destroyed but mimotopes may arise that generate irrelevant signals. In the following, I present a framework using the VICTOR mnemonic to highlight a possible path to robust cr-Ab usage.
V is for validate
Experimental manipulations perturb protein structure and may affect antibody binding, so it is crucial to routinely verify that cr-Abs work as expected in experiments, that is, to validate them as fit-for-purpose (F4P). Three validation questions define a cr-Ab: does it bind only to the target epitope (is it selective)? At what concentration can it do so (is it sensitive)? Are there conditions that allow high selectivity and sensitivity in my experimental protocol (is it F4P)?
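In practice, the sensitivity question is usually answered by titration: running a dilution series of the cr-Ab against a known positive sample and a matched negative control, and choosing the concentration that gives the best signal-to-background ratio. A minimal sketch of that calculation (all values below are illustrative placeholders, not measured data):

    # Illustrative titration sketch: choose the cr-Ab concentration that maximizes
    # the signal-to-background ratio (all numbers are made-up placeholders).
    concentrations_ug_ml = [2.0, 1.0, 0.5, 0.25, 0.125]
    signal_positive = [180.0, 175.0, 160.0, 120.0, 70.0]   # readout on a known positive control
    signal_negative = [40.0, 22.0, 12.0, 8.0, 6.0]         # readout on a matched negative control

    ratios = [p / n for p, n in zip(signal_positive, signal_negative)]
    best = max(range(len(ratios)), key=lambda i: ratios[i])
    print(f"best signal/background = {ratios[best]:.1f} at {concentrations_ug_ml[best]} ug/ml")

The real decision, of course, also weighs absolute signal against the detection limit of the readout, and the titration must be repeated whenever the technique, sample type or fixation changes.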
Who, then, is responsible for validating cr-Abs? This topic has been extensively discussed (Bordeaux et al., 2010; Goodman, 2018). Just as one might expect, say, that a new car runs, is steerable and is safe to drive (all F4P criteria), a newly purchased cr-Ab might reasonably be expected to have been verified as functional by the supplier, and to selectively bind its target under well-specified conditions. However, reportedly, in the best case only around half of randomly selected cr-Abs bind as advertised (Andersson et al., 2017; Bordeaux et al., 2010; Bradbury and Plückthun, 2015a; Couchman, 2009; Gilda et al., 2015). Certainly, no supplier can test every cr-Ab in every experimental context attempted by the end-user. Nevertheless, the supplier must have defined conditions in which the cr-Ab does bind to its target. If the antibody then does not function as advertised, complain loudly. But even if it does work as expected, it is still always necessary to prove that a cr-Ab is F4P.
Detailed discussions of methodologies are beyond the scope of this article, but it is worth noting that everyday techniques for antibody validation are full of traps for the unwary (Kim et al., 2016; Lund-Johansen and Browning, 2017; Manning et al., 2012; Roncador et al., 2016; Skogs et al., 2017). In addition, protocols vary widely, but as even small changes can strongly affect results, it is important to implement standardized validation protocols (Baker, 2016; Weller, 2016). In this regard, the European Antibody Network (EuroMabNet) has an excellent hands-on validation guide (Roncador et al., 2016).
One critical point is to use parallel supportive methods, independent of the antibody being examined, to validate cr-Ab specificity. Accordingly, genetic methods, MS, tagged targets, independent antibodies and orthogonal methods have been suggested as constituting the ‘five experimental pillars’ of cr-Ab validation (Uhlén et al., 2016), a proposal distilled from other informed discussions (Bordeaux et al., 2010; Voskuil, 2014; Wardle and Tan, 2015; Weller, 2016).
Among commonly used validation methods are WBs, ELISA and IHC, which need prior knowledge of the target and reliable standards (Gilda et al., 2015; Landry and Gomes, 2016; Lund-Johansen and Browning, 2017; Saper, 2005). IP/MS works ab initio and is favored by the proteomics community (Lund-Johansen and Browning, 2017; Marcon et al., 2015).
However, even if a cr-Ab binds to a particular target, it is still crucial to ensure that it is F4P, that is, both selective and sensitive in the precise experimental context in use. Never rely on the extrapolation of other data found on suppliers' websites, in the literature or even in the neighboring laboratory. The reader is strongly encouraged to test any and all cr-Abs themselves, in all relevant experiments and using all appropriate controls – of which more later.
I is for identify
Some of the reproducibility crisis is caused by our inability to unequivocally identify and/or re-use cr-Abs reported in the literature (Goodman, 2018). On the one hand, this is caused by the many cr-Abs that are merely distributed, rather than made, by suppliers (see ‘O is for OEMs’) (Voskuil, 2014); on the other hand, suppliers may deliberately conceal the identity of a cr-Ab by re-labelling it. This complicates the task of identifying cr-Abs in the literature (see also ‘The scale of the problem’ above). For example, consider integrin β6 (ITGB6), for which there are some 20 IHC-capable cr-Abs ‘available’. Of these, three have the same immunogen and the same ‘validating’ IHC image (catalog identifiers PA5-54848, NBP2-14136, and HPA023626; Thermo Fisher Scientific, Novus Biologicals and Atlas Antibodies, respectively). Evidently, ‘they’ are the same polyclonal antibody re-identified!
We, the users, become complicit in the resulting chaos when we only equivocally identify the source, batch, identity, concentration and validation data of the cr-Abs in the experiments we publish. The Research Resource Identifier (RRID) system provides durable, unique tags for cr-Abs (Bandrowski and Martone, 2016). Ask suppliers if they are compliant with this system – if not, encourage them to participate, and use an RRID whenever reporting on a cr-Ab. Even when a reagent is supplied by multiple companies, the RRID (or any future related system) may pinpoint where it was actually sourced. Perhaps more subtly, the identity of the cr-Ab as sold also depends on the nature of the reagent itself, as discussed in detail in ‘R is for recombinant’, below.
C is for communication
We need to rapidly and routinely communicate the results of using a given cr-Ab, both to the community and to the suppliers, but it is not always easy to do this impartially and honestly. Some beginnings have been made with crowd-based applications for reporting (e.g. pAbmAbs; http://pabmabs.com/wordpress/). Such sites, which are independent of suppliers, seem transparent. The search engine CiteAb (Helsby et al., 2013, 2014) is valuable because it impartially lists 3.8×10^6 cr-Abs by frequency of citation, and hence by implied effective usage. Another approach is taken by Antibodypedia (Björling and Uhlén, 2008; Gloriam et al., 2010), a compilation of 2.8×10^6 cr-Abs that lists utility as reported by suppliers, together with user comments on the cr-Abs. BenchSci (https://www.benchsci.com/) extracts 3.7×10^6 cr-Ab experimental data sets to show literature validations for cr-Abs. These sites offer partially overlapping coverage, are free for academic use, and are useful for finding and investigating cr-Abs.
However, caution is still necessary. Even much-used and much-reported antibodies can be inadvertently susceptible to what I term ‘supplier-driven scientific error’, basically due to inadequate cr-Ab validation. For example, Andersson et al., while investigating estrogen receptor β (ERβ), noted that of 13 ‘defining’ cr-Abs only one actually bound to ERβ, causing wasted years of effort, and leading to many misleading literature cascades based on incorrect primary target identification (Andersson et al., 2017).
In publications, it is thus critical to unequivocally identify both the cr-Ab and its validation. Given the availability of supplementary data sections, there is no excuse for not providing such detailed data. At the very least, we must routinely report the name and postulated specificity of the antibody, the supplier, the catalog identifier, the batch identifier and the antibody concentration used in the experiment (and not the dilution of what was provided in the vial) (Couchman, 2009; Cyranoski, 2017), together with either a summary or, ideally, detailed data on how the reagent was validated as F4P in the presented experiments.
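As a minimal sketch of what such a report could contain (the field names and values below are illustrative only, not a formal standard), the essentials fit into a single structured record that can accompany a figure legend or supplementary table:

    # Illustrative minimal metadata record for reporting a cr-Ab; every value is a placeholder.
    antibody_report = {
        "name": "anti-XYZ, clone 7B2",                 # hypothetical clone name
        "postulated_target": "protein XYZ, C-terminal epitope",
        "supplier": "Example Antibodies Inc.",          # fictitious supplier
        "catalog_id": "EX-0000",
        "lot_id": "LOT-1234",
        "rrid": "RRID:AB_0000000",                      # placeholder; use the real RRID
        "concentration_ug_per_ml": 1.0,                 # concentration used, not the vial dilution
        "application": "western blot",
        "validation": "F4P shown with knockout-control lysate; see supplementary data",
    }

    for field, value in antibody_report.items():
        print(f"{field}: {value}")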
… and Controls
cr-Ab controls should demonstrate that the amount and distribution of the target epitope are proportionally and accurately reflected by the amount and distribution of antibody binding in the experimental context. As antibody selectivity potentially changes depending on the technique used, data are irrelevant unless they are routinely accompanied by appropriate positive and negative controls (see Box 2). Such controls can be relatively easy to obtain (e.g. for IP and ELISA), or exceedingly difficult (e.g. for IHC on human specimens, or chromatin immunoprecipitation). It will thus spare the researcher much pain if they spend some time considering whether appropriate controls are available, before reaching for a cr-Ab.
To offer some examples, for sandwich ELISA or WB-type assays, if reliable and representative recombinant target proteins are available (see Box 2), they can be used to spike cell lysates in order to investigate the selectivity and sensitivity of a given cr-Ab against a complex background. For IHC, cell lines or tissues engineered to over- or under-express the target can be examined on IHC arrays to assess concordant changes in the intensity and distribution of any ‘specific signal’ arising from a cr-Ab (Goodman et al., 2012; Mehta and Wolujczyk, 2009). Protein expression levels may be inferred from the available mRNA expression data for a target; however, it is worth keeping in mind that mRNA expression levels are not consistently representative of protein expression levels (Maier et al., 2009; Wang, 2008). This is precisely why cr-Abs are potentially so valuable for in situ target quantification or semi-quantification (Goodman et al., 2012; Toki et al., 2017). Cell extracts from engineered cells can be used to verify the biochemical expression and distribution of the target, for example, in cellular fractionation studies. There are caveats here, since overexpression can trigger artifactual redistribution or complex formation, and such results should be confirmed in non-engineered systems (Lund-Johansen et al., 2016; Marcon et al., 2015). In addition, both over- and under-expression of proteins may be cytotoxic, resulting in dead cells, and hence this technique cannot always be used to make the required negative- or positive-control cell lines or tissues.
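For the spiking approach mentioned above, a widely used rule of thumb in ligand-binding assays is to calculate the percent recovery of the spiked material and to expect it to fall within roughly 80–120%. A minimal sketch, with all concentrations as illustrative placeholders:

    # Illustrative spike-recovery sketch for a sandwich ELISA- or WB-type assay.
    # All concentrations are made-up placeholders (ng/ml).
    spike_added = 50.0          # known amount of recombinant target added to the lysate
    measured_unspiked = 12.0    # apparent endogenous target in the plain lysate
    measured_spiked = 58.0      # apparent target in the spiked lysate

    recovery_pct = (measured_spiked - measured_unspiked) / spike_added * 100
    print(f"spike recovery: {recovery_pct:.0f}%")   # 92% here; ~80-120% is a common acceptance window

    if not 80 <= recovery_pct <= 120:
        print("recovery outside the usual window: suspect matrix effects or poor selectivity")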
Box 2. Caveats for cr-Ab controls
(1) Controls are experiment-dependent – they cannot automatically be transferred from technique to technique, owing to the differential loss of epitopes or formation of mimotopes.
(2) Native proteins may fold, form complexes, or have post-translational modifications not represented by, and differing from, recombinant forms used as standards or from the cr-Ab immunogen(s).
(3) mRNA levels are often an unreliable reporter of tissue and cellular protein expression levels.
(4) Knockdown is not necessarily knockout.
(5) Overexpressed cellular proteins may distribute differently from proteins expressed at native levels.
(6) Different ‘epitope retrievals’ for IHC retrieve different epitopes!
Cell and tissue arrays are valuable IHC and immunocytochemistry tools, as they allow the direct microscopic comparison of ‘background’ and ‘specific’ signals, which helps to support the veracity of any staining (Goodman et al., 2012). Once verified, a cr-Ab may be used on complex tissues that have been processed in the same manner as the control arrays. The control arrays, or a subset of them with validated over- and under-expression of the target, should be routinely run in parallel with any experimental tissue. Epitope tags engineered onto the target allow the tag distribution to be compared with the imaged distribution of the cr-Ab. A comparison of anti-tag and cr-Ab signals supports the cr-Ab specificity through coincident distributions, levels of expression or biochemical distribution (Skogs et al., 2017). If cr-Abs to different epitopes on the target of interest are available, these can be compared to determine whether the data they produce coincide, with the caveat that, although a positive result is supportive, a negative result may be misleading owing to differential epitope exposure or destruction (Maier et al., 2009; Wang, 2008; Goodman et al., 2012; Toki et al., 2017)! Leaving out the primary cr-Ab as a ‘negative control’ saves reagent, but is not an appropriate IHC control; species- and isotype-matched irrelevant antibodies must be used (Torlakovic et al., 2015; Ward, 2004).
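One simple way to ask whether anti-tag and cr-Ab signals really coincide in imaging data is a pixelwise correlation of the two channels (a sketch only, using synthetic arrays in place of real images; a proper analysis would add background subtraction, masking and an established colocalization metric):

    import numpy as np

    # Sketch: pixelwise Pearson correlation between an anti-tag channel and a cr-Ab
    # channel from the same field; synthetic arrays stand in for real images.
    rng = np.random.default_rng(0)
    target = rng.random((256, 256))                    # stand-in for the tagged target distribution
    anti_tag = target + 0.05 * rng.random((256, 256))  # anti-tag channel, low background
    cr_ab = target + 0.20 * rng.random((256, 256))     # cr-Ab channel, more background

    r = np.corrcoef(anti_tag.ravel(), cr_ab.ravel())[0, 1]
    print(f"pixelwise Pearson r = {r:.2f}")            # high r supports coincident distributions

A high correlation supports, but does not prove, specificity: a mimotope that happens to co-distribute with the target would correlate just as well, which is why the orthogonal controls above remain essential.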
The examples above emphasize the need to specifically design appropriate controls to confirm that any cr-Ab used is F4P in each and every experimental context it might be used in. For further details, the reader is referred to excellent discussions elsewhere (Freedman et al., 2016; Roncador et al., 2016; Torlakovic et al., 2015; Uhlén et al., 2016; Voskuil, 2014).
T is for training
Several features of cr-Abs make their use non-trivial. As discussed above, unequivocal cr-Ab identification, validation with appropriate controls and the full communication of these data are fundamental. Although many university biology courses are superb, it seems that only those researchers scared and scarred by experience tend to be routinely aware of this (Freedman et al., 2016). However, the issues we are facing in the use of cr-Abs are existential and we must address them (Baker, 2015b; Begley and Ellis, 2012; Goodman, 2018).
All antibody users must understand this. In fact, as cr-Abs and the data they are used to generate are so critical to the biological enterprise, I have suggested that graduate students be formally certified in the use of cr-Abs (Goodman, 2018). Trainee pathologists are routinely certified, so why not cell biologists? Perhaps more use should also be made of those organizations that run courses and authoritative workshops on cr-Ab selection and usage, such as those offered by EuroMabNet (https://www.euromabnet.com/meetings).
O is for OEMs
There are many excellent cr-Ab producers and suppliers, but there also appear to be others whose activities range from the genially incompetent to the fraudulent (Baker, 2015a; Cyranoski, 2017; Lowe, 2016). But how can one tell who is who?
First, surprisingly, the company supplying the cr-Ab may neither have made nor tested it, and, in fact, may not be permitted to tell you who did, owing to an Original Equipment Manufacturer (OEM) agreement (Voskuil, 2014). In short, the documentation describing the cr-Ab purchased may well not refer to the material you receive. In addition, an identical reagent may be sold by multiple suppliers under different identifiers (see ‘I is for identify’). Furthermore, even hypothetically identical reagents obtained from multiple suppliers have not necessarily been subjected to consistent and reliable storage and transport logistics, which may also modulate their performance (Voskuil, 2014). To avoid such situations, I encourage users to ask the supplier: (1) whether they made the cr-Ab they are selling; (2) whether they have tested or renamed the particular lot they propose to ship; (3) to supply those data; and, finally, (4) how they maintain cold-chain delivery logistics. If this seems like scare-mongering, the reader may while away a quiet afternoon by comparing the WBs, IHC images and FACS scans on the websites of various suppliers; it is quite easy to find remarkably high degrees of identity for some of the datasets, strongly suggesting that the suppliers have plagiarized information, rather than re-testing the cr-Ab described.
Second, although your interest is in producing good science, companies selling cr-Abs are for-profit entities that exist to make money for their owners and shareholders. Otherwise, they would not commercially survive to provide the many cr-Abs that we are, at least initially, happy to find available. Of course, many producers are scientific leaders and proudly make first-class cr-Abs, to the benefit of all (Goodman, 2018). However, there are, one regrets, other entities who have suffered, to quote an antibody producer I know, ‘moral burnout’, as well as those who, apparently, never had any morals to burn in the first place. Indeed, some suppliers have been found to abruptly leave the community of polyclonal antibody producers (Lowe, 2016; Reardon, 2016). Others, reportedly, forge product labels, scavenge empty antibody vials from laboratory waste bins and ‘re-use’ them by refilling with diluted cr-Ab or with irrelevant fluids, while others market cr-Ab ELISA kits that do not detect what they claim to (Baker, 2015a; Cyranoski, 2017; Prassas and Diamandis, 2014). And such examples seem to represent only the tip of a mountain of dross (Berglund et al., 2008b; Couchman, 2009; Gilda et al., 2015; Goodman, 1989). Such ‘mythological reagents’ can result in direct costs to a laboratory amounting to many thousands of dollars a year (Prassas and Diamandis, 2014), ignoring any effects on other users and on any resulting ‘science’ that may trigger junk-data avalanches through the literature (Andersson et al., 2017), thus clearly impeding scientific progress. Finally, a supplier may be reluctant, perhaps owing to an OEM agreement, to disclose exactly what a cr-Ab has been raised against, citing commercial secrecy. However, such a reagent is scientifically mutilated. It is therefore crucial that a provider states precisely what the cr-Ab is (see ‘I is for identify’).
What can be done about this? Well, one option would be to routinely inform purchasing and legal departments, as well as to disseminate via social media, whenever a dysfunctional antibody has been obtained. However, this approach is limited both by possible legal repercussions and by abusers of social media, who may bias any debate in one direction or the other. Recall that we are talking about quite large amounts of money here.
How can one tell whether a particular cr-Ab is suspicious, before use? We have seen that reliance on provider and/or supplier information may not be enough. A review of the literature to discover where an antibody has been used, and whether the validation and data are credible, would be valuable, and this is now made possible by analysis sites, such as CiteAb (Helsby et al., 2014), Antibodypedia (Björling and Uhlén, 2008) and BenchSci (https://www.benchsci.com/).
However, the horrors of antibody use do not stop with ‘OEM suppliers’ of cr-Abs. Providers of standard reagents (i.e. those perhaps needed to validate and characterize cr-Abs) may also mislead users and so cause distortions of the literature (see ‘C is for communication’ above) (Landry and Gomes, 2016; Quarmby et al., 1998). I urge readers to consider the scientific and economic implications of such imprecise reagents before using their next cr-Ab.
R is for recombinant
Antibodies come in three basic classes: polyclonal, monoclonal and recombinant. Polyclonal antibodies are derived by classical immunization and have heterogeneous binding specificities (Harlow and Lane, 1988). The immune response varies between animals, so polyclonal antibodies vary between batches and thus cannot be unambiguously identified. Furthermore, polyclonal cr-Abs can impetuously disappear from catalogues, replaced by the words ‘this reagent is no longer available’. Of the total of 3.8 million cr-Abs in the CiteAb and BenchSci databases, 500,000 (CiteAb) and 30,000 (BenchSci) appear to have been discontinued (personal communication, CiteAb and BenchSci). Some have shuffled to other suppliers via OEM agreements, possibly even in an identifiable form, but many polyclonal antibodies have simply gone forever. A polyclonal cr-Ab comprises antibodies with a scatter of binding affinities and similar selectivities, as defined by the screening and purification procedure, and so may favor mimotopes over epitopes, resulting in signals that do not necessarily arise from the protein of interest (Gilda et al., 2015). Certainly, polyclonal antibodies do have advantages. They are cheap to make, and they are valuable for the detection of small molecules, such as haptens or drugs, where the ambiguities that arise from the diverse epitopes of proteins are less pronounced; they may also appear useful in paraffin IHC, where those ambiguities are greater, owing to target, epitope and mimotope complexity. But cynics state that two things are inevitable in life: death and taxes; polyclonal antibodies come from individual living animals that die, so the tax paid here is that they are irreproducible. Based on this, I consider polyclonal cr-Abs to be suboptimal reagents for the study of the cell biology of proteins.
Monoclonal antibodies are obtained from immortalized and cloned splenic B-cells of immunized animals. Hybridoma cell lines make single antibodies (Köhler and Milstein, 1975), and these lines can provide a reproducible and identifiable resource. They are moderately expensive to make, but they are in principle ‘immortal’ (Borrell, 2010; Freedman et al., 2016; Yu et al., 2015). However, rodent hybridomas frequently come from inbred murine populations with limited antibody diversity, which restricts cross-reactivity with mouse antigens, a frequent object of analysis, and they can be destabilized by their unusual genetic complement (Barnes et al., 2003). Monoclonal antibodies from rabbits have a number of advantages (Pytela et al., 2008; Spieker-Polet et al., 1995). Rabbits are outbred, have large spleens and produce high-affinity antibodies. Because rabbits are evolutionarily distant from both rodents and humans, their antibodies may react strongly with proteins from both these species (Goodman et al., 2012; Kodangattil et al., 2014).
Recombinant monoclonal antibodies are made by clonal selection from phage, bacterial, fungal or mammalian antibody libraries, or by grafting cDNA encoding the complementarity-determining regions (CDRs) from an existing hybridoma into the DNA of a non-variable IgG framework (Hoogenboom, 2005; McConnell et al., 2012). The antibody clones can be expressed in stable producer cell lines (Lo et al., 2014). It is highly desirable, and no longer especially arduous, to sequence antibody genes and so immortalize ‘good’ hybridomas before converting them into recombinant antibodies (Babrak et al., 2017; Chon and Zarbis-Papastoitsis, 2011). Although recombinant antibodies are relatively expensive to generate compared with polyclonal and conventional monoclonal antibodies, they are well defined and immortalized by their DNA sequences, thus fulfilling the criteria of being identifiable, reproducible and unique (Baker, 2015a; Bradbury and Plückthun, 2015b), which potentially outweighs the cost disadvantage.
In short, for a well-performing antibody, the issues are affinity and specificity, defined during immunization and selection, as well as reproducibility and identifiability, defined by the class of antibody. The selectivity and identity of polyclonal antibodies are inconsistent, varying between batches and animals. Although a monoclonal antibody is inherently reproducible, as it originates from a clonal hybridoma, and can be identifiable, it may nevertheless be variable (Barnes et al., 2003; Bradbury et al., 2018), or, as a cell line, can be lost or become contaminated. In contrast, recombinant antibodies from a given producer cell line can be fully defined by their DNA sequence, and thus be entirely reproducible, identifiable and immortal. Therefore, the use of recombinant reagents would ensure that scientists are subsequently able to repeat any experiments. Producers rightly fear that published antibody sequences may be pirated by the unscrupulous. But sequence access and usage could be controlled, for example, by using blockchain technology, as suggested for avoiding a related abuse of sequence data from ethnic biota (Anon, 2018). Perhaps such technology might allow recombinant cr-Abs to be simultaneously fully transparent and entirely protected?
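As one hypothetical illustration of how sequence-defined reagents could be simultaneously protected and verifiable, a producer could publish only a cryptographic fingerprint of the variable-domain sequences; anyone who is later given the sequence can recompute the fingerprint and confirm the reagent's identity without the sequence itself ever being disclosed publicly (a sketch only, with placeholder sequences, and not tied to any existing registry or blockchain scheme):

    import hashlib

    # Hypothetical sketch: a tamper-evident fingerprint for a recombinant antibody,
    # derived from its heavy- and light-chain variable sequences (placeholders below).
    vh = "EVQLVESGGGLVQPGG..."   # placeholder heavy-chain variable-domain sequence
    vl = "DIQMTQSPSSLSASVG..."   # placeholder light-chain variable-domain sequence

    fingerprint = hashlib.sha256((vh + "|" + vl).encode("ascii")).hexdigest()
    print(f"antibody sequence fingerprint: {fingerprint[:16]}...")

A public ledger, blockchain-based or otherwise, could then link such fingerprints to RRIDs, lots and validation data, giving the controlled transparency envisaged above.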
The future: are there solutions?
I do not want to give the impression that my sometimes abrasive words reflect a patronizing or holier-than-thou attitude – or that I am blaming the community. Until very recently, I routinely failed to practice many things that I ardently preach here: mea culpa. I was simply pig-ignorant. Ignorant of the scale and prevalence of the problems, and of the possible solutions. Now I am aware and scared, so I ask you too, the reader, to act. The many issues surrounding cr-Abs mentioned above may ruin budding scientific careers. If you publish data using cr-Abs that have not been cautiously identified, validated and controlled as F4P, others may be seduced into a swamp of irreproducibility (Andersson et al., 2017). Even more importantly, the increasing availability of new cr-Abs will only increase the problems described in this article, and make the task of selecting optimal cr-Abs overwhelming. cr-Abs will then become one of Rumsfeld's ‘unknown unknowns’ – we won't know what we don't know.
Given that some suppliers seem to be reliably unreliable, why not take legal action to remedy the situation? Or perhaps someone already has, and it is not widely reported? The individual costs per antibody are rather low, so the effects of the individual ‘flea bites’ of failed cr-Abs on the career of a researcher appear minor. However, the GBSI estimates that yearly costs in excess of $10^10 accrue from the use of bad cr-Abs in the US alone. That is a large flea indeed. It is notable that we are only semi-formally aware of the scale of the problems in the USA, and no data yet exist, for instance, for the UK, Europe or Asia. It would be valuable to survey the situation in these regions.
There is no central entity that acts on behalf of the community to test, validate and verify cr-Abs. There is clearly a need for such an organization, and it should be internationally recognized and funded. Antibodies that test as valid would be certified. I can visualize several simple crowd-funded models that could work towards this goal. This gap also emphasizes the lack of a reliable and independent source of validated standard target proteins (Gilda et al., 2015; Quarmby et al., 1998). Given that there is a market for nearly 4×10^6 cr-Abs, it cannot be beyond human ingenuity to set up an international Public Interest Entity, which, on a not-for-profit basis, produces precisely defined proteomic targets, for example the principal components of the human and mouse proteomes, and verifies antibodies that recognize them using the ‘pillar’ validation technologies (Colwill et al., 2011; Edwards, 2016; Edwards et al., 2017; Zhong et al., 2015; Weller, 2016). These antibodies and target standards could then be certified and maintained as standard open-source reference pairs, routinely available under commercial terms, to help validate and align the specificity of other cr-Abs. A similar venture based on crowd-sourcing defined the CD antigens on lymphocytes, which enabled that community to conduct productive research, rather than chasing ill-defined cell-surface proteins (Bernard and Boumsell, 1984).
Clearly, these are long-term and costly goals, but I question whether they are actually costlier than the current waste of resources, public and private, intellectual and financial, personal and institutional, on ineffective and badly validated cr-Abs. Would establishing such an entity really be more costly than the generation of over 10^4 antibodies against actin, or over 5000 against caspase 3 – to cite only two of the many extreme examples from CiteAb (Goodman, 2018; Helsby et al., 2014)?
Many technologies are emerging that may eventually make non-recombinant antibodies redundant in many applications, especially cr-Ab-free approaches based on MS and on recombinant affinity binders built on structural frameworks independent of any immune system (e.g. DARPins, anticalins, knottins, avimers and affimers; Simeon and Chen, 2018) (Gilbreth and Koide, 2012; Weidle et al., 2013).
However, at present, MS technologies are much less sensitive than robust cr-Abs, and their capital costs, like those of recombinant affinity binders, are far higher. Hence, we will need cr-Abs as pivotal reagents for several years yet. Thus, as cell biologists, it is in all our interests to ensure that the data we produce are optimal, through the use of correctly validated, identified, controlled and reported cr-Abs.
The disasters of the reproducibility crisis are being funded largely out of the public purse and are also a huge burden on commercial research budgets. Together, this reduces the money available to do effective science, delays effective medicines from reaching us all, and incidentally increases their cost when they do. Clearly, it is scientifically, socially and politically utterly unacceptable that we should continue to suffer from bad cr-Abs. The solutions and any future VICTORy are in our hands.
Acknowledgements
I acknowledge the generosity of the Max-Planck Society and Merck KGaA, who for many years supported work that involved characterizing a large number of cr-Abs. I thank the very many members of the antibody community who have tolerated my horrified cries, but especially Professors Andreas Plückthun (Uni. Zürich), Michael Taussig (Cambridge Protein Arrays), Andrew Chalmers (CiteAb) and Maurice Chen (BenchSci) for listening and clarifying many aspects of the antibody problem, and providing access to unpublished data.
Footnotes
Funding
This research received no grant from any funding agency in the public, commercial or not-for-profit sectors.
References
Competing interests
The author declares no competing or financial interests.