Recent technological advances have made microscopy indispensable in life science research. Its ubiquitous use, in turn, underscores the importance of ensuring that microscopy-based experiments are replicable and that the resulting data are comparable. While there has been a wealth of review articles, practical guides and conferences devoted to the topic of maintaining standard instrument operating conditions, the paucity of attention dedicated to properly documenting microscopy experiments is undeniable. This lack of emphasis on accurate reporting extends beyond life science researchers themselves, to the review panels and editorial boards of many journals. Such oversight at the final step of communicating a scientific discovery can unfortunately negate the many valiant efforts made to ensure experimental quality control in the name of scientific reproducibility. This Review aims to enumerate the parameters that should be reported in an imaging experiment by illustrating how their inconsistent application can lead to irreconcilable results.

Reproducibility and replicability form the foundation of scientific inquiry. These cornerstones were recently put under the spotlight by multiple journals and funding agencies to address a growing problem in scientific data integrity (Baker, 2016; Collins and Tabak, 2014; Landis et al., 2012; National Academies of Sciences, Engineering, and Medicine, 2019). The findings from these investigations cast doubt on the validity of a large portion of life-science studies, threatening to erode public confidence in scientific findings. Several factors contribute to the problem of reproducibility in the life sciences – a lack of transparency, non-uniformity of the reagents and specimens used, and inconsistent standardization of instrument quality and assay conditions (Deagle et al., 2017; Halter et al., 2014; Jonkman, 2020; Jonkman et al., 2014; Jost and Waters, 2019; Lee and Kitaoka, 2018; Waters, 2009). While these issues have been investigated by several organizations, including funding agencies (Collins and Tabak, 2014), and even congressionally directed studies in the USA (National Academies of Sciences, Engineering, and Medicine, 2019), their recommendations have focused primarily on cell lines, specimen sharing, data sharing and computational code dissemination. Considering that imaging data make up 70% of the figure panels presented in many developmental and cell biological journals (Marqués et al., 2020), widespread cases of erroneous and incomplete reporting will ultimately render a large portion of published results non-replicable.

There are several reasons why microscopy parameters are often mis- or under-reported. One of the most important is a general lack of appreciation of microscopy fundamentals, and of how microscope settings and parameters can influence data and the conclusions subsequently deduced from them. While many courses are now available, basic microscopy instruction is simply not standard in most graduate school curricula. In fact, many students have no choice but to rely on week-long (or shorter) courses to supplement their training. While mentor- or peer-directed guidance may complement these efforts, such ‘on-the-job training’ is highly variable and less effective than comprehensive, rigorous and standardized microscopy education. While many biologists routinely leverage microscopy as an important experimental tool, the lack of curricular emphasis perpetuates the mistaken message that the details of microscopy experiments are less critical than those of other biological assays.

Another reason for mis- or under-reporting is that space can be limited in many journals. Coupled with the general lack of microscopy knowledge, this constraint leaves researchers unaware of the most salient attributes to report. We also appreciate that it is not feasible to describe every possible detail of the microscope configuration. However, better knowledge of the critical components of a microscope can lead to more succinct, yet accurate, ways of documenting the methods.

The problem of poor reporting is exacerbated by how microscopy experiments are typically performed compared to other biological assays. Optical microscopy is often performed by the biologists themselves after obtaining initial training at a core facility (Sánchez et al., 2011; Wallrabe et al., 2014). This self-help nature of microscopy, coupled with the widespread availability of ‘one-click’ commercial microscopes, creates unpredictable usage patterns from one experiment to the next. The ease with which modern microscopes quickly generate eye-pleasing images also lowers the appreciation for additional technical verification. In contrast, techniques such as mass spectrometry or deep sequencing are routinely performed by core facility personnel, who often also write the relevant Methods section of the manuscript. Consequently, non-expert microscope users are not trained to recognize the nuances of microscope operation, and the oversights and inconsistencies can snowball, culminating in an improperly reported imaging experiment.

While reporting of microscopy methods is necessary, there is no single prescription for how to solve this problem. We acknowledge that, as microscopy technology becomes more advanced, understanding the instruments and how to handle the complex data being produced becomes less trivial. While proper reporting benefits the scientific community at large, it also offers many intrinsic benefits that may not be immediately apparent. First, repeatability and replicability of imaging experiments can be attained much more efficiently and accurately when the microscope configuration is recorded, thus saving time and effort. In addition, by meticulously keeping records of important microscopy settings, researchers also become more aware of how the various parameters will affect the experimental outcome. This guide therefore aims to provide instructive examples that illustrate how various microscope parameters can impact the image data, and how inaccurate and insufficient documentation impedes or even prevents experimental replicability.

Microscope complexity and the lack of user expertise can jointly present a formidable challenge when identifying which optical components and settings directly contribute to the acquired image. An in-depth treatment of the optical physics of modern instruments and their proper usage is beyond the scope of this Review; however, we refer readers to the many excellent publications on the subject (Jonkman et al., 2020; Jonkman, 2020; Murray et al., 2007; North, 2006; Pawley, 2006; Stuurman and Thorn, 2015; Waters, 2009).

In their analysis, Marqués et al. found that less than 20% of the surveyed publications contained enough information to support reproducibility of the experiments in question (Marqués et al., 2020). For example, a description such as ‘Images were acquired using a Zeiss LSM 880 confocal microscope’, with no further specification, is woefully inadequate, but unfortunately common. To aid the reader in avoiding such pitfalls, we adopt an intuitive way to illustrate how each component within a representative microscope impacts the final image and summarize why its associated parameters must be reported. We use a simplified microscope system as a technical ‘roadmap’ that the reader can follow. This will help maintain vigilance to ensure proper documentation during the image acquisition process, which is essential for the subsequent inclusion of imaging methods in a publication.

This Review focuses on the common optical imaging techniques of brightfield, widefield fluorescence and point-scanning confocal microscopy. Several sections are also applicable to other techniques, such as darkfield, polarization or light-sheet microscopy. While it is not feasible to cover all the nuances of these techniques, the reader is encouraged to extrapolate the information presented here to other optical methods. We also advise end-users to work directly with their local microscopy facility or instrument manufacturers if assistance is needed for proper reporting. We will focus our discussion on the light source, excitation and emission paths, objective lens and sample, as well as the detector (Fig. 1).

Fig. 1.

A simplified, model microscope to guide reporting strategies for imaging experiments. Nearly all microscopes will include one or more (1A,1B) light source(s), (2) excitation and emission filters, (3) objective lens(es), (4) the sample, and (5) detector(s). The green and red lines denote fluorescence excitation and emission light paths, respectively. Each component should be well documented to maintain experimental replicability.

Light source

A sensible place to start is the light source. As it is the signal-generating component of any microscope, it is imperative to describe its most important features, namely power and wavelength. Most commercial optical microscopes display the illumination intensity (or power) as a percentage (%) of the maximum available. However, reporting power as a percentage does not convey the actual light intensity experienced by the sample. A more informative and replicable way to report light power is to measure it directly (usually in mW or µW) or as a power density (W/cm²) (Jonkman et al., 2020). The power of the excitation light should be measured as close to the sample location as possible to reflect the actual energy the sample experiences. This is typically done using a power meter, an essential tool available in any imaging facility. For lower numerical aperture (NA) air objective lenses, measuring power at the focal plane of the objective is feasible. However, as the NA increases, or if the objective requires oil or water immersion, it is better to remove the objective from its turret and measure the light power in the empty cavity. When reporting measured power, be sure to include where in the light path the measurement was performed. For example, ‘Samples were illuminated with a 488 nm laser (MPB Communications), at 30 µW nominal power measured at the objective focal plane’. This gives a benchmark that biologists can use across experiments to ensure accurate replication (Jonkman et al., 2020). We encourage biologists to ensure that illumination sources are regularly monitored for changes in power due to age, malfunction or misalignment.
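To make the conversion from a measured power to a power density concrete, below is a minimal sketch in Python; the 30 µW reading matches the example above, whereas the illuminated field diameter is an assumed value used only for illustration.

```python
# Minimal sketch (assumed field size): convert a power-meter reading into a
# power density (W/cm^2) so that illumination can be compared across instruments.
import math

measured_power_W = 30e-6        # 30 uW, measured at the objective focal plane
field_diameter_m = 25e-6        # assumed illuminated field diameter of 25 um

illuminated_area_cm2 = math.pi * (field_diameter_m / 2) ** 2 * 1e4   # m^2 -> cm^2
power_density = measured_power_W / illuminated_area_cm2

print(f"Power density: {power_density:.1f} W/cm^2")   # ~6.1 W/cm^2 for these values
```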

Many light sources are ‘broadband’ in nature – that is, they emit over a range of wavelengths. These include halogen lamps and mercury or xenon arc lamps. Each of these light sources has its own advantages and disadvantages. Their broad emission allows for a simpler instrument design by reducing the number of hardware components, but requires optical filters in the excitation path to selectively transmit only a narrow wavelength range of light (discussed below). An additional light source is the single-wavelength light-emitting diode (swLED), which is commonly combined into arrays called ‘light engines’. Similar to a laser, an swLED illuminates around one center wavelength, but with a wider bandwidth (10 nm or more, compared to <0.1 nm for a laser). When reporting an swLED source, it is essential to specify the manufacturer and the center wavelength used in imaging.

Lasers represent another common fluorescence illumination source, which emit light over a very narrow wavelength range, usually <0.1 nm. They can deliver a significantly higher power density compared to broad-spectrum or swLED light sources. This is particularly useful for experiments where fluorophore abundance or brightness is limited. For laser sources, it is important to report the wavelength of illumination light. While a distinction of tens of nanometers may seem insignificant, a small change in illumination wavelength can lead to noticeable differences in the resulting image.

For example, Fig. 2 compares the effect of using 514 nm versus 488 nm excitation light to image a cell expressing vimentin tagged with yellow fluorescent protein (YFP). The choice of illuminating with 514 nm (Fig. 2A) or 488 nm (Fig. 2B) is significant as 488 nm illumination results in 50% overall reduction in emitted photons compared to 514 nm at the same total laser power. This could affect experimental outcomes in several ways. For instance, fine structures that are apparent in the inset image shown in Fig. 2C (acquired using 514 nm excitation) are not nearly as apparent when illuminated with 488 nm light (Fig. 2D). Such observational discrepancies could result in possibly conflicting biological hypotheses.
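The underlying reasoning can be sketched numerically. The excitation spectrum values below are illustrative placeholders (not a measured YFP spectrum); interpolating a fluorophore's published spectrum at the available laser lines gives a quick estimate of the relative signal to expect.

```python
# Minimal sketch (illustrative spectrum values): estimate relative excitation
# efficiency of a fluorophore at two candidate laser lines by interpolation.
import numpy as np

wavelengths_nm = np.array([450, 470, 488, 500, 514, 530])   # hypothetical spectrum
rel_efficiency = np.array([0.05, 0.20, 0.48, 0.75, 1.00, 0.60])

for laser_nm in (488, 514):
    eff = np.interp(laser_nm, wavelengths_nm, rel_efficiency)
    print(f"{laser_nm} nm laser: relative excitation efficiency ~{eff:.2f}")
```

With these placeholder values, 488 nm excitation would yield roughly half the signal of 514 nm excitation, in line with the difference illustrated in Fig. 2.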

Fig. 2.

Small changes in excitation wavelength alter measured photon emission. Compared here are images taken by J.S.A. and S.K. using 514 nm (A,C) and 488 nm excitation (B,D) of a PTK-2 cell transfected with a construct bearing vimentin tagged with YFP. Samples were grown on #1.5 coverslips, fixed in 2% paraformaldehyde, and imaged in PPD mounting medium. Images were acquired on a Zeiss LSM 880 laser scanning confocal microscope equipped with GaAsP point detectors, with identical laser powers at each wavelength (9.3 µW), emission bandpass (517–615 nm), pinhole size (1.37 AU), gain (700), offset (zero) and dwell time (0.33 µs), using a Zeiss Plan-Apochromat 63×/1.4 NA Oil DIC M27 objective lens. Note that the image in C, corresponding to the area highlighted by the yellow box in A, reveals structural details (denoted by arrows) owing to its higher signal; these details are not apparent in the corresponding image in D, which corresponds to the area highlighted by the yellow box in B. Scale bars: 10 µm (A,B); 5 µm (C,D).

While there may be practical reasons for exciting a fluorophore at suboptimal wavelengths, such as laser availability on an instrument, the end-user should be aware of how this potentially changes the data (Jonkman et al., 2020). For example, using a less-optimal laser may require increased intensity in order to achieve similar signal in the final image. This, in turn, can induce photodamage and other undesired biological changes, complicating interpretation of the data. Incomplete reporting of these attributes further skews the outcome. There are a number of excellent online tools, such as those listed by Aaron et al. (2019), that provide optical characteristics for many fluorophores. These valuable resources can help readers optimize imaging experiments, from the laser lines to the filters, to maximize excitation efficiency and minimize crosstalk.

Excitation and emission filters

Both excitation and emission light need to be properly attenuated, filtered and/or otherwise manipulated before image creation. As excitation light leaves the source, it is essential to select for the desired wavelength, even if the light source is a laser. This is most commonly done with a filter, an optical element that transmits a specific wavelength or group of wavelengths. More complex instruments, such as confocal microscopes, use prisms or tunable filters to allow for customized transmission of excitation wavelengths.

Likewise, the emitted light from the excited sample must be transmitted to the detector, but the excitation and emission paths must first be separated. This is commonly done with a dichroic mirror, which reflects one wavelength range while transmitting another. Prior to entering the detector, the emitted light is further gated so that only the desired wavelength range is acquired. This is accomplished by an emission filter or tunable bandpass prism. Additionally, confocal microscopes have an adjustable pinhole prior to the detector. Rather than selecting based on wavelength, a pinhole acts to spatially filter the emission signal by rejecting out-of-focus light (Pawley, 2006). For most imaging experiments, the optimal pinhole size is 1 Airy unit (AU), which corresponds to the size at which the microscope has the best balance of optical sectioning and signal strength (Pawley, 2006), although scenarios exist that may require adjusting the pinhole away from this setting. It is therefore imperative to specify the pinhole size in AU, as will be illustrated later.

Properly describing both the excitation and emission paths prevents serious errors, such as fluorophores being cross-excited with improper illumination wavelength and filter combinations (for example, see Fig. 2). Furthermore, these microscope settings are necessary to properly interpret and compare any intensity-based measurements. Unfortunately, not all microscope software clearly presents the elements of the light path; however, the software will record the optical configuration in the metadata of every captured image. When manual selection of filter elements is required, users should document either the wavelength range used (e.g. 500–540 nm) or the center wavelength and bandwidth (e.g. 520±20 nm) of all filters. For example, ‘488 nm excitation light was selected by a single-bandpass filter (488±10 nm, Chroma Technologies)’. For more complex light paths, researchers should seek assistance from facility staff or microscope manufacturers to properly describe the light path components. If the software captures all necessary details in the image metadata, it is helpful to summarize the instrument parameters in the Methods section similar to the example above, but the user should also include the software metadata as supplementary information.
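As a point of reference, the physical size corresponding to 1 AU can be estimated from the emission wavelength, the objective NA and the magnification between the sample and the pinhole; the sketch below uses assumed values for illustration only.

```python
# Minimal sketch (assumed values): approximate size of 1 Airy unit (AU) at the
# confocal pinhole plane, useful for sanity-checking a reported pinhole setting.
emission_wavelength_um = 0.520      # assumed emission wavelength (520 nm)
numerical_aperture = 1.4            # objective NA
magnification_to_pinhole = 63       # assumed magnification between sample and pinhole

airy_diameter_sample_um = 1.22 * emission_wavelength_um / numerical_aperture
airy_diameter_pinhole_um = airy_diameter_sample_um * magnification_to_pinhole

print(f"1 AU in the sample plane: {airy_diameter_sample_um:.2f} um")
print(f"1 AU at the pinhole:      {airy_diameter_pinhole_um:.1f} um")
```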

Objective lens

Arguably, the single most important optical component in a microscope is the objective lens. In addition to playing a central role in magnifying and resolving biological features, an objective lens serves as the conduit through which excitation light is directed to the specimen and the ensuing emitted light is collected. This complex optical element has a number of properties that determine the characteristics of the image and that must be reported when presenting microscopy data. At minimum, these are the objective manufacturer, magnification and numerical aperture, but they may also include the immersion medium and any aberration correction designations, such as flat-field correction (Plan), chromatic aberration correction (Achromat), spherical aberration correction (Fluorite), or a combination of all three (Plan Apochromat) (more information available at www.microscopyu.com/microscopy-basics/introduction-to-microscope-objectives). For example, ‘Samples were imaged using a Nikon Apo LWD 25×, 1.1 NA water-immersion objective’.

Aside from the manufacturer, the most self-explanatory attribute of an objective lens is its magnification. This parameter describes the size of the resulting image relative to the specimen itself. The below equation describes magnification (M) as a ratio of objective (F) and tube lens (L) focal lengths:

M = L / F

A tube lens is present in all modern microscopes and serves to refocus the light collected by the objective lens onto a detector. As can be calculated from the above equation, the true magnification of an image may not reflect what is printed on the side of the objective if it is used in conjunction with a tube lens for which it was not designed. It is also important to note that many microscopes feature an additional ‘zoom’ lens (commonly 1.25× or 1.5×) that effectively extends the tube lens focal length and increases the image magnification beyond what the objective lens states. This is not to be confused with the digital zoom that many software packages offer. In practice, the exact magnification for a given image should be reported as a scale bar that has been derived from a calibration standard, such as a stage micrometer, or from the internal calibration of the instrument control software.
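As a worked example of the relationship above, the following minimal sketch (all values assumed for illustration) shows how a mismatched tube lens and an intermediate zoom lens change the effective magnification, and hence the pixel size that a scale bar should reflect.

```python
# Minimal sketch (assumed values): effective magnification with a non-matching
# tube lens and a zoom lens, and the resulting pixel size in the sample plane.
objective_magnification = 60       # nominal magnification printed on the objective
design_tube_lens_mm = 200          # tube lens focal length the objective was designed for
actual_tube_lens_mm = 180          # tube lens focal length of the microscope in use
zoom_lens = 1.5                    # optional intermediate 'zoom' lens
camera_pixel_size_um = 6.5         # physical pixel size of the camera

effective_mag = objective_magnification * (actual_tube_lens_mm / design_tube_lens_mm) * zoom_lens
pixel_size_um = camera_pixel_size_um / effective_mag

print(f"Effective magnification: {effective_mag:.1f}x")       # 81.0x, not the nominal 60x
print(f"Pixel size at the sample: {pixel_size_um:.3f} um")
```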

Importantly, the magnification of an objective lens does not determine the resolution of an image. In practical terms, resolution is determined predominantly by the numerical aperture (NA) of the objective lens. For example, a higher NA objective may be able to measure the spacing of myosin proteins, whereas a lower NA (but similar magnification) lens cannot (Khuon et al., 2010). Together with the magnification, the NA is printed on the side of every objective lens. Simply put, failure to report the NA renders any imaging experiment non-replicable.

To illustrate this point, Fig. 3 features a PTK2 cell that has been immuno-stained for myosin II. This cell was imaged twice at identical 60× total magnification, but with objectives of different NA. Comparison of the high NA image (NA 1.4, Fig. 3A) to the low NA image (NA 0.6, Fig. 3B) shows that despite identical magnification, the resolvable image content differs significantly (Fig. 3C–E). Consequently, poor reporting of NA, and hence resolution, will render it impossible to attribute any differences in ultrastructural observations to optical or biological factors.
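For orientation, the Rayleigh criterion provides a rough estimate of how the two NA values in Fig. 3 translate into lateral resolution, assuming green emission at 520 nm (an assumed value).

```python
# Minimal sketch: Rayleigh-criterion lateral resolution (d = 0.61 * lambda / NA)
# for the two objectives compared in Fig. 3, assuming 520 nm emission.
emission_wavelength_um = 0.520

for na in (1.4, 0.6):
    d_um = 0.61 * emission_wavelength_um / na
    print(f"NA {na}: ~{d_um * 1000:.0f} nm lateral resolution")
```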

Fig. 3.

The effect of numerical aperture on resolving biological structures. (A) Shown here, and in the enlarged view in C, is an image of a PTK-2 cell immunostained (using a 1:200 dilution of Abcam H11 primary antibody and a 1:200 dilution of Alexa Fluor 488-tagged secondary antibodies) for pan-myosin II, acquired on a Nikon TiE widefield microscope with a Plan Apo 60×/1.4 NA objective lens. (B) Image of the same PTK-2 cell shown in A, imaged at 60× total magnification with a lower NA objective, using a Nikon S Plan Fluor ELWD 40×, 0.6 NA objective lens in conjunction with a 1.5× zoom lens. (D) Enlarged view of the highlighted area in B. (E) The pixel intensity profile (a.u., arbitrary units) over the distance indicated by the pairs of arrows in C and D resolves myosin II subunits in the image taken with the 1.4 NA lens (solid line), but fails to do so with the 0.6 NA lens (dotted line). Cells in A and B were illuminated by a white-light LED (Lumencor), with 3.1 mW power at the sample over a wavelength range of 465–495 nm. Emission was collected from 515–555 nm and quantified using an EMCCD camera (Andor, DU-885). Exposure time and camera gain were 82 ms and 50 for A, and 200 ms and 150 for B, respectively. Scale bars: 20 µm (A,B); 5 µm (C,D).

Acquisition parameters

Aside from the parameters discussed so far, there are a number of other details that should be reported that may not directly involve a specific microscope component. First, a complete description of the sample preparation protocol is of course essential. However, the sheer diversity of these techniques prevents their comprehensive discussion here (Halpern et al., 2015; Spector and Goldman, 2005; Wheatley and Wang, 1998). Nevertheless, certain sample preparation details are relevant to imaging and should be reported. These include the sample mounting medium, coverslip thickness, and temperature and CO2 concentration for live-cell imaging.

Second, instruments such as laser-scanning confocal microscopes allow the user to average the signal on a line-by-line (or image-by-image) basis, as well as to adjust the digital pixel resolution. While these settings are often used to optimize image acquisition time, they can have a profound effect on the resulting data. Users should summarize these settings in the Methods section or figure legends, especially if the values differ between figures.

Finally, there are settings unique to three-dimensional (3D) and/or time-lapse data sets. For 3D images, it is essential to report the step size, or Z-interval. While many commercial confocal microscopes will automatically select an optimal Z-interval given the axial resolution of the instrument, users can override this setting. Changing the step size alters the Z-spatial resolution and will affect the accuracy of spatial information in the image. Likewise, for time-lapse experiments, it is essential to specify the interval between imaging time points. Importantly, this interval does not, in general, equal the detector exposure time (to be discussed later), even if image acquisition proceeds without an intervening pause between time points. Inconsistencies in time sampling can cause events of interest to be measured inaccurately (Aaron et al., 2019), or result in needless photobleaching and sample toxicity. Either of these will ultimately influence the data and the subsequent conclusions.
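For 3D acquisitions, a rough Nyquist-based check of the Z-interval can be sketched as below; the formula and values are approximations (not the authors' protocol), and the instrument software's suggested optimal interval, where available, should take precedence.

```python
# Minimal sketch (approximate formula and assumed values): estimate axial
# resolution and a Nyquist-satisfying Z-step for a high-NA confocal acquisition.
emission_wavelength_um = 0.520
refractive_index = 1.518            # assumed immersion oil
numerical_aperture = 1.4

axial_resolution_um = 2 * emission_wavelength_um * refractive_index / numerical_aperture ** 2
nyquist_z_step_um = axial_resolution_um / 2

print(f"Approximate axial resolution: {axial_resolution_um:.2f} um")
print(f"Nyquist Z-step:               {nyquist_z_step_um:.2f} um")
```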

Detectors

Once an optical image has been formed, it can only be quantified via a digital detector (Lambert and Waters, 2014; Stuurman and Vale, 2016). Detectors can be divided into two groups – cameras and point detectors. Cameras are used when it is necessary to collect information from more than one location in the sample simultaneously, as is the case in widefield, light-sheet or spinning-disc microscopy. Point detectors are used when each point in the field of view is illuminated and imaged sequentially, such as with laser-scanning confocal or two-photon microscopy. Each type of detector has unique features and can even have different wavelength sensitivities. Owing to these differences, it is necessary to describe the manufacturer and type of detector in addition to its acquisition parameters (discussed below). For example, ‘Images were captured using an sCMOS camera (manufacturer and model information)’.

The most commonly varied acquisition parameter is the amount of time a detector is allowed to accumulate emitted light before it is digitized. For chip-based cameras, such as charge-coupled devices (CCDs) or complementary metal-oxide semiconductor (CMOS) sensors, this is termed the exposure time; for point detectors in confocal microscopy, it is described as the dwell time, which is the time spent illuminating a single pixel. This single acquisition parameter can dramatically change the interpretation and quantification of an image. Therefore, exposure/dwell time should be accurately reported in time units for all captured images so that they can be properly quantified and compared.
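The distinction between per-pixel dwell time and the time needed to acquire an entire frame can be made explicit with a small calculation; the image dimensions and averaging below are assumed values for illustration.

```python
# Minimal sketch (assumed values): relate per-pixel dwell time to the minimum
# time needed to scan one confocal frame (scanner turnaround is ignored).
dwell_time_s = 0.33e-6              # 0.33 us per pixel, as in Figs 2 and 4
pixels_x, pixels_y = 1024, 1024     # assumed frame dimensions
line_averaging = 2                  # assumed number of line averages

frame_time_s = dwell_time_s * pixels_x * pixels_y * line_averaging
print(f"Minimum time per frame: {frame_time_s:.2f} s")   # ~0.69 s for these values
```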

Photons that impinge on a detector produce a proportional voltage signal that is then related to a pixel intensity value in the final image. How the voltage and pixel intensity value are related is described by the detector gain. A high gain results in relatively few photons translating into a high pixel intensity value, and vice versa. However, increasing the detector gain also comes at the cost of increasing noise (Stuurman and Vale, 2016). Thus, gain directly affects the signal-to-noise ratio (SNR) of an image and should be reported. It is important to note that gain is usually expressed on an arbitrary scale that may be unique to the detector manufacturer and model, which further underscores the importance of specifying the detector as mentioned above.

Point detectors feature adjustable parameters analogous to those of their camera counterparts, such as gain and dwell time. However, users can also adjust another setting called the detector offset. This setting is useful so that photon counts below a given threshold are not mistaken for signal, thereby maximizing the dynamic range of the detector. However, significant changes to the offset setting of a detector can also ‘clip’ low-signal pixels, giving them zero intensity. Users should be mindful of any changes to the offset value; artificially changing pixel values to zero effectively discards data from those pixels and removes real signal from the raw data.
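A toy model (illustrative only, not a physical detector model) makes the interaction of gain and offset tangible: with an aggressive offset, dim pixels are clipped to zero and their signal is lost from the raw data.

```python
# Toy model (illustrative only): map detected photons to pixel values with a
# gain and an offset, and show how a large offset clips low-signal pixels to zero.
import numpy as np

rng = np.random.default_rng(0)
photons = rng.poisson(lam=5, size=10)     # a dim signal, ~5 photons per pixel
gain = 50.0                               # arbitrary-scale gain (assumed)
offset = -150.0                           # offset applied as a baseline shift (assumed)

pixel_values = np.clip(gain * photons + offset, 0, None)  # negative values clip to 0
print("photons:     ", photons)
print("pixel values:", pixel_values)
```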

In Fig. 4, we consider an image of a PTK2 cell immuno-stained as described in Fig. 3 (Fig. 4A). Fig. 4B–D show images of the same cell, but with altered detector gain, offset and confocal pinhole size, respectively. Fig. 4E is an image of the same cell now immuno-stained for phosphorylated myosin regulatory light chain (RLC-p) and displayed in a second color channel. To demonstrate how changing these settings could have a ‘real-world’ effect, we compared the levels of myosin II and RLC-p (Fig. 4B′–D′) and calculated the Manders’ overlap fraction (M1) of the two proteins (Fig. 4F) (Aaron et al., 2018). Fig. 4F demonstrates a variation of up to 75% depending on which image was used to compute the coefficient. This demonstrates how changing gain, offset or pinhole settings between images can significantly affect the final biological conclusion. It could result in one researcher concluding that only a minority of myosin II is phosphorylated (Fig. 4C′ and F), while another concludes that nearly all of it is phospho-activated (Fig. 4D′ and F). It is acceptable to change these parameters to optimize the image, but care must be taken to keep these values consistent within an experiment to avoid inaccurate interpretations.
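For readers who wish to reproduce this type of measurement, the sketch below shows one common, threshold-based definition of the Manders' M1 fraction applied to toy arrays; the analysis in Fig. 4 used the code cited above (Aaron et al., 2018), so this is only an illustrative stand-in.

```python
# Minimal sketch (toy data): Manders' M1 as the fraction of channel-1 intensity
# located in pixels where channel 2 exceeds a threshold.
import numpy as np

def manders_m1(ch1, ch2, ch2_threshold):
    """Fraction of ch1 intensity residing in pixels where ch2 > ch2_threshold."""
    ch1 = ch1.astype(float)
    return ch1[ch2 > ch2_threshold].sum() / ch1.sum()

rng = np.random.default_rng(1)
myosin = rng.integers(0, 255, size=(64, 64))     # stand-in for the myosin II channel
rlc_p = rng.integers(0, 255, size=(64, 64))      # stand-in for the RLC-p channel

print(f"M1 = {manders_m1(myosin, rlc_p, ch2_threshold=100):.2f}")
```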

Fig. 4.

Effect of detector settings on measured imaging results. (A–D) Shown here are images of a PTK2 cell immunostained for pan-myosin II (as described in Fig. 3), acquired using a Zeiss LSM 880 confocal microscope with a Plan-Apochromat 63×/1.4 NA oil DIC M27 objective lens. The image in A has been optimized for SNR and overall signal intensity to serve as the baseline image. (B–D) Images with reduced detector gain (from 700 to 525), increased offset (from 75 to 2000) and increased confocal pinhole size (from 1 AU to 8 AU), respectively, relative to that shown in A. All other parameters remained constant, including 488 nm laser power (9.3 µW), emission wavelength range (500–600 nm) and dwell time (0.33 µs). (E) Image of the same cell, optimized in the same fashion as in A, immunostained for phosphorylated myosin regulatory light chain (MYL9; RLC-p) using Cell Signaling anti-RLC-p T18/S19 primary antibodies and Alexa Fluor 568-tagged secondary antibodies, both at a dilution of 1:200. The image was acquired with 561 nm excitation, with laser power, emission wavelength range, dwell time, gain and offset of 3.3 µW, 570–650 nm, 0.33 µs, 780 and 75, respectively. (A′–D′) Dual-color overlay images of the signal shown in A–D, respectively (in cyan), and E (in magenta), showing the change in apparent overlap between total myosin II and RLC-p as instrument settings are altered. (F) Graph of the Manders’ M1 overlap fraction obtained for A′ to D′. Scale bar: 20 µm.

Another commonly used acquisition parameter – binning – acts to improve the SNR in an image by summing the signal in a user-defined neighborhood of pixels. While binning enhances SNR, it also reduces resolution. It should always be documented when an image has been binned, together with the selected pixel neighborhood (i.e. 2×2 binning, 4×4 binning, etc.). This seemingly trivial feature can vastly affect the experimental outcome, as shown in Fig. 5. Fig. 5A shows a PTK2 cell immuno-stained for myosin II (same as Fig. 4A), with 2×2, 4×4 and 8×8 pixel binning applied in Fig. 5B–D, respectively. Fig. 5A′–D′ show the same PTK2 cell stained for phosphorylated myosin regulatory light chain (RLC-p) and binned in the same manner, respectively. To compare the effect of binning on quantification, we measure Pearson's correlation coefficient (PCC) (Adler and Parmryd, 2012) in this example, instead of the Manders’ overlap fraction. Where M1 showed the change in overlap between the two channels (total myosin II and RLC-p), PCC describes the intensity correlation between the two channels (Aaron et al., 2018; code available at www.github.com/aicjanelia/colocalization). As binning increases, the correlation of intensity between the two channels (and hence PCC) also increases (Fig. 5E), even though there is little discernible visual difference across panels A″–D″. The hazard of insufficiently reported acquisition parameters and their effect on the subsequent quantitative analysis cannot be overstated.
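The effect of binning on an intensity-correlation measurement can be reproduced qualitatively on synthetic data; the sketch below (toy images, not the data in Fig. 5) sums pixel neighborhoods and recomputes Pearson's correlation coefficient at each binning level.

```python
# Minimal sketch (toy data): sum-binning two partially correlated, noisy channels
# and recomputing Pearson's correlation coefficient at each binning factor.
import numpy as np

def bin_image(img, factor):
    """Sum factor x factor pixel neighborhoods (dimensions must divide evenly)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

rng = np.random.default_rng(2)
ch1 = rng.poisson(20, size=(512, 512)).astype(float)
ch2 = 0.5 * ch1 + rng.poisson(20, size=(512, 512))      # partially correlated channel

for factor in (1, 2, 4, 8):
    a = bin_image(ch1, factor) if factor > 1 else ch1
    b = bin_image(ch2, factor) if factor > 1 else ch2
    pcc = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    print(f"{factor}x{factor} binning: PCC = {pcc:.3f}")
```

As in Fig. 5E, the measured correlation rises with binning because summing neighborhoods suppresses uncorrelated noise.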

Fig. 5.

Effect of binning parameters on quantitative measurements. (A) Shown here again is the image of a PTK2 cell immuno-stained for myosin II (same as in Fig. 4A). (B–D) Effect of digitally binning the image in A using 2×2 (B), 4×4 (C) and 8×8 (D) pixel neighborhoods (using FIJI). (A′–D′) Image of RLC-p (Fig. 4E), binned as indicated for A–D. (A″–D″) Dual-color overlays, with total myosin shown in cyan and RLC-p in magenta. (E) Graph of the measured Pearson's correlation coefficient (code available at www.github.com/aicjanelia/colocalization) for the different binning conditions. All microscope configuration and image acquisition parameters are as indicated in Fig. 4. Scale bar: 20 µm.

These examples illustrate that lack of attention to detail in any one microscope configuration or acquisition setting can create unexpected downstream effects in the final image. Proper reporting not only benefits the scientific community at large, but also the individual researcher; insufficient reporting of these details can render the task of accurately replicating an imaging experiment impossible.

The important effort of benchmarking the performance of imaging instruments has been championed by many groups (Deagle et al., 2017; Halter et al., 2014; Jonkman et al., 2014; Waters and Wittmann, 2014). While this is of vital importance in facilitating data validation from various instruments and allowing scientists to compare scientific findings, standardization of the microscopes alone does not ensure the replicability and comparability of scientific data. Microscopes are, after all, instruments designed to be customized by the users for optimal image acquisition. Variables, such as illumination power, exposure time, objective lenses and detector gain are all meant to be changed, and these are likely to differ from one microscope manufacturer to the next. Likewise, variability in image acquisition parameters and the subsequent results can be dependent on the experimental design (e.g. proper use of controls and quantitative metrics). The topic of proper experimental design is beyond the scope of this Review, but has been covered by us elsewhere (Wait et al., 2020). Naturally, much like optimizing buffer conditions for a biochemical assay, changing these imaging parameters is an inevitable fact of scientific research. However, it is also equally important that these parameters be reported for the larger community of end users.

The importance of good record-keeping for microscopy is deeply rooted in the nature of its data as compared to, for example, genomics. A DNA sequence is deterministic and finite; no further auxiliary information is required to extract the pertinent information, and different sequencing techniques should recover the identical nucleic acid sequence. In contrast, the information in image data cannot be extracted without additional experimental details that include not only the sample preparation, but also the underlying microscopy settings. If a biologist needs to report the detailed buffer conditions for a biochemical assay or how a molecular biology experiment was performed, then it is imperative that the same level of diligence be extended to documenting imaging conditions. Table 1 summarizes the main recommendations in this Review for properly reporting microscope configuration and image-acquisition parameters. For the readers’ convenience, a fillable form is included (see Table S1). Readers are encouraged to use this form for each image acquisition session and to include it in the lab notebook to help maintain day-to-day experimental consistency, as well as for appropriate documentation in subsequent publications.

Table 1.

Summary of microscope configuration and image acquisition parameters to be reported for maintaining experimental replicability

The motivation for this article is driven largely by the recent finding by Marqués et al. (2020) that, among a surveyed sample of articles in highly regarded developmental biology and cell biology journals, a majority of published results comprise imaging data. However, a disproportionately small fraction of the Methods sections in those papers is devoted to reporting how the data were acquired. This leaves a large proportion of those studies with core results that cannot be validated. This widespread oversight not only renders the published data non-replicable and the findings potentially irreconcilable, but it also exposes the poor implementation of the self-imposed guidelines of many journals. Indeed, many of the surveyed articles went through peer review, but the reviewers largely failed to recognize the insufficiency of the microscopy-related details in the Methods sections (Marqués et al., 2020). It also highlights an under-utilized, but effective, checkpoint in the process of manuscript preparation – the expertise of local microscopy core facilities and their assistance in writing or checking the manuscript. This is especially valuable considering the increasing amount of microscopy work being performed at these shared resources. The global microscopy community has recognized the growing problem of poor quality control in published light microscopy data. Organizations such as Quality Assessment and Reproducibility for Instruments and Images in Light Microscopy (QUAREP-LiMi; Nelson et al., 2021 preprint) seek to connect the microscopy community in an effort to improve the quality and reproducibility of microscopy data. However, these efforts must also encompass the education of microscope users.

In this article, we make the assumption that the under- and mis-reporting of microscopy-related parameters is driven not by ill intention of the authors, but is instead perpetuated by a general lack of understanding and appreciation of how each microscopy parameter, trivial as it may seem, could alter the results. There are innumerable excellent reviews, book chapters and educational websites that cover the theoretical and practical aspects of optics and microscopy experimental design (Jonkman et al., 2020; Jost and Waters, 2019; Lee and Kitaoka, 2018; Murray et al., 2007; North, 2006; Pawley, 2006; Stuurman and Thorn, 2015; Waters, 2009), so we have opted instead to illustrate how altering these parameters would change the acquired image data – and thus the perils of not reporting them. Our list is by no means exhaustive, and we encourage end-users to further educate themselves on the components of their optical instrument that need to be reported for experimental replication. One such resource, MyScope (https://myscope.training; Cribb et al., 2016), provides helpful tutorials and information that users can utilize to self-educate on a variety of microscopy topics, and can assist in deciding which items need to be reported beyond those covered here. Another useful resource currently in development is the ‘Micro-Meta App’ (https://wu-bimac.github.io/MicroMetaApp.github.io/). This app provides a graphical interface that lets researchers select and record the components of their microscope and image acquisition parameters for better reporting of metadata, following the OME standard (discussed below).

Unfortunately, as commercial microscopes have become more complex, they have also become more opaque in how they operate. In making their products more ‘user-friendly’, some companies have adopted an approach of reducing the user interface of their microscopes to a single click, potentially cloaking essential information needed to replicate the data. There is nothing fundamentally wrong with making instruments more user-friendly. However, we advocate that manufacturers document in the image metadata all essential parameters required to replicate and validate the resulting images. Likewise, using a ‘one-click’ microscope does not absolve end-users from sufficiently and accurately reporting these parameters, nor reviewers from demanding them. Our belief is that equipping end-users with this understanding will facilitate the appropriate reporting of the essential details that support data replicability. The inclusion of these imaging conditions would also facilitate more effective reconciliation when there is a discrepancy in the findings, which can be triggered by how images are collected rather than by the underlying biology.

Another impetus for the scientific community to correct the reporting problem in microscopy is the accelerated pace at which imaging data are being shared. With the open microscopy movement gaining traction, it is vital for rigorous reporting to accompany all data. In particular, for image repositories such as OMERO (Allan et al., 2012), incomplete sample preparation and microscope configuration information renders open-access data useless and could lead to erroneous interpretation. For data sharing to be successful, it must be accompanied by thorough reporting of methods. Fortunately, data collected on most commercial instruments contain metadata that provide important image acquisition parameters. We strongly recommend that the life sciences community take advantage of OMERO, a powerful yet under-utilized data-sharing tool (Linkert et al., 2010). OMERO also provides an importer tool, called ‘OMERO.mde’, that allows researchers to view, annotate and edit image metadata. More information on OME metadata and the standards OMERO uses can be found at https://omero-guides.readthedocs.io. It is important to note that acquiring a digital image is only the first step in an imaging experiment. Of equal importance is accurate and sufficient reporting of the image processing and analysis steps. This topic is too vast to be covered in this Review; we dissect this equally overlooked problem of poor reporting in image processing in a companion article (Aaron and Chew, 2021). We encourage readers to use both articles as a starting point for understanding how to write Materials and Methods properly and concisely, with the end goal of facilitating image data replication and validation.

Ultimately, the task of enforcing more rigorous documentation lies with the scientific journals and, in some cases, the funding agencies. For journals, this must go beyond editorial enforcement of their own guidelines on reporting imaging materials and methods. Journals would be well advised to include at least one reviewer with microscopy expertise when a submitted manuscript contains primarily microscopy data or when its key findings involve microscopy. Such an expert reviewer can comment authoritatively on the thoroughness of all provided methodology, with an emphasis on experimental parameters and microscope configuration when imaging data are used in the manuscript. Life scientists who may not be thoroughly familiar with advanced microscopes are urged to consult with the expert microscopists in their core facility when preparing a manuscript. More importantly, we suggest that core facilities provide an overview of key parameters, such as Table 1 (or a similar list), next to each microscope, so that end-users are aware, while the images are being acquired, of the parameters that should be recorded. Table 1 can be used in conjunction with the fillable form in the supplementary section (Table S1). This is especially important in the event that these parameters are not fully captured in the image metadata.

The ability to independently verify the result of an experiment is a central tenet of science. Providing insufficient and/or inaccurate method descriptions in publications therefore undermines this principle. Such oversight wastes valuable resources and time, causes unnecessary scientific dispute over discrepancies, and puts the credibility of the entire scientific endeavor at risk. Accurate methods reporting should therefore not be taken lightly by any of the parties involved.

Acknowledgements

We thank Michael DeSantis as well as the Light Microscopy Shared Resource at HHMI Janelia Research Campus for the use of the confocal microscope. We also thank Wendye Quaye for her design assistance in creating the downloadable supplemental forms.

Funding

The Advanced Imaging Center is generously supported by the Gordon and Betty Moore Foundation as well as the Howard Hughes Medical Institute.

References

Aaron, J. S. and Chew, T.-L. (2021). A guide to accurate reporting in digital image processing: can anyone reproduce your quantitative analysis? J. Cell Sci. 134, jcs254151.

Aaron, J. S., Taylor, A. B. and Chew, T.-L. (2018). Image co-localization – co-occurrence versus correlation. J. Cell Sci. 131, jcs211847.

Aaron, J., Wait, E., DeSantis, M. and Chew, T. L. (2019). Practical considerations in particle and object tracking and analysis. Curr. Protoc. Cell Biol. 83, e88.

Adler, J. and Parmryd, I. (2012). Colocalization analysis in fluorescence microscopy. In Cell Imaging Techniques. Methods in Molecular Biology (Methods and Protocols), Vol. 931 (ed. D. Taatjes and J. Roth), pp. 97-109. Totowa, NJ: Humana Press.

Allan, C., Burel, J.-M., Moore, J., Blackburn, C., Linkert, M., Loynton, S., MacDonald, D., Moore, W. J., Neves, C., Patterson, A. et al. (2012). OMERO: flexible, model-driven data management for experimental biology. Nat. Methods 9, 245-253.

Baker, M. (2016). Reproducibility crisis. Nature 533, 353-366.

Collins, F. S. and Tabak, L. A. (2014). Policy: NIH plans to enhance reproducibility. Nature 505, 612-613.

Cribb, B., Shapter, J. and Apperley, M. (2016). Online education and training for microscopy and microanalysis: MyScope™. Microscopy Today 24, 44-49.

Deagle, R. C., Wee, T.-L. E. and Brown, C. M. (2017). Reproducibility in light microscopy: maintenance, standards and SOPs. Int. J. Biochem. Cell Biol. 89, 120-124.

Halpern, A. R., Howard, M. D. and Vaughan, J. C. (2015). Point by point: an introductory guide to sample preparation for single-molecule, super-resolution fluorescence microscopy. Curr. Protoc. Chem. Biol. 7, 103-120.

Halter, M., Bier, E., DeRose, P. C., Cooksey, G. A., Choquette, S. J., Plant, A. L. and Elliott, J. T. (2014). An automated protocol for performance benchmarking a widefield fluorescence microscope. Cytom. Part A 85, 978-985.

Jonkman, J. (2020). Rigor and reproducibility in confocal fluorescence microscopy. Cytom. Part A 97, 113-115.

Jonkman, J., Brown, C. M. and Cole, R. W. (2014). Quantitative confocal microscopy: beyond a pretty picture. In Methods in Cell Biology, pp. 113-134. Elsevier.

Jonkman, J., Brown, C. M., Wright, G. D., Anderson, K. I. and North, A. J. (2020). Tutorial: guidance for quantitative confocal microscopy. Nat. Protoc. 15, 1585-1611.

Jost, A. P.-T. and Waters, J. C. (2019). Designing a rigorous microscopy experiment: validating methods and avoiding bias. J. Cell Biol. 218, 1452-1466.

Khuon, S., Liang, L., Dettman, R. W., Sporn, P. H. S., Wysolmerski, R. B. and Chew, T.-L. (2010). Myosin light chain kinase mediates transcellular intravasation of breast cancer cells through the underlying endothelial cells: a three-dimensional FRET study. J. Cell Sci. 123, 431-440.

Lambert, T. J. and Waters, J. C. (2014). Chapter 3 - Assessing camera performance for quantitative microscopy. In Quantitative Imaging in Cell Biology (ed. J. C. Waters and T. Wittmann), pp. 35-53. Academic Press.

Landis, S. C., Amara, S. G., Asadullah, K., Austin, C. P., Blumenstein, R., Bradley, E. W., Crystal, R. G., Darnell, R. B., Ferrante, R. J., Fillit, H. et al. (2012). A call for transparent reporting to optimize the predictive value of preclinical research. Nature 490, 187-191.

Lee, J.-Y. and Kitaoka, M. (2018). A beginner's guide to rigor and reproducibility in fluorescence imaging experiments. Mol. Biol. Cell 29.

Linkert, M., Rueden, C. T., Allan, C., Burel, J.-M., Moore, W., Patterson, A., Loranger, B., Moore, J., Neves, C., MacDonald, D. et al. (2010). Metadata matters: access to image data in the real world. J. Cell Biol. 189, 777-782.

Marqués, G., Pengo, T. and Sanders, M. A. (2020). Imaging methods are vastly underreported in biomedical research. eLife 9, e55133.

Murray, J. M., Appleton, P. L., Swedlow, J. R. and Waters, J. C. (2007). Evaluating performance in three-dimensional fluorescence microscopy. J. Microsc. 228, 390-405.

National Academies of Sciences, Engineering, and Medicine (2019). Reproducibility and Replicability in Science. The National Academies Press.

Nelson, G., Boehm, U., Bagley, S., Bajcsy, P., Bischof, J., Brown, C. M., Dauphin, A., Dobbie, I. A., Eriksson, J. E., Faklaris, O. et al. (2021). QUAREP-LiMi: a community-driven initiative to establish guidelines for quality assessment and reproducibility for instruments and images in light microscopy. Preprint, https://arxiv.org/abs/2101.09153.

North, A. J. (2006). Seeing is believing? A beginners' guide to practical pitfalls in image acquisition. J. Cell Biol. 172, 9-18.

Pawley, J. (2006). Handbook of Biological Confocal Microscopy. Springer Science & Business Media.

Sánchez, C., Muñoz, M. Á., Villalba, M., Labrador, V. and Díez-Guerra, F. J. (2011). Setting up and running an advanced light microscopy and imaging facility. Curr. Protoc. Cytom. 57, 12.22.1-12.22.21.

Spector, D. L. and Goldman, R. D. (2005). Basic Methods in Microscopy: Protocols and Concepts from Cells: a Laboratory Manual. CSHL Press.

Stuurman, N. and Thorn, K. (2015). Digital microscopy. In Handbook of Digital Imaging (ed. M. Kriss), pp. 1613-1640. John Wiley & Sons.

Stuurman, N. and Vale, R. D. (2016). Impact of new camera technologies on discoveries in cell biology. Biol. Bull. 231, 5-13.

Wait, E. C., Reiche, M. A. and Chew, T.-L. (2020). Hypothesis-driven quantitative fluorescence microscopy – the importance of reverse-thinking in experimental design. J. Cell Sci. 133, jcs250027.

Wallrabe, H., Periasamy, A. and Elangovan, M. (2014). Microscopy core facilities: results of an international survey. Microsc. Today 22, 36-45.

Waters, J. C. (2009). Accuracy and precision in quantitative fluorescence microscopy. J. Cell Biol. 185, 1135-1148.

Waters, J. C. and Wittmann, T. (2014). Concepts in quantitative fluorescence microscopy. Methods Cell Biol. 123, 1-18.

Wheatley, S. P. and Wang, Y. (1998). Chapter 18: Indirect immunofluorescence microscopy in cultured cells. In Animal Cell Culture Methods (ed. J. P. Mather and D. Barnes), pp. 313-332. Academic Press.

Competing interests

The authors declare no competing or financial interests.

Supplementary information