Harnessing Artificial Intelligence To Reduce Phototoxicity in Live Imaging

Fluorescence microscopy, widely used in the study of living cells, tissues, and organisms, often faces the challenge of photodamage. This is primarily caused by the interaction between light and biochemical components during the imaging process, leading to compromised accuracy and reliability of biological results. Methods necessitating extended high-intensity illumination, such as super-resolution microscopy or thick sample imaging, are particularly susceptible to this issue. As part of the solution to these problems, advanced imaging approaches involving artificial intelligence (AI) have been developed. Here we underscore the necessity of establishing constraints to maintain light-induced damage at levels that permit cells to sustain their live behaviour. From this perspective, data-driven live-cell imaging bears significant potential in aiding the development of AI-enhanced photodamage-aware microscopy. These technologies could streamline precise observations of natural biological dynamics while minimising phototoxicity risks.


Introduction
The ability to comprehend biological dynamics is inherently linked to the capacity for non-invasive observation. Fluorescence microscopy has been instrumental in facilitating these analyses across a range of scales, with molecular specificity (1-3). Over the past two decades, technological advancements such as light-sheet microscopy (4-7), structured illumination microscopy (SIM) (8,9), and single-molecule localisation microscopy (SMLM) (10-12) have revolutionised fluorescence light microscopy, enabling us to characterise biological events from molecular interactions up to larger living organisms.
A key challenge associated with advanced microscopy analysis is its likely need for increased fluorescence excitation light levels, which results in phototoxicity or photodamage. These terms refer to the detrimental impacts of light, especially when employing photosensitising agents or high-intensity illumination (13-15). Despite certain distinctions, such as photodamage also occurring in non-living materials, both terms are used interchangeably here for simplicity. Sample illumination may also result in photobleaching, a process characterised by an irreversible loss of fluorescent signal attributed to the destruction of the fluorophore. This is one manifestation of light damage, among other possible effects. Phototoxicity severely influences experimental outcomes by altering the biological processes under observation, skewing findings and impeding consistency. It is therefore crucial during live-cell microscopy to carefully consider these factors in order to prolong imaging durations and achieve dependable research outcomes. The biological validity of live-cell imaging experiments requires a precise balance between data quality and the specimen's health, as depicted in Fig. 1.
Major advancements have been made in both hardware and software technologies aiming to reduce sample light damage. Hardware innovations such as Lattice Light Sheet Microscopy (LLSM) (16) and Airyscan microscopy (17) are notable examples. Additionally, computational advancements like fluctuation-based super-resolution microscopy offer promising solutions to these issues (18,19). A recent study has shown that a two-colour illumination scheme combining near-infrared illumination with fluorescence excitation presents an interesting capacity to limit the phototoxicity caused by light-induced interactions with fluorescent proteins (20). These technological breakthroughs have the potential to optimise observation accuracy while mitigating photodamage. In parallel, artificial intelligence (AI), specifically deep learning, can significantly improve imaging information in low-illumination scenarios by considerably enhancing image quality and quantification (21-23). This has inspired the search for integrated solutions by the microscopy community (24-26). The fusion of advanced optical hardware with computational models and intelligent analytics heralds new breakthroughs in overcoming the sample damage induced by traditional live fluorescence microscopy methodologies, marking the advent of AI-enhanced intelligent microscopy.

Understanding phototoxicity
Cellular environments undergo continuous adaptation to maintain homeostasis, with oxygen playing a key role in multiple chemical processes (27). Reactive Oxygen Species (ROS), resulting from oxidative processes involving oxygen radicals (Table 1), are integral to cell growth, differentiation, and other functions. However, uncontrolled ROS processes lead to oxidative stress and cellular damage, resulting in autophagy, inflammation, and potentially cell death (27). For these reasons, a balance between these states is critical for cellular physiology.

Table 1. Oxygen radicals (27) and the biomolecules affected across different levels, interfering with molecular pathways and cellular signalling (28-32).

Oxygen Radicals              Biomolecules Affected
Hydrogen peroxide            Flavins
Singlet molecular oxygen     Porphyrins
Nitric oxide                 NAD(P)H
Superoxide anion radicals    Tyrosine
Hydroxyl radical             Catecholamines
Hydroxide ion                Cysteinyl thiols

Light irradiation to excite fluorescence, in addition to inducing oxidative stress by producing high amounts of ROS, triggers multiscale alterations across biological samples that alter the homeostasis of oxidative processes. Moreover, higher doses of light irradiation increase this oxidative stress, further disrupting cellular homeostasis. Photochemical processes at the molecular level lead to intracellular component damage or toxic compound formation (33,34). The literature extensively documents the impacts of fluorescence excitation light, especially UV light, on DNA, including DNA double-stranded breaks (35), thymidine dimerisations (35), UV-induced apoptosis (36), and tumour factor activation (36). These effects are even routinely employed in research settings under controlled conditions (37-41). Moreover, molecules naturally present in cells (Table 1) undergo degradation via exposure to light-induced oxidative stress, canonically produced during the fluorescence excitation process. This degradation culminates in the generation of reactive oxygen compounds that directly impair cell health by oxidising DNA, proteins, unsaturated fatty acids, and enzyme cofactors (15,42). Excessive ROS shifts the homeostasis of the cell cytoplasmic environment, leading to organelle damage such as mitochondrial fragmentation due to compromised membrane potential and integrity, among others (13,35,43-45). While helpful and versatile, fluorescence microscopy illumination entails an additional source of ROS formation (46). When excited, fluorophores undergo autocatalysis through dioxygen, releasing hydrogen peroxide and consequently degrading, a process commonly known as photobleaching (47). Photobleaching produces ROS similarly to other biomolecules, intensifying phototoxic effects
(46). However, despite the interrelation between photobleaching and phototoxicity (20), these phenomena exhibit distinct features and can occur independently. Notably, identical oxygen radical compounds originate both from photobleaching and from direct interactions of fluorescence excitation light with other cellular components (46,47). This suggests that a reduction in photobleaching does not necessarily imply a decrease in phototoxicity, and vice versa. The intricate mechanisms underlying these phenomena have been thoroughly researched, especially concerning the photosensitising effects of various fluorophores in live microscopy. Given the common necessity for oxygen in cell cultures, ROS formation during live-cell imaging is an unavoidable aspect of the imaging process. At lower thresholds, the impact of light exposure may be minimal and reversible, allowing the cell to return to physiological conditions. This recovery will depend on several factors, namely how resilient the sample is and the degree of phototoxicity incurred by the experiment (13). In severe cases, the behaviour of cells will be permanently altered (15). Moreover, further alterations normally accumulate within a cell's homeostatic environment until reaching an irreversible point, causing pronounced impairment of cell health that leads to unwanted cell behaviour (cellular collapse) or death (apoptosis) (Figure 2a) (15,34,35,42). Although scientists have identified several indicators of photodamage, the correlation between fluorescence excitation light dose and light damage is not yet fully understood due to variability in experimental conditions, including culture conditions, illumination modes (Figure 2b) and biological sample variability (14,15). Despite the challenges associated with varying specimen resistance to light damage, establishing universal quantitative benchmarks across diverse samples could harmonise these effects and enhance reproducibility in the imaging context.

Phototoxicity quantification
The literature describes several phototoxicity hallmarks, with numerous known markers available to identify and characterise sample damage (33). However, using these markers presents additional challenges in live-cell experiments. Firstly, incorporating phototoxicity markers necessitates extra planning and may require allocating fluorescent channels usually reserved for observing the conditions of interest (e.g., markers for DNA oxidative damage). Not only does this limit the number of fluorescent channels available for the experiment, but when paired with live-cell experiments it could also enhance ROS formation, thereby escalating the phototoxicity risk to the specimen. Secondly, despite well-documented phototoxicity markers, quantification-based screenings are less commonly employed (34). The absence of universal live-cell imaging metrics relating light exposure to consequent damage across various biological systems hampers experimental reproducibility, undermines the robustness of results, and limits the obtainable biological readouts. Furthermore, without such universal metrics, it is challenging to fully leverage the capacity to image biological systems. This results in the incorrect assessment of the experimental conditions needed to achieve maximum spatial and temporal resolution while preserving cell viability.
Researchers primarily rely on observation and experience to assess cell health and viability during photodamage evaluation (Figure 2a) (13,15,34). While there are some attempts to provide quantifiable metrics for improving sample viability (13,14,42), they often simplify the impact of fluorescence excitation light to a binary classification of viable/healthy or non-viable/dead. These approaches may overlook subtle effects that could disturb a specimen's normal physiology. A gradient model that considers the accumulation of such discrete minor effects would more accurately depict the spectrum of effects documented in the existing literature. Such approaches could be made even more flexible by considering both cell health decline and recovery.
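As a thought experiment, a gradient model of this kind can be sketched as a leaky integrator: each exposure adds to an accumulated damage value, a fraction of which is repaired between time points, until an irreversible threshold is crossed. All quantities below (doses, recovery rate, threshold) are purely illustrative assumptions, not measured constants.

```python
# Illustrative gradient photodamage model: damage accumulates with each light
# dose, partially recovers between exposures, and becomes irreversible past a
# threshold. Parameter values are hypothetical placeholders.

def damage_trajectory(doses, recovery_rate=0.2, irreversible_threshold=10.0):
    """Return the accumulated damage after each exposure.

    doses: light dose delivered at each time point (arbitrary units).
    recovery_rate: fraction of accumulated damage repaired between time points.
    Once the threshold is crossed, recovery stops (damage is irreversible).
    """
    damage, trajectory = 0.0, []
    for dose in doses:
        if damage < irreversible_threshold:
            damage = damage * (1.0 - recovery_rate) + dose  # partial recovery, then new dose
        else:
            damage += dose  # past the threshold: no recovery
        trajectory.append(damage)
    return trajectory

# Gentle imaging: damage plateaus below the irreversible threshold.
gentle = damage_trajectory([1.0] * 50)
# Aggressive imaging: damage crosses the threshold and keeps accumulating.
harsh = damage_trajectory([4.0] * 50)
```

In this toy model, gentle illumination reaches a reversible steady state (dose divided by recovery rate), whereas stronger doses cross the irreversible point, mirroring the decline-and-recovery spectrum described above.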
An important aspect to consider, however, is the collective physiological and behavioural outcome, rather than individual effects. In controlled experimental conditions, specimens typically exhibit consistent behaviours such as cell division, motility, and membrane dynamics. By modelling and quantifying these behaviours, deviations can be correlated with the negative effects of fluorescent light exposure on the sample. Phototoxicity is a known issue in live-cell imaging, and a plethora of strategies exist to mitigate its effects, ranging from reducing light irradiation, by either reducing the acquisition points or the light dose (7), to using more sensitive light detectors (17). Other strategies focus on controlling oxidative stress in biological samples, either by supplementing antioxidants (48,49) or by increasing the resistance of the sample itself (50). Although some strategies are simpler to integrate than others, they may require specific and costly equipment or alterations to the conditions of the specimen.
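The behavioural-deviation idea above could, in its simplest form, compare readouts of an illuminated population against a low-light control distribution. The following sketch expresses deviations as z-scores; the feature names and all numbers are illustrative assumptions only.

```python
# Hypothetical sketch: quantify phototoxicity as the deviation of collective
# behavioural readouts (e.g. division rate, mean speed) from a control
# distribution, using z-scores. Feature names and values are illustrative.
from statistics import mean, stdev

def behaviour_deviation(control, observed):
    """Mean absolute z-score of observed readouts vs. the control distribution.

    control: dict mapping feature name -> list of control measurements.
    observed: dict mapping feature name -> one measurement from the experiment.
    """
    zscores = {}
    for feature, values in control.items():
        mu, sigma = mean(values), stdev(values)
        zscores[feature] = abs(observed[feature] - mu) / sigma
    return mean(zscores.values()), zscores

control = {
    "divisions_per_hour": [0.9, 1.1, 1.0, 1.05, 0.95],
    "mean_speed_um_min": [0.50, 0.55, 0.45, 0.52, 0.48],
}
healthy = {"divisions_per_hour": 1.0, "mean_speed_um_min": 0.51}
stressed = {"divisions_per_hour": 0.4, "mean_speed_um_min": 0.20}

score_healthy, _ = behaviour_deviation(control, healthy)
score_stressed, _ = behaviour_deviation(control, stressed)
```

A low score indicates behaviour within the control range; a high score flags a population whose division rate or motility has drifted, which could then be correlated with the delivered light dose.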
As advanced image analysis tools become increasingly available, a future strategy for fully reproducible live-cell imaging procedures could incorporate specific phototoxicity reporters within automated image acquisition and analysis workflows. This approach, paired with standardised experimental guidelines for identifying and quantifying phototoxic events, represents a promising solution. As imaging technologies evolve, adopting such methods should be prioritised by scientists aiming to create robust imaging strategies for visualising biological phenomena.

The relationship between fluorescence excitation light, image information and phototoxicity
Fluorescence live-cell imaging involves the excitation of fluorophores using light, resulting in photon emission. These photons can be captured by a camera or sensor, facilitating the visualisation of cellular targets. Various microscopy methods are available for sample illumination, as depicted in Figure 2b. Some strategies, such as widefield, confocal microscopy, and Stimulated Emission Depletion (STED), illuminate the sample across the optical axis. In contrast, two-photon excitation primarily excites sample regions near the imaging focal plane, while also subjecting the sample to less harmful infrared light across the optical axis. Total Internal Reflection Fluorescence (TIRF) imaging optically limits illumination to near the coverslip surface, substantially reducing in-depth illumination; however, it generally only illuminates the interface between the sample and the coverslip. Light-sheet microscopy, a more live-cell-friendly approach, illuminates only around the focal plane of the sample but often poses challenges in achieving high resolution compared to alternative methods. Hence, the choice of microscopy depends on sample characteristics; more sensitive specimens like embryos benefit from gentler modalities such as light-sheet microscopy. Conversely, more resilient samples can withstand the intense irradiation experienced during Single-Molecule Localisation Microscopy (SMLM) or STED microscopy (Figure 2b).
As introduced earlier, modern microscopy technologies seek to minimise the required illumination by becoming more specific towards the type of information that needs to be visualised. However, the quality of the acquired information may depend on factors such as the signal-to-noise ratio (SNR), image contrast, and the temporal and spatial resolution of the image data (Figure 2c). Each microscope has inherent limitations that affect the optimisation of these properties, necessitating a trade-off, often expressed as the microscopist's 'pyramid of frustration' (51), or as characterised explicitly for super-resolution microscopy by Jaquemet et al. (52). This trade-off can be optimised based on experimental needs by adjusting light exposure intensity and acquisition speed, common parameters used to achieve less harmful imaging configurations. As such, methods that enhance image quality from sample-friendly setups, like deep learning methods, are particularly valuable when pushing against the intertwined limitations of image information and phototoxicity (Figures 1 and 2c). Specifically, deep learning's growing ability to augment and refine image-based information is attracting considerable interest in the microscopy community.
It is becoming one of the most popular strategies for enabling imaging setups with reduced phototoxicity (25,53). In the following sections, we explore different strategies conceived to enhance the information contrast and quality of microscopy image data, which could facilitate varied live-cell imaging setups with decreased fluorescent light illumination.
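The photon-budget arithmetic behind this trade-off is worth making explicit. For a shot-noise-limited detector, the SNR scales with the square root of the number of detected photons, so an n-fold reduction in light dose costs only a factor of the square root of n in SNR. This can be illustrated in a few lines (the photon counts are arbitrary example values):

```python
# Shot-noise-limited SNR of a Poisson signal: mean N over noise sqrt(N) = sqrt(N).
# The photon counts below are arbitrary illustrative values.
import math

def shot_noise_snr(photons_per_pixel):
    """SNR of a shot-noise-limited measurement with N detected photons."""
    return math.sqrt(photons_per_pixel)

full_dose = shot_noise_snr(10_000)   # a bright, more phototoxic acquisition
tenth_dose = shot_noise_snr(1_000)   # ten times less light on the sample
ratio = full_dose / tenth_dose       # SNR penalty for a 10x dose reduction
```

Here a tenfold dose reduction degrades the SNR only by a factor of about 3.2, which is precisely the gap that deep learning denoising and restoration methods (discussed below) attempt to close.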

Deep learning for microscopy to the rescue
The exceptional ability of deep learning to automatically discern, uncover, and summarise complex patterns within images is undeniably one of its most transformative contributions to bioimaging. It has significantly influenced microscopy image analysis by ushering in a shift from traditional mathematical feature modelling to data-driven methods (21-23,57,58). We have observed the potential of deep learning to achieve remarkable accuracy and, in some cases, human-level performance in diverse computer vision tasks such as segmentation, denoising, detection, reconstruction, unmixing, and response prediction. Additionally, emerging image processing tasks like cross-modal style transfer (59,60) and virtual labelling (61-64) hold promising prospects due to the flexibility they introduce when designing imaging experiments (e.g., spectral unmixing, or rapid low-resolution acquisitions that can be virtually super-resolved for subsequent quantification) (21). Indeed, microscopy imaging naturally allows for creating paired image datasets by alternating acquisition setups and combining different modalities (54,65), simulating data (66-70), or, recently, developing correlative approaches such as CLEM (71). Cumulatively, all these advancements have laid a solid foundation for the growing field of deep learning-augmented microscopy (Figure 3). Regardless of the strategy, the key idea behind deep learning is to define models that learn to identify or enhance specific features directly from the data. Traditionally, the learning process, or training, has been classified as supervised, where the model is trained with paired input-output image datasets, or unsupervised, where the model is only exposed to input images during training (Figure 3a). Typically, supervised approaches have demonstrated superior accuracy and specificity to the task and data distribution, but
their versatility relates to the availability of paired images, which can be a limitation in live microscopy. Some alternatives (51,55,72) propose training models using paired images of ex vivo samples (providing perfectly aligned images for training and assessment) to subsequently perform inference on in vivo images (Figure 3b). Importantly, collecting images from fixed samples supports the faster creation of more extensive and diverse datasets than live imaging. Depending upon the image features, simulated data may also be viable for training such models (66-69). However, there remain scenarios where obtaining such paired datasets does not encapsulate the complexity of live experiments, is not experimentally feasible, or cross-modality acquisition devices are inaccessible. This, alongside time-consuming data annotation processes, has propelled exploration into alternative approaches such as semi- or weakly-supervised (73), self-supervised (74,75), or generative techniques (59,63,76) (Figure 3a).
Many possibilities exist to exploit deep learning-augmented microscopy to reduce phototoxicity. Drawing inspiration from the delineation provided in (22), we distinguish strategies that aim either to surmount the physical limitations intrinsic to live fluorescence microscopy imaging (i.e., acquisition speed or illumination) or to enhance the content of lower-quality but more sample-friendly image data. The former includes techniques such as denoising, restoration, or temporal interpolation. The latter, referred to by the original authors as "augmentation of microscopy data contrast", includes techniques such as virtual super-resolution (59,65,77-80).
Recent denoising (74,75,81) and restoration approaches (51,82-86) have successfully removed noise, enhanced the SNR, improved fluorescence channel contrast with unparalleled accuracy, and provided isotropic reconstruction of volumetric information from images with low SNR (Figure 3c). That is, they support imaging setups with reduced fluorescence illumination that result in lower SNR or reduced optical 3D sectioning, which, in turn, are gentler on the sample. Similarly, acquisition can be slowed down, using intelligent interpolators like CAFI (87) or DBlink (69) to recover the temporal information (Figure 3d). As discussed earlier, reducing the number of illumination time points can significantly decrease cumulative phototoxicity while potentially enabling some degree of photodamage recovery.
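To make the temporal-interpolation idea concrete, the simplest possible baseline is a linear blend of two acquired frames; learned interpolators such as CAFI aim to outperform this by modelling motion and structure. The sketch below only illustrates the setup, with frames represented as plain 2D lists of toy intensity values:

```python
# Naive baseline for recovering temporal information when acquisition is slowed
# down: linear interpolation between two acquired frames. Learned interpolators
# (e.g. CAFI, DBlink) aim to do much better; this only illustrates the idea.

def interpolate_frames(frame_a, frame_b, alpha=0.5):
    """Blend two frames; alpha is the fractional time between them (0..1)."""
    return [
        [(1.0 - alpha) * a + alpha * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

t0 = [[0.0, 10.0], [20.0, 30.0]]   # acquired frame at time 0
t2 = [[10.0, 20.0], [30.0, 40.0]]  # acquired frame at time 2
t1 = interpolate_frames(t0, t2)    # estimated, never-illuminated midpoint frame
```

Every frame estimated this way is a frame the sample was never illuminated for, which is exactly how interpolation trades computation for cumulative light dose.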
Another innovative approach to enable reduced-illumination imaging setups is exploiting cross-modal style transfer methodologies. In brief, these methods involve training a model to translate images between different microscopy modalities (Figure 3e). For example, it was shown that SIM images could be inferred from input images acquired with widefield illumination (65), reducing the photon dose by a factor of 9 in 2D and 15 in 3D. This capability extends to numerous fluorescence microscopy modalities, such as confocal to STED (26,59), SIM and SRRF (54), or widefield to SMLM (67,88,89). The enhancement of spatial resolution via learning fine details is similar in objective to traditional deconvolution. It offers comparable benefits for mitigating phototoxicity, such as using less aggressive imaging modalities (widefield and/or confocal versus SIM, STED, and SMLM). While the extent to which these methodologies can contribute towards scientific discovery is up for debate (21,90), they undoubtedly elevate image data quality, which directly impacts subsequent tasks like accurate tracking or segmentation (51,87). As suggested in (51), less aggressive live imaging approaches may yield visually less appealing but easier-to-analyse data, due to the reduction in artefacts induced by photodamage (e.g., apoptosis, stressed cells or specimen shrinkage during illumination), with the benefit of preserving close-to-physiological conditions.
Exploiting the prowess of data-driven methods, artificial labelling approaches have emerged (Figure 3f). They span from inferring cell nuclei (e.g., Hoechst staining) from actin (e.g., Lifeact staining) (54), to estimating specific fluorescence information (e.g., nucleoli, cell membrane, nuclear envelope, mitochondria or neuron-specific tubulin) from brightfield input images (61,91). It is worth noting that the latter technique is also categorised as a cross-modality style transfer approach. Moreover, artificial labelling can also be employed for channel unmixing (92-94), which offers several key advantages, including illumination channel reduction, acceleration of image acquisition, and support for more straightforward or cost-effective imaging experiments (Figure 3f). Among these benefits, the first is pivotal in enabling more sample-friendly setups. Indeed, artificial labelling often works as an intermediary step towards further quantification such as segmentation or tracking (54,60).
Beyond the questions that also arise in the natural-image domain, novel inpainting and modality transfer techniques pose an additional challenge in bioimaging: they must accurately infer reliable and quantifiable physiological information. Thus, further validation, additional assessment methodologies, and standard quantitative strategies are needed to ascertain both the biological reliability of restored (i.e., virtual) images and the integrity of the recovered signal intensity (21,95,96).
Biomedical data typically exhibit high variability due to factors such as the biological sample's physiology, the experimental protocol, the imaging setup, and even the individual researcher conducting the experiments. Defining and establishing an accurate ground truth therefore becomes crucial (95,97), as deep learning model training highly depends on data quality. Quality, in terms of image processing, ranges from the number of images available for training to how suitable the features of those images are for the task the model is trained on. For instance, it remains to be seen how to strike an optimal balance between data acquisition and model accuracy (i.e., defining an ideal sampling frequency for precise cell tracking). A prevalent notion suggests that larger training datasets enhance model performance. Simultaneously, many publicly available annotated or paired databases are growing (97-99). However, strategies to combine these datasets effectively while preserving each experimental setup's specificity still need to be clarified. Furthermore, pre-trained models that facilitate fine-tuning and transfer learning are readily accessible thanks to trained deep learning model repositories such as the Bioimage Model Zoo (100) and MONAI (101). Nonetheless, further enquiry is necessary to ascertain the best practices for assembling adequate training data and executing effective transfer learning while considering data quality, image features, and task-specific analytical requirements. Because live-cell image data is usually highly redundant, such optimisation could maximise information extraction from images while minimising photodamage during acquisition. Life scientists hold the final authority in deciding parameters such as sampling frequencies, resolution, or fields of view. While these decisions may sometimes be sub-optimal for subsequent quantification, they are our best reference for accurate performance. Incorporating users in the loop, or their expertise as priors, to guide model performance towards more specific and biologically relevant results is one of the promising directions for AI-enhanced microscopy.
Recent technological advances, such as the analytical representation of sparse and raw information to create priors, an approach already proposed for the segmentation of natural pictures in works such as the Segment Anything Model (SAM) (102), are an important step towards it.

Future Outlook
With the advance of fluorescence microscopy technology, the field is developing intelligent imaging techniques to minimise photodamage and enable accurate observations of biological dynamics. Similar to autonomous cars or smart robots in industrial environments, microscopes can be empowered with AI components that enable real-time decisions by integrating the information extracted from the observed data into an intelligent feedback loop that balances the health of the sample and the image data quality (Figures 2c and 4) (53). Note that we refer to image quality as the combination of features (e.g., resolution in time and space, SNR, the size of the field of view, or the number of fluorescent channels) that allow extracting the most information relevant to the understanding of the observed process. In line with this, some groups have already conceptualised and demonstrated such systems.
In the first place, we find event-driven approaches, which automatically identify specific objects or incidents in the image that trigger the acquisition in real time (103-107). While these adaptive approaches reduce the induced phototoxicity by only illuminating the sample when needed, in most cases they are equipped with deep learning models trained to recognise predefined objects or elements in the images, which is not always possible in biology and may limit, or even bias, the observation of novel physiological processes. Alternative approaches propose the integration of image resolution enhancement in the loop to obtain faster and gentler setups. In (24), a deep learning model is trained and validated in the acquisition loop to enhance the volumetric reconstruction and provide an adaptive light field microscopy (LFM) setup. In the context of super-resolution imaging, Bouchard et al. (26) propose evaluating the quality of STED images virtually inferred from confocal microscopy images to determine the uncertainty in the observed sample and decide whether a new STED image should be acquired. All these works pose new paradigms in the realm of smart microscopy. Deep learning approaches, particularly unsupervised ones, learn and match data distributions even in highly heterogeneous or complex scenarios without the need for human descriptions or annotations. As mentioned in (108), such methods could be exploited to identify the events that deviate from the general distribution, i.e., to discover new biological patterns. Thus, advancing generative models and unsupervised/self-supervised approaches that can effectively learn from unpaired data alone can provide flexibility when paired datasets are difficult to obtain experimentally, or when investigating new dynamics, and contribute to unbiased observations.
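The control logic shared by these event-driven schemes can be sketched in a few lines: cheap low-dose monitoring frames run continuously, and a costly high-dose acquisition is triggered only when an event score crosses a threshold. The event scores below stand in for the output of a trained detection model, and all dose values are illustrative assumptions:

```python
# Minimal sketch of an event-driven acquisition loop. The event scores stand in
# for a trained deep learning detector's output; dose values are illustrative.

def event_driven_acquisition(event_scores, threshold=0.8,
                             low_dose=1.0, high_dose=20.0):
    """Return (total light dose delivered, indices of triggered high-dose frames)."""
    total_dose, triggered = 0.0, []
    for i, score in enumerate(event_scores):
        total_dose += low_dose       # monitoring frame, always acquired
        if score >= threshold:       # the model flags an event of interest
            total_dose += high_dose  # acquire one costly high-quality frame
            triggered.append(i)
    return total_dose, triggered

# One event around frame 3; constant high-dose imaging would cost 5 * 21 = 105.
scores = [0.1, 0.2, 0.3, 0.9, 0.2]
dose, events = event_driven_acquisition(scores)
```

Compared with acquiring every frame at high dose, the sample only pays the full illumination cost when something of interest happens, which is the core phototoxicity saving of these approaches.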
One quickly realises that, despite sample health preservation being a strong motivation and a major limitation in live-cell imaging, none of the existing solutions analyses or estimates it directly. Robust photodamage reporters that provide quantitative assessments of sample health without requiring additional fluorescence channels could directly contribute to more reproducible biological readouts. This could involve exploring modalities like transmitted light microscopy or label-free techniques. Notably, quantitative reporters would support the design of automated workflows that analyse sample health in real time during image acquisition, rather than only evaluating the image quality (Figure 4). This would allow the detection of early signs of photodamage and the adaptive determination of optimal imaging conditions. In other words, it would open the door to data-driven, sample-oriented live microscopy. Pursuing such technical innovations while deepening our understanding of photodamage mechanisms will enable microscopists to unlock the full potential of intelligent imaging. With photodamage-aware AI and automated tools, the goal of observing undisturbed physiological processes can be realised. This will profoundly enhance fluorescence microscopy's capacity to uncover ground truths in biology.

Discussion
Fluorescence microscopy has become an indispensable tool in cell biology, providing unparalleled insights into biomolecular dynamics. However, phototoxicity remains a major impediment, necessitating a deeper mechanistic understanding alongside the development of imaging techniques that mitigate this limitation. While the interplay between microscopy hardware innovation and computational imaging has yielded promising solutions, standardised methodologies to comprehensively assess photodamage are lacking. Recent strides in deep learning provide optimism by enhancing information extraction from low-light or accelerated acquisitions. Nonetheless, robust validation strategies are still required to ensure biological fidelity.
A key opportunity lies in constructing universal photodamage metrics that account for subtle, cumulative deviations in sample physiology. Integrating such quantifications into intelligent automated analysis enables microscopes to optimise imaging conditions dynamically. This calls for a convergence of synergistic advancements spanning photodamage biology, microscopy hardware, computational imaging and model interpretation. An outstanding challenge is model training, which requires extensive paired datasets that sufficiently encapsulate biological variability. Alternatives such as unsupervised learning provide flexibility but may compromise accuracy. Incorporating biological expertise through techniques such as priors and prompts appears a promising way to guide models. Additionally, optimised strategies for effective model training and validation must be developed through empirical examination. While deep learning has elevated imaging capabilities, over-reliance on its reconstruction prowess could promote complacency. Achieving gentler acquisition first requires re-evaluating illumination intensities and sampling frequencies. We argue that AI should extract the maximum information from the least invasive data, rather than recover information already compromised by excessive phototoxicity. Keeping biological relevance at the crux while exploiting technology will lead to microscopes that truly observe life undisturbed.
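The closed loop implied here, quantifying a photodamage proxy and adapting illumination to it, can be sketched in a few lines. The following is a minimal, hypothetical simulation, not an implementation from the literature: a toy specimen whose fluorophores bleach in proportion to the delivered light dose, and a controller that halves the illumination whenever a dose-normalised brightness score falls below a tolerance band around its starting value. All names and parameter values are illustrative.

```python
import numpy as np

def adaptive_acquisition(n_frames=20, start_intensity=10.0,
                         bleach_rate=0.01, tolerance=0.2, seed=0):
    """Sketch of a photodamage-aware acquisition loop (toy model).

    The simulated specimen photobleaches in proportion to the
    accumulated light dose; the controller halves the illumination
    whenever the dose-normalised signal drops more than `tolerance`
    below its baseline. Returns the illumination used per frame.
    """
    rng = np.random.default_rng(seed)
    fluorophores = 5.0              # mean emitted signal per pixel
    intensity = start_intensity
    baseline = None
    intensities = []
    for _ in range(n_frames):
        # Photon-limited exposure: Poisson noise, mean scales with dose.
        frame = rng.poisson(fluorophores * intensity, size=(64, 64))
        score = frame.mean() / intensity      # dose-normalised brightness
        if baseline is None:
            baseline = score                  # health proxy at t = 0
        elif score < (1.0 - tolerance) * baseline:
            intensity = max(1.0, intensity / 2)   # back off the light
        intensities.append(intensity)
        # Irreversible photobleaching proportional to delivered dose.
        fluorophores *= np.exp(-bleach_rate * intensity)
    return intensities
```

A real system would of course replace the brightness proxy with richer physiological readouts (morphology, motility, redox reporters) and the halving rule with a learned policy; the point is only that the metric and the actuator close a loop within the acquisition itself.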

Fig. 1. Graphical abstract. The delicate balance between sample health and the information obtained by imaging requires a compromise between both elements. Deep learning augmented microscopy aims to reduce this compromise, striving to obtain equal information from our sample with less impact on its health. SNR: signal-to-noise ratio.

Fig. 2. Phototoxicity and sample health during live-cell acquisition. a) Examples of photodamage across widefield and confocal microscopy. Cellular rounding-up and membrane blebbing are observed in widefield; actin depolymerisation and cellular collapse are seen in confocal. Yellow: actin; orange: tubulin; cyan: nuclei. b) The effect of different light microscopy modalities, according to the type of illumination, on the imaged sample. Some modalities, such as light sheet microscopy, decrease the amount of light on the sample to increase cell survival; others, such as STED, sacrifice sample health to gain spatial resolution. c) AI-enabled low-phototoxicity live-cell microscopy reduces the amount of light needed on the sample to obtain the same information. A few examples are shown, such as image restoration and contrast enhancement applicable to different microscopy modalities, acquisition sampling reduction by tuning time acquisitions, and fluorescent channel tuning. Scale bar: 25 µm.

Fig. 3. The deep learning landscape for gentler live-cell imaging. a) Types of deep learning methods according to the training strategy. b) High illumination intensities are used to obtain images with high SNR at the expense of causing photobleaching, among other effects. Reducing the fluorescence illumination intensity prevents photobleaching at the expense of obtaining images with low SNR. These common limitations in traditional live-cell imaging can be overcome by using deep learning to enhance the image contrast and extend the acquisition in a gentler manner. Additionally, one could image fixed samples to create the training data. c-f) Deep learning models can be used to run microscopy acquisitions that use lower fluorescence light intensities or illuminate the sample less often. For this, after the imaging experiment, one could c) restore and denoise images; d) improve the temporal resolution by inferring intermediate time points; or e) virtually super-resolve images with supervised super-resolution or with generative approaches for cross-modality style transfer. f) One could avoid fluorescence illumination entirely with virtual staining, or partially, by using unmixing approaches that can decouple different structures from autofluorescence or crosstalk between channels. a), c) and e) were extracted and modified from (54); b) and d) from (55); and f) from (54) for the virtual labelling and (56) for the unmixing. SNR: signal-to-noise ratio.
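The SNR-versus-dose trade-off in panel b) follows directly from photon (shot) noise: under Poisson-distributed detection, SNR grows only as the square root of the collected photons, so quadrupling the light dose merely doubles the image quality. A minimal numerical check of this scaling (illustrative, not from the source):

```python
import numpy as np

def shot_noise_snr(photons_per_pixel: float, n_pixels: int = 100_000,
                   seed: int = 0) -> float:
    """Empirical SNR (mean / standard deviation) of a uniform field
    imaged under photon-limited, i.e. Poisson, detection."""
    rng = np.random.default_rng(seed)
    pixels = rng.poisson(photons_per_pixel, size=n_pixels)
    return float(pixels.mean() / pixels.std())

# SNR ~ sqrt(N): a 4x higher light dose yields only a 2x higher SNR,
# which is why denoising networks target the low-dose regime instead.
low, high = shot_noise_snr(10), shot_noise_snr(40)
```

This diminishing return is precisely what makes computational restoration attractive: the photons saved by halving the illumination cost only a factor of roughly 1.4 in raw SNR, a gap that denoising models can plausibly close.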


Fig. 4. AI-enhanced photodamage-aware microscopy. In practice, a gentler imaging setup would balance the health state of the imaged sample against the quality of the information obtained with deep learning augmented microscopy. Several aspects must be considered when deciding on an optimal imaging configuration: the level of phototoxicity, the properties of the extracted image information, and the different tasks that one could perform with deep learning, their expected accuracy, and their likelihood of fitting well in the experiment. In this way, combining advanced image processing in microscopy acquisition pipelines would support reproducible and optimised conditions for live-cell imaging. The source images for this figure are provided by (54,55).

Table 1. Oxygen radicals and examples of biomolecules affected by oxidative processes. The most common oxygen radicals present during ROS processes.