Honeybees use visual and olfactory cues to detect flowers during foraging trips. Hence, the reward association of a nectar source is a multimodal construct with at least two major components – olfactory and visual cues. How both sensory modalities are integrated to form a common reward association, and whether and how they may interfere, are open questions. In the present study, we used monochromatic UV, blue and green light to evoke distinct photoreceptor activities in the compound eye, together with two odour components (geraniol, citronellol). To test whether a compound of both modalities is perceived as the sum of its elements (elemental processing) or as a unique cue (configural processing), we combined monochromatic light with single odour components in positive patterning (PP) and negative patterning (NP) experiments. During PP, the compound of the two modalities was rewarded, whereas the single elements were not. During NP, stimuli comprising a single modality were rewarded, whereas the olfactory–visual compound was not. Furthermore, we compared differentiation abilities between two light stimuli that were or were not part of an olfactory–visual compound. Interestingly, the behavioural performances revealed a prominent case of configural processing, but only when UV light was an element of the olfactory–visual compound. In contrast, learning with green- and blue-containing compounds rather supports the elemental processing theory.
Angiosperms attract suitable pollinators using sophisticated olfactory and visual cues, culminating in impressive cases of pollinator manipulation that maximise pollen transfer (Schiestl and Schlüter, 2009). Honeybees are well-known pollinators that associate the multifaceted features of a flower with its rewarding components – pollen and nectar (Dyer et al., 2012; Dyer and Garcia, 2014; Giurfa, 2004; Raguso, 2004). It remains unclear, however, how combinations of different sensory modalities are learned and shape behavioural decisions.
In a controlled laboratory environment, honeybees can be classically conditioned using the well-established proboscis extension response (PER) paradigm. This assay has been used, for example, to ask whether honeybees can discriminate between olfactory stimuli (for a detailed summary see, for example, Matsumoto et al., 2012) or between visual stimuli, for example when monochromatic lights were associated with a reward (Dobrin and Fahrbach, 2012; Hori et al., 2006; Lichtenstein et al., 2018) or punishment (Mota et al., 2011a,b). In an aversive operant conditioning paradigm using a walking arena, bees could learn that a certain wavelength signals safety (Kirkerud et al., 2017), a behaviour that was also observed in Drosophila (Vogt et al., 2015). Early colour learning experiments with harnessed honeybees were successful only after cutting off both antennae (Niggebrügge et al., 2009). However, recent modifications of the PER protocol proved that bees reliably learn visual stimuli with both antennae intact (Dobrin and Fahrbach, 2012; Lichtenstein et al., 2018; Lichtenstein et al., 2015). This established the prerequisite for our study, which addresses the question of how honeybees combine olfactory and visual cues to form a common percept of a visited flower. Previous studies provided arguments for the existence of such interactions between the two modalities. Experiments by Mota et al. (2011a,b) suggest that the visual component is less important in an olfactory context and more difficult to learn, but can act as a cue during olfactory conditioning. An earlier study by Gerber and Smith (1998) shows that visual pre-training modulates learning of an odour stimulus. However, these studies presented the two modalities temporally separated in different experimental phases and designs, whereas the natural situation during a flower visit provides visual and olfactory cues simultaneously with the reward.
Hence, the flower–reward association might represent a combination of olfactory and visual stimuli, with a complex ranking and interplay during a close-up situation (Kevan and Lane, 1985; Menzel and Greggers, 1985).
To address this question, we adapted a positive patterning (PP) and negative patterning (NP) paradigm that had previously been used to investigate whether a mixture of two single odour components is perceived as a unique cue or as the sum of its elements (Deisig et al., 2001; Deisig et al., 2003; Deisig et al., 2007). These two options have been termed configural and elemental processing, respectively. Recent findings (Mansur et al., 2018) suggest that bees may even use a pronounced form of a configural learning strategy (Williams and Braker, 1999) to combine both modalities. However, this might depend on the visual stimulus used during the experiments, since not all wavelengths may have the same impact. For example, the natural context of UV light differs from the relevance of other colours perceived by bees (Papiorek et al., 2016). We therefore included in our experimental paradigm the three monochromatic wavelengths forming the basis of the trichromatic visual system of the bee: UV, blue and green (Hori et al., 2006; Menzel, 1981; Menzel and Blakers, 1976). To test our visual stimulation device, we performed electroretinogram recordings for all visual stimuli, which ensured that all wavelengths evoked distinct receptor neuron activity (Fig. S1). Furthermore, tests showed that the bees discriminated all wavelengths used in a classical differential conditioning experiment. To find out which learning strategy might be used during olfactory–visual integration, we presented the olfactory–visual compound as the reinforced stimulus (PP) or as the non-reinforced stimulus (NP), with the single elements being unrewarded (PP) or rewarded (NP), respectively. In addition to a memory test after 1 h, we included generalisation tests for a novel odour and light element as well as their odour–light compound stimulus.
This allowed us to test if bees judge novel stimuli in a similar way compared to the modality combinations they were confronted with during training. In a second series of experiments, we tested if two olfactory–visual compound stimuli can be discriminated in a classical differential learning experiment and if odour presence has an influence on the separation between two visual stimuli.
MATERIALS AND METHODS
Foragers of Apis mellifera carnica Pollman 1879 were caught individually in small glass vials at the entrance of the hive in the morning at our departmental bee station (University of Würzburg) before each experiment during the summer season (May–September) 2017. During the winter season (October 2017–February 2018), bees were maintained in a heated glasshouse with an artificial light source and fed with pollen and 50% sugar solution (w/w) ad libitum. Only pollen foragers were collected for the experiments, since their sucrose responsiveness is high for the reward sucrose concentration of 30% used here, whereas water and nectar foragers show a rather variable gustatory response score (Scheiner et al., 2004), which might influence learning performance. Bees were immobilised on ice and harnessed in small metal tubes that allowed the proboscis and the antennae to move (Bitterman et al., 1983). About 1.5 h before the experiment started, bees were fed ad libitum with a 30% sugar solution (w/w) and adapted to the light conditions in the laboratory (dimmed room with red light, average temperature: 24°C). A recent study showed that there is no difference between bees tested under dimmed light and those tested under moderate ambient illumination similar to natural conditions (Lichtenstein et al., 2018). Ten minutes before the first conditioning trial, bees were tested for an intact proboscis extension response (PER) by touching the antennae with a toothpick soaked with 30% sugar solution (w/w) without subsequent feeding. Only individuals showing a PER were chosen for the experiments.
We used a Syntech CS-55 (Ockenfels Syntech GmbH, Kirchzarten, Germany) generating a continuous air flow of 1.0 l min−1, to which a stimulus flow of 0.5 l min−1 was added; the stimulus flow was shifted between a blank and a stimulus pipette to prevent mechanical stimulation (Fig. 1A). We tested all single odour components to determine which could be differentiated in classical conditioning experiments (data not shown). The odours (geraniol and citronellol; Sigma-Aldrich, Germany) were diluted 1:100 (v/v) in paraffin oil. Six µl of the odour solution was pipetted onto filter paper strips (1×8 cm) as three 2 µl drops in a row, and the strips were placed into the stimulation pipette.
We used three different LED light sources: 375 nm (UV, intensity: 7.5×1013 photons cm−2 s−1, TRU Components, Conrad, Hirschhaid, Germany), 465 nm (blue, intensity: 6.5×1013 photons cm−2 s−1, Avago Technologies, Broadcom Inc., San José, CA, USA) and 525 nm (green, intensity: 3.93×1013 photons cm−2 s−1, Avago Technologies, Broadcom Inc., San José, CA, USA) (Fig. 1B). To control and synchronise the light stimulation with the odour presentation, we used the TTL output of the Syntech CS-55. Our custom-built light device was positioned 3 cm above the tested bee and orthogonal to the odour stimulation device (Fig. 1).
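For readers who prefer irradiance units, the photon fluxes stated above can be converted using the photon energy E = hc/λ. The short Python sketch below is our own back-calculation from the stated LED values; it is standard physics and not part of the original methods.

```python
# Convert the stated photon fluxes (photons cm^-2 s^-1) into irradiance
# (uW cm^-2) via the photon energy E = h*c/lambda.
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m s^-1

# Wavelength (m) and photon flux (photons cm^-2 s^-1) as given in the text.
LEDS = {
    "UV": (375e-9, 7.5e13),
    "blue": (465e-9, 6.5e13),
    "green": (525e-9, 3.93e13),
}

def irradiance_uw_per_cm2(wavelength_m, photon_flux):
    """Irradiance in uW cm^-2 for a given wavelength and photon flux."""
    return photon_flux * H * C / wavelength_m * 1e6
```

For these values, the UV, blue and green LEDs correspond to roughly 40, 28 and 15 µW cm−2, respectively.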
The restrained experimental bees were placed on a plastic sleigh holding multiple bees, with walls between individuals to avoid cross-stimulation. The sleigh was moved along a horizontal line in front of the stimulation devices. An air extractor behind this setup ensured that the tested odours did not accumulate around the bee (Fig. 1A,B). The timing of the training procedure was adapted from previous studies (Lichtenstein et al., 2018; Lichtenstein et al., 2015; Riveros and Gronenberg, 2009; Riveros and Gronenberg, 2012). Before and after each stimulation, bees had a 15 s resting phase in front of the airstream to accustom them to the training situation. Durations of the conditioned stimuli (CS− and CS+) were set to 10 s. The unconditioned stimulus [US, 30% sugar solution (w/w)] during reinforced trials (CS+) was presented overlapping with the last 3 s of the CS. The US was presented on a toothpick. A dry toothpick was presented in the same way during CS− trials, to avoid conditioning to the toothpick presentation itself. We used an 8 min inter-trial interval (ITI). ITI timing was controlled via custom-made software, ‘TimingProtocol’ (freely available on request; Lichtenstein et al., 2018), which provided the experimenter with acoustic cues for experimental control.
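The within-trial timing described above (15 s rest, 10 s CS, US overlapping the last 3 s of reinforced trials, 15 s rest, 8 min ITI) can be summarised as a small event table. The sketch below is only an illustration of the stated protocol, not a reconstruction of the ‘TimingProtocol’ software.

```python
# Within-trial timing of the conditioning protocol, as stated in the text.
REST_S = 15.0          # resting phase before and after stimulation (s)
CS_S = 10.0            # conditioned stimulus duration (s)
US_S = 3.0             # US overlaps the last 3 s of the CS on reinforced trials
ITI_S = 8 * 60.0       # inter-trial interval (s)

def trial_events(reinforced=True):
    """Return (event, onset_s, duration_s) tuples for one conditioning trial."""
    events = [("rest", 0.0, REST_S), ("CS", REST_S, CS_S)]
    if reinforced:
        # US onset 7 s into the CS, so that it overlaps the final 3 s.
        events.append(("US", REST_S + CS_S - US_S, US_S))
    events.append(("rest", REST_S + CS_S, REST_S))
    return events
```

For a reinforced trial, this places US onset at 22 s, i.e. 7 s after CS onset.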
Positive and negative patterning
During positive patterning (PP), the olfactory–visual compound was presented reinforced (AX+ [CS+]) and the single elements, ‘odour’ (A− [CS−]) and ‘light’ (X− [CS−]), were non-reinforced (Fig. 1C, first row). During negative patterning (NP), the compound was non-reinforced (AX−), but the single elements were rewarded (A+, X+) (Fig. 1C, second row). Each stimulus was repeated seven times, resulting in a total of 21 trials. Hence, conditioned bees received seven reinforced trials during PP and 14 reinforced trials during NP. CS+ and CS− stimuli were presented pseudorandomised in such a way that each stimulus occurred at most twice in a row, but otherwise randomly. Furthermore, we made sure to always start with a CS−. All experiments were performed during the summer season.
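The pseudorandomisation constraints above (each of the three stimuli seven times, at most two identical trials in a row, starting with a CS−) can be sketched as a simple rejection-sampling routine. This is an illustrative reconstruction for the PP case, not the randomisation code actually used; for NP, the reward signs of compound and elements would simply be swapped.

```python
import random

def make_sequence(stimuli=("AX+", "A-", "X-"), reps=7, seed=None):
    """Draw a pseudorandomised trial order for positive patterning:
    each stimulus `reps` times, never three identical trials in a row
    (i.e. at most two), and the first trial is a CS- (not reinforced)."""
    rng = random.Random(seed)
    while True:
        seq = [s for s in stimuli for _ in range(reps)]
        rng.shuffle(seq)
        if seq[0].endswith("+"):
            continue  # must start with a CS-
        if all(not (seq[i] == seq[i + 1] == seq[i + 2])
               for i in range(len(seq) - 2)):
            return seq
```

Rejection sampling is wasteful but transparent: every accepted sequence is drawn uniformly from the set of orders satisfying the constraints.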
Memory and generalisation test
One hour after the last acquisition trial, a memory and generalisation test was conducted. The learned single elements (A, X) and the compound (AX), as well as two novel elements (B, Y) and a novel compound (BY), were presented once each in randomised order.
Differential conditioning of two compounds with same odour component
To test whether bees learn to discriminate two different olfactory–visual compound stimuli, we used the same odour in both compounds to ensure that the bees could not solve the task on the basis of odour information alone. Each stimulus was presented nine times, resulting in a total of 18 trials. For each pair of stimuli, we tested two groups of animals in which the CS+ and CS− were exchanged. Furthermore, we combined this olfactory–visual compound training with the results we obtained from purely visual conditioning of the light components, to ask whether the odour component has a reinforcing influence on visual learning. All experiments were performed during the winter season.
The memory test was performed 1 h after the last acquisition trial and contained presentations of the conditioned compounds (AX and BX) and of each single component (A, B, X), to test if the latter can substitute for the conditioned compound after acquisition. In total, the memory test comprised five trials in random order.
Response measurement and statistical analyses
A PER was counted if the bee extended its proboscis beyond a virtual line between the mandibles. During acquisition, a binary response (1) was scored only when the response to the CS occurred before the US was presented. Only bees that survived the entire experimental procedure were included in the statistical analysis.
All analyses were done in RStudio (version 1.0.143, RStudio, Inc.). For descriptive analyses, we used the package ‘ggplot2’ and plotted the percentage of the binary PER recorded during the acquisition trials (learning curves) and for the one-trial generalisation/memory tests (bar plots). For positive and negative patterning experiments, we computed different generalised linear models (GLMs) and used an analysis of variance (ANOVA) for repeated measurements for within-group and between-group comparisons on the most suitable model. Even though ANOVA is usually not appropriate for dichotomous data such as those of the PER experiments, Monte Carlo studies have shown that ANOVA can be used under certain conditions (Lunney, 1970), which are met by our experiments: equal cell frequencies and at least 40 degrees of freedom in the error term. For post hoc comparisons, we used Tukey HSD tests. For statistical analyses of differential conditioning of two compounds, we used Wilcoxon signed-rank tests for within-group comparisons. For the memory tests, we used Cochran's Q test for within-group comparisons. For significant differences, pairwise comparisons using the Wilcoxon signed-rank test (with Bonferroni correction) were performed. The alpha level was set to 0.05 for all statistical analyses.
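The analyses above were run in R; as an illustration of the memory-test statistic, Cochran's Q for k related binary samples (e.g. one bee's PER to A, X and AX) reduces to a few lines. The sketch below uses invented toy data, not the study's data, and compares against the usual χ² reference distribution with k−1 degrees of freedom.

```python
def cochran_q(responses):
    """Cochran's Q statistic for k related binary samples.
    `responses`: one tuple per subject, each holding a 0/1 value per
    condition (e.g. PER to stimulus A, X and AX for one bee).
    Q is compared against a chi-squared distribution with k-1 d.f."""
    k = len(responses[0])
    col = [sum(r[j] for r in responses) for j in range(k)]  # per-condition totals
    row = [sum(r) for r in responses]                       # per-subject totals
    n = sum(row)                                            # grand total of 1s
    num = (k - 1) * (k * sum(c * c for c in col) - n * n)
    den = k * n - sum(ri * ri for ri in row)
    return num / den

# Toy example: six hypothetical bees, responses to (A, X, AX).
toy = [(1, 0, 1), (1, 1, 1), (1, 0, 1), (1, 0, 0), (1, 1, 1), (0, 0, 1)]
```

For these toy data, Q = 4.5, below the χ² critical value of 5.99 (d.f. = 2, α = 0.05), so the three conditions would not differ significantly.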
Odour dominates olfactory–visual compound learning
During positive patterning (PP) experiments with geraniol–blue (n=66, Fig. 2A) or geraniol–green (n=66, Fig. 2B) as the reinforced compound, the animals associated the compound with the reward. However, the learning performance to the olfactory element alone reached the same response rates, although it had never been rewarded (Fig. 2A,B, left panels). Only the light element was discriminated from the pure odour and the olfactory–visual compound. The same ranking appeared in the memory test (Fig. 2, right panels) reflecting a predominantly odour-driven reward association. This was confirmed by the generalisation tests. Here, both the novel odour and the novel olfactory–visual compound were generalised to the initially trained olfactory–visual compound (Fig. 2A,B). The only exception occurred in the group that had been trained to a compound containing blue light. Here, the novel UV–odour compound was not generalised to the initially trained olfactory–visual compound (Fig. 2A).
UV component interferes with the olfactory–visual compound
A different picture emerged when UV was used as the visual element during PP. In total, we trained 121 bees to discriminate an olfactory–visual compound of geraniol–UV (CS+) from its single elements (CS−). As in the previous experiments, bees learned to associate the compound with the reward (Fig. 3). However, although it was not rewarded, the PER performance for the odour alone was significantly increased compared with that for the compound, whereas UV evoked almost no response and was significantly different from both the compound and the odour stimulus (Fig. 3). Moreover, this effect was still present during the memory test and was not generalised to the novel compound stimulus including green light (Fig. 3, right panels).
Bees learn light and odour elements equally well, but do not differentiate their compound
In negative patterning experiments, bees had to learn that the single elements (geraniol, blue) were rewarded (CS+), but their compound (geraniol–blue) was not (CS−) (n=61, Fig. 4A). In another group we tested green light instead (n=60, Fig. 4B). In both cases, the bees failed to discriminate the compound from its elements. In our experimental setting, bees nevertheless showed the same learning performance for light-only and odour-only stimuli, establishing a reward association with each and reaching equally high learning rates (∼60%). However, during the memory test there was a tendency for the single reward-associated elements to be discriminated from the non-rewarded olfactory–visual compound, which was significant for the odour, but not for the blue light (Fig. 4A). For the generalisation test, the single elements (citronellol, UV) as well as their compound (citronellol–UV) were introduced as novel stimuli. The trained bees did not generalise the single-element reward associations to the novel light-only stimulus or the novel odour–light compound, but did generalise to the single-odour element (Fig. 4A,B). This illustrates that, after our training procedure, a novel odour element was perceived differently from an olfactory–visual compound including the same odour. However, this difference in generalisation predominantly occurred when UV was introduced as an element of a novel olfactory–visual compound (Figs 2 and 4).
Bees solve negative patterning when UV is an element of the olfactory–visual compound
In negative patterning experiments including UV, 121 bees were conditioned in total. The bees had to learn that the single elements, geraniol or UV, were rewarded (CS+), but their olfactory–visual compound (CS−) was not. The PER rates for the latter were significantly lower than those for the single elements (Fig. 5). Thus, if UV is part of the olfactory–visual compound stimulus, the compound can be differentiated from its single elements. However, this was not the case for the other tested wavelengths (Fig. 4). During the memory test, the single reward-associated odour element was differentiated significantly from the olfactory–visual compound, whereas the single reward-associated light was not. However, the trained bees generalised to the novel compounds and their single elements when blue light was introduced (Fig. 5, upper panel), but only to the single elements when green light was the novel stimulus component (Fig. 5, lower panel).
UV light perception is modulated in an olfactory–visual compound
To understand how an odour element can modulate light perception, we trained bees to discriminate two olfactory–visual compound stimuli. To ensure that light identity was the only difference, we kept the odour information constant. In total, we trained four groups of bees to separate blue and UV light, as well as green and UV light. The odour–UV compound was either unrewarded (Fig. 6A and Fig. 7A) or rewarded (Fig. 6B and Fig. 7B). For pure-light discrimination, this inverted meaning had no influence on wavelength separation (Figs 6 and 7, right subpanels). However, a different picture emerged when the light information was part of an olfactory–visual compound. Bees differentiated better between the lights if UV was part of the unrewarded compound (Fig. 6A and Fig. 7A). In contrast, when UV was part of the rewarded compound, bees were only able to discriminate the most different wavelengths (Fig. 6B and Fig. 7B). This suggests that the odour element of an olfactory–visual compound modulates light perception, representing a case of sophisticated cross-modal stimulus interaction.
Odour dominates olfactory–visual compound learning in PP
The ability to discriminate the individual modalities, odour and light, and their olfactory–visual compound was investigated using positive (PP) and negative patterning (NP) experiments. During PP, when only the olfactory–visual compound was rewarded, honeybees showed, in addition to the reward-associated compound, a high response to the single olfactory stimuli (Figs 2 and 3), even though these had never been rewarded in this experimental context. This mostly olfactory-driven reward association was confirmed by a memory and a generalisation test 1 h after the last conditioning trial. We therefore conclude that bees cannot solve cross-modal PP discrimination of olfactory and visual information. In contrast, a recent study showed that bees can solve this problem starting with the 6th conditioning trial (Mansur et al., 2018). This is equivalent to the number of learning trials we performed in our study; thus, the number of conditioning trials alone cannot explain this discrepancy. Unfortunately, Mansur et al. (2018) did not test whether the established olfactory–visual compound reward association was also generalised to a novel olfactory–visual compound and its single elements, which would have strengthened their findings.
Bees learned light and odour elements equally well in NP
In the NP experiments, honeybees showed equally high learning performances for both the odour element and the light element, which they could not differentiate from the unrewarded compound if blue or green light was the visual element (Fig. 4). A trend towards separating the olfactory–visual compound from the pure odour might be established during the memory test (Fig. 4A,B, right panels). However, if UV was one of the elements, bees solved the NP task and memorised that information, which they partially generalised to novel stimuli (Fig. 5). Our results, therefore, are partially in line with the observations of Mansur et al. (2018), who also reported the capability of solving cross-modal NP. In contrast to their experiments, where the reward association of the visual element was delayed and stayed at a rather low level, bees in our NP paradigm learned the individual visual and olfactory elements equally well from the beginning of the acquisition phase (Fig. 4). One explanation might be that bees in the study by Mansur et al. (2018) experienced a different stimulus situation, which could have caused the discrepancy between their results and our own.
Different learning capabilities for UV compared with green or blue light
There are different strategies for solving complex learning tasks such as positive and negative patterning experiments. The elemental learning strategy describes learning of a compound by summing up its single elements, whereas the configural learning strategy treats the compound as a unique cue during conditioning (Giurfa, 2003; Deisig et al., 2001; Deisig et al., 2003). Interestingly, we found arguments supporting one theory or the other, depending on the wavelength. Bees could not solve the PP and NP tasks if blue or green light was used as the visual element (Figs 2 and 4) and showed similarly high PER rates for the olfactory–visual compound (CS−). This suggests summation of the single rewarded elements and supports the elemental processing theory (Wagner, 1971). In contrast, the patterning experiments including UV as an element could be solved (Figs 3 and 5), supporting a configural character of olfactory–visual compound processing. Most interestingly, during PP trials, UV as a non-rewarded light element lowered the response to the rewarded UV–odour compound, whereas the single odour element, which was also not rewarded, evoked the highest PER rate (Fig. 3). This could mean that the negative reward association of the single light element lowered the response to the odour–light compound, which is mainly driven by odour perception. During NP trials, the bees were able to solve the patterning task, which is only possible using a configural learning strategy (Myers et al., 2001) (Fig. 5). Hence, this suggests that UV might be processed differently, resulting in a different associative strength during our cross-modal conditioning experiments compared with green and blue light.
UV, but not blue or green light, interferes with the olfactory–visual compound
When we trained honeybees to discriminate two olfactory–visual compound stimuli, we kept the odour information constant to ensure that the bees did not use olfactory information to solve the discrimination task. Hence, we varied the visual element of the two compounds and compared performance with discrimination of the light elements presented without odour. Interestingly, the ability to discriminate two olfactory–visual compounds depended on the visual element (wavelength) included. Two olfactory–visual compounds could be significantly differentiated if UV was an element of the unrewarded olfactory–visual compound (Figs 6 and 7, upper panels), but not if UV was an element of the rewarded compound (Figs 6 and 7, lower panels), even though the single light elements involved could be differentiated in either case (Figs 6 and 7, right-hand panels). Similarly, studies with Africanised A. mellifera on absolute and discriminative learning tasks with visual stimuli showed that learning performance depended on the colour quality of the light stimulus, with lower learning performance for violet light compared with blue or green light (Jernigan et al., 2014).
Furthermore, our results show that the PER rates in response to the generalisation test after PP were significantly lower if the novel odour–light compound contained UV as an element (Fig. 2). In cases when UV was an element of the reinforced compound during acquisition, bees showed a high generalisation to novel odour and light elements (Fig. 3). During NP including UV, the associative strength of UV is significantly lower than that to the single odour element (Fig. 5). Moreover, the odour element did not dominate in the associative strength of the compound as it did in NP experiments including green and blue light (Fig. 4). Thus, in this experimental context, the odour–UV compound is perceived as a unique cue and not as the sum of the associative strength of the single elements. If UV is presented without any odour context, it can be learned and differentiated from other light stimuli as well (Figs 6 and 7, right panels). Thus, in general, bees had no difficulty associating UV with reward. This is why we think that the interaction (integration) of the odour–light pathway might be different if UV is included compared with green or blue, and that this might be based on neurobiological differences. Although we cannot completely exclude experience-driven responses to UV, we assume different processing pathways for the tested wavelengths. Recent studies in the honeybee show that visual learning involves the central complex and the mushroom bodies, with the vertical lobes of the mushroom bodies being involved in differential learning of visual stimuli (Plath et al., 2017). Since the UV in our studies was not polarised as it would be in a natural foraging context (Rossel and Wehner, 1984), it is possible that the bees could not associate the UV stimuli in a natural context due to a lack of information. 
These findings suggest that the differential effects with UV may be due to differences in internal processing of UV and light polarisation information, compared with blue or green light. Indeed, studies in various insect species show that information about polarised UV received by photoreceptors in the dorsal rim area of the compound eye is bundled via the anterior optic tract to the anterior optic tubercle, lateral complex and central complex (anterior sky-compass pathway) (e.g. Homberg et al., 2011; Held et al., 2016; Schmitt et al., 2016; Grob et al., 2017; Stone et al., 2017). Furthermore, studies in ants showed that colour learning and long-term memory formation elicited plastic changes in the optic lobes, central complex and the anterior optic tubercle, suggesting that multiple brain levels are involved in visual learning (Yilmaz et al., 2019).
The neural level of olfactory–visual integration
Multimodal sensory integration involves convergence of different sensory pathways at a higher brain level. The honeybee's mushroom body (MB) represents such a high-order sensory integration centre. The MB comprises up to ∼170,000 intrinsic Kenyon cells (KCs), with dendritic arborisations organised in concentric layers within the input region, the MB calyx (Mobbs, 1982; Strausfeld, 2002). Each layer within the MB calyx is preferentially innervated by one modality: for instance, the outer lip region receives olfactory information from projection neurons of the antennal lobe, whereas visual projection neurons of the optic lobes innervate the collar, and the basal ring is innervated by both modalities (e.g. Mobbs, 1982; Schildberger, 1983; Schürmann, 1987; Ehmer and Gronenberg, 2002; Strausfeld, 2002). Hence, KCs receiving input from the different compartments of the MB calyx provide a computational space for simultaneous processing of activity triggered by visual and olfactory input.
The MB output is conveyed to ∼400 MB output neurons (MBONs; Rybak and Menzel, 1993). Hence, the relatively large coding space of activity in a large number of KCs converges onto a few hundred MBONs that potentially combine input from different modalities represented in groups of KCs. Recently, we exposed honeybees to olfactory, visual and olfactory–visual compound stimuli and recorded MBON activity (Strube-Bloss and Rössler, 2018). Interestingly, we found four types of response behaviour in MBONs: those sensitive to light only (i), to odours only (ii), to both light and odours (iii), and those that did not respond to any of the presented stimuli (iv). This suggests that the modality-specific layered input of the MB is conserved in subpopulations of MBONs (i, ii), but that a substantial proportion of MBONs integrate olfactory and visual information across MB input layers (iii). The subpopulation of MBONs that did not respond to any of the presented stimuli (iv) may become recruited after associative conditioning, as we showed earlier (Strube-Bloss et al., 2011). Moreover, MBONs have the capacity to combine complex stimulus features, such as an odour and its spatial occurrence (Strube-Bloss et al., 2016). We therefore propose that reward associations to an olfactory–visual compound stimulus may recruit initially non-responsive MBONs, which would then encode the multimodal reward association during memory retention; a hypothesis we are currently testing.
Overall, the patterning experiments suggest that an olfactory–visual compound stimulus is perceived as the sum of its single elements and therefore follows elemental processing. However, UV light seems to have a special effect, since olfactory–visual compounds containing UV were discriminated from their single elements during NP experiments, supporting configural processing of the single elements. Furthermore, discrimination of UV versus blue and of UV versus green is affected when the visual stimuli are part of an olfactory–visual compound. Thus, olfactory–visual integration follows sophisticated cross-modal stimulus interactions that depend on the wavelength of the presented light stimuli, supporting a distinct processing pathway for UV compared with other wavelengths.
The authors thank Dirk Ahrens for beekeeping and Leonie Lichtenstein for her advice in PER conditioning with monochromatic light. Furthermore, we thank Leonie and Matthias Lichtenstein for providing the program ‘TimingProtocol’.
Conceptualization: M.C.B., M.F.S.-B.; Validation: W.R., M.F.S.-B.; Formal analysis: M.C.B., M.F.S.-B.; Investigation: M.C.B., M.F.S.-B.; Resources: W.R., M.F.S.-B.; Writing - original draft: M.C.B.; Writing - review & editing: M.C.B., W.R., M.F.S.-B.; Visualization: M.C.B., M.F.S.-B.; Supervision: M.F.S.-B.; Funding acquisition: M.F.S.-B.
This work was funded by the Deutsche Forschungsgemeinschaft (STR 1334/3-1) to M.F.S.-B. Further support was provided by the Faculty of Biology at the University of Würzburg.
The authors declare no competing or financial interests.