ABSTRACT
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli after only six paired training trials. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner, with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape–colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses, as well as for elucidating how visual information is processed in the honeybee brain.
INTRODUCTION
Many pollinators exhibit astonishing abilities to navigate and locate flowers, their source of nutrients, and their ability to learn floral traits (visual, scent, morphology) has been shown to be crucial for maintaining plant–pollinator relationships (for review, see Chittka et al., 1999). While pollinators seek out flowers of a particular species and ignore others, the amount of information perceived by their peripheral sensory systems exceeds their brains' processing capacity by several orders of magnitude. Attention provides an animal with the ability to attend to one stimulus while filtering out non-relevant ones (Miller et al., 2012). By focusing their attention on a subset of physical cues and filtering out environmental noise, pollinators can exploit specific salient features of their resource. These attended features, for example, floral traits such as colour, scent and morphology, can then be learned by pollinators.
Among insect pollinators, honeybees (Apis mellifera Linnaeus 1758) were an early model for studying visual preferences and learning and memory because of the honeybee's ability to rapidly learn and retain new associations (Lubbock, 1882; Von Frisch, 1914). Since these early studies, researchers have extensively characterized honeybee vision (including colour vision and shape detection) (e.g. Srinivasan and Lehrer, 1988; de Ibarra and Giurfa, 2003; Niggebrügge and de Ibarra, 2003; Srinivasan, 2006) and the associated photoreceptors and neural pathways (e.g. Menzel, 1979; Meyer, 1984; Backhaus, 1991; Peitsch et al., 1992; Vorobyev et al., 2001; Wakakuwa et al., 2005; for review, see de Ibarra et al., 2014), making bees one of the most studied organisms for vision after primates (Web of Science; de Ibarra et al., 2014). Concurrent with these neurophysiological and behavioural studies, visual learning experiments have shown that honeybees exhibit a wide range of visual cognitive abilities, from a classical association of colour, pattern or orientation with a reward (e.g. Srinivasan et al., 1994; Giurfa et al., 1996a,b, 1997; de Ibarra et al., 2001) to complex high-order cognitive tasks such as the delay-matching-to-sample task (e.g. Giurfa et al., 2001; Avarguès-Weber et al., 2011, 2012), an ability that was long believed to exist only in vertebrates. Moreover, bees are great examples of the importance of attentional processes during navigation and learning. They can detect coloured targets using serial visual search (Spaethe et al., 2006), increase the accuracy of their decision when errors are punished (Chittka et al., 2003) and adapt their decision latency to the difficulty of the task (Dyer and Chittka, 2004).
Despite the abundant work on visual preferences, only a handful of studies have been able to demonstrate visual learning in tethered bees, and those learned responses are extremely weak (usually <15% responding). Learned responses improve only when the antennae are removed (Hori et al., 2006), the head is left unrestrained (Dobrin and Fahrbach, 2012) or the number of trials is dramatically increased (Kuwabara, 1957; Masuhr and Menzel, 1972; Hori et al., 2007; Letzkus et al., 2008; Mota et al., 2011). Learned responses by tethered bees that were comparable to olfactory proboscis extension response (PER) conditioning were only reached when odours were used in combination with colour (Gerber and Smith, 1998), in a context-dependent manner with odours (Mota et al., 2011) or when motion was added to the visual stimuli (Balamurali et al., 2015). Furthermore, Niggebrügge and coworkers observed a similar level of performance using a differential conditioning procedure in bees with impaired antennae but with poor colour discrimination during the memory test (Niggebrügge et al., 2009). Interestingly, successful visual PER was possible using polarized light (Sakura et al., 2011) or with Africanized honeybees (Jernigan et al., 2014). Most of the visual PER protocols consisted of the illumination of the entire experimental setup with a light of a specific wavelength, thus restricting the use of configurational features (e.g. relative position of two stimuli such as above/below or right/left). In contrast, using tethered walking bees in a virtual environment would provide the opportunity to maintain the bees in a behavioural context where they are actively sensing their visual environment. Furthermore, such an approach would enable fine-scale quantitative, qualitative and temporal control of the visual stimuli experienced by the bees, as well as allow testing of simple and complex learning tasks in behavioural open- and closed-loop visual contexts.
Would bees' learning performances be more comparable to those of free-flying insects when they are actively behaving? Do they use the attributes of a visual object (e.g. shape, colour) independently or do they merge them into an ‘object token’ (sensu Kanwisher and Driver, 1992) that is the target of their visual attention? To shed more light on these open questions, we developed a new paradigm for visual conditioning in tethered walking honeybees based on their locomotor response and fixation behaviour. Importantly, the system provides feedback to the behaving animal – similar to the operant control experienced by free-flying bees – that is crucial for the learned responses. In the present work, we used a differential conditioning procedure with pairs of stimuli differing in shape, colour or both, to test: (1) whether bees readily learn visual object-like stimuli; (2) whether learning performances are a function of the stimulus characteristics; and (3) whether bees learn single or multiple components of a visual stimulus, and whether certain combinations are more salient than others.
MATERIALS AND METHODS
Honeybee collection and preparation
Approximately 600 forager honeybees from four different hives were used in visual learning experiments; a total of 87 bees were discarded because of absence of response to sucrose prior to the experiment, low level of fixation to the visual stimuli (<1 s) or low walking speeds (<5 mm s−1) over the course of the pre-test and test phases. Forager honeybees were collected on artificial sucrose feeders (50% w/w, Sucrose 4097, J. T. Baker Chemical Co., Phillipsburg, NJ, USA) placed near or at the entrance of two indoor hives (winter 2015) and two outdoor hives (spring through fall 2016). To ensure that the honeybees were behaving similarly between rearing conditions and collecting sites, we compared the distance walked during the pre-test (20 s) and the level of performance of trained honeybees (i.e. the proportion of bees that changed preference) when presented with a blue square and a green circle (described hereafter). There was no significant difference between indoor and outdoor hives for either the distance walked (indoor hive: 94.1±11.6 mm; outdoor hive: 89.3±10.5 mm; means±s.e.m.; t-test, indoor versus outdoor, t=0.30, d.f.=31.98, P=0.76, sample size: 19, 15) or the learned preferences of the trained bees (z-score test, indoor versus outdoor, 0.63 versus 0.67, P=0.83, sample size: 19, 15). Indoor hives consisted of small colonies (around 2000 workers and a queen) obtained at the beginning of fall (15 September onward). Hives were connected to boxes (90×90×90 cm or 90×60×60 cm) filled with plants known to be pollinated by bees and present in Seattle, WA, USA. Feeders filled with 50% sucrose solution (w/w, Sucrose 4097, J. T. Baker) were placed at different locations in the box. A homemade pollen mixture was inserted into the hives every 2 weeks and consisted of five doses of fresh pollen (Brushy Mountain Bee Farm, Moravian Falls, NC, USA) mixed with two doses of 50% sucrose solution (w/w). Hives were kept under a 16 h:8 h light:dark cycle by two lamps (one on top of each box, Roleadro Galaxyhydro 300 W LED Grow Light Full Spectrum, Shenzhen, China, and Zoo Med ReptiSun T5 HO Terrarium Hood, San Luis Obispo, CA, USA), at 28±2°C and 70±10% relative humidity. The humidity and temperature followed the light cycle (increasing at dawn and decreasing at dusk). Forager honeybees were collected the day before the experiments and kept at 29±1°C and under 80±1% relative humidity in containers (dimensions: 8×4×5 cm or 11×7.5×8.5 cm), with a 30% (w/w) sucrose solution (Sigma-Aldrich, St Louis, MO, USA) available ad libitum and under a 16 h:8 h light:dark cycle. Before experiments, bees were anaesthetized on ice and tethered by the thorax to a metal wire using UV-activated glue (3:1 mix of Loctite 3104 Light Cure Adhesive, Loctite, Düsseldorf, Germany, and UV Glue GL114, JewelrySupply.com, USA). After tethering, bees were fed with up to 30 µl (indoor hives) or 15 µl (outdoor hives) of 30% (w/w) sucrose solution and allowed to recover for at least 1 h in a dark, warm and humid environment (25±1°C, 55±5% relative humidity).
Experimental setup
Locomotion compensator
Visual display
The locomotion compensator sits at the centre of a cylindrical visual arena (frosted mylar, 20 cm diameter, 30.5 cm high), subtending 330 deg with a 30 deg opening in the rear to allow access to the bee and the compensator (Fig. 1A). A video projector (Acer K132 WXGA DLP LED Projector, 600 lm) positioned above the arena projects the visual stimuli downwards onto the mylar screen; the image transformation to project planar patterns onto the cylindrical surface is calculated using the Panda3D package (Carnegie Mellon Entertainment Technology Center) for Python 2.7. The visual stimuli are either presented stationary or rotated in closed-loop as a function of the calculated bee heading (i.e. the visual scene moves laterally with the rotation of the ball). Presentation of visual stimuli in closed-loop (e.g. during the testing phases, see below) provides feedback and behavioural responses similar to the operant control experienced by free-flying bees, whereas presentation of static visual stimuli in open-loop (e.g. during training phases) ensures that bees are not learning information about the position of the visual objects but only their features.
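For readers interested in implementing a comparable closed-loop display, the sketch below illustrates the basic idea in Panda3D: a per-frame task reads the bee's cumulative heading and counter-rotates the node carrying the projected stimuli. This is an illustrative sketch only, not the authors' code; the function read_ball_heading is a hypothetical stand-in for the locomotion-compensator readout.

```python
from direct.showbase.ShowBase import ShowBase
from direct.task import Task


def read_ball_heading():
    """Hypothetical stand-in for the locomotion-compensator readout.

    In a real setup this would return the bee's cumulative heading (deg)
    computed from the trackball sensors; here it returns 0.0 so that the
    sketch runs on its own.
    """
    return 0.0


class ClosedLoopArena(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Node under which the stimulus geometry (pre-warped for the
        # cylindrical screen) would be parented.
        self.stimuli = self.render.attachNewNode("stimuli")
        self.taskMgr.add(self.update, "closed-loop-update")

    def update(self, task):
        # Rotate the visual scene opposite to the bee's turning so that
        # walking turns shift the stimuli laterally, as in free walking.
        self.stimuli.setH(-read_ball_heading())
        return Task.cont


if __name__ == "__main__":
    ClosedLoopArena().run()
```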
Visual stimuli
Visual stimuli were composed of the following combinations of colours and shapes: blue circle (BC), blue square (BS), green circle (GC) and green square (GS). The stimuli were similar in brightness (blue stimulus: peak at 451 nm, 18 lx, 0.02 W m−2; green stimulus: peak at 537 nm, 21 lx, 0.03 W m−2; Fig. S1B) and surface area (circle: 20 cm2; square: 25 cm2). All experiments were conducted under red light (Bulbrite light bulb, 130 V, 10 W, red; covered by a red filter: Rosco Roscolux Medium Red, Lighting Filter, Stamford, CT, USA). Stimuli were positioned on the horizon of the screen, aligned with the top of the locomotion compensator, corresponding to a visual angle of 28 deg. The visual angles of both stimuli were above the thresholds for discrimination against the background and for chromatic perception (Giurfa and Vorobyev, 1997). Visual stimuli were displayed on a black background, as this elicited the strongest phototactic responses compared with other backgrounds (Fig. S2). In addition to the blue and green stimuli, during periods between training sessions (described below), a closed-loop pattern of multiple human-grey bars [peaks at 630 (maxima), 451 and 537 nm, 2 cm wide, evenly spaced by 3 cm; 28±3 lx, 0.1 W m−2] was used. The pattern of grey bars was only used during inter-trial intervals (ITI). A single bar of the grey pattern and the space between two bars both corresponded to a visual angle of approximately 11 deg. Spectral characteristics of the visual stimuli were obtained by measuring the relative irradiance of each stimulus (USB2000+ Spectrometer, OceanView software, calibration light HL-2000, Ocean Optics, Dunedin, FL, USA). Together, the visual arena and locomotion compensator allowed determination of the cumulative heading of the honeybee with respect to the visual stimulus and of the pure translations in X and Y to recreate the bee's path and estimate the walking speed.
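As an illustration of how a walking path and speed can be recovered from such treadmill output (a sketch under the assumption that the compensator provides per-sample X and Y translations in mm at a fixed sampling rate; variable names and toy data are ours, not the authors'):

```python
import numpy as np


def reconstruct_path(dx_mm, dy_mm, sample_rate_hz):
    """Cumulate per-sample ball translations into a 2D walking path and
    estimate the total distance (mm) and mean walking speed (mm s-1)."""
    dx = np.asarray(dx_mm, dtype=float)
    dy = np.asarray(dy_mm, dtype=float)
    path = np.column_stack((np.cumsum(dx), np.cumsum(dy)))
    distance = np.hypot(dx, dy).sum()
    mean_speed = distance / (len(dx) / sample_rate_hz)
    return path, distance, mean_speed


# Toy example: 20 s of mostly forward walking sampled at 50 Hz
rng = np.random.default_rng(1)
dx = rng.normal(0.0, 0.02, size=20 * 50)
dy = rng.normal(0.09, 0.02, size=20 * 50)
path, distance, speed = reconstruct_path(dx, dy, sample_rate_hz=50)
print(f"distance walked: {distance:.0f} mm, mean speed: {speed:.1f} mm s-1")
```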
Experimental procedures
After recovering from the cold-induced immobilization and tethering procedure, responsive bees were identified using the PER by touching the antennae with a 50% (w/w) sucrose solution. Bees were allowed to acclimate in the arena for 2 min, after which they experienced the visual stimuli. An assay consisted of three sequential phases of stimuli presentation: pre-test, training and testing. During the pre-test and testing phases, all groups were exposed to two visual stimuli (e.g. GC and BS) under closed-loop conditions (20 s each; Fig. 1B). At the beginning of those phases, the stimuli were located at +40 deg and −40 deg (0 deg being in front of the bee) and their order (e.g. square–circle versus circle–square) was randomized between bees. In addition, trained and unpaired bees were presented with combinations of stimuli, rewards and punishments, as described for the experimental groups below. Between stimuli presentations (the ITI), bees were presented with a pattern of multiple grey bars in closed-loop.
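Schematically, the choice phases can be summarized as follows (an illustrative sketch of the configuration described above, not the authors' software; the stimulus labels are placeholders):

```python
import random


def make_choice_phase(stimuli=("green_circle", "blue_square"), duration_s=20):
    """Closed-loop pre-test or test phase: the two stimuli start at +40 and
    -40 deg of azimuth (0 deg = directly ahead of the bee), and which
    stimulus starts on which side is randomized between bees."""
    left, right = random.sample(stimuli, k=2)
    return {"mode": "closed_loop",
            "duration_s": duration_s,
            "start_azimuth_deg": {left: -40, right: +40}}


assay = [make_choice_phase(),                       # pre-test
         "training (see Trained group below)",      # training phase
         make_choice_phase()]                       # test
print(assay[0])
```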
Experimental groups were as follows
Naive group
Between the pre-test and test phases, naive bees were exposed, under closed-loop conditions, to the same pattern of multiple grey bars as displayed during ITIs (see Trained group and Unpaired group, below). The time elapsed between the two phases was equal to the total duration of the training phase (1140 s). This treatment controlled for exposure to the experimental setup and the duration of the experiment.
Trained group
In our training protocol, we used a massed training method (Bitterman et al., 1983; Menzel et al., 2001) with 50% (w/w) sucrose as a reward (positive unconditioned stimulus; US+) and 60 mmol l−1 quinine as a punishment (negative unconditioned stimulus; US−). One of the two visual stimuli was rewarded (conditioned stimulus, CS+) and the other one was punished (CS−). During a trial, one of the two conditioned stimuli was presented individually in open-loop for 20 s and centred on the screen (i.e. 0 deg on the azimuth). This centred, stationary presentation during training ensured that the bees learned the components of the stimuli and not their position. Ten seconds after the onset of the CS, the US was delivered to both antennae for 10 s (Fig. 1B). If the bee exhibited a PER, she was allowed to drink a drop of the solution. Bees were exposed to 1 min of the ITI pattern in closed-loop between trials and 2 min of the ITI pattern at the end of the training session. The training phase consisted of six presentations of each stimulus in a randomized order. To control for innate visual preferences at the individual level, the stimulus that was preferred during the pre-test phase was systematically punished and the non-preferred stimulus systematically rewarded during training.
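A minimal sketch of the resulting trial schedule (our own illustration of the protocol above, not the authors' implementation; stimulus and solution labels are placeholders):

```python
import random


def make_training_schedule(cs_plus, cs_minus, n_pairs=6,
                           cs_duration_s=20, us_onset_s=10,
                           us_duration_s=10, iti_s=60):
    """Massed-training schedule: n_pairs presentations each of the rewarded
    (CS+) and punished (CS-) stimulus in randomized order; each CS is shown
    stationary for 20 s, the US is delivered 10 s after CS onset for 10 s,
    and trials are separated by 1 min closed-loop ITIs."""
    trials = [(cs_plus, "sucrose_50pct")] * n_pairs + \
             [(cs_minus, "quinine_60mM")] * n_pairs
    random.shuffle(trials)
    return [{"cs": cs, "us": us,
             "cs_duration_s": cs_duration_s,
             "us_onset_s": us_onset_s,
             "us_duration_s": us_duration_s,
             "iti_s": iti_s}
            for cs, us in trials]


schedule = make_training_schedule("blue_square", "green_circle")
print(len(schedule), "trials;", schedule[0])
```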
Unpaired group
For the unpaired group, rewards and punishments were delivered in a pseudo-random order, as described in the trained group, but without temporal contingency with the CS (i.e. a reward or punishment was delivered during the ITI, also in a randomized manner, and for 10 s; Fig. 1B).
For all groups, the time spent walking towards each visual stimulus was used to determine the individual preference for the stimulus, with the preferred stimulus corresponding to the longest fixation time. Fixation was defined as the insect maintaining a stimulus within the interval of [−20, 20 deg] (i.e. directly in front of the bee). Bees clearly spent more time fixating one stimulus than the other (approximately 5–10 s difference), allowing determination of visual preference in individual bees. All bees walking less than 10 mm, or those that did not fixate (<1 s), were discarded from the analysis (<12% of bees).
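For illustration, fixation time and individual preference could be computed from the recorded stimulus azimuth relative to the bee as in the sketch below (assuming fixed-rate sampling; names and toy traces are ours, not the authors' analysis code):

```python
import numpy as np


def fixation_time(rel_azimuth_deg, sample_rate_hz, window_deg=20.0):
    """Time (s) during which a stimulus was held within +/-20 deg of the
    bee's frontal axis, given its azimuth relative to the bee over time."""
    rel = np.asarray(rel_azimuth_deg, dtype=float)
    return np.count_nonzero(np.abs(rel) <= window_deg) / sample_rate_hz


def preferred_stimulus(azimuths_by_stimulus, sample_rate_hz):
    """Preferred stimulus = the one with the longest fixation time."""
    times = {name: fixation_time(az, sample_rate_hz)
             for name, az in azimuths_by_stimulus.items()}
    return max(times, key=times.get), times


# Toy traces (made up): 20 s sampled at 50 Hz
t = np.linspace(0, 20, 20 * 50, endpoint=False)
traces = {
    "green_circle": 10 * np.sin(0.3 * t),       # mostly frontal
    "blue_square": 10 * np.sin(0.3 * t) + 80,   # mostly off to one side
}
best, times = preferred_stimulus(traces, sample_rate_hz=50)
print(best, {name: round(sec, 1) for name, sec in times.items()})
```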
Experimental series
A series of experiments were conducted to assess: (A) whether bees can learn to discriminate between two stimuli that differed in their shape and colour; (B) whether bees can learn to discriminate stimuli that differed in only one component (either shape or colour); and (C) whether single or multiple components of the visual stimuli are learned by the bees (Table 1). Experimental treatments are described below.
Experimental series A
To test whether bees learn to discriminate between two stimuli that differ in their shape and colour (Table 1), bees were trained with visual stimuli that differed in both shape and colour (i.e. GC versus BS) over the course of 12 trials (6 CS+/6 CS−). Because these stimuli differed in two components, training success was potentially enhanced. To ensure that learned responses were not a function of a specific combination of colour and shape, we also trained a group of bees to GS versus BC. Control groups of naive and unpaired bees were also tested, and groups were run in parallel. We also examined the acquisition rate of the learned responses, where bees were exposed to the same protocol as for the trained group but with either four trials (2 CS+/2 CS−) or eight trials (4 CS+/4 CS−) between the pre-test and test phases.
Experimental series B
To test whether bees learn to discriminate stimuli that differ in colour but not shape (i.e. BS versus GS; or BC versus GC) or in shape but not colour (i.e. GS versus GC; or BS versus BC) (Table 1), bees were trained with these visual stimuli over the course of 12 trials (6 CS+/6 CS−). For each set of stimuli, a control group of naive bees was also pre-tested and tested in parallel.
Experimental series C
In the last series of experiments, the shape–colour association of the stimuli was reversed either after training (i.e. between the training and test phases) or before training (i.e. after the pre-test phase) (Table 1). The aim of this experimental series was to examine whether bees learned single or multiple components of the visual stimuli over the course of training by decoupling the relationship between shape and colour. To examine this, two different sets of stimuli were used in two experimental groups: BS versus GC and BC versus GS.
In the first set of experiments, bees were pre-tested and trained with GC versus BS, and tested with BC versus GS; this allowed testing of whether bees learned only one component of the stimulus, because if bees learned one component during their training phase, then they should respond to that component in the test phase regardless of the presentation of the other component (e.g. if bees learn the colour blue when trained to BS, then they should also respond to BC in the test). In this experiment, a control group of naive bees was tested in parallel.
In the second set of experiments, bees were pre-tested with GC versus BS, and then trained and tested with GS versus BC; this allowed testing of whether bees learned the shape–colour combination, because if bees learned the combination, then changing the stimuli between training and test would negatively impact their learning performance, while their performance should not be affected by changing the stimuli between pre-test and training. The assignment of the rewarded and punished CS was randomized and learning performance was assessed by comparing bees' performances with those of an unpaired group. The unpaired group was used as a control because quantifying learning based on a change of bees' preference would have introduced confounding factors (i.e. the training was applied to stimuli that differed from the pre-tested set of stimuli). In the unpaired group, the rewarded and punished CS were randomized.
Statistical analyses
All statistical analyses were conducted using R (R Foundation for Statistical Computing, Vienna, Austria) and the package ‘Plotly’ (https://CRAN.R-project.org/package=plotly) for the creation of 2D histograms. The standard error of a proportion was calculated as √[p(1−p)/n] (Le, 2003). A binomial test was used to compare the proportion of change with a random choice (P=0.5) and a multiple-sample proportion test (e.g. z-score test) was used to compare two or more proportions. Two-sample t-tests were used to compare walking distances or fixation times after verification of normality, and Wilcoxon tests were performed for non-normal data. For the second experiment in experimental series C, we used two different approaches to calculate the proportion of ‘correct choice’ (defined as the proportion of bees responding to the rewarded stimulus) in the control group: (1) determining the proportion of correct choices for the unpaired group when experiencing the same sequence of CS+/CS− stimuli as the trained group; and (2) conducting a permutation test by resampling the choices from the trained sequence 1000 times and comparing those results with the proportion of correct choices from the unpaired group using a binomial exact test.
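Although the analyses were run in R, the resampling logic and the standard error of a proportion can be sketched in Python to make them explicit (an illustration only; the exact resampling scheme used in the paper may differ, and all data below are placeholders):

```python
import numpy as np
from scipy.stats import binomtest


def se_proportion(p, n):
    """Standard error of a proportion: sqrt(p(1 - p)/n)."""
    return np.sqrt(p * (1.0 - p) / n)


def resampled_correct_proportions(trained_choices, n_resamples=1000, seed=0):
    """Resample (with replacement) the trained bees' correct/incorrect
    choices and return the proportion correct in each resample."""
    rng = np.random.default_rng(seed)
    choices = np.asarray(trained_choices, dtype=int)  # 1 = correct choice
    resamples = rng.choice(choices, size=(n_resamples, choices.size),
                           replace=True)
    return resamples.mean(axis=1)


# Toy data (made up): 20 trained bees with 13 correct choices,
# 20 unpaired bees with 9 correct choices
trained = np.array([1] * 13 + [0] * 7)
props = resampled_correct_proportions(trained)
# Compare the unpaired group's correct choices against the mean
# resampled proportion from the trained sequence
result = binomtest(k=9, n=20, p=float(props.mean()))
print(round(props.mean(), 2), round(se_proportion(0.65, 20), 3),
      round(result.pvalue, 3))
```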
RESULTS
Visual training changes the innate preference of honeybees
During the first 20 s of the pre-test to the visual stimuli (a green circle and a blue square), honeybees fixated one or both visual stimuli for 8.8±3.56 s (mean±s.e.m.; Fig. S3). Innate preference did not influence the amount of time spent fixating the stimuli (t-test, blue square versus green circle, t=1.62, d.f.=76.25, P=0.11; Fig. S3). Similar levels of fixation were also observed during the test phase (t-test, t=1.47, d.f.=98.11, P=0.14) and across experiments (i.e. with different sets of stimuli). When first exposed to those stimuli in closed-loop, the majority of bees innately preferred the green circle (PGC=0.65, binomial exact test, P<0.001, sample size: 104). In the absence of conditioning, naive bees remained constant in their visual preferences between the pre-test and test phases (PGC,pre-test=0.70 and PGC,test=0.73, z-score test P=0.8; Fig. 2A). By contrast, trained honeybees were significantly more prone to switch their preferences between the pre-test and test phases (Pswitch,trained=0.65 and Pswitch,naive=0.33, z-score test, P<0.01; Fig. 2A). An independent group of bees was presented with ‘unpaired’ visual stimuli and rewards, such that the temporal relationship between the cognate CS and US was disrupted. This group did not differ from the naive group in terms of its change of preference (Pswitch,unpaired=0.4 and Pswitch,naive=0.33, z-score test, P=0.52; Fig. 2A). Similar to naive bees, the unpaired group was also significantly less prone to change preference than the trained group (Pswitch,trained=0.65 and Pswitch,unpaired=0.4, z-score test, P=0.04; Fig. 2A), demonstrating the associative nature of their learning.
To test whether bees could also learn the blue circle (BC) versus the green square (GS), honeybees were trained with those stimuli. When first exposed to this set of stimuli, naive honeybees did not have an overall preference as a group for either stimulus (PGS=0.63, binomial exact test, P=0.20), and were constant in their choice (PGS,pre-test=0.63 and PGS,test=0.67, z-score test, P=0.79), but trained honeybees were significantly more prone to switch preferences between the pre-test and the test phase (Pswitch,naive=0.30 and Pswitch,trained=0.60, z-score test, P=0.04; Fig. S4). Importantly, the innate preference for a stimulus, whether a green square or a blue circle, during the pre-test did not bias the learning performance (Pgreen,rewarded=0.68, Pblue,rewarded=0.6; P=0.61). We next examined the influence of the number of training trials on the acquisition of visual memory. As the number of training trials increased from four to 12, the change in visual preference by trained honeybees correspondingly increased (Pswitch,trained,4t=0.4 and Pswitch,trained,12t=0.65, z-score test, P=0.04; Fig. 2B).
The bees’ visual preferences and learned responses were confirmed by quantitative analysis of their fixation time to the visual stimuli (sensu Paulk et al., 2014) (Fig. 3). Naive bees spent significantly more time in front of their innately preferred stimulus during both the pre-test and test phases (Wilcoxon test, pre-test: V=820, P<0.001; test: V=571, P=0.03; Fig. 3A). Unpaired bees spent more time fixating the innately preferred stimulus during the pre-test but not during the test phase (pre-test: V=465, P<0.01; test: V=258, P=0.61; Fig. 3B). By contrast, trained bees significantly shifted their fixation time: during the pre-test they spent more time in front of the CS− than the CS+, but during the test phase they spent more time in front of the CS+ than the CS− (pre-test: V=253, P<0.001; test: V=0, P<0.001; Fig. 3C). Importantly, the total fixation time was the same between the pre-test and test phases, but the bees’ visual preferences completely switched between the CS− and CS+ after training (Fig. 3C). Examining the temporal dynamics of the locomotor response, we observed that the distribution of cumulative angles was similar between the pre-test and test phases for all groups except for the trained bees that changed their stimulus preference, as these bees walked more in the direction of the rewarded stimulus during the test phase (Fig. 3). Finally, similar levels of walking speed were found over the course of the experiment and across experimental treatments (i.e. naive, unpaired and trained groups; Figs S5 and S6). Together, these results show that both the fixation time and the bee preference during the test phase captured the significant visual learned responses in the trained group (Figs 2 and 3).
The effect of shape or colour alone on the learned responses by honeybees
To better understand the contribution of different visual features in honeybee learning, bees were trained to shape or colour alone (e.g. BC versus BS, or BC versus GC). Across all treatment groups (naive and trained), when honeybees were pre-tested with stimuli that differed in their shape alone, they innately preferred the square form when both stimuli were green (Psquare|green=0.65, binomial exact test, P=0.04, sample size=49) but did not show any shape preference when both stimuli were blue (Psquare|blue=0.53, binomial exact test, P=0.77, sample size=49). Nonetheless, there was an interaction between the treatment group and the components of the visual stimuli. Naive honeybees were constant in their shape preference when the shapes were green (Pcircle,pre-test=0.28 and Pcircle,test=0.40, z-score test, P=0.37; Fig. 4A), whereas when the shapes were blue, naive bees did not maintain their preference over the course of the experiment (Pswitch|green=0.36 and Pswitch|blue=0.60, z-score test, P=0.02; Fig. 4A). Trained honeybees did not learn shape stimuli, regardless of colour, exhibiting responses not significantly different from naive bees (Pswitch,naive|green=0.36 and Pswitch,trained|green=0.42, z-score test, P=0.68; Pswitch,naive|blue=0.6 and Pswitch,trained|blue=0.5, z-score test, P=0.48; Fig. 4A).
When bees were first exposed to stimuli that differed only in their colour and not shape, they preferentially fixated the green stimulus (Pgreen,pre-test=0.73, binomial exact test, P<0.001, sample size=97) regardless of shape. For naive honeybees, this green preference was constant between the pre-test and test phases (Pgreen,pre-test=0.73 and Pgreen,test=0.65, z-score test, P=0.20; Fig. 4B). When trained with blue versus green squares, honeybees did not learn the predictive value of the colour of the stimuli (Pswitch,naive=0.32 and Pswitch,trained=0.41, z-score test, P=0.53; Fig. 4B). However, circles elicited a different learned response: when bees were trained with blue versus green circles, they significantly changed their preference compared with naive honeybees (Pswitch,naive=0.33 and Pswitch,trained=0.64, z-score test, P=0.03; Fig. 4B).
Bees learn unique combinations of visual stimuli
In the previous experiments, bees could learn to correctly choose between a green circle and a blue square, a blue circle and a green square, or between two coloured circles, but it was unclear whether one or both components (i.e. shape or colour; shape and colour) of the stimulus were learned. To answer this question and further explore how keeping the visual stimuli constant over the course of the experiment impacts visual learning, we exposed honeybees to two different sets of experiments. The first set of experiments swapped the colour–shape combination between the pre-test/training and testing phases of the experiment, and the second set of experiments swapped the colour–shape combination between the pre-test and training/testing phases. Our reasoning is the following: if a single component of the visual stimuli is learned, then changing the stimuli between training and test should not impact learning performance. For instance, if bees were trained on and learned a colour, then the bees should respond to the learned colour irrespective of its shape. Conversely, if bees learn the combination of shape and colour, we expect changing the stimuli between training and test to impact their learning performance, while their performance should not be affected by changing the stimuli between pre-test and training. This latter expectation was tested in the second set of experiments.
In the first set of experiments, when control (naive) honeybees were exposed to a green circle and a blue square during the pre-test phase, and then to a green square and a blue circle during the test phase, without any training in between, the bees exhibited no significant preference for either visual stimulus (Pgreen=0.64, binomial exact test, P=0.18). Similarly, trained honeybees that were exposed to a green circle and a blue square during pre-test and training, and then to a green square and a blue circle during the test phase, did not show a preference during the test (Pgreen=0.65, binomial exact test, P=0.26). Moreover, during the test phase, trained bees did not show a preference for the colour of the rewarded stimulus, here denoted as the ‘correct choice’, and control bees showed similar responses for the stimuli they initially rejected (Pcorrect,naive=0.54, z-score test, P=0.85; Pcorrect,trained=0.50, P=0.81; Fig. 5A). For example, among the bees that preferred the green circle in the pre-test, half preferred the blue circle (shape preference) and the other half the green square (colour preference) during the test (Fig. 5A).
In the second set of experiments, when bees were pre-tested with a green circle and a blue square but trained and tested with a green square and a blue circle, bees tended to choose the rewarded (‘correct’) stimulus, although the differences in responses were not statistically significant (Pcorrect=0.65, binomial exact test, P=0.18). As opposed to the previous experiments, where innate preferences were negatively reinforced, here for each individual bee the rewarded and punished stimuli were randomly assigned. The rationale was to avoid confounding factors originating from quantifying learning performances based on changes in preference where stimuli differed in both shape and colour between the pre-test and test phases. As a control, we used an unpaired group where the CS+ and CS− were also randomly assigned, similar to the trained group, thereby allowing explicit control of the experienced stimuli during the pre-test versus training and test phases, something that is not possible in the naive group. When comparing the trained and control groups, trained bees made more ‘correct choices’ than control bees, although these differences were not statistically significant (Pcorrect,trained=0.64 and Pcorrect,unpaired=0.45, z-score test, P=0.18; Fig. 5B). To ensure that our results were not biased by the arbitrary assignment of the reward sequence in the unpaired group, we conducted a permutation test by randomly resampling the choices from the reward sequence 1000 times and found no significant difference between the two methods (binomial exact test, P=0.82). Together, these results suggest that bees learn multiple components of the visual stimuli.
DISCUSSION
Our results demonstrate that bees can achieve visual learning while tethered on a locomotion compensator, and that the visual stimuli used as the CS play a crucial role in the observed learning performance. Stimuli that differed in both colour and shape were consistently learned. Similarly, learning was achieved with stimuli that differed in colour if they were circles, which may be linked to bees' natural attraction to flower-like patterns (Lehrer et al., 1995; Dafni et al., 1997). However, squares that differed in colour and stimuli that differed only in shape were not learned, at least in the configurations tested here. Nonetheless, we are confident that bees can distinguish between the circles and squares given their initial preferences during the pre-test phase when exposed to two green shapes. Although bees are known to prefer blue over other colours (Giurfa et al., 1995), in our experiments, bees preferentially fixated the green stimulus when given the choice between green and blue objects of the same shape or between a green circle and a blue square. However, when presented with a green square and a blue circle, bees did not preferentially fixate one stimulus, showing that innate preference may result from a complex interaction between shapes and colours.
Previous studies have shown that honeybees are able to learn visual stimuli on the basis of one or multiple features or components (e.g. size, contrast, orientation, bilateral symmetry) (e.g. Srinivasan et al., 1994; Giurfa et al., 1996a; de Ibarra and Giurfa, 2003; Niggebrügge and de Ibarra, 2003). Moreover, novel visual stimuli that share one or several components of a previously learned pattern could evoke subsets of the same neural representation sufficient to drive the learned behaviour (e.g. Ronacher, 1992; Horridge and Zhang, 1995; Horridge, 1997; Stach et al., 2004). In our experiments, bees may learn to choose between a green circle and a blue square based only on the colour or shape component of the stimuli, or, alternatively, one of the components may be predominant in the bee's perception of the stimulus. By training bees with a green circle and a blue square and then testing them with the reversed association of shape and colour, we showed that trained and naive bees responded similarly to the novel stimuli. Both components of the stimuli may thus be implicated in the decision-making of bees when presented with this ambiguous choice. It is also possible that half of the bees learned only the colours while others learned only the shapes, leading to results similar to ours. Further studies could explore the relative importance of these features and their generalization in our paradigm by testing bees with stimuli of intermediate wavelength or different shapes (e.g. diamonds or stars).
Finally, we explored the importance of the pre-test phase by presenting a different set of stimuli during this phase than during the training and test phases. Analogous protocols have been used to modulate innate visual or spatial responses in diverse animals, from insects to mammals, suggesting the utility of our protocol (Tang and Guo, 2001; Bolin et al., 2012; Kent et al., 2013). As bees exhibited a strong innate preference in most of our experiments, the pre-test phase provided insight into individual preferences and allowed us to train the bees against those innate preferences. Otherwise, the randomized assignment of the reward to one stimulus might have biased the performance level, as the experiment shown in Fig. 5B suggests.
Why then, when using the PER in honeybees, do we observe such low performances in visual learning compared with the high performances of olfactory learning? Olfactory conditioning using PER may be especially relevant in the context of feeding behaviours, where taste and olfaction are coupled. By contrast, visual learning by PER may not be contextually relevant given that bees use their vision for a wide variety of behaviours, including locomotor responses in flight, navigation and landing. This is supported by studies showing that bee performance in visual PER conditioning increased in the presence of motion cues (Balamurali et al., 2015). In addition, visually oriented insects depend on optic flow – the image shift on the retina during self-motion – as their main source of information about their surrounding 3D environment. The optic flow can be separated into translational and rotational components that can be segregated by the saccadic structure of flight (Collett and Land, 1975). This strategy allows the animal to enhance depth perception by actively shaping the visual inputs (e.g. Boeddeker and Egelhaaf, 2005) and is explicitly coupled to the flight and foraging behaviours of bees (e.g. Srinivasan et al., 1991; Boeddeker and Hemmi, 2010). Such active behavioural states play profound roles in modulating visual responses and processing in bees and flies (Paulk et al., 2014; Weir et al., 2014).
Thus, the locomotion compensator may provide sensorimotor pathways and responses similar to those used by naturally behaving bees. Moreover, the treadmill and virtual environment enable fine control of the stimuli, an important feature for neuronal recordings. The experiments presented here have allowed us, using simple stimuli (circles and squares), to demonstrate and characterize associative visual learning in tethered bees. However, virtual environments can be used to display more naturalistic stimuli. For example, it is possible to record the trajectory of a walking animal, reconstruct the panorama experienced by the animal and replay it in the virtual environment. It is also possible to include more sensory modalities in the environment (e.g. odours) and explore, for example, multimodal learning. Implementation of depth perception could also be used to explore navigation and its neural basis. Overall, locomotion compensators combined with a virtual environment are a good intermediate between fully restrained preparations and unrestrained behaving animals, allowing us to explore and characterize behaviour and its underlying neural basis in a controlled environment with a behaving animal.
The higher learning performances observed here, in comparison to most studies of visual PER conditioning, may also be linked to the presentation of discrete stimuli. In many visual PER conditioning studies (Kuwabara, 1957; Masuhr and Menzel, 1972; Hori et al., 2006, 2007; Mota et al., 2011), the conditioned stimulus consisted of the ambient illumination of the entire experimental area by a specific wavelength. However, it has been shown in bumblebees that a colour stimulus with strong contrast to the background is necessary to elicit behavioural responses (Lunau et al., 1996; Spaethe et al., 2001). In addition to a potential increase in detection and discrimination, discrete stimuli allow the experimenters to place the bees in an operant behavioural context, where bees have closed-loop control of the position of the stimuli. Behaving bees thus receive both motor and visual feedback. Discrimination performances in free-flying learning experiments have been hypothesized to be dependent on attention level (Giurfa, 2004), and studies have highlighted the importance of attention-like processes for visual discrimination and fixation behaviour (e.g. Paulk et al., 2014; Van De Poll et al., 2015; de Bivort and van Swinderen, 2016). Moreover, temporal coordination in the insect brain was promoted by closed-loop behavioural control in a virtual environment (Paulk et al., 2015). It is thus possible that honeybees need visual and motor feedback to actively sense their visual environment and maintain their attention on a visual stimulus.
In summary, the methods and results described here provide a framework to examine the components of visual learning in tethered bees. In addition to fine-scale environmental control and feedback, the system provides motivation for experiments characterizing the neural bases of visual learning and memory. Major advances in our understanding of the neural basis of sensory processing have recently been made through the use of virtual reality environments, either in walking (Guo and Ritzmann, 2013; Paulk et al., 2013, 2015) or flying simulators (Suver et al., 2012; Tuthill et al., 2013; Weir and Dickinson, 2015). We believe that such virtual reality environments are an important tool for characterizing learning and visually guided behaviours and identifying the neural processes that underlie these behaviours.
Acknowledgements
The authors thank M. Giurfa, L. Chittka, A. Avargues-Weber, N. Kutz and C. Lahondere for support and discussions; D. A. San Alberto for technical support; A. Hsu, J. Cheresh and E. Pollock for experimental help; and two reviewers for their comments. The authors also thank E. Sudgen, T. Daniel, T. Mohren and the Puget Sound Beekeeping Association, particularly B. Becker, for their help with honeybee rearing.
Footnotes
Author contributions
Conceptualization: C.R., E.R., C.V., J.A.R.; Methodology: C.R., E.R., C.V.; Software: E.R.; Investigation: E.R., C.V.; Writing - original draft: C.R., E.R., C.V., J.A.R.; Writing - review & editing: C.V., J.A.R.; Visualization: E.R., J.A.R.; Supervision: C.V., J.A.R.; Project administration: J.A.R.; Funding acquisition: J.A.R.
Funding
We acknowledge the support of the Air Force Office of Scientific Research under grant FA9550-14-1-0398 and FA9550-16-1-0167 (J.A.R.), National Science Foundation under grant IOS-1354159 (J.A.R.), an Endowed Professorship for Excellence in Biology (J.A.R.), the University of Washington Institute for Neuroengineering, and the Human Frontier Science Program under grant HFSP-RGP0022 (J.A.R. and C.V.).
References
Competing interests
The authors declare no competing or financial interests.