ABSTRACT
Studies have shown that bats are capable of using visual information for a variety of purposes, including navigation and foraging, but the relative contributions of the visual and auditory modalities to obstacle avoidance have yet to be fully investigated, particularly in laryngeal echolocating bats. A first step requires the characterization of behavioral responses to different combinations of sensory cues. Here, we quantified the behavioral responses of the insectivorous big brown bat, Eptesicus fuscus, in an obstacle avoidance task offering different combinations of auditory and visual cues. To do so, we utilized a new method that eliminates the confounds typically associated with testing bat vision and precludes auditory cues. We found that the presence of visual and auditory cues together enhances bats' avoidance response to obstacles compared with cues requiring either vision or audition alone. Flight and echolocation behaviors, such as speed and call rate, did not vary significantly across obstacle conditions, and thus are not informative indicators of a bat's response to obstacle stimulus type. These findings advance the understanding of the relative importance of visual and auditory sensory modalities in guiding obstacle avoidance behaviors.
INTRODUCTION
Animals navigating and foraging in their natural environments must not only detect biologically relevant signals, but must also determine how to use that sensory information for a given task. An animal's surroundings are generally filled with noise and ambiguous signals; multisensory input can carry more information than signals from any single sensory modality, or the signals carried through one modality can be more reliable than those carried through others. Therefore, being able to combine stimulus information across multiple sensory modalities, and to subsequently weight these different sensory inputs, is crucial to disambiguating information about the world, forming unified perceptions of objects and guiding probabilistic decision making. For example, both male and female big-clawed snapping shrimp (Alpheus heterochaelis) use the same visual display in aggressive and mating interactions, requiring individuals to assess sex via chemical cues in order to respond to the display appropriately (Hughes, 1996). Gray squirrels (Sciurus carolinensis) emit alarm signals to conspecifics that contain both a visual and an auditory component, and in populations inhabiting urban environments with more auditory noise, individuals rely more heavily on the visual component than their more rural counterparts (Partan et al., 2010).
Bats serve as an excellent model for studying multi-modal sensing, integration and decision processes. Over 1000 species of bats produce echolocation calls with the larynx and are well known to use auditory information (via passive listening or by active biosonar) for prey capture and foraging (Anderson and Racey, 1991; Bell, 1982; Faure and Barclay, 1994; Gomes et al., 2016; Marimuthu and Neuweiler, 1987; Russo et al., 2007; Ryan, 1987); a prevailing misconception is that echolocating bats are blind or have no use for visual information (Thiagavel et al., 2018). This can be attributed to their mostly nocturnal lifestyle and relatively small eyes. Although many bat species rely extensively on biosonar for many facets of life, echolocation is only functional over relatively short distances, owing to its reliance on high frequencies that attenuate rapidly in air (Jakobsen et al., 2013; Lawrence and Simmons, 1982). Thus, bats also use vision for tasks such as navigation (Davis, 1966; Griffin, 1970; Höller and Schmidt, 1996; Layne, 1967; Williams et al., 1966), escape behaviors (Chase, 1983; Mistry, 1990) and predator surveillance (Eklöf, 2003). Our goal is to quantify bat responses when multisensory cues in the auditory and visual domains are present.
Vision and hearing are closely coordinated senses, and, in many organisms, a major function of sound localization is to direct the eyes to the source of a sound (Heffner and Heffner, 1992; Heffner et al., 1999). However, when visual cues conflict with cues from other sensory modalities, visual input often dominates, and visual dominance over other senses has been demonstrated in a variety of animals (Bekoff, 1972; Posner et al., 1976; Uetake and Kudo, 1994; Ward and Mehner, 2010; Wilcoxon et al., 1971; Witten and Knudsen, 2005). Which, if any, sense dominates perception depends on the type of task being performed (Parker and Robinson, 2017). Generally speaking, spatial navigation tasks, including those that require obstacle avoidance, tend to depend largely on vision (Welch and Warren, 1980). This is an intuitive strategy for an organism whose primary sense is vision; however, for an organism whose primary sense is audition, such as an echolocating bat, vision may not dominate.
Prior studies of bats have suggested that task performance using echolocation may be enhanced by the presence of visual cues, especially in dim-light conditions such as those found at dusk or dawn. Northern bats (Eptesicus nilssonii) in southern Sweden may use visual cues during prey search to locate bright white moths that are active just above and within tall grass (Eklöf et al., 2002; Jensen et al., 2001). Laboratory experiments report that lingual-echolocating Egyptian fruit bats exhibit visual dominance in some tasks, even when both auditory and visual cues are present and discriminable (Danilovich and Yovel, 2019). This species has large eyes and well-developed vision, and its sonar click rate depends on light level (Danilovich et al., 2015). Notably, laryngeal echolocating bats presented with apparent escape routes through transparent, rigid windows, through which light entered a darkened room or maze, tended to collide with these structures, despite the assumption that their echolocation should have alerted them to the obstacle (Chase, 1981, 1983; Davis and Barbour, 1965; Mistry, 1990). However, it has also been shown that bats presented with smooth vertical surfaces, such as glass windows, do not receive returning echoes until they are in very close proximity to the structure, owing to the angles at which sound is reflected (Greif et al., 2017), raising the possibility that the bats in the above-named studies did not receive echoes from the surfaces before they could abort their escape attempts.
Other studies suggest that vision has a deleterious effect on task performance when combined with echolocation. Free-flying little brown bats (Myotis lucifugus) made a greater number of collisions with a trailer when its exterior lights were on (McGuire and Fenton, 2010). In situations in which bats appeared to be guided by sight, they often improved their performance when their eyes were covered (Griffin and Galambos, 1941). This may be because these experiments were conducted in light conditions that were bright enough to impair visual function. Normal levels of room illumination (∼377 lx; similar to lighting in an interior classroom) have been shown to impair obstacle avoidance in M. lucifugus, whereas performance was best in very dim conditions (∼1 lx; similar to a night with a full moon) (Bradbury and Nottebohm, 1969).
The extent to which bats utilize visual cues when echolocation cues are available and distinct is an ongoing topic of inquiry. As a whole, our understanding of audiovisual integration in echolocating bats is incomplete, and the field would benefit from additional studies examining this phenomenon under a variety of conditions and across many species. From the literature, it is clear that bats possess the capacity to see and that vision plays a role in their natural behaviors (Chase and Suthers, 1969; Curtis, 1952; Suthers et al., 1969), but which modality, echolocation or vision, is prioritized appears to depend on the environment, stimulus strength, species and task.
To understand the relative importance of visual and auditory sensory modalities in obstacle avoidance tasks, we conducted a set of behavioral experiments in which we quantified the orientation behavior of the laryngeal echolocating big brown bat (Eptesicus fuscus) in response to visual and sonar obstacles that presented different combinations of audiovisual cues. One challenge that past studies have faced is presenting stimuli in the visual domain without providing information in the auditory domain, a separation that is essential for determining which cues the bat is using. The present study exploited a novel method to display visual obstacles that yielded no echo returns. The goal of this study was to determine whether bats would rely solely on vision for performing obstacle avoidance and whether the presence of visual and auditory cues would differentially affect the behavioral responses observed during task performance. We hypothesized that the combination of visual and echolocation cues together would augment avoidance behavior compared with a single modality alone. Thus, we predicted that bats would evade an obstacle significantly more often in the presence of visual and echolocation cues together than in the presence of visual or echolocation cues alone.
We would like to draw the reader's attention to the novelty of the methods employed in the experiments reported here. To our knowledge, the use of lasers as visual obstacles that carry no acoustic information has never before been implemented, and it presents an exciting opportunity to explore in future experiments the question of whether bats perceive laser beams as solid objects and, if so, whether this perception depends on the diameter of the beam.
MATERIALS AND METHODS
Animals and setup
We designed an obstacle avoidance task in which three wild-caught adult female big brown bats, Eptesicus fuscus (Beauvois 1796), were trained to fly into a box suspended from the ceiling (Fig. 1A) for a food reward. Experiments took place in a large room (6×6×2.5 m) under infrared illumination. Bats were captured in North Carolina under collecting permit 17-SC01070 issued by the North Carolina Wildlife Resources Commission, and were housed and trained at Johns Hopkins University according to all procedures set forth by the Institutional Animal Care and Use Committee (protocol number BA17A107). A food reward of mealworms (Tenebrio molitor larvae) was given when the animals entered the box (60×70×65 cm), requiring them to navigate past the obstacle, if present, and land on any one of the three enclosing walls. Bats that did not enter the box, or landed on the outside of the box, were not given rewards. A fan-operated mist-producing apparatus was used to create a column of water vapor in front of the box opening during training and testing. The opening of the box remained unobstructed for control trials and was partially obstructed with an obstacle during test trials.
Obstacles
We created obstacle conditions (Fig. 1B) in which the three bats (G20: N=33 trials; G40: N=60 trials; O90: N=45 trials) were presented with acoustic-only cues (A+V–, N=43 trials), vision-only cues (A–V+, N=38 trials), acoustic and vision cues (A+V+, N=35 trials) or no obstacles (A–V–, N=56 trials). A–V+ cues are challenging to create because this condition requires an acoustically transparent object that still serves as a visual obstacle in the flight path. To create this condition, a laser was used in conjunction with the column of mist. The mist supplied additional airborne particles to increase the scattering of light, resulting in increased visibility of the entire laser projection. The result was a thin beam of solid green light (520 nm, 3 mm diameter), a wavelength that, according to electroretinograms, should be near the peak sensitivity of E. fuscus (Hope and Bhatnagar, 1979) and easily detectable. The ability of E. fuscus to detect the 520 nm light was also verified in a separate behavioral experiment (see below). A+V– cues were constructed by placing thin (5 mm diameter), flexible pieces of rubber wrapped in a thin string of unlit light-emitting diodes (LEDs) in front of the opening. A+V+ cues were these same LED-wrapped rubber pieces with the LEDs illuminated. To prevent bats from relying on spatial memory to avoid obstacles, each obstacle was randomly positioned in either a horizontal or vertical configuration and placed in center or off-center locations at the box opening across trials. The entire box was covered in non-reflective black felt in order to minimize strong visual cues, even when partially illuminated by the light of the obstacles. The felt also served to attenuate echoes (Warnecke et al., 2018), but bats could still use echolocation to find the walls of the box. To enable infrared video recordings, experiments were conducted under long-wavelength ambient light, outside the visible range of E. fuscus (Hope and Bhatnagar, 1979).
Audio-video recordings
Each trial was recorded with two high-speed infrared Phantom Miro cameras (Vision Research, Wayne, NJ, USA) sampling at 100 frames s−1, which permitted 3D reconstruction of the bat's flight trajectories using DLTdv5 digitizing software (Hedrick, 2008). Echolocation calls emitted by bats during the trials were recorded with a 24-channel wide-band ultrasound microphone array (Pettersson Elektronik, Uppsala, Sweden). The camera and microphone systems were recorded synchronously, triggered via a transistor–transistor logic pulse generated with custom hardware. The reconstructed flight trajectories, extracted from the digitized center of mass of the bat, and the audio recordings were further processed and analyzed using custom MATLAB (MathWorks, Natick, MA, USA) scripts to extract acoustic parameters of the fundamental harmonic of the bats' echolocation calls and kinematic parameters of the bats' flight, presented in Table 1.
Statistical analyses
All statistical analyses were conducted in R v. 3.6.3 (https://www.r-project.org) using the lme4 package (https://cran.r-project.org/web/packages/lme4/index.html) to generate linear mixed effects models (LMMs) or generalized linear mixed effects models (GLMMs), with individual bat identities used as random effects. For analyses of temporal and spectral echolocation call parameters, the individual trial was also used as a random effect. Each analysis is reported with the model used and the statistical results. Planned contrasts and post hoc analyses were carried out using the multcomp package (https://cran.r-project.org/web/packages/multcomp/index.html), adjusting P-values using the Bonferroni-based false discovery rate method (α=0.05).
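For illustration only, a minimal R sketch of the mixed-model workflow described above is given below. The data frame, column and object names (trials, calls, entered, stimulus, bat, trial, call_interval) are hypothetical placeholders, not the analysis code used in the study.

```r
# Minimal sketch of the mixed-model approach described above.
# All data frame and column names are hypothetical placeholders.
library(lme4)
library(multcomp)

# Binary outcome (entered the box or not): GLMM with binomial errors and
# individual bat identity as a random intercept. 'stimulus' must be a factor.
m_entry <- glmer(entered ~ stimulus + (1 | bat),
                 data = trials, family = binomial)

# Continuous call parameter: LMM with bat identity and trial as random intercepts.
m_interval <- lmer(call_interval ~ stimulus + (1 | bat) + (1 | trial),
                   data = calls)

# Post hoc pairwise comparisons among stimulus types, with P-values adjusted
# by a false discovery rate procedure (alpha = 0.05).
summary(glht(m_entry, linfct = mcp(stimulus = "Tukey")),
        test = adjusted("fdr"))
```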
Determination of 520 nm light-detection behavior
To confirm that the light generated by the laser was detected by the bats, two additional E. fuscus were trained in a separate set of behavioral experiments. In this paradigm, bats were trained to crawl to the arm of a Y-platform, in front of which was a piece of black felt onto which the green laser stimulus was projected (Fig. 2A). The 520 nm laser beam was manually oriented to either the left or the right, in alignment with the arms of the platform. Bats were rewarded with a mealworm for crawling towards the side on which the laser was projected. One individual was tested in 75 trials and the other in 100 trials; a permutation test was used to estimate the expected percentage of correct responses and to determine whether the animals performed significantly above chance (50%) in the visual detection task. This generates a cutoff percentage, or performance threshold, for ensuring that bats reliably detect the presence of the signal (i.e. the laser) at a rate that is statistically better than chance.
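As an illustration of this type of permutation procedure, the R sketch below shuffles the recorded laser positions relative to the bat's choices to build a null distribution of percent-correct scores, and takes an upper percentile of that distribution as the chance-level cutoff. The variable names, trial count and simulated choices are placeholders for the example only.

```r
# Permutation sketch for the chance-level cutoff; 'laser_side' and 'bat_choice'
# are placeholders standing in for the recorded stimulus sides and responses.
set.seed(1)
n_trials   <- 75                                        # e.g. the 75-trial bat
laser_side <- sample(c("L", "R"), n_trials, replace = TRUE)
bat_choice <- sample(c("L", "R"), n_trials, replace = TRUE)

null_pct <- replicate(10000, {
  shuffled <- sample(laser_side)                        # break stimulus-choice pairing
  100 * mean(shuffled == bat_choice)
})
cutoff   <- quantile(null_pct, 0.95)                    # performance threshold (% correct)
observed <- 100 * mean(bat_choice == laser_side)
observed > cutoff                                       # better than chance?
```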
RESULTS
Behavioral detection of 520 nm light
In the two-choice laser detection task, the two bats went to the correct arm of the platform in 87% (Bat 1) and 95% (Bat 2) of trials, exceeding their respective chance-performance thresholds of 58% and 60% (Fig. 2B). This confirms that the laser stimulus was indeed detectable by E. fuscus.
Obstacle avoidance performance
In the flight experiment, bats were required to use echo acoustic and/or visual cues to steer around an obstacle placed at the opening of a box to receive a food reward. Performance was compared across four sensory conditions: A–V– (control), A+V–, A–V+ and A+V+. Obstacles were thin enough (≤5 mm) and the overall width of the box was large enough (70 cm wide) to accommodate the full wingspans of the individual bats (<35 cm) on at least one side of the obstacle. A chi-square test of proportions revealed that there was no significant effect of obstacle orientation (χ2=5.42e-31, d.f.=1, N=118, P=1) or position (χ2=5.06, d.f.=4, N=118, P=0.28) on bat entrances to the box across sensory conditions, so we excluded these terms from subsequent models, grouping all trials using each obstacle stimulus type. There was a significant difference in the number of trials in which the bats flew into the box across each stimulus type (GLMM with binomial error distribution, F3,167.21=9.86, P<0.001; Fig. 3A). Bats almost always entered the box under unobstructed control conditions (98%). The percentage of flights into the box was significantly reduced in A–V+ and A+V– conditions (84% and 83%, respectively) and even more so in A+V+ conditions (57%).
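As a sketch of the test of proportions reported above, the R snippet below compares entry proportions across the two obstacle orientations with a chi-square statistic; the counts are purely illustrative and are not the experimental data.

```r
# Chi-square test of proportions (illustrative counts only, not the real data):
# did the proportion of box entries differ between the two obstacle orientations?
entries_by_orientation <- c(horizontal = 48, vertical = 49)   # trials with entry
trials_by_orientation  <- c(horizontal = 59, vertical = 59)   # trials presented
prop.test(entries_by_orientation, trials_by_orientation)
```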
We also determined whether bats contacted one type of obstacle more than another type. ‘Contacts’ were defined as either colliding with the obstacle or touching it with a wing as it was passed. For 60% of A–V+ trials in which the bat entered the box, bats ‘made contact’ with the laser beam without attempting to avoid it or the mist column onto which it was projected, as indicated by the lack of observable changes in the flight trajectory (Fig. 3B). This is significantly more than for the A+V– trials, in which contact was made with the echo-acoustic obstacle in 3% of trials (GLMM with binomial error distribution, F2,85.29=36.12, P<0.001). There was no significant difference between A+V– and A+V+ trials, in which no contacts were made in any trial.
Flight analysis
We used 3D reconstructions of each of the bats' flight trajectories (Fig. 4) to determine the animal's position during each recorded frame. Using this information, we calculated the mean speed of the bat during each trial and compared this across each obstacle condition and whether bats entered the box (LMM). There was no significant difference in speed across obstacle conditions (F3,163.04=0.46, P>0.05), but bats did fly significantly faster when they entered the box (F1,163.65=31.45, P<0.01) than when they avoided the box (Fig. 5A). There was no interaction effect between stimulus and the outcome.
Additionally, we decomposed the trajectories into 1.0 m bins to examine speed on a finer scale. These distance bins were compared across each obstacle type (LMM). We observed a significant interaction between the main effects of distance bin and obstacle type (F19,174.33=8.80, P<0.01), and planned contrasts of each obstacle type within each distance bin revealed significant differences primarily when the bat was within 0–1 m and 1–2 m of the box opening (Fig. 5B).
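The distance-binned speed calculation can be sketched as follows in R, under the assumption that traj is a frames × 3 matrix of 3D positions (m) sampled at 100 frames s−1 and box is the 3D position of the center of the box opening; this is an illustrative reconstruction, not the custom scripts used in the study.

```r
# Sketch of mean flight speed per 1 m distance bin from a digitized trajectory.
# 'traj' (frames x 3 matrix of x, y, z positions, m) and 'box' (length-3 vector)
# are assumed inputs; the original analysis used custom MATLAB scripts.
fps <- 100
dt  <- 1 / fps

speed_by_bin <- function(traj, box, bin_width = 1.0) {
  step  <- diff(traj)                                   # frame-to-frame displacement (m)
  speed <- sqrt(rowSums(step^2)) / dt                   # instantaneous speed (m s^-1)
  d_box <- sqrt(rowSums((traj - matrix(box, nrow(traj), 3, byrow = TRUE))^2))
  bin   <- floor(d_box[-1] / bin_width)                 # distance bin at each step
  tapply(speed, bin, mean)                              # mean speed per bin
}
```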
Angle of avoidance
We also analyzed the angle of avoidance in trials in which the bat did not enter the target box. The angle of avoidance is defined as the maximum angle between the bat–target vector and the tangent to the bat's flight path as the bat initiates its avoidance turn, which occurs at or before the point of minimum distance between the bat and the opening of the box. This measure can be used as a proxy for when the bat makes the decision not to enter the box. There was no significant difference (LMM, two-way ANOVA) in the angle of avoidance across obstacle types (F3,22.58=0.36, P>0.05), nor in the distances at which the turn occurred (F1,22.99=0.0003, P>0.05), and we observed no interaction effects.
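A geometric sketch of this measure is given below (R, with the same assumed traj and box inputs as in the speed example): at each frame, the angle between the bat-to-target vector and the flight tangent is computed, and the maximum value up to the point of closest approach is taken as the avoidance angle. This is an illustrative reconstruction of the definition above, not the original analysis code.

```r
# Sketch of the avoidance-angle calculation; 'traj' and 'box' are assumed inputs.
avoidance_angle <- function(traj, box) {
  tangent   <- diff(traj)                               # flight-path tangent per frame
  to_target <- matrix(box, nrow(tangent), 3, byrow = TRUE) - traj[-nrow(traj), ]
  cos_theta <- rowSums(tangent * to_target) /
               (sqrt(rowSums(tangent^2)) * sqrt(rowSums(to_target^2)))
  theta_deg <- acos(pmin(pmax(cos_theta, -1), 1)) * 180 / pi
  d_box     <- sqrt(rowSums((traj - matrix(box, nrow(traj), 3, byrow = TRUE))^2))
  idx       <- min(which.min(d_box), nrow(tangent))     # up to closest approach
  max(theta_deg[seq_len(idx)])
}
```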
Echolocation calls
We determined the position of the bat along each flight trajectory at the time of each echolocation call emission. The trajectories were then binned into 0.5 m increments relative to the position of the center of the box opening, and we analyzed the number of calls produced in each distance bin under each stimulus condition (negative binomial regression). As expected, bats increased the number of calls as distance to the box decreased (Fig. 6), and we observed a significant interaction effect between distance bin and obstacle type (F117,6718=2.9, P<0.01). The number of calls emitted did not differ significantly with whether or not bats entered the box (F1,259.32=2.68, P>0.05), although, on average, bats emitted two more calls when they entered the box than when they did not.
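A minimal R sketch of this count model is shown below, assuming a data frame (call_counts) with one row per trial and 0.5 m distance bin and hypothetical column names; it illustrates a negative binomial mixed model with bat identity as a random effect rather than reproducing the exact analysis reported here.

```r
# Negative binomial mixed model for call counts per 0.5 m distance bin.
# 'call_counts' and its columns (n_calls, bin, stimulus, entered, bat) are
# hypothetical placeholders; 'bin' is coded as a factor of 0.5 m increments.
library(lme4)
m_counts <- glmer.nb(n_calls ~ bin * stimulus + entered + (1 | bat),
                     data = call_counts)
summary(m_counts)
```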
In many trials, audio and video recordings were captured in which the bat was not yet in flight and/or was initially out of view of one or both cameras, making it impossible to reconstruct the trajectory at those points. These calls were not included in the previous analysis and instead were analyzed separately (negative binomial regression) to determine whether the bats were calling more frequently at the beginnings of trials with different obstacle conditions or whether the calls could be used to predict whether bats would enter the box. There was no significant effect associated with the outcome of whether bats entered the box or not (F1,556.27=2.18, P>0.05). There was a significant effect of the stimulus presented (F3,557.47=2.18, P<0.05), but these effects were not statistically significant (P>0.05) in post hoc comparisons after adjusting for multiple comparisons. No interaction effect was observed.
Lastly, we analyzed several spectrotemporal acoustic parameters of bat sonar calls during each trial (see Table S1 for summaries of statistical results). There was no significant overall effect of obstacle stimulus type on spectral or temporal parameters. Because echolocation call interval (synonymous with pulse interval) is known to vary significantly with distance to an object, an analysis of call interval across 1 m distance bins was conducted with respect to the different obstacle conditions (LMM). The data show a decreasing trend in call interval (i.e. an increase in call rate) as bats approach the box (Fig. 7) and a significant interaction effect between distance bin and obstacle type (F15,5434=2.54, P<0.01). Post hoc comparisons yielded only a single significant difference, at 1–2 m, between the A+V– and A+V+ obstacles. Additional significant differences in sonar sound groups, peak and start frequencies, and bandwidth were related to whether the bat entered the box or not. We observed an increase of ∼124 Hz in the start frequency of the frequency-modulated sweep in trials in which bats did not enter the box. Peak frequency of bat echolocation calls decreased by ∼647 Hz and total bandwidth of calls decreased by ∼300 Hz when bats did not enter the box. On average, bats produced three more sonar sound groups and increased their call interval by ∼21 ms when they did not enter the box.
DISCUSSION
Bats can use both visual and acoustic cues to navigate their environments. In some instances, such as short-range navigation under crepuscular light conditions, visual and acoustic cues may provide complementary information. However, there are frequently scenarios in which vision and echolocation do not provide complementary information. For example, when navigating long distances or detecting large objects at distances exceeding ∼100 m, echolocation no longer provides reliable information, owing to the high degree of atmospheric attenuation that limits the functional range of high-frequency sound transmission (Lawrence and Simmons, 1982; Holderied and von Helversen, 2003; Stilz and Schnitzler, 2012). Thus, vision would likely provide reliable cues for identifying landmarks or large obstacles, whereas echolocation favors detection of small objects at close distances (Boonman et al., 2013). In complete or near-complete darkness, where visual cues are virtually absent, echolocation provides information about the location, size, texture and motion of objects around which the bat maneuvers (Fenton et al., 2016). The question of how vision and echolocation interact arises in situations in which both cues are available and provide useful information.
We employed an orientation paradigm to investigate the effects of multimodal cueing on obstacle avoidance in the laryngeal echolocating bat, E. fuscus. Specifically, we focused on the behavioral responses to stimuli that yielded cues within the visual and/or auditory domains, as these two senses are most often utilized in tandem and are both functional in distal sensing. We sought to determine whether bats demonstrate either auditory or visual dominance in the context of spatial orientation and obstacle avoidance. In nature, this situation might arise under dim-to-intermediate light levels that could facilitate bimodal sensing.
When presented with a task that required entering a box that was partially obstructed by an obstacle, E. fuscus demonstrated behavioral patterns that depended on the stimulus dimensions of the obstacle. Stimuli were constructed to provide echoic feedback, visual feedback or both, and the bats' performances were analyzed across several echolocation and flight kinematic parameters. When visual and echo acoustic stimuli were presented simultaneously, the two cues (A+V+ condition) were combined into a multimodal composite signal (MCS). MCSs can result in several potential outcomes that depend first on whether the individual component signals convey the same information (i.e. they elicit the same behavioral response) and second on how the conveyed information of the combined signal influences behavior (Partan and Marler, 1999). If the two component signals elicit the same behavioral response, then the two are said to convey redundant information, whereas different behavioral responses are the result of non-redundant information (Fig. 8).
We anticipated that MCSs, consisting of visual and auditory stimuli, in the obstacle avoidance task would offer redundant information and that, when presented together, they would result in an equivalent or enhanced response (i.e. a reduced percentage of trials in which the bat entered the box). This is because redundancy is one of the simplest ways to counteract a noisy environment or to discriminate potentially ambiguous signals, by having multiple sensory modalities supply ‘backup’ information. Although our laboratory-based task did not introduce noise, environments frequented by bats in the wild often contain extraneous sounds, such as signals produced by nearby conspecifics or reverberant echoes from highly cluttered environments (Dusenbery, 1992; Schnitzler and Kalko, 2001). The results of our study suggest that visual and auditory cues provide redundant information to the bats performing in the obstacle avoidance task reported here. Specifically, we observed that visual cues alone (A–V+) and acoustic cues alone (A+V–) resulted in a similar reduction in successful entry to the box when compared with the unobstructed control condition. When combined into MCSs (A+V+), the bats showed an even further decrease in entry to the box. This suggests that our multimodal signal results in an enhancement effect, perhaps due to the increased saliency of the A+V+ obstacle. Similar effects have been observed in the eastern gray squirrel (S. carolinensis), which displays enhanced responses to multisensory, audio/visual components of a conspecific alarm signal compared with either unisensory component (Partan et al., 2009).
The position and the orientation of the obstacle had no impact on whether bats entered the box. When bats did enter the box, they rarely made contact with physical objects, and thus avoided potential physical discomfort associated with a collision in A+V– and A+V+ conditions. When navigating in proximity to the A–V+ obstacles, bats frequently flew through them, breaking the beam of the laser with their wings and, occasionally, their entire body. In psychophysical experiments that probe the detection, discrimination and scaling of physical stimuli (Munoz and Blumstein, 2012), behavioral responses are used to make inferences about perception (Shettleworth, 2009). Although we demonstrated that the laser stimulus was detectable by E. fuscus and that our obstacle conditions generated different behavioral responses, we do not yet have data to make inferences about the bats’ perception of the A–V+ obstacle.
It appears that bats did not treat the laser beam as a solid object, based on their high percentage of contact with the obstacle, but their decreased number of entrances suggests that they did respond to the laser beam as either an obstacle or some other aversive stimulus. Several factors should be considered when interpreting these results, and these motivate new lines of investigation. First, it is possible that the 3 mm diameter laser beam was too small to simulate a solid obstacle, and future experiments with wider-diameter laser obstacles could address this possibility. Second, the laser obstacle, unlike the solid echo-acoustic and visual/echo-acoustic obstacles, yielded no tactile feedback when it was contacted by the bat. In fact, the absence of tactile feedback from contact with the laser may have informed the bat that the A–V+ obstacle was not a solid object, and this experience may have reduced attempts to avoid ‘collisions’. At this point, we cannot determine whether the bat learned from experience that the laser was not a solid object or whether it never perceived the laser as a solid object. In the future, these questions could be addressed by an experimental test on a 2D plane in which bats are rewarded for navigating around laser obstacles of varying diameter. This could result in one of three outcomes: (1) no change in behavior – bats continue to make contact with the obstacle with no change in the percentage of avoidances; (2) the obstacle is treated the same as the echo-acoustic obstacle – no contact is made with the beam and there is no change in the percentage of avoidances; (3) the obstacle is treated the same as the visual/echo-acoustic obstacle and bats further increase their percentage of avoidances. This approach would offer some additional insight into visual obstacle avoidance in bats, but it would still not yield conclusive answers to the question of whether bats perceive laser beams as solid objects. To tackle this challenge, bats could be trained in a psychophysical task that excludes physical contact with visual stimuli. For example, bats might perform in a match-to-sample task with a range of visual stimuli that include laser beams of varying diameters and solid objects. We wish to stress that the paradigm used in the present study was novel and exploratory in nature, and we encourage others to adopt laser stimuli to further investigate bats' use of purely visual information in obstacle avoidance and to learn what stimulus parameters may influence whether bats perceive laser beams as solid objects.
Often, we analyze echolocation and flight parameters as indicators of the information bats are gathering about their environments. In this experimental setup, bats flew faster overall in trials in which they entered the box. When analyzing speed on a finer scale, we observed an interaction effect of distance to the box opening and obstacle condition on the speed at which the bat was traveling. When the bat was close to the opening of the box (1–2 m), we observed significant decreases in speed when bats were presented with the visual-only and acoustic-visual obstacles, compared with the control and acoustic-only conditions. At 0–1 m, we observed a significant decrease in speed when bats were presented with the acoustic-visual obstacle compared with all other conditions. This suggests that bats make distance-dependent adjustments in flight speed according to obstacle modality.
Some acoustic parameters of the bats' echolocation calls showed similar distance-dependent relationships with the obstacle type being presented. The call interval significantly increased when bats navigated around the acoustic-visual obstacle, compared with the acoustic-only obstacle, at 1–2 m. This increase in call interval suggests that the visual obstacle influenced the bats' echolocation behavior at this distance. We also note the significant increase in the total number of calls at 0.5–1 m when presented with the acoustic-visual obstacle compared with all other conditions, followed by a significant decrease at 0–0.5 m. Overall, bats tended to emit slightly more total echolocation calls when they entered the box than when they did not.
Although the documented changes in flight and echolocation behaviors in this study do not offer direct insight into the bats' perception of the obstacles, we can conclude that the addition of visual information to the active sensing of echolocation alters the way individual bats choose to interact with their environment. This is consistent with a recent study by McGowan and Kloepper (2020), in which wild Brazilian free-tailed bats (Tadarida brasiliensis) were documented to exhibit different echolocation patterns when flying during the day compared with at night. Future iterations of the present experiment could introduce new behavioral paradigms to further test which environmental contexts influence multimodal sensory processing. Neurophysiological experiments may also contribute to our understanding of multimodal sensing by characterizing the underlying neural processes that mediate responses to different combinations of visual and acoustic stimuli.
Footnotes
Author contributions
Conceptualization: T.K.J.; Methodology: T.K.J.; Software: T.K.J.; Validation: T.K.J.; Formal analysis: T.K.J.; Investigation: T.K.J.; Resources: T.K.J., C.F.M.; Data curation: T.K.J.; Writing - original draft: T.K.J.; Writing - review & editing: T.K.J., C.F.M.; Visualization: T.K.J.; Supervision: T.K.J., C.F.M.; Project administration: T.K.J., C.F.M.; Funding acquisition: T.K.J., C.F.M.
Funding
Results reported here are based on work supported by the National Science Foundation Graduate Research Fellowship Program under grant 1452598 to T.K.J., and National Science Foundation Brain Initiative (NCS-FO 1734744), Air Force Office of Scientific Research (FA9550-14-1-0398NIFTI) and Office of Naval Research (N00014-17-1-2736) grants to C.F.M. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the funding agencies.
Competing interests
The authors declare no competing or financial interests.