Visual motion detection is among the best understood neuronal computations. As extensively investigated in tethered flies, visual motion signals are assumed to be crucial to detect and counteract involuntary course deviations. During free flight, however, course changes are also signalled by other sensory systems. Therefore, it is as yet unclear to what extent motion vision contributes to course control. To address this question, we genetically rendered flies motion-blind by blocking their primary motion-sensitive neurons and quantified their free-flight performance. We found that such flies have difficulty maintaining a straight flight trajectory, much like unimpaired flies in the dark. By unilateral wing clipping, we generated an asymmetry in propulsive force and tested the ability of flies to compensate for this perturbation. While wild-type flies showed a remarkable level of compensation, motion-blind animals exhibited pronounced circling behaviour. Our results therefore directly confirm that motion vision is necessary to fly straight under realistic conditions.
The execution of coordinated muscle contractions underlying locomotion is inherently imprecise and requires continuous adjustments based on sensory feedback (Taylor and Krapp, 2007; Rossignol et al., 2006; Dickinson, 2014; Tuthill and Azim, 2018). Vision is well suited to keep animals on track as any self-motion evokes characteristic image movements across the eye, termed optic flow (Gibson, 1950; Koenderink and van Doorn, 1987). In principle, optic flow allows moving animals to detect and counteract involuntary course deviations. This is perhaps most relevant during flight, where locomotor trajectories need to be controlled fast and in three spatial dimensions to avoid detrimental collisions (Egelhaaf, 2013).
Flies are among the most agile flying animals, performing minute coordinated changes in various aspects of wing motion to control course (Muijres et al., 2017). A great demand on stabilizing sensory feedback is therefore expected (Egelhaaf, 2013; Dickinson and Muijres, 2016). To investigate the influence of optic flow signals on course control, flight behaviour has been probed with visual motion stimuli (Collett and Land, 1975; Mronz and Lehmann, 2008; Stowers et al., 2017). Presented with wide-field image motion, animals display a following response, indicative of intended image stabilization on the eye and interpreted as a strategy to maintain a stable bearing. Further support for this idea is provided by closed-loop paradigms in restrained animals, where flight behaviour is read out and fed back to visual stimulation, enabling precise experimental control over behaviour–stimulus coupling (Goetz, 1975; Dickinson and Muijres, 2016; Frye and Dickinson, 2001). However, in such paradigms the dynamics underlying sensory-motor integration are altered, and both spatial and temporal aspects of visual feedback can only approximate natural reafference. Furthermore, feedback from other sensory organs is disconnected or entirely missing, and behavioural output is severely restricted. Last, visual stimuli usually contain multiple features, and course stabilization also seems feasible without the explicit representation of motion, by keeping conspicuous patterns stable on the eye (Bahl et al., 2013). Therefore, it is difficult to draw firm conclusions about the natural role of motion vision from visual stimulation and behavioural analysis alone.
In flies, the neuronal basis of visual motion processing is understood in great detail, providing further opportunities to investigate visually guided behaviour (Borst et al., 2020). Briefly, the arrival of photons at the level of the photoreceptor cells (mainly R1–6, but also R7 and R8) triggers a cascade of chemical events in a process known as visual transduction (Montell, 2012). Neuronal signals are then passed on to sequential processing stages (represented by lamina and medulla cells), which act as temporal and spatial filters, allowing local motion detection at the level of T4/T5 cell dendrites (Borst et al., 2020). Axons of T4/T5 neurons project to the lobula plate, where they segregate according to directional preference into four layers: T4/T5 cells tuned to front-to-back and back-to-front motion target layers 1 and 2, respectively; T4/T5 neurons preferring upward and downward motion occupy layers 3 and 4, respectively (Maisak et al., 2013). In the lobula plate, large dendrites of so-called lobula plate tangential cells spatially integrate T4/T5 signals in a layer-specific manner (Mauss et al., 2014) and thereby become selective to specific optic flow patterns (Krapp and Hengstenberg, 1996). Tangential cells are thus prime candidates to detect course deviations and convey signals to downstream motor centres for corrective steering manoeuvres. This notion is supported by various experimental strategies perturbing their activity in restrained walking and flight behaviour (Geiger and Nässel, 1981; Hausen and Wehrhahn, 1983; Heisenberg et al., 1978; Haikala et al., 2013; Busch et al., 2018; Kim et al., 2017; Fujiwara et al., 2017). However, the consequences of manipulating the activity of motion-sensing neurons for course control have not yet been determined in unrestrained flight.
Recently, it has become possible to render Drosophila melanogaster motion-blind by selective genetic manipulation of the first stage of visual motion detection, namely T4 and T5 cells (Maisak et al., 2013; Bahl et al., 2013). Importantly, the visual position system (Bahl et al., 2013) and all other sensory modalities remain functional. Here, we analysed free-flight trajectories of such motion-blind flies. We found that they are well able to fly but exhibit clear deficits in maintaining straight flight trajectories. We therefore provide direct evidence for the conjecture that motion vision is necessary for stable flight performance.
MATERIALS AND METHODS
Flies were raised on standard fly food (cornmeal–agar) at 25°C and 60% humidity and on a 12 h light:12 h dark cycle. Wild-caught flies (‘Luminy’) were provided by Frank Schnorrer and collected by Benjamin Prud'homme. NorpA7 flies were obtained from Bloomington Stock Center (no. 5685). The split-Gal4 driver line used to drive expression of effector genes to manipulate neuronal function in free-flight experiments (w−/w−; R42F06.AD/CyO; VT043070.DBD/TM6B) was derived in our lab from AD and DBD domains found in Bloomington Stock Center flies (no. 70685 and no. 72763). This line was crossed to w+ cs; UAS-TNT-E/UAS-TNT-E; +/+ (from Roland Strauss, University of Mainz) to create a fly line which blocks activity in all T4/T5 cells through expression of tetanus toxin: w+/w−; R42F06.AD/UAS-TNT-E; VT043070.DBD/+. For parental controls, both the UAS and the split-Gal4 driver line were crossed with CantonS flies, resulting in: w+/w−; R42F06.AD/+; VT043070.DBD/+ and w+ cs; UAS-TNT-E/+; +/+.
Female flies younger than 48 h were selected for free-flight experiments under CO2 anaesthesia. Wing ablation was performed using standard micro-scissors (Fine Science Tools, art. no. 15001-08). The cut was executed chordwise to the wing using the spot where longitudinal veins 1 and 2 connect at the distal part of the wing as a landmark. The flies were moved to a new vial with standard food and placed back in the incubator for 24 h to recover from CO2 exposure. They were then flipped into empty vials to induce starvation for 4 h before being transferred to the experimental arena. All experiments were performed overnight and were started at the same time of the day.
We assessed the baseline optomotor response using a locomotion recorder previously described in Bahl et al. (2013). Briefly, a fly whose head, thorax and wings were fixed to a needle using near-ultraviolet bonding glue (Sinfony Opaque Dentin) and strong blue LED light (440 nm, dental curing light, New Woodpecker) was placed on an air-suspended sphere. The movements of the sphere were recorded via two optical tracking sensors. The experiments were performed at 34°C, with the temperature being controlled by a custom-built Peltier system.
The visual stimuli were presented on three LCD screens (120 Hz Samsung 2233 RZ) arranged in a U-shape surrounding the fly. The displays were controlled via NVIDIA 3D Vision Surround Technology on Windows 7 (64 bit) and stimuli were displayed using Panda3D and Python 2.7. The stimuli were presented at 120 frames s−1.
For each fly, the experiment consisted of 50 trials, with stimulus conditions presented in randomized order. Full-field square-wave gratings had a spatial wavelength of λ=20 deg, a mean luminance of 11 cd m−2 and either high (49%) or low (1.4%) contrast. They moved at a velocity of 20 deg s−1 either to the left or to the right and were presented for a short (0.5 s) or a long period of time (6 s). Flies that walked continuously for at least 10 trials were selected and only trials with an average walking speed higher than 0.25 cm s−1 were included in the analysis. Turning speed traces were determined by taking the average over trials and low-pass filtering the resulting trace (τ=0.1 s in all experiments). We performed all data analysis in Python 3.7 using NumPy 1.15.1 and SciPy 1.1.0.
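The trial-averaging and smoothing step can be sketched as follows. This is a minimal illustration assuming a first-order low-pass filter with τ=0.1 s and the 120 Hz display frame rate; the exact filter implementation is not specified in the text, and the trial array here is synthetic.

```python
import numpy as np

def lowpass(trace, tau=0.1, dt=1.0 / 120.0):
    """First-order low-pass filter with time constant tau (s) at sample interval dt (s)."""
    alpha = dt / (tau + dt)
    out = np.empty(len(trace), dtype=float)
    out[0] = trace[0]
    for i in range(1, len(trace)):
        # Exponential smoothing: move a fraction alpha towards each new sample.
        out[i] = out[i - 1] + alpha * (trace[i] - out[i - 1])
    return out

# Hypothetical data: 20 trials of turning speed, 6 s at 120 Hz.
trials = np.random.randn(20, 720)
mean_trace = trials.mean(axis=0)   # average over trials
smoothed = lowpass(mean_trace)     # then low-pass filter (tau = 0.1 s)
```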
The arena was a rectangular, transparent enclosure (50×50×30 cm) of acrylic plastic (Evonik, Plexiglas XT). A custom-made array of high-intensity infrared LEDs (Roithner Laser Technik GmbH, H2A1-H830, peak at 830 nm far outside the activation spectrum of fly photoreceptors; Yamaguchi et al., 2010) and a diffuser were positioned below the arena to provide strong background light to facilitate optical tracking by cameras with short exposure time (9 ms) and at high frequency (100 Hz, Point Grey Research, CM3-U3-13S2C-CS).
To encourage flight, small ‘buzzing devices’ were placed in the four corners of the ceiling of the arena. Each device was based on a small Petri dish with yeast-supplemented fly food, i.e. emanating an attractive odour. Contact of flies with the food was prevented by a grid cover, to which a vibration motor (Pololu, #2265) was attached. The motor was activated every minute, evoking take-off and flight in flies sitting on the grid. This way, sufficient flight data were obtained even from flies in the dark, blind flies and motion-blind flies, all of which are otherwise reluctant to fly.
Static or moving visual patterns were displayed on four monitors (144 Hz, ASUS, VG248QE) placed on the sides of the enclosure. Average light intensity at the level of the monitors was 0.02 µW mm−2. A regular checkerboard pattern (dark squares 2 cd m−2, light squares 5 cd m−2) of 5×5 cm in checker size was displayed as a static visual stimulus for the majority of the experiments. For a fly located in the centre of the arena, the closest square element of the checkerboard pattern subtends an angle of 11.4 deg of visual space. Average luminance of the monitors was 3.5 cd m−2. For the experiment shown in Fig. 2B,C, the same pattern moved at 40 cm s−1 to the left for 10 s, remained static for another 10 s and then moved to the right for 10 s, followed by another 10 s of static display. The displays were controlled via NVIDIA 3D Vision Surround Technology on Windows 7 (64 bit) and stimuli were displayed using Panda3D and Python 2.7. The stimuli were presented at 144 frames s−1. The temperature inside the arena was approximately 28°C.
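The stated visual angle can be verified from the viewing geometry: a fly in the centre of the 50 cm wide arena sits 25 cm from the nearest monitor, so a 5 cm checker square subtends 2·arctan(2.5/25):

```python
import math

distance_cm = 25.0   # fly in the arena centre to the nearest monitor
checker_cm = 5.0     # edge length of one checker square

# Visual angle subtended by the closest square element.
angle_deg = 2 * math.degrees(math.atan(checker_cm / (2 * distance_cm)))
# This evaluates to approximately 11.4 deg, matching the value in the text.
```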
Our multi-camera set-up consisted of five mounted units (FLIR Inc., CM3-U3-13Y3M-CS) observing overlapping volumes of the arena (see illustration in Fig. 1A). We used standard machine vision lenses with a focal length of approximately 6 mm (Thorlabs Inc., MVL6WA). Cameras were connected to a single tracking computer via USB-3. To guarantee accurate synchronization across frame captures, image acquisition for all units was triggered by a single external TTL pulse generator that ran at 100 Hz. To prevent leakage of the visual stimulus into tracking images, we equipped all cameras with near-infrared longpass filters (Thorlabs Inc., FGL780M) that separated the displays' spectrum from near-infrared background lighting.
We calibrated intrinsic and extrinsic camera parameters with a single-step method (Li et al., 2013) that estimates the relevant matrices of all units in a multi-camera set-up from overlapping presentations of a printed random calibration pattern. The underlying camera model was a standard pinhole camera (with radial distortion). On average, we were able to achieve a reprojection error below 1 pixel based on 100–200 synchronized multi-camera snapshots of the calibration pattern. We used the algorithm as implemented in the provided MATLAB toolbox (https://sites.google.com/site/prclibo/toolbox). Calibration was performed periodically throughout the experimental phase to safeguard against shifts in camera position that could affect triangulation.
We processed incoming images from five cameras running at 100 Hz. Images were acquired as 640×512 pixel single-channel matrices. We estimated the static background by accumulating incoming images with a weight of 0.001 on each time step and subtracted this background estimate from new frames to isolate moving targets. We then applied a threshold on the resulting image at a minimum value of 5 (out of 255) to further suppress photon noise. For each camera image, we applied a standard blob detection algorithm from OpenCV (‘cv2.findContours’) to detect contiguous 2D targets. The position of a target was then defined as the weighted centre of the blob. Coordinates of these targets were combined in the triangulation process described below.
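The 2D detection pipeline can be sketched as follows. Note that this is an illustrative NumPy/SciPy version: the original uses OpenCV's cv2.findContours for blob detection, whereas scipy.ndimage is swapped in here for self-containedness, and the frame contents are synthetic.

```python
import numpy as np
from scipy import ndimage

def update_background(bg, frame, alpha=0.001):
    """Running-average background estimate with per-step weight alpha."""
    return (1 - alpha) * bg + alpha * frame

def detect_targets(frame, bg, threshold=5):
    """Background-subtract, threshold, and return intensity-weighted blob centres."""
    diff = np.clip(frame.astype(float) - bg, 0, None)
    mask = diff >= threshold                       # suppress photon noise
    labels, n = ndimage.label(mask)                # contiguous 2D blobs
    # Weighted centre (row, col) of each blob, analogous to the blob centroid.
    return ndimage.center_of_mass(diff, labels, range(1, n + 1))

# Hypothetical 640x512 frame: dark background with one bright 'fly'.
bg = np.zeros((512, 640))
frame = np.zeros((512, 640))
frame[100:104, 200:204] = 50
centres = detect_targets(frame, bg)
```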
Unlike previous studies (Straw et al., 2011), we separated the tracking step into per-frame triangulation and subsequent association to generate trajectories for identified individuals. Cross-camera association of detections was accomplished using the Hungarian algorithm, a standard method that solves assignment problems in polynomial time (Kuhn, 1955; Ardekani et al., 2013).
Briefly, for each frame our 2D tracking algorithm yields multiple x–y detections, which need to be associated across cameras to allow correct reconstruction of 3D positions. If only a single fly moved inside the arena, this operation was trivial: we found a single 2D detection per camera and all detections emanated from the same target. When multiple flies moved simultaneously, however, 2D detections needed to be correctly assigned to 3D targets. In all cases, we used standard singular value decomposition to estimate the optimal 3D position from a set of noisy 2D observations (Hartley and Zisserman, 2003). The Hungarian algorithm then efficiently calculates a minimum-cost assignment, where cost is defined as the reprojection error after assigning particular 2D points to particular 3D targets. We implemented the algorithm in Python 2.7 and Numba, relying on OpenCV or PyMVG (https://github.com/strawlab/pymvg) for various projection operations.
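A minimal sketch of the two ingredients, assuming standard linear (DLT) triangulation via SVD and SciPy's Hungarian solver; the camera matrices and cost values below are hypothetical, not the calibrated parameters of the actual set-up.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def triangulate(projections, points_2d):
    """Least-squares 3D point from >=2 views (DLT): null space of the stacked system via SVD."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]              # right singular vector of the smallest singular value
    return X[:3] / X[3]     # de-homogenize

# Two hypothetical pinhole cameras: identity intrinsics, second shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = P1.copy()
P2[0, 3] = -1.0
point = triangulate([P1, P2], [(0.25, 0.1), (-0.25, 0.1)])

# Association step: minimum-cost matching of 2D detections to 3D targets,
# with cost given by reprojection error (hypothetical values here).
cost = np.array([[0.1, 5.0],
                 [4.0, 0.2]])
row_ind, col_ind = linear_sum_assignment(cost)
```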
Filtering and association
The method outlined above provides a number of 3D targets per frame. Per-individual analysis requires an association step in which these points are aggregated into defined trajectories of single flies. The tracking algorithm treats targets as a collection of linear Kalman filters. Observations are 3D positions. The underlying state consists of six parameters: the instantaneous 3D position and the velocity along each of the three axes. All filters are based on a constant-velocity process in which manoeuvring is modelled as noisy deviations from constant velocity. The process matrix simply advances the current position by the estimated velocity times the frame interval (10 ms). We assumed the following standard deviations for the different components: 2 cm for the measurement noise, as well as 1 cm and 50 cm s−1 for the position and velocity components of the process noise, respectively. Standard deviations for the state covariance matrix were initialized as 10 cm for x–y–z position and 100 cm s−1 for all velocities. We did not tune these parameters extensively as they had little effect on tracking quality.
On each time step, we predicted the position of each target and used the Hungarian algorithm to assign novel 3D observations to the set of existing filters (based on aggregated distance cost of the assignment). Observations can only be assigned to a filter if the distance is below 1 cm. Any observation that cannot be matched to an existing filter spawns a new target. If a filter does not receive a fresh observation for 20 time steps, the instance is terminated. A trajectory is then simply the filtered position estimate of a single Kalman instance from spawning to termination.
No additional post-processing was applied to disambiguate crossing paths of flies as we found these events to be rare in practice. Reconstruction of 3D points and tracking were computed offline. We used Python 2.7 and the filterpy package (https://filterpy.readthedocs.io/en/latest/) to implement these routines.
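The constant-velocity Kalman filter described above can be sketched without the filterpy dependency. The following NumPy-only version uses the stated noise parameters (2 cm measurement, 1 cm and 50 cm s−1 process, 10 cm and 100 cm s−1 initial covariance); the noise-free straight trajectory fed to it is purely illustrative.

```python
import numpy as np

dt = 0.01  # frame interval at 100 Hz
# State: [x, y, z, vx, vy, vz]; constant-velocity process model.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                       # position advances by velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # only 3D position is observed

R = 2.0 ** 2 * np.eye(3)                         # measurement noise (2 cm std)
Q = np.diag([1.0] * 3 + [50.0] * 3) ** 2         # process noise (1 cm, 50 cm/s std)
P0 = np.diag([10.0] * 3 + [100.0] * 3) ** 2      # initial state covariance

def kf_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Hypothetical straight flight along x at 30 cm/s, observed without noise.
x, P = np.zeros(6), P0.copy()
for i in range(1, 50):
    x, P = kf_step(x, P, np.array([30.0 * dt * i, 0.0, 0.0]))
```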
Code is available upon request.
Free-flight data analysis
Trajectory selection and feature extraction
All analysis was carried out in Python 3.7, using the following libraries (among others): NumPy 1.17.2, Pandas 0.25.1 and SciPy 1.3.1. Movement trajectories (walking and flight) were loaded and those below a minimal duration (1 s) were discarded. Values for x, y and z were smoothed by convolving them separately with a block filter of size 9. To obtain an initial selection of flight trajectories, only segments within a certain z range (1.5 cm above the floor and 1.0 cm below the ceiling) were included and labelled with a new identifier. From positions over time, the following features were extracted: x–y angle (deg), x–y angular velocity (deg s−1), x–y–z flight velocity (cm s−1) and x–y (horizontal) flight velocity (cm s−1). At this point, manual inspection still revealed a fraction of walking trajectories, identifiable by low movement velocity and little x–y displacement over time. Hence, trajectories with a mean flight velocity below 3.0 cm s−1 or a sum of x and y standard deviations below 2 cm were discarded.
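The smoothing and feature extraction steps can be sketched as follows; this is an illustration under the stated parameters (block filter of size 9, 100 Hz frame rate), not the exact analysis code, and the demo trajectory is synthetic.

```python
import numpy as np

def smooth(values, size=9):
    """Moving-average ('block') filter, applied to x, y and z separately."""
    kernel = np.ones(size) / size
    return np.convolve(values, kernel, mode='same')

def features(x, y, z, dt=0.01):
    """Per-frame x-y heading angle (deg), angular velocity (deg/s) and flight speeds (cm/s)."""
    dx, dy, dz = np.diff(x), np.diff(y), np.diff(z)
    angle = np.degrees(np.arctan2(dy, dx))                    # x-y heading
    ang_vel = np.degrees(np.diff(np.unwrap(np.radians(angle)))) / dt  # wrap-safe
    speed_xyz = np.sqrt(dx**2 + dy**2 + dz**2) / dt           # 3D flight velocity
    speed_xy = np.sqrt(dx**2 + dy**2) / dt                    # horizontal flight velocity
    return angle, ang_vel, speed_xyz, speed_xy

# Hypothetical level flight along x at 10 cm/s (positions in cm).
angle, ang_vel, v3d, v2d = features(np.arange(100) * 0.1,
                                    np.zeros(100), np.zeros(100))
```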
Saccade detection was carried out based on angular velocity, obtained by differentiating the angle of consecutive x/y positions relative to the arena coordinates. For each flight trajectory, angular velocity was convolved with a Gaussian kernel of the approximate shape of a saccade (σ=40 ms). Saccade time points were then identified by peak values above a threshold of 300 deg s−1. This procedure was done separately for leftward and rightward saccades, taking the respective sign into account.
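A minimal sketch of the saccade detection, assuming the Gaussian matched filter can be approximated by scipy's gaussian_filter1d (σ=40 ms at 100 Hz) followed by peak picking above 300 deg s−1; the angular velocity trace below is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def detect_saccades(ang_vel, dt=0.01, sigma_s=0.04, threshold=300.0):
    """Saccade frame indices from an angular velocity trace (deg/s).

    Smooth with a Gaussian of roughly saccade-like width, then find peaks
    above threshold, separately for rightward (+) and leftward (-) turns.
    """
    smoothed = gaussian_filter1d(ang_vel, sigma=sigma_s / dt)
    right, _ = find_peaks(smoothed, height=threshold)
    left, _ = find_peaks(-smoothed, height=threshold)
    return np.sort(np.concatenate([right, left]))

# Hypothetical 2 s trace with one rightward saccade (600 deg/s peak) at 0.5 s.
t = np.arange(0, 2, 0.01)
ang_vel = 600.0 * np.exp(-0.5 * ((t - 0.5) / 0.04) ** 2)
saccades = detect_saccades(ang_vel)
```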
For each intersaccadic segment, distance and path length between the two endpoints were computed. To obtain the straightness index, distance was divided by path length. Statistically significant differences between genotypes were established by computing the Kolmogorov–Smirnov statistic on two samples using scipy.stats.ks_2samp in Python v.3.5.4.
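The straightness index can be sketched directly from its definition (endpoint distance divided by path length); the short example segments below are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def straightness(xy):
    """Straightness index of an intersaccadic segment: endpoint distance / path length."""
    xy = np.asarray(xy, dtype=float)
    distance = np.linalg.norm(xy[-1] - xy[0])
    path = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    return distance / path

# A perfectly straight segment scores 1; any meandering lowers the index.
straight = straightness([(0, 0), (1, 0), (2, 0)])
bent = straightness([(0, 0), (1, 1), (2, 0)])

# Genotypes would then be compared with the two-sample KS test, e.g.:
# stat, p = ks_2samp(indices_control, indices_t4t5)
```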
For each trajectory, from the angle value in each frame we subtracted the angle value from the first frame, so that each trajectory angle started at 0. Then, the last angle value from each trajectory was divided by the duration of the respective trajectory to obtain turning in deg s−1. To test for statistical significance, we performed a t-test using scipy.stats.ttest_ind in Python v.3.5.4 (significance level adjusted by a Bonferroni correction in case of multiple comparisons).
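The per-trajectory turning value can be sketched as follows; the linearly drifting heading trace is a hypothetical example, and the t-test lines show the comparison only schematically.

```python
import numpy as np
from scipy.stats import ttest_ind

def turning_rate(angles_deg, dt=0.01):
    """Net turning in deg/s: final heading minus initial heading, divided by duration."""
    angles = np.asarray(angles_deg, dtype=float) - angles_deg[0]  # start at 0
    duration = (len(angles) - 1) * dt
    return angles[-1] / duration

# Hypothetical trajectory drifting 45 deg over 1 s (101 frames at 100 Hz).
rate = turning_rate(np.linspace(0, 45, 101))

# Genotype comparison with Bonferroni-adjusted significance level, e.g.:
# t, p = ttest_ind(rates_control, rates_t4t5)
# significant = p < 0.05 / n_comparisons
```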
Code is available upon request.
Primary antibodies used were: mouse anti-Bruchpilot (1:20, Developmental Studies Hybridoma Bank, AB2314866), rabbit anti-Tetanus Toxin (1:5000, SSI Antibodies, 65873 POL 016). Secondary antibodies used were: ATTO 647N goat anti-mouse (1:400, Rockland, 610-156-040), Alexa Fluor 568-conjugated goat anti-rabbit (1:400, Life Technologies, A-11011).
Brains were dissected in cold PBS and fixed in 4% paraformaldehyde (0.1% Triton X-100) for 25 min at room temperature. They were then washed 3 times with PBST (PBS containing 0.3% Triton X-100) and blocked with normal goat serum (10% NGS in PBST) for 1 h. Brains were incubated at 4°C for 48 h with primary antibodies diluted in NGS solution. They were washed 3 times (for 1–2 h each) with PBST and then incubated at 4°C for 48 h with secondary antibody diluted in NGS solution. Brains were then washed 3 times in PBST before mounting in SlowFade Gold Antifade Mountant (Thermo Fisher Scientific).
RESULTS AND DISCUSSION
Probing free flight
We released flies in a transparent enclosure (Collett and Land, 1975; Straw et al., 2011; Tammero and Dickinson, 2002) (Fig. 1A) and tracked their positions in 3D at 100 frames s−1 using a calibrated camera system. We calculated features frame-by-frame, such as flight velocity and turning angle. Static or moving visual patterns were displayed on four monitors surrounding the enclosure. We used wild-caught flies for an initial characterization. In agreement with previous accounts, recorded trajectories consisted of straight segments interspersed by sharp turns, so-called body saccades, which have been observed during free flight of different fly species (Collett and Land, 1975; Mronz and Lehmann, 2008; Schilstra and van Hateren, 1999; Tammero and Dickinson, 2002; Egelhaaf et al., 2012) (Fig. 1B–D). We detected saccades on the basis of turning velocity. Another characteristic feature of a saccade is a brief drop in flight velocity (Mronz and Lehmann, 2008; Schilstra and van Hateren, 1999; Tammero and Dickinson, 2002) (Fig. 1D; Fig. S1).
Flight with and without vision
Appropriate detection of various self-evoked optic flow components by neural circuits is instrumental in monocular depth perception (Srinivasan, 2011; Ravi et al., 2019; Egelhaaf et al., 2012) as well as estimating and regulating locomotor speed (Baird et al., 2005; Srinivasan et al., 1996; Pfeffer and Wittlinger, 2016). Self-induced image motion may also provide sensory cues to detect and counteract involuntary course deviations (Gibson, 1950; Goetz, 1975; Collett and Land, 1975; Egelhaaf, 2013).
We first asked how missing visual feedback affected flight structure by comparing trajectories of the same wild-type strain under two conditions: with visual patterns surrounding the arena (i.e. with intact visual feedback) and in darkness (i.e. without any visual feedback). Trajectories obtained in the dark were usually shorter in duration. In addition, the average flight velocity in darkness was increased (Fig. 1E), in line with the idea that flight velocity is reflexively modulated by the received optic flow (Mauss and Borst, 2020; Baird et al., 2005; Srinivasan et al., 1996). All saccade metrics, however, were highly similar (Fig. S1). This finding further supports the notion that, although visual signals can trigger saccades, the execution of the underlying motor programme is not reflexively modulated by visual motion feedback (Bender and Dickinson, 2006; Tammero and Dickinson, 2002; Karmeier et al., 2006).
Closer inspection of flight trajectories revealed that vision is particularly important for maintaining a stable bearing during intersaccadic flight. To quantify this, we calculated a straightness index by dividing the distance between two consecutive saccades by the covered path length (Fig. 1B). For each experimental condition, we further divided intersaccadic segments into two groups: short (50–250 ms) and long (250–2000 ms). Comparing straightness indices between the bright and dark conditions revealed a significant reduction for long segments recorded in the dark (Fig. 1F). We further obtained data from a completely blind fly strain, NorpA, which carries a mutation in the essential phototransduction enzyme phospholipase C (Hotta and Benzer, 1970). NorpA flies flew even faster and less straight than wild-type flies in the dark (Fig. S2).
Free-flight behaviour of motion-blind flies
From the results above, we can conclude that vision is important for course stabilization. However, the question remains whether the stabilizing influence is exerted by visual motion signals. Course stabilization can also be achieved by keeping conspicuous visual features at a constant position on the retina (Bahl et al., 2013; Bar et al., 2015).
In the fly optic lobe, lobula plate neurons integrate the signals from specific sub-samples of local motion detectors T4 and T5. Thus, they become selective to flow fields such as rotation, translation (Karmeier et al., 2006; Krapp and Hengstenberg, 1996) or expansion (Klapoetke et al., 2017). Any of these self-induced flow components may influence movement trajectories (Collett and Land, 1975; Rock and Smith, 1986; Warren et al., 2001; Mronz and Lehmann, 2008).
To test for the involvement of motion vision, we rendered flies motion-blind by cell-specific expression of tetanus toxin in the primary motion-sensing neurons (‘T4T5>TNT’; Fig. 2A). We confirmed the absence of motion vision in these flies in two different ways. First, we found that the optomotor turning response of tethered walking flies was completely abolished (Fig. S3). Second, we measured turning of freely flying flies in response to horizontal pattern rotation around the arena. In contrast to controls, which showed the expected following reaction (Mronz and Lehmann, 2008), responses of T4T5>TNT flies to moving patterns (clockwise and counter-clockwise) did not reveal any average turning response (Fig. 2B,C).
We next analysed flight trajectories of control and T4T5>TNT flies in the presence of static patterns. Flight structure of T4T5>TNT flies appeared normal, albeit with an increased flight velocity (Fig. 2D). Furthermore, intersaccadic segment straightness of T4T5>TNT flies was reduced compared with controls. These results are similar to those of wild-type flies in the dark (compare Fig. 2D,E with Fig. 1E,F), demonstrating a role of motion vision in keeping flight trajectories straight.
Compensation of aerodynamic asymmetry
The above results suggest that the contribution of motion vision to keeping intersaccadic flight straight is significant but subtle. However, inherent left or right turning bias at the level of individuals (Souman et al., 2009) might in part be concealed in the population response that we measured. Furthermore, course control of individuals in nature may be acutely challenged by air turbulence or chronically by wing damage.
In order to test the flies' ability to compensate for a consistent turning bias, we clipped ∼25% of the tip of either the right or the left wing. Insects that have undergone wing damage change the dynamics of their wing movements to compensate for the loss of propulsion (Bender and Dickinson, 2006; Muijres et al., 2017; Kihlström et al., 2021). We processed data of left wing-clipped flies as if they were clipped on the right side, allowing us to combine data from both manipulations. We quantified trajectories from the following experimental groups: wild-type flies in the light and dark (Fig. 3A–D), as well as TNT control flies, Gal4 control flies and T4T5>TNT flies in the light (Fig. 3E–H). Visual inspection of individual trajectories (x/y coordinates) from wing-clipped wild-type flies in the light revealed a flight structure similar to that of intact controls (Fig. 3A). However, wing-clipped wild-type flies in the dark behaved differently in that many flight trajectories exhibited a clockwise or counter-clockwise circular structure (Fig. 3B). The same was true when comparing TNT controls (normal flight structure) with T4T5>TNT (curved trajectories) (Fig. 3E,F).
Calculating straightness indices revealed a strong reduction for wing-clipped wild-type dark and T4T5>TNT flies, compared with their respective controls (Fig. S4). However, as saccade detection might be compromised by circling flight, we analysed trajectories independent of saccade detection. First, for each trajectory, we took the orientation over time, obtained from the angle of the vector defined by consecutive x–y positions relative to the arena coordinates. We then subtracted the initial angle so that the orientation of each trajectory commenced at zero and computed the average across the first second of recording (Fig. 3C,G). The average orientation of wild-type, TNT control and Gal4 control flies in the light revealed an almost perfect compensation, i.e. a small change in orientation over time. Both wild-type flies in the dark and T4T5>TNT flies in the light, however, exhibited a pronounced average drift in the direction opposite to the wing-clipped side. Furthermore, for each trajectory we calculated a single turning value in deg s−1 by dividing the total change in orientation from beginning to end by the duration. While the average turning velocity for wild-type, TNT control and Gal4 control flies in the light was close to zero, this parameter was much higher for wild-type flies in the dark and T4T5>TNT flies in the light, at ∼45–60 deg s−1 (Fig. 3D,H).
To summarize, eliminating motion vision had the same effect as removing all visual input: flies lost their ability to compensate for an experimentally introduced turning bias. Hence, these results directly demonstrate a stabilizing influence of motion vision on course control.
Sensory cues complementary to optic flow
Animals have various additional sensory cues at their disposal to control heading. For instance, stable bearing can be aided by keeping conspicuous visual features stationary on the retina, without the requirement for explicit visual motion representation (Bahl et al., 2013). This involves the computation of an error angle to be minimized by appropriate turning reactions. However, in nature, suitable visual landmarks may not always be present, for instance in densely cluttered surrounds. Furthermore, using the error angle for proportional control of heading is noise sensitive and prone to overshoot, as shown in bats (Bar et al., 2015). Optic flow in turn provides a signal akin to the derivative of an error angle (Bar et al., 2015). Under natural conditions, both the position and the motion vision systems are probably used in a redundant fashion for robust steering.
In addition to vision, mechanosensory feedback from body appendages plays an important role in preventing accidental heading changes. In Diptera, for instance, the halteres – club-shaped appendages modified from hind wings – act as gyroscopes sensing body rotations (Nalbach, 1993; Dickinson, 1999). Because they are tightly coupled to the wing motor system via afferent and efferent connections (Dickerson et al., 2019), they provide ultrafast feedback critical for stable flight. Visual motion in turn signals slower rotations (Sherman and Dickinson, 2004), complementing haltere feedback in a different angular velocity regime.
It has long been recognized that motion vision is suitable to subserve various ethological functions. However, the significance of self-evoked visual motion signals for course control has been difficult to address. Here, by combining free-flight tracking with the specific removal of direction-selective neurons in flies, we directly demonstrate an important contribution of the motion vision system to course stabilization. As the phenotypes of motion-blind flies are no different from those of unimpaired flies in the dark, non-motion visual cues do not seem to contribute substantially, at least in our experimental set-up. Our work establishes a basis from which other contributing sensory cues and their integration with motion vision can be further explored under naturalistic conditions.
We would like to thank T. Schilling for the characterization of T4/T5 split-Gal4 driver lines, M. Sauter, Renee Vieira and J. Pujol-Marti for help with dissection and confocal imaging, S. Prech for technical assistance in building the free-flight arena, C. Theile and R. Kutlesa for help with the tethered-walking experiment, W. Essbauer for fly work, B. Prud'homme, F. Schnorrer and N. Luis from University of Marseille, France, for catching and providing the wild-caught strain ‘Luminy’, R. Strauss from University of Mainz, Germany, for providing the UAS-TNT strain and W. Denk from the MPI of Neurobiology for carefully reading the manuscript.
Conceptualization: A.L., A.B., A.S.M.; Methodology: A.L., A.S.M.; Software: A.L., A.S.M.; Validation: M.-B.L., A.S.M.; Formal analysis: M.-B.L., A.S.M.; Investigation: M.-B.L., A.S.M.; Resources: A.L., A.S.M.; Data curation: A.S.M.; Writing - original draft: A.S.M.; Writing - review & editing: M.-B.L., A.L., A.B., A.S.M.; Visualization: M.-B.L., A.S.M.; Supervision: A.B., A.S.M.; Project administration: A.B., A.S.M.; Funding acquisition: A.B., A.S.M.
This work was supported by the Deutsche Forschungsgemeinschaft (SFB 870) and the Max-Planck-Gesellschaft. Open access funding provided by Max-Planck-Institute of Neurobiology. Deposited in PMC for immediate release.
The authors declare no competing or financial interests.