ABSTRACT

Insect navigation is strikingly geometric. Many species use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest and can store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. Insects can also use remembered views of the terrain around salient locations or along travelled routes to guide return, which is a fundamentally geometric process. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data. Notably, this ‘base model’ does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by this base model that may point towards the existence of a more complex or integrated mental map in insects.

Introduction

Are the internal mechanisms supporting biological navigation ‘map-like’ representations? Could animal brains contain something similar to the human cartographic artefact, i.e. geometric and geocentric knowledge of the spatial layout of their environment (imagine an accurate plan of central London) into which they can embed their knowledge of salient places and thus flexibly navigate between them, including taking novel shortcuts? Or do they have topological maps, i.e. knowledge of the connectedness (or routes) between multiple locations in the world (imagine the London underground map) with, at best, some approximate estimate of distance and direction? With a topological map, knowledge of connections can be used to plan novel routes between arbitrary nodes within the network. However, it is not always necessary to invoke a geometric or topological map to explain novel routes or shortcuts. A familiar location or landmark might be recognised and approached from a different viewpoint (imagine catching sight of Big Ben). Or keeping careful track of the directions and distances travelled on an outward path (dead reckoning) might allow estimation of the straight-line direction and distance to travel back to the start point.

All these approaches can be found in robot navigation. Current research in this area is dominated by ‘simultaneous localisation and mapping’ or ‘SLAM’ (Bailey and Durrant-Whyte, 2006a), generally characterised as the ability to derive a geometrically accurate layout of observed geographic features within a traversed space while simultaneously tracking the location of the robot within that space (Fig. 1A). This typically combines dead reckoning with landmark recognition in a joint probabilistic estimate of robot and landmark positions (Cummins and Newman, 2008; Davison et al., 2007; Engel et al., 2014; Mur-Artal and Tardos, 2017). However, for long-range autonomous navigation systems, such as self-driving cars, a practical solution is to recover only local geometry, and to link spaces through a topological representation (Boal et al., 2014; Kuipers et al., 2004). Other SLAM approaches do not attempt geometric reconstruction, but learn geometric appearances along trajectories, producing ‘topometrical’ maps (Badino et al., 2012). For robot route planning in general, a textbook approach (e.g. Siegwart et al., 2011) is to convert a geometric map to a topological graph, thus reducing the state space and facilitating the application of planning algorithms (Fig. 1B).
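Once a map has been reduced to a topological graph as in Fig. 1B, planning alternative routes becomes ordinary shortest-path search. The following sketch is not tied to any particular SLAM system; the node names and edge lengths are invented for illustration:

```python
from heapq import heappush, heappop

def shortest_route(graph, start, goal):
    """Dijkstra search over a topological map.

    graph: dict mapping node -> list of (neighbour, edge_length) pairs.
    Returns (total_length, [node, ...]), or (inf, []) if unreachable.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, length in graph[node]:
            if neighbour not in visited:
                heappush(frontier, (cost + length, neighbour, path + [neighbour]))
    return float('inf'), []

# Hypothetical junctions of a Voronoi-style graph (cf. Fig. 1B).
topo_map = {
    'start': [('a', 2.0), ('b', 5.0)],
    'a': [('start', 2.0), ('b', 1.0), ('end', 6.0)],
    'b': [('start', 5.0), ('a', 1.0), ('end', 2.0)],
    'end': [('a', 6.0), ('b', 2.0)],
}
cost, route = shortest_route(topo_map, 'start', 'end')
# route is ['start', 'a', 'b', 'end'] with total length 5.0
```

The point of the conversion is exactly this reduction of the state space: search runs over a handful of graph nodes rather than the full metric map.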

Fig. 1.

Current approaches to robot navigation. Left: Geometric simultaneous localisation and mapping (SLAM). As the robot moves (arrows), it detects landmarks (pentagons) and maintains a probabilistic estimate (ellipses) of their location (green), jointly with an estimate of its own pose (red). Sampling the same landmarks reduces error (e.g. top left, sampled at the start and end of trajectory), allowing convergence to an accurate metric map. Right: Converting to a topological map. One method is to use a ‘Voronoi diagram’ (Siegwart et al., 2011) to find a graph of navigable paths through the space between landmarks. This allows alternative routes (e.g. from start to end) to be planned efficiently.

Humans asked to draw maps of their environment will typically make large errors in geometry (Foo et al., 2005; Sadalla and Montello, 1989; Sadalla and Staplin, 1980; Warren et al., 2017) but preserve the approximate topology, allowing some shortcuts and novel routes to be calculated. Robot and human navigation may have converged on this local geometry/distal topology solution because the problem space often involves traversal of segments between locations of interest, e.g. along corridors between rooms, or along paths or roads between destinations, where geometric shortcuts may be impossible or hazardous. The same constrained route situation may have existed for our primate ancestors in dense forest (Presotto et al., 2018). Moreover, humans had to invent the compass before they could obtain sufficiently reliable allothetic directional information to prevent catastrophic error accumulation in dead reckoning (Cheung et al., 2007).

Insects form a striking contrast, as they have evolved reliable compass sensing, based on celestial cues. These systems have been described many times (Collett and Baron, 1994; Heinze et al., 2013; Homberg, 2004; Wehner, 1998), but briefly, they involve specialised visual receptors for polarised light, incorporating sun position and spectral gradients to enable a readout of compass heading that can be accurate on the order of a degree over short journeys (Chu et al., 2008; Lambrinos et al., 1998; Stürzl and Carey, 2012), and can be time-compensated during longer journeys (Lindauer, 1960). The insect species most renowned for navigation are central place foragers, operating over rather uniform terrain (e.g. the desert ant) or in flight (e.g. the honey bee). As such, they have a fixed origin at the nest or hive for most of their journeys. Furthermore, a direct shortcut home is unlikely to be impeded, so should be preferred for its efficiency.

Thus, these insects are known to combine their compass information with estimates of speed or distance to perform dead reckoning (path integration) on convoluted outward routes, maintaining a constantly updated ‘home vector’, which they can use at any time to guide them directly back to their starting location (Wehner and Srinivasan, 2003). They can also use vector memories to encode the direction and distance of salient locations in the world (typically food sources) relative to the nest (Dacke and Srinivasan, 2008b; Ribbands, 1949; Wehner et al., 2002). In honeybees, this vector can be communicated by the dance, and thus used by new recruits to discover the food location (Riley et al., 2005). Moreover, as demonstrated in several modelling studies (e.g. Cruse and Wehner, 2011; Goldschmidt et al., 2017), combining the current home vector state with a vector memory allows insects to take novel shortcuts between their current location and the vector memory location. For example, a bee reaching an empty feeder may take a (novel) flight directly towards an alternative (known) feeder location (Menzel et al., 2011); and an ant forced to make a detour on an outward journey to a feeder will take the (novel) direct path from the end of the detour to the feeder (Collett et al., 1999).

Note that this latter capability depends crucially on the insect's vector memory and home vector having a common origin (home) and common frame of reference (the celestial compass). As such, the insect has – at least implicitly – a true geometric map in which salient locations and its own position are encoded in geocentric (nest-centric) metric coordinates. It is also widely accepted that insects have a second – but possibly quite independent – source of geometric information they can use for guidance in the form of memory of visual landmarks or views. Orienting towards (Graham et al., 2003), aligning with (Zeil et al., 2003) or moving so as to reduce the discrepancy with (Cartwright and Collett, 1983) a view memory is inherently a geometric operation owing to the properties of light projection. Thus, for insects, the cognitive map debate should not be about the existence of ‘a centralised mental metric representation’ per se (Hoinville and Wehner, 2018), but rather whether this information can be used in an explicit map-like fashion, e.g. for flexible route planning; or whether it remains implicit and ‘at any one time, the animal knows where to go rather than where it is on some kind of cognitive map’ (Hoinville and Wehner, 2018). In this Review, I approach this much debated question by first outlining a specific, minimalist, ‘base’ model of insect navigation that is geometric but does not support planning. I then critically examine what behavioural evidence exists, or could be provided, to contradict this model and thus establish the existence of a more map-like navigational capability in insects.

A base model for insect navigation

Insect navigation has been explored in a wide range of mathematical and computational models (e.g. Arena et al., 2013; Baddeley et al., 2012; Cartwright and Collett, 1983; Cruse and Wehner, 2011; Dewar et al., 2014; Goldschmidt et al., 2017; Hartmann and Wehner, 1995; Mathews et al., 2009; Möller and Vardy, 2006; Vardy and Möller, 2005; Wittmann and Schwegler, 1995), with a number of these also demonstrated in robots (e.g. Kodzhabashev and Mangan, 2015; Lambrinos et al., 2000; Mathews et al., 2010; Möller, 2000; Smith et al., 2007). Recent work shows some convergence in the proposed computational mechanisms for path integration and visual navigation in insects, and their interaction, which I will present in the form of a ‘base model’, i.e. what appears to be the simplest set of assumptions that might potentially be sufficient to capture insect behaviour.

The base model (Fig. 2A) has three components. (1) Path integration (PI): based on an allothetic compass sense, and starting from a fixed origin, the animal is able to integrate its velocity to encode its current position in a fixed global coordinate system relative to the origin, and has a movement control mechanism that allows it to make a direct return to the vicinity of the origin (Fig. 2B). (2) Vector memory: the state of the PI system on reaching a goal can be stored, and later activation of that memory by an internal motivation to return to the goal can interact with PI to produce a return to the goal (Fig. 2C). (3) View memory: multiple images when facing or moving along a route to a goal can be stored, allowing the familiarity of the current view (e.g. its degree of retinotopic match to a previously stored view) to guide movement (Fig. 2D).

Fig. 2.

The three navigation mechanisms in the insect base model (note coloured shapes are visible obstacles). (A) On a random outbound exploration from the nest (N), the insect path integrates (PI) to maintain a home vector (left), stores vector memories when food (F) is encountered (middle), and stores nestward snapshots when facing the nest (right). (B) Homing using PI. The insect moves so that PI approaches zero, producing the shortest path from food to the nest location. Obstacles may force deviation from the direct route. A dense set of snapshots along the route is stored, tagged as ‘homeward’. At the nest location, continuing PI control produces emergent search around the zero point. (C) Returning to food. Subtracting a food vector memory from PI controls the insect's route back to the food, i.e. where the difference is zero. Note that obstacles may result in different deviations to the homeward path in B. A dense set of snapshots (not shown) along the route is stored, tagged as ‘foodward’. (D) Route following. The insect at each point on the route seeks the heading direction where the current view has a minimum difference from any stored homeward snapshot.

It is not assumed that either vector memory or view memory is associated with specific, individualised goals (feeder A versus feeder B), but only that each is associated with a motivational state, i.e. whether the memory corresponds to facing or moving towards food or towards home. Otherwise, assumptions 1–3 are comparable to the recent model presented by Hoinville and Wehner (2018). The familiarity memory assumed in (3) need not be purely visual but may incorporate multimodal cues, such as odour (Buehlmann et al., 2015) and wind direction (Wolf and Wehner, 2000); for convenience, I will refer to it as ‘view memory’ in what follows. Finally, note that this base model does not include some mechanisms usually assumed to be part of the ‘toolkit’ of insect navigation (Wehner, 2009), such as search behaviour or attractor-like visual homing from an arbitrary direction to a stored snapshot location (Zeil, 2012). These are excluded on the basis that they can potentially be subsumed under PI or view memory mechanisms, as detailed below.

The three components are assumed to interact in the following ways only. First, PI and vector memory determine which images are stored in view memory. During learning excursions, the animal needs to store images when its PI indicates it is facing home (Fig. 2A). For route following, the animal's first trip home from food must be guided by PI (Fig. 2B), and its first trip back to the feeder by vector memory (Fig. 2C), so that it experiences and learns the relevant views. Subsequently, the animal could use view alignment alone (Fig. 2D) to choose its heading direction. Second, the outputs of the three systems can be combined to control behaviour, and the weighting of the PI, vector memory and view memory components can vary.

Although these assumptions may be expressed relatively simply in computational terms, they actually introduce a great deal of flexibility in interpretation of observed behaviour. Two recent modelling approaches that are helpful in this respect are the use of accurate recording and reconstructions of insect environments to determine what visual information is actually available from the insect's viewpoint (e.g. Stürzl et al., 2008; Zeil et al., 2003), and attempts to provide an explicit quantitative prediction for how the weighting between PI, vector and view memory should be determined in any given situation (e.g. Hoinville and Wehner, 2018; Wystrach et al., 2015).
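To illustrate the form such a weighting scheme might take, the sketch below (my own simplification, not the specific model of Hoinville and Wehner, 2018, or Wystrach et al., 2015) treats each subsystem's output as a preferred heading plus a confidence weight, and combines them as a weighted vector sum:

```python
import math

def combine_headings(cues):
    """Combine directional recommendations as a weighted sum of unit vectors.

    cues: list of (heading_radians, weight) pairs, one per subsystem
    (e.g. PI, vector memory, view memory); a weight might reflect a cue's
    current reliability, such as home-vector length or view familiarity.
    Returns (combined_heading, combined_strength): agreement between cues
    yields a strong combined signal, conflict a weak one.
    """
    x = sum(w * math.cos(h) for h, w in cues)
    y = sum(w * math.sin(h) for h, w in cues)
    return math.atan2(y, x), math.hypot(x, y)

# PI confidently recommends 0 rad; view memory weakly prefers pi/2.
heading, strength = combine_headings([(0.0, 0.9), (math.pi / 2, 0.3)])
# heading is pulled only slightly off 0 rad by the weaker cue
```

One attraction of this formulation is that no explicit arbitration step is needed: a cue with near-zero weight simply stops influencing behaviour.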

The proposed mechanism for path integration

Vickerstaff and Cheung (2010) have argued convincingly, based on the accuracy and efficiency of updating the vector, that insects use a geocentric static-vectorial representation (Cheung et al., 2007) for their home vector, of which the simplest form is a Cartesian encoding, e.g. as proposed by Mittelstaedt and Mittelstaedt (1973). Cartesian encoding can be generalised to any algorithm in which the motion vector is projected onto multiple axes and for each axis the input is accumulated. Using more than two axes is redundant but may have advantages in reliability, ease of read-out for control or biological plausibility. Such a representation lends itself very naturally to an interpretation in terms of tuned heading direction cells, each modulating the accumulation of speed in its preferred direction (Arena et al., 2013; Goldschmidt et al., 2017; Haferlach et al., 2007; Kim and Hallam, 2000; Mathews et al., 2009; Stone et al., 2017). This will result in a distributed encoding of the outward path, in the form of the accumulated distance in each heading direction (Fig. 3, top), from which the home direction and distance can be recovered by vector summation. We have recently proposed how such a mechanism can be mapped to identified neurons and connectivity of the insect central complex inferred from neurophysiology, neuroanatomy and electron microscopy data (Stone et al., 2017) (see Fig. 3, and also Honkanen et al., 2019). We have shown in extensive testing that this model is sufficient to produce reliable path integration and steering control to return the animal to the origin. It also produces insect-like search patterns around the origin, thus obviating the need to invoke a separate search control algorithm.
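The accumulation scheme described above can be sketched as follows. This is a deliberately simplified, signed version: the eight preferred directions echo the TB1/CPU4 arrangement of Stone et al. (2017), but the code is only illustrative of the projection-and-accumulation principle, not of the neural implementation:

```python
import math

N_DIRS = 8  # eight preferred compass directions, as in the TB1/CPU4 circuit
PREFS = [2 * math.pi * i / N_DIRS for i in range(N_DIRS)]

def update_accumulators(acc, heading, speed):
    """Project the current velocity onto each preferred direction and add it
    to that direction's accumulator (a signed simplification; real CPU4
    activity is positive and bounded)."""
    return [a + speed * math.cos(heading - p) for a, p in zip(acc, PREFS)]

def home_vector(acc):
    """Recover the nest-pointing vector by vector summation of the population
    code. Each axis is effectively counted N_DIRS/2 times, hence the
    2/N_DIRS scaling; home is the negation of the travelled vector."""
    x = sum(a * math.cos(p) for a, p in zip(acc, PREFS))
    y = sum(a * math.sin(p) for a, p in zip(acc, PREFS))
    return -2 * x / N_DIRS, -2 * y / N_DIRS

acc = [0.0] * N_DIRS
for heading, dist in [(0.0, 10.0), (math.pi / 2, 5.0)]:  # 10 east, then 5 north
    acc = update_accumulators(acc, heading, dist)
hx, hy = home_vector(acc)  # ≈ (-10, -5): points back to the origin
```

Note that the redundancy of using more than two axes is visible here: any pair of non-parallel accumulators would suffice mathematically, but the distributed code degrades gracefully if individual channels are noisy.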

Fig. 3.

Path integration in the central complex. Top: (left) in each of eight compass directions (green, identified with TB1 cells in the protocerebral bridge), the speed of motion is accumulated in a set of integrator cells (orange/yellow, identified with columnar cells CPU4). Thus, for any path (two examples are shown), the activity level of CPU4 cells (drawn as an arrow for the CPU4 pair in each direction) forms a distributed population code for the home vector. Note that home vector length is encoded by the relative, not absolute, CPU4 activity. Bottom (from left to right): the same circuit is redrawn to clarify the steering mechanism. Columnar offsets rotate the population coded home vector (memory) one step to the left (yellow) or right (orange). The current heading direction (compass, green) is subtracted from each to determine activation of the left or right output cells (identified with CPU1a and CPU1b). All left and right output cell activities are summed, then the sums are compared to determine the correct way to turn.

The proposed mechanism for vector memory

If a navigating animal stores the current state of its home vector when it arrives at different locations (Fig. 4, step 1), its memories will be in a single, consistent, geocentric frame of reference, as the allothetic celestial compass cues are fixed in orientation relative to the terrain, as is the origin. Previous modelling studies (e.g. Cruse and Wehner, 2011) have demonstrated that if an insect is assumed to have acquired such vector memories, it can reload one of these into its PI-homing system to drive movement back to that location, automatically compensating for any enforced deviation from the desired route. Importantly, this does not require the insect to have more than one integrator (cf. Collett and Collett, 2000). The basic concept [which we have shown can be implemented using the central complex model of Stone et al. (2017); see F. le Moel, T. Stone, M. Lihoreau, A. Wystrach and B.W., unpublished results] is that the difference between an activated vector memory and the current PI state drives steering until those states are equal (Fig. 4, step 2). This can apply from any initial PI state, and hence also supports novel shortcuts (Fig. 4, step 3). If the choice of vector memory is made dependent on the initial amplitude of its difference from the PI state, additional intelligent decisions by the animal can be explained, such as not traversing a shortcut if it exceeds a certain distance (Menzel et al., 2011), or choosing the vector memory that produces the shortest distance to traverse.
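The core operation, the difference between an activated vector memory and the current PI state, is simply a vector subtraction in the shared nest-centred frame. A toy sketch, with hypothetical feeder coordinates:

```python
def steer_vector(pi_state, memory):
    """Desired travel vector = memory - current PI state, both expressed in
    the shared, nest-centred (geocentric) frame. Driving this difference to
    zero brings the animal to the remembered location from ANY starting PI
    state, using a single integrator."""
    return (memory[0] - pi_state[0], memory[1] - pi_state[1])

# Hypothetical feeder coordinates, with the nest at the origin.
f1, f2 = (30.0, 0.0), (10.0, 20.0)

# At F1 the PI state equals F1's stored vector, so activating the F2 memory
# yields the novel direct shortcut F1 -> F2, never explicitly computed before.
shortcut = steer_vector(f1, f2)  # → (-20.0, 20.0)
```

The shortcut emerges without any explicit addition of vector memories, exactly as noted in the legend of Fig. 4: the PI state itself carries the first memory's contribution when the second memory is activated.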

Fig. 4.

Vector memory can support novel shortcuts. Step 1: the insect stores the PI state when at a food location (F1) before travelling home (H), bringing the PI state to zero. Step 2: this vector memory is applied as inhibitory input to the steering cells. The insect will thus move until the accumulated PI state balances the memory, i.e. it has returned to the food location. By releasing the inhibition, it could follow PI back home. Step 3: alternatively, it can inhibit with another vector memory (F2). It will again move so that PI and memory balance, taking a direct path to F2. Note that this does not require explicit addition of the two vector memories, but achieves the equivalent effect because the PI state corresponds to the first memory when the second memory is activated.

The proposed mechanism for view memory

Most models of insect view memory follow the ‘snapshot model’ of Cartwright and Collett (1983) by assuming the memory is of a retinotopic, panoramic view, rather than of individual and identifiable landmarks and their estimated spatial locations. Models differ in their assumptions about how the information in a view is stored: a one-dimensional horizon ring (Franz et al., 1998; Möller et al., 1998); a single vector that averages the bearings of all landmarks (Möller et al., 2001) or of the intensity pattern (Hafner, 2001); or using motion vectors (Dittmar et al., 2010; Vardy and Möller, 2005). It has proved surprisingly effective to just use the raw image and simple pixel-wise differencing, at low resolution, as the basis for visual navigation (Philippides et al., 2011; Zeil et al., 2003). This information can tell the animal when it is facing approximately the same way with respect to a previously captured view (Zeil et al., 2003, 2014). It can also be used to establish (as the difference changes) whether it is moving closer to or further from the location where the view was stored. Most strikingly, this proves to be effective even when multiple views have been stored and the system has no information about which one to select for the comparison (Baddeley et al., 2011, 2012). By comparing the current view with all stored views, the best match will occur when aligned in the same orientation as the nearest view. This provides a procedure to use view similarity alone to guide the insect towards that goal. In particular, it is possible that this mechanism operates for movement to the goal from novel locations, not just along familiar routes (Dewar et al., 2014; Graham et al., 2010; Wystrach et al., 2013), thus obviating the need to invoke a separate ‘visual homing’ mechanism. 
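A minimal sketch of this raw-image matching strategy follows, using tiny one-dimensional 'panoramas' and a circular shift as a stand-in for body rotation (real implementations compare two-dimensional low-resolution images; the pixel values here are invented):

```python
def image_difference(a, b):
    """Mean absolute pixel-wise difference between two equal-size views."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def best_heading(current, snapshots):
    """Try every rotation (circular shift) of the current panorama and return
    the shift giving the lowest difference to ANY stored snapshot: the agent
    never needs to know which snapshot it is matching."""
    n = len(current)
    scores = []
    for s in range(n):
        rotated = current[s:] + current[:s]
        scores.append((min(image_difference(rotated, snap)
                           for snap in snapshots), s))
    return min(scores)[1]

# 1-D 'panoramas' of 8 pixels (= 8 headings); a bright blob marks the scene.
snapshots = [[0, 0, 9, 9, 0, 0, 0, 0]]  # view stored while on the route
current = [0, 0, 0, 0, 9, 9, 0, 0]      # same place, body rotated
shift = best_heading(current, snapshots)  # → 2: turn to realign with the route
```

Because the minimum is taken over all rotations and all snapshots jointly, the same comparison can in principle signal both heading (which rotation matches best) and proximity (how low the best difference is), which is what allows route following and homing to share one mechanism.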

We have recently suggested that the mushroom bodies, a prominent pair of neuropils in the insect brain, implicated in associative learning, are a plausible site for view memory (Ardin et al., 2016a), as described in Fig. 5. In computer simulations, we have shown this neural architecture can store sufficient memories (potentially hundreds) to follow extended routes through complex visual environments.
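The mushroom body scheme of Fig. 5 can be caricatured in a few lines of code. The random PN-to-KC wiring, fixed sparseness level, binary KC activity and all-or-nothing weight depression are simplifying assumptions of this sketch, not claims about the circuit:

```python
import random

def make_kc_projection(n_inputs, n_kcs, fan_in, seed=0):
    """Each Kenyon cell (KC) samples a few random input (PN) channels."""
    rng = random.Random(seed)
    return [rng.sample(range(n_inputs), fan_in) for _ in range(n_kcs)]

def kc_code(view, projection, sparseness=0.05):
    """Binary sparse code: only the most strongly driven KCs fire."""
    drive = [sum(view[i] for i in conns) for conns in projection]
    k = max(1, int(sparseness * len(projection)))
    threshold = sorted(drive, reverse=True)[k - 1]
    return [d >= threshold for d in drive]

class MushroomBodyMemory:
    """Familiarity memory: storing a view depresses the KC-to-EN weights of
    the KCs it activates, so low EN output later signals a familiar view."""
    def __init__(self, projection):
        self.projection = projection
        self.weights = [1.0] * len(projection)

    def store(self, view):
        for i, active in enumerate(kc_code(view, self.projection)):
            if active:
                self.weights[i] = 0.0

    def familiarity_signal(self, view):
        # EN activity: low when the view's active KCs have been depressed.
        return sum(w for w, active in
                   zip(self.weights, kc_code(view, self.projection)) if active)

proj = make_kc_projection(n_inputs=20, n_kcs=200, fan_in=4)
mb = MushroomBodyMemory(proj)
stored = [1.0] * 10 + [0.0] * 10  # a view experienced on the route
novel = [0.0] * 10 + [1.0] * 10   # a view never experienced
mb.store(stored)
# mb.familiarity_signal(stored) is now 0.0; the novel view gives a larger signal
```

The large, sparse KC layer is what gives the memory its capacity: distinct views recruit largely non-overlapping KC sets, so depressing the weights for one view barely affects the signal for others.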

Fig. 5.

View memory in the mushroom body (MB). Left to right: the insect experiences views when following a route, which are encoded as a low-resolution image (normalised by lateral connections, LN) in neurons (PN) that project to the MB Kenyon cells (KC). The large number and low connectivity of Kenyon cells result in a sparse code that is relatively unique to each view. For homeward routes, a dopamine (DA) reinforcement signal is released if the home vector is decreasing, such that the image is stored as a decrease in the weights of the active Kenyon cells onto an extrinsic neuron (EN). When later deciding which direction to take (upper right), the insect scans left and right, monitoring the EN activity, which will be lowest when it faces in the direction providing a view with greatest similarity to the reinforced images.

Is this base model a cognitive map?

It is conceptually helpful to make explicit the relationship between this model and a cognitive map (Gallistel, 1990); as succinctly defined in Gallistel and Cramer (1996): ‘a representation of geometric relations among a home site, terrain surrounding the home site, goals to be visited and the terrain surrounding those goals’. In the base model, there are geometric relationships between the home site and goals to be visited, owing to PI and vector memory, which operate in a common geocentric framework with the nest as the origin and the celestial compass defining the axes. There are also geometric relationships between a home site or a goal and its surrounding terrain – a view from a particular place is precisely generated by the relative geometric relationship of the viewer to the terrain around them. However, a key difference between Gallistel's cognitive map and the base model outlined above is that the second (terrain) geometry is not assumed to be embedded in the first. As noted in Collett and Graham (2004), ‘If an animal has attached path integration coordinates to an array of visually defined places it possesses, ipso facto, what is often called a cognitive map’. Menzel et al. (2011) and Collett et al. (2013) similarly highlight the embedding (or not) of view memory in vector memory as a crucial issue. As such, evidence for insects recovering PI information from views or noticing a discrepancy between their PI and view memory would appear (ipso facto) to be evidence that they have attached PI coordinates to their views, and thus effectively have a map.

Gallistel and Cramer (1996) go on to define navigation as ‘the ability to locate self and goals within the co-ordinate framework, to enable setting a course towards a goal not currently perceived by reference to terrain it can perceive’. The latter part of this (‘set a course towards a goal not currently perceived…’) is a very useful functional definition, distinguishing navigation from simple taxis or other forms of orientated behaviour. The insect base model assumes that PI, vector memory and/or the currently perceived terrain (view memory) enable the animal to set a course towards a goal not currently perceived, but shows that this is possible without any explicit step of self- or goal-localisation.

Is the base model sufficient to account for insect behaviour?

I had … during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed by my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favourable ones. Charles Darwin, In The Autobiography of Charles Darwin, edited by Nora Barlow (1958).

In the spirit of Darwin's strategy, my aim in the remainder of this Review is to examine what experimental evidence for insect navigation appears to contradict, or require some significant extension to, the base model. As such, the following review will not cover the many studies that more generally support or investigate PI, vector memory and view memory, but instead will focus on key observations that might suggest these components interact in a form that more closely resembles the type of map over which planning can be accomplished. One caveat: for some of the data to be discussed, my interpretation may differ from the authors', or the authors may have subsequently altered their interpretation; their own recent work should be consulted if clarification on their views is required.

Insects might use view memory to correct path integration

A key assumption in robotic SLAM is that the robot is able to correct cumulative error in its location estimate by simultaneously maintaining the maximum possible consistency with the landmark map, or in other words, minimising the uncertainty over both. This happens most strikingly in loop closure, when a robot recognises a location that it visited earlier on its journey (when accumulated self-motion uncertainty was lower) and is consequently able to reduce the uncertainty of the entire intervening path (Bailey and Durrant-Whyte, 2006a). It seems possible that insects might make use of the same strategy to improve the accuracy of PI by reference to familiar surroundings or even to reset PI at recognised locations (e.g. as implemented in the ‘synthetic ant’ model of Mathews et al., 2009, 2010). This use of a fix relative to the terrain to correct for cumulative PI error is explicitly suggested to be an important function of a cognitive map by Gallistel (1990).

In ants, a relatively straightforward procedure to read out the current state of the path integrator, and its accuracy, is to passively transport the ant to a novel location (to remove the influence of view memories) and observe the direction and distance it runs before commencing a search, and the subsequent spread of the search (e.g. Merkle and Wehner, 2010; Wehner and Srinivasan, 2003). This can be done, for example, after repeated route training to look for improved accuracy (Cheng et al., 2006; Narendra et al., 2007); after the ant has experienced the usual outbound visual cues shifted from their original positions to look for adjustment (Collett et al., 2003); or after direct transfer from a feeder to the nest and subsequent experience of the full range of nest-related visual and olfactory cues (Knaden and Wehner, 2006). These procedures are not without caveats. The absence of familiar cues might itself affect the travel distance (Narendra, 2007a), or when tested in a channel to block external views, visual aliasing might influence the animal to run further (Bolek and Wolf, 2015; Schwarz et al., 2012). Nevertheless, taking such results at face value, there is a general consensus (Collett and Collett, 2006; Wehner and Rössler, 2013) that ants do not correct or reset their PI system at any location other than the nest, and indeed, zeroing of the home vector requires actual entry of the nest, not just experience of these familiar visual surroundings (Knaden and Wehner, 2006). It remains possible, however, that although not used to reset PI, ants nevertheless form (and use in some other context, see below) associations between views and PI. Thus, this negative evidence does not seem sufficient alone to conclude (Collett et al., 2013) they do not have a map.

In bees, testing can be more difficult owing to the technical constraints of following the bee's flight in normal conditions, and in controlling their visual experience. As such, the majority of earlier experiments only estimated the initial heading direction taken by a bee that is assumed to be using its PI state to travel home, and/or noted whether and when it arrived there. More recently, data using radar tracking have provided more explicit information (Capaldi et al., 2000; Riley et al., 1996, 2005). Alternatively, bees can be trained to fly through a smaller controlled space (Srinivasan et al., 1996), but such tunnels provide information only about the distance component of the PI state. It has been stated as fact that ‘honeybees, Apis mellifera, employ landmarks to reduce odometric errors during their foraging flights by resetting the path integrator whenever landmarks [sic] cues appear at spots where they are expected’ (Merkle and Wehner, 2008); however, the evidence seems rather limited. Srinivasan et al. (1997) found that the width of the distribution of search in honeybees traversing tunnels increases with distance flown, but it is reduced if a prominent landmark is provided, and conclude that ‘bees recommence computation of distance when they pass a prominent landmark’. In Chittka et al. (1995), bees were trained in a relatively featureless environment to a feeder with one prominent landmark nearby (see Fig. 6A). If both landmark and feeder were moved, including a rotation relative to the sky/terrain, after relocating the feeder, the bees would depart on the PI compass bearing corresponding to the original feeder location, as though their PI had been corrected. Importantly, owing to the rotation, this direction cannot be explained as directional guidance by the landmark, although it remains possible that other subtle components of the view played a part. By contrast, in Menzel et al. (1998), bees transported from one familiar feeder to another moved in the direction of their accumulated home vector to the first site, not exhibiting any PI update.

Fig. 6.

Schematics of experimental paradigms probing the association of vectors and views. (A) Can views update PI (Chittka and Geiger, 1995)? (Ai) Bees are trained from a nest (N) to a feeder (F) and landmark (rectangle). (Aii) Both feeder and landmark are displaced and rotated. When bees find the feeder, they depart in the original home vector direction (black), not using PI (red) or landmark alignment (orange). (B) Can PI corresponding to a view be reloaded (Menzel et al., 2005)? Trained bees are displaced from a feeder, fly off their home vector, then take a novel route back to the feeder (black). It is assumed they recover the PI coordinates associated with a place during prior exploration flights, and now use the PI–food vector difference to move to food. (C) Can PI prime view memory (Bregy et al., 2008)? (Ci) Ants are trained to a nest with a prominent landmark. (Cii) In subsequent returns, they are more likely to go towards the landmark if it appears nearer to the correct PI location. (D) Are there local vectors (Legge et al., 2010)? (Di) Ants are trained to detour via a single exit from an arena. (Dii) Tested from the feeder, they follow the same ‘local’ direction relative to the sky, not landmarks or PI. (E) Is there sequence memory (Collett et al., 1993)? (Ei) Bees are trained in a maze to choose stimulus A over B, then D over C (sides are randomly varied). (Eii,iii) Given the choice between A and D, preference depends on where in the sequence the stimuli are presented.

A seemingly straightforward prediction that could be tested would be that PI corrected by landmarks should be more accurate given richer or more distinctive visual cues. However, if speed estimation is dependent on visual flow (Srinivasan, 2014), then distance estimation itself might be more accurate in these circumstances, and stable landmarks in the environment might act as additional compass cues or allow angular velocity to be more precisely calculated, producing more accurate PI. It is relevant to note that some versions of SLAM essentially function this way, using current landmarks to improve the immediate estimate of self-motion but not storing this information in a map (Bailey and Durrant-Whyte, 2006b).

If the path integration state is zero, insects might use view memory to reload a previous path integration state

Insects might not use views to continuously correct PI, but may nevertheless have the capacity to do so in certain circumstances: specifically, when they have followed their home vector, leaving PI at zero, but do not find themselves at home. If the insect then experiences visual surroundings that were previously experienced in a certain PI state, it might be advantageous to reload that state and use it to find home. In Menzel et al.’s (1998) experiment, bees transported from the nest to one of the two feeders would take the appropriate PI direction home from each, and the behaviour is explained as ‘[bees] attach home vectors to the particular visual scene at the feeding site’. If this was a general capability (i.e. all previously experienced locations or views could evoke the relevant vector coordinates), then the insect's representation of space would be equivalent to robotic SLAM. Alternatively, perhaps only locations where a vector memory has been stored, e.g. food sites, could evoke the relevant vector memory. This would still be a strong step towards a cognitive map as it embeds familiar views in the vector co-ordinate system.
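The hypothesised reload mechanism amounts to a lookup from views to stored PI coordinates, plus simple vector arithmetic to recover a route to another goal. The sketch below is mine: views are plain tuples, matching is Euclidean distance, and the `threshold` parameter is an illustrative stand-in for whatever recognition criterion a real insect might use.

```python
def reload_home_vector(current_view, view_to_pi, threshold=1.0):
    """If the current view matches a stored view closely enough, return
    the home vector (PI state) previously associated with it; else None."""
    best, best_d = None, threshold
    for view, home_vec in view_to_pi.items():
        d = sum((a - b) ** 2 for a, b in zip(view, current_view)) ** 0.5
        if d < best_d:
            best, best_d = home_vec, d
    return best

def vector_to_feeder(home_vec, nest_to_feeder):
    """With the nest at the origin, current position = -home_vec, so the
    vector from here to the feeder is nest_to_feeder + home_vec."""
    return (nest_to_feeder[0] + home_vec[0], nest_to_feeder[1] + home_vec[1])
```

The second function is the ‘PI–food vector difference’ computation assumed in the interpretation of Fig. 6B: a reloaded home vector combined with a stored nest–feeder vector yields a direct heading to the feeder from a location never visited on a learned route.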

Note that here I am not discussing the possibility that insects might recover a ‘local vector’ that provides some immediate guidance along a familiar path (see below), but rather the possibility that under certain circumstances they actually reset their main PI system to a non-zero home vector. As such, one form of counterevidence is provided by experiments in which a zero-vector insect is observed or induced to move off in some direction, but is then able to return to the location from which this movement commenced, indicating that its main PI has in fact been running as though starting from zero; this has been seen in ants (Knaden and Wehner, 2005; Wehner et al., 1996) and bees (Riley et al., 2005). Andel and Wehner (2004) show that an ant displaced from the nest will follow familiar views to get home while accumulating a home vector in the opposite direction, and will express the vector when moved to an unfamiliar location.

A homewards direction taken by a zero-vector insect from a familiar location is not sufficient evidence it has reloaded a PI state [see e.g. Collett (1996) for early discussion of this point] as the direction could be explained by alignment to a homeward view stored at this location (Fig. 2D). Indeed, as remarked by Cheung et al. (2014) in their critique of Cheeseman et al. (2014), any explanation that suggests the bee can use view memory of terrain features to recognise ‘where it is’ and thus reload a vector makes it difficult to rule out that the view itself (without any reference to vectors) accounts for directional guidance.

Thus, strong evidence for view-initiated reloading of a PI state requires that the animal in a familiar place moves off in a direction that is consistent with having reloaded a PI state but inconsistent with view alignment. A potential example was observed in the experiment described in Menzel et al. (2005). Bees were trained to a feeder location, caught either when leaving the nest or when leaving the feeder, and passively transported to a location well out of view of either nest or feeder, but possibly within their previous learning flight experience. The bees moved from the feeder were observed to first fly along their home vector (so their PI should be near zero), then perform a search, but at some point change to taking a directed path, with some flying home but others flying rather directly towards the feeder (Fig. 6B). It would seem a feeder-oriented flight could only be obtained by combination of a reloaded home vector (the PI state for the current view, acquired during a learning flight) and a revived intention to follow the nest–feeder vector memory, as there is no reason bees would have ever learned a visual route from this location to the feeder. It has been suggested that instead the bees' direction might be a compromise between a view-driven homeward direction and a vector-memory-driven feeder direction (Cruse and Wehner, 2011), but this requires simultaneous activation of memories from conflicting motivational states, which should not be possible within the base model. However, it is surprising that bees taken on exiting the nest (hence with zero vectors and foodward motivation) were not observed to take shortcuts to feeder locations [see also similar reports in Menzel et al. (1996) of zero-vector bees failing to take feeder-directed shortcuts].

Another line of evidence for views evoking vector information comes from observations of bees foraging under overcast conditions and producing dances that indicate they have used a memory of the view's relationship to the (no longer observable) sky compass to estimate the feeder's orientation to the nest (Dyer and Gould, 1981). A follow-up experiment by Towne and Moscrip (2008) ensured that bees only discovered the feeder direction after a rotation of landmarks and with the sky overcast, and found that the bees' dances were consistent with the previous orientation of sky to landmarks. Although these results do not address distance information, and hence fall short of establishing that views are associated with PI coordinates, they nevertheless suggest there is some embedding of view memories within the axes of the vector system, or vice versa.

The state of the home vector may ‘prime’ the recall of specific views

An alternative line of evidence for vector–view associations would be if the PI state could prime recovery of the memory of a corresponding view, and thus alter the likelihood that the animal will be influenced by it. For example, Wehner et al. (1996) note that ants are more likely to search relative to visual landmarks they have experienced around their nest the closer their PI system tells them they are to the nest [see also Fig. 6C, Bregy et al. (2008), and for olfactory nest cues, Bühlmann et al. (2012)]. However, PI-directional information becomes progressively less reliable as the home vector becomes shorter, and therefore views may gain a stronger influence if the ant is combining the two sources of information (Legge et al., 2014; Wystrach et al., 2015). More generally, behaving differently to a familiar view when in a conflicting PI state (e.g. zero-vector ants producing more scans along a familiar route; Wystrach et al., 2014) may be explained in terms of conflict or compromise at the output stages of behavioural control (again see Hoinville and Wehner, 2018). Nevertheless, the possibility of some such association, to disambiguate or prevent interference between view memories, is still often proposed (e.g. Freas et al., 2017).

One line of evidence comes from examples where insects with a home vector are released in unfamiliar surroundings. In Narendra (2007b), it is observed that ants in unfamiliar terrain will only follow around 50% of their home vector length before commencing a search. More specifically, this appears to be habitat dependent, with ants that normally forage in more cluttered terrain less likely to complete their home vector under these conditions, whereas those used to more open environments are more likely to run the entire PI length (Bühlmann et al., 2011; Cheng et al., 2012). These results would appear to indicate that ants have at least some expectation of the views they will experience after repeated journeys along a route, if not an expectation directly linked to the PI state. A model that weights cues by certainty would predict that a lack of familiar views should relatively strengthen, not weaken, the influence of PI.
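Certainty-weighted combination of directional cues, in the spirit of summation models such as Hoinville and Wehner (2018), can be expressed as a sum of unit vectors scaled by weights; the specific weighting scheme below is illustrative only, but it captures why a short home vector (low PI certainty) lets familiar views dominate, and why absent views should, on this account, strengthen rather than weaken PI.

```python
import math

def combine_headings(cues):
    """Combine directional cues by vector summation. Each cue is a
    (heading_in_radians, weight) pair, where weight stands in for
    certainty (e.g. PI weight might scale with home-vector length)."""
    x = sum(w * math.cos(h) for h, w in cues)
    y = sum(w * math.sin(h) for h, w in cues)
    return math.atan2(y, x)
```

With equal weights the result bisects two conflicting cues; as one weight shrinks, the combined heading converges on the other cue, which is the qualitative pattern seen in the cue-conflict experiments cited above.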

Bees can apparently be trained to make different visual choices depending on their PI state (Menzel et al., 1996). In Srinivasan et al. (1999), bees were trained to use the left opening to a feeder when it was at a short distance from a tunnel entrance, and the right opening when it was at a longer distance; each opening was also visually distinguished by diagonal stripes at a particular orientation. As well as being able to make the correct choice at each distance, when tested at locations between these distances, the choice frequency was linearly related to the distance, e.g. with equal preference for each side when the feeder was halfway between the training locations. If optic flow cues were eliminated, making distances more difficult to determine, the ability to distinguish the near and far choice conditions was reduced.

Priming of views by vectors would be very convincingly shown if bees used dance information – which is in the form of a vector – to index into their view memories. Such a result was claimed for bees trained to a feeder located in a boat that was gradually moved away from the shore. Recruit bees were reported to make no attempt to follow the vector indicated by dancing foragers to the ‘impossible’ food location in the middle of the lake (Gould, 1990). However, a more recent replication reports that recruit bees will indeed follow the dance and leave the nest in search of food (Wray et al., 2008). Consistent with this, Menzel et al. (2011) (see also Riley et al., 2005) noted that recruit bees displaced when leaving the nest have not, as yet, been observed to move directly towards the correct feeder location (as might be thought possible if the vector from the dance is treated as indicating a location in their map), but rather will fly along the vector direction and distance indicated by the dance, regardless of the familiar visual terrain.

View memories can have associated local vectors

Until recently, a widespread assumption in insect navigation research was that insects combine their ‘global’ PI vector guidance with a set of ‘local’ vectors, in which a particular salient location [e.g. a feeder but also potentially a path junction or a ‘panoramic and behavioural context’ (Collett and Collett, 2009)] evokes the direction and distance to be travelled for the next segment of its route, using the celestial compass and odometry, but independent of the PI state (Collett and Collett, 2015). However, recognising that simple view alignment might explain much of the earlier ‘local vector’ evidence, recent reviews (e.g. Collett et al., 2013) and models (e.g. Hoinville and Wehner, 2018) do not include local vectors.
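A local vector, as hypothesised here, is just a stored (bearing, distance) pair executed with the celestial compass, independently of the main PI state. The toy representation below is my own; the comment sketches how a route might chain such segments, each released by recognition of a trigger view.

```python
import math

def run_local_vector(pos, compass_bearing, distance):
    """Execute a 'local vector': travel a stored distance along a stored
    celestial-compass bearing, without consulting or altering the main
    PI state. pos is an (x, y) tuple; returns the new position."""
    x, y = pos
    return (x + distance * math.cos(compass_bearing),
            y + distance * math.sin(compass_bearing))

# A route could then be a list of (trigger_view, bearing, distance)
# triples, each segment released when its trigger view is recognised.
```

Note that nothing in this scheme requires the segments to be consistent with the global home vector, which is exactly what makes local vectors experimentally distinguishable from PI.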

For example, potentially striking evidence for a local vector is given by the training procedure from Collett et al. (1998), where ants were required to travel 8 m north across open ground to the entrance of a buried channel, then make a right-angle turn and travel 8 m along the channel to reach the feeder. The base model vector memory mechanism would predict that ants in this situation should attempt to take the direct (diagonal) route to the feeder, as this would reflect the PI state stored at the feeder. The alternative assumption is that gradual training established an indirect outwards route guided by view memory alone. The ants were tested in a new location using a return channel of different length and/or orientation. Local vectors were evidenced by zero-vector ants tending to take a southbound direction on exiting the channel. Direct guidance by the view was discounted on the basis that the ants were in a novel location, and the buried channel was not visible once they had exited, although it was assumed rather than fully established that there were no usable panoramic cues. In Collett and Collett (2009), manipulation of distant visual cues, thought to be negligible, affected execution of local vectors.

Clearly, directional evidence for a local vector would be most compelling if the direction taken is with respect to the celestial direction where this is actually competing with the view direction, or if, for some other reason, direct guidance by the view can be definitively discounted. Some evidence for this is also provided in Collett et al. (1998), for ants that were trained on a path through a corridor of cylinders. When the corridor was subsequently rotated with respect to the sky, ants would start along the corridor, but, either at the exit or earlier, would deviate to follow the original compass direction. Legge et al. (2010) trained ants to take an initial detour vector direction from a feeder towards the exit of a surrounding arena, before turning towards the nest. Zero-vector ants placed back at the feeder took the same initial path, even when conspicuous visual cues around the exit were moved to different orientations (Fig. 6D). Ants were also observed to depart most often in the correct direction from a symmetric landmark array (Bisch-Knaden and Wehner, 2003). The local vector direction taken by ants in Bisch-Knaden and Wehner (2001) appears not to depend on either the sky or the view, but to be relative to the previous direction taken by the ant, as it emerged from behind a barrier that had been rotated relative to the sky, and was no longer visible. This suggests some form of motor memory might be an additional component of the navigation system, particularly on well-practised routes.

Alternative evidence of a local vector may come from showing its control over distance. Srinivasan et al. (1997) observed that bees produced a broader search pattern for a feeder location in a tunnel if a landmark along the route, present in training, was removed (note an alternative interpretation is that this provides evidence for views being used to correct PI). Collett et al. (2002) found that bees' search for the feeder was relative to the latest passed landmark rather than distance travelled from the nest. However, in Collett and Collett (2009), they note that in both these experiments, the landmark taken to trigger a local vector that guides the bee to the feeder may have been visible to the bee in the feeder location, and hence formed part of a snapshot memory of the feeder location.

Knaden et al. (2006) trained ants in channels with U-turns, such that an assumed local vector from the food to the junction would overshoot the location of the nest provided by PI. Ants subsequently tested in a new straight test channel ran the first segment of their route and appeared to search for the U-turn location, most strikingly, taking the direction opposite to that indicated by PI in one scenario. A recent experiment reported by Fernandes et al. (2015) also trained ants in channels to feeders at different distances and directions. Ants subsequently taken from the nest and given food behaved (in a test channel) as though following a recalled vector from the feeder to the nest, by travelling in the corresponding direction and for the approximate distance. Although interpreted in terms of local vectors, this result might alternatively be seen as support for the reloading of a PI state (see above) triggered by encountering food.

Views could be linked in topological sequences

A surprisingly strong claim from the base model's familiarity mechanism for view memory (Ardin et al., 2016a; Baddeley et al., 2011, 2012) is that no location, orientation or sequence information is needed. This stands in contrast, for example, to the assumption in Zhang et al. (1999, p. 180): ‘If an insect is to make use of such stored images for navigation, it must know, at each stage of its journey, which image to expect next’. If insects store ordered links between individually identifiable views, this introduces topological information that could form the basis of flexible planning between arbitrary locations. If the linkage is by local vectors (see previous section) that have allothetic direction and distance information, this starts to resemble a centralised map. Topological linkage is often treated as a weaker assumption than a geometric map, e.g. in Menzel and Mueller (1996): ‘one can get from A to B via C and D (using instructions such as ‘look for C after B’ or by passing a sequence of similar landmarks), without having access to the complete arrangement of these positions [a map] at any point in the journey’. But for insects it might be more parsimonious to assume they can learn vector–view associations (forming a geometric map) rather than topological connections.
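The familiarity-only claim can be made concrete: guidance needs nothing beyond an unordered bag of stored views and a scan over candidate headings. The sketch below follows the spirit of Baddeley et al. (2011) but all names are illustrative; `view_facing(h)` is a hypothetical function returning the view (a tuple) seen when facing heading `h`.

```python
def most_familiar_heading(view_facing, memories, candidate_headings):
    """Familiarity-only guidance: no location, orientation or sequence
    labels are stored with the views. Scan candidate headings, score the
    view seen in each by its best match to ANY stored view, and steer
    toward the most familiar."""
    def familiarity(view):
        # Higher is more familiar: negative squared distance to the
        # nearest memory, regardless of which memory or where it was made.
        return -min(sum((a - b) ** 2 for a, b in zip(view, m))
                    for m in memories)
    return max(candidate_headings, key=lambda h: familiarity(view_facing(h)))
```

The point of the sketch is what is absent: there is no index recording which view follows which, so any evidence of sequence effects goes beyond this mechanism.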

There is some evidence for sequence memory in insects. Bees have been shown able to execute a sequence of motor memories to navigate a maze [honeybees (Zhang et al., 2000); bumblebees (Mirwan and Kevan, 2015)], and may even persist in this sequence of manoeuvres when barriers are removed (Collett et al., 1993). Landmark counting – where bees trained on a course with several similar landmarks on the way to the goal subsequently search after passing the usual number of landmarks, which have been moved either closer together or further apart (Chittka and Geiger, 1995; Dacke and Srinivasan, 2008a; but see also Menzel et al., 2010) – might also be considered a form of sequence memory. Stronger evidence comes from preference choices (Fig. 6E), which can be altered by where they appear in a sequence relative to training (Chameron et al., 1998; Collett et al., 1993; Schwarz et al., 2012), or by a preceding, no longer visible cue (Beugnon and Macquart, 2016; Zhang et al., 1999). Judd and Collett (1998) present evidence for wood ants storing several retinotopic template memories when learning the approach to a feeder, selecting which one to match according to how far they are along their journey, but the behaviour can be modelled without assuming a sequence (Wystrach et al., 2013). Riabinina et al. (2011), in extensive well-controlled tests, were unable to establish a sequence-priming effect in wood ants.

Evidence against sequence memory being important for ants is provided by their ability to join a route when crossing it in an arbitrary place (Kohler and Wehner, 2005; Mangan and Webb, 2012). Wystrach et al. (2011) found that interchanging the positions of landmarks along a familiar route produced increased turns and meanders by ants, but note that this could be explained by changes in the panoramic views formed by the combination of the landmarks and the surrounding environment. In contrast, as discussed in Graham and Mangan (2015), ants' behaviour with respect to views cannot be completely explained by their immediate influence. For example, ants that have already followed a visual route home but are immediately displaced back to the start of their route will behave differently to ants in the same PI state that have not just traversed the route (Collett, 2014), suggesting that recent experience of a view alters future behaviour towards it.

Conclusions

In the first part of this Review, I have attempted to explain insect navigation in terms of mechanisms that: (i) are described with sufficient explicitness to implement as computer programmes; (ii) have been tested for function in simulations and on robots; (iii) can be plausibly mapped at a detailed level to identified neural circuits; and (iv) are the fewest that might potentially be sufficient to account for the rich navigational capabilities observed in insects. In this base model, insects use accurate path integration coupled with vector memories in a fixed geocentric coordinate system, backed up by use of simple (geometric) view matching to memory to maintain headings along routes or move in the goal direction. Notably, they are not assumed to use topological information. Topological maps are usually regarded as simpler than metric maps, but in fact fulfil a very specific need in human and robot (and possibly some animal) navigation to allow flexible planning of alternative routes through previously explored spaces. Insects may have evolved completely different solutions to the problem of travelling between different goals (see Box 1).

Box 1. Comparing insect and robot navigation

It is useful to compare the insect base model presented here with some key issues in robot navigation, as articulated, for example, in Milford and Schulz (2014).

Error accumulation means pure odometry is not viable for any interesting travel range or task

For insects, it is plausible that their odometry is sufficiently accurate in normal foraging conditions that they can depend on it to get near enough to their goal for local mechanisms (e.g. visual memory or an olfactory plume cue) to guide the final approach.

Odometry needs to be corrected by recognition of landmarks

This is the core principle of simultaneous localisation and mapping (SLAM), that simultaneous updating of the robot's own pose relative to landmarks and the geometric layout of the landmarks will converge to an accurate map. The base model assumes that odometry is only reset when home is visited, and arriving at a familiar place is not used to reduce the accumulated error, as no vector information is stored with a view.

How to encode large environments?

The suggested answer is that the insect encodes it as a set of vectors with a common origin at the nest, and that it only ever has at most one vector memory active [along with path integration (PI)] to determine its current movement, although it might switch between vectors without returning to the nest. The effective extent of the environment is thus bounded mostly by PI accuracy.

How are visual locations/landmarks recognised from different viewpoints?

Visual locations are not recognised, but only capable of evoking a stronger or weaker sense of familiarity. Moreover, there is no viewpoint invariance; in fact, the whole principle of the function of the view memory guidance system as proposed is to have the animal experience familiarity only when it adopts the same viewpoint, thus informing it that it is now facing the goal.

If insects can use geometric memories to guide navigation, in what sense do they lack a map? The base model assumes that PI, vector locations and view memories simply provide weighted inputs to a common steering output. In the second part of this Review, I discuss experimental evidence that might suggest they are more explicitly associated. Insects have sometimes been observed to move in a direction that is consistent with resetting or reloading their PI state based on a view (or feeder experience), and which cannot be explained by alignment with the view (Chittka and Geiger, 1995; Menzel et al., 2005). ‘Local vectors’ in a particular direction (Knaden et al., 2006; Legge et al., 2010) or of a particular length (Fernandes et al., 2015) may be triggered by experience of a feeder or food, in the absence of, or in conflict with, visual cues. Bees may be able to use views or landmark layouts to estimate (and dance according to) celestial vector information when this is not directly available (Dyer and Gould, 1981; Towne and Moscrip, 2008). Choices between visual cues may be influenced by the PI state (Srinivasan et al., 1999), and inconsistency between views and the PI state may influence following of a home vector (Narendra, 2007a). Finally, some sequence effects, implying topological representation of routes, can be observed in visual choices (Chameron et al., 1998; Collett et al., 1993).

However, it should be emphasised that many more studies aimed explicitly at testing the hypothesis of shared information between vector and view memory have failed to find supporting evidence. As such, it remains more parsimonious to assume (contra Gallistel, 1990) that insect view memories are not embedded in their vector map.

Nevertheless, the interaction between vectors and views might be richer than the simple averaging of outputs assumed in the base model. For example, recent experiments on ants following a route while walking backward (Ardin et al., 2016b; Schwarz et al., 2017) suggest that insects may translate the outcome of view alignment into a short-term directional setting with reference to the celestial compass, which can be maintained when no longer facing the view. Alternatively, insects may still be capable of assessing the familiarity of views despite facing a different direction, through mental rotation or rotation-invariant processing of the view; or they might index their view memories with directional information from their compass. This might help to account for the observation that insects seem able to use views for guidance without the extensive scanning that forms the basis of some current algorithms.

Many other questions remain to be addressed (see Box 2), particularly to account for the remarkable robustness of insect navigation in complex and variable conditions. We should mind the gap, equally relevant to robotics, between theoretical mechanisms that fully account for function (such as SLAM) and the problems of making these solutions work in the real world. A continued effort to translate hypothesised insect navigational mechanisms to robots should help to evaluate their necessity and sufficiency under natural environmental constraints.


Box 2. Some key open questions for insect navigation
  • How do insects deal with 3D motion and the disturbances to both celestial and terrestrial views caused by pitch and roll of their heads?

  • How do insects obtain sufficiently accurate speed information for PI from the potentially very noisy inputs of optic flow and step-counting?

  • How do view memories remain robust under changing light conditions?

  • How do insects manage to steer a course along a vector direction while facing a different direction, e.g. ants dragging food backward, or bees side-slipping in flight?

  • Do learning walks and flights have structure consistent with the assumed function (in the base model) of acquiring views from multiple directions towards the nest, and might they serve some additional function such as rehearsing return paths?

  • What is the physiological basis of the reliable integration memory needed for PI, and the one-shot learning needed for vector and view memories?

  • What is the physiological basis of the interaction of views and vectors, in particular their weighted combination in behaviour?

  • Are units in the central complex directly analogous to mammalian head direction cells (Taube, 1998)? Is it possible that view memories resemble place cells (O'Keefe, 1979)? Can we find a connection between the PI mechanisms of insects and the grid cells found in mammals (Moser et al., 2008; see Gaussier et al., 2019)?
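Several of these questions concern the accumulation step of PI itself. A minimal sketch of how multiplicative noise on speed estimates (as might arise from optic flow or step counting) propagates into the home vector; all parameters here are chosen for illustration only:

```python
import math
import random

def path_integrate(steps, speed_noise=0.1, rng=random.Random(0)):
    """Accumulate a home vector from (heading, speed) samples.

    Headings are assumed to come from a reliable compass; each speed
    sample is corrupted by multiplicative Gaussian noise, so the
    accumulated position estimate degrades with distance travelled.
    """
    x = y = 0.0
    for heading, speed in steps:
        noisy = speed * (1.0 + rng.gauss(0.0, speed_noise))
        x += noisy * math.cos(heading)
        y += noisy * math.sin(heading)
    home_dir = math.atan2(-y, -x)   # direction to face to head home
    home_dist = math.hypot(x, y)    # distance left to travel
    return home_dir, home_dist

# A square outward path of four 10-unit legs: perfect odometry would
# give home_dist == 0; speed noise leaves a small residual error.
out = [(0.0, 10), (math.pi / 2, 10), (math.pi, 10), (3 * math.pi / 2, 10)]
direction, distance = path_integrate(out)
```

The residual error after a closed loop gives a feel for why accurate speed estimation, and a reliable integration memory, are non-trivial requirements for the PI mechanism.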

Acknowledgements

Thanks to participants in the JEB symposium and to the Insect Robotics lab for discussion of the topics covered in this paper, and to the reviewers for their helpful comments.

Footnotes

Funding

This research was supported by EPSRC grant ‘Exploiting invisible cues’ (EP/M008479/1), BBSRC grant ‘Visual navigation in ants’ (BB/R005052/1), and grants EP/F500385/1 and BB/F529254/1 for the Doctoral Training Centre in Neuroinformatics and Computational Neuroscience.

References

Andel, D. and Wehner, R. (2004). Path integration in desert ants, Cataglyphis: how to make a homing ant run away from home. Proc. R. Soc. B Biol. Sci. 271, 1485-1489.
Ardin, P., Peng, F., Mangan, M., Lagogiannis, K. and Webb, B. (2016a). Using an insect mushroom body circuit to encode route memory in complex natural environments. PLoS Comput. Biol. 12, e1004683.
Ardin, P. B., Mangan, M. and Webb, B. (2016b). Ant homing ability is not diminished when traveling backwards. Front. Behav. Neurosci. 10, 69.
Arena, P., Maceo, S., Patane, L. and Strauss, R. (2013). A spiking network for spatial memory formation: towards a fly-inspired ellipsoid body model. In The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1-6. IEEE.
Baddeley, B., Graham, P., Philippides, A. and Husbands, P. (2011). Holistic visual encoding of ant-like routes: navigation without waypoints. Adapt. Behav. 19, 3-15.
Baddeley, B., Graham, P., Husbands, P. and Philippides, A. (2012). A model of ant route navigation driven by scene familiarity. PLoS Comput. Biol. 8, e1002336.
Badino, H., Huber, D. and Kanade, T. (2012). Real-time topometric localization. In 2012 IEEE International Conference on Robotics and Automation, pp. 1635-1642. IEEE.
Bailey, T. and Durrant-Whyte, H. (2006a). Simultaneous localisation and mapping (SLAM): Part I. The essential algorithms. IEEE Robot. Autom. Mag. 13, 99-108.
Bailey, T. and Durrant-Whyte, H. (2006b). Simultaneous localisation and mapping (SLAM): Part II. State of the art. IEEE Robot. Autom. Mag. 13, 1-10.
Barlow, N. (ed.) (1958). The Autobiography of Charles Darwin 1809-1882. With the Original Omissions Restored. Edited and with appendix and notes by his grand-daughter Nora Barlow. London: Collins.
Beugnon, G. and Macquart, D. (2016). Sequential learning of relative size by the Neotropical ant Gigantiops destructor. J. Comp. Physiol. A 202, 287-296.
Bisch-Knaden, S. and Wehner, R. (2001). Egocentric information helps desert ants to navigate around familiar obstacles. J. Exp. Biol. 204, 4177-4184.
Bisch-Knaden, S. and Wehner, R. (2003). Local vectors in desert ants: context-dependent landmark learning during outbound and homebound runs. J. Comp. Physiol. 189, 181-187.
Boal, J., Sánchez-Miralles, Á. and Arranz, Á. (2014). Topological simultaneous localization and mapping: a survey. Robotica 32, 803-821.
Bolek, S. and Wolf, H. (2015). Food searches and guiding structures in North African desert ants, Cataglyphis. J. Comp. Physiol. A 201, 631-644.
Bregy, P., Sommer, S. and Wehner, R. (2008). Nest-mark orientation versus vector navigation in desert ants. J. Exp. Biol. 211, 1868.
Buehlmann, C., Graham, P., Hansson, B. S. and Knaden, M. (2015). Desert ants use olfactory scenes for navigation. Anim. Behav. 106, 99-105.
Bühlmann, C., Cheng, K. and Wehner, R. (2011). Vector-based and landmark-guided navigation in desert ants inhabiting landmark-free and landmark-rich environments. J. Exp. Biol. 214, 2845-2853.
Bühlmann, C., Hansson, B. S. and Knaden, M. (2012). Path integration controls nest-plume following in desert ants. Curr. Biol. 22, 645-649.
Capaldi, E. A., Smith, A. D., Osborne, J. L., Fahrbach, S. E., Farris, S. M., Reynolds, D. R., Edwards, A. S., Martin, A., Robinson, G. E., Poppy, G. M., et al. (2000). Ontogeny of orientation flight in the honeybee revealed by harmonic radar. Nature 403, 537-540.
Cartwright, B. A. and Collett, T. S. (1983). Landmark learning in bees. J. Comp. Physiol. A 151, 521-543.
Chameron, S., Schatz, B., Pastergue-Ruiz, I., Beugnon, G. and Collett, T. S. (1998). The learning of a sequence of visual patterns by the ant Cataglyphis cursor. Proc. R. Soc. B Biol. Sci. 265, 2309-2313.
Cheeseman, J. F., Millar, C. D., Greggers, U., Lehmann, K., Pawley, M. D. M., Gallistel, C. R., Warman, G. R. and Menzel, R. (2014). Way-finding in displaced clock-shifted bees proves bees use a cognitive map. Proc. Natl. Acad. Sci. USA 111, 8949-8954.
Cheng, K., Narendra, A. and Wehner, R. (2006). Behavioral ecology of odometric memories in desert ants: acquisition, retention, and integration. Behav. Ecol. 17, 227-235.
Cheng, K., Middleton, E. J. T. and Wehner, R. (2012). Vector-based and landmark-guided navigation in desert ants of the same species inhabiting landmark-free and landmark-rich environments. J. Exp. Biol. 215, 3169-3174.
Cheung, A., Zhang, S., Stricker, C. and Srinivasan, M. V. (2007). Animal navigation: the difficulty of moving in a straight line. Biol. Cybern. 97, 47-61.
Cheung, A., Collett, M., Collett, T. S., Dewar, A., Dyer, F., Graham, P., Mangan, M., Narendra, A., Philippides, A., Stürzl, W., et al. (2014). Still no convincing evidence for cognitive map use by honeybees. Proc. Natl. Acad. Sci. USA 111, E4396-E4397.
Chittka, L. and Geiger, K. (1995). Honeybee long-distance orientation in a controlled environment. Ethology 99, 117-126.
Chittka, L., Kunze, J., Shipman, C. and Buchmann, S. L. (1995). The significance of landmarks for path integration in homing honeybee foragers. Naturwissenschaften 82, 341-343.
Chu, J., Zhao, K., Zhang, Q. and Wang, T. (2008). Construction and performance test of a novel polarization sensor for navigation. Sensors Actuators A Phys. 148, 75-82.
Collett, T. S. (1996). Insect navigation en route to the goal: multiple strategies for the use of landmarks. J. Exp. Biol. 199, 227-235.
Collett, M. (2014). A desert ant's memory of recent visual experience and the control of route guidance. Proc. Biol. Sci. 281, 20140634.
Collett, T. S. and Baron, J. (1994). Biological compasses and the coordinate frame of landmark memories in honeybees. Nature 368, 137-140.
Collett, M. and Collett, T. S. (2000). How do insects use path integration for their navigation? Biol. Cybern. 83, 245-259.
Collett, M. and Collett, T. S. (2006). Insect navigation: no map at the end of the trail? Curr. Biol. 16, R48-R51.
Collett, M. and Collett, T. S. (2009). The learning and maintenance of local vectors in desert ant navigation. J. Exp. Biol. 212, 895-900.
Collett, T. S. and Collett, M. (2015). Route-segment odometry and its interactions with global path-integration. J. Comp. Physiol. A 201, 617-630.
Collett, T. S. and Graham, P. (2004). Animal navigation: path integration, visual landmarks and cognitive maps. Curr. Biol. 14, R475-R477.
Collett, T. S., Fry, S. N. and Wehner, R. (1993). Sequence learning by honeybees. J. Comp. Physiol. A 172, 693-706.
Collett, M., Collett, T. S., Bisch, S. and Wehner, R. (1998). Local and global vectors in desert ant navigation. Nature 394, 269-272.
Collett, M., Collett, T. S. and Wehner, R. (1999). Calibration of vector navigation in desert ants. Curr. Biol. 9, 1031-1034.
Collett, M., Harland, D. and Collett, T. S. (2002). Use of landmarks and panoramic context by navigating honeybees. J. Exp. Biol. 205, 807-814.
Collett, M., Collett, T. S., Chameron, S. and Wehner, R. (2003). Do familiar landmarks reset the global path integration system of desert ants? J. Exp. Biol. 206, 877-882.
Collett, M., Chittka, L. and Collett, T. S. (2013). Spatial memory in insect navigation. Curr. Biol. 23, R789-R800.
Cruse, H. and Wehner, R. (2011). No need for a cognitive map: decentralized memory for insect navigation. PLoS Comput. Biol. 7, e1002009.
Cummins, M. and Newman, P. (2008). FAB-MAP: probabilistic localization and mapping in the space of appearance. Int. J. Rob. Res. 27, 647-665.
Dacke, M. and Srinivasan, M. V. (2008a). Evidence for counting in insects. Anim. Cogn. 11, 683-689.
Dacke, M. and Srinivasan, M. V. (2008b). Two odometers in honeybees? J. Exp. Biol. 211, 3281-3286.
Davison, A. J., Reid, I. D., Molton, N. D. and Stasse, O. (2007). MonoSLAM: real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29, 1052-1067.
Dewar, A. D. M., Philippides, A. and Graham, P. (2014). What is the relationship between visual environment and the form of ant learning-walks? An in silico investigation of insect navigation. Adapt. Behav. 22, 163-179.
Dittmar, L., Stürzl, W., Baird, E., Boeddeker, N. and Egelhaaf, M. (2010). Goal seeking in honeybees: matching of optic flow snapshots? J. Exp. Biol. 213, 2913-2923.
Dyer, F. C. and Gould, J. L. (1981). Honey bee orientation: a backup system for cloudy days. Science 214, 1041-1042.
Engel, J., Schöps, T. and Cremers, D. (2014). LSD-SLAM: Large-Scale Direct Monocular SLAM, pp. 834-849. Cham: Springer.
Fernandes, A. S. D., Philippides, A., Collett, T. S. and Niven, J. E. (2015). Acquisition and expression of memories of distance and direction in navigating wood ants. J. Exp. Biol. 218, 3580-3588.
Foo, P., Warren, W. H., Duchon, A. and Tarr, M. J. (2005). Do humans integrate routes into a cognitive map? Map- versus landmark-based navigation of novel shortcuts. J. Exp. Psychol. Learn. Mem. Cogn. 31, 195-215.
Franz, M., Schoelkopf, B., Mallot, H. and Buelthoff, H. (1998). Where did I take that snapshot? Scene-based homing by image matching. Biol. Cybern. 79, 191-202.
Freas, C. A., Whyte, C. and Cheng, K. (2017). Skyline retention and retroactive interference in the navigating Australian desert ant, Melophorus bagoti. J. Comp. Physiol. A 203, 353-367.
Gallistel, C. R. (1990). The Organization of Learning. Cambridge, MA: MIT Press.
Gallistel, C. R. and Cramer, A. E. (1996). Computations on metric maps in mammals: getting oriented and choosing a multi-destination route. J. Exp. Biol. 199, 211-217.
Gaussier, P., Banquet, J. P., Cuperlier, N., Quoy, M., Aubin, L., Jacob, P.-Y., Sargolini, F., Save, E., Krichmar, J. L. and Poucet, B. (2019). Merging information in the entorhinal cortex: what can we learn from robotics experiments and modeling? J. Exp. Biol. 222, jeb186932.
Goldschmidt, D., Manoonpong, P. and Dasgupta, S. (2017). A neurocomputational model of goal-directed navigation in insect-inspired artificial agents. Front. Neurorobot. 11, 20.
Gould, J. L. (1990). Honey bee cognition. Cognition 37, 83-103.
Graham, P. and Mangan, M. (2015). Insect navigation: do ants live in the now? J. Exp. Biol. 218, 819-823.
Graham, P., Fauria, K. and Collett, T. S. (2003). The influence of beacon-aiming on the routes of wood ants. J. Exp. Biol. 206, 535-541.
Graham, P., Philippides, A. A. and Baddeley, B. (2010). Animal cognition: multi-modal interactions in ant learning. Curr. Biol. 20, R639-R640.
Haferlach, T., Wessnitzer, J., Mangan, M. and Webb, B. (2007). Evolving a neural model of insect path integration. Adapt. Behav. 15, 273-287.
Hafner, V. V. (2001). Adaptive homing-robotic exploration tours. Adapt. Behav. 9, 131-141.
Hartmann, G. and Wehner, R. (1995). The ant's path integration system: a neural architecture. Biol. Cybern. 73, 483-497.
Heinze, S., Florman, J., Asokaraj, S., El Jundi, B. and Reppert, S. M. (2013). Anatomical basis of sun compass navigation II: the neuronal composition of the central complex of the monarch butterfly. J. Comp. Neurol. 521, 267-298.
Honkanen, A., Adden, A., da Silva Freitas, J. and Heinze, S. (2019). The insect central complex and the neural basis of navigational strategies. J. Exp. Biol. 222, jeb188854.
Hoinville, T. and Wehner, R. (2018). Optimal multiguidance integration in insect navigation. Proc. Natl. Acad. Sci. USA 115, 2824-2829.
Homberg, U. (2004). In the search of the sky compass in the insect brain. Naturwissenschaften 91, 199-208.
Judd, S. P. D. and Collett, T. S. (1998). Multiple stored views and landmark guidance in ants. Nature 392, 710-714.
Kim, D. and Hallam, J. (2000). Neural network approach to path integration for homing navigation. In From Animals to Animats, vol. 6, pp. 228-235. MIT Press.
Knaden, M. and Wehner, R. (2005). Nest mark orientation in desert ants Cataglyphis: what does it do to the path integrator? Anim. Behav. 70, 1349-1354.
Knaden, M. and Wehner, R. (2006). Ant navigation: resetting the path integrator. J. Exp. Biol. 209, 26-31.
Knaden, M., Lange, C. and Wehner, R. (2006). The importance of procedural knowledge in desert-ant navigation. Curr. Biol. 16, R916-R917.
Kodzhabashev, A. and Mangan, M. (2015). Route following without scanning. In Living Machines: Biomimetic and Biohybrid Systems (ed. S. Wilson, P. F. M. J. Verschure, A. Mura and T. J. Prescott), pp. 199-210. Springer International Publishing.
Kohler, M. and Wehner, R. (2005). Idiosyncratic route-based memories in desert ants, Melophorus bagoti: how do they interact with path-integration vectors? Neurobiol. Learn. Mem. 83, 1-12.
Kuipers, B., Modayil, J., Beeson, P., MacMahon, M. and Savelli, F. (2004). Local metrical and global topological maps in the hybrid spatial semantic hierarchy. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04, vol. 5, pp. 4845-4851. IEEE.
Lambrinos, D., Maris, M., Kobayashi, H., Labhart, T., Pfeifer, R. and Wehner, R. (1998). Navigating with a polarized light compass. IEE Semin. Self-Learning Robot. II Bio-Robotics, 1-4.
Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R. and Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Rob. Auton. Syst. 30, 39-64.
Legge, E. L. G., Spetch, M. L. and Cheng, K. (2010). Not using the obvious: desert ants, Melophorus bagoti, learn local vectors but not beacons in an arena. Anim. Cogn. 13, 849-860.
Legge, E. L. G., Wystrach, A., Spetch, M. L. and Cheng, K. (2014). Combining sky and earth: desert ants (Melophorus bagoti) show weighted integration of celestial and terrestrial cues. J. Exp. Biol. 217, 4159-4166.
Lindauer, M. (1960). Time-compensated sun orientation in bees. Cold Spring Harb. Symp. Quant. Biol. 25, 371-377.
Mangan, M. and Webb, B. (2012). Spontaneous formation of multiple routes in individual desert ants (Cataglyphis velox). Behav. Ecol. 23, 944-954.
Mathews, Z., Lechon, M., Calvo, J. M. B., Dhir, A., Duff, A., Bermudez i Badia, S. and Verschure, P. F. M. J. (2009). Insect-like mapless navigation based on head direction cells and contextual learning using chemo-visual sensors. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2243-2250. IEEE.
Mathews, Z., Verschure, P. F. M. J. and Berm, S. (2010). An insect-based method for learning landmark reliability using expectation reinforcement in dynamic environments. In 2010 IEEE International Conference on Robotics and Automation, pp. 3805-3812.
Menzel, R., Geiger, K., Chittka, L., Joerges, J., Kunze, J. and Müller, U. (1996). The knowledge base of bee navigation. J. Exp. Biol. 199, 141-146.
Menzel, R., Geiger, K., Joerges, J., Müller, U. and Chittka, L. (1998). Bees travel novel homeward routes by integrating separately acquired vector memories. Anim. Behav. 55, 139-152.
Menzel, R., Greggers, U., Smith, A., Berger, S., Brandt, R., Brunke, S., Bundrock, G., Hülse, S., Plümpe, T., Schaupp, F., et al. (2005). Honey bees navigate according to a map-like spatial memory. Proc. Natl. Acad. Sci. USA 102, 3040-3045.
Menzel, R., Fuchs, J., Nadler, L., Weiss, B., Kumbischinski, N., Adebiyi, D., Hartfil, S. and Greggers, U. (2010). Dominance of the odometer over serial landmark learning in honeybee navigation. Naturwissenschaften 97, 763-767.
Menzel, R., Kirbach, A., Haass, W.-D., Fischer, B., Fuchs, J., Koblofsky, M., Lehmann, K., Reiter, L., Meyer, H., Nguyen, H., et al. (2011). A common frame of reference for learned and communicated vectors in honeybee navigation. Curr. Biol. 21, 645-650.
Merkle, T. and Wehner, R. (2008). Landmark cues can change the motivational state of desert ant foragers. J. Comp. Physiol. A 194, 395-403.
Merkle, T. and Wehner, R. (2010). Desert ants use foraging distance to adapt the nest search to the uncertainty of the path integrator. Behav. Ecol. 21, 349.
Milford, M. and Schulz, R. (2014). Principles of goal-directed spatial robot navigation in biomimetic models. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 369, 20130484.
Mirwan, H. B. and Kevan, P. G. (2015). Maze navigation and route memorization by worker bumblebees (Bombus impatiens (Cresson)) (Hymenoptera: Apidae). J. Insect Behav. 28, 345-357.
Mittelstaedt, H. and Mittelstaedt, M.-L. (1973). Mechanismen der Orientierung ohne richtende Außenreize. Fortschr. Zool. 21, 46-58.
Möller, R. (2000). Insect visual homing strategies in a robot with analog processing. Biol. Cybern. 83, 231-243.
Möller, R. and Vardy, A. (2006). Local visual homing by matched-filter descent in image distances. Biol. Cybern. 95, 413-430.
Möller, R., Lambrinos, D., Pfeifer, R., Labhart, T. and Wehner, R. (1998). Modeling ant navigation with an autonomous agent. In From Animals to Animats 5 (ed. R. Pfeifer, B. Blumberg, J. A. Meyer and S. W. Wilson), pp. 185-194. Cambridge, MA: MIT Press.
Möller, R., Lambrinos, D., Roggendorf, T., Pfeifer, R. and Wehner, R. (2001). Insect strategies of visual homing in mobile robots. In (ed. B. Webb and T. R. Consi), pp. 37-66. AAAI Press/The MIT Press.
Moser, E. I., Kropff, E. and Moser, M.-B. (2008). Place cells, grid cells, and the brain's spatial representation system. Annu. Rev. Neurosci. 31, 69-89.
Mur-Artal, R. and Tardos, J. D. (2017). ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 33, 1255-1262.
Narendra, A. (2007a). Homing strategies of the Australian desert ant Melophorus bagoti. II. Interaction of the path integrator with visual cue information. J. Exp. Biol. 210, 1804-1812.
Narendra, A. (2007b). Homing strategies of the Australian desert ant Melophorus bagoti. I. Proportional path-integration takes the ant half-way home. J. Exp. Biol. 210, 1798-1803.
Narendra, A., Cheng, K. and Wehner, R. (2007). Acquiring, retaining and integrating memories of the outbound distance in the Australian desert ant Melophorus bagoti. J. Exp. Biol. 210, 570-577.
O'Keefe, J. (1979). A review of the hippocampal place cells. Prog. Neurobiol. 13, 419-439.
Philippides, A., Baddeley, B., Cheng, K. and Graham, P. (2011). How might ants use panoramic views for route navigation? J. Exp. Biol. 214, 445-451.
Presotto, A., Verderane, M. P., Biondi, L., Mendonça-Furtado, O., Spagnoletti, N., Madden, M. and Izar, P. (2018). Intersection as key locations for bearded capuchin monkeys (Sapajus libidinosus) traveling within a route network. Anim. Cogn. 21, 393-405.
Riabinina, O., de Ibarra, N. H., Howard, L. and Collett, T. S. (2011). Do wood ants learn sequences of visual stimuli? J. Exp. Biol. 214, 2739-2748.
Ribbands, C. R. (1949). The foraging method of individual honey-bees. J. Anim. Ecol. 18, 47.
Riley, J. R., Smith, A. D., Reynolds, D. R., Edwards, A. S., Osborne, J. L., Williams, I. H., Carreck, N. L. and Poppy, G. M. (1996). Tracking bees with harmonic radar. Nature 379, 29-30.
Riley, J. R., Greggers, U., Smith, A. D., Reynolds, D. R. and Menzel, R. (2005). The flight paths of honeybees recruited by the waggle dance. Nature 435, 205-207.
Sadalla, E. K. and Montello, D. R. (1989). Remembering changes in direction. Environ. Behav. 21, 346-363.
Sadalla, E. K. and Staplin, L. J. (1980). The perception of traversed distance. Environ. Behav. 12, 167-182.
Schwarz, S., Schultheiss, P. and Cheng, K. (2012). Visual cue learning and odometry in guiding the search behavior of desert ants, Melophorus bagoti, in artificial channels. Behav. Processes 91, 298-303.
Schwarz, S., Mangan, M., Zeil, J., Webb, B. and Wystrach, A. (2017). How ants use vision when homing backward. Curr. Biol. 27, 401-407.
Siegwart, R., Nourbakhsh, I. R. and Scaramuzza, D. (2011). Introduction to Autonomous Mobile Robots. Cambridge, MA: MIT Press.
Smith, L., Philippides, A., Graham, P., Baddeley, B. and Husbands, P. (2007). Linked local navigation for visual route guidance. Adapt. Behav. 15, 257-271.
Srinivasan, M. V. (2014). Going with the flow: a brief history of the study of the honeybee's navigational "odometer". J. Comp. Physiol. A 200, 563-573.
Srinivasan, M. V., Zhang, S. W., Lehrer, M. and Collett, T. S. (1996). Honeybee navigation en route to the goal: visual flight control and odometry. J. Exp. Biol. 199, 237-244.
Srinivasan, M. V., Zhang, S. W. and Bidwell, N. J. (1997). Visually mediated odometry in honeybees. J. Exp. Biol. 200, 2513-2522.
Srinivasan, M. V., Zhang, S. W., Berry, J., Cheng, K. and Zhu, H. (1999). Honeybee navigation: linear perception of short distances travelled. J. Comp. Physiol. A 185, 239-245.
Stone, T., Webb, B., Adden, A., Weddig, N. B., Honkanen, A., Templin, R., Wcislo, W., Scimeca, L., Warrant, E. and Heinze, S. (2017). An anatomically constrained model for path integration in the bee brain. Curr. Biol. 27, 3069-3085.e11.
Stürzl, W. and Carey, N. (2012). A fisheye camera system for polarisation detection on UAVs. In Computer Vision – ECCV 2012: Workshops and Demonstrations, Lecture Notes in Computer Science, vol. 7584 (ed. A. Fusiello, V. Murino and R. Cucchiara), pp. 431-440. Springer.
Stürzl, W., Cheung, A., Cheng, K. and Zeil, J. (2008). The information content of panoramic images I: the rotational errors and the similarity of views in rectangular experimental arenas. J. Exp. Psychol. Anim. Behav. Process. 34, 1-14.
Taube, J. S. (1998). Head direction cells and the neurophysiological basis for a sense of direction. Prog. Neurobiol. 55, 225-256.
Towne, W. F. and Moscrip, H. (2008). The connection between landscapes and the solar ephemeris in honeybees. J. Exp. Biol. 211, 3729-3736.
Vardy, A. and Möller, R. (2005). Biologically plausible visual homing methods based on optical flow techniques. Connect. Sci. 17, 47-90.
Vickerstaff, R. J. and Cheung, A. (2010). Which coordinate system for modelling path integration? J. Theor. Biol. 263, 242-261.
Warren, W. H., Rothman, D. B., Schnapp, B. H. and Ericson, J. D. (2017). Wormholes in virtual space: from cognitive maps to cognitive graphs. Cognition 166, 152-163.
Wehner, R. (1998). The ant's celestial compass system: spectral and polarization channels. In Orientation and Communication in Arthropods (ed. M. Lehrer), pp. 145-285. Basel: Birkhauser.
Wehner, R. (2009). The architecture of the desert ant's navigational toolkit (Hymenoptera: Formicidae). Myrmecological News 12, 85-96.
Wehner, R. and Rössler, W. (2013). Bounded plasticity in the desert ant's navigational tool kit. Handb. Behav. Neurosci. 22, 514-529.
Wehner, R. and Srinivasan, M. V. (2003). Path integration in insects. In The Neurobiology of Spatial Behaviour (ed. K. J. Jeffery), pp. 9-30. Oxford University Press.
Wehner, R., Michel, B. and Antonsen, P. (1996). Visual navigation in insects: coupling of egocentric and geocentric information. J. Exp. Biol. 199, 129-140.
Wehner, R., Gallizzi, K., Frei, C. and Vesely, M. (2002). Calibration processes in desert ant navigation: vector courses and systematic search. J. Comp. Physiol. A 188, 683-693.
Wittmann, T. and Schwegler, H. (1995). Path integration: a network model. Biol. Cybern. 73, 569-575.
Wolf, H. and Wehner, R. (2000). Pinpointing food sources: olfactory and anemotactic orientation in desert ants, Cataglyphis fortis. J. Exp. Biol. 203, 857-868.
Wray, M. K., Klein, B. A., Mattila, H. R. and Seeley, T. D. (2008). Honeybees do not reject dances for ‘implausible’ locations: reconsidering the evidence for cognitive maps in insects. Anim. Behav. 76, 261-269.
Wystrach, A., Schwarz, S., Schultheiss, P., Beugnon, G. and Cheng, K. (2011). Views, landmarks, and routes: how do desert ants negotiate an obstacle course? J. Comp. Physiol. A 197, 167-179.
Wystrach, A., Mangan, M., Philippides, A. and Graham, P. (2013). Snapshots in ants? New interpretations of paradigmatic experiments. J. Exp. Biol. 216, 1766-1770.
Wystrach, A., Philippides, A., Aurejac, A., Cheng, K. and Graham, P. (2014). Visual scanning behaviours and their role in the navigation of the Australian desert ant Melophorus bagoti. J. Comp. Physiol. A 200, 615-626.
Wystrach, A., Mangan, M. and Webb, B. (2015). Optimal cue integration in ants. Proc. R. Soc. B Biol. Sci. 282, 20151484.
Zeil, J. (2012). Visual homing: an insect perspective. Curr. Opin. Neurobiol. 22, 285-293.
Zeil, J., Hofmann, M. and Chahl, J. S. (2003). Catchment areas of panoramic snapshots in outdoor scenes. J. Opt. Soc. Am. A 20, 450-469.
Zeil, J., Narendra, A. and Stürzl, W. (2014). Looking and homing: how displaced ants decide where to go. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 369, 20130034.
Zhang, S. W., Lehrer, M. and Srinivasan, M. V. (1999). Honeybee memory: navigation by associative grouping and recall of visual stimuli. Neurobiol. Learn. Mem. 72, 180-201.
Zhang, S., Mizutani, A. and Srinivasan, M. V. (2000). Maze navigation by honeybees: learning path regularity. Learn. Mem. 7, 363-374.

Competing interests

The author declares no competing or financial interests.