ABSTRACT
Older adults often experience serious problems in spatial navigation, and alterations in the underlying brain structures are among the first indicators of a progression to neurodegenerative diseases. Studies investigating the neural mechanisms of spatial navigation and its changes across the adult lifespan are increasingly using virtual reality (VR) paradigms. VR offers major benefits in terms of ecological validity, experimental control and options to track behavioral responses. However, navigation in the real world differs from navigation in VR in several aspects. In addition, the importance of body-based or visual cues for navigation varies between animal species. Incongruences between sensory and motor input in VR might consequently affect performance to different degrees across species. After discussing the specifics of using VR in spatial navigation research across species, we outline several challenges when investigating age-related deficits in spatial navigation with the help of VR. In addition, we discuss ways to reduce their impact, together with the possibilities VR offers for improving navigational abilities in older adults.
Introduction
Spatial navigation, the ability to find our way between places in the environment, is essential for effective functioning in everyday life. Imagine, for example, you want to return to your car after shopping. Without generating a spatial representation of the environment, trying to solve this task can be a rather frustrating endeavor. With advancing age, humans often experience serious problems in spatial navigation and tend to get lost quite easily (Lester et al., 2017; Lithfous et al., 2013). As a consequence, they avoid unfamiliar routes and places, which restricts their personal autonomy and diminishes their quality of life (Burns, 1999). Moreover, declines in spatial navigation can be among the earliest indicators of a progression from healthy aging to Alzheimer's dementia (AD). These deficits are probably caused by neurodegenerative processes in key structures of the brain's navigation circuit in the medial temporal lobe (MTL) that show the earliest signs of pathology in AD (Braak and Del Tredici, 2015). Given that (1) aging is a major risk factor for the development of dementia and (2) the proportion of people aged 60 or over is expected to increase rapidly in the coming decades (United Nations, 2015), it is essential to understand the fundamental mechanisms of aging and disease, in particular their impact upon spatial cognition, for the development of clinical assessment tools and interventions that help maintain people's independence.
For an in-depth discussion of age-related impairments in spatial navigation in rodents, non-human primates and humans, we refer the reader to Lester et al. (2017). Many studies in this research area have shown altered computations in relevant neural networks that in turn affect the way older individuals process incoming sensory information, form spatial representations, and plan and control their behavior in navigational contexts. For example, older adults have been found to be biased towards egocentric processing, and age-related deficits are typically more pronounced for tasks drawing on allocentric representations of the environment or on switching between strategies (e.g. Gazova et al., 2013; Harris et al., 2012; Wiener et al., 2013, 2012). An egocentric reference frame represents locations relative to the observer, whereas an allocentric reference frame represents locations independent of the observer's position (Wolbers and Wiener, 2014). Whereas the hippocampal formation is known to play a key role in allocentric navigation, egocentric navigational strategies instead seem to depend on the engagement of neural structures outside of the MTL. In line with this, age-related deficits in allocentric navigation have been linked to altered hippocampal activation in humans and, in the rodent and non-human primate hippocampus, to impaired firing patterns of place cells that encode the location of the animal in space (e.g. Barnes et al., 1997; Konishi et al., 2013; Moffat et al., 2007; Thomé et al., 2016; Wilson et al., 2005). Moreover, during active navigation, body-based cues (i.e. vestibular cues, proprioceptive cues and motor efference copies), optic flow and certain characteristics of the environment (e.g. landmarks) are available to track one's own position and orientation in relation to the environment. In older adults, however, multisensory integration is typically increased and they often have difficulties in adjusting the sensory weights of different bodily signals appropriately (Kuehn et al., 2018). For example, Bates and Wolbers (2014) found that navigational performance in older and younger adults benefits from the availability of both internal (self-motion) and external (visual landmark) cues, although older adults placed less weight on landmark cues than would have been optimal for the task.
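The notion of optimal cue weighting can be made concrete with the standard reliability-weighted (maximum-likelihood) combination rule, in which each cue contributes in proportion to its reliability (inverse variance). The sketch below is a generic illustration of this principle rather than the specific model used by Bates and Wolbers (2014); all variable names are ours.

```python
def optimal_cue_combination(est_selfmotion, var_selfmotion, est_landmark, var_landmark):
    """Reliability-weighted combination of two independent position estimates,
    e.g. one derived from self-motion cues and one from a visual landmark.
    Each estimate is weighted by its inverse variance; the combined estimate
    is more precise (lower variance) than either cue alone."""
    w_self = (1 / var_selfmotion) / (1 / var_selfmotion + 1 / var_landmark)
    w_landmark = 1 - w_self
    combined_estimate = w_self * est_selfmotion + w_landmark * est_landmark
    combined_variance = 1 / (1 / var_selfmotion + 1 / var_landmark)
    return combined_estimate, combined_variance


# A reliable landmark (low variance) should dominate the combined estimate;
# underweighting it, as older adults tend to do, increases response error.
print(optimal_cue_combination(est_selfmotion=2.0, var_selfmotion=4.0,
                              est_landmark=0.5, var_landmark=1.0))
```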
Research on spatial cognition took a great leap forward with the advent of virtual reality (VR) technologies that allow for realistic navigation and real-time interaction in complex virtual environments while the researcher tracks the participants' responses, movements or brain activity. In fact, many of the above-mentioned studies used VR paradigms to investigate differences in spatial navigation between younger and older adults, and the relevance of VR is also growing in research with other animal species. In addition, VR paradigms are increasingly used for the implementation of training and rehabilitation regimes (see Lange et al., 2010, for a review on the application of VR for sensorimotor rehabilitation). However, working with older age groups, whose sensorimotor and cognitive abilities are different from those of younger adults and whose VR experience might be limited, imposes specific challenges that need to be taken into account in order to draw valid conclusions from the results. Older adults might differ in their perception of virtual environments, the degree of immersion they feel and their ability to navigate these environments. In addition, navigating in VR differs from navigating in the real world in several important aspects. For example, when using stationary desktop VR, only visual cues are available while the navigator's body remains static, making it difficult to generalize the results to real-world navigation. Depending on the animal model or age group under investigation and their reliance on different cues for navigating, incongruences between sensory and motor input might affect their performance to different degrees. In this Review, we will first outline the specifics of using VR in spatial navigation research across species, followed by a discussion of the benefits and challenges of using VR paradigms to study and train navigational abilities across the adult lifespan. Finally, we provide recommendations on how to address these challenges when working with older age groups. We define VR paradigms as experimental paradigms in which participants experience and navigate a digitally generated 3D environment irrespective of whether they actually move during navigation (stationary versus non-stationary VR) or how they view it, i.e. on a 2D screen or in a head-mounted display (HMD; see Fig. 1 for different VR setups for humans).
Virtual reality (VR) in human cognition research. Examples of VR setups to study human spatial navigation using (A) a 2D desktop screen with joystick, (B) a large-scale screen with eye tracking, (C) a large-scale screen with a linear treadmill, and (D) a head-mounted display (HMD) with motion capture. Note that these setups differ along multiple dimensions, including display size, 2D versus 3D presentation, access to spatial cues, and the correspondence between visual and body-based cues. Each of these parameters will affect immersion into the virtual world, which is generally weakest in A and strongest in D.
VR as a game changer for spatial cognition research
VR is set to redefine experimental practice in neuroscience and psychology. It enables researchers to study cognitive and social processes in naturalistic, interactive settings while ensuring a high degree of control and standardization (Bohil et al., 2011; Pan and Hamilton, 2018; Slater and Sanchez-Vives, 2016). By coupling the sensory flow in the virtual environment to the movements of the navigator, non-stationary VR setups can create experiences that mimic those that would occur in the real world. Moreover, stationary VR can easily be combined with human brain imaging, and several VR applications have been developed to study animal cognition (see Thurley and Ayaz, 2017, for a review of VR setups for rodents). Stowers et al. (2017), for example, recently presented a VR system for freely moving animals and validated its usability in experiments with flies, mice and fish (Fig. 2). Their system allows for navigation in virtual environments without any movement restrictions, and interactions between the animal and artificial agents are possible.
VR in animal cognition research. (A) Rodent VR setup as described in Thurley and Ayaz (2017). FreemoVR virtual reality system to study animal cognition in (B) flies, (C) mice and (D) fish as described in Stowers et al. (2017). Photo credit: IMP/IMBA Graphics Department (https://strawlab.org/freemovr).
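At the core of any non-stationary VR setup, including those shown in Fig. 2, is the closed-loop coupling between the navigator's tracked movements and the rendered viewpoint. The following sketch illustrates this coupling for a simple 2D pose (position and heading); it is a conceptual illustration with hypothetical names, not code from any particular VR engine.

```python
import numpy as np


def update_virtual_pose(position, heading, tracked_step, tracked_turn):
    """Advance the virtual viewpoint from one tracked frame to the next.

    position     : np.array([x, y]), location in the virtual world (m)
    heading      : heading in radians
    tracked_step : distance the navigator moved during this frame (m)
    tracked_turn : change in the navigator's heading during this frame (rad)

    Scaling tracked_step or tracked_turn before applying them is the basic
    mechanism by which visuo-motor mismatches are introduced experimentally.
    """
    heading += tracked_turn
    direction = np.array([np.cos(heading), np.sin(heading)])
    return position + tracked_step * direction, heading
```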
The widespread adoption of VR in consumer electronics and recent technological advances, such as stand-alone VR headsets, have made VR systems increasingly affordable. Studies of spatial navigation particularly benefit from these developments. By using VR, realistic interactions in ecologically valid and complex environments can be simulated while the participants' reactions can be tracked in multiple ways, for example, with HMDs that integrate eye tracking or motion tracking. VR paradigms further provide the unique opportunity to go beyond reality and introduce experimental manipulations that would otherwise not be possible, such as teleportation between remote places in an environment to investigate the relationship between time and space or distance coding (e.g. Deuker et al., 2016; Vass et al., 2016). Moreover, VR allows for multisensory stimulation to study the interaction between different sources of sensory input (e.g. proprioceptive versus visual cues). Human spatial navigation can be studied in an analogous manner to rodent navigation to draw conclusions about the precise mechanisms of spatial navigation in different populations and species. For example, Stangl et al. (2018) recently provided the first evidence that grid-cell-like representations in the entorhinal cortex, which are thought to constitute a central component of the navigation system by providing the hippocampus with positional information, are compromised with advancing age. In line with findings from rodent research, the authors showed that the magnitude of these representations was reduced in older compared with younger adults performing an object-location memory task in a virtual environment (Fig. 3). The reduced magnitude of the signal was driven by a lower temporal stability of the grid-cell-like representations but not by changes in their spatial stability. Importantly, the magnitude of this signal was negatively related to errors in path integration, i.e. the ability to keep track of one's own position while moving through space, in the older age group. Thus, the results of this study provide important insights into how neural computations during spatial navigation change with advancing age and affect behavior.
Compromised grid-cell-like representations in old age. (A) Setup of the object-location memory task in Stangl et al. (2018): during functional magnetic resonance imaging (fMRI), participants had to travel to one of three target objects in a virtual environment (upper row) the locations of which they had learned in a separate practice session. The pretraining ensured that task performance – expressed as the deviation of a response from the correct target object location (lower left) – was stable and comparable across age groups during the subsequent fMRI session (lower right). vm, virtual meters. (B) Age group differences in magnitude (top) and temporal stability (bottom) of the grid-cell-like representations during location encoding in VR. *P<0.05. (C) Correlation between the magnitude of grid-cell-like representations and path integration errors in younger and older adults as assessed in a separate behavioral experiment consisting of a body-based and a visual path integration task. Figure adapted with permission from Stangl et al. (2018).
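Grid-cell-like representations in fMRI studies such as Stangl et al. (2018) are typically quantified as a sixfold (hexadirectional) modulation of the entorhinal signal by the participant's virtual movement direction. The sketch below illustrates the core of such an analysis on simulated data; it is a simplified illustration, not the authors' analysis pipeline, and all function and variable names are ours.

```python
import numpy as np


def hexadirectional_magnitude(signal, movement_dir, grid_orientation):
    """Estimate the magnitude of sixfold (grid-cell-like) modulation of a
    signal by movement direction: regress the signal onto the cosine and sine
    of 6*(direction - orientation) and return the amplitude of that component.

    signal           : 1D array of activity estimates (e.g. per trial/event)
    movement_dir     : 1D array of movement directions (radians)
    grid_orientation : putative grid orientation (radians), in practice
                       estimated from an independent portion of the data
    """
    angle = 6.0 * (np.asarray(movement_dir) - grid_orientation)
    X = np.column_stack([np.ones_like(angle), np.cos(angle), np.sin(angle)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(signal), rcond=None)
    return np.hypot(beta[1], beta[2])


# Simulated example: a signal with genuine sixfold directional modulation
# yields a clearly higher magnitude than direction-independent noise.
rng = np.random.default_rng(0)
directions = rng.uniform(0, 2 * np.pi, 500)
modulated = 1.0 + 0.5 * np.cos(6 * directions) + rng.normal(0, 0.5, 500)
noise_only = 1.0 + rng.normal(0, 0.5, 500)
print(hexadirectional_magnitude(modulated, directions, grid_orientation=0.0))
print(hexadirectional_magnitude(noise_only, directions, grid_orientation=0.0))
```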
A further advantage of VR is the possibility of combining it with portable and wearable devices for easy access to spatial navigation training regimes, irrespective of any mobility restrictions or physical disability. For example, Lövdén et al. (2012) showed that daily navigational training on a treadmill attenuates age-related deficits in spatial navigation. By using VR in combination with mobile personal devices, such training regimes can be transferred to different settings (e.g. the participant's home), which enhances compliance and adherence to the training while reducing the associated costs. In addition, this approach can provide access to a wealth of data from millions of participants, irrespective of their location in the world. In 2016, researchers from the UK launched a mobile video game app called Sea Hero Quest (SHQ, www.seaheroquest.com) to systematically assess non-verbal spatial navigation abilities, with the aim of eventually disentangling normal from pathological navigation behavior, for example, in people developing dementia. SHQ was downloaded and played by more than 2.5 million people from 195 countries in the first year after its launch (Coutrot et al., 2018). It involves navigating a boat in a virtual environment while the player sees a map showing their current location and the goals they have to navigate to (wayfinding task). In another part of the game, the player is asked to navigate along a river to find a flare gun and then has to decide which is the correct direction back to the initial starting position (path integration task). The player's movements in the virtual environment are continuously tracked. In the first analyses of their data, Coutrot et al. (2018) showed that spatial ability declines with age and is better in males than in females across countries. Whereas general navigation ability was linked to the economic status of the respective country of origin, gender inequality predicted the size of the male advantage in playing the game. In SHQ VR, a VR extension of the game, the player experiences the virtual environment in an HMD while their head movements are tracked. This example illustrates how VR can be applied in spatial navigation research to acquire high-resolution data from people of all ages to better understand the fundamental mechanisms of aging and disease for the development of interventions that promote healthy aging.
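The path integration component of such games is scored from the tracked trajectory and the player's response; one natural measure is the angular difference between the indicated homing direction and the true direction back to the start. The sketch below illustrates this general scoring logic; it is not the actual SHQ analysis code, and all names are ours.

```python
import numpy as np


def homing_angular_error(trajectory, indicated_direction):
    """Angular error (degrees) between the direction a player indicates back
    to the start and the true direction from their final position to the
    start of the tracked trajectory.

    trajectory          : sequence of (x, y) positions over time
    indicated_direction : player's response direction (radians)
    """
    start = np.asarray(trajectory[0], dtype=float)
    end = np.asarray(trajectory[-1], dtype=float)
    true_direction = np.arctan2(start[1] - end[1], start[0] - end[0])
    # wrap the difference to (-180, 180] degrees before taking the magnitude
    wrapped = np.angle(np.exp(1j * (indicated_direction - true_direction)))
    return float(np.degrees(np.abs(wrapped)))


# Example: an outbound path with two turns; the true homing direction is
# roughly 207 deg, so a response of 195 deg is off by about 12 deg.
outbound_path = [(0, 0), (10, 0), (10, 8), (16, 8)]
print(homing_angular_error(outbound_path, indicated_direction=np.deg2rad(195)))
```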
Navigation in VR versus navigation in the real world
As pointed out by Taube et al. (2013), it is important to be aware that navigation in VR is not the same as navigation in the real world. In many VR setups (Fig. 1), sensory input is mainly provided via the visual system, and this input can differ from the input provided by other senses or the motor system. During passive navigation or in brain-imaging experiments, only optic flow induces the perception of movement through space whereas the participant's body remains static. This discrepancy between body-based and visual cues may lead to differences in the encoding of VR versus real-world environments, and the potential for transfer of spatial knowledge from one modality to the other might be limited. Ruddle et al. (2011b), for example, showed that participants who physically walk in VR are better at following and subsequently retracing a route than participants who only make physical rotations and use a joystick to translate (see Ruddle et al., 2011a, for a similar finding on allocentric processing). Coutrot et al. (2018 preprint) compared user data from the above-mentioned SHQ mobile video game app with navigational performance in a real-world environment. A positive correlation between navigating in VR and in the real world was found, with an advantage for male over female participants that was more pronounced in VR. Although the authors did not assess the general VR experience of their participants, one might speculate whether this increased gender difference is related to differing degrees of VR and gaming experience in males versus females. There is indeed evidence that video game experience is related to navigational performance in desktop and immersive VR setups but not in real-world environments (Richardson et al., 2011). Importantly, gender differences in performance were largely attenuated in this study when video game experience was included as a covariate in the analysis. Murias et al. (2016) extended these findings by showing that experience with video games that have navigational components, but not mere experience with game controls, improved navigational performance in terms of the use of different navigational strategies (allocentric versus egocentric). Thus, with increasing experience in playing games such as SHQ, female players are likely to catch up with their male counterparts in the context of navigation in VR.
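Statistically, 'including video game experience as a covariate' corresponds to an analysis of covariance; a minimal sketch using the statsmodels formula interface is shown below. The column names ('performance', 'gender', 'game_experience') are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf


def gender_effect_controlling_for_gaming(data: pd.DataFrame):
    """Fit an ANCOVA-style model: navigation performance as a function of
    gender, with video game experience entered as a continuous covariate.
    If the gender coefficient shrinks towards zero relative to a model
    without the covariate, the gender difference is partly accounted for
    by differences in gaming experience."""
    model = smf.ols('performance ~ C(gender) + game_experience', data=data).fit()
    return model.params, model.pvalues
```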
One should also keep in mind that participants might adapt their navigational strategy and, for example, exploit different sources of information differently depending on the task context, even if overall navigation performance is comparable between VR and real-world navigation. For example, it has recently been shown that participants rely more on distinct features (i.e. landmarks) than on the geometrical layout when asked to reorient themselves in a virtual room using an HMD in combination with a manual wheelchair as compared with navigation in a real room (Kimura et al., 2017).
In contrast to traditional navigation paradigms using static stimuli or videos, however, VR is able to create a more realistic sense of ‘being there’ than ever before. The degree of immersion in VR can be enhanced in multiple ways; for example, by integrating the participant's body into the virtual environment. Phenomena such as the rubber hand illusion (i.e. integrating external objects into the representation of one's own body through synchronous visuotactile stimulation) illustrate that body representations can be manipulated quite easily in humans (see also Slater and Sanchez-Vives, 2016; Spanlang et al., 2014). Thus, through achieving synchrony between different sensations (visual–proprioceptive: the virtual body is presented where it should be; visual–motor: the virtual body moves in the same way as one's own body; visual–tactile: synchronous touch), a feeling of embodiment can be created that facilitates performance in contexts where participants cannot see their own body; for example, when they wear an HMD or during brain imaging. Research in younger adults shows that the experience of having a body in VR, or the sense of a virtual self, facilitates performance in tasks requiring, for example, mental rotation or perspective taking (Pan and Hamilton, 2018).
Applicability of VR to study spatial navigation across different animal species
To date, there are very few studies comparing the neural processing of navigating in VR versus the real world in animals. However, in animal cognition too, VR offers major benefits over classical testing in ‘real-world’ conditions (e.g. mazes). For example, in rodents, the vast majority of studies on place cells, grid cells and other cells implicated in spatial coding have been conducted in small-scale or vista environments (i.e. circular or rectangular boxes). While many important insights have been gained from these studies, navigation in the real world occurs in much more complex environments that cannot be apprehended from a single vantage point (Wolbers and Wiener, 2014). Given that the mechanisms that govern vista- versus environmental-scale navigation only partially overlap, it is important to move beyond traditionally used experimental paradigms. In this context, VR setups (Fig. 2) offer the unique opportunity to overcome these limitations and to study navigational behavior and the underlying neural mechanisms in realistic, large-scale environments.
Moreover, in many rodent VR setups, the animal is placed on a freely rotating ball with its head or body fixed while visual input is provided via a screen that covers its entire field of view (Fig. 2A; Minderer et al., 2016). In this way, precise control of the sensory cues that carry relevant information for navigation is possible, and each cue's contribution to the animal's actions can be dissociated. For example, Campbell et al. (2018) used VR to systematically manipulate the gain of self-motion and landmark information while head-fixed mice traversed a virtual linear track for rewards, and recorded the activity of different cell types in the MTL (i.e. grid, border and speed cells). The manipulation was implemented by changing the ratio between the rotation of the ball the mice were placed on and the speed of the optic flow. The tetrode recordings showed that border cells responded to landmark information, whereas the other cell types responded to both cue types and were sensitive to changes in the relationship between the movement of the ball and the visual translation on the screen, a finding that would not have been possible without VR.
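In such experiments, the gain manipulation amounts to scaling the visual translation on the screen relative to the measured ball rotation. A minimal sketch of this logic is given below; the exact parameterization used by Campbell et al. (2018) may differ, and all names are illustrative.

```python
def advance_virtual_track_position(track_position, ball_displacement, visual_gain=1.0):
    """Update the animal's position on a virtual linear track from the distance
    it has run on the spherical treadmill (ball) during the current frame.

    track_position    : current position on the virtual track (cm)
    ball_displacement : distance covered on the ball this frame (cm)
    visual_gain       : ratio of visual translation to ball rotation;
                        1.0 keeps self-motion and optic flow congruent,
                        values != 1.0 put the two cue types in conflict
    """
    return track_position + visual_gain * ball_displacement
```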
In conditions of head or body fixation, however, vestibular input is minimized and consequently is in conflict with the visual input provided by the VR system, which may alter the way the animal encodes the environment. Aghajan et al. (2015) showed that the spatial selectivity and stability of place cells in area CA1 of the hippocampus are largely reduced when body-fixed rats engage in random foraging in a 2D virtual environment as compared with real-world navigation. Moreover, only 40% of the measured cells were activated in VR. Interestingly, spatial selectivity was enhanced in VR when rats traveled to fixed locations to receive a reward; that is, when locomotion cues were more spatially informative and repeatedly paired with distant visual cues. Acharya et al. (2016) found that CA1 place cells that are modulated by the head direction of the animal during random foraging in a real-world environment show a similar modulation in VR, despite the absence of vestibular cues in this setting. Thus, despite impaired spatial selectivity, visual cues appear to be sufficient for generating directional selectivity in hippocampal place cells in rodents.
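The 'spatial selectivity' referred to above is commonly summarized as the spatial information a cell's firing carries about position, expressed in bits per spike and computed from a binned firing-rate map. The sketch below shows this standard computation; it is meant only to illustrate the measure, not to reproduce the analyses of the cited studies.

```python
import numpy as np


def spatial_information(rate_map, occupancy_prob):
    """Spatial information (bits per spike) of a binned firing-rate map:
    I = sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean),
    where p_i is the occupancy probability and r_i the firing rate in bin i.
    Higher values indicate sharper spatial selectivity (e.g. a place field);
    values near zero indicate spatially uninformative firing."""
    rates = np.asarray(rate_map, dtype=float)
    p = np.asarray(occupancy_prob, dtype=float)
    p = p / p.sum()
    mean_rate = np.sum(p * rates)
    ratio = rates / mean_rate
    valid = ratio > 0                      # bins with zero rate contribute 0
    return np.sum(p[valid] * ratio[valid] * np.log2(ratio[valid]))


# A sharply tuned 'place field' carries more bits per spike than diffuse firing.
field = [0.1, 0.1, 8.0, 0.1, 0.1]
diffuse = [1.7, 1.7, 1.7, 1.7, 1.7]
uniform_occupancy = [0.2] * 5
print(spatial_information(field, uniform_occupancy))     # high
print(spatial_information(diffuse, uniform_occupancy))   # ~0
```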
Together, these studies illustrate how VR can be applied to gain novel insights into the neural mechanisms of spatial navigation in rodents. However, there are important differences between rodent and primate spatial navigation (see Zhao, 2018, for a recent review). In humans, body-based cues may not always be critical for efficient navigation (e.g. Waller and Greenauer, 2007). A greater reliance on visual cues in humans and non-human primates during navigation may lead to species differences in how VR versus real-world environments are represented (Ekstrom, 2015). Rodents have a substantially lower proportion of photoreceptors in the retina, limited binocular overlap and a greater dependence on head movements to cover their visual surroundings (Huberman and Niell, 2011). The primate visual system, in contrast, is able to extract spatial information over larger distances, and some spatial cell types observed in the primate MTL are primarily tuned to visual information (Ekstrom et al., 2003). Moreover, Killian et al. (2012) found evidence for grid cells in the entorhinal cortex of non-human primates during visual exploration of distant space, a finding that was subsequently replicated and extended in humans by means of functional magnetic resonance imaging (fMRI), suggesting that the representation of the visual search space in 2D may drive the signal (Julian et al., 2018; Nau et al., 2018b). These findings suggest that primates, in contrast to rodents, may entertain multiple representations of space in parallel when lying in the scanner or when navigating in VR, resulting in a larger overlap between the processes that operate during navigation in VR versus the real world. Thus, there is growing evidence that, in the primate brain, visual exploration of space shares important features with actual physical exploration (Nau et al., 2018a).
Using virtual reality to study older age groups
Research on the usability of VR in cognitive aging research is still in its infancy, but several studies suggest that the effects of age on spatial navigation are comparable between VR and real-world environments, although performance in VR is often somewhat lower overall (Cushman et al., 2008; Kalia et al., 2008; Kalová et al., 2005; Taillade et al., 2015). Taillade et al. (2015), for example, tested navigational knowledge in a group of younger and older adults after route learning in the real world or in a VR version of the same environment and found similar age-related performance deficits in the two task conditions. This was independent of whether egocentric (e.g. route repetition) or allocentric knowledge (e.g. map drawing) was assessed. Similar patterns of results have been observed in patients with mild cognitive impairment (MCI) and early AD (Cushman et al., 2008). Jebara et al. (2014) investigated the impact of different degrees of interaction during navigation in VR on episodic memory encoding in older and younger adults. Participants encountered a specific route from inside a car (1) as passive passengers, (2) when being passively moved along the route while choosing the direction of driving at every intersection, (3) when moving the car along rails using pedals while having no influence on turns at intersections and (4) when moving the car and also turning it at intersections using pedals and a steering wheel. Performance in both younger and older participants was improved in conditions 2 and 3, which both provided some form of interaction in the virtual environment, either through making the decisions at turning points or through moving along the route. In particular, the ability to choose the driving direction proved beneficial in enhancing episodic memory, and the authors concluded that this might have been due to (covert) action planning that supports the generation of a coherent event sequence. Condition 4, with full control over the car, in contrast, seemed too demanding for both age groups to master the navigational task successfully. Performance in this condition was mainly associated with executive functioning as assessed in separate neuropsychological tests. The potential influence of attentional load on navigational performance deficits in older adults is discussed further in the following section. In the above-mentioned studies, desktop VR paradigms were used to study potential interactions between age group and task condition. To the best of our knowledge, however, no study has yet compared age-related performance differences during navigation in more immersive VR setups (e.g. using HMDs or including the participant's body in VR) with real-world navigation.
Challenges when using VR to study older age groups
Despite the substantial benefits that VR offers, it is important to be aware of several challenges when using this approach with older age groups. First, older adults are often less experienced than younger adults with new technologies such as VR, which may result in stress and anxiety when they are asked to use them (Barnard et al., 2013). This may become even more pronounced in experimental settings, in which they are eager to perform well in order to make a good impression. Most of today's older adults grew up in an analog world and may have never used the Internet, played video games or walked in a 3D virtual environment. Their understanding of very basic digital and technological concepts, such as using icons to navigate a website or peripherals to navigate in VR, might consequently be limited (see Castilla et al., 2013, for the evaluation of a multi-application system targeted at older adults based on Internet technology and VR). Including only older adults with VR experience in studies using VR paradigms, in contrast, may result in a selection bias, making it difficult to generalize the results to the whole aging population. Whether older adults are inclined to adopt new technologies depends on factors such as past experiences with technology, their self-perception (i.e. ‘I am too old to learn new things’), the intention to learn, the characteristics of the interface and the availability of emotional and technical support (Barnard et al., 2013).
Second, cybersickness may occur if sensory inputs from the real world are in conflict with those from VR. Cybersickness is similar to motion sickness, including symptoms of nausea, headache and eye strain, and has been linked to individual factors (age, gender, VR experience), characteristics of the display system (e.g. screen size, resolution), and task design (e.g. the speed of movement and rotation, the smoothness of control, the predictability of events; Renkewitz and Alexander, 2007). Older adults seem to be more susceptible to cybersickness than younger adults (Arns and Cerney, 2005). Liu (2014) showed that scene rotation speed and the duration of exposure in VR contribute to the experience of cybersickness in older adults during passive navigation in a virtual environment displayed on a standard 2D computer screen. In contrast, no adverse side effects were reported in a sample of older adults when VR was used to trigger autobiographical memories by presenting scenes on a large 320×240 cm screen including 3D sound and finger tracking that allowed in-place navigation and interaction with objects (Benoit et al., 2015). Thus, besides age, additional factors such as general VR experience and the specifics of the task at hand may mediate older adults' susceptibility to cybersickness: the less experience they have, and the less realistic or predictable the experimental manipulation, the more cybersickness they are likely to experience.
Third, as outlined above, successfully navigating in a virtual environment requires a certain degree of immersion in VR, which can be facilitated by the experience of having a body in VR. However, there is evidence that older adults are actually less embodied than younger adults (Costello and Bloesch, 2017; Diersch et al., 2016, 2013; Kuehn et al., 2018). Presenting an avatar or body parts might consequently support task understanding but not necessarily the degree of immersion older adults experience in VR. Determining the specific circumstances and contexts (e.g. reference frames or navigational strategies) for which the presence of a body or an avatar increases immersion in VR through embodiment and consequently supports navigational learning is therefore an important goal for future research.
Fourth, the presentation of additional information such as an avatar or a high degree of interaction in VR can be distracting and can in turn negatively affect older adults' navigational performance as a result of age-related changes in attentional control. Age-related deficits in allocating attention to relevant information while inhibiting irrelevant information have been associated with performance declines in a variety of context-dependent tasks (Gazzaley et al., 2005; Lustig et al., 2007). In line with this, Merriman et al. (2018) found that the presence of moving avatars in a virtual environment resulted in performance decreases during spatial learning in older but not younger adults compared with navigation in an empty environment. This effect was specific to the presence of conspecifics, and no such performance declines were observed when moving objects were embedded in the scene. Measuring different aspects of episodic memory encoding in younger and older adults, Sauzéon et al. (2015) showed that active versus passive exploration of a virtual room reduced false object recognition in younger adults but increased it in older adults, which was linked to their executive functioning abilities.
Fifth, during active navigation using an HMD or a treadmill, age-related changes in sensorimotor control (i.e. difficulties in balance and gait, higher movement variability) may require older adults to prioritize sensorimotor processing over cognitive processing in order to reduce the risk of falling in multiple-task situations (Schäfer et al., 2006). In line with this, Lövdén et al. (2005) showed that older but not younger adults' navigational performance during walking improves if a walking aid (i.e. holding on to a handrail while walking on a treadmill) is provided. Virtual guidance during walking in VR on a treadmill has also been found to attenuate age-related differences in navigational performance and walking variability (Schellenbach et al., 2010b).
How to address age-related challenges when using VR
Some of the problems mentioned in the previous section, particularly those linked to the first two challenges, are presumably cohort specific and might be alleviated in future generations as they become more and more used to the integration of digital devices and VR in their everyday life. In the meantime, we recommend a number of measures researchers should consider to ensure that performance data obtained from older adults are not confounded by limited understanding of, and suboptimal interaction with, the particular VR setup (see Table 1 for a summary). First of all, it is important to provide sufficient training in VR before testing (e.g. until a pre-defined performance criterion is reached). For example, Schellenbach et al. (2010a) showed that 20 min of training to walk on a treadmill in virtual environments is sufficient to reach stable walking patterns in older and younger adults. Providing support in the form of walking aids or virtual guidance may free up cognitive resources required to focus on navigational task demands. The virtual environment, the input devices/controllers and the task at hand should be introduced step by step in the training phase to avoid stressing participants with too much information presented at the same time. The experimenter may additionally demonstrate the task at the beginning of the experiment. We further suggest simplifying the experimental setup as much as possible (e.g. using button boxes instead of joysticks if applicable) and keeping movement and rotation speeds in VR low (e.g. translation close to the biological walking speed of around 1.6 m s−1). Creating the impression of a physical presence in the virtual environment, while avoiding the presentation of too much distracting information, can further facilitate older adults' performance. For example, the interactive nature of VR can be stressed by including a virtual experimenter who directly interacts with the participant and practices the task with them. Finally, the participants' level of computer literacy can be assessed beforehand by means of questionnaires such as the computer literacy scale (CLS; Sengpiel and Dittberner, 2008), and their level of cybersickness can be determined with the Simulator Sickness Questionnaire (Kennedy et al., 1993). Participants who are not used to computers or whose risk of developing cybersickness is very high might consequently be excluded from testing. For behavioral studies, mobile devices (e.g. tablets) might be easier for older adults to operate than standard desktop PCs because they are more intuitive to use (Barnard et al., 2013).
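Several of these recommendations translate directly into parameter choices in the experimental software. The sketch below collects them into a single hypothetical configuration with a training-to-criterion loop; the specific values merely illustrate the recommendations above (e.g. translation speed capped near biological walking speed) and are not validated defaults.

```python
from dataclasses import dataclass


@dataclass
class OlderAdultVRConfig:
    """Hypothetical settings reflecting the recommendations above."""
    max_translation_speed: float = 1.6   # m/s, close to biological walking speed
    max_rotation_speed: float = 30.0     # deg/s, slow rotation to limit cybersickness
    input_device: str = "button_box"     # simpler than a joystick, where applicable
    training_criterion: float = 0.8      # proportion correct required before testing
    max_training_blocks: int = 10        # upper bound so training cannot run forever


def train_to_criterion(run_practice_block, config: OlderAdultVRConfig) -> bool:
    """Repeat practice blocks until the pre-defined performance criterion is
    reached (or the block limit is hit). `run_practice_block` is any callable
    returning the proportion of correct responses in one block."""
    for _ in range(config.max_training_blocks):
        if run_practice_block() >= config.training_criterion:
            return True
    return False
```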
Conclusion
In recent years, major advances have been made in neuroscience to uncover the neural mechanisms of spatial navigation, a complex cognitive skill that is highly relevant for effective functioning in everyday life. Spatial navigation has further been identified as being very sensitive to the process of aging and perhaps critical for elucidating the progression from healthy aging to neurodegeneration. More and more studies in this area of research are using VR paradigms to implement their experimental procedures because of the major benefits VR offers over classical paradigms using, for example, static stimuli or videos. VR enables the study of spatial navigation in large-scale, ecologically valid contexts, allows for a high degree of control and standardization, is interactive and permits behavioral responses to be tracked in different ways. In addition, multisensory stimulation is possible and researchers can also go beyond reality to test their hypotheses. VR allows the study of human spatial navigation in close analogy to animal spatial navigation. In animal cognition, VR opens up novel avenues for dissociating the influence of different sources of information during navigation. VR further provides a unique way to train navigational abilities across different age groups and, in the case of stationary VR setups, to improve older adults' cognitive functioning despite potential restrictions in mobility. We have discussed differences between VR and real-world navigation across animal species and conclude that VR is particularly useful for studying spatial navigation in primates because of their greater reliance on visual input compared with rodents.
However, special care should be taken when working with older age groups, whose experience in using VR is often limited and whose susceptibility to cybersickness might be elevated. In addition, not much is known about their ability to become immersed in VR paradigms through embodiment and how this might affect navigational performance. Presenting contextually rich environments and a high degree of interaction in VR might impair task performance as a result of age-related deficits in allocating attention to relevant information while inhibiting irrelevant information. When using non-stationary VR setups, older adults might prioritize sensorimotor over cognitive processing because of limited resources. We have outlined several recommendations to counteract these problems, including ways to familiarize older adults with VR before testing them. Taken together, we believe that the advantages of VR outweigh the disadvantages and that incorporating cutting-edge VR technologies into cognitive neuroscience research on aging is an extremely promising research avenue, which will contribute significantly to our understanding of memory and navigational functioning across the adult lifespan in the years to come.
Footnotes
Funding
This work was funded by a Starting Investigator Grant of the European Research Council (AGESPACE 335090).
Competing interests
The authors declare no competing or financial interests.