ABSTRACT
More than a century of research, of which JEB has published a substantial selection, has highlighted the rich diversity of animal eyes. From these studies have emerged numerous examples of visual systems that depart from our own familiar blueprint, a single pair of lateral cephalic eyes. It is now clear that such departures are common, widespread and highly diverse, reflecting a variety of different eye types, visual abilities and architectures. Many of these examples have been described as ‘distributed’ visual systems, but this includes several fundamentally different systems. Here, I re-examine this term, suggest a new framework within which to evaluate visual system distribution in both spatial and functional senses, and propose a roadmap for future work. The various architectures covered by this term reflect three broad strategies that offer different opportunities and require different approaches for study: the duplication of functionally identical eyes, the expression of multiple, functionally distinct eye types in parallel and the use of dispersed photoreceptors to mediate visual behaviour without eyes. Within this context, I explore some of the possible implications of visual system architecture for how visual information is collected and integrated, which has remained conceptually challenging in systems with a large degree of spatial and/or functional distribution. I highlight two areas that should be prioritised in future investigations: the whole-organism approach to behaviour and signal integration, and the evolution of visual system architecture across Metazoa. Recent advances have been made in both areas, through well-designed ethological experiments and the deployment of molecular tools.
Introduction
The human fascination with eyes and vision stretches back thousands of years. Our own visual system comprises one pair of anterior lateral eyes innervated by an anterior brain, but many animals deviate from this familiar blueprint: eye number, location and function are all highly variable (Nilsson, 2013, 2021), and such visual systems are numerous, diverse and present across the animal tree of life. They employ many of the same eye and photoreceptor types, opsins and phototransduction pathways, and they fulfil familiar visual tasks. There are many excellent reviews of the structure, function and evolution of individual eyes and their components (von Salvini-Plawen and Mayr, 1977; Arendt and Wittbrodt, 2001; Arendt, 2003; Land and Nilsson, 2006; Nilsson and Arendt, 2008; Lamb et al., 2009; Land and Nilsson, 2012; Nilsson, 2013, 2021; Gehring, 2014; Nilsson and Bok, 2017), whose insights and conclusions are independent of the number, arrangement and types of eyes involved and will therefore not be summarised here. Instead, this Commentary will explore the diverse approaches to overall visual system architecture (e.g. the number, arrangement and functional populations of eyes; Fig. 1). Biologists have long been aware of this variation (Box 1) and, given our understanding of binocularity and its importance (see Read, 2021 for a review), we might expect substantial implications of more complex (in the sense of having more components) architectures for visual function. However, dedicated efforts to address this have only matured in the past few decades. The first dedicated review appeared earlier this year (Buschbeck and Bok, 2023) and increasing interest in this topic – combined with the accelerated availability of new experimental tools for non-model organisms – promises a highly productive future.
This field is continuing to evolve and I aim here to delineate the enormous architectural diversity of visual systems, examine its implications for vision, and set out potential directions and focal areas for future research.
The first accurate images of different visual system architectures – those of arachnids and insects – appeared in the 16th–17th centuries, as magnifying optics became available. The first descriptions of eye structures, numbers and configurations are found in systematic texts and comprehensive anatomical monographs (e.g. Poli, 1795; Semper, 1877; Plate, 1897), some of which provide astonishing structural detail.
Dedicated investigations of animal eyes and visual behaviour began in the late 19th and early 20th centuries, with the contemporaries of Grenacher, Exner, Hesse, Dakin and Homann generating detailed accounts of eye morphology, and ethologists such as von Uexküll, Hess and Crozier exploring the reactions and sensory ecologies of an astounding range of organisms. Their approach produced a wealth of descriptive data, laying extensive groundwork for subsequent studies, but the connections between the structure of individual eyes and the function of whole visual systems remained murky.
In the mid 20th century, the study of animal vision was revolutionised by the quantification of resolution, focal lengths and fields of view by Barlow, Snyder, Kirschfeld and their peers. The late, great Mike Land was instrumental in this transformation, through elegant theoretical and comprehensive empirical work on many organisms, including some with distributed visual systems (e.g. Land, 1965; 1981; 1985b; Nilsson, 2021). This included the identification of basic relationships between anatomical features – such as inter-receptor angles, refractive indices and apertures – and functional parameters like acuity and optical sensitivity. Meanwhile, the application of electrophysiology enabled researchers to study the detection and integration of visual information directly (e.g. Parry, 1947; Land, 1966; Yamashita and Tateda, 1976; Moore and Cobb, 1985). Scallops and spiders emerged as key model systems for distributed vision, alongside insect compound eyes and ocelli, while the nebulous ‘dermal light sense’ yielded relatively few clues to its nature in echinoderms and others. Although researchers often worked across several taxa, the results remained somewhat isolated. From this period onwards, JEB became a natural home for vision research (e.g. Parry, 1947; Cornwell, 1955; Millott and Yoshida, 1960; Land, 1966, 1985a; Yamashita and Tateda, 1978; Schmid, 1998; Garm et al., 2007; Beltrami et al., 2010; Kirwan et al., 2018).
In the past 30 years, the overall composition and combined function of whole visual systems have been increasingly subject to dedicated study. The important work of characterising morphology, behaviour and physiology continues, with new imaging, molecular, computational and phylogenetic tools adding to the scientific arsenal at our disposal (Bok et al., 2017; Picciani et al., 2018; Alves Audino et al., 2020; Li et al., 2023). Where we now benefit from good morphological groundwork in selected taxa, this has enabled a greater focus on how these systems mediate behaviour and how they evolve. Particularly exciting are explorations of signal integration using psychophysical experiments and modelling approaches (Jakob et al., 2018; Chappell et al., 2021; Li et al., 2023), and the use of phylogenetic comparative methods to reconstruct visual system evolution (e.g. Picciani et al., 2018; Alves Audino et al., 2020). Crucially, explicit efforts to unify our knowledge across different taxa have produced comparative studies (Nilsson, 1994; Johnsen, 1997), conference symposia (SICB 2016, ICN 2018) and the first dedicated book (Buschbeck and Bok, 2023), opening new avenues for collaboration and discourse.
What are ‘distributed’ visual systems?
‘Distributed’ has been used to describe the architecture of many visual systems that exceed a single pair of lateral eyes, but this includes a variety of different configurations and capabilities (see Buschbeck and Bok, 2023 for a review). The phrase was first popularised to describe visual systems comprising large and often developmentally indeterminate numbers of eyes that are spread across the body surface and not directly innervated by an anterior neural mass. This includes the mantle eyes of bivalves (Land, 1965; Alves Audino et al., 2020), the shell eyes of chitons (Moseley, 1885; Kingston et al., 2018) and the radiolar eyes of fan worms (Hesse, 1908; Bok et al., 2016). In most of these cases, cephalisation is minimal or absent, body plans are highly derived and the homology of the eyes to an ancestral urbilaterian cephalic eye (see Arendt and Wittbrodt, 2001 for a review) is unclear or unlikely. The term may also be applied to the visual systems of radially symmetrical animals such as sea stars and cnidarians, wherein the eyes are inherently non-cephalic, but their number is lower and more fixed, and their spatial distribution is more limited, or to the apparent visual systems of sea urchins and brittle stars, which lack discrete eyes but use scattered photoreceptors (also called ‘dispersed’ or ‘extraocular’ vision; Blevins and Johnsen, 2004; Kirwan et al., 2018; Sumner-Rooney et al., 2018, 2020).
‘Distributed’ has also been used to describe or include visual systems that exhibit ‘large’ numbers of eyes and/or functional differences between groups of eyes, regardless of their spatial arrangement or innervation. Examples include spiders, which usually have eight eyes (Morehouse, 2020), annelids that have multiple pairs of cephalic eyes (Peterson, 1984), the many lateral eyes of myriapods and the stemmata (see Glossary) of some larval insects (Buschbeck and Bok, 2023). The presence of more than one eye type is not uncommon and occurs most prominently in the lateral and median eyes of arthropods and vertebrates. The visual systems of both insects, with their ocelli (median) and compound (lateral) eyes, and spiders, which have principal (median) and secondary (lateral) eyes, have been described as distributed; this usage clashes with the expectations of very large or flexible numbers of eyes and of non-cephalic innervation, but there is a clear division of function between the eye types. By the same logic, we might also consider the lateral and parietal (median) eyes present in some amphibians and reptiles to exhibit distribution of visual function, but this has not been suggested so far.
Glossary

Low-pass filter
A filter that only allows signals below a certain cut-off frequency to pass, e.g. to remove high-frequency noise.
Oversampling (in vision)
Configuration of a photoreceptor array such that a given point in space is sampled by multiple adjacent photoreceptors.
Retinal mosaic
The spatial cellular composition of the retina, including the arrangement and distribution of photoreceptor types and support cells.
Rhopalia
Appendages found in certain true and box jellyfish, each bearing multiple sensory organs such as statoliths and eyes.
Snell's window
Phenomenon by which downwelling light is viewed through a restricted angle by an underwater viewer. Gives the surface a circular appearance subtending approximately 97 degrees.
Somatotopic projection
The downstream preservation of the spatial arrangement of sensory structures on the body, in their neuronal projections onto subsequent neural structures.
Stemmata
Single-lens eyes commonly found in larval insects.
Superposition compound eye
A multi-lensed eye in which the lenses are separated from the non-segregated retina by a gap or ‘clear zone’, resulting in the production of a single image.
As a result of this variable usage, the term ‘distributed’ vision is associated with such diversity that it is challenging to identify common themes. It may be more useful to consider the degree of distribution in the architecture of a visual system as a spectrum; the configurations of these (and all other) visual systems can be placed along two common axes: spatial distribution and functional distribution.
Spatial distribution
The physical placement of the functional units of a visual system (usually eyes) is highly variable: they may be restricted to the head (as in insects and vertebrates), populate a specific non-cephalic region (such as the mantle edge in bivalves; Poli, 1795) or be spread across the body surface (as seen in chitons; Moseley, 1885). In the most extreme cases, individual photoreceptors dispersed across the body may facilitate vision (Ullrich-Lüter et al., 2011). Quantifying spatial distribution is somewhat challenging. In bilaterally symmetrical body plans, we might consider the total proportion of the anteroposterior or mediolateral body axes spanned by the eyes, the skewness of eye distribution along these axes or the regularity of their arrangement, for example (Fig. 2). In the case of cubozoans and sea stars, the eyes are restricted to very specific areas (the rhopalia and terminal tube feet, respectively; see Glossary), but these are spread out by the radial body symmetry.
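These candidate metrics (axial span, skewness and regularity of eye placement) are straightforward to operationalise. The sketch below is illustrative only — the function and variable names are hypothetical, and the metrics are proposals rather than established standards — and summarises eye coordinates along a single, normalised body axis:

```python
import numpy as np

def spatial_distribution_metrics(eye_positions, body_length):
    """Toy metrics for eye placement along one body axis.

    eye_positions: 1-D coordinates of the eyes (e.g. anteroposterior);
    body_length: total length of that axis. Names and definitions are
    illustrative, not taken from the literature.
    """
    x = np.sort(np.asarray(eye_positions, dtype=float))
    # Proportion of the body axis spanned by the eye array
    span = (x[-1] - x[0]) / body_length
    # Fisher skewness: sign indicates anterior/posterior bias of placement
    z = (x - x.mean()) / x.std()
    skewness = np.mean(z ** 3)
    # Regularity: coefficient of variation of inter-eye spacing
    # (0 = perfectly even array; larger values = more stochastic placement)
    gaps = np.diff(x)
    regularity_cv = gaps.std() / gaps.mean()
    return span, skewness, regularity_cv

# A perfectly regular ten-eye array spanning 90% of a body axis of length 100
span, skew, cv = spatial_distribution_metrics(np.linspace(5, 95, 10), 100)
```

For a regular, symmetric array such as this, the skewness and spacing variation both collapse to zero, so departures from zero flag the kinds of biased or irregular arrangements discussed below.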
Equally important, and perhaps more informative about the function of the visual system as a whole, is its sampling distribution: the combined field(s) of view (FoV) of the whole array rather than simply eye position. Although physical spacing may be linked to – or constrained by – body plan, symmetry and mobility, we might expect sampling distribution to better reflect ecologically important factors such as the nature of the visual environment.
Functional distribution
Fulfilment of a given task may demand multiple types of visual information, requiring varying spatial or temporal resolutions, or particular contrast, wavelength or polarisation sensitivities. Individual eyes may be regionalised to collect different information from different parts of the visual field, or sample multiple streams of information simultaneously from the same part of the visual field, e.g. through retinal mosaic composition (see Glossary; e.g. Hesse, 1908; Land, 1965, 1989; Insausti et al., 2012; Ribi and Zeil, 2018). Function can also vary between eyes, which can specialise divergently, potentially allowing them to escape selective conflict and to economise on space and processing capacity. These ‘eye types’ may stem from distinct developmental and/or evolutionary origins, allowing them to also exploit fundamentally different characteristics, such as compound versus single-aperture organisation (e.g. insect ocelli and compound eyes), ciliary versus rhabdomeric photoreceptors (e.g. cephalic and dorsal eyes in onchidiids; Katagiri et al., 2002) or inverted versus everted retinas (e.g. spider principal and secondary eyes; Homann, 1928; Morehouse et al., 2017). Conversely, eyes sharing a common origin may diverge over evolutionary time, as seen among the six secondary eyes of some spiders. These develop from a single pair of primordia and share their basic structure but can diverge during development to vary in size, organisation of the retina and opsin expression (Samadi et al., 2015; Schomburg et al., 2015). Such substantial functional divergence is found overwhelmingly, but not exclusively, in visual systems possessing more than two eyes. Exceptions include the cock-eyed squid, which exhibits asymmetry in eye size and associated swimming behaviour (Thomas et al., 2017).
These two axes, spatial and functional distribution, can be applied to all visual systems, including our own (Fig. 2). Although (individual) human eyes are among the more complex, our visual system architecture sits at the lower extremes of both spatial and functional distribution (Buschbeck and Bok, 2023): we have two similar, adjacent eyes with large binocular overlap. By contrast, the shell eyes of chitons are, to our knowledge, functionally identical but widely spread across the body, whereas the median and lateral eyes of insects exhibit substantial functional distribution, but lesser spatial distribution (Fig. 2D–K). Although both could be described as ‘distributed’ architectures, they are clearly very different. Moving forward, being more specific about how a given visual system is considered to exhibit distribution will reduce confusion and may help to tease apart observed patterns: for example, high spatial and low functional distribution may be typical of sessile and non-cephalised taxa (Buschbeck and Bok, 2023), whereas the inverse may be more common in active species with more complex visual needs.
Where do we find diverse visual system architectures? Almost everywhere!
Phylogenetic spread
Variation in visual system architecture is both common and widespread across the metazoan phylogeny. Varying spatial or functional distribution appears in (at least) arthropods, molluscs, cnidarians, echinoderms, annelids, platyhelminthes, chordates, nemerteans, brachiopods, chaetognaths, rotifers and gastrotrichs – almost every major phylum that has eyes (Fig. 1G). There are multiple examples within most of these groups (e.g. von Salvini-Plawen and Mayr, 1977; Picciani et al., 2018; Alves Audino et al., 2020), appearing at various phylogenetic depths. For example, separate median and lateral eyes are likely to be ancestral to all arthropods (Paulus, 1979), but highly derived visual system architectures appear in isolated families and even genera, as seen in the gastropods Onchidiidae and Cerithidea, respectively (Semper, 1877; Houbrick, 1984). The degree of spatial or functional distribution in animal visual systems is not itself a useful systematic character across large phylogenetic distances, and there are no obvious phylogenetic patterns: diverse visual system architectures can apparently evolve regardless of body symmetry, the extent of cephalisation or developmental modes.
Ecological factors
The taxa exhibiting complex visual system architectures also encompass diverse ecologies and life histories. There are some constraints – for example, radial symmetry necessitates spatial distribution in cnidarians and echinoderms – but so far, there are few clear indications as to whether certain lifestyles, habitats or lineages favour increased visual system distribution. We have a good grasp of the links between ecology and individual eye structure [e.g. the predominance of superposition compound eyes (see Glossary) in nocturnal insects; Land and Nilsson, 2012], and between ecology and binocularity (e.g. greater frontal binocularity in predatory birds and mammals; Land, 1989; Land and Nilsson, 2012; Read, 2021). Comparable patterns are likely to exist for visual system architectures with more, or more variable, components, but our patchy understanding of whether and how these visual systems work as a whole has so far hampered their systematic identification. Casual inspection suggests several hypotheses for broad evolutionary and ecological trends, such as the association of spatially distributed vision with sessile lifestyles, but these require formal analysis.
Key concepts in describing visual system architecture
Approaches to visual system architecture
In the context of spatial and functional distribution, three general approaches to visual system configurations emerge: the repetition of identical eyes (here referred to as ‘duplicated’ visual systems; Fig. 2A), the presence of multiple types of eyes (‘parallel’ visual systems; Fig. 2B) and vision in the absence of eyes, using dispersed photoreceptors (‘non-ocular’ visual systems; Fig. 2C). Despite widespread evidence for photoreception occurring outside eyes, non-ocular vision has only been identified in sea urchins and brittle stars (Kirwan et al., 2018; Sumner-Rooney et al., 2020). As our understanding of non-ocular vision remains very poor, the validity of this category is not yet clear (see Sumner-Rooney and Ullrich-Lüter, 2023 for a full review). These three architectural approaches are not mutually exclusive; for example, annelids may have multiple eye types (cerebral, segmental, branchial and pygidial; Suschenko and Purschke, 2009) that equip them with both duplicated and parallel visual systems. Nor do these broad categories have any implication for the homology of eyes or visual systems; they simply describe the different ways in which animals might arrange them. However, they could help us better describe visual system diversity, focus our efforts to understand how they work and make more objective comparisons between them.
Comparing visual system architectures
Despite their immense diversity, there are some basic comparative metrics we can apply to at least duplicated and parallel visual systems. Eye number and diameter (both absolute and relative) are the most universal, objective and readily accessible measures, but – with appropriate definitions – we can also compare eye types, position and basic functional parameters (e.g. spatial resolution) between taxa. Such comparisons can provide insight into the fundamental relationships between eye number, size, location and function, but also between these characteristics and ecological factors.
How might visual system architecture affect the collection of visual information?
The functional properties of an individual eye can be summarised by several key parameters: contrast, wavelength and polarisation sensitivities, spatial and temporal resolutions, and FoV. In most two-eyed systems, only FoV differs between the eyes, leaving a single axis of variation: the extent of binocular overlap. Even this offers substantial opportunity for adaptation, with greater overlap providing redundancy and potentially supporting depth and motion perception, and greater separation increasing total global coverage (Read, 2021). In more complex architectures, there is more scope for variation in all these parameters, resulting in potentially complex multidimensional sampling spaces. Characterising and interpreting these in a single species is a considerable challenge, let alone comparing them between multiple taxa. Identifying some basic principles of how whole-system visual sampling can vary could simplify this process; some initial possibilities are outlined in the following sections.
Duplicated systems
In duplicated visual systems, including ours, only the FoV varies between the constituent eyes. Greater numbers of eyes provide additional dimensions for spatial relationships between them, depending on their spatial distribution (Fig. 2D,G,J). The resulting arrays may be sparse, sampling isolated parts of the visual scene, they may align to give continuous coverage of the scene between them, or they may overlap so that some or all parts of the scene are sampled by more than one eye. Although sparser sampling could maximise total FoV coverage, it may hamper the integration of information between eyes due to a lack of correspondence. Denser sampling could facilitate integration between adjacent eyes, provide greater protection from injury through functional redundancy (Read, 2021), and potentially improve signal-to-noise ratios and optical sensitivity (as seen in sabellid worms and ark clams; Nilsson, 1994).
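The sparse/contiguous/overlapping distinction can be made concrete with a toy one-dimensional model. Assuming identical eyes arranged around a single 360 deg azimuthal ring (a deliberate simplification — real FoV arrays are three-dimensional and eyes need not be identical), the angular gaps between neighbouring viewing directions determine the coverage class:

```python
import numpy as np

def classify_coverage(centres_deg, fov_deg):
    """Classify a ring of identical eyes as sparse, contiguous or overlapping.

    centres_deg: viewing directions of each eye around a 360 deg azimuth;
    fov_deg: full angular field of view of one eye. A hypothetical 1-D
    sketch of the array types discussed in the text.
    """
    c = np.sort(np.asarray(centres_deg, dtype=float)) % 360.0
    # Angular gap between successive eyes, wrapping around the circle
    gaps = np.diff(np.append(c, c[0] + 360.0))
    if np.all(gaps < fov_deg):
        return "overlapping"   # every direction seen, some by more than one eye
    if np.all(gaps <= fov_deg):
        return "contiguous"    # fields exactly tile the scene
    return "sparse"            # blind sectors between adjacent fields

# Eight eyes spaced 45 deg apart, each with a 60 deg field of view:
coverage = classify_coverage(np.arange(0, 360, 45), 60.0)
```

Shrinking the per-eye field of view below the inter-eye spacing flips the same array from overlapping to contiguous and then to sparse, which is one way to see how eye number, spacing and optics jointly set whole-system coverage.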
Of course, eyes can be unevenly distributed, meaning sampling may not be uniform. It might be more concentrated and/or more regular in the most important parts of the visual field, analogous to a second-order fovea, whereas stochastic placement of eyes could reflect lesser constraint or selective pressure for regularity. The anterior concentration of eyes is seen in strongly cephalised animals, such as vertebrates and insects, but skewed distributions along the anteroposterior axis are apparent in chitons, fan worms and others, while scorpions, scallops and chitons can all exhibit left–right asymmetry in eye numbers (Loria and Prendini, 2014; Whoriskey et al., 2014; Sigwart and Sumner-Rooney, 2021). Characterising FoV arrays could therefore provide crucial insight into the function of duplicated visual systems, but numerous eyes and flexible body shapes can make this difficult.
Parallel systems
Distinct eye types usually collect different information, meaning that all visual parameters can vary across parallel systems. Eye types may sample different parts of the scene if specific cues are likely to occupy different areas or if light conditions vary substantially across it. In cubozoan jellyfish, the upper and lower lens eyes sample through Snell's window (see Glossary) and below the waterline, respectively, and differ in their spatiotemporal resolutions and pupillary responses (Nilsson et al., 2005; Garm et al., 2007; O'Connor et al., 2009). Multiple eye types can also be tailored to the relevant visual task across the body. For example, the pygidial and segmental ocelli of fan worms are likely to be very simple light sensors to ensure the body stays secreted in its protective tube, but the exposed radiolar eyes respond to more dynamic stimuli (Suschenko and Purschke, 2009; Bok et al., 2016).
Alternatively (or additionally), parallel systems may re-sample the same areas for different information, as seen in insect ocelli and compound eyes (Goodman, 1970; Berry et al., 2007; Taylor et al., 2016) or the slit, pit and lens eyes in cubozoans (Garm et al., 2008; O'Connor et al., 2009). This is broadly analogous to regional specialisation or retinal mosaic patterning within a single eye but may be particularly useful when the animal and its eyes are small, wherein it can be more efficient to divide functions between separate structures (Land and Nilsson, 2006), or where optimising one eye for two divergent tasks introduces conflict. Both may be relevant in shaping the visual system of jumping spiders: ultra-high-resolution vision (Land and Hebets, 1969) is restricted to the very small but mobile visual field of the principal eyes, but this is encompassed by that of the adjacent anterior lateral eyes, which provide coarser resolution but high sensitivity to motion (Land, 1985a; Jakob et al., 2018). Thus, the spider can extract two very different streams of information from the anterior direction in a cost-effective way, despite its tiny size.
Non-ocular systems
The sampling distribution of non-ocular visual systems is determined by the receptive fields of the dispersed photoreceptors, each of which can presumably contribute a maximum of one ‘pixel’ to the global image (if there is one). Both sea urchins and brittle stars have multiple populations of putative photoreceptors, expressing c- or r-opsins, located within skeletal pores, tube feet, spines and radial nerves (Johnsen, 1997; Ullrich-Lüter et al., 2011, 2013; Delroisse et al., 2013; Sumner-Rooney and Ullrich-Lüter, 2023). At least the skeletal pore photoreceptors are thought to contribute to vision, and these exhibit substantial oversampling (see Glossary; Kirwan et al., 2018; Sumner-Rooney et al., 2021). Whether this is a universal feature of non-ocular vision is unclear. It is possible that other photoreceptor populations, such as those in the tube feet, also contribute to vision (Ullrich-Lüter et al., 2011; Lesser et al., 2011). It is also not known whether this is analogous to a parallel system; as our understanding of vision is built on discrete visual organs with near-contiguous retinas, any functional comparisons must be cautious.
How might different visual system architectures process information?
Although individual eyes have been studied in a wide range of species, their isolated functions are only one part of vision. How, if at all, is information combined between multiple inputs? This is already a highly complex question in two-eyed taxa (e.g. stereopsis, regionalisation; Read, 2021), and represents an enormous conceptual and practical challenge when considering visual systems that incorporate large numbers and/or multiple types of eyes.
Different visual tasks require different levels of processing: non-directional responses such as the withdrawal responses of fan worms do not necessitate spatial vision (Class I; Nilsson, 1994, 2013), whereas visual communication and prey pursuit in jumping spiders require high spatial and temporal resolution (Class IV; Nilsson, 2013). These more complex tasks require not only the collection of detailed information but considerable neural capacity to process it. In taxa with little or no elaboration or centralisation of the nervous system, such as bivalves, platyhelminthes and echinoderms, we might expect a greater degree of localised and peripheral processing and/or less sophisticated integration of data (Nilsson, 1994). Superficially, this appears to include species that exhibit greater spatial distribution of the visual system, but lesser functional distribution. If so, this might suggest that integration between duplicated units is less complex and/or necessary than integration between parallel units.
There is an enormous range of potential approaches to processing visual information across complex visual system architectures, and this remains a challenging research area. So far, relatively little evidence is available to support or refute these in most taxa, and the following suggestions are not exhaustive; however, they may be useful in providing hypotheses or provoking further thought.
Duplicated systems
There are many possible uses for multiple streams of similar information. A common question is whether, and how, visual information from duplicated eyes is combined to reconstruct a global view. In engineered camera arrays, many overlapping cameras can even permit light-field imaging or enhanced resolution (e.g. Harfouche et al., 2023), but in many biological systems this seems highly unlikely: in groups such as chitons, fan worms, bivalves and onchidiid gastropods, the large numbers and often flexible or irregular positions of individual eyes would present a vast computational challenge for image reconstruction, and these species appear to lack the required neural investment. The poor resolving power (if any) of the individual eyes would also limit the utility of such a presumably expensive process.
Nonetheless, there is some evidence for the combination of spatial information between eyes; Chappell and Speiser (2023) recently demonstrated that neurons from adjacent eyes in chitons project onto overlapping regions of the lateral nerve cord in an ordered fashion, and Spagnolia and Wilkens (1983) described somatotopic projection (see Glossary) of the optic nerves onto glomeruli in scallops. The preservation of spatial information at the neural level suggests some form of integration. Curiously, although scallops use their vision for a variety of tasks, potentially including predator detection (Wilkens and Ache, 1977), guiding tentacle extension (von Buddenbrock and Moller-Racke, 1953; Chappell et al., 2021), assessing feeding opportunities (Speiser and Johnsen, 2008) and habitat selection (von Buddenbrock and Moller-Racke, 1953; Hamilton and Koch, 1996), measured visual behaviour in chitons is so far limited to defensive clamping responses (Speiser et al., 2011; Chappell and Speiser, 2023).
Conversely, isolated eyes may produce localised responses (such as tentacle extension) or global responses (such as defensive retractions) without any further comparison or combination with other eyes. However, such a system may be vulnerable to visual noise and false alarms; an obvious intermediate would be a summative or threshold approach, whereby the stimulation of several eyes is required to trigger a response, as is probably the case in fan worms (Nilsson, 1994).
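A summative/threshold scheme of this kind is simple to sketch. In the toy model below — the function name, eye count and threshold value are illustrative assumptions, not measurements from fan worms — a global withdrawal fires only when several eyes report a stimulus, filtering out single-eye false alarms:

```python
import numpy as np

def global_response(eye_signals, threshold_k):
    """Threshold integration across duplicated eyes: trigger a global
    response (e.g. whole-body withdrawal) only if at least threshold_k
    eyes are currently stimulated. A hypothetical sketch, not a model
    fitted to any species."""
    return bool(np.count_nonzero(eye_signals) >= threshold_k)

# 20 eyes: a single eye falsely triggered by noise is ignored,
# but a looming shadow stimulating many adjacent eyes is not.
noise = np.zeros(20, dtype=bool); noise[7] = True
shadow = np.zeros(20, dtype=bool); shadow[5:15] = True
```

Raising the threshold trades responsiveness for noise rejection, which is the essential compromise such a system would face.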
Parallel systems
One likely advantage of parallel visual systems is that they allow an extended range of detection, but combining different classes of information can be complicated, whether it occurs within or between eyes. The generation of composite views is presumably only relevant where different eye types sample overlapping parts of the scene and contribute to the same visual task. Although it is unclear whether composite views are generated between parallel visual systems, physiological evidence indicates that at least some arthropods have interneurons that integrate signals between them. In the blowfly, lobular plate interneurons are tuned to respond maximally to aligned axes in the compound eye and ocellus (Parsons et al., 2006, 2010), whereas in jumping spiders, each eye projects to a dedicated lamina, but input from both the principal and anterior-lateral eyes is required to stimulate firing within the arcuate body (Menda et al., 2014; Long, 2020).
However, this level of integration requires an additional, ‘expensive’ layer of processing that is likely to be unnecessary for many animals and tasks. Instead, incoming streams of visual information could be treated as sequential (e.g. jumping spiders use their secondary eyes to detect motion and their principal eyes to then track it; Jakob et al., 2018) or hierarchical (e.g. ocellus-mediated horizon stabilisation in flight overriding feature detection by the compound eyes).
Non-ocular systems
The level of integration in non-ocular vision remains unclear: is information combined in one body region or photoreceptor population, across the whole body, or not at all? Are these integrated regions or populations functionally analogous to eyes? To confer spatial resolution, information from these scattered photoreceptors must be combined or compared at some level, locally or globally across the animal. Although it might be coincidental that non-ocular vision has so far only been described in echinoderms, their radial symmetry may be advantageous. Oversampling among photoreceptors is likely to impair the animal's ability to detect the directionality of incoming light at a fine spatial scale, but directionality could be coarsely restored by comparing signals between the radial nerve cords. Li et al. (2023) recently proposed a model in the sea urchin Diadema africanum, wherein the excitation of output neurons in the central neural ring relies on sequential inhibition of the radial nerves by photoreceptors, and Sumner-Rooney et al. (2020) noted that the brittlestar Ophiomastix wendtii often approaches visual targets with two outstretched arms, suggesting that animals might compare photoreceptor signals between adjacent arms. Both scenarios imply five visual units, one per ambulacrum (Fig. 2F,I). Each visual unit would provide very coarse spatial information, in line with observed behavioural thresholds, but this may be sufficient for seeking out large, nearby objects and could theoretically act as a low-pass filter (see Glossary) for object size.
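One way to picture the hypothesised five-unit arrangement is as a winner-take-all comparison between sector sums. The sketch below is entirely hypothetical: the sector signals, the 20% margin and the winner-take-all rule are illustrative assumptions of mine, not components of the models cited above.

```python
# Toy model of coarse directional vision from five radial "visual units"
# (one per ambulacrum). Sector count, signal values and the winner-take-all
# rule with a 20% margin are illustrative assumptions only.

def preferred_sector(sector_sums):
    """Return the index of the most strongly stimulated of five 72 deg
    sectors, or None if no sector stands out from the mean. Small targets
    never dominate a whole sector, so this doubles as a crude low-pass
    filter for object size."""
    assert len(sector_sums) == 5
    mean = sum(sector_sums) / 5.0
    best = max(range(5), key=lambda i: sector_sums[i])
    # Require the winning sector to exceed the mean by a 20% margin
    if sector_sums[best] < mean * 1.2:
        return None
    return best

# A large dark object dominating sector 2 is localised:
print(preferred_sector([1.0, 1.0, 2.0, 1.0, 1.0]))   # -> 2
# A small target raises no sector far enough above the rest:
print(preferred_sector([1.0, 1.05, 1.0, 1.0, 1.0]))  # -> None
```

The resulting resolution is very coarse (one of five bearings), which is consistent with the broad behavioural thresholds reported for echinoderm vision.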
Moving forward: what next for the study of visual system architecture?
Among the potential future research directions, two stand out as being the most pressing and promising: how do these visual systems function as a whole, and what evolutionary forces shape visual system architecture? Armed with an interdisciplinary suite of tools, a collaborative community and excellent foundational knowledge of individual eye function and binocularity, we are well equipped to tackle these questions.
How is visual information combined across complex architectures to contribute to behaviour?
We have a solid understanding of how individual eyes work at a physiological and molecular level, and for many species we understand the roles that vision plays in their overall ecology and behaviour; however, the black box that sits between these remains opaque. Even our understanding of how humans combine information between two eyes, or different visual features within eyes, remains the subject of extensive research. Given the breadth of diversity in more complex visual system architectures, answering this question will require us to systematically identify and address specific hypotheses about how information is collected, processed and used to mediate behaviour.
Generating hypotheses
Functional morphology can provide clues to integration and processing. Characterising whole visual-system FoV arrays will provide insight into potential processing and inform the design of experiments targeting certain groups of eyes or parts of the visual field. Studies of neuronal projections, such as that by Chappell and Speiser (2023) in chitons, will also be highly informative as to whether spatial information within and between eyes could be preserved downstream.
Computational and network modelling approaches may also be helpful in generating hypotheses. The field of multisensor fusion sets out the various possible ways of combining information, many of which may be relevant to distributed visual systems; these include competitive, cooperative and complementary configurations. These principles, and their applications to systems such as CCTV, multi-camera arrays and light-field cameras (duplicated systems) and multimodal computer vision such as camera/LiDAR fusion (parallel systems), may offer useful analogies when designing experiments and constructing hypotheses for animal systems (e.g. Harfouche et al., 2023; Huang et al., 2022 preprint).
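For concreteness, the three fusion configurations can be sketched with toy functions. All names, readings and rules here are illustrative assumptions, not descriptions of any real visual system or fusion library.

```python
# Toy illustration of the three multisensor-fusion configurations:
# competitive, cooperative and complementary. Purely hypothetical examples.

def competitive(readings):
    """Redundant sensors view the same target; fusing by the median
    suppresses one faulty or noisy unit."""
    s = sorted(readings)
    return s[len(s) // 2]

def cooperative(left, right, baseline=1.0):
    """Two sensors together yield information neither has alone, e.g. a
    crude distance estimate from binocular disparity (in radians)."""
    disparity = abs(left - right)
    return baseline / disparity if disparity else float("inf")

def complementary(fields):
    """Non-overlapping fields of view simply concatenate into wider
    coverage, with no cross-comparison required."""
    merged = {}
    for f in fields:
        merged.update(f)
    return merged
```

Competitive fusion suppresses an aberrant unit, cooperative fusion derives a quantity (here, a crude distance) that no single sensor reports, and complementary fusion simply widens coverage; duplicated, parallel and non-ocular visual systems might each plausibly exploit different mixtures of these strategies.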
Testing hypotheses
Electrophysiology often provides the most direct insight into integration (e.g. Moore and Cobb, 1985; Parsons et al., 2010; Menda et al., 2014), but it is technically demanding, even in insects and other well-studied taxa, let alone non-model systems. In the latter, the novel application of electrophysiological techniques and a lack of detailed understanding of the nervous system may seriously hamper success. Additionally, the presence of large numbers of eyes or dispersed photoreceptors would greatly increase the workload and complexity of characterising either individual photoreceptors or the processing centres, but the potential rewards are substantial.
For many taxa, carefully designed behavioural experiments represent the most accessible way to test hypotheses about integration. These can exploit the natural behaviours of the animal, potentially non-invasively and at relatively low cost. Chappell et al. (2021), for example, elegantly used tentacle extension behaviour to demonstrate that scallops can locate visual stimuli, suggesting they might compare the visual scene between adjacent eyes. Manipulation experiments can also be informative: covering the various eye pairs of spiders has demonstrated both divisions of labour and contributions of multiple pairs to certain visual tasks (Schmid, 1998; Nagata et al., 2012) and feeding conflicting cues to individual eyes has shed light on the combination of information between paired eyes in taxa including insects and molluscs (e.g. Nityananda et al., 2018).
How and why do these systems evolve?
Given the energetic cost of eyes and visual processing, there must be substantial benefits to large visual systems. Building on our understanding of the evolutionary origins and adaptations of individual eyes, and the adaptive advantages of varying degrees of binocularity, we are well positioned to explore what shapes more complex systems. Within-group and between-group comparisons offer two possible approaches to addressing this question. The first is to study the divergence of specific homologous systems within a given clade, to reconstruct morphological and functional changes and to identify their ecological correlates or developmental mechanisms. The second attempts to compare across independently evolving systems to look for unifying principles or common themes that transcend homology and phylogenetic distance.
Within clades
Working within homologous systems facilitates the study of the trajectory of evolutionary change and the underlying developmental and genetic mechanisms. Phylogenetic comparative methods allow us to trace the evolutionary history of traits across a clade and identify ecological correlates; Audino et al. (2022) recently demonstrated that the evolution of eye number in scallops is affected by mobility and habitat. Simple data like these are also relatively accessible; if similar studies were coordinated across other groups, we could rapidly improve our understanding of broader macroevolutionary trends in visual system architecture (see below).
Morphology is underpinned by development, and morphological evolution requires changes to the relevant developmental programmes (Gehring, 2014). We can examine the expression of conserved genes commonly involved in eye development, such as components of retinal determination gene networks (Arendt and Wittbrodt, 2001; Arendt et al., 2002; Prpic, 2005; Schomburg et al., 2015). Where variation in expression correlates with differences in resultant morphology, manipulation of candidate gene expression using RNA interference or CRISPR allows direct assessment of their potential contributions to visual system evolution (e.g. Gainett et al., 2020). This approach requires good molecular resources; although this currently limits its suitability to selected taxa, established models such as Drosophila and Platynereis exhibit parallel and duplicated visual systems, respectively, and can be studied with such tools. The identification of separate retinal determination gene networks for the median and lateral eyes of arthropods, for example, already provides insight into the development and divergence of parallel visual systems (Morehouse et al., 2017). Meanwhile, the applicability of these molecular techniques to non-model systems is increasing dramatically.
Across clades
At a broader scale, are there certain taxa, ecologies or life histories that favour the evolution of different or more complex visual system architectures? Are there fundamental rules that shape them? Answering these questions requires us to cast a wider phylogenetic net. Despite the impressive foundation of knowledge available on selected taxa, there are still many for which we have relatively little quantitative data. Formal analysis, using simple comparative metrics (e.g. eye number and diameter), should provide sufficient taxonomic coverage and incorporate a phylogenetic framework. We already have an excellent universal theoretical framework for the evolution of discrete eyes (Nilsson, 2013); in the case of non-ocular systems, we might imagine a similar transition from non-ocular photoreceptor to non-ocular vision through the addition of screening pigment and the relevant processing of directionality and, ultimately, spatial resolution (Sumner-Rooney et al., 2020).
Conclusion
From more than a century of vision research, at the heart of which JEB has stood throughout, it is clear that the fundamental blueprint of animal visual systems is not necessarily a single pair of eyes. More complex architectures are common, widespread and incredibly diverse, can be described in terms of their spatial and functional distribution, and broadly fall into three basic functional classes (duplicated, parallel and non-ocular). The study of these systems offers unique challenges and opportunities, and unified efforts will be required to address these over the next 100 years. There are two main avenues for future work: elucidating the mechanisms of whole-animal responses to visual stimuli and examining the evolution of visual system architecture. Armed with a suite of tools ranging from the classical to the cutting-edge, this research community is surely set for another bountiful century of progress and productivity.
Acknowledgements
I am very grateful to Prof. Almut Kelber and Dr Michaela Handel for their invitation to contribute to the JEB Centenary series and their patience with revisions to the manuscript, to Prof. Dan Nilsson, Dr Sam J. England, and two anonymous reviewers for their feedback on the manuscript, and to Dr Mike Bok, Prof. Dan Speiser, and Prof. Elke Buschbeck for their invitations to contribute to the distributed vision symposium at ICN in 2018 and to the Buschbeck and Bok, 2023 volume on the subject.
Footnotes
Funding
This work was funded by the Deutsche Forschungsgemeinschaft Emmy Noether programme (SU 1336/1-1).
References
Competing interests
The author declares no competing or financial interests.