Variation in Drosophila compound eye size is studied across research fields, from evolutionary biology to biomedical studies, requiring the collection of large datasets to ensure robust statistical analyses. To address this, we present EyeHex, a MATLAB-based tool for automatic segmentation of fruit fly compound eyes from brightfield and scanning electron microscopy (SEM) images. EyeHex features two integrated modules: the first uses machine learning to generate probability maps of the eye and ommatidia locations, while the second, a hard-coded module, leverages the hexagonal organization of the compound eye to map individual ommatidia. This iterative segmentation process, which adds one ommatidium at a time based on registered neighbors, ensures robustness to local perturbations. EyeHex also includes an analysis tool that calculates key metrics of the eye, such as ommatidia count and diameter distribution across the eye. With minimal user input for training and application, EyeHex achieves exceptional accuracy (>99.6% compared to manual counts on SEM images) and adapts to different fly strains, species, and image types. EyeHex offers a cost-effective, rapid, and flexible pipeline for extracting detailed statistical data on Drosophila compound eye variation, making it a valuable resource for high-throughput studies.

Insect compound eyes consist of crystal-like lattices of elementary eyes, called facets or ommatidia. Each ommatidium is composed of a fixed number of cells, including photoreceptors that sense light and transmit visual information to the brain, as well as a set of accessory cells (Nériec and Desplan, 2016; Wolff and Ready, 1993). Compound eye size varies greatly among insect groups and species due to differences in the number or diameter of individual facets. In fruit flies (Drosophila genus), the number of ommatidia per eye ranges from approximately 600 to 1500, while ommatidia diameter varies from approximately 15 to 22 µm (Arif et al., 2013; Gaspar et al., 2019; Keesey et al., 2019; Posnien et al., 2012; Ramaekers et al., 2019; Reis et al., 2020). Genetic variation is the primary driver of eye size variation among fruit flies, but environmental conditions such as food quality, larval density and temperature also play a role (Currea et al., 2018). Across a wider phylogenetic range, the variation in eye size is much more spectacular. For example, the number of ommatidia varies from zero in some ant species up to approximately 20,000 in dragonflies (Land and Nilsson, 2012).

Variation in compound eye size is investigated from various scientific perspectives, ranging from evolutionary studies to biomedical research. Evolutionary ecologists aim to identify how eye size variation affects vision and visually guided behaviors, potentially resulting in adaptation to novel ecological niches. Research has shown that larger eyes provide improved vision in dim light and improved acuity (reviewed in Land, 2005; Land and Nilsson, 2012; Warrant, 2008; Warrant and Nilsson, 1998). In contrast, reduced eye size is common in insects adapted to darkness, such as cave-dwelling beetles, which rely little or not at all on vision (Luo et al., 2018, 2019).

Variation in compound eye size, especially in fruit flies, is also of interest to developmental or evolutionary-developmental biologists who investigate the genetic and developmental origins of morphological variation (reviewed in Casares and McGregor, 2020). Furthermore, the Drosophila eye serves as a genetic model for human eye conditions, such as aniridia or retinitis pigmentosa (reviewed in Gaspar et al., 2019), and as a ‘test-tube’ for studying various human pathologies (e.g. Zika virus pathogenicity, Harsh et al., 2020; or molecular determinants of spinal muscular atrophy, Maccallini et al., 2020).

Until now, the development of larger scale studies has been hindered by the lack of a simple, accurate, and cost-effective method to fully segment ommatidia. In most reports, ommatidia were manually counted from scanning electron microscopy (SEM) images with limited throughput (Arif et al., 2013; Keesey et al., 2019; Posnien et al., 2012; Ramaekers et al., 2019). Statistical analyses that require larger datasets were performed using either estimations of ommatidia numbers (Ramaekers et al., 2019) or measurements of eye surface area or length instead of ommatidia counts (Arif et al., 2013; Harsh et al., 2020; Norry and Gomez, 2017). In a recent report, researchers applied a commercial segmentation tool (Amira v.2019.2, Thermo Fisher Scientific) to high-quality images obtained by X-ray tomography (Gaspar et al., 2020). Brightfield imaging could be a desirable alternative to the more laborious and expensive SEM or X-ray tomography technologies. However, due to the prominent curvature of compound eyes, segmenting ommatidia from two-dimensional (2D) images is non-trivial. Three features present specific challenges. The first is the variable degree of ocular hair (bristle) coverage of ommatidia in 2D eye images: depending on their location on the eye surface, the level of coverage varies from minimal (e.g. Fig. 1, black frames) to substantial (e.g. Fig. 1, green, blue and orange frames). The second is the dome-shaped transparent corneas, which act as lenses overlying the ommatidia and cause them to reflect light differently depending on their orientation and the angle of the incoming light, hence the varying levels of reflection observed in brightfield images (Fig. 1A). The third is the shape of the ommatidia themselves, which varies from circular (e.g. Fig. 1, black frames) to crescent-shaped (e.g. Fig. 1, blue frames), depending on their location on the eye. A recent study on eye size variation in the Drosophila virilis group exploited the reflection of light by the ommatidia, capturing and segmenting the reflections rather than imaging the ommatidia themselves (Reis et al., 2020). Another recent method, named Ommatidia Detection Algorithm (ODA) (Currea et al., 2023), applies Fourier transforms to detect ommatidia in images of eye glue molds and in light microscopy and SEM images of various insect species. This method detects the periodic patterns of ommatidia in the compound eye with filters in the frequency domain of the images. However, when applied to 2D images, it may be limited by the high variability of ommatidia imaging angles in distinct regions of the compound eye (Fig. 1), particularly at the eye periphery where ommatidia are imaged at markedly slanted angles. This may result in missed ommatidia or misidentified ommatidia centroid locations.

Fig. 1.

The many facets of the eye. Due to the convex structure of the compound eye, the aspect of ommatidia observed on 2D images varies greatly in different eye regions. Images shown here were acquired using (A) a macroscope (this study) and (B) by SEM (Ramaekers et al., 2019). The zoom-in regions are shown on the right side of each panel with the corresponding frame colors. Note the differences in reflection, shape and inter-ommatidial hair aspect.


In this study, we present EyeHex, a comprehensive MATLAB-based toolbox for the automated segmentation of ommatidia in Drosophila compound eyes, and compare its performance with that of the recently published segmentation method ODA (Currea et al., 2023). EyeHex implements a hybrid approach that first preprocesses the compound eye images with supervised machine learning and then maps ommatidia one by one onto an expanding hexagonal grid, making it robust to local perturbations and shape variations. The pipeline is coupled with an analytical tool that automatically generates high-precision metrics, such as ommatidia counts along the eye anterior–posterior axis and the distribution of ommatidia diameters across the eye.

The EyeHex toolbox

The EyeHex toolbox is a MATLAB-based toolkit designed for the semi-automatic segmentation of all visible ommatidia from 2D images of Drosophila compound eyes. EyeHex segmentation consists of two sequential phases (Fig. 2). Phase 1 uses a machine learning-based classifier (Trainable Weka Segmentation, ImageJ; Arganda-Carreras et al., 2017) to isolate the compound eye from the background and delineate individual ommatidia. The classifier can be trained using groups of ommatidia sampled from different eye regions, ensuring robustness to variations in hair coverage, light reflection, and ommatidial shape (Fig. 1). Notably, the classifier can be trained with diverse datasets, making EyeHex compatible with multiple image types, including brightfield and SEM images. Phase 2 implements a locally adaptive strategy that exploits the hexagonal organization of ommatidia to precisely map their locations and quantify their numbers based on the trained classifier's output. Specifically, each ommatidium is added based on the locations of previously identified ommatidia, making the segmentation process robust to variation in ommatidia density, particularly at the eye periphery (Fig. 1). Limited user attention through Graphical User Interfaces (GUIs) is required during the training and manual correction steps. EyeHex output includes labeled eye images depicting the ommatidia masks (Fig. 2E), as well as ommatidia counts and positions in comma-separated-values (.csv) format (Fig. 2F) for further statistical analyses. The toolbox, its manual and sample images are available for download on GitHub at https://github.com/huytran216/EyeHex-toolbox.

Fig. 2.

Overview of ommatidia segmentation steps using EyeHex toolbox. The process consists of two phases. The ‘Training and Classification phase’ involves: (A) the preparation of training data for (B) the automatic classification of ommatidia and compound eye regions. This phase converts raw input images into probability maps of the eye region and facet region. The ‘Complete segmentation phase’ involves: (C) automatic ommatidia segmentation by expanding a hexagonal grid to fit the facet probability map within the eye region and (D) manual correction with the built-in graphical user interface. The outputs include (E) the labelled images for the ommatidia facet and boundary regions and (F) the ommatidia count and the location of each individual ommatidium.


Segmentation on SEM images

To evaluate the efficiency of EyeHex segmentation, we compared the number of ommatidia generated by the EyeHex toolbox with manual counts from previously published SEM datasets (Ramaekers et al., 2019) for three D. melanogaster strains: Hikone-AS (Fig. S1), Canton-S BH (Fig. S2), and DGRP-208 (Fig. S3). For training datasets, we manually segmented approximately 80 ommatidia (∼10% of the total per eye) in a single SEM image. Subsequent steps included classification, automatic segmentation, manual corrections, and counting (Table 1; Table S2). Compared to manual counts, EyeHex ommatidia numbers deviated by 0 to a maximum of 8 ommatidia per eye, achieving an average accuracy exceeding 99.6% (Table 1; Table S2). We further applied EyeHex to SEM images of D. pseudoobscura, a different Drosophila species with significantly larger compound eyes (∼1200 ommatidia per eye versus ∼800 in D. melanogaster), and compared the EyeHex results to published manual counts (Ramaekers et al., 2019). For this species, EyeHex also yielded highly accurate counts, with differences ranging from 0 to 3 ommatidia per eye (99.9% accuracy; Table 1; Fig. S4, Table S2). These results demonstrate our tool's applicability beyond D. melanogaster to other Drosophila species.

Table 1.

Ommatidia counts on SEM and brightfield images

Strain | Image type | Image collection | n | EyeHex pre-correction | EyeHex post-correction | Manual | Ellipse
Hikone-AS | SEM | Ramaekers et al., 2019 |  | 722.7±26.2 | 741.2±21.8 | 739.7±21.8* | 764.5±26.7
Canton-S BH | SEM | Ramaekers et al., 2019 |  | 948.3±31.4 | 857.8±30.1 | 859.3±27.4* | 888.2±33.1
DGRP-208 | SEM | Ramaekers et al., 2019 |  | 859.0±82.1 | 754.5±44.1 | 754.6±43.4* | 797.8±48.1
D. pseudoobscura Catalina Island | SEM | Ramaekers et al., 2019 |  | 1192.8±14.6 | 1176.2±22.2 | 1175.0±22.1* | 1216.9±22.0
Hikone-AS | Brightfield | This study | 12 | 787.6±18.2 | 752.7±15.6 | – | 778.7±18.3
WTJ2 (white eyes) | Brightfield | Ramaekers et al., 2019 |  | 870.3±73.3 | 789.9±27.2 | – | 819.6±28.7*

Ommatidia counts (mean±s.d.) for SEM or brightfield images. The images were either reused from a previous study (Ramaekers et al., 2019) or newly collected. We compare ommatidia counts obtained with EyeHex segmentation before and after manual correction with fully manual counts (‘Manual’) or elliptic estimations (‘Ellipse’). All strains are D. melanogaster, except where otherwise indicated. n: sample size; *: counts from Ramaekers et al., 2019.

Segmentation on brightfield images

Sample preparation and image acquisition for SEM are relatively time-consuming and expensive. To reduce time and cost, we applied EyeHex to multifocal brightfield image stacks taken with a macroscope on non-fixed samples. However, manual counts on such images are prone to errors, due to navigation between focal planes when performed on multifocal stacks, or due to blurred edges on focused 2D images. Therefore, we compared EyeHex-derived ommatidia counts from brightfield images with established manual counts from SEM images of the same genotype. We generated training datasets by manually segmenting 150 ommatidia from two newly acquired Hikone-AS brightfield images. We then used EyeHex for classification, segmentation and counting (Fig. 3A,B; Table 1; Fig. S5 and Table S2). We found that EyeHex counts closely matched previous manual counts on SEM images of Hikone-AS eyes (Table 1), suggesting that brightfield imaging combined with EyeHex provides a reliable representation of biological samples. Of note, the brightfield and SEM images were acquired in different laboratories 5 years apart, raising the possibility that the size of Hikone-AS eyes in our current strain differs from that in our previous study (Ramaekers et al., 2019) due to genetic drift or slightly different culture conditions. To address this concern, we applied an additional method that estimates the number of ommatidia assuming a perfect ellipse shape (see Ramaekers et al., 2019 for the method). Although this method systematically overestimates ommatidia counts by 20 to 40 on average, it yielded consistent statistics for both the SEM and brightfield datasets, suggesting stable eye size in the Hikone-AS strain between the two studies (Ramaekers et al., 2019 and Table 1).

Fig. 3.

EyeHex output: quantitative analysis of compound eye features. (A) Focused brightfield image of a Hikone-AS eye, with the overlaid hexagonal grid axes. (B) Detected ommatidia centers, sorted into columns (separated by color). Ommatidia in the most posterior (rightmost) column are highlighted in bold. (C) Contour map of ommatidia altitude (in μm, calculated relative to the lowest focal plane). (D) Contour map of the average spacing between adjacent ommatidia (in μm) projected onto the eye in panel A. Black crosses indicate the locations of greatest inter-ommatidial distance (largest ommatidia) for individual Hikone-AS eyes; data from n=12 eyes. (E) Number of ommatidia in each column from posterior (P) to anterior (A) (line and shading represent the mean and standard deviation, respectively, n=12). (F) Comparison of mean ommatidia count profiles between left (blue curve, n=8) eyes and right (red curve, n=4) eyes. Scale bars in C and D: 200 μm. Eye orientation is shown in panel A.


We also tested EyeHex on white mutant eyes (strain WTJ2, Fig. S6), as white is one of the most commonly used genetic markers in fruit flies. These eyes lack red pigmentation and exhibit reduced contrast. More manual correction was required than for the other datasets, up to 14% of the total ommatidia (Table 1; Table S2), primarily to remove falsely detected ommatidia at the eye boundary caused by overestimation of the eye region during the classification step. Within the eye, however, ommatidia segmentation performed as well as on the other datasets, confirming EyeHex's adaptability to low-contrast samples.

Ommatidia size profile and ommatidia column count

Insect compound eyes often exhibit local specializations – named acute or bright zones – that enhance acuity or signal-to-noise ratio in specific regions of the visual field (Horridge, 1978; Land, 1997). The presence and location of these acute/bright zones correlate with distinct visually guided behaviors. For example, upward-pointing zones are frequently associated with predatory behavior, whereas male-specific forward-pointing zones are linked to sexual pursuit. In many insect species, including flies, these zones are characterized by enlarged ommatidia. In addition to quantifying ommatidia number, EyeHex can automatically detect such regions of enlarged ommatidia by analyzing ommatidia size variation across the eye. By analyzing brightfield multifocus image stacks, EyeHex can reconstruct the three-dimensional (3D) position of each ommatidium (Fig. 3C). This, in turn, enables the calculation of the average distance in 3D space between each ommatidium and its six adjacent neighbors (see Materials and Methods). Because the lenses of adjacent ommatidia are closely juxtaposed, this measure provides a proxy for ommatidial diameter across the entire eye.

In all eyes analyzed, ommatidial density and diameter varied by up to 20% along the posterior–dorsal to anterior–ventral axis of the eye, consistent with observations by Gaspar et al. (2020) and Buffry et al. (2024) (Fig. 3D; Fig. S7). Projecting density maps from multiple specimens onto a common reference revealed a consistent zone of enlarged ommatidia in the anterior–ventral part of the eye (black crosses in Fig. 3D; Fig. S7). In Drosophila, acute zones have been documented in males, where they occupy an anterior–dorsal position. These zones are reportedly absent in females, though the existence and location of analogous structures in female eyes remain unclear. Since our study focused on female eyes, further investigation is required to determine whether the observed anterior–ventral ommatidial enlargement represents an acute zone in this sex. Notably, at the extreme edge of the eye, ommatidia diameters measured by EyeHex were unexpectedly small (<12 µm). In these regions, some ommatidia adopt an elongated, non-hexagonal shape, and the diameters reported by EyeHex reflect their reduced width. However, identifying the xy- and z-coordinates of highly slanted ommatidia can be challenging, making the determination of inter-ommatidial spacing at the very edge of the eye prone to errors. Thus, such small diameters may not always fully represent true morphological dimensions.

In the compound eye, ommatidia are arranged in regularly spaced columns along the anterior–posterior (A–P) axis of the eye (Fig. 3B). This organization reflects the progressive determination and spacing of ommatidia founder cells (termed R8 cells) along the A–P axis of the eye primordium during the progression of a differentiation wave named the ‘morphogenetic furrow’ (Ready et al., 1976; Wolff and Ready, 1993) (for a recent review, see Warren and Kumar, 2023). Thus, quantifying the number of A–P columns in adult eyes provides a readout of the R8 determination process and of its robustness. Here, we used EyeHex to extract this information from a set of brightfield eye images from 12 individuals of the Hikone-AS wild-type genetic background. Each eye consisted of 33.89±1.05 columns (mean±s.d.). We also extracted ‘column profiles’ by plotting the number of ommatidia per column along the A–P axis of the eye (Fig. 3B), thereby providing a simple representation of the eye geometry. We then aligned the 12 column profiles for comparison. The mean aligned column profile from the 12 brightfield eye images is shown in Fig. 3E. Interestingly, the standard deviation across individuals (shaded area of the plot, Fig. 3E) is on average ∼1.08 ommatidia per column, with a maximum of 3.73. Therefore, highly inbred Hikone-AS laboratory individuals show very little variation in eye organization and size, indicating that despite its complexity, compound eye development is remarkably robust. Comparison of the mean aligned column profiles between left and right eyes from separate individuals also did not reveal any significant bias (an average difference of 0.45 ommatidia per column) (Fig. 3F).

Comparison of ommatidia segmentation using EyeHex and Ommatidia Detection Algorithm (ODA)

The segmentation method developed by Currea et al. (2023) is applicable to various image types; it exploits periodic ommatidia patterns via filtering in the frequency domain (Fourier transforms) and fits hexagonal/orthogonal grids with a fixed ommatidial spacing to overlapping image regions. While efficient, this semi-global approach may, we predicted, struggle with the variation in ommatidial spacing caused by eye curvature and be affected by small irregularities in eye organization. By contrast, EyeHex employs an adaptive, local strategy: ommatidia are identified sequentially and integrated into the growing eye grid based on their three nearest neighbors, accommodating local variation in ommatidial density, including at the eye edges.

To test these predictions, we applied the two methods to shared datasets consisting of SEM and brightfield images of Hikone-AS eyes, presenting the typical challenges of ommatidia segmentation from 2D images: varying degrees of ocular hair coverage, variable imaging angles, and imperfect focus at the eye edges (in brightfield images) (Fig. S8). For both image types, ODA's average ommatidia counts were significantly closer to the reference counts than those of EyeHex, which systematically overestimated ommatidia numbers (Table S1; Table S3, Figs S9-S10). However, closer inspection revealed that ODA's apparent accuracy stemmed from over-detections and under-detections that compensated for one another. Likely due to its Fourier transform-based approach, which assumes constant ommatidia spacing across the eye, ODA underestimated ommatidia density at the edges of the eyes. Occasionally, it produced false positives, incorrectly detecting ommatidia both inside and outside the eye area. In contrast, EyeHex produced no false positives within the eye region. As the EyeHex segmentation method adaptively adds individual ommatidia to an expanding hexagonal grid, it correctly predicted the increased ommatidia density at the eye edges, rarely missing ommatidia. Both methods generated over-detections of ommatidia outside the eye area; however, because EyeHex predicts a higher ommatidia density at the eye edge than ODA, its overall overestimation was also more pronounced. Beyond ommatidia counts, and in contrast to EyeHex, the ommatidia locations detected by ODA deviated from the ommatidia centers in both SEM and brightfield images, with the degree of offset varying across eye regions, potentially distorting ommatidia spacing profiles. Thus, as ODA produces more heterogeneous errors, it may necessitate more laborious user inspection and manual correction prior to morphological analysis. Of note, the EyeHex toolbox further streamlines post-segmentation refinement with an integrated manual correction module offering ‘quality of life’ features, including saving/loading correction progress and zoom-and-drag navigation to inspect different eye regions – functionalities currently absent in ODA.

The EyeHex toolbox automates the segmentation and 2D mapping of ommatidia in Drosophila compound eyes. Coupled with a dedicated analysis pipeline, it quantifies metrics of eye organization, including total ommatidia number, ommatidia count profiles along the A–P axis, and contour maps of ommatidial diameters. Using a machine learning-based approach, EyeHex adapts to diverse image types, with performance improving as training datasets expand. EyeHex employs a local, adaptive segmentation strategy in which ommatidia are identified sequentially and incorporated into a growing eye grid based on their three nearest neighbors. Thanks to this approach, EyeHex accommodates local variation in ommatidial density, particularly at the eye edges where the eye curvature reduces apparent ommatidial spacing. In contrast, current tools for automated ommatidial segmentation typically employ global approaches with fixed ommatidial spacing. Reis et al. (2020) used brightfield images to segment ommatidia reflections as discrete cells separated by a predefined minimum distance, which may produce false negatives in the case of damaged or non-reflective ommatidia. More recent is the Ommatidia Detection Algorithm (ODA; Currea et al., 2023), a sophisticated method that identifies periodic ommatidia patterns through frequency domain filtering (Fourier transforms) to define fixed ommatidial spacing parameters. Our comparison between ODA and EyeHex revealed that their performance reflects their fundamental methodological differences (global versus local strategies). While ODA can be readily employed without the need for training data, EyeHex proved more robust in handling local variation in ommatidial spacing and appearance, particularly in regions of high curvature when processing 2D images.

EyeHex is effective for quantifying eye morphology in non-melanogaster Drosophila species, which share conserved eye organization, as shown in this study for large-eyed D. pseudoobscura. It is also adaptable to images exhibiting lower contrast, such as white mutant eyes. However, mutant eyes with severe structural abnormalities may not be accurately segmented by EyeHex. Similarly, highly bulged eyes (obscuring edges in brightfield image stacks) may require alternative imaging techniques such as µCT scans (Gaspar et al., 2020) or indirect imaging of eye molds flattened by incision (Ramirez-Esquivel et al., 2017).

Manual counts of ommatidia from SEM images remain the gold standard for ommatidia enumeration. EyeHex achieves >99.6% concordance with this reference after minimal manual corrections. When applied to macroscope brightfield images of unfixed samples, EyeHex maintains high accuracy while streamlining sample preparation and processing. However, fully automated segmentation methods cannot achieve 100% accuracy, and the user-guided inspection and manual correction included in the EyeHex toolbox remain essential. After manual correction, the deviation of EyeHex from manual ommatidia counts (based on SEM images) is, on average, 0.3% (equivalent to 2.17 ommatidia per eye). For comparison, the documented natural variation in ommatidia number in drosophilids is at least an order of magnitude larger: from 3 to 15% between populations of the same species or between sexes (data from Ramaekers et al., 2019 and Posnien et al., 2012). Similarly, a small-effect, engineered single-nucleotide substitution in an enhancer of the eye selector gene eyeless resulted in a 3.25% increase in eye size (corresponding to 23.2 ommatidia per eye). These comparisons demonstrate that the error of EyeHex is well below the range of natural and engineered biological variation, supporting its reliability as a tool for quantifying ommatidia number in Drosophila.

Compound eyes, though widespread across the animal kingdom, exhibit lower acuity than other eye types, such as the vertebrate camera eyes (Kirschfeld, 1976; Land, 1997; Mallock, 1894). To achieve human-level resolution, a compound eye would have to comprise millions of ommatidia and reach a radius of 6 meters. Many species compensate via specialized zones (‘acute’ or ‘bright’ zones), often characterized by enlarged ommatidia, which enhance acuity for specific parts of the visual field (Land, 1997). Despite the adaptive relevance of this trait, most Drosophila studies report only average or local measures of ommatidial diameter (Gaspar et al., 2020; Keesey et al., 2019; Posnien et al., 2012; Ramaekers et al., 2019). Exceptions include a recent study applying a 3D version of the ODA segmentation tool (Currea et al., 2023) to synchrotron microtomography images (Buffry et al., 2024). Similarly, EyeHex derives 3D altitude data from brightfield multifocus stacks to map ommatidial diameter across the eye surface, enabling the identification of potential acute zones or subtle patterns of variation.

Variation in Drosophila compound eye size is a broadly used model in evolutionary developmental biology (Casares and McGregor, 2020). Despite well-documented eye size variation, technical limitations have historically hampered large-scale studies, precluding the compound eye's use in addressing fundamental evo-devo questions. For instance, quantifying subtle morphological variation – a critical metric for assessing developmental precision, stochasticity, or robustness – demands expansive datasets. This bottleneck has relegated studies of phenomena like fluctuating asymmetry in Drosophila to simpler, more readily quantified traits (e.g. wing morphology and mechanosensory hair patterns; Debat et al., 2009; Polak and Starmer, 2001). The compound eye, with its extensively characterized development, offers an ideal system to explore these questions in a complex tissue, uniquely positioned to link developmental mechanisms with phenotypic outcomes. EyeHex bridges this methodological gap by enabling accurate segmentation and extraction of eye morphology statistics across imaging platforms, including accessible brightfield macroscopy. Our pipeline, from imaging unfixed heads to EyeHex analysis, offers a rapid, cost-effective solution for large-scale quantification of eye morphology.

Data acquisition

Drosophila strains and culture

Fly stocks were cultured on standard cornmeal food at 25°C under density-controlled conditions: batches of 10 females and 10 males were raised at 25°C and transferred into a fresh vial every day. For each vial, eye measurements were performed only on females eclosing during the first 2 days of eclosion. Genotypes used in this study were, for Drosophila melanogaster: Canton-S BH (Ramaekers et al., 2019); Hikone-AS (Kyoto DGRC 105668); DGRP-208 (WT25174; RRID: BDSC_24174); w;;;+/Df(4)J2 (abbreviated as WTJ2; Ramaekers et al., 2019); and for Drosophila pseudoobscura: Catalina Island, California, isofemale line (National Drosophila Species Stock Center, 14011-121.121).

Scanning electron microscopy (SEM) images

SEM images were reused from Ramaekers et al. (2019).

Light-microscopy images

Light-macroscopy images were either reused from Ramaekers et al. (2019) (WTJ2) or newly acquired (Hikone-AS). In the latter case, images were acquired from non-fixed, freshly cut adult heads glued laterally onto glass slides using a solvent-free all-material glue (UHU®, catalogue number 64481). 3D image stacks (Z step, 8 µm) were acquired at 400-600X magnification with a Keyence VHX 2000 digital macroscope using the VH-Z20R/W optical zoom lens. Single focused 2D images were generated using the Stack Focuser plugin in ImageJ (Schindelin et al., 2012).
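For readers who prefer a single environment, the focusing step can also be approximated directly in MATLAB. The following is a minimal sketch, not the published pipeline (which uses ImageJ's Stack Focuser): it selects, for each pixel, the focal plane with the highest local contrast and returns both the focused image and a per-pixel altitude map of the kind used later for 3D measurements. The function name and filter parameters are illustrative, and the Image Processing Toolbox is assumed.

```matlab
function [focused, zmap] = focusStack(stack, zstep)
% Focus-stack a grayscale image stack (H x W x Z) by picking, per pixel,
% the plane with the highest local contrast. zstep: focal spacing (um).
% A simple stand-in for ImageJ's Stack Focuser, not the actual plugin.
    stack = im2double(stack);
    [H, W, Z] = size(stack);
    sharpness = zeros(H, W, Z);
    for k = 1:Z
        plane = stack(:,:,k);
        % Local variance of intensity as a per-pixel sharpness measure
        sharpness(:,:,k) = imgaussfilt((plane - imgaussfilt(plane, 2)).^2, 4);
    end
    [~, best] = max(sharpness, [], 3);              % sharpest plane per pixel
    [cols, rows] = meshgrid(1:W, 1:H);
    focused = stack(sub2ind([H, W, Z], rows, cols, best));
    zmap = (best - 1) * zstep;                      % altitude above lowest plane
end
```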

Training and classification phase

Training data preparation

This step is required whenever a new image type (e.g. SEM, brightfield microscope) is introduced. A first set of 2D sample images is used as training data for the classification of the eye area (versus non-eye area) and of the ommatidia surfaces (versus ommatidia boundaries) (Fig. 2A). Using the Graphical User Interface (GUI) included in the EyeHex toolbox, users manually delineate the compound eye area and segment small groups of ommatidia at different locations of the eye (e.g. center versus edges), thus sampling different shapes, reflections and bristle coverage (Fig. 1A). These training data are then used by a classifier to convert each raw input image into probability maps of the eye area and ommatidia surfaces (Fig. 2B). Provided that the imaging settings are unperturbed, a single training image is sufficient to achieve accurate classification.

Automatic classification

Classification is performed using the Fiji Weka plug-in (Arganda-Carreras et al., 2017) through a provided macro. Training datasets are used to train two separate pixel-based Fast Random Forest classifiers, each containing 100 decision trees (Breiman, 2001). The trained classifiers then convert the focused 2D images into probability maps of the eye area and the ommatidia surfaces. The compound eye region is defined by thresholding the eye probability map with the Otsu algorithm (Otsu, 1979).
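For illustration, the thresholding step can be reproduced in MATLAB as follows. This is a sketch rather than EyeHex's exact code: the file name is hypothetical, and the clean-up steps (keeping the largest component, filling holes) are our assumptions.

```matlab
% Hypothetical file name; the actual probability map is produced by the
% Weka macro bundled with EyeHex.
Iprob_eye = im2double(imread('eye_probability_map.png'));
level   = graythresh(Iprob_eye);        % Otsu's threshold (Otsu, 1979)
eyeMask = imbinarize(Iprob_eye, level);
eyeMask = bwareafilt(eyeMask, 1);       % keep largest connected component (assumption)
eyeMask = imfill(eyeMask, 'holes');     % fill holes from bristles/reflections (assumption)
```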

Complete ommatidia segmentation phase

EyeHex automatic segmentation takes advantage of the remarkably robust hexagonal organization of the compound eye. During this step, ommatidia are automatically and comprehensively segmented by expanding a hexagonal grid to fit the ommatidia probability map within the eye area (Fig. 2C). Users can validate and manually correct the segmentation for each image using the provided GUI (Fig. 2D). The output of the segmentation consists of (1) a labeled image carrying information on the locations of ommatidia surfaces and boundaries, (2) the ommatidia count and (3) data on each ommatidium (2D position and coordinates on the hexagonal grid).

The ommatidia hexagonal grid

EyeHex maps each ommatidium to a unique coordinate on a hexagonal grid. Thus, each ommatidium is characterized by two sets of coordinates: an integer coordinate (xhex, yhex) on the hexagonal grid and a continuous Cartesian coordinate (x, y) on the 2D ommatidia probability map (Fig. 4A). Note that each ommatidium of the hexagonal grid shares edges with six neighbors, instead of four as in a normal orthogonal grid. The segmentation process includes two steps:

Fig. 4.

The ommatidia hexagonal grid. (A) The grid is seeded from three original ommatidia (purple). Each ommatidium is identified by a unique integer coordinate (xhex, yhex) on the hexagonal grid, alongside its continuous Cartesian coordinate (x2D, y2D) on the ommatidia probability map. Note that the hexagonal axes and the Cartesian axes are not necessarily parallel. (B) New ommatidia (red) are added individually to the grid by mirroring the seed ommatidia (purple): ommatidium 4 is spawned by mirroring ommatidium 1 through ommatidia 2 and 3; ommatidium 5 by mirroring ommatidium 2 through ommatidia 1 and 3; and ommatidium 6 by mirroring ommatidium 3 through ommatidia 1 and 2. (C) New ommatidia (red solid circles) are segmented and added to the outer row of the existing hexagonal grid seeded by the three original ommatidia (purple), shown here on top of the ommatidia probability map Iprob generated during the classification phase.


(Step 1) setting the origin and orientation of the hexagonal grid:

The origin and orientation of the grid are established based on three original, mutually adjacent ommatidia (purple cells in Fig. 4A; with coordinates (xhex, yhex) equal to (0,0), (1,0) and (0,1)), selected by the user at the center of the compound eye.

(Step 2) expansion of the hexagonal grid:

The hexagonal grid is expanded by adding individual ommatidia to the current grid. For each set of three adjacent ommatidia (referred to as omtd1, omtd2, omtd3; purple cells in Fig. 4B), we have their respective coordinates on the hexagonal grid, $(x_{\mathrm{hex},1}, y_{\mathrm{hex},1})$, $(x_{\mathrm{hex},2}, y_{\mathrm{hex},2})$, $(x_{\mathrm{hex},3}, y_{\mathrm{hex},3})$, and on the Cartesian grid, $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$. These coordinates are used to predict the position of a fourth ommatidium, omtd4, which mirrors omtd1 through omtd2 and omtd3 (Fig. 4B), via Eqns 1-4:

$$x_4 = x_2 + x_3 - x_1 \tag{1}$$

$$y_4 = y_2 + y_3 - y_1 \tag{2}$$

$$x_{\mathrm{hex},4} = x_{\mathrm{hex},2} + x_{\mathrm{hex},3} - x_{\mathrm{hex},1} \tag{3}$$

$$y_{\mathrm{hex},4} = y_{\mathrm{hex},2} + y_{\mathrm{hex},3} - y_{\mathrm{hex},1} \tag{4}$$

Based on the ommatidia probability map, the program finds an ommatidium within the vicinity of the mirrored position $(x_4, y_4)$ and adds omtd4 to the hexagonal grid at the coordinates $(x_{\mathrm{hex},4}, y_{\mathrm{hex},4})$ (see below). Only the closest ommatidium within a distance of $L_{\mathrm{grid}}/2$ from the predicted position $(x_4, y_4)$ is added to the grid. Here, $L_{\mathrm{grid}}$ is the average length of the hexagonal ommatidial edges, i.e. the mean center-to-center distance over the set $E$ of adjacent ommatidium pairs already registered on the grid:

$$L_{\mathrm{grid}} = \frac{1}{|E|} \sum_{(i,j) \in E} \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \tag{5}$$
During the automatic hexagonal expansion (Fig. 4C), the new ommatidium close to the mirrored position that best fits the ommatidia facet probability map (see below) is always added to the perimeter of the grid, unless it falls outside the compound eye region (Fig. 4C).
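A minimal MATLAB sketch of this expansion step is given below; the registered ommatidia, the candidate detections and Lgrid are illustrative values made up for the example, not data from the toolbox.

```matlab
% Illustrative inputs: three adjacent registered ommatidia and a list of
% candidate centers detected on the probability map (all values made up).
p1 = [100 100];  p2 = [118 100];  p3 = [109 115];   % Cartesian centers (px)
h1 = [0 0];      h2 = [1 0];      h3 = [0 1];       % hexagonal coordinates
Lgrid = 18;                                         % mean ommatidial spacing (px)
candidates = [126 116; 300 300];                    % detected centers (px)

% Mirror prediction (Eqns 1-4): omtd4 mirrors omtd1 through omtd2 and omtd3
p4_pred = p2 + p3 - p1;                             % predicted Cartesian position
h4      = h2 + h3 - h1;                             % hexagonal coordinates of omtd4

% Accept the closest candidate within Lgrid/2 of the prediction (Eqn 5 check)
d = vecnorm(candidates - p4_pred, 2, 2);
[dmin, k] = min(d);
if dmin < Lgrid/2
    p4 = candidates(k, :);    % register omtd4 at hexagonal coordinates h4
end
```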

Fitting new ommatidia to the ommatidia probability map

From the three adjacent ommatidia (omtd1, omtd2, omtd3), we look for the position $(x_4, y_4)$ of omtd4 based on the facet probability map [denoted Iprob(x, y)] generated with the trained machine learning model. The general strategy is to ‘find the Mickey pattern’, with omtd2 and omtd3 being the ‘ears’ and omtd4 the ‘face’. A template image Imouse of size m×m (m in pixels) is constructed with three filled identical circles of radius 0.225×m, the centers of which form an equilateral triangle with edge length 0.5×m (Fig. 5A). The pattern is then smoothed with a 2D Gaussian filter with σ=m/20. We also set an ‘ignore’ region (with NaN pixel values, dashed blue in Fig. 5A) on Imouse to avoid interference from surrounding ommatidia.

Fig. 5.

‘Mickey patterns’ on the facet probability map. (A) The coordinate vectors (left) for the mouse image Imouse (right). The objective function does not consider the blue dashed regions (NaN values). (B) Example of the new coordinate vectors B=(va, vb) from the positions of ommatidium 2, ommatidium 3 and ommatidium 4 (left), with the transformed Imouse in this coordinate system (right).

We call $\mathbf{c}_2$, $\mathbf{c}_3$ and $\mathbf{c}_4$ the coordinates of the centers of the three circles in the template image Imouse, and define the template coordinate vectors $A = (\mathbf{u}_a, \mathbf{u}_b)$, with $\mathbf{u}_a = \mathbf{c}_3 - \mathbf{c}_2$ and $\mathbf{u}_b = \mathbf{c}_4 - \mathbf{c}_2$. We find the linear transformation $T$ from the coordinate vectors $A$ to the coordinate vectors $B = (\mathbf{v}_a, \mathbf{v}_b)$, in which:

$$\mathbf{v}_a = (x_3 - x_2,\, y_3 - y_2) \tag{6}$$

$$\mathbf{v}_b = (x_4 - x_2,\, y_4 - y_2) \tag{7}$$

For each non-NaN pixel at position $(i, j)$ in Imouse, we use $T$ to find the corresponding coordinate on Iprob (Fig. 5B). The objective function for the fit of the new omtd4 position, $\Delta(x_4, y_4)$, is given by the squared error between the facet probability map and the transformed mouse image:

$$\Delta(x_4, y_4) = \sum_{(i,j)} \left[\, I_{\mathrm{prob}}\big(T(i, j)\big) - \theta \, I_{\mathrm{mouse}}(i, j) \,\right]^2 \tag{8}$$

with θ being a free parameter to adjust for pixel brightness.

Due to frequent irregularities in the probability map, we introduce additional constraints to preserve the integrity of the hexagonal grid and account for the fact that facet orientation changes gradually along the grid: the objective function Δ(x4, y4) is set to infinity when the new omtd4 position is farther than 0.3×Lgrid from the predicted position mirroring omtd1, and when the omtd4 top-view angle is too steep (<20°) or too different from that of omtd1.
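For concreteness, the ‘Mickey’ template described above can be constructed in MATLAB as follows. This is a sketch under stated assumptions: the template size, the exact circle layout and the width of the NaN margin are ours, not the toolbox's exact values.

```matlab
% Build an m x m 'Mickey' template: three filled circles of radius 0.225*m
% whose centers form an equilateral triangle with edge 0.5*m, smoothed with
% a Gaussian of sigma = m/20. Pixels outside a margin around the circles are
% set to NaN so the objective function ignores them.
m = 60;
[X, Y] = meshgrid(1:m, 1:m);
c = [0.25 0.30; 0.75 0.30; 0.50 0.30+sqrt(3)/4] * m;  % ear, ear, face centers
r = 0.225 * m;
Imouse = zeros(m);
keep = false(m);
for i = 1:3
    disc = (X - c(i,1)).^2 + (Y - c(i,2)).^2;
    Imouse(disc <= r^2) = 1;                 % filled circle
    keep = keep | (disc <= (1.3 * r)^2);     % region considered by the fit
end
Imouse = imgaussfilt(Imouse, m / 20);        % smooth the pattern
Imouse(~keep) = NaN;                         % 'ignore' region (dashed blue, Fig. 5A)
```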

Manual correction

Following the automatic hexagonal grid expansion, the user can manually remove or add ommatidia using the provided GUI. After manual correction, newly added ommatidia are mapped and automatically given their own coordinate in the existing hexagonal grid.

Data analyses

Evaluating ommatidia count accuracy from EyeHex

The gold standard for quantifying ommatidia numbers relies on manual counts performed on 2D compound eye images acquired via scanning electron microscopy (SEM). Thus, we directly compared the number of ommatidia identified by the EyeHex toolbox, both before and after manual correction, with the manual count for each SEM image from previously published datasets (Ramaekers et al., 2019) (Table 1; Figs S8-S10 and Table S1).

For brightfield images, as manual counting is not possible in eye regions with highly slanted imaging angles (Fig. 1A), we compared the statistics of ommatidia counts (mean and standard deviation) obtained from EyeHex with those of manual counts from the SEM dataset of the same strain (Hikone-AS) (see Results section for limitations).

Measurement of ommatidia size

When multifocal brightfield images are used, the z-coordinate of each ommatidium can be determined from its position in the original image stack. We use this information to estimate the size of an ommatidium by averaging the Euclidean distances in 3D space between its center and those of its adjacent ommatidia. These values are fitted to a polynomial surface to generate smooth profiles of ommatidia altitude and spacing across the eye, as shown in Fig. 3C and D.
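A minimal sketch of this measurement in MATLAB is given below, assuming P (n×3 centers, [x y z] in µm) and H (n×2 hexagonal coordinates) have been loaded from EyeHex's .csv output. The axial neighbor convention and the polynomial degree are our assumptions, and the surface fit requires the Curve Fitting Toolbox.

```matlab
function [spacing, sf] = ommatidiaSpacing(P, H)
% P: n x 3 ommatidium centers [x y z] (um), from the multifocal stack;
% H: n x 2 hexagonal coordinates [xhex yhex]. Returns the mean 3D distance
% of each ommatidium to its hexagonal neighbors (a proxy for its diameter)
% and a smooth polynomial surface fitted over the eye.
    offsets = [1 0; -1 0; 0 1; 0 -1; 1 -1; -1 1];   % six neighbors (assumed axial convention)
    n = size(P, 1);
    spacing = nan(n, 1);
    for i = 1:n
        d = [];
        for o = offsets'
            j = find(H(:,1) == H(i,1) + o(1) & H(:,2) == H(i,2) + o(2), 1);
            if ~isempty(j)
                d(end+1) = norm(P(i,:) - P(j,:)); %#ok<AGROW>  % 3D Euclidean distance
            end
        end
        spacing(i) = mean(d);                       % NaN if no registered neighbor
    end
    % Smooth profile across the eye (Curve Fitting Toolbox; degree assumed)
    ok = ~isnan(spacing);
    sf = fit([P(ok,1), P(ok,2)], spacing(ok), 'poly33');
end
```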

Ommatidia column identification and column profiles

The ommatidia are arranged in columns that reflect the progressive specification of R8 ommatidia founder cells along the anterior–posterior axis of the eye primordium (eye-antennal disc) (Ready et al., 1976). To locate these columns, the origin of the ommatidia hexagonal grid (xhex=0, yhex=0) is shifted to the detected center-most ommatidium. The grid is then reoriented so that both the x- and y-coordinates of the most anterior ommatidium in the grid take positive values (Fig. 3A). The sum of the new xy-coordinates of an ommatidium (xhex+yhex) defines its column index, which corresponds to its birth order during R8 specification. This column index increases from the posterior–dorsal to the anterior–ventral region of the eye (example in Fig. 3D). From the reoriented grid, we calculate the ‘column profile’ as the number of ommatidia per column, which provides information about the eye geometry (example in Fig. 3E).
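In code, the column profile reduces to a few lines; a sketch assuming H holds the re-centered, reoriented hexagonal coordinates:

```matlab
% H: n x 2 hexagonal coordinates after re-centering/reorienting the grid.
col = H(:,1) + H(:,2);                 % column index = xhex + yhex
col = col - min(col) + 1;              % make indices start at 1
profile = accumarray(col, 1);          % ommatidia per column, posterior -> anterior
nColumns = numel(profile);             % e.g. ~34 columns for Hikone-AS eyes
plot(profile);
xlabel('Column index (P \rightarrow A)'); ylabel('Ommatidia per column');
```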

Averaging of morphological profiles

We accumulate measurements of individual ommatidia from different eyes based on their unique coordinates in the hexagonal grid (xhex, yhex). This allows us to calculate the average ommatidia spacing in different regions of the grid. For visualization, we project this average spacing profile onto a reference eye, for which the Cartesian coordinate of each segmented ommatidium is known (example in Fig. 3D).
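This hex-coordinate accumulation can be sketched as follows; the variable names Heyes/Seyes are illustrative, not part of the toolbox's interface.

```matlab
% Heyes, Seyes: cell arrays with one entry per eye; Heyes{k} is n_k x 2
% hexagonal coordinates, Seyes{k} the matching n_k x 1 spacing values.
allH = vertcat(Heyes{:});
allS = vertcat(Seyes{:});
[uniqH, ~, g] = unique(allH, 'rows');       % group identical grid coordinates
meanS = accumarray(g, allS, [], @mean);     % mean spacing per hexagonal cell
% meanS(k) is the average spacing at coordinate uniqH(k,:); it can then be
% projected onto a reference eye for display (cf. Fig. 3D).
```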

Outputs

After manual correction of segmentation and analysis of each eye, EyeHex automatically exports the label image of the identified ommatidia and eye region. These label images can be added to the training data to improve subsequent classifications. Also exported is a .csv file containing information about each ommatidium's index, its Cartesian and hexagonal coordinates, and its birth order during R8 specification.

Comparison of ommatidia segmentation with Ommatidia Detection Algorithm (ODA)

Manual segmentation of compound eye area

Since ODA lacks a built-in function for compound eye area segmentation, we first manually defined the contours of the compound eye. Each image was cropped to the contour of the compound eye using Fiji (Schindelin et al., 2012). To simulate imperfect eye masks, we intentionally included small marginal areas outside the true eye boundaries (examples in Fig. S8). We then proceeded to automatic ommatidia segmentation using EyeHex and ODA on this common set of manually preprocessed eye images.

EyeHex ommatidia segmentation

The EyeHex ommatidia classifier was trained with a dataset comprising 150 manually segmented ommatidia derived from two images of the corresponding image type (either SEM or brightfield). Automatic segmentation was then performed as described above.

ODA ommatidia segmentation

We used ODA (version available on GitHub as of March 2025), following the authors’ guidelines available at https://github.com/jpcurrea/ODA. We chose the parameter ‘bright_peak=False’ for SEM images, and ‘bright_peak=True’ for brightfield images, as they visibly produced the most accurate segmentation results.

The authors want to thank Virginie Courtier (CNRS, Institut Jacques Monod, Paris) for providing access to the Keyence macroscope. Stocks obtained from the Bloomington Drosophila Stock Center (NIH P40OD018537), the Kyoto Drosophila Stock Center and the National Drosophila Species Stock Center were used in this study.

Author contributions

Conceptualization: H.T., A.R.; Formal analysis: H.T.; Funding acquisition: H.T., N.D., A.R.; Investigation: A.R.; Methodology: H.T., A.R.; Resources: N.D., A.R.; Software: H.T.; Validation: H.T., A.R.; Visualization: H.T., A.R.; Writing – original draft: H.T., A.R.; Writing – review & editing: H.T., N.D., A.R.

Funding

H.T. is supported by Research Council of Finland (354987), Tampere University's Health Data Science Profile. This work was also supported by a grant from Institut Curie (“FlyTime”) and an Emergence grant from Sorbonne Université to A.R. Open Access funding provided by Sorbonne Université. Deposited in PMC for immediate release.

Data and resource availability

EyeHex toolbox: code and user manual are available for download from https://github.com/huytran216/EyeHex-toolbox. SEM and brightfield image collection: 10.5281/zenodo.15112004. Examples of segmentation results and analyses, ommatidia counts from EyeHex on our image collection, and comparisons of ommatidia counts between the Ommatidia Detection Algorithm (ODA) and EyeHex are available in the Supplementary Materials.

References

Arganda-Carreras, I., Kaynig, V., Rueden, C., Eliceiri, K. W., Schindelin, J., Cardona, A. and Sebastian Seung, H. (2017). Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424-2426.

Arif, S., Hilbrant, M., Hopfen, C., Almudi, I., Nunes, M. D. S., Posnien, N., Kuncheria, L., Tanaka, K., Mitteroecker, P., Schlötterer, C. et al. (2013). Genetic and developmental analysis of differences in eye and face morphology between Drosophila simulans and Drosophila mauritiana. Evol. Dev. 15, 257-267.

Breiman, L. (2001). Random forests. Mach. Learn. 45, 5-32.

Buffry, A. D., Currea, J. P., Franke-Gerth, F. A., Palavalli-Nettimi, R., Bodey, A. J., Rau, C., Samadi, N., Gstöhl, S. J., Schlepütz, C. M., McGregor, A. P. et al. (2024). Evolution of compound eye morphology underlies differences in vision between closely related Drosophila species. BMC Biol. 22, 67.

Casares, F. and McGregor, A. P. (2020). The evolution and development of eye size in flies. Wiley Interdiscipl. Rev. Dev. Biol. 10, e380.

Currea, J. P., Smith, J. L. and Theobald, J. C. (2018). Small fruit flies sacrifice temporal acuity to maintain contrast sensitivity. Vision Res. 149, 1-8.

Currea, J. P., Sondhi, Y., Kawahara, A. Y. and Theobald, J. (2023). Measuring compound eye optics with microscope and microCT images. Commun. Biol. 6, 246.

Debat, V., Debelle, A. and Dworkin, I. (2009). Plasticity, canalization, and developmental stability of the Drosophila wing: joint effects of mutations and developmental temperature. Evolution 63, 2864-2876.

Gaspar, P., Almudi, I., Nunes, M. D. S. and McGregor, A. P. (2019). Human eye conditions: insights from the fly eye. Hum. Genet. 138, 973-991.

Gaspar, P., Arif, S., Sumner-Rooney, L., Kittelmann, M., Bodey, A. J., Stern, D. L., Nunes, M. D. S. and McGregor, A. P. (2020). Characterization of the genetic architecture underlying eye size variation within Drosophila melanogaster and Drosophila simulans. G3 (Bethesda) 10, 1005-1018.

Harsh, S., Fu, Y., Kenney, E., Han, Z. and Eleftherianos, I. (2020). Zika virus non-structural protein NS4A restricts eye growth in Drosophila through regulation of JAK/STAT signaling. Dis. Model. Mech. 13, dmm040816.

Horridge, G. A. (1978). The separation of visual axes in apposition compound eyes. Philos. Trans. R. Soc. Lond. B Biol. Sci. 285, 1-59.

Keesey, I. W., Grabe, V., Gruber, L., Koerte, S., Obiero, G. F., Bolton, G., Khallaf, M. A., Kunert, G., Lavista-Llanos, S., Valenzano, D. R. et al. (2019). Inverse resource allocation between vision and olfaction across the genus Drosophila. Nat. Commun. 10, 1162.

Kirschfeld, K. (1976). The resolution of lens and compound eyes. In Neural Principles in Vision (ed. F. Zettler and R. Weiler), pp. 354-370. Berlin, Heidelberg: Springer.

Land, M. F. (1997). Visual acuity in insects. Annu. Rev. Entomol. 42, 147-177.

Land, M. F. (2005). The optical structures of animal eyes. Curr. Biol. 15, R319-R323.

Land, M. F. and Nilsson, D.-E. (2012). Animal Eyes. Oxford: Oxford University Press.

Luo, X.-Z., Wipfler, B., Ribera, I., Liang, H.-B., Tian, M.-Y., Ge, S.-Q. and Beutel, R. G. (2018). The cephalic morphology of free-living and cave-dwelling species of trechine ground beetles from China (Coleoptera, Carabidae). Org. Divers. Evol. 18, 125-142.

Luo, X. Z., Antunes-Carvalho, C., Wipfler, B., Ribera, I. and Beutel, R. G. (2019). The cephalic morphology of the troglobiontic cholevine species Troglocharinus ferreri (Coleoptera, Leiodidae). J. Morphol. 280, 1207-1221.

Maccallini, P., Bavasso, F., Scatolini, L., Bucciarelli, E., Noviello, G., Lisi, V., Palumbo, V., D'Angeli, S., Cacchione, S., Cenci, G. et al. (2020). Intimate functional interactions between TGS1 and the Smn complex revealed by an analysis of the Drosophila eye development. PLoS Genet. 16, e1008815.

Mallock, A. (1894). I. Insect sight and the defining power of composite eyes. Proc. R. Soc. Lond. 55, 85-90.

Nériec, N. and Desplan, C. (2016). From the eye to the brain: development of the Drosophila visual system. Curr. Top. Dev. Biol. 116, 247-271.

Norry, F. M. and Gomez, F. H. (2017). Quantitative trait loci and antagonistic associations for two developmentally related traits in the Drosophila head. J. Insect Sci. 17, 19.

Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62-66.

Polak, M. and Starmer, W. T. (2001). The quantitative genetics of fluctuating asymmetry. Evolution 55, 498-511.

Posnien, N., Hopfen, C., Hilbrant, M., Ramos-Womack, M., Murat, S., Schönauer, A., Herbert, S. L., Nunes, M. D. S., Arif, S., Breuker, C. J. et al. (2012). Evolution of eye morphology and Rhodopsin expression in the Drosophila melanogaster species subgroup. PLoS ONE 7, e37346.

Ramaekers, A., Claeys, A., Kapun, M., Mouchel-Vielh, E., Potier, D., Weinberger, S., Grillenzoni, N., Dardalhon-Cuménal, D., Yan, J., Wolf, R. et al. (2019). Altering the temporal regulation of one transcription factor drives evolutionary trade-offs between head sensory organs. Dev. Cell 50, 780-792.e7.

Ramirez-Esquivel, F., Ribi, W. A. and Narendra, A. (2017). Techniques for investigating the anatomy of the ant visual system. J. Vis. Exp. 129, 56339.

Ready, D. F., Hanson, T. E. and Benzer, S. (1976). Development of the Drosophila retina, a neurocrystalline lattice. Dev. Biol. 53, 217-240.

Reis, M., Wiegleb, G., Claude, J., Lata, R., Horchler, B., Ha, N.-T., Reimer, C., Vieira, C. P., Vieira, J. and Posnien, N. (2020). Multiple loci linked to inversions are associated with eye size variation in species of the Drosophila virilis phylad. Sci. Rep. 10, 12832.

Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S., Rueden, C., Saalfeld, S., Schmid, B. et al. (2012). Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676-682.

Warrant, E. J. (2008). Seeing in the dark: vision and visual behaviour in nocturnal bees and wasps. J. Exp. Biol. 211, 1737-1746.

Warrant, E. J. and Nilsson, D. E. (1998). Absorption of white light in photoreceptors. Vision Res. 38, 195-207.

Warren, J. and Kumar, J. P. (2023). Patterning of the Drosophila retina by the morphogenetic furrow. Front. Cell Dev. Biol. 11, 1151348.

Wolff, T. and Ready, D. F. (1993). Pattern formation in the Drosophila retina. In The Development of Drosophila melanogaster (ed. M. Bate and A. Martinez-Arias), pp. 1277-1325. Cold Spring Harbor Laboratory Press.

Competing interests

The authors declare no competing or financial interests.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

Supplementary information