D'Arcy Thompson was a true pioneer, applying mathematical concepts and analyses to the question of morphogenesis over 100 years ago. The centenary of his famous book, On Growth and Form, is therefore a great occasion on which to review the types of computer modeling now being pursued to understand the development of organs and organisms. Here, I present some of the latest modeling projects in the field, covering a wide range of developmental biology concepts, from molecular patterning to tissue morphogenesis. Rather than classifying them according to scientific question, or scale of problem, I focus instead on the different ways that modeling contributes to the scientific process and discuss the likely future of modeling in developmental biology.
Introduction
D'Arcy Thompson's combination of mathematics, physics and biology allowed him to articulate the parallels between forms seen in living organisms and shapes generated by non-living physical processes. He described the similarities between jellyfishes and drops of liquid, between shells and geometric series, and how the collection of tensile and compressive elements that make up the vertebrate skeleton parallels the tensile and compressive elements that comprise bridges (cables and girders, respectively). He illustrated his ideas with simple computations, but he was acutely aware of the limitations of what could be achieved in the early twentieth century, writing that: ʻThis book of mine has little need of preface, for indeed it is ‘all preface’ from beginning to end' [Thompson (1942), prefatory note]. Both the power of computers and the sophistication of biological data have changed beyond recognition during the intervening century, and it is thus an opportune moment to consider how the computational modeling of developmental biology is poised to influence the field over the next decade.
There are many different formalisms and types of computer modeling (from very detailed to very abstract, as briefly summarized in Boxes 1 and 2), but here I wish to draw attention to a different distinction. Science is essentially about the interplay between ideas and reality. Until recently, our ability to explore either side of the equation was limited. Mathematical and computer modeling could help explore non-trivial ideas, but the lack of high-quality data with which to constrain the models hampered our ability to distinguish plausible hypotheses from correct hypotheses. Indeed, for many decades modeling was rightly seen as theoretical biology, with the emphasis on ʻtheoretical'. Recently, however, our ability to probe and monitor the natural world has been revolutionized by a slew of new experimental technologies (high-resolution 3D time-lapse imaging, precise genetic manipulations, single-cell transcriptomics, optogenetics, to mention just a few). Rather than rendering theory less important, this flood of high-quality experimental data lends new urgency to the task of computational modeling. Although modeling alone cannot prove a hypothesis to be true (a point we will return to in the discussion), the dynamic complexity of real biological systems means that experimental approaches alone will also be insufficient to truly understand them.
At a particularly deep level, computer modeling may be considered to always serve the same basic purpose: to treat mechanistic ideas in a more rigorous manner than we are capable of with our own minds. Much as a microscope is a tool to help us see things that are too small for our own eyes, computer modeling is a tool that can help reveal the dynamics and predictions of an idea that is too complex for our own brains. Nevertheless, it can be useful and interesting to explore how the role of modeling differs for different projects. In this Review, I have chosen four informal qualitative categories within which to highlight some recently published modeling projects: exploration of concepts, hypothesis testing, reverse-engineering, and multi-scale integrative modeling. The key distinctions between these categories concern the interplay between theory and experiment: how well developed are the ideas in the field? Are there rough ideas which need exploring or concrete hypotheses that need challenging? Or perhaps no clear hypothesis exists at all, and the modeling seeks to find one. This is not an in-depth discussion on the roles of modeling in biology – a large topic on which some excellent articles have been written (Gunawardena, 2014; Brodland, 2015) – nor should these categories be considered definitive or rigorously mutually exclusive; in some studies, modeling takes on multiple roles (and could be assigned to more than one of these categories). Instead, they represent a necessarily subjective, but I think meaningful, way to distinguish the primary role of modeling in each case.
Worth noting is the role of prediction. Rather than giving prediction its own category, I consider that making predictions is a universal outcome of all models, and therefore applies to all categories. For example, in the first category (concept exploration) qualitative predictions may emerge about the impact of increasing versus decreasing a given parameter. Conversely, in the second category (testing concrete hypotheses) predictions will be the optimal values of parameters that best explain the data.
Hypothesis generation via the exploration of concepts and plausibility testing
A very common and productive use for computer modeling is to explore informal or qualitative ideas. At one level this might be considered a kind of ʻsanity checking' – reassuring the researcher that an idea that seems reasonable is indeed theoretically plausible. At a deeper level, it might be a useful way to develop an intuitive feeling of how the parameters of a system influence the result. For example: what would be the effect of increasing proliferation rates on the shape of a tissue, or of reducing the expression level of a particular gene in a circuit? These results can be considered a qualitative type of prediction, helping to propose concrete experiments. Within this category, there is a spectrum of how much information from the real system is incorporated into the model. The model may indeed be very abstract, and might not represent a particular species but rather explore the basic principles of a general developmental process. These models ask the question: if my system were driven by a given set of rules, could the behavior I am interested in emerge and, more generally, what range of behaviors could emerge? These questions may be at the level of cellular activities or molecular patterning, or both.
A recent example of this exploratory approach at the molecular level is the work of Dhillon et al. (2017), who wished to understand the control of skin patterning in lizards. A common idea is that the mechanism underlying skin patterning involves some form of reaction-diffusion system, in which two or more diffusible molecules take part in dynamic reactions and spontaneously cause spatial symmetry-breaking to create periodic patterns of concentration – a phenomenon first discovered by Alan Turing (Turing, 1952). Such systems are believed to underlie a variety of developmental patterning processes, including the specification of digits, feather buds, hair follicles, rugae and others (Sheth et al., 2012; Raspopovic et al., 2014; Economou et al., 2012; Marcon and Sharpe, 2012; Glover et al., 2017). The goal of modeling in Dhillon et al. (2017) was not to seek a basic understanding of how Turing patterns arise, or how external influences can alter the pattern – such an understanding already exists. Instead, the authors set out to explore how the specific 3D curvature of the lizard skin would impact these patterns, and whether this would lead to realistic lizard patterns. They developed a numerical simulation allowing them to explore the equations on variously shaped abstract surfaces (Fig. 1A), and were able to show the theoretical plausibility of the model on a realistic lizard shape (Fig. 1B). Furthermore, their approach revealed that subtle asymmetries in 3D curvatures could influence the pattern dramatically, thus revealing a feature that is impossible to explore without a computer simulation.
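To make the idea of a reaction-diffusion system concrete, the following is a minimal sketch (in Python, assuming NumPy) of a classic two-component Gray-Scott system on a flat, periodic 2D grid. It is not the curved-surface solver used by Dhillon et al. (2017), and the parameter values are illustrative ones that typically yield spot-like patterns rather than anything fitted to lizard skin.

```python
import numpy as np

def laplacian(a):
    # 5-point stencil with periodic boundaries (grid spacing = 1)
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott(n=128, steps=10000, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """Integrate the Gray-Scott reaction-diffusion equations on an n x n grid."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    mid = slice(n // 2 - 5, n // 2 + 5)          # seed a small patch of v
    u[mid, mid], v[mid, mid] = 0.5, 0.25
    v += 0.01 * np.random.default_rng(0).random((n, n))
    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
        v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v

u, v = gray_scott()
print("pattern amplitude (max - min of v):", round(float(v.max() - v.min()), 3))
```

Starting from a nearly uniform state, the local reaction kinetics and the difference in diffusion rates are sufficient for a periodic pattern to emerge spontaneously, which is the essential point of the Turing mechanism.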
Vroomans et al. (2015) provides another example of this approach, this time exploring both molecular patterning and active cellular intercalations taking place during convergent extension – a morphogenetic process that drives tissue elongation. A number of known cases of convergent extension occur in tissues that already exhibit alternating stripes of gene expression, and an unresolved question is how the intercalation movements avoid disturbing this pattern. To explore this, the authors used a 2D cellular Potts model (see Box 1) to ask whether an oriented cell intercalation process would tend to disrupt a molecular pre-pattern of segments (Fig. 1C). Their purely theoretical study did not focus on a particular species, but rather explored the general principles. They showed that if patterning and intercalation were not linked in any way, then convergent extension could indeed disrupt the pattern. However, the model also allowed the authors to explore another set of rules: what would happen if cells exhibited segment-specific adhesion preferences? Not only did this solve the problem, but it also showed that such cell adhesion patterns could act as a driving force for the intercalation itself.
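The cellular Potts formalism used in that study (see Box 1) can likewise be conveyed with a deliberately minimal sketch: a Metropolis simulation in which lattice sites attempt to copy the cell identity of a neighboring site, with an energy composed of cell-cell adhesion and an area constraint. The sketch below is not the Vroomans et al. (2015) model – there are no segment identities or intercalation rules – and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_CELLS = 60, 16                      # lattice size, number of cells
J, LAM, T = 2.0, 1.0, 4.0                # adhesion energy, area stiffness, temperature

# initialise a 4 x 4 block of rectangular cells on the left half of the lattice;
# sites with ID 0 represent the surrounding medium
grid = np.zeros((N, N), dtype=int)
for c in range(N_CELLS):
    r0, c0 = (c // 4) * 15, (c % 4) * 7
    grid[r0:r0 + 15, c0:c0 + 7] = c + 1
TARGET = 15 * 7                          # target area per cell
areas = np.bincount(grid.ravel(), minlength=N_CELLS + 1)

NEIGH = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def adhesion(x, y, sigma):
    """Local adhesion energy of site (x, y) if it carried cell ID sigma."""
    return sum(J for dx, dy in NEIGH
               if grid[(x + dx) % N, (y + dy) % N] != sigma)

def area_energy(a):
    return LAM * (a - TARGET) ** 2

for _ in range(200000):                  # Metropolis copy attempts
    x, y = rng.integers(0, N, size=2)
    dx, dy = NEIGH[rng.integers(0, 4)]
    src, tgt = grid[(x + dx) % N, (y + dy) % N], grid[x, y]
    if src == tgt:
        continue
    dH = adhesion(x, y, src) - adhesion(x, y, tgt)
    if src != 0:                         # the medium has no area constraint
        dH += area_energy(areas[src] + 1) - area_energy(areas[src])
    if tgt != 0:
        dH += area_energy(areas[tgt] - 1) - area_energy(areas[tgt])
    if dH <= 0 or rng.random() < np.exp(-dH / T):
        grid[x, y] = src
        areas[src] += 1
        areas[tgt] -= 1

print("final cell areas:", areas[1:])
```

The appeal of this formalism for questions such as convergent extension is that additional rules – differential adhesion between cell types, or biases favoring oriented cell movement – can be added simply as extra terms in the energy function.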
Ronellenfitsch and Katifori (2016) used a slightly more abstract model to explore a very different type of patterning question: how do vasculatures develop such that they are highly organized, hierarchically ordered, and optimally efficient in distributing nutrients? The hypothesis they wished to test was that growth of the tissue domain might be the key to forming networks with the desired characteristics. Again, their goal was to explore this idea as a potential universal mechanism that might explain a wide variety of cases, from the veins of plant leaves to the capillaries of mammalian eyeballs, not to study it in a specific species. Their model ʻcoarse-grained' the tissue, i.e. rather than considering the tissue cell-by-cell, they modeled it as a roughly uniform hexagonal 2D network of small vessels, each of which can grow, shrink or be pruned away (Fig. 1D). Blood flow was simulated through the graph, and rules were then explored for (1) how the growth of each graph segment is controlled by the flow it experiences, and also (2) how the tissue as a whole grows over time. They were able to confirm that growth of the tissue domain is one mechanism by which the network can be shaped into a more efficient hierarchical arrangement – the gradual, progressive nature of patterning (provided by the growth) apparently helping to avoid suboptimal arrangements. Although this does not prove that the model is correct for a given species, its value is in adding to our ʻrepertoire' of potential organizational phenomena and giving us theoretical ʻbuilding blocks' with which to explain future experimental observations.
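The essence of flow-driven network adaptation can be sketched with a simple resistor-network model: node pressures are obtained by solving Kirchhoff's equations, and each vessel's conductance then grows with the flow it carries and decays otherwise. The sketch below (assuming NumPy) uses a small square lattice with a single central source and uniform sinks; it deliberately omits the domain growth that is central to Ronellenfitsch and Katifori (2016), and all parameters are illustrative.

```python
import numpy as np

# a small square-lattice vessel network: nodes on an n x n grid,
# edges between 4-neighbours, unit conductance to start with
n = 10
index = {(i, j): i * n + j for i in range(n) for j in range(n)}
edges = []
for i in range(n):
    for j in range(n):
        if i + 1 < n:
            edges.append((index[(i, j)], index[(i + 1, j)]))
        if j + 1 < n:
            edges.append((index[(i, j)], index[(i, j + 1)]))

N, E = n * n, len(edges)
cond = np.ones(E)                         # edge conductances (vessel calibres)
current = -np.ones(N) / (N - 1)           # fluid drained uniformly at every node ...
current[index[(n // 2, n // 2)]] = 1.0    # ... and injected at a central source

for _ in range(300):
    # solve Kirchhoff's equations  L p = current  for the node pressures p
    L = np.zeros((N, N))
    for (a, b), c in zip(edges, cond):
        L[a, a] += c; L[b, b] += c
        L[a, b] -= c; L[b, a] -= c
    L[0, :] = 0.0; L[0, 0] = 1.0          # fix the pressure gauge at node 0
    rhs = current.copy(); rhs[0] = 0.0
    p = np.linalg.solve(L, rhs)
    # edge flows, then adaptation: well-used vessels are reinforced,
    # weakly used vessels decay towards (and are capped at) a tiny calibre
    Q = np.array([c * (p[a] - p[b]) for (a, b), c in zip(edges, cond)])
    cond = np.maximum(cond + 0.1 * (np.abs(Q) ** 1.2 - 0.3 * cond), 1e-6)

print(f"{np.sum(cond > 0.05)} of {E} vessel segments retained after adaptation")
```

Even this static version produces a pruned, hierarchical tree; the point of the original study is that letting the domain grow while the network adapts changes which arrangements the system settles into.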
The recent study by Onimaru et al. (2016) provides another example of this exploratory approach, this time with an evolutionary angle, addressing how developmental patterns may evolve between species. Previous studies had proposed that a Turing system could explain the patterning of digits during mouse limb development (Raspopovic et al., 2014). Specifically, it was proposed that a molecular circuit comprising one cell-autonomous transcription factor, Sox9, and two signaling pathways, BMP and Wnt, could represent the feedback loops capable of spontaneously producing the stripe-like pattern of the mouse digits (Fig. 1E). Onimaru et al. (2016) sought to explore whether this same basic model could be modified to produce the rather different expression patterns seen in the fin bud of the catshark Scyliorhinus canicula – a row of spots. In some senses, this model could be seen as moving towards the second category – hypothesis testing – but there was no tuning of parameters to produce the correct number of spots, nor was the comparison with data quantitative, so the project is still best described as ʻexploratory', or testing the plausibility of the idea. It did however (as with all modeling) provide some qualitative predictions. In particular, it predicted that spots of Sox9 expression should disappear upon repression of the BMP signaling pathway, and that the opposite would occur in response to inhibiting the Wnt pathway. Both of these predictions were subsequently tested experimentally and found to be correct.
It should be emphasized that although computer modeling played a similar role in all four studies discussed above – that of hypothesis exploration – the models themselves were of very different types, covering (respectively) a mesh-based finite element model, a cellular Potts model, a graph-based model, and a finite volume model. The distinctions between these common modeling formalisms are discussed in Box 1.
Hypothesis testing against quantitative data
A natural extension to testing the theoretical plausibility of an idea is to gather accurate or quantitative data to challenge a specific hypothesis in a more rigorous manner. A model can often be seen as an input-output function. A quantitative test of the model simply asks: if I accurately measure both the known input and known output (which may be data at the molecular, cellular or tissue level) does my proposed function make a good prediction of output when I give it the correct input? The distinction between this approach and that discussed in the previous section is largely that the hypotheses are already well defined (i.e. so the approach is not so exploratory), some degree of quantitative data are used, and the comparison is fairly rigorous. For this reason, such studies tend to focus on specific, named species, rather than universal principles. A good example of this distinction can be seen in the question of limb bud elongation. Two earlier computer models had explored the ʻproliferation gradient hypothesis' – the idea that elongation of the bud could be achieved simply by having higher rates of tissue growth in the distal tip than in the rest of the bud (Poplawski et al., 2007; Morishita and Iwasa, 2008). Both models supported the theoretical plausibility of the idea, thereby lending further weight to this hypothesis, but being exploratory they were not focused on measurements from a particular species. Later, the hypothesis was scrutinized against quantitative data from mouse limb buds (Boehm et al., 2010) by quantifying both the input data (the 3D distribution of proliferation rates) and the output data (the 3D shape change over 6 h of growth), and attempting to use the former to explain the latter. In fact, the exercise produced the opposite conclusion, i.e. that the simple hypothesis was not compatible with the data. Thus, while the earlier papers had shown that the idea was theoretically plausible, performing this additional step of hypothesis testing against quantitative data proved that it was not actually true, at least for the mouse. This example highlights the important role of falsification. A fit between model and data is never a proof that the model is correct, while, conversely, a model that fails does in fact provide concrete proof that something is missing from the model (or at least from that version of the model).
Even a predefined conceptual model will usually contain some parameters with unknown values and so, in practice, hypothesis testing usually requires some form of parameter fitting (i.e. exploring whether a combination of parameter values exists that allows the model to fit the data). Simple models may have few unknown parameters, which can be explored manually or even found by mathematical analysis. This is the case for the next two examples described here. By contrast, more complex hypotheses may have many parameters, which interact in non-linear and non-intuitive manners, and thus require very different parameter optimization techniques (as described later).
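As a concrete illustration of parameter fitting in the simple case, the sketch below fits the three unknown parameters of a hypothetical logistic growth model to a synthetic time course by non-linear least squares, using SciPy. Both the model and the ʻdata' are stand-ins invented for this example, not taken from any of the studies discussed here.

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical 'measured' tissue sizes over time (here: synthetic data plus noise)
t_obs = np.linspace(0, 10, 20)
rng = np.random.default_rng(1)
size_obs = 5.0 / (1.0 + 9.0 * np.exp(-0.8 * t_obs)) + 0.05 * rng.normal(size=t_obs.size)

def model(params, t):
    """Logistic growth: carrying capacity K, initial size s0, growth rate r."""
    K, s0, r = params
    return K * s0 / (s0 + (K - s0) * np.exp(-r * t))

def residuals(params):
    return model(params, t_obs) - size_obs

fit = least_squares(residuals, x0=[4.0, 1.0, 0.5], bounds=(1e-3, 20.0))
print("best-fit parameters (K, s0, r):", np.round(fit.x, 3))
print("residual sum of squares:", round(2 * fit.cost, 4))
```

With only three parameters the optimization is straightforward; the more interesting (and harder) cases discussed below arise when the parameters are numerous and interact non-linearly.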
Tallinen et al. (2016) tackled a mechanics-based question regarding the development of the human brain. They addressed the question of how the shapes of the convoluted gyrifications of the cortex are controlled. A few distinct ideas have been put forward over the years to explain this process of gyrification. One of the oldest proposals suggests that the convolutions are a mechanical buckling effect of differential growth rates in the different cortical layers. However, the complexity and reproducibility of these convolutions had also led to alternative suggestions; for example, that the process was driven by specific axonal tensions or by more direct biochemical/genetic control (van Essen, 1997; Sun and Hevner, 2014). Earlier mathematical modeling supported the differential growth hypothesis, but due to technical challenges these studies were either focused on individual folds or were unable to reproduce a realistic whole-brain morphology (Richman et al., 1975; Nie et al., 2010). Tallinen et al. (2016), taking advantage of accurate 3D brain imaging and sophisticated computer modeling, were able to ask the question: to what extent can this simple but specific mechanical hypothesis explain the known morphology? The input data were an MRI scan of a fetal brain at gestational week 22, and the output data were similar scans for older fetal brains or adult brains. By developing a finite element model with suitable material properties for the different cortical layers, and then estimating the relative growth rates from morphometric analyses of the real brains, the authors were able to test the hypothesis. Their results showed that a large part of the morphology could indeed be explained by this simple idea (Fig. 2A). The model also took them a little further, indicating that the alignment of some of the folds may be controlled by the local curvature of the tissue.
Another recent case of systematic and quantitative hypothesis testing concerns how a growing tissue controls its final size. Vollmer et al. (2016) sought to explore the molecular mechanisms that could control the known progressive decrease in growth rates during Drosophila imaginal eye disc development. In particular, they wished to test a few simple distinct hypotheses against quantitative data to see which were compatible with reality. Unlike the studies described above, the authors were able to simplify the problem to a small set of differential equations, such that the spatial coordinates of the tissue did not have to be represented explicitly. Instead, the equations represent the areas of two different growing regions of the disc (the regions anterior and posterior of the morphogenetic furrow – a dynamic furrow that moves across the disc) and the changing growth rates over time. As such, this case is best described as a mathematical model, rather than a computational model. The simplest hypotheses explored were purely time-dependent models in which a cell-autonomous growth factor is degraded over time, either at a fixed rate or following a power law. A third model was that the growth factor does not degrade appreciably, but rather is diluted as a direct consequence of the increasing size of the imaginal disc over time.
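Because the problem reduces to a handful of differential equations, the three candidate growth laws can be written very compactly. The sketch below shows illustrative forms of the three ideas – fixed-rate decay of a growth factor, a power-law decline, and dilution of a stable factor by area growth – integrated with SciPy; the exact equations and parameter values of Vollmer et al. (2016) differ, so this is only meant to convey the structure of the comparison.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative forms of three growth laws for tissue area A(t); the parameter
# values and precise equations are placeholders, not those fitted in the study.

def decay_model(t, y, g0=0.5, k=0.2):
    """Growth factor degraded at a fixed rate: relative growth rate g0*exp(-k*t)."""
    return [g0 * np.exp(-k * t) * y[0]]

def powerlaw_model(t, y, g0=0.5, b=1.0, t0=1.0):
    """Relative growth rate declining as a power law of time."""
    return [g0 * (t + t0) ** (-b) * y[0]]

def dilution_model(t, y, k=0.5, M=1.0):
    """Stable growth factor (total amount M) diluted by area growth:
    concentration M/A sets the relative growth rate, so dA/dt = k*M."""
    return [k * M]

t_span, A0, t_eval = (0.0, 40.0), [1.0], np.linspace(0.0, 40.0, 9)
for name, rhs in [("decay", decay_model), ("power law", powerlaw_model),
                  ("dilution", dilution_model)]:
    sol = solve_ivp(rhs, t_span, A0, t_eval=t_eval)
    print(f"{name:10s} area(t): {np.round(sol.y[0], 2)}")
```

All three laws produce decelerating growth of broadly similar shape, which is precisely why unperturbed growth curves alone struggle to discriminate between them.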
To test these simple models, Vollmer et al. (2016) performed extensive quantitative measurements of the increasing disc area and the changing growth rates over time. Remarkably, all three models were able to fit the data very well; in other words, the data thus far could not distinguish between these very different molecular mechanisms, i.e. it could not falsify any of them (Fig. 2B). The modeling, however, helped to define which experiments might be more powerful at distinguishing between the hypotheses, and revealed that the measured robustness to various perturbations (e.g. temperature shifts and grafting experiments) should be more informative. Performing equivalent virtual perturbations to the mathematical models was indeed able to argue strongly against the power law model, and to favor the area-dependent growth law. However, even with these extra data, at least one version of each model (growth factor decay versus growth factor dilution) could still fit the data well. It was noted that the dilution model made a specific prediction – that the growth factor should be very stable and diffusible because although growth rates decline uniformly, the actual rates of growth (and thus dilution) are non-uniform. Diffusion (or active transport) would thus be necessary to ensure that the rate of growth rate decrease was integrated across the whole tissue. In a follow-up study (Vollmer et al., 2017), the authors therefore chose one of the hypotheses – growth factor dilution – and tested it experimentally. They focused on the cytokine Unpaired (Upd), which had previously been suggested to behave appropriately for the dilution model (very stable and diffusible), and performed experiments in which the levels of this protein were genetically manipulated. Again, modeling entered the study, but this time was used to test just this one specific hypothesis, rather than comparing a collection of alternatives. In this study, the model and the data fitted very well, so although the simpler time-dependent model had not strictly been ruled out, the study was able to provide very strong evidence in favor of the dilution model (Fig. 2C).
The two examples of hypothesis testing described above involve very few unknown parameters. However, testing a specific model that contains many unknown parameters is also possible. For instance, Cohen et al. (2014) provided an example in which a specific regulatory circuit involving five different molecular factors was fitted to quantitative data on the patterning of these factors along the dorsoventral (DV) axis of the mouse neural tube (Fig. 2D). A collection of these factors with restricted DV expression patterns are known to be under the control of the GLI1 transcription factor, which itself is regulated by the sonic hedgehog (Shh) morphogen. A simple idea called the ʻaffinity threshold model' had proposed that the targets that were only expressed close to the morphogen source would have less sensitive binding sites for morphogen-regulated transcription factors (MR-TFs), whereas those expressed further away must have more sensitive binding sites. However, this fails to explain results from mutants in which the DV expression boundaries of different genes shift in opposite directions. Cohen et al. (2014) wanted to explore whether a circuit based on the bi-functional nature of GLI1 could explain this. GLI1 is known to exist in two forms: the native expressed form acts as a transcriptional repressor, whereas in the presence of Shh signaling it is processed to an activator, which nevertheless still binds to the same regulatory sequences. Could a specific proposed gene circuit that included this bi-functional MR-TF explain the observed data, both for the wild type and mutants? The number of unknown parameter values for a model like this is high; even after choosing fixed values for the production and decay rates of each gene product, there remain 15 parameters representing binding affinities. To determine if a successful combination of parameter values exists, the authors first measured the output of the system, i.e. the expression patterns for the four downstream genes of the circuit along the DV axis, in the wild type and in six other mutants (including both single and double knockouts). A variety of methods exist for finding optimal parameter values in cases like these: gradient descent, simulated annealing, evolutionary algorithms and others. Among these are Bayesian approaches, which distinguish themselves by not just finding solutions per se, but directly taking the likelihood of the solution into account. The authors used an approach related to a Bayesian method to find optimal parameter values, and were indeed able to find a solution that agrees well with the output data (Fig. 2D). Thus, the authors had tested a specific mechanistic hypothesis for DV patterning, which had emerged from prior work on the topic, and were able to confirm that it was indeed compatible with quantitative data from the real system.
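For models with many unknown parameters, the flavor of a Bayesian approach can be sketched quite compactly. The example below implements simple rejection-based approximate Bayesian computation (ABC) for a toy two-parameter ʻcircuit' responding to a morphogen gradient: parameters are drawn from a broad prior and kept only if the simulated profile lies close to the (here synthetic) data, so the accepted set approximates a posterior distribution. This is not the specific algorithm used by Cohen et al. (2014); the model, priors and tolerance are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(k_on, k_off, x=np.linspace(0.0, 1.0, 30)):
    """Toy 'circuit': steady-state target expression along a morphogen gradient.
    The morphogen decays exponentially in space; target expression follows a
    simple saturating (Hill-like) response with two free parameters."""
    morphogen = np.exp(-3.0 * x)
    return k_on * morphogen / (k_off + morphogen)

# hypothetical 'observed' expression profile (synthetic, with measurement noise)
observed = simulate(1.0, 0.2) + 0.02 * rng.normal(size=30)

# rejection ABC: sample parameters from a broad prior and keep those whose
# simulated profile lies within a tolerance of the observed one
n_samples, tol, accepted = 20000, 0.05, []
for _ in range(n_samples):
    k_on, k_off = rng.uniform(0.0, 2.0), rng.uniform(0.01, 1.0)
    dist = np.sqrt(np.mean((simulate(k_on, k_off) - observed) ** 2))
    if dist < tol:
        accepted.append((k_on, k_off))

print(f"accepted {len(accepted)} / {n_samples} parameter sets")
if accepted:
    print("posterior mean (k_on, k_off):", np.round(np.mean(accepted, axis=0), 3))
```

The width of the accepted parameter cloud is informative in itself: tightly constrained parameters indicate a fragile fit, whereas broad acceptance regions indicate robustness – a distinction that becomes important in the next section.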
Inference or reverse-engineering of predictive models from quantitative data
A different approach from the two mentioned above is to start without any intuitive model at all. This can be thought of as a conceptual extension of the parameter optimization discussed previously, but instead of searching for the best-fitting parameters for a predefined model, we expand the nature of the search to perform a more open exploration through ʻmodel space' to find many different models that fit existing data. This is a very powerful approach that is likely to become increasingly common in developmental biology, since the complexity of biological systems makes it hard to intuitively imagine all possible models that could fit our existing observations. ʻReverse-engineering' is a suitable term, as it implies not just tweaking parameters on a given proposal, but really finding different combinations of the ʻnuts-and-bolts' that might make a system work. The idea is particularly well suited for models based on gene circuits because even small changes to a circuit can make it function in a very different way. This has been well illustrated by Cotterell and Sharpe (2010), who showed that multiple distinct gene circuits can explain stripe formation, despite each using a qualitatively different dynamical mechanism to explain the same result.
For reverse-engineering, if no models fit the data well, then this is a useful concrete statement that the models are missing something important. If multiple models fit, then this is also a productive scenario. On the one hand, various criteria can be used to judge the likelihood of a model, for example how constrained the parameter values are. Models that require very specific parameter values are fragile, and less likely to be true than those with more relaxed constraints. This can be explored by parameter sensitivity analysis or by using Bayesian approaches. On the other hand, if the same data can be explained by multiple models with qualitatively different underlying mechanisms, then this allows distinct testable predictions to be made, thus guiding a new set of experiments that will distinguish between the models. The rare scenario in which only one model fits the data is the most suspicious, as it suggests that the modeling assumptions have already biased the result towards a particular outcome.
The reverse-engineering of circuits has a long history in Drosophila developmental biology, where it was pioneered by the Reinitz and Jaeger groups to explain gap gene expression patterns (Reinitz and Sharp, 1995; Jaeger et al., 2004; Crombach et al., 2014). However, this general idea has recently been applied to a wider variety of developmental systems. For example, Uzkudun et al. (2015) used a gene circuit approach that was very similar to the earlier Reinitz models, but applied it to investigate mouse limb bud development. Whereas in the original Drosophila studies the embryo was abstracted to a fixed 1D domain, Uzkudun et al. (2015) first incorporated a 2D dynamic representation of the limb bud growing over time (Marcon et al., 2011). They generated data sets on the dynamic expression patterns of key genes for proximodistal patterning (Fig. 3A), and then used the ʻconnectionist' framework to simulate hypothetical gene circuits, in which the positive and negative inputs to a given gene are added linearly and then transformed by a simple sigmoid Hill function (Mjolsness et al., 1991). The sigmoid function was not intended to represent any accurate biochemical details, but rather to capture two key qualitative features of gene regulation: (1) the necessary saturation of concentrations for any real-world system; and (2) the potential for gene regulatory circuits to exhibit cooperative activation. By performing parameter optimization on all of the possible circuit models, the authors were able to reveal the circuit best able to explain the data. Furthermore, they could then go beyond the wild-type data and test the model against a variety of published experimental perturbations, which included genetic experiments, grafting experiments and bead implantation. The results of these virtual experiments further refined the network, leading to a circuit that had not previously been proposed. This non-obvious mechanism was named the ʻcross-over model' as it suggested that the distal signal (FGF) controls the more proximal Hox gene (Hoxa11), whereas the proximal signal (thought to be retinoic acid) controls the more distal Hox gene (Hoxa13).
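The connectionist framework itself is compact enough to sketch directly: each gene integrates a weighted sum of its regulators (plus any external signal), passes the sum through a sigmoid, and is balanced by linear decay. The sketch below runs such a circuit on a static 1D row of cells exposed to an exponential morphogen gradient; the three-gene weight matrix is invented for illustration and is not the fitted limb circuit of Uzkudun et al. (2015), nor does it include the growing 2D limb bud domain used in that study.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def simulate_circuit(W, h, m, n_pos=50, n_genes=3, dt=0.05, steps=4000,
                     R=1.0, lam=0.5):
    """Connectionist gene circuit on a 1D row of cells (no cell movement):
    inputs to each gene are summed linearly, passed through a sigmoid, and
    balanced against linear decay."""
    x = np.linspace(0.0, 1.0, n_pos)
    morphogen = np.exp(-5.0 * x)             # a fixed external gradient
    g = np.zeros((n_pos, n_genes))
    for _ in range(steps):
        u = g @ W.T + np.outer(morphogen, m) + h
        g += dt * (R * sigmoid(u) - lam * g)
    return x, g

# an illustrative 3-gene circuit (weights are made up, not a fitted limb circuit):
# gene 0 is activated by the morphogen, gene 1 is repressed by gene 0,
# gene 2 is activated by gene 1 and repressed by gene 0
W = np.array([[ 0.0, 0.0, 0.0],
              [-6.0, 0.0, 0.0],
              [-6.0, 4.0, 0.0]])
h = np.array([-2.0, 1.0, -1.0])              # basal activation thresholds
m = np.array([8.0, 0.0, 0.0])                # morphogen input weights

x, g = simulate_circuit(W, h, m)
for i in range(3):
    on = x[g[:, i] > 0.5 * g[:, i].max()]
    print(f"gene {i} expressed roughly over x = [{on.min():.2f}, {on.max():.2f}]")
```

In a reverse-engineering setting, the entries of W (and the thresholds) are precisely the quantities that the optimization searches over, so that qualitatively different circuit topologies correspond to different sign patterns of the recovered weight matrix.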
A further advance on this approach was reported by Lobo and Levin (2015), who developed a computational system for reverse-engineering the circuit that controls the regeneration of planarian flatworms, which show a striking capacity to regenerate their bodies after dramatic injuries. Rather than using only wild-type data to train their network, the authors first developed a data set that formalized the published results of various different regeneration experiments including both phenotypic and molecular data (Fig. 3B). This allowed the computational system to automatically test models against experimental perturbations, rather than just the wild-type scenario (as in Uzkudun et al., 2015). They also allowed nodes in the circuit that represent unknown genes, thus including the potential to predict new genes that might be required to obtain certain functional results. Finally, their model-generation algorithm was not limited to varying a fixed set of regulatory parameters, but was also allowed to reorganize the topology of the circuits to be tested. By combining all these features together, Lobo and Levin (2015) were able to find (and thus predict) a genetic circuit that could explain all known features of planarian regeneration. Satisfyingly, without putting any knowledge of known regulatory interactions into the model, certain core features arose naturally in the predicted circuit (such as canonical Wnt activity through β-catenin, its repression by APC, and its ability to inhibit head structures). Additionally, the inclusion of new unknown molecular components led not only to predictions of where they would sit in the circuit, but also to predictions of their dynamic activity over time and space, which helps in experimentally searching for molecular candidates.
Two more recent examples attest to the increasing relevance of reverse-engineering in developmental biology. Crocker et al. (2016) built on their previous modeling of Drosophila patterning (Ilsley et al., 2013) to explore how the pair-rule gene even-skipped (eve) is regulated (Fig. 3C). This gene is expressed in a more complex pattern of stripes than the gap genes mentioned above, and consequently the authors did not model the pattern as a true dynamic system evolving its state over time. Instead, they chose a single time point and provided as a ʻgiven' the spatial pattern of all genes considered to be inputs to eve at that time. The goal was to explain the spatial activity of particular eve enhancers (each of which is responsible for a subset of the full expression pattern). In particular, they sought to explain these enhancer patterns as a simple logistic regression function of the input patterns. Despite the rather complex patterns of eve enhancer activity, this approach worked very well and was even able to predict the outcome of selectively inhibiting individual enhancers (by genetically engineering sequence-specific repressors). Since no prior hypothesis had been used to define which regulatory interactions were more or less important to the eve stripes, the final optimized set of interactions represented a de novo inference of the real circuit.
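The regression step at the heart of this approach can be sketched in a few lines: given the spatial profiles of the input regulators, enhancer activity is fitted as a logistic function of those inputs. The example below uses entirely synthetic input profiles, a hypothetical activity pattern, and scikit-learn for the fit, so the recovered weights are purely illustrative rather than a reconstruction of the eve regulatory logic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# synthetic stand-ins for input patterns along the AP axis (100 positions,
# 4 regulators); real inputs would be quantified gap/maternal protein profiles
x = np.linspace(0.0, 1.0, 100)
inputs = np.column_stack([
    np.exp(-((x - c) ** 2) / 0.02) for c in (0.2, 0.4, 0.6, 0.8)
])

# hypothetical 'measured' enhancer activity: on where regulator 1 is high
# and regulator 2 is low, plus a small amount of noise
activity = ((inputs[:, 1] > 0.5) & (inputs[:, 2] < 0.3)).astype(int)
flip = rng.random(x.size) < 0.02
activity[flip] = 1 - activity[flip]

# fit enhancer activity as a logistic function of the input concentrations
model = LogisticRegression().fit(inputs, activity)
print("regulator weights:", np.round(model.coef_[0], 2))
print("training accuracy:", round(model.score(inputs, activity), 3))
```

The signs and magnitudes of the fitted weights are then directly interpretable as inferred activating or repressing inputs to the enhancer, which is what makes this simple statistical model mechanistically useful.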
The last example was performed within a study of how mouse neural tube cells integrate Shh and BMP signals to determine their position along the DV axis (Zagorski et al., 2017). The authors generated an extensive grid of experimental data on how these cells responded to different combinations of the two signals. This defined empirically the ʻinput-output' function of the cells, and their computational question was: which circuit design would be capable of replicating this function? As in the Drosophila study above, the topology of the hypothetical circuit was not altered – just the parameter values. But again, this revealed that certain regulatory connections of the circuit were not required, thus in effect making a prediction about the optimal circuit topology.
Beyond single developmental concepts, towards truly multi-scale, morphodynamic models
The modeling studies discussed above (and most projects carried out to date) have focused on what we might call ʻsingle phenomena'. How does a molecular gradient scale? How does tissue intercalate? What controls the curvature of epithelial bending? What minimal model explains growth control? In other words, we have mostly concerned ourselves with deconstructing the complexity of development into tractable and understandable modules. This reductionist approach has of course been one of the keys to success in all fields of science. However, there are certain essential features of living systems lying at the heart of biology that may not be captured in this way. Beyond being just a collection of discrete modules, organisms must also display a higher logic, which exhibits itself in the coordination of many simultaneous processes. For example, the question of limb development is normally broken down into various sub-questions: proximodistal (PD) patterning, anteroposterior (AP) patterning, the local self-organized patterning of the digital rays, and also the differential and anisotropic growth that creates non-trivial flows of tissue and shape changes (Fig. 4A). However, these multiple processes interact in at least two ways. First, many specific molecules (for example diffusible signaling proteins, such as FGF or BMP) are known to affect many, if not all, of these processes simultaneously. Second, the global tissue movements that occur during limb morphogenesis are largely controlled by molecular patterning, but these movements also feed back to alter the geometric arrangement of signaling cells, thus altering the very same molecular patterning events that drive them. This scenario raises a number of intriguing scientific questions: what benefits do the interactions between sub-processes confer on the system? How does this allow the coordination of multiple dynamic processes that must collaborate? Conversely, what constraints does this impose on the system? How difficult is it to find parameter values that satisfy all these processes at the same time? And, most fundamentally: does this change our basic understanding of how the modules themselves work? These questions cannot be addressed by only studying the sub-processes (or modules) one by one.
In the last few years, improvements in computer power have made it tractable to build complex cell-based simulations that are able to integrate multiple types and scales of process in the same model, in particular by combining dynamic gene regulatory circuits with realistic and flexible tissue dynamics. It should be noted that simple models that exhibit this morphodynamic feedback have previously been achieved. The Salazar-Ciudad group, for example, successfully developed a model of tooth development that included this feedback between molecular patterning and physical morphogenesis (Salazar-Ciudad and Jernvall, 2010), and agent-based models have also incorporated this large-scale feedback loop into a simplified model of pancreas development (Setty et al., 2008). However, these were either limited in scope, or somewhat abstracted from the real biological process, and the degree of sophistication achieved in more recent simulations has reached a much higher level. For instance, Marin-Riera et al. (2016) have created a very flexible modeling platform called EmbryoMaker that allows the specification of various active cell behaviors and modes. In particular, in addition to simpler mesenchymal cells, it includes definitions of polarized epithelial cells which can be mechanically linked into epithelial sheets, displaying stiffness with a realistic resistance to bending and other forces. It also provides a fairly comprehensive way to specify molecular reactions, or gene regulatory events, and these can in turn control the active behaviors of cell movements. One impressive demonstration of the versatility of this platform was a simulation of a generic animal gastrulation process (Fig. 4B), which included both mesenchymal cells and epithelial sheets, molecular control of cellular activities, differential growth, invagination of the ventral surface by active bending, and cellular rearrangements. A similarly impressive achievement has been made by Delile et al. (2017), who created a cell-based simulation platform able to cope with both the mechanical side and the genetic side, leading to the name MecaGen. They demonstrated the power of the system through proof-of-concept simulations of gene regulation by Wnt signaling, compartment formation through induction, epithelialization and boundary sharpening, and the large-scale morphogenetic phenomenon of epiboly in zebrafish embryos (Fig. 4C). Modeling platforms such as these will surely become central to developmental biology over the next decade or two, as they represent the only way to study the large-scale integration of multiple processes at different scales: from the molecular, through the cellular, up to the tissue level, and the feedback between them. However, we must keep in mind the general difficulty of meaningful modeling as the number of free parameters increases, and also the danger of creating models that simply replicate existing hypotheses rather than challenging them (as discussed briefly below).
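The basic morphodynamic feedback loop can nevertheless be conveyed with a deliberately minimal toy model: in the sketch below, a signal spreading from one end of a growing 1D row of cells sets local growth rates, and each growth and division event in turn changes the distances over which the signal acts. This is in no way comparable to EmbryoMaker or MecaGen; it merely illustrates, with made-up parameters, the kind of coupling between patterning and tissue geometry that such platforms handle at far greater sophistication.

```python
import numpy as np

# a growing 1D row of cells: each entry is one cell's width
widths = np.full(10, 1.0)
D, decay, dt = 2.0, 0.5, 0.05            # signal diffusion, signal decay, time step
length_scale = np.sqrt(D / decay)        # how far the signal reaches from the source

for _ in range(400):
    # crude steady-state signal emitted from the left-most (source) end of the
    # tissue: exponential decline with physical distance measured along the
    # current cell widths, so tissue growth feeds back onto the signalling landscape
    centres = np.cumsum(widths) - widths / 2.0
    signal = np.exp(-centres / length_scale)
    # signal-dependent growth, followed by division of cells that get too wide
    widths += dt * 0.2 * signal * widths
    if (widths > 2.0).any():
        new = []
        for w in widths:
            new.extend([w / 2.0, w / 2.0] if w > 2.0 else [w])
        widths = np.array(new)

print(f"final cell number: {widths.size}, tissue length: {widths.sum():.1f}")
```

Even in this caricature, proliferation near the source pushes distal cells progressively out of range of the signal, so the growth pattern and the signaling pattern shape each other – the feedback that full multi-scale platforms aim to capture with realistic mechanics and gene circuits.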
Conclusions
Creating mathematical models and exploring computer simulations have a long but sparse history in developmental biology, yet in recent years modeling has clearly been growing in popularity. Some important examples of modeling in development could not be fitted into this Review, most notably plant development and somite patterning. The latter provides a particularly developed example of the interaction between theory and experimentation – almost forming a specialized field in its own right. However, I have tried to include a diversity of recent examples to illustrate that although computer models are fundamentally always the same thing – a tool to help treat ideas in a more rigorous manner – in reality they often fit into the scientific process in distinct ways: exploring concepts, testing the theoretical plausibility of ideas, familiarizing ourselves with the distinct roles of different parameters, testing a concrete hypothesis against quantitative data, or simply helping us to understand why a well-known model actually works (and there will be other ideas that I have not covered here).
The last category of modeling discussed above – the truly multi-scale, morphodynamic models – may present something of a particular dilemma, and thus can serve to illustrate some of the potential pitfalls of modeling. On the one hand, it is clear that the full development of an organ necessarily involves many different processes at different scales. The developmental biology community has been very successful at studying and modeling these processes separately (as testified by most of the ʻsingle phenomenon' studies discussed in this Review), but if suitable tools existed we would naturally wish to tackle the integration of these modules as well, i.e. the higher-level logic of the whole system. However, it is also well known that complex models with tens or hundreds of free parameters bring serious new theoretical problems. The most general description of this is ʻover-fitting'. As the number of free parameters increases, so does the flexibility of the model to fit anything, thus weakening its ability to be falsified. As John von Neumann famously stated: ʻWith four parameters I can fit an elephant, and with five I can make him wiggle his trunk' (Dyson, 2004). If a model can be adjusted to fit any observation (or any set of quantitative data), then our confidence in its probability of describing the correct underlying mechanism diminishes. The danger then arises that the model is nothing more than a computational replication of a hypothesis that already existed – neither challenging it, nor adding to it. The examples mentioned above concerning limb development serve as a useful illustration. The earlier models of limb morphogenesis sought only to explore the concept of the proliferation gradient hypothesis (Poplawski et al., 2007; Morishita and Iwasa, 2008) rather than challenging it with quantitative data. As such, they were able to fit the general qualitative result (elongation of the bud) and thus potentially give weight to a hypothesis that was mechanistically incorrect. In this case, the scientific question and hypothesis were still relatively simple, and the error was subsequently found relatively easily (by falsifying the hypothesis with quantitative data). But it is important to be aware that the risk of this problem rises dramatically if we start building complex multi-scale models containing vastly more free parameters.
The correct response to this dilemma is twofold. On the one hand, we should always be cautious. Modeling is not magic, nor is it infallible. It is an increasingly essential tool that should be embraced but, as in all aspects of science, we must constantly question, challenge and double-check our work in order to retain confidence in it. The importance of challenging models with data, and attempting to find their weaknesses, cannot be overstated. The second key ingredient is quantitative data. John von Neumann's quote about the elephant is only half complete. It is not just a question of how many free parameters a model has, but of the balance between free parameters versus constraining data. Again, in two of the examples discussed above (limb bud elongation and eye disc growth) it was the generation of more high-quality data which allowed the falsification of incorrect models (for the limb bud, accurate data on proliferation rates; for the eye disc, data on the growth response to perturbations). Similarly, complex multi-scale models will be useful as long as we constrain them with sufficient amounts of high-quality, accurate data. For developmental biology, this means data on the spatiotemporal dynamics of cells and their molecular states, which fortunately are increasingly possible to obtain by 4D time-lapse imaging. As such, it seems clear that these ʻnext-generation' multi-scale models will slowly become invaluable, as our field becomes more quantitative and data driven. Despite the caution required in using them, they will be the only way to address the bigger questions of multi-scale integration mentioned above.
In conclusion, the recent examples reviewed here show that the mathematical and computational modeling of development is a growing and exciting area. More importantly, it will become increasingly essential. Biology is characterized by complexity – many components, non-obvious regulatory logic, non-linear interactions, dynamics over time and space. It will be impossible to grasp all that we wish to understand without the help of modeling. This is true both for small-scale ʻmodular' questions (which often show non-intuitive behavior, e.g. Figs 1 and 2), as well as for large-scale examples like the development of an entire organ. Modeling should thus become a standard tool, available to any developmental biologist and employed by all in the normal course of their research. This does not mean that we should all use the same standardized modeling tools. The field is too young for single, universal platforms; it will still benefit from different groups trying to address the same modeling problem in unique ways. Nor does it imply that everyone should become expert enough to program their own new modeling frameworks from scratch. As an analogy, few of us know the maths behind the point-spread function of a microscope or how to calculate the position of the objective focal plane, and yet we do understand enough about what a microscope does to happily and productively use it in our research. Modeling tools should be more widely available, and some genuine knowledge and training in the basics of how modeling works should become an important and standard part of the field. Experimentalists and theoreticians should work together to make this a reality over the next decade – an achievement that would no doubt have made D'Arcy Thompson very happy.
Acknowledgements
The author thanks all members of the Sharpe lab for numerous stimulating discussions about the role of modeling in development.
Footnotes
Funding
J.S. was supported by the Centre for Genomic Regulation (CRG), the Catalan Institute for Research and Advanced Studies (ICREA) and the Spanish Ministry of Economy and Competitiveness, through ‘Centro de Excelencia Severo Ochoa 2013-2017’ (SEV-2012-0208).
Competing interests
The author declares no competing or financial interests.