ABSTRACT
Human obesity has a large genetic component, yet has many serious negative consequences. How this state of affairs has evolved has generated wide debate. The thrifty gene hypothesis was the first attempt to explain obesity as a consequence of adaptive responses to an ancient environment that in modern society become disadvantageous. The idea is that genes (or more precisely, alleles) predisposing to obesity may have been selected for by repeated exposure to famines. However, this idea has many flaws: for instance, selection of the supposed magnitude over the duration of human evolution would have fixed any thrifty alleles; famines kill the very young and very old rather than the lean; and there is no evidence that hunter-gatherer populations become obese between famines. An alternative idea (called thrifty late) is that selection in famines has only happened since the agricultural revolution. However, this is inconsistent with the absence of strong signatures of selection at single nucleotide polymorphisms linked to obesity. In parallel to discussions about the origin of obesity, there has been much debate regarding the regulation of body weight. There are three basic models: the set-point, settling point and dual-intervention point models. Selection might act against low and high levels of adiposity because food unpredictability and the risk of starvation select against low adiposity, whereas the risk of predation selects against high adiposity. Although evidence for the latter is quite strong, evidence for the former is relatively weak. The release from predation ∼2-million years ago is suggested to have led to the upper intervention point drifting in evolutionary time, leading to the modern distribution of obesity: the drifty gene hypothesis. Recent critiques of the dual-intervention point/drifty gene idea are flawed and inconsistent with known aspects of energy balance physiology. Here, I present a new formulation of the dual-intervention point model. This model includes the novel suggestion that food unpredictability and starvation are insignificant factors driving fat storage, and that the main force driving up fat storage is the risk of disease and the need to survive periods of pathogen-induced anorexia. This model shows why two independent intervention points are more likely to evolve than a single set point. The molecular basis of the lower intervention point is likely based on signalling through the leptin pathway. Determining the molecular basis of the upper intervention point is a key target for future obesity research. A potential definitive test to separate the different models is also described.
Introduction
A review of 1698 studies of obesity [body mass index (BMI) >30] prevalence across 200 countries and territories, involving >19-million subjects, revealed that 10.8% of men and 14.9% of women had obesity in 2014 (NCD Risk Factor Collaboration, 2016). Given a global adult population of approximately five-billion, this suggests that there are >600-million obese individuals. This is an important issue because obesity is a predisposing factor for several key non-communicable diseases such as hypertension, cardiovascular diseases, metabolic diseases such as type 2 diabetes, and certain types of cancer (Calle et al., 2003; Calle and Kaaks, 2004; Chan et al., 1994; Isomaa et al., 2001). Individuals with obesity therefore have a greater risk of all-cause mortality (The Global BMI Mortality Collaboration, 2016) and this risk accelerates as BMI increases above 30. By 2010, the obesity problem was consuming significant proportions of the total healthcare expenditure of many countries (Withrow and Alter, 2010): for example, in 2008, the financial cost of obesity in the USA was estimated to be US$147 billion (Finkelstein et al., 2009).
Obesity risk has a large genetic component (Allison et al., 1996a; Maes et al., 1997). Although heritability estimates vary, meta-analyses based on comparisons of monozygotic and dizygotic twins suggest that ∼65–70% of the variation is genetic (Allison et al., 1996a). Indeed, adopted children show much higher correlations in body weight/fatness to their biological parents than to their foster parents (Stunkard et al., 1986; Sorensen et al., 1989). Similarly, twins raised apart show almost no correlation in body fatness to the siblings they are raised with, but a high correlation to their co-twin when monozygotic and a much weaker one when dizygotic (Stunkard et al., 1990; Price and Gottesman, 1991). The time course of the epidemic, which only began ∼60 years ago, means that large-scale changes in the genetic make-up of the population are an unlikely cause. There could be some contribution of genetic changes owing to assortative mating for obesity (Hur, 2003; Hebebrand et al., 2000; Allison et al., 1996b; Silventoinen et al., 2003; Jacobson et al., 2007; Speakman et al., 2007); however, calculations based on population models and observed heritability suggest that, at most, only approximately a third of the observed increase in the prevalence of obesity can be attributed to assortative mating (Speakman et al., 2007). It is generally assumed that this genetic component is a consequence of events in our evolutionary history. Hence, as well as being an issue of considerable medical importance, obesity also raises some interesting intellectual questions. In particular, how is it possible that evolution has favoured development of a trait with such manifestly disadvantageous consequences?
Body fat appears to serve several important metabolic functions. First, it may act as an insulator, reducing the energy demands required in endothermic animals to maintain core body temperature. By virtue of its lower water content, fat tissue has a lower thermal conductivity than lean tissue (Cohen, 1977; Cooper and Trezek, 1971) and, hence, when fat tissue is distributed in a continuous subcutaneous layer (such as the layer of blubber in pinnipeds and cetaceans), it may provide substantial thermal insulative benefits. Studies of humans immersed in cool water (15–20°C) show that individuals who have greater adiposity cool down more slowly than leaner individuals and do not need to elevate their metabolic rate as much to defend this slower cooling rate (Cannon and Keatinge, 1960; Buskirk et al., 1963; Kollias et al., 1974; Keatinge, 1978; Hayward and Keatinge, 1981; Tikuisis et al., 1988; Glickmann-Weiss et al., 1991). These studies indicate that obesity in humans also provides an insulative advantage. However, in other species, this role is less clear. Indeed, by shifting the thermal response curve downwards, a thick insulative layer of fat may be disadvantageous when it comes to heat dissipation. There are several studies suggesting that humans with obesity suffer more from heat stress than leaner individuals. Given that problems dissipating heat may be an important limiting factor for many endothermic animals (Speakman and Król, 2010), a thick layer of subcutaneous fat may be a disadvantage in many circumstances, and the distribution of fat stores in tropical animals may reflect these heat dissipation issues (Pond, 1992). Indeed, a recent study suggested that adiposity in small mammals (mice) is not related to whole-body thermal insulation (Fischer et al., 2016), although the way in which the analysis was performed in this study may have obscured any connection (Speakman, 2017).
Second, fat tissue also serves as a store of energy. Fat is useful for this purpose because the energy density of fat is approximately twice that of carbohydrates and protein (Blem, 1990) and it does not need to be stored with water. Compared with external energy stores (such as caches of nuts and seeds), internal energy stores have several advantages; the main advantage being that they are readily available to be mobilised to meet immediate demands. Furthermore, internal energy stores cannot be stolen by other individuals (of the same or different species) or degraded by fungi or bacteria (van der Wall, 1990; Humphries et al., 2001). However, much less energy can be stored internally than in external caches. For example, in autumn, the chipmunk (Tamias striatus), which relies principally on stored acorns to survive the winter, can deposit up to 4600 kJ day−1 of acorns into its winter cache (Humphries et al., 2002). This process is repeated for many days in a row, creating a cache containing potentially 10× this amount. A cache of 46,000 kJ would be equivalent to about 1280 g of fat (assuming 36 kJ g−1: Blem, 1990), which is more than 10× the animal's body weight. In contrast, the limit of internal fat storage is approximately 40% of body weight (Kunz et al., 1998; Buck and Barnes, 1999).
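The scale of this difference can be checked with a short back-of-envelope calculation using the figures cited above. This is only a sketch: the 100 g body mass assumed for the chipmunk is an illustrative round number, not a value from the text.

```python
# Back-of-envelope comparison of external caching vs internal fat storage,
# using the chipmunk figures cited above (Humphries et al., 2002; Blem, 1990).
# The 100 g body mass is an illustrative assumption.

CACHE_RATE_KJ_PER_DAY = 4600   # acorns deposited per day into the cache
CACHE_DAYS = 10                # 'many days in a row' -> ~10x the daily rate
FAT_ENERGY_KJ_PER_G = 36       # energy density of fat (Blem, 1990)
BODY_MASS_G = 100              # approximate chipmunk body mass (assumption)
MAX_FAT_FRACTION = 0.4         # internal limit: ~40% of body mass

cache_kj = CACHE_RATE_KJ_PER_DAY * CACHE_DAYS            # 46,000 kJ
fat_equivalent_g = cache_kj / FAT_ENERGY_KJ_PER_G        # ~1280 g of fat
internal_limit_kj = BODY_MASS_G * MAX_FAT_FRACTION * FAT_ENERGY_KJ_PER_G

print(f"Cache energy:       {cache_kj:,} kJ")
print(f"As fat:             {fat_equivalent_g:.0f} g "
      f"(~{fat_equivalent_g / BODY_MASS_G:.0f}x body mass)")
print(f"Internal fat limit: {internal_limit_kj:,.0f} kJ, i.e. the cache holds "
      f"~{cache_kj / internal_limit_kj:.0f}x more energy")
```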
The example of the chipmunk provides one answer to the question ‘why would animals need a store of energy?’. Food supply in the environment may follow seasonal patterns, which means that at some points during the year the amount of available food is predictably low. Animals, particularly those that live in temperate and arctic conditions where winter is often a period when obtaining a sufficient energy supply can be a challenge, may use fat to help them through these meagre times. However, if the animal is to remain endothermic, relying on stored fat alone would be insufficient to allow it to survive an entire winter. Hence, for animals that maintain endothermy throughout winter, either some food must be available for them to feed on to supplement their fat reserves, or they must hoard food in autumn to accumulate a cache that can be used in the winter months. An alternative strategy is to combine fattening up with a profound suppression of metabolic rate to minimise energy expenditure (i.e. hibernation). Hibernating animals thus show an annual cycle of fat accumulation prior to hibernation (Speakman and Rowland, 1999; Martin, 2008; Xing et al., 2015) that they can then utilise through the winter months. Migrating birds also accumulate fat prior to setting out and at stopover sites en route, in anticipation of the energy demands of a journey during which opportunities to feed will be reduced because they must fly across open water and/or travel at high altitude (Moore and Kerlinger, 1987; Moriguchi et al., 2010; Weber et al., 1994; Gómez et al., 2017; Bairlein, 2002; Alerstam, 2011).
A third reason to accumulate fat in anticipation of a predictable period when demand may exceed supply is reproduction. Animals are broadly categorised as either income breeders, which means that their reproduction is fuelled primarily by the energy that they collect during the reproductive event, or as capital breeders, which means that they store energy in advance to utilise when reproducing (Gittleman and Thompson, 1988; Jonsson, 1997; Meijer and Drent, 1999; Doughty and Shine, 1997; Houston et al., 2007). Mice, for example, do not appear to store any fat specifically to support reproduction: although they generally run their fat stores down to almost nothing during peak lactation, this fat contributes less than 5% of the total energy requirements of reproduction (Johnson et al., 2001; Speakman, 2008a). Indeed, they may run down their reserves to reduce any insulative effect of the fat and thereby maximise heat dissipation, which is probably a key limiting factor on lactation performance (Król and Speakman, 2003; Król et al., 2007a,b; Johnson and Speakman, 2001; Speakman and Król, 2011). At the other extreme, some breeding seals accumulate body fat and then do not feed at all during lactation, relying entirely on this reserve to synthesise milk that is continuously transferred to their offspring (Bowen et al., 1985, 1992; Kovacs and Lavigne, 1986; Iverson et al., 1993). Numerous animals follow intermediate strategies (e.g. the cotton rat Sigmodon hispidus) (Rogowitz and McClure, 1995; Rogowitz, 1998). Another factor influencing fat storage in reproduction is whether the female provides the effort alone, or has helpers. A phylogenetic analysis of 87 mammal species showed that if females have helpers, the amount of stored fat reserves called upon for reproduction is reduced (Heldstab et al., 2017).
Although many animals accumulate fat in anticipation of a particular future event where either food supply will be reduced or expenditure will be increased, most animals also store some fat when no such event is anticipated. It has been suggested that the reason for storing fat in these situations is to provide a source of energy that would cover any period of unanticipated failure in the food supply, i.e. as a hedge against the risk of starvation (McNamara and Houston, 1990; Houston et al., 1993; Houston and McNamara, 1993; McNamara et al., 2005); with respect to fat storage in humans, this is also called the food unpredictability hypothesis or the food insurance hypothesis (Nettle et al., 2017). The potential of fat as a reserve to endure periods of catastrophic food supply failure (i.e. famine) was the first suggested hypothesis for why humans have a genetic predisposition to obesity (Neel, 1962). Neel was primarily concerned with the evolution of diabetes, which poses a similar evolutionary conundrum to obesity, i.e. it is disadvantageous but highly heritable. He suggested that insulin resistance (a precursor to type 2 diabetes) promotes efficient storage of fat, and that such fat would enable individuals to better survive periods of famine. The genes enabling such efficient fat deposition (which he termed ‘thrifty genes’) would thus be selected for (Neel, 1962). This idea became known as the ‘thrifty gene hypothesis’ (TGH). It was soon recognised that the key to this argument was fat deposition and, hence, genes unrelated to diabetes that also promote fat deposition would also be ‘thrifty genes’ and selected for. As a result, the idea developed into an evolutionary explanation for the genetic basis of obesity, as well as type 2 diabetes (Dugdale and Payne, 1977; Chakravarthy and Booth, 2004; Eknoyan, 2006; Prentice, 2001; Watnick, 2006; Wells, 2009, 2010, 2012; Prentice, 2005).
Although the contention that survival in historical famines created the modern-day genetic susceptibility to obesity is superficially attractive, it has many flaws that have been elaborated in detail elsewhere (Speakman, 2006, 2007, 2008b). A major issue with the TGH is that if, as is claimed, there have been numerous famines over the course of human evolution and massive mortality of lean individuals during these famines, then the force of selection on alleles that cause obesity would be enormous and they would move rapidly to fixation. Indeed, given a rate of famines of approximately one per 150 years, and a selective advantage of an obesity allele of only 0.5%, a mutation that was present in a single individual would be found in the entire population after only 6000 famine events (Speakman, 2006, 2007). In that case, we would all carry thrifty gene mutations and, hence, we would all have obesity; however, we do not. A second problem is that the pattern of mortality in famines does not match the underlying assumption of the TGH, i.e. the TGH assumes that the major factor driving mortality differences is body composition. In fact, the main factor is age. Famines kill the very young and very old (Speakman, 2006, 2007, 2013). There is also evidence for greater mortality among males (Speakman, 2013). There is no evidence that famines kill lean individuals in preference to obese ones; however, the absence of evidence is not evidence of the absence of an effect. Furthermore, if the TGH is correct, observations of hunter-gatherer populations should confirm that between periods of famine at least some individuals become obese (because it will be these individuals who survive the next famine). However, direct observations of hunter-gatherers and subsistence agriculturalists indicate that they do not become obese between famines (Speakman, 2006, 2007).
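The speed of this hypothetical sweep can be illustrated with the standard single-locus selection recursion, treating each famine as one episode of selection with the 0.5% advantage cited above. This is a minimal sketch: the starting population size and the near-fixation threshold are illustrative assumptions, so the exact count differs somewhat from, while remaining of the same order as, the ∼6000 famine events cited above.

```python
# Deterministic spread of a 'thrifty' allele under recurrent famine selection.
# Each famine is treated as one selection episode with advantage s = 0.005
# (Speakman, 2006, 2007). The population size and the near-fixation threshold
# are illustrative assumptions.

S = 0.005                    # selective advantage per famine
POP_SIZE = 100_000           # illustrative ancestral population (assumption)
FAMINE_INTERVAL_YEARS = 150  # approximately one famine per 150 years

p = 1.0 / (2 * POP_SIZE)     # allele starts as a single copy
famines = 0
while p < 0.9999:            # 'found in (essentially) the entire population'
    p = p * (1 + S) / (1 + p * S)   # standard one-locus selection recursion
    famines += 1

print(f"Famines to near-fixation: {famines}")                       # ~4300
print(f"Elapsed time: ~{famines * FAMINE_INTERVAL_YEARS:,} years")  # well within hominin evolution
```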
These issues with the original formulation of the TGH have been addressed by suggesting that selection has not been acting throughout the whole of human history, but rather that famines are a feature only of the time since the development of agriculture ∼12,000 years ago (Prentice et al., 2008). This shorter period of more intense selection might explain why there are still a large number of unfixed obesity-related alleles and a diversity of weight deposition responses to the modern glut of food. This idea has been called the ‘thrifty late’ hypothesis (Ayub et al., 2014) and would explain why hunter-gatherer populations do not become fat between famine events (i.e. they have not undergone the same strong selection over the past 12,000 years as populations who depend on agriculture). If this idea is correct, we would anticipate a strong signal of selection in the human genome at genetic loci (single nucleotide polymorphisms, SNPs) that have been linked to variation in obesity (Prentice et al., 2008). An early study found no evidence for such selection, but the number of SNPs and populations included was relatively small (Southam et al., 2009). To date, 115 loci (Locke et al., 2015) have been linked to BMI (which is a proxy measure for obesity: Romero-Corral et al., 2008). We explored the 1000 Genomes Project data for signatures of selection at these 115 loci: although nine SNPs showed positive selection, this was no different from the number found in a random selection of 115 SNPs across the rest of the genome. Moreover, of these nine loci, five involved selection favouring leanness rather than obesity (Wang and Speakman, 2016). This suggests that common variants linked to BMI are not ‘thrifty genes’ that have been under intense selection for the past 12,000 years.
However, this finding does not completely rule out the existence of ‘thrifty genes’ that cause obesity, given that together the 115 single nucleotide common variants (Locke et al., 2015) explain only 3% of the variation in BMI, whereas 65–70% of the variance in BMI has a genetic basis (Allison et al., 1996a). It is possible that specific populations carry variants that are extremely rare, or absent, in other populations and, hence, would not emerge in a global genome-wide survey for variants linked to BMI. Nevertheless, in these specific populations, these variants might explain a large proportion of the variation in obesity, i.e. the shortfall between the known heritability and the proportion explained by common variants. One potential example of such a gene was found in the people living on the island of Samoa in the South Pacific Ocean (Minster et al., 2016). The variant is a mis-sense mutation that results in a coding change in the gene CREBRF. It is found in 25% of the Samoan population but is almost undetectable in other populations. The haplotype structure around the allele indicates an extended region of homozygosity, which suggests that it has been under strong selection. However, this structure might also be generated by rapid population expansion, so it is not definitive that the gene has been under strong selection, and its high frequency in the population might then reflect a founder effect. The gene seems to be linked at a cellular level to adipocyte responses to starvation, and the insertion of the mutation into adipocytes protects them from starvation-induced mortality. Whether this cell-based assay also means that the gene protects people from starvation-induced mortality is unclear, although this was the basis of the claim that the gene variant is ‘thrifty’ (Minster et al., 2016). If this is shown to be the case, it would be strong evidence that extremely rare ‘thrifty genes’ favouring survival in famines may exist and be selected for in small populations. Furthermore, Budnik and Henneberg (2017) have suggested that the increase in the prevalence of obesity is related to relaxation of selection over the past few generations.
Models of fat storage regulation
Set-point theory
In parallel with the discussion regarding the evolutionary underpinning of genetic susceptibility to obesity, there has been another debate regarding how, or indeed whether, levels of body fat are regulated. The idea that the body has a regulatory feedback system controlling fatness was first suggested in the 1950s (Keys et al., 1950; Kennedy, 1953; Mayer, 1953; Passmore, 1971; Sclafani, 1971; Harris and Martin, 1984). This feedback loop became known as the lipostat, and the whole idea was called the lipostatic theory of body weight regulation. The evidence supporting this proposition came primarily from experiments in rats (Kennedy, 1953). Rats are able to regulate their intake to match demands very precisely, even when those demands can vary enormously owing to, for example, exposure to cold temperatures, or lactation. The result is that body fatness remains very stable. Kennedy (1953) also showed that experimental lesioning of the hypothalamus could disrupt this regulatory pattern, repeating similar work from the 1940s (Hetherington and Ranson, 1940). Kennedy suggested that a factor produced by the depot fat might signal to the brain how fat an individual is, and that this signal is picked up in the hypothalamus, allowing the animal to adjust its intake or expenditure accordingly to keep the fat depots constant (Kennedy, 1953). Feedback systems such as thermostats and lipostats require a reference level to which the incoming signal is compared and then an output action is generated. The disruption of regulation by lesioning the hypothalamus suggested that such a ‘set point’ of the lipostatic system might be encoded in this region of the brain.
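In engineering terms, the lipostat described above is a negative feedback controller: a fat-derived signal is compared with a stored reference, and the error drives corrective changes in intake or expenditure. The sketch below is only a caricature of that logic; the set point, gain and units are arbitrary assumptions.

```python
# Caricature of the lipostatic ('set-point') feedback loop described above.
# All constants are arbitrary assumptions chosen for illustration.

SET_POINT_G = 10.0  # reference fat mass encoded in the brain (arbitrary units)
GAIN = 0.5          # strength of the corrective response (arbitrary)

def corrective_drive(fat_mass_g: float) -> float:
    """Negative feedback: drive is positive below the set point (eat more),
    negative above it (eat less / expend more)."""
    return GAIN * (SET_POINT_G - fat_mass_g)

fat = 14.0  # start perturbed above the set point
for day in range(30):
    fat += 0.1 * corrective_drive(fat)  # 0.1 converts drive to fat change (arbitrary)
print(f"Fat after 30 days: {fat:.2f} g (decays back towards the {SET_POINT_G} g set point)")
```

The essential property is that any displacement, in either direction, generates a restoring response; this is precisely the property that the settling point and dual-intervention point models discussed below relax in different ways.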
This work was followed up by elegant parabiosis experiments that demonstrated the existence of Kennedy's hypothetical circulating factor derived from depot fat (Hervey, 1959). Individual animals were surgically united so that they had a shared circulation. When an animal that had had its brain lesioned in the hypothalamus was linked with one that had not, the lesioned partner of the pair over-consumed food and became obese. However, the other rat, which was exposed to the signals coming from the fat of the obese individual, reduced its food intake and became correspondingly much thinner. Similar work by Coleman and colleagues in the 1970s using genetically obese mice demonstrated that ob/ob mice seemed to lack this signal from their fat, whereas db/db mice lacked the ability to respond to it (Coleman and Hummel, 1969; Coleman, 1973).
This model was given strong support when the molecular basis of the genetic defect underpinning the ob/ob mice was finally discovered (Zhang et al., 1994). The defect is a premature stop codon in a gene expressed almost exclusively in adipose tissue that codes for a hormone called leptin. Leptin is produced in direct proportion to the amount of adipose tissue in the body and, hence, this provides the required adiposity signal for the lipostat. As anticipated by the parabiosis experiments, the db/db mouse has a mutation in the signalling form of the leptin receptor in the hypothalamus (Chen et al., 1996), which was presumed to be the locus of the set-point regulatory circuitry. This receptor is located on two neuronal populations in the arcuate nucleus of the hypothalamus that express either anorexigenic neuropeptides [pro-opiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART): Kristensen et al., 1998] or orexigenic neuropeptides [agouti-related peptide (AgRP) and neuropeptide Y (NPY): Mercer et al., 1996a,b]. These project to second-order neurons in the paraventricular nucleus that express the melanocortin 3 and 4 receptors (MC3R and MC4R) (Schwartz et al., 2000). These discoveries were expanded to include an increasingly complex network of signalling pathways from the alimentary tract that are linked to food intake and the regulation of energy expenditure (Qu et al., 1996; Turton et al., 1996; Morton et al., 2006; Berthoud, 2011; Myers and Olson, 2012). Despite this system routinely being interpreted as underpinning the lipostatic control mechanism (Campfield et al., 1995; Collins et al., 1996; Friedman and Halaas, 1998; Keesey and Hirvonen, 1998; Woods et al., 1998; Cowley et al., 1999; Friedman, 2010), the molecular nature of the set point itself is unknown. Humans who have mutations in the leptin gene or its receptor are always ravenously hungry and have morbid obesity (Montague et al., 1997; Clément et al., 1998); however, those with defective leptin production can be ‘cured’ by the administration of exogenous leptin (Farooqi et al., 1999), which indicates that the same system found in mice is likely to control human adiposity.
The discovery of leptin led to the hope that the solution to the obesity problem was just around the corner. The first idea was that perhaps individuals with obesity had a leptin production defect, i.e. that the reporting system to the lipostatic control centre was under-reporting their weight. However, apart from in a few unusual cases, leptin production in individuals with obesity is completely normal and they have very high levels of circulating leptin (Considine et al., 1996; Maffei et al., 1995; Adami et al., 1998; Campostano et al., 1998). Nevertheless, injecting leptin into mice reduced their body fatness (Halaas et al., 1995; Pelleymounter et al., 1995; but see Mackintosh and Hirsch, 2001; Faouzi et al., 2007). This was followed up by several trials where people with obesity were given exogenous leptin; however, adding leptin to a high existing level had very little impact (Heymsfield et al., 1999). Attention then moved on to the concept of leptin resistance (akin to insulin resistance) (Fam et al., 2007; Myers et al., 2012), i.e. that individuals with obesity no longer respond to their high circulating leptin levels (Frederich et al., 1995). The interpretation of obesity as a defect in leptin production was therefore altered to the view that people with obesity have some defect in their lipostatic control system (i.e. leptin resistance: Myers et al., 2012), rather than prompting a questioning of the whole notion of lipostatic control.
One question that was seldom considered in these studies was why individuals might need to regulate their body fatness at all. The TGH was built on the premise that individuals that store more fat have a selective advantage compared with lean individuals because it allows them to avoid starvation during periods of famine. This might explain why there was a selective pressure to push fatness upwards. The question is: what evolutionary selective pressure might act in the opposite direction to prevent individuals becoming too fat? In other words, why would there be a set point with regulation against fat levels that were both too high, or too low, relative to a given value, instead of a system that simply pushed people to become as fat as possible at the given level of resource availability? In parallel with research on human obesity, a whole field was developing in the study of wild animals that concerned exactly this question, i.e. the proximal costs and benefits of fat storage at different levels (Lima et al., 1985; Lima, 1986; McNamara and Houston, 1990; Houston et al., 1993; Bednekoff and Houston, 1994).
Most of this work concerned birds: it was argued that although increasing levels of fat would be protective against unforeseen shortfalls in food supply (i.e. the famine argument elaborated by Neel, 1962), increasing levels of stored fat would result in increased predation risk. The reason for supposing an impact on predation risk is that the laws of Newtonian physics mean that an individual bird carrying excess weight would have a slower take-off speed and poorer aerial manoeuvrability than a leaner individual, which is likely to be disadvantageous during encounters with predators (Kenward, 1978; Lima, 1986; Sullivan, 1989; McNamara and Houston, 1990; Houston et al., 1993; Houston and McNamara, 1993; Witter and Cuthill, 1993). Empirical data on escape and flight behaviour have often supported these assumptions (Metcalfe and Ure, 1995; Kullberg et al., 1996; Lees et al., 2014; Van Den Hout et al., 2010; Nudds and Bryant, 2002; Lind et al., 1999); however, results that contradict these assumptions have also been reported (Kullberg, 1998; van der Veen and Lindström, 2000; Jones et al., 2009; Macleod, 2006; Dierschke, 2003). Empirical studies of changes in fat storage under changes in predation pressure largely support the idea that fat storage is reduced when predation pressure (or perceived predation pressure) increases (Gosler et al., 1995; Cresswell, 1998; Rogers, 2015; Pascual and Carlos Senar, 2015; Zimmer et al., 2011; MacLeod et al., 2007; Cimprich and Moore, 2006; Macleod et al., 2005a,b; Ydenberg et al., 2004; Pérez-Tris et al., 2004; Piersma et al., 2003; Gentle and Gosler, 2001; van der Veen, 1999; Carrascal and Polo, 1999; Fransson and Weber, 1997; Pravosudov and Grubb, 1998; Rogers, 1987; Witter et al., 1994; but see Lilliendahl, 1997, 1998). However, studies of birds killed by predators do not indicate that fatter birds are killed more frequently (Sullivan, 1989; Whitfield et al., 1999; Møller and Erritzøe, 2000; Genovart et al., 2010); indeed, some studies suggest that leaner birds are more prone to predation risk (Dierschke, 2003; Yosef et al., 2011).
With respect to the effect of starvation, there is substantial evidence that fat stores are modulated in relation to variations in daily ambient temperature, with birds storing more energy on days when it is colder, presumably to avoid the risk of overnight starvation (Krams, 2002; Krams et al., 2010; Rogers and Reed, 2003; Rintimaki et al., 2003; Ekman and Hake, 1990). Shortening the time for which food is available also increases fat storage (Bednekoff and Krebs, 1995). The assumption in these studies is that colder temperatures not only alter energy demands and average food supply but also change predictability; however, this is not always the case. Furthermore, individuals may store more fat when it is colder to exploit the insulative properties of fat, and hence reduce their energy demands, rather than to use it as an energy store. However, in experiments designed to separate the impact of average supply from variation in supply, the effects are not so clear cut (Cuthill et al., 2000). Several studies that have varied only the predictability of the food supply have suggested that birds may lose fat when supplies become unpredictable rather than gain it (Cucco et al., 2002; Acquarone et al., 2002; Boon et al., 1999). Nevertheless, other studies confirm the prediction of an increase in fat storage (Cornelius et al., 2017).
These ideas, initially developed in studies of avian ecology, were subsequently expanded to include similar arguments with respect to selective pressures on mammals, where the link to the physics of escape is less certain. Nevertheless, there have been several experimental studies indicating that actual (or perceived) increases in predation risk have a suppressive impact on body weight/fatness, particularly in small mammals such as mice and voles (Heikkila et al., 1993; Tidhar et al., 2007; Macleod et al., 2007; Monarca et al., 2015a,b; Sundell and Norrdahl, 2002; Carlsen et al., 1999, 2000). This effect may come about not because greater fatness reduces the quality of escape responses, but because predation risk depends on time spent moving, and the greater energy demands of fatter/heavier animals, such as voles, will increase their exposure to predation (Daly et al., 1990; Norrdahl and Korpimäki, 1998, 2000). Alternatively, smaller individuals may have access to refugia that are inaccessible to both larger individuals and predators (Sundell and Norrdahl, 2002). Studies have started to explore the molecular basis of this effect in the brains of mice (Tidhar et al., 2007; Genne-Bacon et al., 2016). As with birds, comparisons of individuals known to have been predated with those that were not do not suggest a strong bias towards heavier/fatter individuals (Temple, 1987; Penteriani et al., 2008; Daly et al., 1990) and can even suggest differential selection of the lightest (Dickman et al., 1991; Koivunen et al., 1996a,b). This could be because lighter individuals are more likely to be sick or juvenile. In experiments where healthy adult animals are given extra weights to carry (e.g. radiotransmitters), an increased mortality rate among those carrying tags has been shown in some studies (Webster and Brooks, 1980), but not in others (Korpimäki et al., 1996; Golabek et al., 2008). As with studies on birds, the evidence with respect to stochastic variations in food supply is less clear cut than the evidence on the impacts of modified predation risk. There is some limited support for the idea that increased stochasticity drives elevated fat storage (Zhao and Cao, 2009; Cao et al., 2009; Zhang et al., 2012; Zhu et al., 2014); however, other studies suggest the reverse (Monarca et al., 2015b). In humans, this idea is generally called the ‘food insecurity’ hypothesis, or the ‘hunger–obesity’ paradigm (Nettle et al., 2017; Dhurandhar, 2016), i.e. poverty leads to greater food insecurity, which engages mechanisms ancestrally linked to starvation risk and food unpredictability, to elevate fat storage. This idea has been used most frequently to explain the greater prevalence of obesity (and sequelae) in poorer communities (Crawford and Webb, 2011; Castillo et al., 2012; Dinour et al., 2007; Laraia, 2012; Larson and Story, 2011; Morais et al., 2014; Smith et al., 2009; Kaiser et al., 2012).
Settling point theory
The lipostatic set-point theory seems to neatly explain many aspects of the control of body adiposity in rodents, such as post-restriction hyperphagia and compensatory responses to over- and under-nutrition. Furthermore, genetic defects in the proposed signalling pathway of the lipostat result in the expected impacts on overall body fatness (Zhang et al., 1994). By extension, the presence of similar responses in humans to restriction and over-feeding, and the presence of individuals with similar genetic loss-of-function mutations to those seen in mice (Montague et al., 1997; Clément et al., 1998; Vaisse et al., 1998), implies that the same system also regulates body weight in humans. However, the main argument against the ‘set-point’ idea is the ongoing obesity epidemic: at least 600-million individuals appear to lack set-point-mediated control of their adiposity. One argument is that the set point varies between individuals and that those individuals who become obese simply have a higher set point. However, if this were the case, such individuals would always have a strong drive to increase their adiposity until their set point was achieved, which does not appear to match reality. People tend to slowly increase in body weight over many years without a major biological drive to do so, unlike, for example, the voracious appetites observed in children with malfunctions in the leptin gene (Montague et al., 1997). The main problem is that some people do not seem to react to this increase in adiposity by performing any compensatory actions. Moreover, this still raises the question: why do some individuals have high set points but others do not?
The changing body weight of humans, reflecting changes in lifestyle and behaviour, led to the suggestion that there may be no active regulation of body weight/adiposity (Wirtshafter and Davis, 1977; Payne and Dugdale, 1977a,b; Speakman et al., 2002; Speakman, 2004; Levitsky, 2005; Horgan, 2011). The suggestion instead is that adiposity is simply an epiphenomenon of the levels of intake and expenditure, i.e. an individual is in energy balance when intake of energy equals expenditure. If intake increases, there is a mismatch between intake and expenditure and, hence, adipose tissue expands to store the excess intake. However, when adipose tissue expands, there is always also an increase in lean tissue (Svenson et al., 2007), even during pre-hibernal fattening (Xing et al., 2015) and preparation for hibernation (Mitchell et al., 2011), and this lean tissue is metabolically active; therefore, expenditure increases. At some point, a new steady state will be reached where the increased intake is matched by the elevated expenditure, and at that point no further increase in adiposity will occur (Speakman et al., 2002; Christiansen et al., 2005, 2008). This is why arguments that eating, for example, an extra apple a day over a long enough period will make an individual obese are fallacious. If you eat an extra apple a day containing 240 kJ, then you will put on some weight until your metabolism increases by 240 kJ day−1, at which point you will stop gaining weight because you are back in energy balance. If you then stop eating the apple, your weight will slowly decline because expenditure now exceeds intake, and you will go back to your original weight, giving the illusion that weight is being regulated at this level; however, regulation of body weight itself may play no role in this. This idea was called the ‘settling point’ model because it suggests that body weight simply settles to a new equilibrium depending on the lifestyle impacts on energy balance. The settling point model suggests that obesity is simply a consequence of factors that promote greater intake of food (and/or lower expenditure of energy); for example, the structure of the built environment, car and television ownership, poverty and access to healthy/unhealthy foods.
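The apple argument can be made concrete with a toy settling point calculation in which expenditure scales with body mass and weight itself is never sensed or regulated. This is a minimal sketch; the expenditure coefficient and the energy cost of tissue deposition are round-number assumptions, not empirical values.

```python
# Toy settling point model: weight drifts passively until expenditure,
# which scales with mass, again matches intake. The coefficients below
# are round-number assumptions for illustration only.

EXPENDITURE_KJ_PER_KG = 130   # daily expenditure per kg body mass (assumption)
TISSUE_KJ_PER_KG = 30_000     # energy content of deposited tissue (assumption)

def settle(intake_kj_per_day: float, mass_kg: float, days: int) -> float:
    for _ in range(days):
        imbalance = intake_kj_per_day - EXPENDITURE_KJ_PER_KG * mass_kg
        mass_kg += imbalance / TISSUE_KJ_PER_KG  # surplus stored, deficit withdrawn
    return mass_kg

balanced_intake = 70.0 * EXPENDITURE_KJ_PER_KG     # a 70 kg person in energy balance
with_apple = settle(balanced_intake + 240, 70.0, days=10 * 365)
print(f"New settling point: {with_apple:.2f} kg")  # ~71.85 kg: heavier, then stable
```

Note that the gain is self-limiting (240/130 ≈ 1.85 kg) and fully reversible when the extra intake stops, exactly as described above, without any regulator ever monitoring weight itself.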
Tam et al. (2009) used a mathematical model to explore the regulation of body weight in mice, mirroring similar mathematical approaches to the same problem in humans (Speakman et al., 2002; Christiansen et al., 2008). Models were explicitly based on the set-point and settling point concepts. Leptin was presumed to act in the set-point model as the signal of body fat level, whereas in the settling point model, leptin was assumed to be a hormone involved in appetite regulation but not weight regulation. Both models worked well in terms of predicting the outcome (massively elevated adiposity) when the leptin system was disrupted. However, the models performed very differently when the system was perturbed by high-fat diet feeding, which perturbs energy intake levels. In this case, the settling point model performed significantly better than the set-point model, principally because the settling point model suggested that the mice would become fat, which they did, whereas the set-point model predicted that the mice would compensate for the elevated intake by increasing expenditure and hence would stay lean (at their set points).
Despite the superior performance of the settling point model in this situation, there remain observations that are more easily explained by the set-point model. In particular, the hyperphagia that follows a period of restriction would not be predicted to exist using the settling point model – instead individuals would be expected to slowly drift back to their original weight when they went back to their original intake and expenditure conditions. Moreover, when dieting, individuals would not experience a strong drive to break their diets – they would slowly drop body weight to a new steady state that they could maintain indefinitely as long as they remained on the diet indefinitely. Achieving this would not be difficult because there would be no drive to go back up to their previous adiposity levels, unlike the set-point model, where adiposity is driven by the difference between the existing condition and the level of the set point. However, this theory does not accord with the general experience of individuals when dieting. Furthermore, under the settling point model, we would not expect to see any compensatory changes in energy expenditure or intake to resist changes in energy balance – yet such changes are routinely observed (Norgan and Durnin, 1980; Leibel et al., 1995; Horton et al., 1995; Dulloo et al., 1997; Dulloo and Jacquet, 1998; Goldberg et al., 1998; Weyer et al., 2001; Galgani and Santos, 2016; Hall et al., 2011, 2012; Johannsen et al., 2012; Polidori et al., 2016). A review of 32 controlled feeding studies in humans concluded that the responses were most consistent with the set-point rather than the settling point model (Hall and Guo, 2017).
Dual-intervention point model and ‘drifty gene’ hypothesis
The dual-intervention point model is an attempt to reconcile observations supporting the existence of a set point, such as post-restriction hyperphagia and compensatory responses to manipulations of energy balance, with the observation that many environmental factors seem able to alter body fatness, and that these changes are not strongly (or even weakly) compensated for (Herman and Polivy, 1983; Levitsky, 2002, 2005; Speakman, 2004, 2007a,b; Speakman and Król, 2011). The idea is that rather than there being a single set point, there are two ‘intervention points’, between which there is a zone of indifference. The suggestion is that adiposity varies because of environmental factors, but that such changes only alter the adiposity within the zone between the two intervention levels. This also creates a space where hedonic signals may influence energy balance without conflicting with homeostatic mechanisms (Berridge et al., 2010; Kringlebach, 2004; Berthoud, 2003, 2006, 2007, 2011, 2012; Petrovich, 2013; Berthoud et al., 2017). It is only when the level of adiposity passes over one of the intervention points that physiological mechanisms kick in to bring the body fatness level back into the zone of indifference. This model has the benefit that it can explain multiple aspects of the phenomenon that the set-point or settling point models alone are unable to explain. Hence, the uncompensated changes in body weight that accompany lifestyle changes, and the responsiveness to hedonic cues, are interpreted as changes within the zone of indifference, whereas the compensatory responses to restriction, and the hyperphagia that follows it, are interpreted as responses to body fatness falling below the lower intervention point.
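The control logic that distinguishes this model from a set point can be stated compactly: the corrective drive is zero across an entire band of fat levels, rather than at a single value. The sketch below caricatures this logic; the intervention point values and gain are arbitrary assumptions.

```python
# Caricature of the dual-intervention point logic: correction is engaged
# only outside the zone of indifference. All values are arbitrary assumptions.

LOWER_G, UPPER_G = 8.0, 18.0   # intervention points (arbitrary units)
GAIN = 0.5

def regulatory_drive(fat_mass_g: float) -> float:
    """Zero inside the zone of indifference; corrective outside it."""
    if fat_mass_g < LOWER_G:
        return GAIN * (LOWER_G - fat_mass_g)   # e.g. post-restriction hyperphagia
    if fat_mass_g > UPPER_G:
        return GAIN * (UPPER_G - fat_mass_g)   # physiological capping of gain
    return 0.0   # environmental and hedonic factors dominate here

for fat in (5.0, 12.0, 22.0):
    print(f"fat = {fat:4.1f}: drive = {regulatory_drive(fat):+.2f}")
```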
The interpretation of the evolutionary background to the upper and lower intervention points has borrowed the same ideas that have been used to explain the existence of opposing forces behind the evolution of the set-point model: the upper intervention point is a reflection of predation risk and the lower intervention point is a reflection of starvation risk (Speakman, 2004, 2007a,b, 2008b). The main difference is that in the set-point model, these two forces act in concert to define a single set point, whereas in the dual-intervention point model, the two forces are effectively independent of one another. Under this model, changes in, for example, predation pressure would be expected to affect only the upper intervention point, leaving the lower intervention point unchanged, and susceptibility to obesity is interpreted as reflecting an altered upper intervention point. Individuals who have a raised upper intervention point might then drift upwards in body weight as a consequence of changing lifestyle and environmental factors, but they fail to react to this because their upper intervention points are much higher than those of individuals who do respond to the energy imbalance and weight gain by capping their intake and controlling their weight. The key unanswered question is, if this model is correct, why do individuals vary so widely in this important trait?
I have proposed a solution to the problem of where the variation in the upper intervention point comes from, called the ‘drifty gene’ hypothesis (Speakman, 2008a,b). Briefly, the argument is as follows: ∼4-million to 2-million years ago (mya), our hominin ancestors such as Australopithecus are likely to have been under extreme predation pressure. They were relatively small primates (29–52 kg: McHenry, 1992, 1994) living in an open savannah habitat where there were many species of large predators that are absent today (Treves and Palmqvist, 2007; Lewis, 1997). Gnawing marks made by predators are routinely found on the fossil bones of australopiths. One hypothesis is that species belonging to the large sabre-toothed felid genus Dinofelis, along with Megantereon cultridens and Panthera pardus, were specialised predators that fed on australopiths and paranthropines (Brain, 1981, 1994; Cooke, 1991; Lee-Thorp et al., 1994, 2000). Isotopic signatures of Dinofelis in particular indicate specialisation on prey that were mainly feeding on C4-derived plant material, which was a large part of the australopithecine diet (Sponheimer and Lee-Thorp, 1999). Therefore, given that these hominin primates were often likely to be prey, they probably had strong regulatory control preventing excess body weight gain as an anti-predation adaptation. Two-million years ago, Homo erectus evolved and developed a number of behavioural phenotypes that probably radically altered the predation landscape (Speakman, 2004, 2007a,b, 2008b). Homo erectus associated in larger social groups than the australopiths and paranthropines, used tools and discovered fire. Moreover, they were larger (Mathers and Henneberg, 1996). These four factors are likely to have been potent anti-predator mechanisms. Some authors have emphasised the roles of fire and weaponry (Kortlandt, 1980; Brain, 1981), building on observations that modern chimpanzees will attack model leopards with sticks and stones (Kortlandt, 1980, 1989). Others have emphasised the advantages of social behaviour in permitting shared tasks of vigilance (Treves and Naughton-Treves, 1999; Treves and Palmqvist, 2007). It has been suggested that co-operative social behaviour may have evolved specifically as an anti-predator mechanism (Hart and Sussman, 2005; Hart and Sussman, 2011). Dinofelis became extinct in east Africa ∼1.4-mya (Werdelin and Lewis, 2001), along with many other large-bodied carnivores between 1-mya and 2-mya (Lewis, 1997). These extinctions, combined with the behaviour of Homo species, are likely to have dramatically reduced the risk of predation.
I have suggested that this reduction of predation risk to very low levels would have removed the selective pressure maintaining the upper intervention point at a low level (Speakman, 2004, 2007a,b, 2008b). It has been implied that, for this model to work, predation must be reduced to zero (Higginson et al., 2016; Nettle et al., 2017); however, this is not a necessary condition – only that predation falls to a low enough level that it is no longer a significant force for differential selection on body weight. For example, predation might continue unabated on children or the elderly, but such predatory activity would be irrelevant for selection on body weight: as is the case for mortality in famines (Speakman, 2007a,b). The genes defining the upper intervention level would then have been subject to random mutations and drift – particularly in early hominin populations, where the effective population size was small (Takahata et al., 1995). This would explain the large diversity of upper intervention points in modern humans. A mathematical model of this process predicted the distribution of upper intervention points and, hence, the distribution of obesity phenotypes in modern humans (Speakman, 2007a,b). This model predicted that human populations will not continue to become more and more obese but rather will approach an equilibrium distribution. Indeed, recent studies have suggested that overweight and obesity levels in several countries have started to stabilise, especially among children (France: Czernichow et al., 2009; Czech Republic: Hamřík et al., 2017; Canada: Rodd and Sharma, 2017; USA: Skinner and Skelton, 2014). Given that the upper intervention point is likely to be encoded in the brain, this model also predicts that the dominant genetic factors linked to obesity will be centrally acting genes. This prediction is strongly supported by the results of genome-wide association studies, in which the majority of identified polymorphic loci with established functions act centrally (Locke et al., 2015), although this is also consistent with the set-point model, so it is not definitive. Finally, this model is also highly consistent with the finding that there is little evidence of strong selection at loci linked to adiposity (BMI) (Southam et al., 2009; Wang and Speakman, 2016).
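The distinctive statistical prediction, right skew arising when only the upper point drifts, can be illustrated with a toy simulation in which each lineage's upper intervention point performs a random walk (mutation plus drift) above a floor that is still maintained by selection. This is a deliberately crude caricature, not a reimplementation of the published genetic model; the population size, number of generations and mutation step are arbitrary assumptions.

```python
# Toy illustration of the 'drifty gene' skew prediction: the upper
# intervention point random-walks away from its ancestral value while a
# selectively maintained floor stays fixed. Parameters are arbitrary.
import random

random.seed(1)
N, GENERATIONS, STEP_SD = 5_000, 1_000, 0.07
ANCESTRAL, FLOOR = 12.0, 10.0

population = [ANCESTRAL] * N
for _ in range(GENERATIONS):
    # Unconstrained drift upwards; values below the floor are selected against
    population = [max(FLOOR, u + random.gauss(0.0, STEP_SD)) for u in population]

population.sort()
mean = sum(population) / N
median = population[N // 2]
print(f"mean = {mean:.2f}, median = {median:.2f} (mean > median: right skew)")
```

Because drift can only spread the trait upwards away from the floor, the mean is dragged above the median, which is the asymmetry that the linked-thresholds model discussed below cannot reproduce.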
Higginson et al. (2016) have criticised the ‘dual intervention point–drifty gene’ interpretation, and present instead a different model, which is effectively the set-point model with some additional refinements. The Higginson set-point model starts from a number of critical assumptions that are essential to generate their observed outcomes: (1) that feeding behaviour is switched on and off by the levels of the fat stores in the body; (2) that animals either only experience a predation cost when they are feeding, or that the predation cost is substantially greater when feeding than when resting; and (3) that there is a cost to switching between resting and feeding. The model predicts: (1) when there is no cost to switching, the animal dithers around the set point, constantly switching its feeding behaviour on and off; and (2) when there is a cost to switching, the animal alternately feeds until its fat stores hit an upper limit and then rests, depleting its reserves until its fat stores hit a lower limit. Although this model is superficially similar to the dual-intervention point model, the crucial difference is that in this model the upper and lower limits are linked together, because in reality they are a single limit that is split by the cost of switching between states, so that a shift in the risk of predation would affect both limits. This linkage between upper and lower limits therefore predicts that across a population the average fatness would increase or decrease in a symmetrical manner. By contrast, the dual-intervention point drifty gene model posits that there are upper and lower intervention points between which animals are insensitive to their levels of fat storage, and that release from predation has allowed the upper intervention point to drift upwards independently of the lower intervention point. Given that only the upper intervention point is drifting, the ‘drifty gene’ model predicts that the population distribution of body fatness would become increasingly right skewed. Such a prediction matches the reality of the changing shape of the distribution of BMI throughout the obesity epidemic in the USA (Flegal et al., 2012). Of the drifty gene hypothesis, Higginson et al. (2016) state ‘Speakman…used a genetic model to predict the distribution of body mass index (BMI) in modern humans given the drifty gene hypothesis and claims there is a good fit to observed distributions….we show that the predictions of this genetic model are changed when both the lower and upper thresholds are allowed to drift. The changed predictions do not match observation, meaning that the distribution of body masses in human populations in fact does not support an account in which both thresholds drift’. They also state ‘these thresholds will be inter-dependent such that altering the predation risk alters the location of both thresholds; a result that undermines the evolutionary basis of the drifty gene hypothesis’. This argument has a rather strange logic. They present a model where the upper and lower intervention points are linked. They then observe that the asymmetrical distribution of BMI in modern human populations does not fit the predictions of this new model. But rather than drawing the logical conclusion that the new model is therefore wrong, they conclude that this lack of fit somehow invalidates the original model, which does predict an asymmetrical distribution. Higginson et al. (2016) are correct that when upper and lower intervention points are linked and drifting, the predictions do not match the observations. However, as the only model that posits linked upper and lower intervention points is theirs, the logical conclusion is that they have proved their own model incorrect. I will next explore aspects of their model that explain why it fails.
A critique of the Higginson et al. model
The model presented by Higginson et al. (2016) depends on three critical assumptions. The first is that all excess energy intake is stored as fat. In reality, all animals have a number of storage buffers between their intake and the possibility of depositing or withdrawing energy from adipose tissue. The main other form of energy storage is the alimentary tract itself. When animals feed, they fill up an internal bag of food (the stomach or crop), from where the stored food is slowly released into the gut and digested. While an animal is resting after feeding, it is living on the energy stored in the alimentary tract, not, as assumed by Higginson et al., its body fat. Furthermore, if this alimentary store becomes depleted and for some reason it is impossible for the animal to feed, there is a second store of glycogen in the liver and muscles, which can also be drawn on before there are any calls upon fat storage. Hence, in the course of a normal day, with adequate food, the body fat stores may remain static and play no role in the initiation of feeding or resting decisions (see also below). As far as I am aware, there is no evidence that minute-to-minute feeding behaviour in animals is initiated or terminated by changes in the level of stored fat. Hence, the primary mechanism that Higginson et al. assume is driving the behaviour is not rooted in any known physiology of feeding behaviour.
The second key assumption is that the cost of predation is greater when foraging than when the animals are resting. There is some support for this contention (Daly et al., 1990; Norrdahl and Korpimäki, 1998, 2000); however, the difference in mortality risk is small, and in some cases follows a U-shaped curve with mobility (Banks et al., 2000). They model two scenarios: in one, there is zero predation risk when the animal is at rest; in the other, the predation risk at rest is >0, but the animal is still 10× more likely to be predated when foraging than when at rest. This element of the model is crucial to its predictions, but the mortality differences between states seem unrealistically high. In effect, the animal at each time point faces a decision: should it forage or should it rest? When the animal feeds, it deposits fat. Although this fat has a positive impact in terms of reducing starvation risk, it also has a negative impact because of the increased risk of predation when foraging. The Higginson et al. model predicts that animals should feed to accumulate fat until the increased benefit of starvation avoidance is offset by the increased risk of predation when foraging. If the risk of predation when foraging is the same as when resting, then there is no reason to stop foraging, because there is a benefit in starvation risk reduction but no cost. The Higginson et al. model predicts that such an animal will continue to forage and accumulate fat indefinitely. This is a vital element of the model, yet the empirical evidence does not suggest that there is the large risk difference assumed in the model formulation, and no animals are known to show such indefinite fat accumulation.
Finally, the most critical element of the Higginson et al. model is the suggestion that there is a cost to switching between the two states of resting and feeding. If such a cost does not exist, this model is simply the same as the set-point model. One might imagine that there is a cost to switching between resting and feeding if, for example, the only place where the animal rested was some distance from the feeding location, which might expose the animal to elevated risks of predation during the commuting time (if the assumption of greater risk when not resting is accepted). This would only be an issue if the animal could not combine foraging and commuting. However, in many (most) instances, animals can instantaneously alternate between feeding and resting; if there is no time cost to switching, there cannot be a predation cost.
The outcome of the ‘high cost of switching’ model is that the animal stops dithering between feeding and not feeding; however, the behaviour it adopts instead is equally unrealistic. The animal forages until the amount of stored fat reaches an upper limit. The animal then rests and withdraws fat until the amount of stored fat reaches the lower limit, whereupon it starts to feed again, i.e. foraging behaviour is driven only by the level of the fat stores. However, no animals are known to have such a physiological mechanism driving the initiation and termination of feeding behaviour.
A new look at the evolution of fat storage
Energy expenditure is continuous. Food intake happens in discrete meals. Consequently, animals need energy storage mechanisms that will sustain them between meals. The first storage mechanism consists of bag-like structures in the alimentary tract – such as the crop or stomach. Life is also dominated by the circadian cycle of light and dark. Most animals are either diurnal or nocturnal, with their main activity and food intake happening either in the day time or at night time. This means that animals experience long periods without food on a daily basis and usually need to be supplied with energy from stores that exceed the gut capacity. In this case, stored glycogen in the liver and muscles plays a key role. The new formulation of the dual-intervention point model that I present here starts with the observation that feeding behaviour is switched on and off primarily by fluctuations in circulating glucose levels (Mayer, 1953), the levels of the glycogen stores, the amount of food already in the gut (and associated hormones) and the diurnal cycle. When an animal that only feeds at night wakes up at dusk, its gut will be empty and its glycogen store run down. This will create a stimulus to feed, primarily owing to high levels of ghrelin produced by the stomach and low levels of other gut hormones such as peptide YY (PYY), cholecystokinin (CCK) and gastric inhibitory polypeptide (GIP), which normally inhibit feeding. The animal will continue feeding until signals from the alimentary tract (reduced ghrelin and elevated PYY, CCK and GIP) switch off the feeding behaviour. The animal then rests while digesting the food, which enters the body and is used to fuel its metabolism and replenish the glycogen stores. Over time, the ghrelin signal from the stomach increases and the levels of satiating hormones decrease. At some crucial point, this balance switches on feeding again. This alternating cycle continues through the night until the animal stops feeding when day breaks. The animal will feed before daybreak to fill up its gut and glycogen stores in anticipation of the daytime energy demands. Unlike the model suggested by Higginson et al. (2016), this model indicates that fat stores are usually not involved in this cycle of feeding and resting.
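The scheme described above can be summarised in a toy simulation in which feeding is switched on and off by gut fill (standing in for the ghrelin versus PYY/CCK/GIP balance) and metabolism runs off the gut and glycogen buffers, with fat drawn upon only as a last resort. All capacities, rates and thresholds below are illustrative assumptions, not measured values.

```python
# Toy version of the feeding scheme described above: meal initiation and
# termination depend on gut fill and the light cycle, never on fat mass.
# All parameter values are illustrative assumptions.

GUT_CAPACITY = 12.0
HUNGER_ON, SATIETY_OFF = 2.0, 11.0              # gut-fill switch points
EAT_RATE, ABSORB_RATE, DEMAND = 4.0, 1.5, 1.0   # per hour, arbitrary units
GLYCOGEN_CAP = 8.0

def simulate(hours: int = 48) -> float:
    gut, glycogen, fat = 6.0, 6.0, 20.0
    feeding = False
    for h in range(hours):
        night = (h % 24) >= 12                   # nocturnal feeder
        if night and gut <= HUNGER_ON:           # 'ghrelin high': start feeding
            feeding = True
        if gut >= SATIETY_OFF or not night:      # 'satiety hormones high': stop
            feeding = False
        if feeding:
            gut = min(GUT_CAPACITY, gut + EAT_RATE)
        absorbed = min(gut, ABSORB_RATE)         # digestion drains the gut store
        gut -= absorbed
        if absorbed >= DEMAND:                   # surplus refills glycogen first
            glycogen = min(GLYCOGEN_CAP, glycogen + absorbed - DEMAND)
        else:                                    # deficit: glycogen, then fat
            shortfall = DEMAND - absorbed
            from_glycogen = min(glycogen, shortfall)
            glycogen -= from_glycogen
            fat -= shortfall - from_glycogen     # fat is the buffer of last resort
    return fat

print(f"Fat after 48 h: {simulate():.2f} (unchanged from 20.0)")
```

With these assumptions, feeding starts and stops repeatedly across the simulated days while the fat store is never touched, which is the central point: fat can act purely as a reserve, like the deposit account in the analogy that follows, without participating in meal-to-meal decisions.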
In support of this description of daily energy balance, Fig. 1 shows the 24 h temporal changes in the food intake and body fat content of mice, measured using repeated non-invasive magnetic resonance imaging scans every 2 h. These data show that body fatness remains almost constant throughout the day, when the animal is predominantly resting. Feeding is initiated when the lights go out: for the first 4 h after feeding begins, body fatness actually falls, before recovering by the time the lights come on again. Feeding onset and offset are therefore regulated independently of the (minor) temporal changes in the fat stores. A useful analogy for this patterning is a current account at a bank. Individuals have their salary paid into the current account – like food intake entering the gut and filling the glycogen stores – and then draw money out to cover their day-to-day needs until the next salary cheque is deposited. The withdrawals are continuous and the deposits relatively discrete. Individuals may also have a deposit account, which is like the fat store. In a typical month, the deposit account is untouched; however, at any point an individual can top it up if the current account is in credit, or draw on it if expenditure in a given month looks likely to exceed income. The amounts moved on any particular occasion may be small, so the deposit account may grow quite slowly, at a rate unrelated to the magnitude of the salary and the monthly expenditure. Similarly, the fat store may play very little role in switching feeding on and off, but acts as a reserve that is topped up or depleted according to the surplus or potential deficit in the gut and the glycogen stores.
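To make the analogy concrete, the following toy simulation sketches a nocturnal animal whose feeding is switched on and off solely by the state of the combined gut-plus-glycogen pool (the ‘current account’); the fat store (the ‘deposit account’) is touched only if that pool runs dry. All parameter values are invented purely for illustration and are not calibrated to the mouse data in Fig. 1.

```python
# Toy sketch of the 'current account/deposit account' analogy: feeding is
# driven entirely by the gut + glycogen pool; fat only absorbs a shortfall.
# All parameter values below are invented purely for illustration.

POOL_CAPACITY = 6.0    # g: combined gut + glycogen ('current account')
FEED_THRESHOLD = 4.0   # feeding starts when the pool falls below this
INTAKE_PER_H = 2.5     # g per hour eaten while feeding
SPEND_PER_H = 0.3      # g per hour of continuous energy expenditure

def simulate_day(pool=1.8, fat=5.0):
    feeding = False
    for hour in range(24):
        dark = hour < 12                  # nocturnal: feeds only in the dark
        if not dark:
            feeding = False
        elif pool < FEED_THRESHOLD:
            feeding = True                # ghrelin high, PYY/CCK/GIP low
        if feeding:
            pool = min(pool + INTAKE_PER_H, POOL_CAPACITY)
            if pool >= POOL_CAPACITY:
                feeding = False           # satiation signals switch feeding off
        pool -= SPEND_PER_H               # expenditure is continuous
        if pool < 0:                      # only a shortfall touches the fat store
            fat += pool
            pool = 0.0
        print(f"h{hour:02d} dark={int(dark)} pool={pool:4.1f} g fat={fat:4.2f} g")
    return pool, fat

simulate_day()
```

With these (arbitrary) values, the pool cycles through discrete feeding bouts at night and drains through the light phase, while the fat store is never touched: the printout reproduces the qualitative pattern in Fig. 1, in which fat remains nearly constant while intake cycles.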
The level of the fat store may be modulated by two different risks: one pushing the size of the store up, the other pushing it down. In similar models, these risks have previously been suggested to be the risk of starvation due to unpredictable food shortage (pushing fat storage up) and the risk of predation, which increases as the store gets larger (pushing it down; see above under set points). In the model presented here, I also assume that predation risk is elevated at higher levels of fatness, but that predation can occur at any time, regardless of whether the animal is at rest or foraging, with no necessity for the risk when foraging to exceed that when resting. The only important factor is that the risk of being caught and killed is always greater for fatter individuals.
As detailed above, there is good supportive evidence that the risk of predation increases with fatness and that animals respond to increases in this risk by reducing their body weight/adiposity. However, the evidence that starvation risk and the unpredictability of the food supply push adiposity up is much more equivocal (see reviewed data above). Indeed, there is also strong experimental evidence that the starvation side of the supposed starvation–predation trade-off is not driven by the unpredictability of the food supply. This evidence comes from the numerous studies in which wild animals have been provided with supplementary food. These studies have usually been performed to ask whether the food supply places a proximate limit on population size, but they commonly measure body weight as a secondary outcome. Providing supplementary food reduces the unpredictability of the food supply; hence, if the starvation–food unpredictability idea were correct, food supplementation should lead animals to reduce their fat stores. It does not. A review of 178 published studies of food supplementation in mammals revealed that the most common response was to increase body weight (Boutin, 1990), and similar data are available for food supplementation studies of birds (Rogers and Heath-Coss, 2003). These data provide a strong reason to reject the starvation–food unpredictability hypothesis for fat storage. So why does body weight increase? One potential reason is that supplementary food reduces the time spent foraging (Boutin, 1990). Given that mobility seems to be linked to greater predation risk (Daly et al., 1990; Norrdahl and Korpimäki, 1998, 2000), supplementary food allows the animals to be less active, which is perceived as reduced predation risk, allowing them to increase their reserves accordingly (see also Morosinotto et al., 2017 for interactions between supplementation and predation risk).
Here, I suggest a different selective pressure to increase fat stores, one that is unrelated to food predictability: the risk of disease. If an animal becomes infected with a disease that is in any way debilitating, there are two good reasons to inhibit foraging behaviour. First, a sick animal may be inefficient at foraging: the increased costs of foraging may not be covered by the increased gains, so foraging might exacerbate the problem of energy imbalance. Second, the animal may be more susceptible to predation (Genovart et al., 2010). Most of us know from experience that when you are ill you have little desire to run around or eat. This phenomenon has been studied extensively and is so well established that it has several names: ‘pathogen-induced anorexia’ or ‘sickness syndrome’ (reviews in Konsman et al., 2002; Kelley et al., 2003; Kyriazakis, 2014). An example from a collaborative study with my own laboratory is red grouse experimentally infected with the nematode Trichostrongylus tenuis (Delahay et al., 1995) (Fig. 2). In this study, we experimentally infected some birds but left others uninfected. Sixteen days later, we measured their food intake, resting metabolic rate and daily energy expenditure, and determined the amount of energy that infected birds spent on activity compared with uninfected birds. The food intake and resting metabolic rate of infected grouse were both ∼33–38% lower than those of uninfected grouse. However, the biggest difference was in physical activity energy expenditure, which was 83% lower in infected birds because the birds stopped moving to wait out the infection. This provides a much stronger and better-founded reason for animals to store fat than the unpredictability of the food supply.
There is also a strong neurophysiological understanding of how pathogen-induced anorexia works in the brain (Konsman et al., 2002). During illness, we produce large quantities of cytokines that mediate the immune response to infection, including members of the interleukin (IL) family (notably IL-1 and IL-6) and interferon gamma (Bluthé et al., 2000a,b). Several of these cytokines also affect food intake. Initially, the most important mediator of appetite loss during illness was considered to be tumour necrosis factor alpha (TNFα); however, during the 1990s it became clear that TNFα may not act alone but is complemented by other cytokines working in concert (Konsman et al., 2002; Kelley et al., 2003). In the early 2000s, IL-18 was identified as an additional player in this appetite-suppressing cytokine repertoire (Zorrilla et al., 2007). Mice that lack IL-18 eat more food, have reduced energy expenditure and eventually develop obesity (Zorrilla and Conti, 2014), whereas injecting IL-18 into the ventricles of the brain reduces food intake (Netea et al., 2006). One problem in understanding the role of IL-18 in pathogen-induced anorexia is that there are no receptors for IL-18 in the hypothalamus. However, there are IL-18 receptors on a small number of neurons in the bed nucleus of the stria terminalis (BST) in the extended amygdala.
The subset of neurons in the BST expressing IL-18 receptors sends projections to the lateral hypothalamus. Injection of IL-18 directly into this area of the brain in mice resulted in a significant reduction in food intake but, interestingly, had no impact on physical activity levels, body temperature or energy expenditure (Francesconi et al., 2016). Brain-slice work showed that the BST neurons projecting to the hypothalamus were strongly activated by glutamate, and that this excitatory effect was reduced when they were treated with IL-18. Because these neurons project to the hypothalamus, reducing their excitatory input had a knock-on effect on the activity of hypothalamic neurons: treating cells in the BST with IL-18 increased the firing of hypothalamic neurons, because the projecting BST neurons released less GABA, which normally inhibits the hypothalamic neurons. The pattern of changes recorded in the hypothalamus was consistent with that expected when animals experience a reduction in appetite, and these effects were abolished in brain slices from the IL-18 knockout mouse (Francesconi et al., 2016).
Based on these well-founded data, I suggest that animals do not store fat as a hedge against starvation owing to food unpredictability, but rather as a hedge against disease, allowing them to endure periods of pathogen-induced anorexia. Not only is there strong evidence for pathogen-induced anorexia, but there is also a large body of evidence that individuals carrying more fat have a better chance of survival when infected. For example, mortality from pneumonia in elderly humans in the 30 days following contraction of the infection is more than three times higher in underweight individuals (mortality rate 19.4%) than in individuals with obesity (5.7%) (Nie et al., 2014). Carrying more fat seems to be protective against mortality from many different diseases, both infective and non-communicable: a phenomenon commonly called the obesity paradox (Lavie et al., 2003; Curtis et al., 2005). The fact that stored fat has this beneficial impact across multiple diseases might be interpreted as indicating a common mechanism – for example, providing a buffer against disease-induced anorexia. However attractive this appears, some complexities suggest the situation is more involved. For example, with respect to kidney disease, individuals with obesity may survive longer because they are prescribed more dialysis on the basis of their greater body weight, rather than because of any physiological mechanism (Speakman and Westerterp, 2010). In addition, there is the perplexing phenomenon that if, for example, an individual has a stroke, greater adiposity protects them from future mortality from stroke but not from future mortality due to other illnesses (Bagheri et al., 2015, 2016), as would be expected if the fat were simply a buffer against the energy imbalance arising from illness-induced anorexia. Nevertheless, despite these complexities, substantially more data support the idea that stored fat serves as a hedge against disease than as a hedge against starvation caused by food unpredictability. These risks are not entirely independent, however: food shortage may compromise immune defences and, hence, exacerbate disease risk. As previously noted, most people dying in modern famines die not because they run out of energy but because weakened immune systems make them susceptible to disease (Speakman, 2006). Moreover, small mammals under caloric restriction show increased susceptibility to nematode infections and a reduced ability to clear them (Kristan, 2007, 2008).
Now consider a different set of parameters for the constants relating fatness to the mortality risks from predation and disease (a=0.53, b=0.07, c=0.123, g=0.117, h=j=0). Using these parameters (Fig. 4), the curves relating disease and predation risk to mortality are much flatter. Nevertheless, solving Eqn 4 for the fatness at which mortality is lowest gives the same answer as in Fig. 3 (i.e. FA=5 g). In spite of this similarity in FA, there is a large difference between the curves in Figs 3 and 4: namely, the mortality consequence of the animal's body fat deviating from FA. The main problem with the mathematical model introduced above, which is used to derive FA, is that it assumes perfectly invariant relationships between fat storage and mortality risks, and that the animal can sense its own body fatness with extreme precision. In reality, neither condition is likely to hold. The optimum level of fat storage that minimises mortality is likely to be highly dynamic, depending on a myriad of factors that the individual animal cannot monitor.
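The calculation of FA is easily sketched numerically. The exact forms of Eqns 3 and 4 are given earlier in the paper; purely for illustration, the sketch below assumes exponential risk curves of the form M(F) = a·e^(−bF) + c·e^(gF), with disease risk falling and predation risk rising with fatness F (h=j=0). Reassuringly, with the Fig. 4 parameter values this assumed form returns the stated optimum of FA≈5 g.

```python
# Illustrative sketch of locating the mortality-minimising fatness F_A.
# The exponential curves are an assumption made for illustration; the
# paper's exact Eqns 3 and 4 are defined earlier in the text.
import numpy as np
from scipy.optimize import minimize_scalar

def mortality(F, a=0.53, b=0.07, c=0.123, g=0.117):
    disease = a * np.exp(-b * F)    # falls with fatness: pushes fat up
    predation = c * np.exp(g * F)   # rises with fatness: pushes fat down
    return disease + predation

res = minimize_scalar(mortality, bounds=(0.01, 30.0), method='bounded')
print(f"F_A = {res.x:.2f} g, M_A = {res.fun:.4f}")  # F_A ≈ 5 g here
```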
Hence, although we can mathematically solve Eqn 4 to obtain the exact optimal fatness (FA) that minimises mortality, in reality the process of selection would be unable to distinguish the mortality at a point FA+ε from that at FA, where ε is a critical deviation in fat storage. Let the associated mortalities be MA+ε and MA. To model this situation, assume that the combined uncertainties mean that selection cannot distinguish mortality differences of less than 5%, i.e. MA+ε/MA=1.05; this criterion generates two body fatness values, FU above the optimum and FL below it. There is no simple closed-form solution of the differential of Eqn 3 for FU and FL, so they must be found numerically. Doing so for the case shown in Fig. 3 gives FL=3.75 g and FU=6.32 g, indicating that selection would be unable to distinguish fat stores over a range encompassing approximately ±23% of the ‘optimum’ FA. In the case of Fig. 4, the values are FL=1.95 g and FU=7.04 g, and selection would be unable to distinguish a range encompassing ±41% of the optimum. We might also imagine a scenario (Fig. 5) in which the parameters (a=4000, b=1.5, c=0.3, g=1.5, h=j=0) generate very steep curves; in this situation, FL=3.0 g and FU=3.37 g, differing by only ±6% from the optimum.
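Continuing the illustrative sketch above, the two points can be located numerically as the roots of M(F)=1.05·MA on either side of FA. Because the assumed exponential form is not necessarily identical to the paper's exact Eqn 3, the values it returns bracket FA somewhat more widely than the figure values quoted in the text; the point here is the method, not the numbers.

```python
# Numerical solution for the intervention points F_L and F_U: the two
# fatness values at which mortality is 5% above the minimum M_A.
# Continues the illustrative model above (res, mortality); with these
# assumed curves the band is wider than the quoted 1.95 and 7.04 g.
from scipy.optimize import brentq

M_A, F_A = res.fun, res.x
excess = lambda F: mortality(F) - 1.05 * M_A   # zero at F_L and at F_U

F_L = brentq(excess, 0.01, F_A)   # root below the optimum
F_U = brentq(excess, F_A, 30.0)   # root above the optimum
print(f"F_L = {F_L:.2f} g, F_U = {F_U:.2f} g")
```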
In the scenarios illustrated in Figs 3 and 4, between points FL and FU there is effectively a constant risk of mortality arising from the balance of predation and disease: wherever the fat stores sit within this range, changes in fat storage have effectively no impact on mortality. What happens in this situation? If the animal starts at a fat level FA, midway between FL and FU, then on days when it fails to eat enough food to meet its demands its fat level will move towards FL, and on days when it overestimates its demands its fat level will move towards FU. Because such changes in fat storage are neutral as far as mortality is concerned, they would not stimulate the animal to modulate its food intake to resist the change the following day: fat stores in the zone between FL and FU would fluctuate at random. However, the animal would need to be able to sense when it had reached either FL or FU, because changes in the fat stores beyond these limits would have mortality consequences greater than MA+ε. This is the dual-intervention point model: the lower intervention point is FL and the upper intervention point is FU. These points are maintained by natural selection. If a mutation in the genes defining FU moved it upwards, such individuals would risk increasing their body weight to a level at which they encountered significantly elevated predation risk relative to non-mutant animals. These individuals (and their mutant genes) would be purged from the population, sustaining the intervention point at FU. I suggest that this satisfies the demand for ‘…an explanation for why the system should be indifferent to all the states between the thresholds’ (Higginson et al., 2016).
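The behaviour this predicts – an unregulated random walk between two defended boundaries – can be illustrated with a short simulation. The boundary values are taken from the Fig. 3 example; the daily step size is an arbitrary assumption.

```python
# Sketch of day-to-day fat dynamics under the dual-intervention point
# model: within [F_L, F_U] daily energy imbalances accumulate as an
# unregulated random walk; intake is corrected only at the boundaries.
import random

def simulate_fat(days=365, F_L=3.75, F_U=6.32, step=0.15, seed=1):
    random.seed(seed)
    fat = (F_L + F_U) / 2                   # start midway between the points
    trajectory = []
    for _ in range(days):
        fat += random.uniform(-step, step)  # neutral drift: no regulation
        fat = max(F_L, min(fat, F_U))       # intervention only at the limits
        trajectory.append(fat)
    return trajectory

traj = simulate_fat()
print(f"mean={sum(traj)/len(traj):.2f} g, "
      f"min={min(traj):.2f} g, max={max(traj):.2f} g")
```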
Although this model can generate both set-point (Fig. 5) and dual-intervention point behaviour (Figs 3 and 4), which of these is more likely to evolve depends on the shapes of the mortality curves. To answer this question, we can explore the conditions necessary to generate the respective curves. In Fig. 4, the curve relating disease risk to fat storage indicates that mortality from disease decreases ∼1.9-fold as fat storage increases 10-fold from 1 g to 10 g, and the same fatness difference produces a ∼2.9-fold increase in predation risk. Such mortality effects of a large difference in fat storage do not seem unrealistic, given that direct assessments of mortality in the wild have failed to detect significantly heavier predation on fatter individuals (see references above), suggesting that the effect is subtle. To generate a set point, however, the mortality impacts of different levels of fat storage must be immense. In Fig. 5, for example, achieving a set point with a range of ±6% requires mortality to increase 729,416-fold as fat storage rises from 1 g to 10 g. This does not seem realistic, because such a large impact of fat storage on mortality would be expected to show up in mortality data of prey items selected by predators, which, as detailed above, is not the case. Therefore, in most circumstances, dual-intervention point systems are much more likely to evolve than set points.
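As a check on this figure: if the predation term of the mortality function rises exponentially with fatness, as assumed in the sketches above, then with g=1.5 the fold-increase in predation mortality between 1 g and 10 g of stored fat is P(10)/P(1) = c·e^(10g)/c·e^(g) = e^(9×1.5) = e^13.5 ≈ 729,416, which is the value quoted.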
Because disease risk and predation risk are environmental phenomena that are not intrinsically linked, these curves can vary independently; the consequence is that selection would favour the evolution of independently regulated intervention points FL and FU (Fig. 6). Thus, the disease risk curve might shift (a change in parameter ‘a’ in evolutionary time), exerting selection pressure that pushes up the lower intervention point while leaving the upper point unaffected (Fig. 6A,B); alternatively, the predation risk might change (a change in parameter ‘c’), selecting over evolutionary time to push down the upper intervention point while leaving the lower point unaffected (Fig. 6C,D). Because the positions of FU and FL are selected in evolutionary time, in the short term they are expected to be inflexible. Nevertheless, predictable changes in the abundance of predators or diseases may drive seasonal cycles in these traits, precipitating seasonal cycles in body weight triggered by photoperiod. Other factors besides disease and predation, such as the costs of thermoregulation and the demands of reproduction, may also vary seasonally and exert pressure on the levels of fat storage. If these pressures vary storage levels within the range defined by the dual-intervention point model, they will not impact the regulatory system based on predation and disease; more realistically, however, a more complex model that includes these additional factors may be necessary for a complete picture. The molecular mechanisms underpinning such photoperiod effects have been extensively studied (Steinlechner and Heldmaier, 1982; Bartness and Wade, 1985; Bartness et al., 2002; Bartness, 1996; Klingenspor et al., 2000; Mercer and Speakman, 2001; Mercer et al., 2000, 2001; Morgan et al., 2003; Peacock et al., 2004; Król et al., 2005a,b, 2007a,b; Liu et al., 2016). These photoperiod-induced obesity models may provide excellent opportunities to explore the molecular mechanisms underlying FU and FL.
In addition to these responses on an evolutionary timescale, animals whose fat storage lies between FL and FU could improve their survival chances by modulating their fat storage on the basis of immediate local knowledge, moving as close as possible to the locally and temporally appropriate FA. Hence, mechanisms would be expected to evolve that allow individuals to detect and respond immediately to greater densities of predators or elevated disease risk. Animals do appear to have such mechanisms: for example, they can modulate their fat storage in response to predator odours (Tidhar et al., 2007; Campeau et al., 2007; Wang et al., 2011; Genne-Bacon et al., 2016) or sounds (Monarca et al., 2015a,b). Indeed, exposure to predator odour is a routine paradigm used to study fear responses in rodents, and the molecular basis of this circuitry, involving nuclei in the amygdala and hypothalamus as well as a column of the periaqueductal grey, is already well established (Dielenberg et al., 2001; Gross and Canteras, 2012; Takahashi, 2014; Cristanto et al., 2015). Future studies that combine the fields of predatory fear responses and body-weight regulation may advance our understanding of the molecular basis of the upper intervention point (see also below).
Physiological consequences of the disease-predation dual-intervention point model
The model outlined above indicates that there are likely to be two different intervention points for body fatness, which evolved separately and are controlled by different physiological signalling systems. The lower intervention point is likely to be regulated primarily by leptin levels and signalling in the brain via the NPY/AgRP and POMC neurons interacting with melanocortin receptors. However, animals are also able to directly sense their own body weight, possibly via mechanosensors in the joints and limbs: for example, if inert weights are implanted into the body cavities of small mammals, the animals rapidly adjust their tissue mass to compensate for the increased load (Adams et al., 2001; Wiedmer et al., 2004).
Although this adjustment seems precise, it is unclear why, if leptin production is disrupted and the animal gains weight, this direct sensing mechanism does not step in to compensate. One possibility is that the direct sensing system is itself leptin dependent; however, when inert weights were implanted into ob/ob mice, they adjusted their tissue mass to compensate for the increased load in the same way as wild-type mice (Jansson et al., 2018). The loading effect appears to depend instead on osteocytes in the weight-bearing bones (Jansson et al., 2018). If leptin is only involved in signalling the lower intervention point, rather than a set point, this would explain a large swathe of empirical data. In particular, it explains the asymmetry in the metabolic response to decreased and increased leptin levels (Leibel, 2002), and why treatment with leptin does not reduce body fatness (Heymsfield et al., 1999). In effect, the lack of response to elevated leptin, widely interpreted as ‘leptin resistance’, reflects not a defect in the system but the fact that leptin is only responsible for signalling low fat levels and is not involved in regulation at the upper intervention point (see also Ravussin et al., 2014). If this idea is correct, the ‘problem’ of leptin resistance disappears, because it is a phenomenon only if one assumes at the outset that adiposity is regulated around a set point. The alternative view (consistent with the dual-intervention point model above) is that low leptin is a starvation signal, not the adiposity signal at the hub of a lipostatic (set-point) control system (Ahima et al., 1996; Ravussin et al., 2014; Berthoud et al., 2017; Flier and Maratos-Flier, 2017). If leptin and the canonical hunger signalling pathway in the brain are only the physiological manifestation of the lower intervention point, there must be an as yet unknown physiological system, independent of leptin, that mediates the upper intervention point (see also Ravussin et al., 2014). The existence of such a system can be inferred from parabiosis experiments that preceded the discovery of leptin: overfeeding one individual of a parabiosed pair leads to weight loss in its partner (Nishizawa and Bray, 1980; Harris and Martin, 1984). In addition, when mice selected for increased adiposity were crossbred with mice carrying defects in leptin production or the leptin receptor, the effects of the original selection were still apparent, indicating the presence of a second system regulating adiposity levels (Bunger et al., 2003). Understanding the nature of the signal that indicates high levels of fat storage, which is used to regulate body weight and adiposity at the upper intervention point, and the central signalling mechanisms that respond to it, are key goals for future obesity research. A direct prediction of the ‘drifty gene’ hypothesis is that mutations in the genes defining this upper intervention point have drifted in evolutionary time, leading to the current variation in human adiposity; responses to overfeeding should therefore show profound heterogeneity linked to such mutations.
Testing between the different models
There are three basic models: the set-point model, the settling point model and the dual-intervention point model, as further elaborated here. All other models are refinements of these basic approaches. A two-stage process is needed to test which of these models best explains the patterns of change in body fatness. The first stage is to observe changes in body weight on a day-to-day basis. If the set-point model is correct, the data should contain a negative autocorrelation structure: increases in weight/fatness should precipitate counter-regulatory responses that bring weight back to the set point. The settling point and dual-intervention point models predict no such autocorrelation structure. The second stage is to expose the animals to a period of starvation, then release them from starvation and monitor their food intake and weight changes. The set-point and dual-intervention point models both predict a period of excess consumption following starvation (in the first case because weight falls below the set point; in the second because weight falls below the lower intervention point), whereas the settling point model predicts no excess consumption when feeding resumes. Together, these two tests should enable the best model to be identified, or highlight weaknesses in all three ideas.
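Stage one of this test is straightforward to prototype. The sketch below uses synthetic data with invented parameters: it simulates daily weights with and without a set-point correction and computes the lag-1 autocorrelation of the day-to-day weight changes. The set-point series shows the predicted negative autocorrelation, whereas the unregulated random walk does not.

```python
# Sketch of stage one: test for negative autocorrelation in day-to-day
# weight changes. A set point predicts that a gain today is followed by a
# compensatory loss; the settling point and dual-intervention point models
# predict no such structure. All data and parameters here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def daily_weights(days=200, set_point=None, k=0.3, noise=0.2, start=25.0):
    """Simulate daily body mass (g); if set_point is given, apply a
    proportional correction towards it each day (set-point behaviour)."""
    w = np.empty(days)
    w[0] = start
    for t in range(1, days):
        correction = -k * (w[t-1] - set_point) if set_point is not None else 0.0
        w[t] = w[t-1] + correction + rng.normal(0.0, noise)
    return w

def lag1_autocorr(weights):
    """Lag-1 autocorrelation of the day-to-day weight changes."""
    d = np.diff(weights)
    return np.corrcoef(d[:-1], d[1:])[0, 1]

print(f"set-point model:      r = {lag1_autocorr(daily_weights(set_point=25.0)):+.2f}")
print(f"no regulation (walk): r = {lag1_autocorr(daily_weights()):+.2f}")
```

In practice, the same lag-1 statistic could be computed from real daily weighing records; distinguishing the dual-intervention point model from the settling point model would then rest on the refeeding response in stage two.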
Epilogue
For most animals, life is a tightrope walked between the twin risks of predation and disease, and stored fat may help them navigate it successfully. Fat levels are physiologically regulated by upper and lower intervention points that have probably evolved in response to the risks of predation and disease; starvation owing to food unpredictability is likely to be of minor importance, or unimportant. Between these intervention points, a multitude of other factors may modulate stored fat levels. The molecular basis of the lower intervention point is probably centred on leptin signalling; determining the molecular basis of the upper intervention point is a key target for future work. Humans have largely freed themselves from the constraints imposed by predation, but the consequence has been a relaxation of the selective pressure capping weight gain, and an epidemic of obesity.
Acknowledgements
Ideas do not develop in isolation or without financial support. I am grateful to many people for thoughtful discussions on the regulation of body fatness that have helped me develop my understanding. In particular, I thank (in no particular order) David Allison, Kevin Hall, Thorkild Sorensen, Xavier Lambin, David Levitsky and Steve O'Rahilly, plus all my students in both Beijing and Aberdeen. Min Li and Yingga Wu stayed up all night performing the repeated mouse MRI scans and kindly supplied the data for Fig. 1, for which I am extremely grateful. Mike Richards provided valuable input regarding the model formulations. Thomas Ruf, who waived anonymity, and an anonymous reviewer together made many helpful and perceptive comments that greatly improved the original draft.
Footnotes
Funding
My work on body weight regulation has been generously supported by the Chinese Academy of Sciences Xiandao B eGPS project (XDB13030100), the National Natural Science Foundation of China microevolution program (NSFC91431102), the 1000 talents program and a Wolfson merit Professorship from the UK Royal Society.