Philosophical Transactions of the Royal Society B: Biological Sciences
Opinion piece

Homeostasis as a fundamental principle for a coherent theory of brains

J. Scott Turner

Environmental and Forest Biology, SUNY College of Environmental Science and Forestry, 1 Forestry Drive, Syracuse, NY 13210, USA

Stellenbosch Institute for Advanced Study, Stellenbosch, Matieland 7602, South Africa

[email protected]


    Abstract

    ‘Brains’ may be considered to be computation engines, with neurons and synapses analogized to electronic components wired into networks that process information, learn and evolve. Alternatively, ‘brains’ are cognitive systems, which contain elements of intentionality, purposefulness and creativity that do not fit comfortably into a brain-as-computer metaphor. I address the question of how we may think most constructively about brains in their various forms—solid, liquid or fluid—and whether there is a coherent theory that unites them all. In this essay, I explore cognitive systems in the context of new understanding of life's distinctive nature, in particular the core concept of homeostasis, and how this new understanding lays a sound conceptual foundation for an expansive theory of brains.

    This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.

    1. Introduction

    Defined most broadly, a ‘brain’ is a living system that couples cognition to action: sensory systems to motor effectors. We are accustomed to thinking of brains as nervous systems, which are specialized organ systems within self-contained bodies. Nervous systems comprise not just the brain and spinal cord—the central nervous system—but also peripheral elements: sensory systems, which gather information and construct cognitive representations of the environment; and motor effectors, which act to modify an environment. The environment can either be external or some internalized milieu.

    A nervous system might be a brain, but the converse need not be true [1]. ‘Brains’ might exist in states other than nervous systems, and one does not have to look far for examples. Brains can be ‘liquid’, such as might exist in creatures that sit at the transition from unicellular to multicellular organization. There, the coupling between sensory and motor systems can be quite literally liquid, as in slime moulds [2].

    There is a third category of brains I would like to introduce: ‘fluid’ brains that occur at a higher level of organization, namely organisms with nervous systems (solid brains) that form emergent ‘social organisms’. These are most obviously exemplified by the social insects—the bees, ants and termites [3,4]—but they potentially include a variety of social systems, including human societies and ecosystems, and even societies of machines: swarm intelligence and swarm robotics (e.g. [5,6]).

    2. Are brains computation engines?

    If we are to recognize brains in all their potential forms—solid, liquid or fluid—is there a coherent theory of brains that could encompass them all? The broad definition above—a system that links cognition to motor action—will not on its own suffice. The territory between cognition and action is a vast terra incognita that is populated by competing metaphors of what brains are. To even ask the questions—what brains are and what they do—never mind answering them, is inevitably coloured by these metaphors.

    For much of the twentieth century (and into the twenty-first), the dominant metaphor of the brain has been (is) what we may call the cybernetic metaphor, which regards a brain as a computation engine [7–9]. The cybernetic metaphor for brains is dominant for good reason: it has been exceptionally fertile, with artificial intelligence being its most fecund issue [10]. The remarkable growth of digital technology has added to the cybernetic metaphor's fecundity, bolstered by the compelling simile of the synapse as a logic gate [11]. This allows nerve cells to be assembled into various kinds of ‘circuits’ that compute. In the cybernetic metaphor, all the imaginable things that brains do should be reducible to computation. It follows that there should be no reason why computing machines cannot also be intelligent. There is a compelling logic to this: if brains are organs of intelligence, and brains are computers, there should be an ‘intelligence algorithm’. Parse out that algorithm, code it and implement it, and you will have an intelligent machine.
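    The synapse-as-logic-gate simile at the heart of the cybernetic metaphor can be made concrete with the classic McCulloch–Pitts threshold unit, in which weighted inputs and a firing threshold suffice to implement Boolean gates, and gates can be wired into circuits. The following is a deliberately minimal illustrative sketch, not anything drawn from the cited work; the weights and thresholds are arbitrary choices that happen to realize the gates:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted input sum reaches threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Two-input gates, each realized by a single threshold unit
def AND(a, b): return mp_neuron([a, b], [1, 1], 2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], 1)
def NOT(a):    return mp_neuron([a], [-1], 0)

# Units wired into a small 'circuit': XOR composed from AND, OR and NOT
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
```

    Any Boolean function can be composed this way, which is precisely why the simile is so compelling: if synapses are gates, then in principle brains compute.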

    Running alongside the cybernetic metaphor has been an alternative, and comparatively cryptic metaphor, which we might call an ecological metaphor for brains [12–15]. Within the ecological metaphor, intelligence is not a matter of computation and algorithms, but something that emerges as an epiphenomenon from an ecosystem of biological agents, with their actions and forms shaped by competition and Darwinian selection [16]. An obvious example of this may be found in the highly redundant system of chemical neurotransmitters in the animal brain (e.g. [17,18]). This redundancy poses a fundamental challenge to the cybernetic metaphor. If brains are computation engines, comprising synaptic logic gates assembled into computation circuits, why would there be dozens—perhaps hundreds—of different types of neurotransmitters? Would not a few do? The rhetorical answer to this rhetorical question is that these complex and super-redundant systems of neurotransmitters might more sensibly be explained by an evolutionary arms race of competition, defence and counter-response among the brain's many agents [19]. The rococo system of redundant neurotransmitters in the animal brain now becomes more like the visually spectacular display of the peacock: of doubtful economy of design but explicable by its serving another purpose altogether (sexual selection for the peacock, some unknown end—dominance and control of one type of neuron over another, perhaps—for the brain).

    Which metaphor—cybernetic or ecological—allows us to think more deeply about brains in all their imaginable forms: solid, liquid and fluid? The answer is far from straightforward: any choice one makes is immediately bedevilled by the deep philosophical division that cleaves through the life sciences: the pervasive and seemingly irreconcilable ‘mechanism–vitalism controversy’ [20]. On the mechanism side of this divide is the assertion that life is best (most ‘scientifically’) understood as a machine, for which the particular mode of discourse that prevails in the physical sciences is not just adequate but required. It is fair to say, I think, that the cybernetic metaphor of brains sits firmly on the mechanism side of this divide. By contrast, the vitalism side asserts that life is a phenomenon unlike any other in the universe, and as such requires special modes of discourse to explore. It is also fair to say, although it may be less self-evident, that the ecological metaphor sits more comfortably on the vitalism side of the divide.

    One of the appeals of metaphor is coherency: things are made sensible. There is a downside to this, however: sensibility exists within the metaphor. If there are competing metaphors for the same thing (brains, in this instance), what is sensible within one might make no sense from within another. The pernicious result is intellectual stagnation sustained by mutual incomprehension among the followers of competing metaphors [20]. Intellectual progress beyond the divide only comes when a common ground of discourse can be found between the two metaphors. For the questions of what brains are and what they do, where can that common ground most constructively be built?

    I contend that a coherent theory of brains can most constructively be built upon the foundation of a core concept of physiology: homeostasis.

    3. The many little lives

    Homeostasis is also bedevilled by its own metaphorical difficulties, however, which makes it perhaps the most trivialized and misunderstood concept in modern biology [21]. Before homeostasis can serve as a sound foundation for brains, some of that metaphorical clutter needs to be cleared away.

    The concept of homeostasis is usually attributed to the nineteenth-century French physiologist, Claude Bernard [22], although its roots extend much farther back than Bernard himself [21]. Bernard's conception of homeostasis is embodied in the famous aphorism from his signature work, An Introduction to the Study of Experimental Medicine [22]. In English translation:

    The steadiness of the internal environment is the condition for a free and independent life.

    In modern physiology, homeostasis has come to be construed as a regulatory mechanism, whose operation produces a regulated state, say, of temperature, salt balance, diet and so forth. As such, the modern conception of homeostasis sits firmly embedded within the cybernetic metaphor: homeostasis is the product of a computational system, most generally some system of negative feedback control.
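    The negative-feedback reading can be caricatured in a few lines of code: a sensed error between a regulated variable and its set point drives a corrective effector response. This is a minimal proportional-control sketch with arbitrary illustrative values for the set point, gain and disturbance, not a model of any actual physiological system:

```python
def regulate(setpoint, value, gain=0.5, steps=50, disturbance=-0.2):
    """Proportional negative-feedback loop: each step, the effector
    pushes the value toward the set point against a constant disturbance."""
    for _ in range(steps):
        error = setpoint - value             # sensed mismatch
        value += gain * error + disturbance  # corrective action plus perturbation
    return value

# The loop settles where corrective action exactly balances the disturbance,
# i.e. at setpoint + disturbance/gain, slightly off the set point itself.
print(regulate(setpoint=37.0, value=30.0))
```

    The small steady-state offset is instructive: in the cybernetic reading, even the precision of regulation is an artefact of the controller's design, not of any property of the regulated system itself.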

    The modern cybernetic concept of homeostasis owes more to Norbert Wiener than to Claude Bernard, however. Wiener drew inspiration, as did Bernard, from the extraordinary self-regulatory and self-sustaining capacities of organisms [23,24]. Where Wiener sought to tease out the mechanics of homeostasis, though, Bernard regarded homeostasis more as a fundamental property of life, revealing a surprising vitalist element in Bernard's thinking [25]. The cybernetic conception of homeostasis turns Bernard's conception on its head, and along with that, Bernard's intent in articulating it. In the cybernetic conception, homeostasis is the outcome of mechanism. In Bernard's conception, the mechanism is the outcome of life's fundamental property: homeostasis.

    Bernard was neither a conventional mechanist nor a conventional vitalist, but more of a ‘romantic’ thinker (as Stent [26] has described a similar strain of thought in molecular biology). In Stent's parlance, the romantic idea seeks common ground that acknowledges life's unique qualities but places it firmly within the realities of the physical world: melding vitalism and mechanism, as it were. Bernard's own romantic tendencies were rooted in an earlier, and similarly romantic attempt to rescue medicine from its traditional reliance on unproductive conceptions of ineffable vital essences, so-called essentialist, or metaphysical vitalism [21,27]. What eventually came to replace it was an alternative form of vitalism, known as process, or ‘scientific’ vitalism. Bernard was an epitome of process vitalism.

    Bernard's own scientific philosophy grew directly from this revolution of vitalist thought. An influential figure in that vitalist revolution was the physician Théophile de Bordeu (1722–1776) of the faculty of medicine at Montpellier University. Bordeu articulated a new metaphor for the organism known as the ‘many little lives’ [28,29]. In Bordeu's eyes, the organism derived its distinctive characteristics—its coherency, coordination and persistence—not through the pervasive influence of vital essences, but through an ongoing process of negotiation and mutual accommodation between the competing and often contradictory interests of the organism's ‘many little lives’: its multifarious cells, tissues and organs. Bordeu himself saw this process extending even beyond organisms. In a fascinating premonition of the superorganism idea, Bordeu saw vindication of his ‘many little lives’ idea in the seeming organism-like behaviour of swarms of honeybees [29,30]. No internal vital essence could explain such behaviour, in Bordeu's view, only the ongoing negotiation and mutual accommodation among the swarm's inhabitants. Bernard's notion of a system of agents maintaining an internal environment in the face of perturbations draws directly from Bordeu's conception of the self-regulating organism comprising ‘many little lives’.

    The ‘many little lives’ idea has obvious relevance to the question of how we might think of brains. Arguably, ‘many little lives’ covers the multiple and quasi-independent assemblages of agents within solid brains, but also the agents comprising presumptive liquid or fluid brains. Homeostasis (in the Bernardian sense) provides a useful ground for exploring how presumptive brains do what brains do: coupling cognition to action.

    4. Homeostasis and cognition

    Our present understanding of the physical nature of living systems has evolved since Bernard's day, and this allows homeostasis to be recast in a new language of open thermodynamic systems. Operationally, homeostasis is persistence of a living system in a state of specified and dynamic disequilibrium [31]. This specified disequilibrium persists in the face of perturbations imposed upon it by an unruly environment, and in the face of the ongoing degradation of orderliness demanded by the Second Law of Thermodynamics. To sustain itself, the transient and orderly assemblage of matter that is the organism manages and manipulates a flow of matter and energy through itself. Persistence comes from creating specified orderliness at the same rate at which it degrades [32]. This could describe any open thermodynamic system, both living and non-living [31]. What distinguishes the living system from the non-living is homeostasis: the active striving of living systems towards a persistent and specified orderliness [27]. In other words, the living system is embodied homeostasis: it is Bernard's romantic conception of homeostasis, recast in the language of thermodynamics [33].

    The operational aspect of homeostasis—how it works, as opposed to what it is—can now be recast in this light. Homeostasis operates through so-called adaptive boundaries (or adaptive interfaces), which subdivide environments into contained and external environments. A cell is a useful, albeit not an exclusive, example of such a contained environment. Homeostasis is the sustenance of the cell's contained environment in a specified orderly state. This involves doing work to manage flows of matter across the adaptive boundary. The cell persists as long as its specified orderliness is created as rapidly as it degrades to disorder. The ongoing work rate needed to do this is the metabolic rate.

    An operational definition of homeostasis does not enlighten us, however, on why a particular living system takes on its particular persistent form: why is a brain cell different from, say, an epithelial cell, or from, say, an elephant? It does no good to fall back on the essentially tautological argument that they differ because they express different patterns of gene specifiers of function as shaped by natural selection [33]. This assertion may provide part of the answer, but it does little to illuminate what makes any of these systems distinctively living: it is a retreat to the cybernetic metaphor of life as algorithm [34]: the ‘how’ question. The ‘why’ question forces us out of the cybernetic metaphor into inevitably vitalist thinking. Homeostatic systems exist in particular forms because they are knowledgeable systems: they have a sense of what they are, indeed what they should be and can couple this knowledge to some means of manipulating environments to attain that form. This means that homeostatic systems must be cognitive systems: they must be able to construct cognitive representations of the environment in which they are embedded, they must embody knowledge of what the contained environment should be, and they must be capable of implementing a targeted defence of that persistent state in the face of ongoing perturbations to it. In short, homeostatic systems must be teleological systems [35,36].

    5. Homeostasis, extended physiology and the extended organism

    Equating homeostasis with purposefulness is sufficient, to some, to render the whole line of thought suspect (e.g. [37–39]). This need not be the case, however. Homeostatic systems are also inherently ecological systems. Any managed flow of matter across an adaptive boundary will modify both the environment contained within the boundary and the external environment: the principle of conservation of mass demands this [40]. This expands upon Bernard's conception of homeostasis, which can be characterized as intensive—that is to say, that physiology is something that happens within contained environments. When environments on both sides of an adaptive boundary are modified, as they must be, physiology is inevitably both intensive, as Bernard conceived it, but is extensive as well: extended physiology, in a phrase [41]. This leads to the somewhat startling conclusion that homeostasis within an adaptive boundary necessarily imposes a kind of extended homeostasis on the environment outside the boundary. We may say that the adaptive boundary mediates a conspiracy (in the literal definition of the term: to breathe together) between contained and external environments. This has implications for both the ecology and evolution of living systems, as well as, I argue, for a coherent and expansive theory of brains.

    Homeostasis has tangible energetic costs, which are determined both by Second-Law-mediated degradation rates and by the vicissitudes of an unruly external environment. These costs can be mitigated by bringing the unruly and unpredictable external environment under control, which can be accomplished by ‘internalizing’ external environments within larger adaptive boundaries. In other words, a conspiracy mediated by an adaptive boundary may be broadened by nesting more expansive adaptive boundaries within one another. Thus, assemblages of cells coalesce into the adaptive boundary of the epithelium. Epithelia themselves coalesce into organs, which in turn coalesce into organisms. This progressive nesting of adaptive boundaries and the co-option of ever more expansive internalized environments has been a strong theme, both in the increasingly complex organization of living systems and in the broad course of evolution [42,43].

    6. Extended cognition

    This ever-broadening conspiracy represents one element of a modern expression of Bordeu's ‘many little lives’ idea [33,44]. The other and crucial element of Bordeu's ‘many little lives’ idea is the ongoing negotiation and mutual accommodation that manifests as the coherent and persistent organism. The ‘many little lives’ must necessarily be knowledgeable systems: in other words, they must necessarily be cognitive systems. It is on this ground that an expansive theory of brains may be built.

    Homeostasis, by its nature, is cognitive. Extended homeostasis must necessarily involve a form of extended cognition. What does this mean? Numerous examples of such extended cognition may be found among fluid brains, notably the colonies of social insects that inspired Bordeu. Here, I will draw on the subjects of my own research, colonies of the mound-building termites of the subfamily Macrotermitinae, specifically the genus Macrotermes [45]. These termites are found throughout the arid savannah ecosystems of southern Africa. They are prominent features of these landscapes, owing to the large mounds they build, up to 11 m tall in some regions. The mounds themselves are an example of how function emerges from an inherent homeostasis of the termite colony superorganism. Their morphology, function and ontogeny have been described extensively elsewhere [46–57], and by others in this issue.

    Like the termites that build it, the mound represents a persistent and dynamic disequilibrium. The mound sheds soil through erosion at a rate of roughly 250 kg dry mass of soil per annum. The persistence of the mound comes about through the replacement of soil by termites actively transporting soil up into the mound, sustaining the mound's morphology through time. Their persistence is impressive: a particular termite mound lasts as long as there is a colony to maintain it, and colonies typically live for 12–20 years.

    The mound is the expression of homeostasis of the subterranean colony. There are at least two dimensions to this expression. Nest moisture is one, and this is vigorously defended against strong annual environmental perturbations: extremely dry conditions in the winter, which draw water from the nest, and episodic torrential rainfalls in the summer, which drive excess water into the nest [58–60]. Nest moisture is regulated with impressive precision. In the dry winter, termites offset water loss from the nest by mining liquid water from perched water tables in the soil and transporting it into the nest. During the wet summer, the termites export excess water from the nest by transporting wet soil up into the mound to deposit it on the mound surface. Episodes of mound construction are tied closely to episodes of rainfall, and experimentally percolating extra water into the nest increases the transport rate of soil into the mound [61].

    The mound therefore represents a colony-constructed adaptive boundary between termite nest and environment. It is also a construct of the termite colony's ‘many little lives’, the individual workers, operating on a superorganismal, rather than cellular or organismal scale. Finally, the mound is an expression of Bernard's conception of homeostasis: it is homeostasis of the colony that shapes and builds the adaptive boundary, rather than the boundary that produces the homeostasis.

    The shaping of the adaptive boundary of the mound is an expression of a cognitive interaction between the termites and the self-constructed environment contained within the mound. There is another dimension of homeostasis at work here, engaging cognitive interactions of termites with soil. We have identified at least five such cognitive interactions, involving various aspects of mound maintenance, construction and repair [45,62,63]. One is the classic concept of stigmergy [57], which we prefer to call focal building, because it draws soil into foci of building. Another dynamic is dispersive building, which operates to dismantle soil and scatter it. Dispersive building is governed by worker termites' perception of friability and surface curvature of soil constructs. Vectored building transports soil along large-scale gradients in an environmental property, like large-scale soil moisture gradients, usually from wet soils to dry soils. This is the dynamic that translocates large volumes of soil from the nest into the mound following rainfalls. Finally, there are cognitive interactions that trigger the initiation of focal building. Termites exposed to turbulence-induced perturbations in the local atmosphere, for example, may be triggered to initiate focal building.
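    The focal (stigmergic) dynamic, at least, is simple enough to caricature in simulation: deposited soil recruits further deposits, so building snowballs onto a few foci. The following toy model is a deliberately minimal illustrative sketch of that rich-get-richer dynamic (essentially a Pólya urn), not a model taken from the cited work:

```python
import random

def focal_building(sites=10, deposits=500, seed=1):
    """Stigmergic (focal) building: termites deposit soil preferentially
    where soil has already been deposited, so building foci self-organize."""
    rng = random.Random(seed)
    soil = [1] * sites  # start with a uniform dusting of marked soil
    for _ in range(deposits):
        # probability of depositing at a site is proportional to soil already there
        total = sum(soil)
        r = rng.uniform(0, total)
        acc = 0
        for i, s in enumerate(soil):
            acc += s
            if r <= acc:
                soil[i] += 1
                break
    return soil

heap = focal_building()
print(sorted(heap, reverse=True))
```

    Run it and a few sites capture most of the deposits while the rest stay near the baseline: pillars, in effect, self-organize from an initially uniform field.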

    There are undoubtedly more cognitive dimensions at play, but the point is made: termites not only inhabit a rich cognitive world, they create that cognitive world through their collective activities. They are the exemplar of a fluid brain. The brain-like behaviour of the termite swarm is illustrated dramatically by the repair of damage to the mound. Mound repair presents a significant cognitive challenge to the colony. Termites live in the underground colony, far removed from the mound. To repair damage to the mound, the termites must not only be informed there is damage to a structure that is distant from the colony, they must mobilize and direct repair efforts to the actual site of damage [64]. They do so while figuratively in the dark: termites are blind. The dynamic of mound repair involves a swarm-level decision-making process that is similar to how solid brains resolve cognitive disparities, as in the resolution of random-dot stereograms into a cognitive three-dimensional representation of a visual image [65]. In the solid brain, cognitive disparity elicits an ecological perturbation of excitotoxic stress, which is ameliorated by the brain ‘deciding’ on a three-dimensional cognitive representation of what it sees: homeostasis of the brain ecosystem. In the fluid brain, the cognitive disparity arises from the otherwise quiescent mound environment being disrupted by a breach in the mound. This imposes a demand on the colony to resolve the disparity: also homeostasis, but now of the self-created ecosystem of the termite colony. Both forms of cognitive disparity resolution are essentially ecological, rather than computational, processes.

    7. Common dimensions of cognitive systems

    Arguably, all of this could be reduced to computation. Indeed, this is the object of considerable work to parse, encode and implement the ‘swarm cognition algorithm’, to coin a phrase [6,66–69]. As useful as such efforts might be, they remain firmly embedded in the cybernetic metaphor for brains. Does this provide sufficient grounds on which to build a coherent theory of brains? Cognition is key to any such theory. What is it, then, that cognitive systems do?

    At their most basic, cognitive systems do at least four broad things, what we might term the dimensions of cognitive systems. Three of these can fit comfortably into the cybernetic metaphor. The fourth does not, and the misfit undercuts any strictly computational metaphor for brains.

    The first dimension is representation, which takes sensed information about the environment and maps it onto some form of internal cognitive representation (figure 1). In a solid brain, this involves passing information through some sensory interface to the brain, like a retina, or cochlea, or network of cutaneous nerve endings, and creating patterns of excitation in some assemblage of neurons.

    Figure 1. Schematic of cognitive representation. (Online version in colour.)

    Representation is closely linked to the second dimension, tracking changes in the environment. Tracking is a way of resolving a cognitive disparity, a mismatch between the cognitive representation and the input of the senses. If a cognitive representation of an environment does not match what the sensory interface is telling the brain, the cognitive map can change to bring it into conformity with the newly changed environment (figure 2).

    Figure 2. Schematic of cognitive tracking. (Online version in colour.)

    Representation and tracking resolve cognitive disparities by altering the cognitive representation of the world. Cognitive disparities can also be resolved by coupling the cognitive representation to engines that do work on the perceived environment. These engines are activated when there is a cognitive disparity, but now the engine works to bring the environment into conformity with the cognitive representation, the opposite of tracking (figure 3). This represents a sort of intentionality [12]. With the introduction of intentionality, cognitive systems begin to move out of the purely computational and into the ecological. When a termite colony shapes a mound to conform to the termites' cognitive map of what the environment should be, this represents a form of intentional behaviour.

    Figure 3. Intentionality as the coupling of cognitive representation and tracking to engines that can manipulate environments. (Online version in colour.)
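    The distinction between tracking and intentionality can be put schematically: both begin from the same cognitive disparity, but tracking updates the representation to match the world, while intentionality does work on the world to match the representation. A minimal sketch, with arbitrary illustrative gains:

```python
def resolve(representation, world, mode, gain=0.25, steps=40):
    """Resolve a cognitive disparity (representation != world) in one of two ways."""
    for _ in range(steps):
        disparity = world - representation
        if mode == "tracking":
            representation += gain * disparity  # change the map to fit the world
        elif mode == "intentionality":
            world -= gain * disparity           # do work on the world to fit the map
    return representation, world

# Tracking: the representation converges on the (unchanged) world.
print(resolve(representation=0.0, world=10.0, mode="tracking"))
# Intentionality: the world is brought into conformity with the representation.
print(resolve(representation=0.0, world=10.0, mode="intentionality"))
```

    The two modes are mirror images of one another: in one, the map converges on the territory; in the other, the territory is brought into conformity with the map.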

    The fourth element, creativity, exists farther outside the cybernetic realm (figure 4). Although creativity does bring in elements of vitalist thought, creativity need not involve an appeal to frank vitalism: Bernard's own romantic conception of homeostasis offers a way to think more dispassionately about creativity, and hence how we think about brains. Creativity begins with a cognitive representation that is unmoored from sensory inputs. This could result from interaction with other cognitive engines, for example, which can create an entirely novel cognitive world. Schizophrenia is an extreme example of this, but less dramatic examples of novel cognitive worlds are not hard to come by. Couple such an unmoored and novel cognitive world with the ability to bring environments into conformity with it, and there exists the potential to bring into being entirely new environments: the essence of creativity.

    Figure 4. Cognition and creativity. The creative act involves shaping the environment to heretofore unimagined cognitive worlds. (Online version in colour.)

    Cognitive systems—brains, essentially—encompass all four dimensions of cognition: representation, tracking, intentionality and creativity. Which metaphor, then—the cybernetic or the ecological—most coherently encompasses all four? Even if all four aspects of cognitive systems can be reduced to computation, will this lead us to Bernard's raison d'être for homeostasis, to wit the particular homeostatic state towards which the system tends? To use a (deliberately) loaded term, is the desire that is the expression of a homeostatic system susceptible to computation?

    There is a profound implication to Bernard's romantic conception of homeostasis: homeostasis becomes a fundamentally creative process that goes far beyond sterile and static mechanism. This has equally deep implications for how we think about brains. Let us put the matter in the form of what we may call the cybernetic syllogism:

    • A is a computing machine.

    • A composes poetry.

    • A's poetry is a computation.

    • B is a human being.

    • B composes poetry.

    • B's poetry is a computation.

    Computers can be programmed to compose poetry, or for that matter, any creative art [70]. It does not follow, however, that machines are creative in the same way their creators are. Creativity in a machine is a deus ex machina, the reflection of the creativity of another living cognitive system, in this instance, the human programmer. In living systems, however, there is no deus ex machina [71]. What, then, is living creativity? To put the matter provocatively: brains are cognitive systems that implement a telos towards which the system trends. Homeostasis—that fundamental property of life—is the source of the telos. This is fundamentally ecological in character, shaped by the extended homeostasis and extended cognition of the many little lives the extended organism comprises. It is there, I assert, that a coherent theory of brains in all their forms may be built.

    Data accessibility

    This article has no additional data.

    Competing interests

    I have no competing interests.

    Funding

    This work has been supported by a number of patrons, including the John Templeton Foundation, the Stellenbosch Institute for Advanced Study, the National Institutes of Health, the National Geographic Society, the Human Frontiers Science Program and the National Science Foundation. Part of the conceptual work for this paper was done while I was a Resident Fellow at the Stellenbosch Institute for Advanced Study, in Stellenbosch, South Africa.

    Acknowledgements

    I thank Ricard Solé for the kind invitation to submit a paper to this theme issue, and Kirsten Petersen for the invitation to present this work at the Janelia conference on Distributed and Collective Computation at Howard Hughes Medical Institute, March 2018.

    Footnotes

    One contribution of 15 to a theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.

    Published by the Royal Society. All rights reserved.