Philosophical Transactions of the Royal Society B: Biological Sciences

From inert matter to the global society: life as multi-level networks of processes

David Chavalarias


Complex Systems Institute of Paris Île-de-France, CNRS, Paris, Île-de-France, France

Centre d’Analyse et de Mathématique Sociales, EHESS Paris, Île-de-France, France

[email protected]



Published: https://doi.org/10.1098/rstb.2019.0329

    Abstract

    A few billion years have passed since the first life forms appeared. Since then, life has continued to forge complex associations between the different emergent levels of interconnection it forms. The advances of recent decades in molecular chemistry and theoretical biology, which have embraced complex systems approaches, now make it possible to conceptualize the questions of the origins of life and its increasing complexity from three complementary notions of closure: processes closure, autocatalytic closure and constraints closure. Developed in the wake of second-order cybernetics, this triple closure approach, which relies on graph theory and complex networks science, sketches a paradigm in which it is possible to go up the physical levels of organization of matter, from physics to biology and society, without resorting to strong reductionism. The phenomenon of life is conceived as the contingent complexification of the organization of matter, up to the emergence of life forms, defined as networks of auto-catalytic process networks, organized in a multi-level manner. This approach to living systems, initiated by Maturana & Varela and Kauffman, inevitably leads to a reflection on the nature of cognition; and, in the face of the deep changes that have affected humanity as a complex system, on the nature of cultural evolution. Faced with the major challenges that humanity will have to address in the decades to come, this new paradigm invites us to change our conception of causality by shifting our attention from state change to process change, and to abandon a widespread notion of 'local' causality in favour of complex systems thinking. It also highlights the importance of a better understanding of the influence of social networks, recommendation systems and artificial intelligence on our future collective dynamics and social cognition processes.

    This article is part of the theme issue ‘Unifying the essential concepts of biological networks: biological insights and philosophical foundations’.

    All living systems must share a common organization which we implicitly recognize by calling them ‘living’.

    —[1, p. 187]

    1. Why do we need levels to understand the world?

    The decisions we take at the individual or collective level depend on the anticipations and predictions we make on their future consequences. To name but one example, the social and political sphere has taken the problem of climate change seriously only since the emergence of a certain consensus on the potential effects of anthropogenic warming. The nature and extent of the sacrifices that will be made, in an effort to influence this global disruption, will depend on our predictions about its anticipated consequences.

    We form anticipations on a daily basis, often without even realizing it. These range from simple bodily anticipations—I extend my hand because I expect to touch an object—to elaborate reasoning concerning the relationship between the state of the world and its future states, between our actions and their consequences, between causes and effects. To think about the relationship between the present and the future, the conceptual pattern of Laplacian determinism has long served as a guide [2]. This notion, which stems from an interpretation of Newtonian physics, connects the present moment of the universe to its future state, in an unambiguous and predictable manner, subject to a perfect description of its components.

    The conception of determinism associated with this notion accommodates the types of randomness we encounter, by qualifying them as randomness from ignorance (hasard d’ignorance, [3]): the impression of randomness is simply the consequence of our ignorance of the diversity of possible causes. In this epistemological framework, perfect knowledge of the world becomes a horizon for human intelligence which, although it cannot be attained, can at least be asymptotically approached. If we could determine the state of the world at a given moment with certainty, randomness would be eliminated and mankind would fully understand its future.

    How does contemporary physics interpret this notion of determinism? While it is undeniable that a better description of the world allows us to understand it better, our approach to the question of predictability has changed radically.

    The first remarkable fact that can be noted in the recent history of physics is the distinction that has developed between determinism and predictability. In the last century, the work of mathematicians such as Henri Poincaré1 or Edward Lorenz placed the question of the stability of dynamic systems and the role of measurements in prediction at the centre of the debate [4,5]. An intuitive condition for the prediction of a dynamic system’s evolution, in the context of imperfect knowledge, is that small errors in the description of the system should have only a small qualitative impact on our prediction.

    The question is thus whether systems with similar descriptions evolve in similar ways. It has been shown that for some dynamic systems, two states that may be arbitrarily close in terms of their description would give rise, in the short or medium term, to radically different developments. This is called sensitivity to initial conditions. It is Lorenz’s famous butterfly effect [6], often misleadingly illustrated by the flapping of wings causing a tornado a few weeks later on the other side of the world. It would be more accurate to say that the flapping of butterfly wings is a partial cause of the tornado (or of its absence), which will owe just as much to the air movement you provoked by blinking. One great discovery of the 1960s was that dynamic systems which are sensitive to initial conditions, globally or locally (existence of singularities), are not pathological but rather constitute the majority of all possible deterministic systems. Such systems are called chaotic systems.
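
    To make this sensitivity concrete, here is a minimal numerical illustration (my own example, not taken from the article), using the logistic map x ↦ rx(1 − x) in its chaotic regime r = 4: two trajectories starting a distance of 10⁻¹⁰ apart become completely uncorrelated within a few dozen iterations.

```python
# Minimal illustration (not from the article) of sensitivity to initial conditions:
# the logistic map x -> r*x*(1-x) with r = 4.0 is a standard chaotic system.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two almost indistinguishable initial conditions
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
# The gap grows roughly exponentially; within a few dozen steps the trajectories are unrelated.
```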

    The discovery of the ubiquity of chaotic systems led to an epistemological upheaval whose importance should not be underestimated. From the Laplacian perspective, ignorance is always contingent. However, the instability of certain dynamic systems has made randomness due to ignorance quantitatively necessary: there are always causes which, by their smallness, extent or multiplicity, remain imperceptible to us, whereas in the more or less long term they generate perceptible effects. These then appear to us as unpredictable events owing to chance. Therefore, the question as to whether the world is, or is not, deterministic becomes an undecidable problem: we know that we will never reach a yes-or-no answer.

    But what about the numerous models that paint a deterministic picture of natural phenomena? As shown by Lesne [7] with the example of Brownian motion, there is in fact a certain degree of independence between the question of whether natural phenomena are deterministic or stochastic, and that of choosing a description of the world that is appropriate for our needs. By moving up the physical scale, from observations at the molecular level to those at the macroscopic level, we can be led to think of the same phenomenon from both the stochastic and deterministic points of view.

    The formulation of laws or regularities, whether deterministic or stochastic, requires the identification of causes and effects, or at least the interdependence of phenomena, i.e. structures that evolve in a correlated manner. However, the identification of structures depends on the scale of the observation. There is nothing more chaotic than a gas at the microscopic scale. And nothing more regular than a gas at the macroscopic scale, which provides the perfect example of isotropy. Therefore, the problem is not necessarily that of knowing whether a system is deterministic or stochastic, but rather that of knowing within what time frame, with what spatial resolution, and for what purposes, it is studied.

    This is a general feature of modelling that we naturally adopt in our daily relationships with the world. The world around us is infinitely complex, because it has structures on all scales. We are composed of cells, almost all of which are renewed every few weeks. This does not prevent us from recognizing a friend on the street. For us, the continuity of an individual lies at another level. Restricting the observation of a phenomenon to a certain resolution and a certain time frame makes it possible to neglect those characteristics, which occur on smaller or larger scales, and to think of the world in terms of continuity and causal relationships. At the level of a given observation, we know that inaccuracies in the measurement will not have measurable, or at least significant consequences on our predictions within the chosen window of observation.2 Thus, although the main axiom of prediction breaks down, although causes that seem identical to us do not necessarily produce similar effects, and although there are in fact never identical causes, every day we can have experiences that seem to be somewhat causal, or to reflect a certain type of determinism.

    The significant operation on the part of the observer, by neglecting certain causes, is to favour consciously or unconsciously a specific level of observation—although we have seen that she/he cannot do otherwise. It is both the process that allows us to identify continuities in our environment, and the origin of this occasional impression of radical randomness. Here again, physics and formalisms convey valuable concepts characterizing these levels of observation. We have discussed the unpredictability of individual trajectories in some dynamic systems owing to uncertainties in the initial conditions. There is however a counterpart in terms of statistical predictability with the mathematical concept of an attractor.

    By isolating one level of observation, we can sometimes simplify the dynamics at work and make them intelligible. While the entities we observe may change states, they nevertheless pass through more or less stationary or recurrent configurations. With respect to the spatio-temporal accuracy set by the observer, the fluctuations of the observed entities remain sufficiently close to an average value to be statistically neglected. All of the states thus considered to be equivalent, in the manner just described, correspond to what is formally referred to as an attractor (in the above example, we recognize the bodily envelope of our friend, which is part of the attractor of his/her body's metabolism). The fundamental property of a state belonging to an attractor is the following: a system evolving from a state that belongs to an attractor remains in that attractor, provided the disturbances are not too great.

    In practice, it is rare that the evolution of a dynamic system can be accurately predicted. Nevertheless, it is generally possible to qualitatively determine its attractors. By determining the main attractors of a dynamic system, it is thus possible to qualitatively predict all its future behaviours, although it is not possible to accurately predict which one will be observed. This is what meteorologists do when they predict the weather for the next few days or what climatologists do when they tell us about possible scenarios for climate change in the coming decades. We can never predict with certainty what will happen, but we can try to assign probabilities to different future weather or climate regimes.
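
    The attractor concept also admits a small worked illustration (again my own sketch, with hypothetical numbers, not the article's): for the logistic map with r = 2.9, every trajectory started in (0, 1) settles on the same fixed point 1 − 1/r, and a moderate perturbation is reabsorbed, so the long-run regime is qualitatively predictable even when individual early steps are not.

```python
# Minimal illustration (not from the article) of an attractor: for r = 2.9 the
# logistic map has a stable fixed point at x* = 1 - 1/r ~ 0.655, reached from
# any initial condition in (0, 1) and restored after a small disturbance.
def logistic(x, r=2.9):
    return r * x * (1.0 - x)

for x0 in (0.11, 0.5, 0.93):
    x = x0
    for _ in range(200):
        x = logistic(x)
    print(f"x0={x0:.2f} -> long-run state {x:.6f}")   # all converge to ~0.655172

x = (1 - 1 / 2.9) + 0.05        # a disturbance that is 'not too great'
for _ in range(200):
    x = logistic(x)
print(f"perturbed trajectory settles back at {x:.6f}")
```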

    The reality we perceive is thus made up from a multitude of entities, which to us appear to have a certain permanence, precisely because they have reached an attractor, relative to our point of view of observation. The influence of their environment is too small on our scale for them to be taken out of their attractors. However, this permanence, and therefore the notion of attractor transposed to physical reality, is always related to a scale of observation. If we cut ourselves slightly, our wound heals and our physical envelope returns progressively to its initial state. However, our physical envelope, the attractor of our metabolism, is simply a transitory state of matter, whether we look at it at the scale of the species—where it is ephemeral—or at the scale of the cell—where it is gradually renewed. Nevertheless, the different levels of observation have a certain legitimacy, insofar as the spatio-temporal quantities that characterize them are determined by the processes that take place there: the protein synthesis cycle, the life cycle of a cell, the circadian cycle, the life cycle of an organism, economic cycles, climate cycles, etc. Understanding the coupling between processes at different levels of observation is a difficult task, which lies at the heart of complex systems science.

    Biological systems are among the systems for which these entanglements of space and time scales are the most complex. The scientific developments of recent decades have made it possible to identify and conceptualize them, using graph theory and complex networks. We will now describe the underlying reasons that make these conceptual tools a natural language for the description of living systems.

    2. Levels as networks of processes

    In Les Etincelles de Hasard [9], the biologist Henri Atlan, who pioneered the theories of complexity and self-organization of the living, endorses a quotation from Szent-Györgyi that he highlights in his book: ‘Life does not exist’. It may seem paradoxical that biologists have reached the point of denying the existence of the subject of their research. This change in perspective has taken place under the influence of molecular biology, which has consistently demonstrated that the elementary building blocks of life, previously considered by the supporters of vitalism to be irreducible to physico-chemical properties, do indeed fall under the laws of inorganic matter. ‘The same laws apply, the properties alone vary: a stone does not breathe, an amoeba does not think…’ [10, p. 18]. However, from this observation Henri Atlan also invites us to recognize the legitimacy of the notion of life as one category of our life experiences, as a consequence of the specific properties displayed by living beings.

    Vitalism is dead. However, the possibility of reducing the organic to the inorganic continues to raise challenging questions, despite what the omnipresence of genetics at the beginning of the twenty-first century would suggest. Contrary to the expectations raised by the sequencing of the genome of several organisms, including the human genome, there is no indication that the ‘book of life’ can be read using just the four letters A, T, G, C.3 What are the underlying reasons for this? It seems that the conceptual toolbox with which we are accustomed to assessing inorganic objects is incomplete when it comes to addressing the phenomenon of life. Although life is not an explanatory notion of organic properties that should be superimposed upon physical laws, it nevertheless corresponds to a specific type of organization of matter, whose understanding requires a distinction between ‘strong’ and ‘weak’ reductionism.

    Let us consider for example the problem of the interconnection between the genetic level and other levels of organization. Contrary to most research carried out in recent years, some studies now show that life cannot be considered as merely the execution of a programme written on a double helix. For example, in the case of eukaryotes (cells that possess nuclei), inter-level effects occur as soon as the DNA condenses to chromatin fibre. Studies show that the DNA–protein interactions are radically different in chromatin (compared to the naked DNA), owing to the mechanical stresses that the chromatin superstructure exerts on the DNA of which it is composed [12,13]. Another remarkable example is the discovery of a relationship between the mechanical stress exerted on cellular tissues during their growth (physical pressure on tissues) and the expression of genes in the cells that compose them [14]. This reveals the presence of downward feedback between a physiological level and a molecular level. For any given organizational level, interactions between elements at that level are therefore likely to generate superstructures that will in turn have an effect on these elements, in particular by spatially constraining their interactions.

    It can be seen from these examples that we cannot understand living things without taking into account the entanglement of the levels of organization they generate. A concept that is absolutely essential to the understanding of this phenomenon and its connection with physico-chemical determinisms is the concept of emergence, for which several meanings can be found in the literature. A common feature of most of these definitions is the idea of the appearance of macroscopic structures as a result of local interactions between a large number of entities. In general, organizational levels are identifiable as such, precisely because emergent structures are identified in them. An important property of living organisms is the presence of numerous feedback reactions, from the emergent structures to the entities that generated them, thus entangling different organizational levels.

    A few billion years have passed since the first life forms appeared. Since then, life has continued to forge complex associations between the different emergent levels of interconnection it forms. Forms of organization, contingent arrangements between heterogeneous entities, have stabilized and then replicated. The advances of recent decades in molecular chemistry and theoretical biology, which have embraced complex systems approaches, now make it possible to conceptualize the questions of the origins of life and its increasing complexity. Science does not need to invoke the notion of finality to account for this, but instead relies on regularities that emerge contingently from self-organized processes. Based largely on graph theory and the notion of complex networks, these advances make it possible to push further the frontiers of Leibniz’s well-known question ‘Why is there something rather than nothing?’ [15, p. 727] without immediately taking the same shortcut as he did, ‘this last reason for things […] that we call God’. Unlike him, these theoretical advances place randomness at the heart of their explanatory system.

    3. Thinking about the phenomenon of life through the lens of network theory

    A detailed presentation of the new explanatory models for the phenomenon of life would require much more than a simple essay. Nevertheless, we can outline some fundamental principles that structure them. As will be seen, they imply thinking in terms of networks of interactions and processes. For this, we rely on three key examples: reflexively autocatalytic and food-generated sets (RAF), autopoietic structures and closure of constraints.

    (a) Reflexively autocatalytic and food-generated sets

    We start with a well-known architectural phenomenon that will serve as an analogy: the arch. The most rudimentary arch can be made from two elongated stones, buttressed against each other, each preventing the other from falling. This figure, generally rare in nature, is nevertheless quite understandable in terms of the simple laws of physics: two entities reinforcing each other in an unstable common equilibrium, which defies the state of minimal energy dictated by gravity (both stones lying on the ground). Each of the stones is a partial cause of the stability of the arch, and their positioning could have occurred under totally contingent circumstances, during a rockslide, for example.

    Let us now consider the sphere of life. Living organisms also struggle against a minimal state of energy, death. In so doing, they maintain within themselves an invariant organization4 of physico-chemical processes, which allows them to structure in a non-trivial way the matter they absorb, transforming it into a set of systems and subsystems that compose their body (genetic networks, protein networks, cells, organs, skeleton, etc.). To do this, they must constantly regenerate, from the elements they extract from their environment (e.g. carbon, oxygen, nitrogen), their constituent components, which are continuously decomposed according to the laws of physics and chemistry. If we look at chemical reactions and abstract the fundamental principles of how life works, one concept makes it possible to define a necessary condition for the phenomenon of life: the existence of RAF [16,17]. These sets play the role of the ‘arch’ in living systems: in the sense of our analogy, because they involve chemical elements that mutually reinforce each other’s production (cross-catalytic effects), but also in the etymological sense of the term (arkhē in ancient Greek, which first meant ‘beginning’, ‘origin’ or ‘source of action’, and later ‘first principle’ or ‘first element’), because these sets are perceived as determining elements in the emergence of the phenomenon of life.

    More precisely, an RAF is a set R of chemical reactions dependent on a food set F, rich in some subset of molecules (e.g. nitrogen, oxygen, carbon), that satisfies the following conditions [17]:

    reflexively autocatalytic: each reaction r ∈ R is catalysed5 by at least one type of molecule that is either a product of R or is present in the food set F; and

    F-generated: all reactants involved in reactions in R can be created from the food set F by using a series of reactions taken only from R itself.
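
    To make the definition concrete, here is a minimal computational sketch (my own illustration, with hypothetical reactions, not code from the cited works) of the standard reduction used to find the largest RAF in a reaction network: iteratively discard reactions whose reactants cannot all be generated from F, or that are not catalysed by any molecule reachable from F.

```python
# Minimal sketch of a maxRAF-style reduction, in the spirit of [17]: keep only the
# reactions that remain both F-generated and catalysed by a reachable molecule.

def closure(food, reactions):
    """Molecules producible from `food` using only `reactions` (F-generated closure)."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= produced and not set(products) <= produced:
                produced |= set(products)
                changed = True
    return produced

def max_raf(food, reactions):
    """Largest subset of `reactions` that is reflexively autocatalytic and F-generated."""
    current = list(reactions)
    while True:
        reachable = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= reachable                   # all reactants creatable from F
                and any(c in reachable for c in r[2])]      # catalysed by a reachable molecule
        if len(kept) == len(current):
            return kept   # possibly empty: then no RAF exists in this network
        current = kept

# Hypothetical two-peptide cross-catalysis, in the spirit of [21]:
# each reaction is (reactants, products, catalysts).
reactions = [
    (["a", "b"], ["P1"], ["P1", "P2"]),   # P1 formed from food, catalysed by P1 or P2
    (["a", "c"], ["P2"], ["P1"]),         # P2 formed from food, catalysed by P1
]
print(max_raf({"a", "b", "c"}, reactions))  # both reactions together form an RAF
```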

    The existence of autocatalytic sets is one of the fundamental principles proposed by the community of biologists in their effort to theorize on the origin of life. In line with the work of Eigen & Schuster [18] on self-organization, Stuart Kauffman [19,20] defined these as RAF sets, noting the omnipresence of catalytic phenomena in living organisms, and starting from a conception of life as an organized network of chemical reactions.

    Autocatalytic chemical reaction networks have since become the subject of considerable theoretical and experimental research [16,17]. The first experimental observations of RAF systems included two peptides sharing a fragment, each with the ability to self-replicate, and also to catalyse the production of the other peptide [21]. Unlike replicator systems, where the peptide with the highest rate of self-replication is expected to dominate the reaction, it has been observed that the production of each of the peptides is greater when they interact than when they are synthesized independently, thus maintaining a chemical reaction that leads to high concentrations of each peptide, provided the basic components of the reaction (F) are present in the environment. Here, we find the principle of the arch described above: two elements that interact to reinforce each other in their existence, and to form a sustainable structure.

    More complex autocatalytic structures were subsequently discovered that can be described theoretically and produced experimentally. For instance, Ashkenasy et al. [22] succeeded in constructing an RAF molecular network in which nine peptides catalyse each other. In such a configuration, it is the co-presence of all the components of the network and the continuous realization of the various chemical reactions of R that guarantee the persistence of all the network’s activity. In such a system, none of the nine peptides is the original cause of the presence of the others, because there is a circularity in the catalysis phenomena. The presence of a network of causalities (the elements of R) indeed makes it possible to maintain the presence of the reactants of R over time. The invariant of the system is its organization, i.e. the graph defined by the reactions of R.

    According to the definition of an RAF, its activity results in the preservation of spatial inhomogeneity over time, which is expressed by a high local concentration of the elements produced by R. The material thus self-organizes to differentiate space in a dynamic and sustainable manner, by creating specific components in certain places. It is the beginning of the phenomenon of life.

    The formulation of the definition of an RAF in the language of graph theory has made it possible to identify the RAF’s properties, and in particular to prove that their existence is guaranteed if a certain threshold of diversification of the basic chemical components is reached. In natural settings, this diversification can take place through a slow and contingent evolution under the cumulative effects of natural events such as lightning, volcanoes, sunrays, meteorite impacts, oxidation, etc. In this manner, it has been possible to demonstrate that this first ‘something’, which is a form of organization of matter clearly identifiable through its effects on the environment, necessarily appears after a certain period of contingent evolution of the elements, which tend to recombine under the influence of random physical constraints (temperature, pressure, etc.) [23].

    (b) Autopoietic systems

    Living organisms need to be able to locally create environments that are chemically (e.g. stomach acidity) or physically (e.g. constant body temperature) stable. Although RAFs demonstrate how such environments can emerge and be maintained, life is not just about a few chemical reactions that create locally inhomogeneous concentrations. Although they have succeeded in identifying an explanatory mechanism for certain properties of life, Ashkenasy et al. [22] and their predecessors have not recreated life in a test tube. In particular, these sets of chemical reactions lack the ability to constitute an entity delimited in space and time, capable of interacting with its environment, and one that we can identify as being a form of life.

    At this stage, we deliberately introduce the subjectivity of the observer in the definition of life. Very often, the question of the precise definition of the entities examined in a given discipline is the subject of considerable controversy between the scientists who study them. As noted by Stewart [24], biology is no exception. In his introduction to Autopoiesis and cognition [25], Maturana stresses the difficulty of drawing up a list of properties defining living organisms, such as reproduction, heredity, growth, etc.: on the one hand, because we would have to have a definition of what a living organism is to be sure that the list is complete, and on the other hand, because such a list cannot be a list of necessary and sufficient conditions. As stressed by Stewart in relation to the first item on this list, although mules do not reproduce, they are nevertheless living beings. Conversely, because other entities such as crystals or prions can ‘reproduce’ under certain conditions, does this mean they are ‘alive’?

    This problem led Varela and Maturana [1,25] to introduce the concept of autopoietic organization, in an effort to define a new class of entities to which living beings belong: ‘the autopoietic organization is defined as a unity by a network of productions of components which (i) participate recursively in the same network of productions of components which produced these components, and (ii) realize the network of productions as a unity in the space in which the components exist’ [1, p. 188]. The concept of autopoiesis radically changed the way the question of life is raised (Varela et al. [1, p. 187]).

    Notwithstanding their diversity, all living systems must share a common organization which we implicitly recognize by calling them ‘living’. At present there is no formulation of this organization, mainly because the great developments of molecular, genetic and evolutionary notions in contemporary biology have led to the overemphasis of isolated components, e.g. to consider reproduction as a necessary feature of the living organization and, hence not to ask about the organization which makes a Living system a whole, autonomous unity that is alive regardless of whether it reproduces or not. As a result, processes that are history dependent (evolution, ontogenesis) and history independent (individual organization) have been confused in the attempt to provide a single mechanistic explanation for phenomena which, although related, are fundamentally distinct.

    We assert that reproduction and evolution are not constitutive features of the living organization and that the properties of a unity cannot be accounted for only through accounting for the properties of its components. By contrast, we claim that the living organization can only be characterized unambiguously by specifying the network of interactions of components which constitute a living system as a whole, that is, as a ‘unity’. We also claim that all biological phenomenology, including reproduction and evolution, is secondary to the establishment of this unitary organization. Thus, instead of asking ‘What are the necessary properties of the components that make a living system possible?’ we ask ‘What is the necessary and sufficient organization for a given system to be a living unity?’ In other words, instead of asking what makes a living system reproduce, we ask what is the organization reproduced when a living system gives origin to another living unity?

    The reformulation of one of the main problems of a discipline is a major act. The reformulation proposed by Varela and Maturana will place the concept of the network at the heart of contemporary biology. Not only does it prefigure theoretical and experimental studies on the RAF ensembles, but it also conceptualizes ubiquitous inter-level relationships (ascending and descending causalities) in living organisms.

    To explain this, nothing is better than a concrete example. Let us consider the model of the tessellation automaton [24,26] (figure 1).


    Figure 1. Tessellation automaton. (a) Representation of the automaton and (b) inter-level loop between the membrane and the metabolism. See also Bourgine & Stewart [26] and Stewart [24]. (Online version in colour.)

    Inspired by the research of Varela et al. [1] and McMullin & Varela [27], the model conceptualizes the self-healing properties of a membrane delimiting a confined space (a ‘proto-cell’). We can briefly describe it as follows:

    a liquid substrate containing abundant molecules A hosts a delimited vesicle whose membrane is composed of components C;

    the membrane is asymmetrically permeable to the A molecules, such that it allows them to enter more easily than they can exit, thus inducing a build-up of A inside the space defined by the membrane;

    the membrane degrades when C randomly disintegrates and must be repaired to continue to form a unit and concentrate the A’s;

    repair of the membrane is carried out correctly only if the concentration of a component B is sufficiently high in the liquid bag, B being able to attach itself to the membrane so as to repair it by transforming itself into C; and

    the inner surface of the membrane catalyses a chemical reaction A + A → B leading to the formation of B molecules that remain trapped in the vesicle and accumulate (the membrane is impermeable to B).

    In this model, the existence of the vesicle, as a macrostructure, is a necessary condition for maintaining a high concentration of element B over time, which repairs its membrane and ensures its durability. At the same time, the high concentration of B is a necessary condition for the membrane to be repaired sufficiently quickly to prevent it from disintegrating. When this is not the case, the holes in the membrane can become large enough for the B elements to escape from the vesicle without turning into C elements. This toy model makes it possible to understand the inter-level loops that constitute the phenomenon of life: the creation of macro-structures from micro processes (integration, emergence) and the feedback of these macro-structures onto micro processes (regulation, immergence).

    This model also provides us with an example of interconnections between several time scales: the lifetime of the membrane (long periods, slow dynamics), which itself is made up from a network of C components adopting a specific topological configuration (a vesicle); and the lifetime of its components and the elements of the substrate (short periods, fast dynamics) which form a network of chemical reactions allowing the membrane to regenerate.

    This model is clearly insufficient to explain the phenomenon of life. The tessellation automaton is not alive. However, it gives us insight, allowing us to better understand the phenomenon of life, and in particular the importance of the entanglement of organizational levels. It highlights the type of explanation required to understand the stability of the membrane as a structure: this means clearly defining processes and their domain of viability, rather than simply focusing on interaction relationships. For example, it allows us to ask whether there exists a region of the parameter space where a membrane structure is viable: if the reaction A + A → B is too slow with respect to the degradation rate of the membrane, the entire structure collapses.
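
    A toy simulation can make this viability question concrete. The sketch below is my own, with entirely hypothetical rates rather than the published model: the vesicle persists over the run only when the membrane-catalysed production of B outpaces the random decay of the C components; otherwise holes accumulate and the structure collapses.

```python
# Hypothetical, simplified sketch of the tessellation automaton's feedback loop
# (assumed rates, not the published model): A leaks into the vesicle, the inner
# membrane surface catalyses A + A -> B, and B repairs decayed membrane sites (B -> C).
import random

def simulate(catalysis_rate, decay_prob, steps=2000, sites=100, seed=0):
    rng = random.Random(seed)
    holes, a_inside, b_inside = 0, 0.0, 0.0
    for _ in range(steps):
        intact = sites - holes
        a_inside += 2.0 * intact / sites                      # A enters through intact membrane
        produced = min(catalysis_rate * intact / sites,       # catalysis A + A -> B on the
                       a_inside / 2.0)                        # inner surface, limited by A
        a_inside -= 2.0 * produced
        b_inside += produced
        holes += sum(1 for _ in range(intact)                 # random degradation of C sites
                     if rng.random() < decay_prob)
        repaired = min(holes, int(b_inside))                  # B attaches and turns into C
        holes -= repaired
        b_inside -= repaired
        if holes > sites // 2:                                # membrane too damaged: B escapes,
            return False                                      # the vesicle disintegrates
    return True

# Crude scan of the viability region: repair must outpace decay (decay_prob = 0.005).
for rate in (0.05, 0.2, 0.8):
    print(f"catalysis_rate={rate}: viable over the run = {simulate(rate, decay_prob=0.005)}")
```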

    This approach to living beings as autopoietic systems has had successful experimental developments [28]. In particular, it is important to know whether such systems would be able to self-prime under realistic environmental conditions, without which the tessellation device would remain a pure abstraction. This is what Walde et al. [29] demonstrated experimentally. Their results prove that vesicles can spontaneously form in a solution of caprylic acid and oleic acid, and that they have an autopoietic property. These vesicles catalyse a network of chemical reactions in their interior, which allow the vesicles to be reproduced. This is an observation which, according to Luisi [30, p. 335], supports the hypothesis that ‘closed, cell-like compartments, may have existed in prebiotic time, showing a simplified metabolism which was bringing about a primitive form of stationary state—a kind of homeostasis. The autopoietic primitive cell can be taken as an example and there are preliminary experimental data supporting the possible existence of this primitive form’.

    (c) The triple closure: a new conceptual framework for the understanding of the living

    A bridge exists between the notion of autopoietic systems (processes closure) and the notion of RAF (autocatalytic closure) that strengthens our understanding of the fundamental nature of life. Montévil & Mossio [31] have proposed the concept of constraints closure, which extends the RAF model. There is no space here for a detailed description of this theory but, in a nutshell, the core observation is that the structures of living organisms constrain the processes that take place within them. In the tessellation automaton, the membrane constrains the circulation of the molecules A such that the creation of molecules B becomes possible, but the membrane itself is not altered by the chemical reaction that creates the Bs. It acts as a constraint on the reaction. Montévil & Mossio’s core idea is that managing constraints is what makes it possible for life to go beyond the second principle of thermodynamics.6 Living organisms consume energy and therefore contribute to the increase in the entropy of the universe, but they create order at the same time, so that the net balance of order versus disorder is more favourable than in physical processes involving inert matter.7

    If constraints are so important to living systems, there should be a trick for maintaining them through time. This trick, again, could be closure. Montévil & Mossio propose that constraints in living organisms are chained into a global and closed network of constraints that contribute to each other's production and maintenance. For example, in some complex organisms, the blood vessels canalize the blood flow. The blood flow contributes to multiple processes within the organism, processes in which the cells constituting the blood vessels do not take part directly, apart from their collective channelling effect. From the perspective of the cells constituting the blood vessels, the emergent structures that result from their collective embedding in a physical space, the theatre of their interactions, create new limitations and constraints on every possible configuration of the matter that circulates around them. These limitations generate new channelling properties. However, the formation and sustainability of these blood vessels are the consequences of other constrained processes in the organism, so that the constraint that blood vessel cells collectively generate depends on other constraints present in the organism.

    We get a closure of constraints when ‘a set of mutually dependent constraints act on the flows of energy and matter so as to collectively maintain themselves, and their organisation, over time’ [31, p. 190]. Kauffman [23] conjectures that together, these three closures (processes closure, autocatalytic closure and constraints closure) constitute ‘elan vital’, ‘a non-mysterious but wonderful life force’. We will hereafter refer to this conjecture as the triple closure theory.

    The theoretical and experimental results that led to the triple closure theory show that although it is legitimate to try to partially account for the properties of an organizational level according to the properties of the entities at the lower level (weak reductionism), it does not follow that it is legitimate to try to explain all phenomena on the basis of the properties of a single level (strong reductionism). The influence of the topological makeup of interactions reveals macrostructures that have radically new properties and that modify the space of possibilities (phase space) of the elements at the lower level. We cannot make successive reductions as we would go down the steps of a staircase. The properties of a processes/catalysts/constraints closure cannot be deduced from the properties of its elements alone.

    All these new theoretical perspectives and observations have profound consequences on biology as a discipline. They call for a paradigm shift from the modern synthesis (neo-Darwinist) theory8 to a systemic approach suitable for explaining phenomena such as downward causation or epigenetic inheritance [32].

    An illustration of the required radical change in perspective is given by the study of circadian rhythm [32]. The simplest ‘mainstream’ explanation of this process considers a DNA sequence as the starting point of a ‘programme’ that produces a 24 h rhythm. But a closer look at circadian rhythm reveals that this ‘programme’ critically depends on the metabolism of the cell: ‘the intricate cellular, tissue and organ structures that are not specified by DNA sequences, which replicate themselves via self-templating, and which are also essential to inheritance across cell and organism generations’ [32, p. 10]. This observation leads Noble to assert that the concepts of ‘genetic programmes’ or ‘gene networks’ are misleading since they ‘fuel the misconception that all the active causal determination lies in the one-dimensional DNA sequences. It does not. It also lies in the three-dimensional static and dynamic structures of the cells, tissues and organs’ [32, p. 10]. The same applies to the concepts of ‘genetic code’, ‘selfish gene’, ‘genome as the book of life’ that have been extensively used for framing biological research since the end of the twentieth century.

    The second major consequence of this paradigm shift concerns our conception of biological evolution. When modelling the evolution of inert matter, physicists first describe the set of all possible states of the systems, called the phase space (e.g. all the possible positions of the planets and their velocities in a three-body problem) and the laws of evolution (e.g. the law of gravitation) to derive the equations of evolution. The phase space is fixed and defines the domain of possible trajectories of the embedded evolving systems. It defines, in a sense, certain constraints to the evolution of all possible systems (a planet cannot take a shortcut in a fourth dimension and reappear suddenly somewhere else in our three-dimensional world).

    Life is a very different matter. In the triple closure theory, the set of possible constraints closures is part of the phase space of biological systems. At the scale of biological evolution, living organisms are continuously creating new sustainable constraints (e.g. new organs and new biological structures) that are passed from one generation to the next precisely because they are integrated into new constraints closures. These new constraints closures modify the conditions of appearance for future biological innovations and are thus changing the phase space of biological evolution. Contrary to inert matter, as Longo [33, p. 5] pointed out, biological systems evolve in open phase spaces in which ‘the list of possible observables and parameters, changes along historical time’: evolution itself is evolving. The consequence of this is that ‘what evolves cannot be said ahead of time: what evolves emerges unprestatably’, as Kauffman [23, p. 6] puts it. I invite the reader willing to go further to refer to [23,33,34].

    To summarize, the set of possible static configurations of a large collection of physical entities under-determines all of its configurations in interaction situations. The embedding of such sets in a physical space where its elements can interact (i.e. a complex system) reduces this under-determination, a phenomenon that is at the heart of our perception of the emergent nature of the forms which this collection of entities can take. As a consequence, we cannot understand all of the properties of a complex system in isolation from its deployment in space, even though we may have a detailed knowledge of the properties of each of its elements. This is the fundamental reason why, as Anderson [35] stated, ‘More is different’. At the same time, new properties emerge from the interactions and, as Morin [36] stressed, some properties of the elements are inhibited by the constraints collectively produced (for example, the expression of some genes can be inhibited by upper-level phenomena).

    The triple closure theory highlights the necessity of conceptualizing organizational levels and situated or spatialized interactions in the modelling of complex systems. Seeking ‘a one-level standpoint’ to study complex systems leads to conceptual aporia, be it in biology [32], where the neo-Darwinist theory proposes to derive all the emergent structures of the living from the analysis of gene networks, or in sociology, where some authors [37] claim that we could flatten social structures into a single level from which to examine from all sides the relationships between aggregates and their constituents.

    4. Life, cognition and cultural evolution

    (a) Reframing cognition with operational closure

    RAF theory accounts for the spontaneous and sustained emergence (at the chemistry level) of chemical reaction networks locally creating high concentrations of biochemical components. The theories of autopoiesis and constraint closure explain how living organisms can mobilize biochemistry to generate emergent structures that differ from their environment, as autonomous and perennial entities, thus constituting vessels that catalyse chemical reactions specific to living organisms.

    The subtle link illustrated by these theories, of dependence/autonomy between an organizational level and the higher levels through catalysis and regulatory processes, is found at all scales of life. Autopoietic entities can themselves interact and take part in new types of processes. Thus, living organisms are made up of the stratification of entangled complex processes, some of which can legitimately be described as living systems. The cells of our body can be grown outside their original environment as living organisms in their own right. Nevertheless, collectively, they take part in multi-scale processes that define us as a new living organism.

    Because triple closure theory applies generically to any set of interacting entities, whatever their nature, it makes it possible to theorize the involvement of emergent autopoietic structures in open sets that are reflexively autocatalytic beyond biochemistry. The complexification of living organisms is then explained by the recursive networking of process networks: a multi-level organization that extends to everything that living organisms have generated, from social systems to ecosystems [38,39].

    This perspective on living organisms highlights a specificity that distinguishes them from other natural or artificial entities that we know. Their operation is not instructed by their environment, but determined by their own structure and organization [25]. The feedback, which materializes the double upward and downward causality that we have just described, is explained by the existence of physico-chemical systems whose activity stabilizes the cohesion and production of their own components. This is what Maturana & Varela referred to as operational closure. If this activity stops, these systems will disintegrate, and ‘die’. As pointed out by Edgar Morin in The Method, unlike artificial machines, existence and functioning are two inseparable modes of living systems. ‘Thus, the identity of such complex systems cannot be defined by their constituents, but by the processes that take place in them, and which allow them to produce themselves continually, their autopoietic character. Their fundamental invariant is their own organization’ [40, p. 193].

    However, in order to maintain this fundamental invariant and thus to resist any interference from their environment, living systems need to constantly extract resources from this same environment (otherwise their functioning would violate the laws of thermodynamics). As pointed out by Clarke & Hansen [41], this struggle between openness to energy flows and the invariance of their organization with respect to the disturbances coming from their environment is precisely what makes it possible to qualify living beings as entities endowed with cognition:

    ‘Once the paradigmatic shift is made from the physical to the life science, the order-from-noise principle in self-organizing systems gives way to the openness-from-closure principle in autopoietic systems. To understand the stakes of this development, one must bring into play the fundamental distinction between thermodynamic and autopoietic principles. Thermodynamically, a system is either open or closed to energic exchange with its environment; by contrast, autopoietic systems are both environmentally open to energic exchange and operationally closed to informatic transfer. According to this understanding, operational closure ‘far from being simply opposed to openness’ is in fact the precondition for openness, which is to say for any cognitive capacity whatsoever’. [41, p. 9]

    The second-order cybernetics allows us to think about this phenomenon through a constructivist conception of cognition. The most common definition, arising from cognitivism, envisages cognition as a manipulation of representations dealing with the objects in our environment. von Foerster [42], on the contrary, considered that there is no objective environment outside cognition. For him, cognition is the emergence of neuronal activities specific to the observer, called eigenbehaviours, resulting from his/her interaction with the environment. To understand the notion of eigenbehaviours, we can draw an analogy with a string on a musical instrument. Only those tones corresponding to harmonics of the string’s fundamental tone will make it resonate. A string tuned to produce a Do will resonate if the upper Do, Sol or Mi are played, but not the Fa#. It cannot ‘perceive’ the Fa#. The brain’s eigenbehaviours are considerably more complex, in the sense that the brain has an undefined number of inter-dependent ‘strings’, resonating with multiple sensory dimensions, which are furthermore created and tuned by learning processes. However, the brain can only resonate along its own eigenmodes that have been forged during its history. Consequently, objects are not entities with objective properties; for an observer, anything that ‘presents tokens for eigenbehaviours which we can establish’ [41, p. 31] is an object. As summarized by Varela [43, p. 33], neuronal activities ‘are internally perceived as thought and will, or are externally perceivable as speech and movement’, but they all correspond to some eigenbehaviour of our brain viewed as a set of interconnected neurons.
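
    As a back-of-the-envelope check of this arithmetic (my own illustration, not the article's), one can compare a few equal-tempered tones with the harmonics of a string tuned to Do (C4): the Do, Sol and Mi above it land within a few cents of an integer multiple of the fundamental, whereas Fa# misses every low harmonic by about a semitone.

```python
# Rough check (assumed equal-tempered frequencies) of the string analogy: tones
# close to an integer multiple (harmonic) of the fundamental can make it resonate.
import math

C4 = 261.63  # Hz, fundamental of the 'Do' string
tones = {"Do (C5)": 523.25, "Sol (G5)": 783.99, "Mi (E6)": 1318.51, "Fa# (F#5)": 739.99}

for name, f in tones.items():
    n = round(f / C4)                                  # nearest harmonic number
    deviation = 1200 * math.log2(f / (n * C4))         # deviation from it, in cents
    print(f"{name}: nearest harmonic {n}, deviation {deviation:+.0f} cents")
```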

    This change of perspective about the nature of cognition is important both for philosophy and for the understanding of biological, social and artificial systems. In the paradigm of the second-order cybernetics, information is a perturbation of an autonomous cognitive system that either makes it switch from one eigenbehaviour to another or, when the perturbation is strong enough, leads to new eigenbehaviours through a modification of the relationships between its elements9 (i.e. a learning process). However, sets of eigenbehaviours can be so robust that even strong perturbations hardly lead to a learning process, despite the fact that the long-term survival of the cognitive entity may be at stake. In the domain of social affairs, this robustness manifests itself at the individual or collective level in terms of the self-consistency of the (collective) belief system, which might fail to take into account new information, whatever its ‘true’ or ‘false’ value could be10. Self-consistent belief systems often distinguish themselves by their understanding of causality and the importance they attach to it; they are named ‘paradigms’, ‘ideologies’ or ‘religions’ according to their propensity to learn from past experience.

    (b) The end of cultural evolution

    The increasing complexity of life has led to the emergence of a species endowed with an advanced form of consciousness that allows its members to reflect on their actions and their future consequences. So far, human rationality has been mainly framed by notions of direct consequences and causality at the level of trivially chained processes or constraints. Because humans rarely consider the wider systems in which their actions take place, in terms of the triple closure we have just described, we will call ‘local’ the reasoning and the notion of causality that predominate in humans.

    Local reasoning led humans to excel in the art of short-term optimization. Innovations for the control of natural and artificial systems have appeared and spread across various cultures: grow crops faster and protect them against pests, prevent illness, move faster, etc. When integrated within a civilization,11 the diffusion of cultural innovations can be scaled up, leading to collective behavioural changes. Some civilizations developed these short-term optimizations up to the point where their collective implementation disrupted the closures they were part of, inducing counter-productive effects12 and sometimes leading to the collapse of the civilization, as documented by Diamond [47].

    Cultural evolution is the evolutionary process shaping societies and civilizations in the long run. Whatever the paradigm chosen to study cultural evolution (see [48], [49] or [44] for reviews from different paradigmatic perspectives), civilizations have so far been thought to be able to evolve independently of each other, even if some of them might interact.

    But things are changing. Over the last century, cultural evolution has considerably accelerated13 and the cultures populating the Earth have developed in an ever-increasing interdependent way14: various phenomena that used to be restricted to part of the world, like epidemics or economic crises, now spread across the globe through global exchange networks; the path taken by a single country can affect all cultures worldwide (such as starting a nuclear war, overexploiting fossil fuels or carrying out massive deforestation); and information circulates within technology platforms that gather billions of individuals. All these phenomena are recent, some of them being less than 20 years old.

    On the other hand, what we used to call the ‘environment’ is more and more under the influence of human civilizations. Humans and livestock now represent 95.8% of all mammal biomass [50] and, as of 2012, humans had modified more than 50% of Earth's land surface [51,52]. Bar-On et al. [50] estimated that humans represent only 0.011% of Earth’s total biomass but their combined scale of carbon appropriation and consumption through biomass consumption and fossil fuel use might be approximately 30% as large as the total net primary production (NPP) of the Earth [53]. Owing to human influence, the Earth’s biosphere is approaching a state shift [54,55] and scientists are warning that the sixth mass extinction might be underway [56,57].

    These facts mean that there are no longer such phenomena as independent civilizations taking new cultural paths at their own risk. Although some civilizations or small societies might keep a certain autonomy and collapse in the wings as non-vital organs do, the failure of some parts of humanity could perturb the organization of all of its sub-parts, up to a possible collapse of all human civilizations.15 The fates of humans are now bound together worldwide, for better and for worse, like cells in an organism. There is only a single evolutionary path left, one that humanity as a whole creates while walking.

    It is reasonable to think that, with the multiplication of different types of closures, humanity could be on the verge of an unprecedented organizational transition from a myriad of individuals belonging to relatively independent sub-populations to a single ‘organism’ where all parts are inter-dependent. This transition is all the more likely as the overexploitation of the environment by humans leads to the depletion of natural resources, a phenomenon well known to promote the transition from single-cell to multi-cell organisms [58,59], a transition that seems to be more common and faster than originally thought [60].

    This transition will lead to a rethinking of what humanity’s environment is. Global temperature, ocean acidification, ice sheet and forest cover extents now depend on human activities. The ‘environment’, as we understand it as individuals, is becoming a set of populations of alien organisms that help humanity maintain its metabolism just like the many micro-organisms that populate our intestines.16 Our environment is being reduced by the new humanity–organism to intestinal flora status.

    So, what would be the new ‘environment’ of this humanity–organism? The fact that an increasing number of people dream of the conquest of space and of new habitable planets might be a hint. However, for the moment, there is no planet B. The humanity–organism is its own environment.

    Humanity, by becoming an ‘organism’, is becoming de facto mortal. Cultural evolution, as we used to think of it, is over. It will become more similar to a process of adaptation and learning at the level of humanity, one that can lead to its disappearance at any time. The new humanity–organism is alone on its evolutionary path and we can ask ourselves whether we can afford to have it guided by random trial and error, or even by an ‘invisible hand’17 focusing on ‘local’ reasoning. The kinds of collective cognition and behaviours that humanity will adopt in this new phase of its existence will determine its chances of survival in the future.

    5. Concluding remarks and perspectives

    From gut bacteria to the human and natural ecosystems, the ramifications of the phenomenon of life constitute the foundations of the world we live in. Human beings have come to control their entire environment, from gene manipulation to economics and land management, with the same concepts they use to reason about inert matter: build up stocks, move stocks, activate the right trigger or send the appropriate signal so that the desired elements change their states. However, it must be acknowledged that, from antibiotic resistance to global biodiversity collapse and climate change, this way of interacting with one’s environment has reached its limits, threatening humanity itself at the same time.

    In this article, we have examined how concepts from network science and graph theory, developed to understand the specificity of life, can help us better address some of the important challenges facing humanity today. In order to ascend the levels of organization of matter, from physics to biology and societies, we outlined three fundamental theories developed in the wake of second-order cybernetics to reflect on complex systems: autopoiesis theory, RAF network theory and constraints closure theory. These three theories have in common the concept of closure, which is required to think of an entity able to act on itself. Kauffman [23] proposed that the three kinds of closure conceptualized by these theories are the prerequisite for the phenomenon of life: processes closure, catalytic closure and constraints closure. Triple closure theory invites us to change our world view, shifting our attention from state changes to process changes, and to revise our notion of causality from ‘local’ to complex systems thinking. The entities that make up our world are no longer envisioned as isolated units that can be modified independently of each other. The world is rather composed of entangled, self-sustained and interdependent dynamical processes among which we isolate entities by observing them over particular spatial and temporal scales.
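    To make the notion of catalytic (RAF) closure more concrete, the sketch below illustrates, in Python, the kind of fixed-point reduction used in the RAF literature to extract a maximal reflexively autocatalytic and food-generated subset from a reaction network. It is a minimal sketch only: the molecule names, reactions and catalysis assignments are invented for the example and do not come from the article or from any published dataset.

```python
# Minimal sketch of a maxRAF-style reduction on a toy reaction system.
# Molecules, reactions and catalyses below are invented for illustration only.

def closure(food, reactions):
    """Molecules producible from the food set using the given reactions."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= produced and not set(products) <= produced:
                produced |= set(products)
                changed = True
    return produced

def max_raf(food, reactions):
    """Iteratively discard reactions whose reactants or catalysts cannot be
    reached from the food set; the fixed point (if non-empty) is a maximal
    reflexively autocatalytic and food-generated (RAF) subset."""
    current = list(reactions)
    while True:
        reachable = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= reachable                  # all reactants reachable
                and any(c in reachable for c in r[2])]     # at least one catalyst reachable
        if len(kept) == len(current):
            return kept
        current = kept

# Toy example: each reaction is (reactants, products, catalysts).
food = {'a', 'b'}
reactions = [
    (('a', 'b'), ('ab',), ('abb',)),   # catalysed by a product of the network
    (('ab', 'b'), ('abb',), ('ab',)),  # catalysed by a molecule the network produces
    (('ab', 'x'), ('abx',), ('a',)),   # 'x' is never producible: will be discarded
]

print([r[1] for r in max_raf(food, reactions)])  # [('ab',), ('abb',)]
```

    In this toy example, the third reaction is discarded because one of its reactants can never be produced from the food set, while the two remaining reactions form a small self-sustaining, collectively catalysed network: a miniature picture of what catalytic closure means.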

    This new paradigm calls for a rethinking of core concepts of biology, from the role of DNA to the concept of biological evolution. Following Maturana & Varela [25], it also leads to a rethinking of cognition as the prerogative of an autonomous entity, defined as a self-sustained network of processes that maintains its internal organization (the structure of this network) when it faces perturbations from its environment (the entities relative to which it is differentiated). We have cognition when the activity of this network of processes switches from one eigenbehaviour to another or when its structure is modified by learning. Learning occurs when a perturbation leads to a modification of the internal organization of the autonomous entity, with changes kept small enough that the self-sustainability of the new network of processes is preserved. This immediately raises the question of ‘perception’, understood as a mode of interaction between an autonomous entity and its environment [62], and of what this perception does, from a systemic point of view, to this entity.
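    As a loose illustration of switching between eigenbehaviours, the toy model below (a minimal sketch, not the formalism of autopoiesis theory) uses a small Hopfield-style network whose two stored patterns play the role of two stable regimes of activity. A perturbation of the state can push the network from one regime into the other while its internal organization, the weight matrix, stays unchanged; learning would correspond to modifying that matrix while preserving the existence of such stable regimes.

```python
import numpy as np

# Toy illustration: a small Hopfield-style network whose two stored patterns
# act as 'eigenbehaviours' (stable regimes of activity). All values are arbitrary.

rng = np.random.default_rng(0)
p1 = np.array([ 1,  1,  1, -1, -1, -1])
p2 = np.array([ 1, -1,  1, -1,  1, -1])

# Hebbian weights storing the two patterns (zero diagonal).
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)

def settle(state, W, steps=20):
    """Asynchronous updates until the network relaxes onto an attractor."""
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Start on the first eigenbehaviour, then perturb three units: the activity
# switches to the other eigenbehaviour while the structure W is unchanged.
s = settle(p1.copy(), W)
print(np.array_equal(s, p1))                     # True: resting on eigenbehaviour 1
perturbed = s.copy()
perturbed[[1, 4, 5]] *= -1                       # perturbation from the 'environment'
print(np.array_equal(settle(perturbed, W), p2))  # True: switched to eigenbehaviour 2
```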

    Cultural evolution is not left out of this new paradigm. ‘Complex systems are systems capable of complexification’18 and, in recent decades, humanity has reached a new stage of complexification in which all its sub-components have become interdependent, bringing it to the brink of a transition towards the formation of a super-organism. This forthcoming organizational transition calls for a rethinking of the current regime of cultural evolution: from selection under a trial-and-error process at the population level to the learning process of a single entity that becomes de facto mortal.

    The choices we make to guide this transition will determine the longevity of this humanity–organism. The future is wide open. Nevertheless, observing the recent changes in our societies, we can identify three archetypal scenarios that differ in the importance given to people’s voices and initiatives: the homunculus scenario, the artificial intelligence (AI) scenario and the collective intelligence scenario.

    In the homunculus scenario, the learning and decision processes of the super-organism are concentrated in the hands of one or a few (powerful) people assisted by the development of big data technologies. It is the path currently being explored by China with its nearly 1.4 billion citizens, through its forthcoming ‘social credit’ system [63] and an impressive deployment of sensors and AI technologies to monitor its population. With large-scale population control based on criteria determined by a central system, this scenario offers no guarantee that the super-organism will have better learning and decision-making processes than humans, who are obviously error-prone. Moreover, power struggles at the head of this super-organism will inevitably occur, creating instabilities that will threaten the entire system. Therefore, in the long term, this system will almost certainly lead to a global collapse.

    In the AI scenario, humans could delegate collective decision processes to sophisticated AI procedures, thus leading to a form of trivialization of society that von Foerster warned us against [62]. The penetration of AI technologies into daily individual and collective decision-making processes could be a prelude to this scenario, and some already claim that the combination of AI and Big Data is a technological fix that could allow humanity to regain control of its environment. Anderson [64], one of the most extreme supporters of this position, explains that ‘the new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all’. In addition to being false from a mathematical point of view [65], this assertion could also mislead about the appropriate measures to be taken to meet the challenges of our times. The hope that new technologies alone will make it possible to collectively master the world ignores the fact that these technologies implement only one aspect of cognition.19 Most if not all AI implementations are heteronomous: they do not achieve operational closure and cannot adapt to environments with open-ended phase spaces. The risk is that their ubiquitous deployment in all aspects of our daily lives could engrave past and inappropriate collective behaviours into code. These technologies may improve our collective decision-making processes while at the same time making them less flexible, and therefore less adaptable.

    The first two scenarios have in common that they move away from the complex systems mode of organization of living systems towards centralized modes of organization. They are in a way an extension of an age-old notion of individual cognition in which consciousness and decision-making processes emanate from an ‘I’ that controls the body. There is, however, a third scenario in which the principles of self-organization, pervasive in living systems, are preserved. There are many examples in nature where decentralized decision-making processes that surpass the cognitive capacities of the individuals making up the collective are key contributors to the long-term survival of a species. In some species, such as social insects, the collective behaviours of their members allow them to perform tasks that are far beyond the reach of individuals’ cognitive abilities, such as building complex nests or raising other animals, a phenomenon called collective intelligence [68]. Collective intelligence is often enabled by stigmergic interactions, i.e. traces left by individuals in the environment that ensure decentralized coordination on a large scale within a population. For example, ants form effective pathways to food sources by leaving, when foraging, a pheromone on the ground that indicates to their fellow ants a path to a potential food source.
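    The logic of this pheromone-based coordination can be conveyed by a very small simulation, sketched below under entirely arbitrary assumptions (two paths of different lengths, pheromone deposits inversely proportional to path length, and a fixed evaporation rate). It is a toy illustration of stigmergy, not a model of any particular ant species.

```python
import random

# Toy 'double bridge' stigmergy simulation: two paths to a food source, ants
# choose probabilistically according to pheromone levels, deposit pheromone
# inversely proportional to path length, and pheromone slowly evaporates.
# All numbers are arbitrary and purely illustrative.

random.seed(1)
lengths = {'short': 1.0, 'long': 3.0}
pheromone = {'short': 1.0, 'long': 1.0}   # start with no bias between paths
EVAPORATION = 0.02
N_ANTS = 2000

choices = []
for _ in range(N_ANTS):
    total = pheromone['short'] + pheromone['long']
    path = 'short' if random.random() < pheromone['short'] / total else 'long'
    choices.append(path)
    # Stigmergic trace: shorter paths receive stronger reinforcement.
    pheromone[path] += 1.0 / lengths[path]
    for p in pheromone:
        pheromone[p] *= (1.0 - EVAPORATION)

late = choices[-200:]
print('share of the last 200 ants on the short path:',
      late.count('short') / len(late))    # typically close to 1.0
```

    Because the shorter path is reinforced more strongly each time it is chosen, the colony-level choice converges on it even though no individual ant ever compares the two paths: the coordination lives entirely in the traces left in the environment.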

    Stigmergic interactions have been at the heart of human societies for millennia. To name but one example, the invention of writing made it possible to convey information over large space–time scales between individuals without any particular tie, a phenomenon later accelerated by the invention of printing. The possibility of this third scenario, based on a global organizational transition favoured by the percolation of collective intelligence processes at the scale of humanity, must be considered in the light of the vertiginous increase in the use and reach of stigmergic means of coordination in our societies. With the development of the World Wide Web (57% penetration of the worldwide population in 201920), followed by that of social networks (45% penetration in 2019), new large-scale supports for stigmergic interactions have emerged. In the digital age, every contribution to a web page, every publication on an open archive, a social network or a forum, and every review left on a commercial website is a trace left behind that is likely to guide the future actions of complete strangers. These stigmergic interactions, as in the case of the anthill, generate social constructions that are out of all proportion to what isolated individuals could have produced.21

    There are, however, fundamental differences between the collective intelligence of species such as social insects and that of the human species. As pointed out by Lestel [69, p. 88], ‘the mediatized actions of social insects are collective and rigid, those of chimpanzees are individual and intelligent—and those of humans are collective and intelligent’.22 While in other animal societies the characteristics of collective intelligence are closely associated with the genetics of individuals, in humans it is a complex epigenetic phenomenon that depends both on the characteristics of individuals and on the patterns of interaction they are able to form. In addition, collective intelligence processes often have the effect of modifying the very patterns of interaction and individual characteristics that gave rise to them. This particular type of collective intelligence, which achieves operational closure, has been called social cognition [70,71]. For humanity to embark on this third path, the appropriate organizational schemes for social cognition at the scale of humanity have yet to be invented.

    There is a tension between these three scenarios. The first scenario is obviously incompatible with the third, and it is no surprise that the first thing an authoritarian regime does when it feels threatened by some form of collective organization is to censor or suppress the main stigmergic media: the web and social networks. The first scenario may also follow a transition based on the second, since it is all the easier to centrally control a population whose interactions and behaviours have already been channelled by Big Data and AI technologies.

    The second scenario could also interfere with the third if AI technologies are deployed inappropriately. Since the nature of interactions between humans is the keystone of social cognition processes, the design of technologies that mediate interactions between people, such as online social networks or AI-assisted recommendation systems, should be expected to have a very significant impact on cultural evolution. For example, von Foerster’s conjecture,23 transposed to our modern era, suggests that one of the effects of the large-scale penetration of social networks and recommendation systems is that collective dynamics become at once more unpredictable and more manipulable [73]. Understanding the role of these new technologies in the enhancement or degradation of social cognition processes is thus a major scientific challenge.

    The balance between these three scenarios in humanity’s transition towards a new organizational level of life will depend on our understanding of the impact of information technologies and AI on our collective behaviours and on our relationship to past and future events. But first of all, it is important to remember, at the risk of stating the obvious, that social cognition processes are tightly linked to individual cognition and decision-making processes. Therefore, humanity’s propensity to follow the third scenario, and its future ability to adapt and learn as an organism, will be directly related to the efforts invested today in educating the world’s citizens.

    Data accessibility

    This article does not contain any additional data.

    Competing interests

    I declare I have no competing interests.

    Funding

    This study has been funded by the ANR project EPIQUE (ANR-16-CE38-0002-03).

    Acknowledgements

    This article has benefited from many fruitful discussions with my colleagues at the Centre de Recherche en Epistémologie Appliquée (CREA, Paris), for which I would like to thank them collectively.

    Footnotes

    Endnotes

    1 Poincaré demonstrated in the late nineteenth century that a gravitational system composed of only three bodies such as the Sun, the Earth and the Moon (the so-called three-body problem), although described by fully deterministic equations, exhibits chaotic behaviour.

    2 This makes it possible for meteorologists to sometimes predict rain a few days in advance with 100% confidence, despite the fact that weather is the prototypical example of a chaotic system [4,8].

    3 The DNA of any living organism is ‘written’ from only four molecules (nucleotide bases), symbolized by the letters A, C, G and T. They can pair in such a way as to form the famous double helix discovered by Watson et al. [11].

    4 Invariance always takes place over a given time scale, that is usually smaller than the average lifetime of the organism. Over its lifetime, the organization of an organism might change, for example, under the influence of ageing. Ultimately, these organizational reconfigurations lead to death.

    5 A catalyst is a substance that increases the speed of a chemical reaction without being consumed by it. It facilitates the reaction by its mere presence or by transient interaction, as long as the elements required by the chemical reaction are present.

    6 This principle asserts that entropy, a quantity that measures disorder, necessarily increases in any transformation of a closed system. The apparent paradox of the living is that we perceive it as creating some kind of order although, as a part of the closed-system universe, it necessarily creates more disorder than order.

    7 To give an intuition of the fundamental reasons for this phenomenon, when a displacement of matter is constrained by a physical structure, this creates what is called work in thermodynamics. The relationship between the increase in entropy ΔS of a system, the energy ΔU it absorbs, the work W produced and the temperature T of the system is given by ΔS = (ΔU − W)/T. Because living systems are able to channel, thanks to the emergent structures they form, the matter displaced by the energy they consume (think of the displacement of blood cells being channelled by blood vessels), they produce work (W > 0) and consequently less entropy than an unstructured release of the same amount of energy.
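    As a purely illustrative numerical reading of this relationship (the values below are arbitrary and not taken from the article), compare the entropy produced when the same amount of absorbed energy is, or is not, partly channelled into work:

```latex
% Arbitrary illustrative values: the same energy input produces less entropy
% when part of it is channelled into work (W > 0).
\Delta S_{\text{unstructured}} = \frac{\Delta U - 0}{T}
  = \frac{100\,\mathrm{J}}{300\,\mathrm{K}} \approx 0.33\ \mathrm{J\,K^{-1}},
\qquad
\Delta S_{\text{structured}} = \frac{\Delta U - W}{T}
  = \frac{100\,\mathrm{J} - 40\,\mathrm{J}}{300\,\mathrm{K}} = 0.20\ \mathrm{J\,K^{-1}}.
```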

    8 The modern synthesis theory has shaped the scientific landscape in biology since the 1930s with the assumption that DNA is the ultimate explanatory level to understand living organisms.

    9 In the brain, for example, learning leads to the creation or modification of connections between certain neurons.

    10 Let us recall that climate change issues have been known for decades, yet countries still hardly take them into account in their policies, despite the fact that scientists now predict that we have less than a decade left to act.

    11 Following Flannery [44, p. 400], we use here the term civilization to refer to ‘that complex of cultural phenomena which tends to occur with the particular form of socio-political organization known as the state’.

    12 Counter-productivity has been theorized by Illich [45, p. 11]: ‘When an enterprize grows beyond a certain point on [an ad hoc scale], it first frustrates the end for which it was originally designed, and then rapidly becomes a threat to society itself’. As stressed by Dupuy [46], counter-productivity in societies characterizes a system that escapes the control of those who contribute to it, and is destroyed by the same means which are intended to serve it: ‘medical science corrupts health, school makes one mindless, transportation immobilises, communications make one deaf and dumb, information flow destroys the senses, […] industrial food converts to poison’ [46, p. 60].

    13 This can be measured in many ways: the rate of innovation, the number of cultural goods produced, the volume of information produced, etc. Let us remind the reader that, as of 2017, it was estimated that 90% of the data available to humanity had been produced during the previous two years.

    14 This led a group of dozens of famous scientists and policymakers united under the name of Collegium international to publish the Déclaration Universelle d’Interdépendance (Universal declaration of interdependence) on the occasion of the 60th anniversary of the United Nations (cf. electronic supplementary material, appendix).

    15 A future scenario in which civilization A would have disappeared after triggering a nuclear winter, civilization B would have collapsed after increasing Earth’s temperature by 8°C, but civilization C would have survived thanks to its sustainable practices is no longer a possible future. The behaviours of civilizations A and B would have preempted the future of civilization C.

    16 Which, let us remember, are much more numerous than our own cells.

    17 The concept of the ‘invisible hand’ was introduced by Smith [61] and is the cornerstone of neoliberalism. It states that leaving individual interests free to self-organize is the most efficient way to achieve the public good, because every individual is ‘led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest, he frequently promotes that of the society more effectually than when he really intends to promote it. I have never known much good done by those who affected to trade for the public good’ [61, p. 349].

    18 Jean-Pierre Dupuy 2017, personal communication.

    19 To take one example, deep learning [66], the most popular AI approach today, is based on multi-layer artificial neural networks trained on billions of examples. This supervised training forges complex eigenbehaviours that make it possible to categorize a huge variety of inputs and extract patterns from the data. But deep learning lacks autonomy. Its success depends on the meanings that have been injected into the training sets by thousands of humans [67], and it is unable both to adapt on the fly to new meanings without new training and to take into account emergent inputs that were not specified in advance. Supervised machine learning is a very useful extension of human thought but, still, it is more artificial than intelligent.

    21 Wikipedia is a very good example. A pure product of stigmergic interactions, it is in the top 10 most visited websites in the world with more than 130 000 active contributors per month. Collectively, Wikipedia’s contributors bring to life a medium that synthesizes in real time a set of facts and knowledge that would have been impossible to conceive without a stigmergic medium like the web. See https://en.wikipedia.org/wiki/Wikipedia:Wikipedians.

    22 ‘Les actions médiatisées des insectes sociaux sont collectives et rigides, celles des chimpanzés sont au contraire individuelles et intelligentes—et celles des humains sont collectives et intelligentes’.

    23 This conjecture links the rigidity of interpersonal interactions to the individual’s ability to control his or her destiny when part of a collective. It has been turned into a theorem by Koppel et al. [72].

    One contribution of 11 to a theme issue ‘Unifying the essential concepts of biological networks: biological insights and philosophical foundations’.

    Electronic supplementary material is available online at https://dx.doi.org/10.6084/m9.figshare.c.4826583.

    Published by the Royal Society. All rights reserved.

    References