Abstract
This paper develops a Bayesian mechanics for adaptive systems. First, we model the interface between a system and its environment with a Markov blanket. This affords conditions under which states internal to the blanket encode information about external states. Second, we introduce dynamics and represent adaptive systems as Markov blankets at steady state. This allows us to identify a wide class of systems whose internal states appear to infer external states, consistent with variational inference in Bayesian statistics and theoretical neuroscience. Finally, we partition the blanket into sensory and active states. It follows that active states can be seen as performing active inference and well-known forms of stochastic control (such as PID control), which are prominent formulations of adaptive behaviour in theoretical biology and engineering.
1. Introduction
Any object of study must be, implicitly or explicitly, separated from its environment. This implies a boundary that separates it from its surroundings, and which persists for at least as long as the system exists.
In this article, we explore the consequences of a boundary mediating interactions between states internal and external to a system. This provides a useful metaphor to think about biological systems, which comprise spatially bounded, interacting components, nested at several spatial scales [1,2]: for example, the membrane of a cell acts as a boundary through which the cell communicates with its environment, and the same can be said of the sensory receptors and muscles that bound the nervous system.
By examining the dynamics of persistent, bounded systems, we identify a wide class of systems wherein the states internal to a boundary appear to infer those states outside the boundary—a description which we refer to as Bayesian mechanics. Moreover, if we assume that the boundary comprises sensory and active states, we can identify the dynamics of active states with well-known descriptions of adaptive behaviour from theoretical biology and stochastic control.
In what follows, we link a purely mathematical formulation of interfaces and dynamics with descriptions of belief updating and behaviour found in the biological sciences and engineering. Altogether, this can be seen as a model of adaptive agents, as they interface with their environment through sensory and active states and, furthermore, behave so as to preserve a target steady state.
(a) Outline of paper
This paper has three parts, each of which introduces a simple, but fundamental, move.
(i) The first is to partition the world into internal and external states whose boundary is modelled with a Markov blanket [3,4]. This allows us to identify conditions under which internal states encode information about external states.
(ii) The second move is to equip this partition with stochastic dynamics. The key consequence of this is that internal states can be seen as continuously inferring external states, consistent with variational inference in Bayesian statistics and with predictive processing accounts of biological neural networks in theoretical neuroscience.
(iii) The third move is to partition the boundary into sensory and active states. It follows that active states can be seen as performing active inference and stochastic control, which are prominent descriptions of adaptive behaviour in biological agents, machine learning and robotics.
(b) Related work
The emergence and sustaining of complex (dissipative) structures have been the subject of long-standing research, starting from the work of Prigogine [5,6], followed notably by Haken's synergetics [7] and, in recent years, the statistical physics of adaptation [8]. A central theme of these works is that complex systems can only emerge and sustain themselves far from equilibrium [9–11].
Information processing has long been recognized as a hallmark of cognition in biological systems. In light of this, theoretical physicists have identified basic instances of information processing in systems far from equilibrium using tools from information theory, such as how a drive for metabolic efficiency can lead a system to become predictive [12–15].
A fundamental aspect of biological systems is the self-organization of various interacting components at several spatial scales [1,2]. Much current research focuses on multipartite processes (modelling interactions between the various sub-components that form biological systems) and on how these interactions constrain the thermodynamics of the whole [16–20].
At the confluence of these efforts, researchers have sought to explain cognition in biological systems. Since the advent of the twentieth century, Bayesian inference has been used to describe various cognitive processes in the brain [21–25]. In particular, the free energy principle [23], a prominent theory of self-organization from the neurosciences, postulates that Bayesian inference can be used to describe the dynamics of multipartite, persistent systems modelled as Markov blankets at non-equilibrium steady state [26–30].
This paper connects and develops some of the key themes from this literature. Starting from fundamental considerations about adaptive systems, we develop a physics of things that hold beliefs about other things, consistent with Bayesian inference, and explore how it relates to known descriptions of action and behaviour from the neurosciences and engineering. Our contribution is theoretical: from a biophysicist's perspective, this paper describes how Bayesian descriptions of biological cognition and behaviour can emerge from standard accounts of physics. From an engineer's perspective, this paper contextualizes some of the most common stochastic control methods and reminds us how these can be extended to suit more sophisticated control problems.
(c) Notation
Let $A \in \mathbb{R}^{d \times d}$ be a square matrix with real coefficients. Let $x = (\eta, b, \mu)$ denote a partition of the states $x \in \mathbb{R}^d$, so that
$$A = \begin{bmatrix} A_{\eta\eta} & A_{\eta b} & A_{\eta\mu} \\ A_{b\eta} & A_{bb} & A_{b\mu} \\ A_{\mu\eta} & A_{\mu b} & A_{\mu\mu} \end{bmatrix}.$$
When a square matrix $A$ is symmetric positive-definite we write $A > 0$. $\ker$ and $\cdot^{-}$ respectively denote the kernel and Moore–Penrose pseudo-inverse of a linear map or matrix, e.g. a non-necessarily square matrix such as $A_{\mu b}$. In our notation, indexing takes precedence over (pseudo-)inversion, for example, $A_{bb}^{-1} = (A_{bb})^{-1}$ and $A_{\mu b}^{-} = (A_{\mu b})^{-}$.
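To make the notation concrete, the following sketch illustrates block indexing and (pseudo-)inversion in NumPy; the matrix and the partition sizes are illustrative choices of ours, not values from the paper.

```python
import numpy as np

# Illustrative 4-dimensional state x = (eta, b, mu1, mu2):
# dim(eta) = 1, dim(b) = 1, dim(mu) = 2. The matrix is an arbitrary
# symmetric positive-definite example (A > 0).
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.5],
              [0.0, 1.0, 2.0, 0.5],
              [0.0, 0.5, 0.5, 2.0]])
e, b, m = [0], [1], [2, 3]          # index sets of the partition

A_mm = A[np.ix_(m, m)]              # internal-internal block
A_mb = A[np.ix_(m, b)]              # internal-blanket block (non-square)

# Indexing takes precedence over inversion: A_mm^{-1} = (A_mm)^{-1},
# which generally differs from the (mu, mu) block of A^{-1}.
assert not np.allclose(np.linalg.inv(A_mm), np.linalg.inv(A)[np.ix_(m, m)])

# Moore-Penrose pseudo-inverse of a non-square block:
A_mb_pinv = np.linalg.pinv(A_mb)    # shape (1, 2)
```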
2. Markov blankets
This section formalizes the notion of boundary between a system and its environment as a Markov blanket [3,4], depicted graphically in figure 1. Intuitive examples of a Markov blanket are that of a cell membrane, mediating all interactions between the inside and the outside of the cell, or that of sensory receptors and muscles that bound the nervous system.
To formalize this intuition, we model the world's state as a random variable $x$ with corresponding probability distribution $p$ over a state-space $\mathbb{R}^d$. We partition the state-space of $x$ into external, blanket and internal states:
$$x = (\eta, b, \mu) \in E \times B \times I = \mathbb{R}^d.$$
A Markov blanket is a statement of conditional independence between internal and external states given blanket states.
Definition 2.1. (Markov blanket)
A Markov blanket is defined as
$$\eta \perp \mu \mid b. \tag{2.1}$$
The existence of a Markov blanket can be expressed in several equivalent ways:
$$p(\eta, \mu \mid b) = p(\eta \mid b)\, p(\mu \mid b) \iff p(\eta \mid b, \mu) = p(\eta \mid b) \iff p(\mu \mid b, \eta) = p(\mu \mid b). \tag{2.2}$$
For now, we will consider a (non-degenerate) Gaussian distribution encoding the distribution of states of the world,
$$p(x) = \mathcal{N}(x; 0, \Pi^{-1}), \qquad \Pi > 0,$$
where $\Pi$ is the precision (i.e. inverse covariance) matrix. In this case, the Markov blanket condition (2.1) is equivalent to a sparsity constraint on the precision matrix:
$$\Pi_{\eta\mu} = \Pi_{\mu\eta}^{\top} = 0. \tag{2.3}$$
Example 2.2.
For example, any non-degenerate precision matrix of the form
$$\Pi = \begin{bmatrix} \Pi_{\eta\eta} & \Pi_{\eta b} & 0 \\ \Pi_{b\eta} & \Pi_{bb} & \Pi_{b\mu} \\ 0 & \Pi_{\mu b} & \Pi_{\mu\mu} \end{bmatrix} > 0$$
satisfies (2.3), and hence encodes a Markov blanket.
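As a numerical check (with an illustrative precision matrix of our choosing): the conditional covariance of $(\eta, \mu)$ given $b$ is the inverse of the corresponding block of $\Pi$, and when $\Pi_{\eta\mu} = 0$ its off-diagonal block vanishes, confirming conditional independence.

```python
import numpy as np

# x = (eta, b, mu1, mu2); the zero (eta, mu) block encodes a Markov blanket.
Pi = np.array([[2.0, 1.0, 0.0, 0.0],
               [1.0, 3.0, 1.0, 0.5],
               [0.0, 1.0, 2.0, 0.5],
               [0.0, 0.5, 0.5, 2.0]])
e, b, m = [0], [1], [2, 3]

# Conditional precision of (eta, mu) given b is the (eta, mu) block of Pi;
# inverting it gives the conditional covariance.
em = e + m
cond_cov = np.linalg.inv(Pi[np.ix_(em, em)])
print(cond_cov.round(3))   # eta-mu entries are exactly zero: eta ⊥ mu | b
```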
(a) Expected internal and external states
Blanket states act as an information boundary between external and internal states. Given a blanket state, we can express the conditional probability densities over external and internal states (using (2.1) and [32, proposition 3.13])¹:
$$p(\eta \mid b) = \mathcal{N}\big(\eta;\, \boldsymbol{\eta}(b),\, \Pi_{\eta\eta}^{-1}\big), \qquad p(\mu \mid b) = \mathcal{N}\big(\mu;\, \boldsymbol{\mu}(b),\, \Pi_{\mu\mu}^{-1}\big). \tag{2.4}$$
This enables us to associate with any blanket state its corresponding expected external and expected internal states:
$$\boldsymbol{\eta}(b) := \mathbb{E}[\eta \mid b] = -\Pi_{\eta\eta}^{-1}\Pi_{\eta b}\, b, \qquad \boldsymbol{\mu}(b) := \mathbb{E}[\mu \mid b] = -\Pi_{\mu\mu}^{-1}\Pi_{\mu b}\, b.$$
Pursuing the example of the nervous system, each sensory impression on the retina and oculomotor orientation (blanket state) is associated with an expected scene that caused sensory input (expected external state) and an expected pattern of neural activity in the visual cortex (expected internal state) [33].
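A short numerical sketch of these conditional expectations (same illustrative matrix as above); under the Markov blanket condition, the precision-based formulas agree with the familiar covariance-based Gaussian conditioning.

```python
import numpy as np

Pi = np.array([[2.0, 1.0, 0.0, 0.0],
               [1.0, 3.0, 1.0, 0.5],
               [0.0, 1.0, 2.0, 0.5],
               [0.0, 0.5, 0.5, 2.0]])
Sigma = np.linalg.inv(Pi)
e, b_idx, m = [0], [1], [2, 3]
b = np.array([0.5])                        # an arbitrary blanket state

# Expected external and internal states given the blanket state.
eta_b = -np.linalg.inv(Pi[np.ix_(e, e)]) @ Pi[np.ix_(e, b_idx)] @ b
mu_b = -np.linalg.inv(Pi[np.ix_(m, m)]) @ Pi[np.ix_(m, b_idx)] @ b

# Cross-check with E[eta | b] = Sigma_eb Sigma_bb^{-1} b.
S_bb_inv = np.linalg.inv(Sigma[np.ix_(b_idx, b_idx)])
assert np.allclose(eta_b, Sigma[np.ix_(e, b_idx)] @ S_bb_inv @ b)
assert np.allclose(mu_b, Sigma[np.ix_(m, b_idx)] @ S_bb_inv @ b)
```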
(b) Synchronization map
A central question is whether and how expected internal states encode information about expected external states. For this, we need to characterize a synchronization function $\sigma$, mapping the expected internal state to the expected external state, given a blanket state: $\sigma(\boldsymbol{\mu}(b)) = \boldsymbol{\eta}(b)$. This is summarized in the following commutative diagram:
$$\begin{array}{ccc} & B & \\ \boldsymbol{\mu} \swarrow & & \searrow \boldsymbol{\eta} \\ \boldsymbol{\mu}(B) & \xrightarrow{\;\sigma\;} & \boldsymbol{\eta}(B) \end{array}$$
The existence of $\sigma$ is guaranteed, for instance, if the expected internal state completely determines the blanket state, that is, when no information is lost in the mapping $b \mapsto \boldsymbol{\mu}(b)$ in virtue of it being one-to-one. In general, however, many blanket states may correspond to a unique expected internal state. Intuitively, consider the various neural pathways that compress the signal arriving from retinal photoreceptors [34]: many different (hopefully similar) retinal impressions lead to the same signal arriving in the visual cortex.
(i) Existence
The key to the existence of a function mapping expected internal states to expected external states, given blanket states, is that any two blanket states associated with the same expected internal state also be associated with the same expected external state. This non-degeneracy means that the internal states (e.g. patterns of activity in the visual cortex) have enough capacity to represent all possible expected external states (e.g. three-dimensional scenes of the environment). We formalize this in the following lemma.
Lemma 2.3.
The following are equivalent:
(i) There exists a function $\sigma : \boldsymbol{\mu}(B) \to \boldsymbol{\eta}(B)$ such that for any blanket state $b \in B$: $\sigma(\boldsymbol{\mu}(b)) = \boldsymbol{\eta}(b)$.
(ii) For any two blanket states $b, \tilde{b} \in B$: $\boldsymbol{\mu}(b) = \boldsymbol{\mu}(\tilde{b}) \Rightarrow \boldsymbol{\eta}(b) = \boldsymbol{\eta}(\tilde{b})$.
(iii) $\ker\big(\Pi_{\mu\mu}^{-1}\Pi_{\mu b}\big) \subseteq \ker\big(\Pi_{\eta\eta}^{-1}\Pi_{\eta b}\big)$.
(iv) $\ker(\Pi_{\mu b}) \subseteq \ker(\Pi_{\eta b})$.
See appendix A for a proof of lemma 2.3.
Example 2.4.
— When external, blanket and internal states are one-dimensional, the existence of a synchronization map is equivalent to $\Pi_{\mu b} \neq 0$ or $\Pi_{\eta b} = 0$.
— If $\Pi$ is chosen at random (its entries sampled from a non-degenerate Gaussian or uniform distribution), then $\Pi_{\mu b}$ has full rank with probability 1. If, furthermore, the blanket state-space $B$ has lower or equal dimensionality than the internal state-space $I$, we obtain that $\Pi_{\mu b}$ is one-to-one (i.e. $\ker \Pi_{\mu b} = \{0\}$) with probability 1. Thus, in this case, the conditions of lemma 2.3 are fulfilled with probability 1 (a numerical check follows this example).
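The kernel inclusion in lemma 2.3 can be checked numerically with a standard rank test ($\ker A \subseteq \ker B$ iff stacking $B$ under $A$ does not increase the rank). The sampled blocks below are a sketch of the random-matrix argument only; they are not assembled into a full random precision matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_e, dim_b, dim_m = 2, 2, 3            # dim(B) <= dim(I)

Pi_mb = rng.normal(size=(dim_m, dim_b))  # randomly sampled blocks
Pi_eb = rng.normal(size=(dim_e, dim_b))

def ker_included(A, B):
    """ker(A) ⊆ ker(B), tested via rank(vstack(A, B)) == rank(A)."""
    return np.linalg.matrix_rank(np.vstack([A, B])) == np.linalg.matrix_rank(A)

# With dim(B) <= dim(I), Pi_mb has full column rank almost surely, so its
# kernel is {0} and the condition of lemma 2.3 holds trivially.
print(ker_included(Pi_mb, Pi_eb))        # True
```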
(ii) Construction
The key idea to map an expected internal state to an expected external state is to: (1) find a blanket state that maps to this expected internal state (i.e. invert the map $b \mapsto \boldsymbol{\mu}(b)$) and (2) from this blanket state, find the corresponding expected external state (i.e. apply the map $b \mapsto \boldsymbol{\eta}(b)$):
$$\mu \longmapsto b(\mu) \longmapsto \boldsymbol{\eta}(b(\mu)).$$
We now proceed to solve this problem. Given an internal state $\mu$, we study the set of blanket states that are compatible with it:
$$\{\, b \in B : \boldsymbol{\mu}(b) = \mu \,\} = \{\, b \in B : -\Pi_{\mu\mu}^{-1}\Pi_{\mu b}\, b = \mu \,\}. \tag{2.5}$$
Definition 2.5. (Synchronization map)
We define a synchronization function $\sigma$ that maps an internal state $\mu$ to the expected external state corresponding to a most likely blanket state $b(\mu)$ in (2.5)²,³:
$$\sigma(\mu) := \boldsymbol{\eta}\big(b(\mu)\big) = \Pi_{\eta\eta}^{-1}\Pi_{\eta b}\big(\Pi_{\mu\mu}^{-1}\Pi_{\mu b}\big)^{-}\mu, \qquad b(\mu) := -\big(\Pi_{\mu\mu}^{-1}\Pi_{\mu b}\big)^{-}\mu.$$
Note that we can always define such a $\sigma$; however, it is only when the conditions of lemma 2.3 are fulfilled that $\sigma$ maps expected internal states to expected external states. When this is not the case, the internal states do not fully represent external states, which leads to a partly degenerate type of representation; see figure 2 for a numerical illustration obtained by sampling from a Gaussian distribution, in the non-degenerate (a) and degenerate (b) cases, respectively.
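The construction can be spelled out in a few lines of NumPy (illustrative matrix as before; note that $\dim B \le \dim I$ here, so the conditions of lemma 2.3 hold and the choice of compatible blanket state is immaterial).

```python
import numpy as np

Pi = np.array([[2.0, 1.0, 0.0, 0.0],
               [1.0, 3.0, 1.0, 0.5],
               [0.0, 1.0, 2.0, 0.5],
               [0.0, 0.5, 0.5, 2.0]])
e, b_idx, m = [0], [1], [2, 3]

J_eta = -np.linalg.inv(Pi[np.ix_(e, e)]) @ Pi[np.ix_(e, b_idx)]  # b -> eta(b)
J_mu = -np.linalg.inv(Pi[np.ix_(m, m)]) @ Pi[np.ix_(m, b_idx)]   # b -> mu(b)

def sigma(mu):
    b_of_mu = np.linalg.pinv(J_mu) @ mu   # a blanket state with mu(b) = mu
    return J_eta @ b_of_mu                # ... and its expected external state

# Verify sigma(mu(b)) = eta(b) on an arbitrary blanket state.
b = np.array([0.3])
assert np.allclose(sigma(J_mu @ b), J_eta @ b)
```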
3. Bayesian mechanics
In order to study the time-evolution of systems with a Markov blanket, we introduce dynamics into the external, blanket and internal states. Henceforth, we assume that the conditions of lemma 2.3 hold, so that a synchronization map $\sigma$ exists.
(a) Processes at a Gaussian steady state
We consider stochastic processes at a Gaussian steady state with a Markov blanket. The steady-state assumption means that the system's overall configuration persists over time (e.g. it does not dissipate). In other words, we have a Gaussian density with a Markov blanket (2.2) and a stochastic process $x_t$ distributed according to it at every point in time:
$$x_t \sim p(x) = \mathcal{N}(x; 0, \Pi^{-1}), \quad \Pi_{\eta\mu} = 0, \qquad \text{for all } t \geq 0. \tag{3.1}$$
Note that we do not require the $x_t$ to be independent samples from the steady-state distribution $p$. On the contrary, $x_t$ may be generated by extremely complex, nonlinear and possibly stochastic equations of motion. See example 3.1 and figure 4 for details.
Example 3.1.
The dynamics of $x_t$ are described by a stochastic process at a Gaussian steady state $p$. There is a large class of such processes, which includes:
— Stationary diffusion processes, with initial condition $x_0 \sim p$. Their time-evolution is given by an Itô stochastic differential equation (see appendix B):
$$dx_t = \big[(\Gamma + Q)\nabla \log p(x_t) + \nabla \cdot (\Gamma + Q)(x_t)\big]\,dt + \varsigma(x_t)\,dW_t. \tag{3.2}$$
Here, $W_t$ is a standard Brownian motion (a.k.a. Wiener process) [38,39] and $\varsigma, \Gamma, Q$ are sufficiently well-behaved matrix fields (see appendix B). Namely, $\Gamma := \varsigma\varsigma^{\top}/2$ is the diffusion tensor (half the covariance of random fluctuations), which drives dissipative flow; $Q = -Q^{\top}$ is an arbitrary antisymmetric matrix field, which drives conservative (i.e. solenoidal) flow. Note that there are no non-degeneracy conditions on the matrix field $\Gamma$; in particular, the process is allowed to be non-ergodic or even completely deterministic (i.e. $\varsigma \equiv 0$). Also, $\nabla\cdot$ denotes the divergence of a matrix field, defined as $(\nabla \cdot M)_i := \sum_j \partial_{x_j} M_{ij}$. (A numerical sketch of such a diffusion follows this list.)
— More generally, $x_t$ could be generated by any Markov process at steady state $p$, such as the zig-zag process or the bouncy particle sampler [40–42], by any mean-zero Gaussian process at steady state $p$ [43], or by any random dynamical system at steady state $p$ [44].
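As a minimal sketch of the first bullet point: the Euler–Maruyama simulation below integrates a two-dimensional diffusion with constant (illustrative) $\Gamma$ and $Q$, for which the divergence term vanishes and the drift reduces to $-(\Gamma + Q)\Pi x$. The empirical covariance of the trajectory recovers the target steady-state covariance $\Pi^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
Pi = np.array([[2.0, 1.0], [1.0, 2.0]])   # steady-state precision
G = 0.5 * np.eye(2)                       # diffusion tensor Gamma
Q = np.array([[0.0, 1.0], [-1.0, 0.0]])   # antisymmetric solenoidal flow
vol = np.eye(2)                           # volatility: Gamma = vol vol^T / 2

dt, n_steps = 1e-2, 100_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    drift = -(G + Q) @ Pi @ x             # (Gamma + Q) grad log p(x)
    x = x + drift * dt + vol @ rng.normal(size=2) * np.sqrt(dt)
    samples[t] = x

print(np.cov(samples[1000:].T).round(2))  # ~ Pi^{-1} (after a short burn-in)
print(np.linalg.inv(Pi).round(2))         # [[0.67, -0.33], [-0.33, 0.67]]
```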
Remark 3.2.
When the dynamics are given by an Itô stochastic differential equation (3.2), a Markov blanket of the steady-state density (2.2) does not preclude reciprocal influences between internal and external states [45,46]. For example, a non-zero solenoidal coupling $Q_{\eta\mu}$ makes the flow of external states depend directly on internal states (and vice versa), even though $\Pi_{\eta\mu} = 0$ at steady state.
(b) Maximum a posteriori estimation
The Markov blanket (3.1) allows us to exploit the construction of §2 to determine expected external and internal states given blanket states:
$$\boldsymbol{\eta}(b) = \mathbb{E}[\eta_t \mid b_t = b], \qquad \boldsymbol{\mu}(b) = \mathbb{E}[\mu_t \mid b_t = b].$$
We can view the steady-state density as specifying the relationship between external states ($\eta$, causes) and particular states ($\pi := (b, \mu)$, consequences). In statistics, this corresponds to a generative model: a probabilistic specification of how (external) causes generate (particular) consequences.
By construction, the expected internal states encode expected external states via the synchronization map:
$$\sigma(\boldsymbol{\mu}(b)) = \boldsymbol{\eta}(b) = \underset{\eta}{\arg\max}\; p(\eta \mid \pi),$$
where the last equality holds because the posterior $p(\eta \mid \pi) = p(\eta \mid b)$ is Gaussian, so that its mean coincides with its mode. In other words, expected internal states encode the maximum a posteriori estimate of external states given particular states.
(c) Predictive processing
We can go further and associate with each internal state $\mu$ a probability distribution over external states, such that each internal state encodes beliefs about external states:
$$q_{\mu}(\eta) := \mathcal{N}\big(\eta;\, \sigma(\mu),\, \Pi_{\eta\eta}^{-1}\big). \tag{3.4}$$
Note a potential connection with epistemic accounts of quantum mechanics; namely, a world governed by classical mechanics ($\varsigma \equiv 0$ in (3.2)) in which each agent encodes Gaussian beliefs about external states could appear to the agents as reproducing many features of quantum mechanics [50].
Under this specification (3.4), expected internal states are the unique minimizer of a Kullback–Leibler divergence [51]:
$$\boldsymbol{\mu}(b) = \underset{\mu}{\arg\min}\; D_{\mathrm{KL}}\big[q_{\mu}(\eta) \,\|\, p(\eta \mid b)\big], \qquad D_{\mathrm{KL}}\big[q_{\mu}(\eta) \,\|\, p(\eta \mid b)\big] = \frac{1}{2}\big(\sigma(\mu) - \boldsymbol{\eta}(b)\big)^{\top} \Pi_{\eta\eta} \big(\sigma(\mu) - \boldsymbol{\eta}(b)\big). \tag{3.5}$$
In the neurosciences, the right-hand side of (3.5) is commonly known as a (squared) precision-weighted prediction error: the discrepancy between the prediction and the (expected) state of the environment is weighted with a precision matrix [24,52,53] that derives from the steady-state density. This equation is formally similar to that found in predictive coding formulations of biological function [24,54–56], which stipulate that organisms minimize prediction errors, and in doing so optimize their beliefs to match the distribution of external states.
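Concretely, since the belief $q_\mu$ and the posterior $p(\eta \mid b)$ share the covariance $\Pi_{\eta\eta}^{-1}$, the KL divergence in (3.5) reduces to a squared precision-weighted prediction error. A sketch with the illustrative matrix used above:

```python
import numpy as np

Pi = np.array([[2.0, 1.0, 0.0, 0.0],
               [1.0, 3.0, 1.0, 0.5],
               [0.0, 1.0, 2.0, 0.5],
               [0.0, 0.5, 0.5, 2.0]])
e, b_idx, m = [0], [1], [2, 3]
Pi_ee = Pi[np.ix_(e, e)]
J_eta = -np.linalg.inv(Pi_ee) @ Pi[np.ix_(e, b_idx)]
J_mu = -np.linalg.inv(Pi[np.ix_(m, m)]) @ Pi[np.ix_(m, b_idx)]
sigma = lambda mu: J_eta @ np.linalg.pinv(J_mu) @ mu

def kl(mu, b):
    eps = sigma(mu) - J_eta @ b         # prediction error
    return 0.5 * eps @ Pi_ee @ eps      # precision-weighted squared error

b = np.array([1.2])
print(kl(np.array([0.4, -0.1]), b))     # > 0 for an arbitrary internal state
print(kl(J_mu @ b, b))                  # ~ 0 at the expected internal state
```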
(d) Variational Bayesian inference
We can go further and associate expected internal states with the solution to the classical variational inference problem from statistical machine learning [57] and theoretical neurobiology [52,58]. Expected internal states are the unique minimizer of a free energy functional (i.e. an evidence bound [57,59]):
$$\boldsymbol{\mu}(b) = \underset{\mu}{\arg\min}\; F(b, \mu), \qquad F(b, \mu) := \mathbb{E}_{q_{\mu}(\eta)}\big[\log q_{\mu}(\eta) - \log p(\eta, b, \mu)\big] = D_{\mathrm{KL}}\big[q_{\mu}(\eta) \,\|\, p(\eta \mid b)\big] - \log p(b, \mu). \tag{3.6}$$
At first sight, variational inference and predictive processing are solely useful to characterize the average internal state given blanket states at steady state. It is then surprising to see that the free energy says a great deal about a system's expected trajectories as it relaxes to steady state. Figures 5 and 6 illustrate the time-evolution of the free energy and prediction errors after exposure to a surprising stimulus. In particular, figure 5 averages internal variables for any given blanket state: in the neurosciences, perhaps the closest analogy is the event-triggered averaging protocol, where neurophysiological responses are averaged following a fixed perturbation, such as a predictable neural input or an experimentally controlled sensory stimulus (e.g. spike-triggered averaging, event-related potentials) [62–64].
The most striking observation is the nearly monotonic decrease of the free energy as the system relaxes to steady state. This simply follows from the fact that regions of high density under the steady-state distribution have a low free energy. This overall decrease in free energy is the essence of the free-energy principle, which describes self-organization at non-equilibrium steady state [23,28,29]. Note that the free energy, even after averaging internal variables, may decrease non-monotonically. See the explanation in figure 5.
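The following sketch makes this concrete in the simplest possible setting: the free energy $F(b, \mu) = D_{\mathrm{KL}}[q_\mu \| p(\eta \mid b)] - \log p(b, \mu)$ is evaluated along a deterministic gradient relaxation of the particular states after a perturbation. This is an illustrative stand-in for the paper's stochastic simulations; the trend is downward, though, as noted above, not necessarily monotone.

```python
import numpy as np

Pi = np.array([[2.0, 1.0, 0.0, 0.0],
               [1.0, 3.0, 1.0, 0.5],
               [0.0, 1.0, 2.0, 0.5],
               [0.0, 0.5, 0.5, 2.0]])
e, b_idx, m = [0], [1], [2, 3]
Sigma = np.linalg.inv(Pi)
Pi_ee = Pi[np.ix_(e, e)]
J_eta = -np.linalg.inv(Pi_ee) @ Pi[np.ix_(e, b_idx)]
J_mu = -np.linalg.inv(Pi[np.ix_(m, m)]) @ Pi[np.ix_(m, b_idx)]
sigma = lambda mu: J_eta @ np.linalg.pinv(J_mu) @ mu

pi_idx = b_idx + m                       # particular states (b, mu)
S_pp = Sigma[np.ix_(pi_idx, pi_idx)]
P_pp = np.linalg.inv(S_pp)               # marginal precision of (b, mu)
log_norm = -0.5 * np.log(np.linalg.det(2 * np.pi * S_pp))

def free_energy(x):                      # x = (b, mu)
    b, mu = x[:1], x[1:]
    eps = sigma(mu) - J_eta @ b          # prediction error (KL term)
    return 0.5 * eps @ Pi_ee @ eps - (log_norm - 0.5 * x @ P_pp @ x)

x = np.array([2.0, -1.5, 1.0])           # a 'surprising' perturbation
for t in range(501):
    if t % 100 == 0:
        print(t, round(free_energy(x), 4))
    x = x - 1e-2 * (P_pp @ x)            # relax towards the steady state
```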
4. Active inference and stochastic control
In order to model agents that interact with their environment, we now partition blanket states into sensory and active states, $b = (s, a)$, so that $x = (\eta, s, a, \mu)$. Together with internal states, active states constitute the autonomous states $\alpha := (a, \mu)$, whose dynamics we now characterize.
(a) Active inference
We now proceed to characterize autonomous states, given sensory states, using the free energy. Unpacking blanket states into sensory and active states, the free energy (3.6) reads $F(b, \mu) = F(s, a, \mu) = F(s, \alpha)$, and the expected autonomous states are its unique minimizer given sensory states:
$$\boldsymbol{\alpha}(s) := \mathbb{E}[\alpha_t \mid s_t = s] = \underset{\alpha}{\arg\min}\; F(s, \alpha). \tag{4.1}$$
In other words, expected active and internal states can be seen as performing active inference: they minimize the free energy given sensory states.
(b) Multivariate control
Active inference is used in various domains to simulate control [65,69,71,72,74–77]; it is therefore natural that we can relate the dynamics of active states to well-known forms of stochastic control.
By computing the free energy explicitly (see appendix C), we obtain that expected autonomous states are a linear function of sensory states,
$$\boldsymbol{\alpha}(s) = \Sigma_{\alpha s}\Sigma_{ss}^{-1}\, s,$$
so that expected active and internal states counter deviations of sensory states from their target value of zero in proportion to those deviations. This is a form of multivariate proportional control, whose gains are simple by-products of the steady-state density.
(c) Stochastic control in an extended state-space
More sophisticated control methods, such as PID (proportional-integral-derivative) control [77,80], involve controlling a process and its higher orders of motion (e.g. integral or derivative terms). So how can we relate the dynamics of autonomous states to these more sophisticated control methods? The basic idea involves extending the sensory state-space so as to replace the sensory process by its various orders of motion (integral, position, velocity, jerk etc., up to some order $n$). To find these orders of motion, one must solve the stochastic realization problem.
(i) The stochastic realization problem
Recall that the sensory process $s_t$ is a stationary stochastic process (with a Gaussian steady state). The following is a central problem in stochastic systems theory: given a stationary stochastic process $s_t$, find a Markov process $x_t$, called the state process, and a function $g$ such that
$$s_t = g(x_t). \tag{4.2}$$
What kind of processes can be expressed as a function of a Markov process (4.2)?
There is a rather comprehensive theory of stochastic realization for the case where $s_t$ is a Gaussian process (which occurs, for example, when $x_t$ is a Gaussian process). This theory expresses $s_t$ as a linear map of an Ornstein–Uhlenbeck process [39,82,83]. The idea is as follows: as a mean-zero Gaussian process, $s_t$ is completely determined by its autocovariance function $\mathbb{E}[s_{t+h}\, s_t^{\top}]$, which by stationarity only depends on the lag $h$. It is well known that any mean-zero stationary Gaussian process with exponentially decaying autocovariance function is an Ornstein–Uhlenbeck process (a result sometimes known as Doob's theorem) [39,84–86]. Thus, if the autocovariance of $s_t$ equals a finite sum of exponentially decaying functions, we can express $s_t$ as a linear function of several nested Ornstein–Uhlenbeck processes, i.e. as an integrator chain from control theory [87,88]:
$$s_t = g_1\big(x_t^{(1)}\big), \qquad dx_t^{(i)} = \Big(g_{i+1}\big(x_t^{(i+1)}\big) - A_i\, x_t^{(i)}\Big)dt + B_i\, dW_t^{(i)}, \quad i = 1, \ldots, n, \qquad x_t^{(n+1)} :\equiv 0. \tag{4.3}$$
In this example, the $g_i$ are suitably chosen linear functions, the $A_i, B_i$ are matrices and the $W_t^{(i)}$ are standard Brownian motions. Thus, we can see $s_t$ as the output of a continuous-time hidden Markov model, whose (hidden) states encode its various orders of motion: position, velocity, jerk etc. These are known as generalized coordinates of motion in the Bayesian filtering literature [89–91]. See figure 10.
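A minimal simulation of such a chain (the coefficients are illustrative; they are not derived from a given autocovariance function): three nested Ornstein–Uhlenbeck processes with distinct rates, whose linear read-out is a stationary but non-Markovian "sensory" process.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = np.array([1.0, 2.0, 3.0])          # distinct decay rates
dt, n_steps = 1e-3, 50_000
x = np.zeros(3)                          # orders of motion x^(1), x^(2), x^(3)
traj = np.empty((n_steps, 3))
for t in range(n_steps):
    drift = -lam * x
    drift[:-1] += x[1:]                  # each order is driven by the next
    x = x + drift * dt + 0.1 * rng.normal(size=3) * np.sqrt(dt)
    traj[t] = x

s = traj[:, 0]   # s_t = g(x_t): a linear read-out of the chain (here x^(1))
```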
More generally, the state process $x_t$ and the function $g$ need not be linear, which makes it possible to realize nonlinear, non-Gaussian processes [89,92,93]. Technically, this follows as Ornstein–Uhlenbeck processes are the only stationary Gaussian Markov processes. Note that stochastic realization theory is not as well developed in this general case [81,89,93–95].
(ii) Stochastic control of integrator chains
Henceforth, we assume that we can express the sensory process as a function of a Markov process (4.2). Inserting (4.2) into (4.1), we now see that the expected autonomous states minimize how far both they and the state process $x_t$ are from their target value of zero:
$$\boldsymbol{\alpha}(s_t) = \underset{\alpha}{\arg\min}\; F\big(g(x_t), \alpha\big). \tag{4.4}$$
Furthermore, if the state process can be expressed as an integrator chain, as in (4.3), then we can interpret expected active and internal states as controlling each order of motion $x^{(i)}$. For example, if $g$ is linear, these processes control each order of motion towards its target value of zero.
(iii) PID-like control
PID control is a well-known control method in engineering [77,80]: more than 90% of controllers in engineered systems implement either PID or PI (no derivative term) control. The goal of PID control is to keep a signal, together with its integral and its derivative, close to a pre-specified target value [77].
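For reference, a textbook discrete-time PID loop on a toy first-order plant (the gains and plant are illustrative choices of ours; in the construction below, the gains would instead fall out of the steady-state density):

```python
import numpy as np

rng = np.random.default_rng(3)
kp, ki, kd = 2.0, 0.5, 0.1          # proportional, integral, derivative gains
dt, target = 0.01, 0.0              # drive the signal towards zero
y, integral = 1.0, 0.0
prev_err = target - y

for t in range(1000):
    err = target - y
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv     # control signal
    prev_err = err
    # toy first-order plant with a small random disturbance
    y += (u - 0.5 * y) * dt + 0.01 * rng.normal() * np.sqrt(dt)

print(y)   # hovers near the target value of zero
```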
This turns out to be exactly what happens here when we consider the stochastic control of an integrator chain (4.4) with three orders of motion ($n = 3$). When $g$ is linear, expected autonomous states control integral, proportional and derivative processes towards their target value of zero. Furthermore, from the steady-state density and the realization (4.3), one can derive integral, proportional and derivative gains, which penalise deviations of $x^{(1)}, x^{(2)}, x^{(3)}$, respectively, from their target value of zero. Crucially, these control gains are simple by-products of the steady-state density and the stochastic realization problem.
Why restrict ourselves to PID control when stochastic control of integrator chains of arbitrary order is available? It turns out that when sensory states are expressed as a function of an integrator chain (4.3), one may get away with controlling an approximation of the true (sensory) process, obtained by truncating high orders of motion, as these have less effect on the dynamics; knowing when this truncation is warranted is, however, a problem in approximation theory. This may explain why integral feedback control ($n = 1$), PI control ($n = 2$) and PID control ($n = 3$) are the most ubiquitous control methods in engineering applications. However, when simulating biological control (usually with highly nonlinear dynamics) it is not uncommon to consider generalized motion to fourth ($n = 4$) or sixth ($n = 6$) order [92,96].
It is worth mentioning that PID control has been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as bacterial chemotaxis and robust homeostatic algorithms in biochemical networks [77,97,98]. We suggest that this kind of behaviour emerges in Markov blankets at non-equilibrium steady state. Indeed, stationarity means that autonomous states will look as if they respond adaptively to external perturbations to preserve the steady state, and we can identify these dynamics as implementations of various forms of stochastic control (including PID-like control).
5. Discussion
In this paper, we considered the consequences of a boundary mediating interactions between states internal and external to a system. On unpacking this notion, we found that the states internal to a Markov blanket look as if they perform variational Bayesian inference, optimizing beliefs about their external counterparts. When subdividing the blanket into sensory and active states, we found that autonomous states perform active inference and various forms of stochastic control (i.e. generalizations of PID control).
(a) Interacting Markov blankets
The sort of inference we have described could be nuanced by partitioning the external state-space into several systems that are themselves Markov blankets (such as Markov blankets nested at several different scales [1]). From the perspective of internal states, this leads to a more interesting inference problem, with a more complex generative model. It may be that the distinction between the sorts of systems we generally think of as engaging in cognitive, inferential dynamics [99] and simpler systems rests upon the level of structure of the generative models (i.e. steady-state densities) that describe their inferential dynamics.
(b) Temporally deep inference
This distinction may speak to a straightforward extension of the treatment on offer, from simply inferring an external state to inferring the trajectories of external states. This may be achieved by representing the external process in terms of its higher orders of motion, by solving the stochastic realization problem. By repeating the analysis above, internal states may be seen as inferring the position, velocity, jerk etc. of the external process, consistent with temporally deep inference in the sense of a Bayesian filter [91] (a special case of which is an extended Kalman–Bucy filter [100]).
(c) Bayesian mechanics in non-Gaussian steady states
The treatment from this paper extends easily to non-Gaussian steady states, in which internal states appear to perform approximate Bayesian inference over external states. Indeed, any arbitrary (smooth) steady-state density may be approximated by a Gaussian density at one of its modes using a so-called Laplace approximation. This Gaussian density affords a synchronization map in closed form⁴ that maps the expected internal state to an approximation of the expected external state. It follows that the system can be seen as performing approximate Bayesian inference over external states; precisely, an inferential scheme known as variational Laplace [101]. We refer the interested reader to a worked-out example involving two sparsely coupled Lorenz systems [30]. Note that variational Laplace has been proposed as an implementation of various cognitive processes in biological systems [25,52,58], accounting for several features of the brain's functional anatomy and neural message passing [53,70,99,102,103].
(d) Modelling real systems
The simulations presented here are as simple as possible and are intended to illustrate general principles that apply to all stationary processes with a Markov blanket (3.1). These principles have been used to account for synthetic data arising in more refined (and more specific) simulations of an interacting particle system [27] and synchronization between two sparsely coupled stochastic Lorenz systems [30]. Clearly, an outstanding challenge is to account for empirical data arising from more interesting and complex structures. To do this, one would have to collect time-series from an organism’s internal states (e.g. neural activity), its surrounding external states, and its interface, including sensory receptors and actuators. Then, one could test for conditional independence between internal, external and blanket states (3.1) [104]. One might then test for the existence of a synchronization map (using lemma 2.3). This speaks to modelling systemic dynamics using stochastic processes with a Markov blanket. For example, one could learn the volatility, solenoidal flow and steady-state density in a stochastic differential equation (3.2) from data, using supervised learning [105].
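As a first pass at such a test, one can estimate the precision matrix from (here, synthetic) data and inspect the block coupling internal and external states, which should vanish in the presence of a Markov blanket; a proper conditional-independence test [104] would be the rigorous follow-up.

```python
import numpy as np

rng = np.random.default_rng(4)
Pi_true = np.array([[2.0, 1.0, 0.0, 0.0],     # ground truth with a blanket:
                    [1.0, 3.0, 1.0, 0.5],     # x = (eta, b, mu1, mu2)
                    [0.0, 1.0, 2.0, 0.5],
                    [0.0, 0.5, 0.5, 2.0]])
L = np.linalg.cholesky(np.linalg.inv(Pi_true))
X = (L @ rng.normal(size=(4, 100_000))).T     # i.i.d. stand-in for recordings

Pi_hat = np.linalg.inv(np.cov(X.T))           # estimated precision matrix
print(Pi_hat.round(2))          # the (eta, mu) entries are close to zero
```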
6. Conclusion
This paper outlines some of the key relationships between stationary processes, inference and control. These relationships rest upon partitioning the world into those things that are internal or external to a (statistical) boundary, known as a Markov blanket. When equipped with dynamics, the expected internal states appear to engage in variational inference, while the expected active states appear to be performing active inference and various forms of stochastic control.
The rationale behind these findings is rather simple: if a Markov blanket derives from a steady-state density, the states of the system will look as if they are responding adaptively to external perturbations in order to recover the steady state. Conversely, well-known methods used to build adaptive systems implement the same kind of dynamics, implicitly ensuring that the system maintains a steady state with its environment.
Footnotes
2 This mapping was derived independently of our work in [36, §3.2].
3 Replacing by any other element of (2.5) would lead to the same synchronization map provided that the conditions of lemma 2.3 are satisfied.
4 Another option is to empirically fit a synchronization map to data [27].
Data accessibility
All data and numerical simulations can be reproduced with code freely available at https://github.com/conorheins/bayesian-mechanics-sdes.
Authors' contributions
Conceptualization: L.D.C., K.F., C.H., G.A.P. Formal analysis: L.D.C., K.F., G.A.P. Software: L.D.C., C.H. Supervision: K.F., G.A.P. Writing-original draft: L.D.C. Writing-review and editing: K.F., C.H., G.A.P. All authors gave final approval for publication and agree to be held accountable for the work performed therein.
Competing interests
We have no competing interests.
Funding
L.D. is supported by the Fonds National de la Recherche, Luxembourg (Project code: 13568875). This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). K.F. was a Wellcome Principal Research Fellow (Ref: 088130/Z/09/Z). C.H. is supported by the U.S. Office of Naval Research (N00014-19-1-2556). The work of G.A.P. was partially funded by the EPSRC, grant no. EP/P031587/1, and by JPMorgan Chase & Co. Any views or opinions expressed herein are solely those of the authors listed, and may differ from the views and opinions expressed by JPMorgan Chase & Co. or its affiliates. This material is not a product of the Research Department of J.P. Morgan Securities LLC. This material does not constitute a solicitation or offer in any jurisdiction.
Acknowledgements
L.D. would like to thank Kai Ueltzhöffer, Toby St Clere Smithe and Thomas Parr for interesting discussions. We are grateful to our two anonymous reviewers for feedback which substantially improved the manuscript.
Appendix A. Existence of synchronization map: proof
We prove lemma 2.3.
Proof.
(i) $\Rightarrow$ (ii) follows by definition of a function.
(ii) $\Rightarrow$ (i) is as follows: condition (ii) says precisely that the assignment $\boldsymbol{\mu}(b) \mapsto \boldsymbol{\eta}(b)$ is single-valued on $\boldsymbol{\mu}(B)$, which yields the required function $\sigma$.
(ii) $\iff$ (iii) $\iff$ (iv): from [106, §0.7.3], using the Markov blanket condition (2.3), we can verify that $\boldsymbol{\eta}(b) = -\Pi_{\eta\eta}^{-1}\Pi_{\eta b}\, b$ and $\boldsymbol{\mu}(b) = -\Pi_{\mu\mu}^{-1}\Pi_{\mu b}\, b$. By linearity, (ii) states that the kernel of $b \mapsto \boldsymbol{\mu}(b)$ is contained in the kernel of $b \mapsto \boldsymbol{\eta}(b)$, which is (iii); and (iii) is equivalent to (iv) since $\Pi_{\mu\mu}$ and $\Pi_{\eta\eta}$ are invertible.
Appendix B. The Helmholtz decomposition
We consider a diffusion process $x_t$ on $\mathbb{R}^d$ satisfying an Itô stochastic differential equation (SDE) [39,107,108],
$$dx_t = b(x_t)\,dt + \varsigma(x_t)\,dW_t, \tag{B 1}$$
where the drift $b$ and volatility $\varsigma$ satisfy:
— Linear growth condition: $|b(x)| + |\varsigma(x)| \leq C(1 + |x|)$;
— Lipschitz condition: $|b(x) - b(y)| + |\varsigma(x) - \varsigma(y)| \leq C|x - y|$,
for some constant $C > 0$ and all $x, y \in \mathbb{R}^d$; these conditions guarantee the existence and uniqueness of a strong solution.
We now recall an important result from the theory of stationary diffusion processes, known as the Helmholtz decomposition. It consists of splitting the dynamics into time-reversible (i.e. dissipative) and time-irreversible (i.e. conservative) components. The importance of this result in non-equilibrium thermodynamics was originally recognized by Graham in 1977 [109] and has been of great interest in the field since [39,110–112]. Furthermore, the Helmholtz decomposition is widely used in statistical machine learning to generate Monte-Carlo sampling schemes [39,73,113–116].
Lemma B.1. (Helmholtz decomposition)
For a diffusion process (B 1) and a smooth probability density $p > 0$, the following are equivalent:
(i) $p$ is a steady state for $x_t$.
(ii) We can write the drift as
$$b = (\Gamma + Q)\nabla \log p + \nabla \cdot (\Gamma + Q), \tag{B 2}$$
where $\Gamma := \varsigma\varsigma^{\top}/2$ is the diffusion tensor and $Q = -Q^{\top}$ is a smooth antisymmetric matrix field. $\nabla\cdot$ denotes the divergence of a matrix field, defined as $(\nabla \cdot M)_i := \sum_j \partial_{x_j} M_{ij}$.
Furthermore, the dissipative part of the drift, $\Gamma \nabla \log p + \nabla \cdot \Gamma$, is invariant under time-reversal, while the solenoidal part, $Q \nabla \log p + \nabla \cdot Q$, changes sign under time-reversal.
In the Helmholtz decomposition of the drift (B 2), the diffusion tensor $\Gamma$ mediates the dissipative flow, which flows towards the modes of the steady-state density but is counteracted by the random fluctuations $\varsigma\, dW_t$, so that the system's distribution remains unchanged; together these form the time-reversible part of the dynamics. In contrast, $Q$ mediates the solenoidal flow, whose direction is reversed under time-reversal, and which consists of conservative (i.e. Hamiltonian) dynamics that flow on the level sets of the steady state. See figure 11 for an illustration. Note that the terms time-reversible and time-irreversible are meant in a probabilistic sense, in that time-reversibility denotes invariance under time-reversal. This is opposite to reversible and irreversible in a classical physics sense, which respectively mean energy preserving (i.e. conservative) and entropy creating (i.e. dissipative).
It is well known that when $x_t$ is stationary at $p$, its time-reversal is also a diffusion process that solves the following Itô SDE [118]:
$$dx_t^{-} = b^{-}(x_t^{-})\,dt + \varsigma(x_t^{-})\,dW_t, \tag{B 3}$$
with time-reversed drift
$$b^{-} = -b + 2\Gamma \nabla \log p + 2\nabla \cdot \Gamma. \tag{B 4}$$
Proof.
'(i) $\Rightarrow$ (ii)': from (B 4), we can rewrite the time-reversible part of the drift as
$$\tfrac{1}{2}\big(b + b^{-}\big) = \Gamma \nabla \log p + \nabla \cdot \Gamma,$$
and the time-irreversible part as $\tfrac{1}{2}\big(b - b^{-}\big)$.
Appendix C. Free energy computations
The free energy (3.6) reads
$$F(b, \mu) = D_{\mathrm{KL}}\big[q_{\mu}(\eta)\,\|\,p(\eta \mid b)\big] - \log p(b, \mu) = \frac{1}{2}\big(\sigma(\mu) - \boldsymbol{\eta}(b)\big)^{\top}\Pi_{\eta\eta}\big(\sigma(\mu) - \boldsymbol{\eta}(b)\big) - \log p(b, \mu).$$