Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
Open Access Research article

Bayesian mechanics for stationary processes

Lancelot Da Costa (Department of Mathematics, Imperial College London, London SW7 2AZ, UK; Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK; [email protected]), Karl Friston (Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK), Conor Heins (Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz D-78457, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz D-78457, Germany; Department of Biology, University of Konstanz, Konstanz D-78457, Germany) and Grigorios A. Pavliotis (Department of Mathematics, Imperial College London, London SW7 2AZ, UK)

Published: https://doi.org/10.1098/rspa.2021.0518

    Abstract

    This paper develops a Bayesian mechanics for adaptive systems. First, we model the interface between a system and its environment with a Markov blanket. This affords conditions under which states internal to the blanket encode information about external states. Second, we introduce dynamics and represent adaptive systems as Markov blankets at steady state. This allows us to identify a wide class of systems whose internal states appear to infer external states, consistent with variational inference in Bayesian statistics and theoretical neuroscience. Finally, we partition the blanket into sensory and active states. It follows that active states can be seen as performing active inference and well-known forms of stochastic control (such as PID control), which are prominent formulations of adaptive behaviour in theoretical biology and engineering.

    1. Introduction

    Any object of study must be, implicitly or explicitly, separated from its environment. This implies a boundary that separates it from its surroundings, and which persists for at least as long as the system exists.

    In this article, we explore the consequences of a boundary mediating interactions between states internal and external to a system. This provides a useful metaphor to think about biological systems, which comprise spatially bounded, interacting components, nested at several spatial scales [1,2]: for example, the membrane of a cell acts as a boundary through which the cell communicates with its environment, and the same can be said of the sensory receptors and muscles that bound the nervous system.

    By examining the dynamics of persistent, bounded systems, we identify a wide class of systems wherein the states internal to a boundary appear to infer those states outside the boundary—a description which we refer to as Bayesian mechanics. Moreover, if we assume that the boundary comprises sensory and active states, we can identify the dynamics of active states with well-known descriptions of adaptive behaviour from theoretical biology and stochastic control.

    In what follows, we link a purely mathematical formulation of interfaces and dynamics with descriptions of belief updating and behaviour found in the biological sciences and engineering. Altogether, this can be seen as a model of adaptive agents, as these interface with their environment through sensory and active states and furthermore behave so as to preserve a target steady state.

    (a) Outline of paper

    This paper has three parts, each of which introduces a simple, but fundamental, move.

    (i)

    The first is to partition the world into internal and external states whose boundary is modelled with a Markov blanket [3,4]. This allows us to identify conditions under which internal states encode information about external states.

    (ii)

    The second move is to equip this partition with stochastic dynamics. The key consequence of this is that internal states can be seen as continuously inferring external states, consistent with variational inference in Bayesian statistics and with predictive processing accounts of biological neural networks in theoretical neuroscience.

    (iii)

    The third move is to partition the boundary into sensory and active states. It follows that active states can be seen as performing active inference and stochastic control, which are prominent descriptions of adaptive behaviour in biological agents, machine learning and robotics.

    (b) Related work

    The emergence and sustaining of complex (dissipative) structures has been the subject of long-standing research, starting from the work of Prigogine [5,6], followed notably by Haken's synergetics [7] and, in recent years, the statistical physics of adaptation [8]. A central theme of these works is that complex systems can only emerge and sustain themselves far from equilibrium [9–11].

    Information processing has long been recognized as a hallmark of cognition in biological systems. In light of this, theoretical physicists have identified basic instances of information processing in systems far from equilibrium using tools from information theory, such as how a drive for metabolic efficiency can lead a system to become predictive [12–15].

    A fundamental aspect of biological systems is the self-organization of various interacting components at several spatial scales [1,2]. Much research currently focuses on multipartite processes—modelling interactions between the various sub-components that form biological systems—and how their interactions constrain the thermodynamics of the whole [16–20].

    At the confluence of these efforts, researchers have sought to explain cognition in biological systems. Since the advent of the twentieth century, Bayesian inference has been used to describe various cognitive processes in the brain [21–25]. In particular, the free energy principle [23], a prominent theory of self-organization from the neurosciences, postulates that Bayesian inference can be used to describe the dynamics of multipartite, persistent systems modelled as Markov blankets at non-equilibrium steady state [26–30].

    This paper connects and develops some of the key themes from this literature. Starting from fundamental considerations about adaptive systems, we develop a physics of things that hold beliefs about other things—consistent with Bayesian inference—and explore how it relates to known descriptions of action and behaviour from the neurosciences and engineering. Our contribution is theoretical: from a biophysicist's perspective, this paper describes how Bayesian descriptions of biological cognition and behaviour can emerge from standard accounts of physics. From an engineer's perspective, this paper contextualizes some of the most common stochastic control methods and reminds us how these can be extended to suit more sophisticated control problems.

    (c) Notation

    Let $\Pi \in \mathbb{R}^{d \times d}$ be a square matrix with real coefficients. Let $\eta, b, \mu$ denote a partition of the states $[\![1, d]\!]$, so that

    $$\Pi = \begin{bmatrix} \Pi_\eta & \Pi_{\eta b} & \Pi_{\eta\mu} \\ \Pi_{b\eta} & \Pi_b & \Pi_{b\mu} \\ \Pi_{\mu\eta} & \Pi_{\mu b} & \Pi_\mu \end{bmatrix}.$$

    We denote principal submatrices with one index only (i.e. we use $\Pi_\eta$ instead of $\Pi_{\eta\eta}$). Similarly, principal submatrices involving various indices are denoted with a colon:

    $$\Pi_{\eta:b} := \begin{bmatrix} \Pi_\eta & \Pi_{\eta b} \\ \Pi_{b\eta} & \Pi_b \end{bmatrix}.$$

    When a square matrix $\Pi$ is symmetric positive-definite we write $\Pi \succ 0$. $\ker$ and $\dagger$ respectively denote the kernel and Moore–Penrose pseudo-inverse of a linear map or matrix, e.g. of a not necessarily square matrix such as $\Pi_{\mu b}$. In our notation, indexing takes precedence over (pseudo-)inversion; for example,

    $$\Pi_{\mu b}^{\dagger} := (\Pi_{\mu b})^{\dagger} \neq (\Pi^{\dagger})_{\mu b}.$$

    2. Markov blankets

    This section formalizes the notion of boundary between a system and its environment as a Markov blanket [3,4], depicted graphically in figure 1. Intuitive examples of a Markov blanket are that of a cell membrane, mediating all interactions between the inside and the outside of the cell, or that of sensory receptors and muscles that bound the nervous system.

    Figure 1.

    Figure 1. Markov blanket depicted graphically as an undirected graphical model, also known as a Markov random field [4,31]. (A Markov random field is a Bayesian network whose directed arrows are replaced by undirected arrows.) The circles represent random variables. The lines represent conditional dependencies between random variables. The Markov blanket condition means that there is no line between $\mu$ and $\eta$: $\mu$ and $\eta$ are conditionally independent given $b$. In other words, knowing the internal state $\mu$ does not afford additional information about the external state $\eta$ when the blanket state $b$ is known. Thus, blanket states act as an informational boundary between internal and external states. (Online version in colour.)

    To formalize this intuition, we model the world's state as a random variable $x$ with corresponding probability distribution $p$ over a state-space $X = \mathbb{R}^d$. We partition the state-space of $x$ into external, blanket and internal states:

    $$x = (\eta, b, \mu) \in X = E \times B \times I.$$

    External, blanket and internal state-spaces $(E, B, I)$ are taken to be Euclidean spaces for simplicity.

    A Markov blanket is a statement of conditional independence between internal and external states given blanket states.

    Definition 2.1. (Markov blanket)

    A Markov blanket is defined as

    $$\text{(M.B.)} \qquad \eta \perp \mu \mid b. \tag{2.1}$$

    That is, blanket states $b$ form a Markov blanket separating $\mu$ and $\eta$ [3,4].

    The existence of a Markov blanket can be expressed in several equivalent ways:

    $$\text{(M.B.)} \iff p(\eta, \mu \mid b) = p(\eta \mid b)\,p(\mu \mid b) \iff p(\eta \mid b, \mu) = p(\eta \mid b) \iff p(\mu \mid b, \eta) = p(\mu \mid b). \tag{2.2}$$

    For now, we will consider a (non-degenerate) Gaussian distribution $p$ encoding the distribution of states of the world:

    $$p(x) := \mathcal{N}(x;\ 0,\ \Pi^{-1}), \qquad \Pi \succ 0,$$

    with associated precision (i.e. inverse covariance) matrix $\Pi$. Throughout, we will denote the (positive-definite) covariance by $\Sigma := \Pi^{-1}$. Unpacking (2.1) in terms of Gaussian densities, we find that a Markov blanket is equivalent to a sparsity in the precision matrix:

    $$\text{(M.B.)} \iff \Pi_{\eta\mu} = \Pi_{\mu\eta} = 0. \tag{2.3}$$

    Example 2.2.

    For example,

    $$\Pi = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} \implies \Sigma_{\eta:b}^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1.5 \end{bmatrix}, \qquad \Sigma_{b:\mu}^{-1} = \begin{bmatrix} 1.5 & 1 \\ 1 & 2 \end{bmatrix}.$$

    Then,

    $$p(\eta, \mu \mid b) \propto p(\eta, \mu, b) \propto \exp\!\left(-\tfrac{1}{2}x^\top \Pi x\right) \propto \exp\!\left(-\tfrac{1}{2}[\eta, b]\,\Sigma_{\eta:b}^{-1}\begin{bmatrix}\eta \\ b\end{bmatrix} - \tfrac{1}{2}[b, \mu]\,\Sigma_{b:\mu}^{-1}\begin{bmatrix}b \\ \mu\end{bmatrix}\right) \propto p(\eta, b)\,p(b, \mu) \propto p(\eta \mid b)\,p(\mu \mid b).$$

    Thus, the Markov blanket condition (2.1) holds.

    (a) Expected internal and external states

    Blanket states act as an information boundary between external and internal states. Given a blanket state, we can express the conditional probability densities over external and internal states (using (2.1) and [32, proposition 3.13])¹:

    $$p(\eta \mid b) = \mathcal{N}(\eta;\ \Sigma_{\eta b}\Sigma_b^{-1}b,\ \Pi_\eta^{-1}), \qquad p(\mu \mid b) = \mathcal{N}(\mu;\ \Sigma_{\mu b}\Sigma_b^{-1}b,\ \Pi_\mu^{-1}). \tag{2.4}$$

    This enables us to associate with any blanket state its corresponding expected external and expected internal states:

    $$\eta(b) := \mathbb{E}[\eta \mid b] = \Sigma_{\eta b}\Sigma_b^{-1}b \in E, \qquad \mu(b) := \mathbb{E}[\mu \mid b] = \Sigma_{\mu b}\Sigma_b^{-1}b \in I.$$

    Pursuing the example of the nervous system, each sensory impression on the retina and oculomotor orientation (blanket state) is associated with an expected scene that caused sensory input (expected external state) and an expected pattern of neural activity in the visual cortex (expected internal state) [33].
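    To make the conditional expectations (2.4) concrete, here is a minimal numerical sketch in Python/NumPy; the precision matrix is the one from example 2.2, and the indexing convention (0 for $\eta$, 1 for $b$, 2 for $\mu$) is ours, purely for illustration.

        import numpy as np

        Pi = np.array([[2., 1., 0.],
                       [1., 2., 1.],
                       [0., 1., 2.]])   # precision; Pi[0, 2] = 0 is the blanket sparsity (2.3)
        Sigma = np.linalg.inv(Pi)       # covariance

        def expected_external(b):
            # eta(b) = Sigma_{eta b} Sigma_b^{-1} b
            return Sigma[0, 1] / Sigma[1, 1] * b

        def expected_internal(b):
            # mu(b) = Sigma_{mu b} Sigma_b^{-1} b
            return Sigma[2, 1] / Sigma[1, 1] * b

        print(expected_external(1.0), expected_internal(1.0))   # both -0.5 for this Pi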

    (b) Synchronization map

    A central question is whether and how expected internal states encode information about expected external states. For this, we need to characterize a synchronization function $\sigma$ mapping the expected internal state to the expected external state, given a blanket state: $\sigma(\mu(b)) = \eta(b)$. This is summarized in the following commutative diagram:

    [Commutative diagram: the maps $b \mapsto \mu(b)$ and $b \mapsto \eta(b)$ commute with $\sigma$, i.e. $\sigma \circ \mu = \eta$.]

    The existence of $\sigma$ is guaranteed, for instance, if the expected internal state completely determines the blanket state—that is, when no information is lost in the mapping $b \mapsto \mu(b)$ by virtue of it being one-to-one. In general, however, many blanket states may correspond to a unique expected internal state. Intuitively, consider the various neural pathways that compress the signal arriving from retinal photoreceptors [34]: many different (hopefully similar) retinal impressions lead to the same signal arriving in the visual cortex.

    (i) Existence

    The key to the existence of a function $\sigma$ mapping expected internal states to expected external states, given blanket states, is that any two blanket states associated with the same expected internal state be associated with the same expected external state. This non-degeneracy means that the internal states (e.g. patterns of activity in the visual cortex) have enough capacity to represent all possible expected external states (e.g. three-dimensional scenes of the environment). We formalize this in the following lemma.

    Lemma 2.3.

    The following are equivalent:

    (i)

    There exists a function $\sigma : \operatorname{Image}(\mu) \to \operatorname{Image}(\eta)$ such that for any blanket state $b \in B$

    $$\sigma(\mu(b)) = \eta(b).$$

    (ii)

    For any two blanket states $b_1, b_2 \in B$

    $$\mu(b_1) = \mu(b_2) \implies \eta(b_1) = \eta(b_2).$$

    (iii)

    $\ker \Sigma_{\mu b} \subseteq \ker \Sigma_{\eta b}$.

    (iv)

    $\ker \Pi_{\mu b} \subseteq \ker \Pi_{\eta b}$.

    See appendix A for a proof of lemma 2.3.

    Example 2.4.

    When external, blanket and internal states are one-dimensional, the existence of a synchronization map is equivalent to $\Pi_{\mu b} \neq 0$ or $\Pi_{\mu b} = \Pi_{\eta b} = 0$.

    If $\Pi_{\mu b}$ is chosen at random—its entries sampled from a non-degenerate Gaussian or uniform distribution—then $\Pi_{\mu b}$ has full rank with probability 1. If, furthermore, the blanket state-space $B$ has dimensionality lower than or equal to that of the internal state-space $I$, we obtain that $\Pi_{\mu b}$ is one-to-one (i.e. $\ker \Pi_{\mu b} = 0$) with probability 1. Thus, in this case, the conditions of lemma 2.3 are fulfilled with probability 1.

    (ii) Construction

    The key idea to map an expected internal state μ(b) to an expected external state η(b) is to: (1) find a blanket state that maps to this expected internal state (i.e. by inverting μ) and (2) from this blanket state, find the corresponding expected external state (i.e. by applying η):

    [Diagram: $\mu \mapsto b \in \mu^{-1}(\mu) \mapsto \eta(b)$, i.e. invert $b \mapsto \mu(b)$, then apply $b \mapsto \eta(b)$.]

    We now proceed to solve this problem. Given an internal state $\mu$, we study the set of blanket states $b$ such that $\mu(b) = \mu$:

    $$\mu(b) = \Sigma_{\mu b}\Sigma_b^{-1}b = \mu \iff b \in \mu^{-1}(\mu). \tag{2.5}$$

    Here, the inverse on the right-hand side of (2.5) is understood as the preimage of a linear map. We know that this system of linear equations has a space of solutions given by [35]

    $$\mu^{-1}(\mu) = \left\{\Sigma_b\Sigma_{\mu b}^{\dagger}\mu + \left(\mathrm{Id} - \Sigma_b\Sigma_{\mu b}^{\dagger}\Sigma_{\mu b}\Sigma_b^{-1}\right)b : b \in B\right\}. \tag{2.6}$$

    Among these, we choose

    $$\mu^{-\dagger}(\mu) := \Sigma_b\Sigma_{\mu b}^{\dagger}\mu.$$

    Definition 2.5. (Synchronization map)

    We define a synchronization function that assigns to an internal state a corresponding most likely external state²,³:

    $$\sigma : \operatorname{Image}\mu \to \operatorname{Image}\eta, \qquad \mu \mapsto \eta\big(\mu^{-\dagger}(\mu)\big) = \Sigma_{\eta b}\Sigma_{\mu b}^{\dagger}\mu = \Pi_\eta^{-1}\Pi_{\eta b}\Pi_{\mu b}^{\dagger}\Pi_\mu\,\mu.$$

    The expression in terms of the precision matrix is a by-product of appendix A.
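    As a sanity check of definition 2.5, here is a small Python/NumPy sketch; the 5×5 precision matrix (diagonally dominant, hence positive definite, with the blanket sparsity (2.3)) and the block sizes are our own choices for illustration.

        import numpy as np

        # eta = indices 0-1, b = index 2, mu = indices 3-4
        Pi = np.array([[3., .4, 1., 0., 0.],
                       [.4, 3., .5, 0., 0.],
                       [1., .5, 3., .8, .6],
                       [0., 0., .8, 3., .2],
                       [0., 0., .6, .2, 3.]])
        Sigma = np.linalg.inv(Pi)
        e, b, m = slice(0, 2), slice(2, 3), slice(3, 5)

        def sync(mu):
            # sigma(mu) = Sigma_{eta b} Sigma_{mu b}^† mu  (definition 2.5)
            return Sigma[e, b] @ np.linalg.pinv(Sigma[m, b]) @ mu

        b0 = np.array([1.3])
        eta_bar = Sigma[e, b] @ np.linalg.solve(Sigma[b, b], b0)  # expected external state
        mu_bar = Sigma[m, b] @ np.linalg.solve(Sigma[b, b], b0)   # expected internal state
        print(np.allclose(sync(mu_bar), eta_bar))                 # True: lemma 2.3 holds here

    Here the blanket is one-dimensional and $\Sigma_{\mu b} \neq 0$, so $\ker \Sigma_{\mu b} = 0$ and the conditions of lemma 2.3 are satisfied.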

    Note that we can always define such a $\sigma$; however, it is only when the conditions of lemma 2.3 are fulfilled that $\sigma$ maps expected internal states to expected external states, $\sigma(\mu(b)) = \eta(b)$. When this is not the case, the internal states do not fully represent external states, which leads to a partly degenerate type of representation; see figure 2 for a numerical illustration obtained by sampling from a Gaussian distribution, in the non-degenerate (a) and degenerate (b) cases, respectively.

    Figure 2.

    Figure 2. Synchronization map: example and non-example. This figure plots expected external states given blanket states $\eta(b)$ (in orange), and the corresponding prediction encoded by internal states $\sigma(\mu(b))$ (in blue). In this example, external, blanket and internal state-spaces are taken to be one-dimensional. We show the correspondence when the conditions of lemma 2.3 are satisfied (a) and when these are not satisfied (b). In the latter case, the predictions are uniformly zero. To generate these data, (1) we drew $10^6$ samples from a Gaussian distribution with a Markov blanket, (2) we partitioned the blanket state-space into several bins, (3) we obtained the expected external and internal states given blanket states empirically by averaging samples from each bin, and finally, (4) we applied the synchronization map to the (empirical) expected internal states given blanket states. (Online version in colour.)
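    The empirical procedure described in this caption is straightforward to reproduce; below is a minimal Python/NumPy sketch using the one-dimensional example 2.2 (bin edges, sample size and seed are arbitrary choices of ours).

        import numpy as np

        rng = np.random.default_rng(0)
        Pi = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])    # example 2.2
        Sigma = np.linalg.inv(Pi)
        eta, b, mu = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000).T  # step (1)

        slope = Sigma[0, 1] / Sigma[2, 1]        # 1-d synchronization map: sigma(mu) = slope * mu
        edges = np.linspace(-2., 2., 21)         # step (2): bin the blanket state-space
        which = np.digitize(b, edges)
        for i in range(1, edges.size):
            inside = which == i
            if inside.sum() > 1000:
                # steps (3)-(4): empirical E[eta | b in bin] vs sigma(E[mu | b in bin])
                print(f"{eta[inside].mean():+.3f}  {slope * mu[inside].mean():+.3f}")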

    3. Bayesian mechanics

    In order to study the time-evolution of systems with a Markov blanket, we introduce dynamics into the external, blanket and internal states. Henceforth, we assume that the conditions of lemma 2.3 hold, so that a synchronization map exists.

    (a) Processes at a Gaussian steady state

    We consider stochastic processes at a Gaussian steady state $p$ with a Markov blanket. The steady-state assumption means that the system's overall configuration persists over time (e.g. it does not dissipate). In other words, we have a Gaussian density $p = \mathcal{N}(0, \Pi^{-1})$ with a Markov blanket (2.2) and a stochastic process distributed according to $p$ at every point in time:

    $$x_t \sim p = \mathcal{N}(0, \Pi^{-1}) \quad \text{for any } t.$$

    Recalling our partition into external, blanket and internal states, this affords a Markov blanket that persists over time (see figure 3):

    $$x_t = (\eta_t, b_t, \mu_t) \sim p \implies \eta_t \perp \mu_t \mid b_t. \tag{3.1}$$
    Figure 3.

    Figure 3. Markov blanket evolving in time. We use a bacillus to depict an intuitive example of a Markov blanket that persists over time. Here, the blanket states represent the membrane and actin filaments of the cytoskeleton, which mediate all interactions between internal states and the external medium (external states). (Online version in colour.)

    Note that we do not require xt to be independent samples from the steady-state distribution p. On the contrary, xt may be generated by extremely complex, nonlinear, and possibly stochastic equations of motion. See example 3.1 and figure 4 for details.

    Figure 4.

    Figure 4. Processes at a Gaussian steady state. This figure illustrates the synchronization map and transition probabilities of processes at a Gaussian steady state. (a) We plot the synchronization map as in figure 2, only, here, the samples are drawn from trajectories of a diffusion process (3.2) with a Markov blanket. Although this is not the case here, one might obtain a slightly noisier correspondence between predictions σ(μ(bt)) and expected external states η(bt)—compared to figure 2—in numerical discretizations of a diffusion process. This is because the steady state of a numerical discretization usually differs slightly from the steady state of the continuous-time process [37]. (b) This panel plots the transition probabilities of the same diffusion process (3.2), for the blanket state at two different times. The joint distribution (depicted as a heat map) is not Gaussian but its marginals—the steady-state density—are Gaussian. This shows that in general, processes at a Gaussian steady state are not Gaussian processes. In fact, the Ornstein–Uhlenbeck process is the only stationary diffusion process (3.2) that is a Gaussian process, so the transition probabilities of nonlinear diffusion processes (3.2) are never multivariate Gaussians. (Online version in colour.)

    Example 3.1.

    The dynamics of $x_t$ are described by a stochastic process at a Gaussian steady state $p = \mathcal{N}(0, \Pi^{-1})$. There is a large class of such processes, which includes:

    Stationary diffusion processes, with initial condition $x_0 \sim p$. Their time-evolution is given by an Itô stochastic differential equation (see appendix B):

    $$dx_t = (\Gamma + Q)(x_t)\,\nabla\log p(x_t)\,dt + \nabla\cdot(\Gamma + Q)(x_t)\,dt + \varsigma(x_t)\,dW_t = -(\Gamma + Q)(x_t)\,\Pi x_t\,dt + \nabla\cdot(\Gamma + Q)(x_t)\,dt + \varsigma(x_t)\,dW_t, \qquad \Gamma := \varsigma\varsigma^\top/2, \quad Q = -Q^\top. \tag{3.2}$$

    Here, $W_t$ is a standard Brownian motion (a.k.a. Wiener process) [38,39] and $\varsigma, \Gamma, Q$ are sufficiently well-behaved matrix fields (see appendix B). Namely, $\Gamma$ is the diffusion tensor (half the covariance of the random fluctuations), which drives dissipative flow; $Q$ is an arbitrary antisymmetric matrix field, which drives conservative (i.e. solenoidal) flow. Note that there are no non-degeneracy conditions on the matrix field $\varsigma$—in particular, the process is allowed to be non-ergodic or even completely deterministic (i.e. $\varsigma \equiv 0$). Also, $\nabla\cdot$ denotes the divergence of a matrix field, defined as $(\nabla\cdot(\Gamma + Q))_i := \sum_j (\partial/\partial x_j)(\Gamma + Q)_{ij}$.

    More generally, $x_t$ could be generated by any Markov process at steady state $p$, such as the zig-zag process or the bouncy particle sampler [40–42], by any mean-zero Gaussian process at steady state $p$ [43], or by any random dynamical system at steady state $p$ [44].
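    For concreteness, a minimal Euler–Maruyama simulation of a stationary diffusion (3.2) can be sketched as follows (Python/NumPy; step size, horizon and seed are our choices). Since $\Gamma$ and $Q$ are constant here, the divergence term in (3.2) vanishes, and the drift matrix coincides with the one displayed in remark 3.2 below.

        import numpy as np

        rng = np.random.default_rng(1)
        Pi = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])
        Q = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])  # antisymmetric: solenoidal flow
        sig = np.eye(3)                      # constant volatility, so Gamma = sig sig^T / 2
        Gamma = sig @ sig.T / 2
        B = -(Gamma + Q) @ Pi                # drift: (Gamma + Q) grad log p(x) = -(Gamma + Q) Pi x

        dt, n = 1e-3, 200_000
        x = rng.multivariate_normal(np.zeros(3), np.linalg.inv(Pi))  # x_0 ~ p
        samples = np.empty((n, 3))
        for k in range(n):
            x = x + B @ x * dt + sig @ (np.sqrt(dt) * rng.standard_normal(3))
            samples[k] = x
        print(np.cov(samples.T))             # ~ Pi^{-1}, up to discretization and sampling error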

    Remark 3.2.

    When the dynamics are given by an Itô stochastic differential equation (3.2), a Markov blanket of the steady-state density (2.2) does not preclude reciprocal influences between internal and external states [45,46]. For example,

    $$\Pi = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix}, \qquad Q = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}, \qquad \varsigma \equiv \mathrm{Id}_3,$$

    and

    $$d\begin{bmatrix} \eta_t \\ b_t \\ \mu_t \end{bmatrix} = \begin{bmatrix} -1 & -1.5 & -2 \\ -0.5 & -1 & -0.5 \\ 2 & 0.5 & -1 \end{bmatrix}\begin{bmatrix} \eta_t \\ b_t \\ \mu_t \end{bmatrix} dt + \varsigma\,dW_t.$$
    Conversely, the absence of reciprocal coupling between two states in the drift sometimes, though not always, leads to conditional independence [30,36,45].

    (b) Maximum a posteriori estimation

    The Markov blanket (3.1) allows us to exploit the construction of §2 to determine expected external and internal states given blanket states:

    $$\eta_t := \eta(b_t), \qquad \mu_t := \mu(b_t).$$

    Note that $\eta, \mu$ are linear functions of blanket states; since $b_t$ generally exhibits rough sample paths, $\eta_t, \mu_t$ will also exhibit very rough sample paths.

    We can view the steady-state density p as specifying the relationship between external states (η, causes) and particular states (b,μ, consequences). In statistics, this corresponds to a generative model, a probabilistic specification of how (external) causes generate (particular) consequences.

    By construction, the expected internal states encode expected external states via the synchronization map:

    $$\sigma(\mu_t) = \eta_t,$$

    which manifests a form of generalized synchrony across the Markov blanket [47–49]. Moreover, the expected internal state $\mu_t$ effectively follows the most likely cause of its sensations:

    $$\sigma(\mu_t) = \arg\max_{\eta}\, p(\eta \mid b_t) \quad \text{for any } t.$$

    This has an interesting statistical interpretation: expected internal states perform maximum a posteriori (MAP) inference over external states.

    (c) Predictive processing

    We can go further and associate with each internal state $\mu$ a probability distribution over external states, such that each internal state encodes beliefs about external states:

    $$q_\mu(\eta) := \mathcal{N}(\eta;\ \sigma(\mu),\ \Pi_\eta^{-1}). \tag{3.3}$$

    We will call $q_\mu$ the approximate posterior belief associated with the internal state $\mu$, owing to the forthcoming connection with inference. Under this specification, the mean of the approximate posterior depends upon the internal state, while its covariance equals that of the true posterior over external states (2.4). It follows that the approximate posterior equals the true posterior precisely when the internal state $\mu$ equals the expected internal state $\mu(b)$ given blanket states:

    $$q_\mu(\eta) = p(\eta \mid b) \iff \mu = \mu(b). \tag{3.4}$$

    Note a potential connection with epistemic accounts of quantum mechanics; namely, a world governed by classical mechanics ($\varsigma \equiv 0$ in (3.2)) in which each agent encodes Gaussian beliefs about external states could appear, to the agents, to reproduce many features of quantum mechanics [50].

    Under this specification (3.3), expected internal states are the unique minimizer of a Kullback–Leibler divergence [51]:

    $$\mu_t = \arg\min_\mu D_{\mathrm{KL}}[q_\mu(\eta)\,\|\,p(\eta \mid b_t)],$$

    which measures the discrepancy between beliefs about the external world $q_\mu(\eta)$ and the posterior distribution over external variables. Computing the KL divergence (see appendix C), we obtain

    $$\mu_t = \arg\min_\mu\, (\sigma(\mu) - \eta_t)^\top\,\Pi_\eta\,(\sigma(\mu) - \eta_t). \tag{3.5}$$

    In the neurosciences, the right-hand side of (3.5) is commonly known as a (squared) precision-weighted prediction error: the discrepancy between the prediction and the (expected) state of the environment is weighted with a precision matrix [24,52,53] that derives from the steady-state density. This equation is formally similar to those found in predictive coding formulations of biological function [24,54–56], which stipulate that organisms minimize prediction errors, and in doing so optimize their beliefs to match the distribution of external states.

    (d) Variational Bayesian inference

    We can go further and identify expected internal states with the solution to the classical variational inference problem from statistical machine learning [57] and theoretical neurobiology [52,58]. Expected internal states are the unique minimizer of a free energy functional (i.e. an evidence bound [57,59]):

    $$\mu_t = \arg\min_\mu F(b_t, \mu), \qquad F(b, \mu) := D_{\mathrm{KL}}[q_\mu(\eta)\,\|\,p(\eta \mid b)] - \log p(b, \mu) = \underbrace{-\mathbb{E}_{q_\mu(\eta)}[\log p(x)]}_{\text{energy}} - \underbrace{H[q_\mu]}_{\text{entropy}}. \tag{3.6}$$

    The last equality expresses the free energy as a difference between energy and entropy: the energy (or accuracy) term measures the extent to which predicted external states are close to the true external states, while the entropy term penalizes beliefs that are overly precise.
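    As a minimal illustration that the free energy (3.6) is minimized at the expected internal state, here is a Python/NumPy sketch using the one-dimensional example 2.2 and the explicit form of $F$ derived in appendix C (the grid search is a deliberately naive choice of ours).

        import numpy as np

        Pi = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])   # example 2.2
        Sigma = np.linalg.inv(Pi)
        K = np.linalg.inv(Sigma[1:, 1:])    # Sigma_{b:mu}^{-1}, not Pi_{b:mu} (appendix C)

        def F(b, mu):
            # free energy up to an additive constant (appendix C)
            pred = Sigma[0, 1] / Sigma[2, 1] * mu     # sigma(mu)
            target = Sigma[0, 1] / Sigma[1, 1] * b    # eta(b)
            v = np.array([b, mu])
            return 0.5 * Pi[0, 0] * (pred - target) ** 2 + 0.5 * v @ K @ v

        b0 = 1.0
        grid = np.linspace(-3., 3., 601)
        mu_star = grid[np.argmin([F(b0, m) for m in grid])]
        print(mu_star, Sigma[2, 1] / Sigma[1, 1] * b0)   # both -0.5: argmin F is mu(b0)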

    At first sight, variational inference and predictive processing seem solely useful for characterizing the average internal state given blanket states at steady state. It is then surprising to see that the free energy says a great deal about a system's expected trajectories as it relaxes to steady state. Figures 5 and 6 illustrate the time-evolution of the free energy and prediction errors after exposure to a surprising stimulus. In particular, figure 5 averages internal variables for any blanket state: in the neurosciences, perhaps the closest analogy is the event-triggered averaging protocol, where neurophysiological responses are averaged following a fixed perturbation, such as a predictable neural input or an experimentally controlled sensory stimulus (e.g. spike-triggered averaging, event-related potentials) [62–64].

    Figure 5.

    Figure 5. Variational inference and predictive processing, averaging internal variables for any blanket state. This figure illustrates a system's behaviour after experiencing a surprising blanket state, averaging internal variables for any blanket state. This is a multidimensional Ornstein–Uhlenbeck process, with two external, blanket and internal variables, initialized at the steady-state density conditioned upon an improbable blanket state $p(x_0 \mid b_0)$. (a) We plot a sample trajectory of the blanket states as these relax to steady state over a contour plot of the free energy (up to a constant). (b) This plots the free energy (up to a constant) over time, averaged over multiple trajectories. In this example, the rare fluctuations that climb the free energy landscape vanish on average, so that the average free energy decreases monotonically. This need not always be the case: conservative systems (i.e. $\varsigma \equiv 0$ in (3.2)) are deterministic flows along the contours of the steady-state density (see appendix B). Since these contours do not generally coincide with those of $F(b, \mu)$, it follows that the free energy oscillates between its maximum and minimum value over the system's periodic trajectory. Luckily, conservative systems are not representative of dissipative, living systems. Yet, it follows that the average free energy of expected internal variables may increase, albeit only momentarily, in dissipative systems (3.2) whose solenoidal flow dominates dissipative flow. (c) We illustrate the accuracy of predictions over external states of the sample path from (a). At steady state (from timestep 100), the predictions become accurate. The prediction of the second component is offset by four units for greater visibility, as can be seen from the long-time behaviour converging to four instead of zero. (d) We show how precision-weighted prediction errors $\xi := \Pi_\eta(\eta_t - \sigma(\mu_t))$ evolve over time. These become normally distributed with zero mean as the process reaches steady state. (Online version in colour.)

    Figure 6.

    Figure 6. Variational inference and predictive processing. This figure illustrates a system's behaviour after experiencing a surprising blanket state. This is a multidimensional Ornstein–Uhlenbeck process, with one external, blanket and internal variable, initialized at the steady-state density conditioned upon an improbable blanket state $p(x_0 \mid b_0)$. (a) This plots a sample trajectory of particular states as these relax to steady state over a contour plot of the free energy. The white line shows the expected internal state given blanket states, at which point inference is exact. After starting close to this line, the process is driven by solenoidal flow to regions where inference is inaccurate. Yet, solenoidal flow makes the system converge faster to steady state [60,61], at which point inference becomes accurate again. (b) This plots the free energy (up to a constant) over time, averaged over multiple trajectories. (c) We illustrate the accuracy of predictions over external states of the sample path from (a). These predictions are accurate at steady state (from timestep 100). (d) We illustrate the (precision-weighted) prediction errors over time. In orange, we plot the prediction error corresponding to the sample path in (a); the other sample paths are summarized as a heat map in blue. (Online version in colour.)

    The most striking observation is the nearly monotonic decrease of the free energy as the system relaxes to steady state. This simply follows from the fact that regions of high density under the steady-state distribution have a low free energy. This overall decrease in free energy is the essence of the free-energy principle, which describes self-organization at non-equilibrium steady state [23,28,29]. Note that the free energy, even after averaging internal variables, may decrease non-monotonically. See the explanation in figure 5.

    4. Active inference and stochastic control

    In order to model agents that interact with their environment, we now partition blanket states into sensory and active states

    $$b_t = (s_t, a_t) \implies x_t = (\eta_t, s_t, a_t, \mu_t).$$
    Intuitively, sensory states are the sensory receptors of the system (e.g. olfactory or visual receptors) while active states correspond to actuators through which the system influences the environment (e.g. muscles). See figure 7. The goal of this section is to explain how autonomous states (i.e. active and internal states) respond adaptively to sensory perturbations in order to maintain the steady state, which we interpret as the agent’s preferences or goal. This allows us to relate the dynamics of autonomous states to active inference and stochastic control, which are well-known formulations of adaptive behaviour in theoretical biology and engineering.
    Figure 7.

    Figure 7. Markov blanket evolving in time comprising sensory and active states. We continue the intuitive example from figure 3 of the bacillus as representing a Markov blanket that persists over time. The only difference is that we partition blanket states into sensory and active states. In this example, the sensory states can be seen as the bacillus’ membrane, while the active states correspond to the actin filaments of the cytoskeleton.

    (a) Active inference

    We now proceed to characterize autonomous states, given sensory states, using the free energy. Unpacking blanket states, the free energy (3.6) reads

    $$F(s, a, \mu) = D_{\mathrm{KL}}[q_\mu(\eta)\,\|\,p(\eta \mid s, a)] - \log p(\mu \mid s, a) - \log p(a \mid s) - \log p(s).$$
    Crucially, it follows that the expected autonomous states minimize free energy
    $$F(s_t, a(s_t), \mu(s_t)) \leq F(s_t, a_t, \mu_t), \qquad a(s_t) := \mathbb{E}_{p(a_t \mid s_t)}[a_t] = \Sigma_{as}\Sigma_s^{-1}s_t,$$

    where $a(s_t)$ denotes the expected active state given sensory states, which is the mean of $p(a_t \mid s_t)$, and similarly for the expected internal state $\mu(s_t)$. This result forms the basis of active inference, a well-known framework used to describe and generate adaptive behaviour in neuroscience, machine learning and robotics [25,58,65–72]. See figure 8.
    Figure 8.

    Figure 8. Active inference. This figure illustrates a system’s behaviour after experiencing a surprising sensory state, averaging internal variables for any blanket state. We simulated an Ornstein–Uhlenbeck process with two external, one sensory, one active and two internal variables, initialized at the steady-state density conditioned upon an improbable sensory state p(x0|s0). (a) The white line shows the expected active state given sensory states: this is the action that performs active inference and optimal stochastic control. As the process experiences a surprising sensory state, it initially relaxes to steady state in a winding manner due to the presence of solenoidal flow. Even though solenoidal flow drives the actions away from the optimal action initially, it allows the process to converge faster to steady state [60,61,73] where the actions are again close to the optimal action from optimal control. (b) We plot the free energy of the expected internal state, averaged over multiple trajectories. In this example, the average free energy does not decrease monotonically—see figure 5 for an explanation. (Online version in colour.)

    (b) Multivariate control

    Active inference is used in various domains to simulate control [65,69,71,72,74–77]; thus, it is natural that we can relate the dynamics of active states to well-known forms of stochastic control.

    By computing the free energy explicitly (see appendix C), we obtain that

    $$(a_t, \mu_t) \text{ minimizes } (a, \mu) \mapsto [s_t, a, \mu]\, K \begin{bmatrix} s_t \\ a \\ \mu \end{bmatrix}, \qquad K := \Sigma_{b:\mu}^{-1}, \tag{4.1}$$
    where we denoted by $K$ the concentration (i.e. precision) matrix of $p(s, a, \mu)$. We may interpret $(a, \mu)$ as controlling how far the particular states $[s, a, \mu]$ are from their target set-point of $[0, 0, 0]$, where the error is weighted by the precision matrix $K$. See figure 9. (Note that we could choose any other set-point by translating the frame of reference or, equivalently, choosing a Gaussian steady state centred away from zero.) In other words, there is a cost associated with how far away $s, a, \mu$ are from the origin, and this cost is weighted by the precision matrix, which derives from the stationary covariance of the steady state. In summary, the expected internal and active states can be seen as performing multivariate stochastic control, where the matrix $K$ encodes control gains. From a biologist's perspective, this corresponds to a simple instance of homeostatic regulation: maintaining physiological variables within their preferred range.
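    A minimal numerical sketch of (4.1) follows (Python/NumPy); the four-dimensional precision matrix, diagonally dominant with the blanket sparsity between $\eta$ and $\mu$, is our own toy choice. Minimizing the quadratic form over $(a, \mu)$ recovers the conditional expectations, i.e. the expected autonomous states.

        import numpy as np

        # indices: eta = 0, s = 1, a = 2, mu = 3; Pi[0, 3] = 0 is the blanket sparsity
        Pi = np.array([[3., 1., 1., 0.],
                       [1., 3., .5, .8],
                       [1., .5, 3., .6],
                       [0., .8, .6, 3.]])
        Sigma = np.linalg.inv(Pi)
        K = np.linalg.inv(Sigma[1:, 1:])     # precision of p(s, a, mu), as in (4.1)

        def autonomous(s):
            # minimize [s, a, mu] K [s, a, mu]^T over (a, mu): set the gradient to zero
            return np.linalg.solve(K[1:, 1:], -K[1:, 0] * s)

        s0 = 1.0
        print(autonomous(s0))
        # matches Sigma_{as} Sigma_s^{-1} s and Sigma_{mu s} Sigma_s^{-1} s:
        print(Sigma[2, 1] / Sigma[1, 1] * s0, Sigma[3, 1] / Sigma[1, 1] * s0)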
    Figure 9.

    Figure 9. Stochastic control. This figure plots a sample path of the system's particular states after it experiences a surprising sensory state. This is the same sample path as shown in figure 8a; however, here the link with stochastic control is easier to see. Indeed, it looks as if active states (in red) are actively compensating for sensory states (in green): rises in active states lead to plunges in sensory states and vice versa. Note the initial rise in active states to compensate for the initial perturbation in the sensory states. Furthermore, active states follow a similar trajectory to sensory states, with a slight delay, which can be interpreted as a reaction time [78]. In fact, the correspondence between sensory and active states is a consequence of the solenoidal flow—see figure 8a. The damped oscillations as the particular states approach their target value of 0 (in grey) are analogous to those found in basic implementations of stochastic control, e.g. [79, fig. 4.9]. (Online version in colour.)

    (c) Stochastic control in an extended state-space

    More sophisticated control methods, such as PID (proportional-integral-derivative) control [77,80], involve controlling a process and its higher orders of motion (e.g. integral or derivative terms). So how can we relate the dynamics of autonomous states to these more sophisticated control methods? The basic idea involves extending the sensory state-space so as to replace the sensory process $s_t$ by its various orders of motion $\tilde{s}_t = (s_t^{(0)}, \ldots, s_t^{(n)})$ (integral, position, velocity, jerk etc., up to order $n$). To find these orders of motion, one must solve the stochastic realization problem.

    (i) The stochastic realization problem

    Recall that the sensory process $s_t$ is a stationary stochastic process (with a Gaussian steady state). The following is a central problem in stochastic systems theory: given a stationary stochastic process $s_t$, find a Markov process $\tilde{s}_t$, called the state process, and a function $f$ such that

    $$s_t = f(\tilde{s}_t) \quad \text{for all } t. \tag{4.2}$$

    Moreover, find an Itô stochastic differential equation whose unique solution is the state process $\tilde{s}_t$. The problem of characterizing the family of all such representations is known as the stochastic realization problem [81].

    What kind of processes $s_t$ can be expressed as a function of a Markov process (4.2)?

    There is a rather comprehensive theory of stochastic realization for the case where $s_t$ is a Gaussian process (which occurs, for example, when $x_t$ is a Gaussian process). This theory expresses $s_t$ as a linear map of an Ornstein–Uhlenbeck process [39,82,83]. The idea is as follows: as a mean-zero Gaussian process, $s_t$ is completely determined by its autocovariance function $C(t - r) = \mathbb{E}[s_t s_r^\top]$, which by stationarity only depends on $|t - r|$. It is well known that any mean-zero stationary Gaussian process with exponentially decaying autocovariance function is an Ornstein–Uhlenbeck process (a result sometimes known as Doob's theorem) [39,84–86]. Thus, if $C$ equals a finite sum of exponentially decaying functions, we can express $s_t$ as a linear function of several nested Ornstein–Uhlenbeck processes, i.e. as an integrator chain from control theory [87,88]:

    $$\begin{aligned} s_t &= f(s_t^{(0)}) \\ ds_t^{(0)} &= f_0(s_t^{(0)}, s_t^{(1)})\,dt + \varsigma_0\,dW_t^{(0)} \\ ds_t^{(1)} &= f_1(s_t^{(1)}, s_t^{(2)})\,dt + \varsigma_1\,dW_t^{(1)} \\ &\;\;\vdots \\ ds_t^{(n-1)} &= f_{n-1}(s_t^{(n-1)}, s_t^{(n)})\,dt + \varsigma_{n-1}\,dW_t^{(n-1)} \\ ds_t^{(n)} &= f_n(s_t^{(n)})\,dt + \varsigma_n\,dW_t^{(n)}. \end{aligned} \tag{4.3}$$

    In this example, $f, f_i$ are suitably chosen linear functions, the $\varsigma_i$ are matrices and the $W^{(i)}$ are standard Brownian motions. Thus, we can see $s_t$ as the output of a continuous-time hidden Markov model, whose (hidden) states $s_t^{(i)}$ encode its various orders of motion: position, velocity, jerk etc. These are known as generalized coordinates of motion in the Bayesian filtering literature [89–91]. See figure 10.

    Figure 10.

    Figure 10. Continuous-time hidden Markov model. This figure depicts (4.3) in a graphical format, as a Bayesian network [3,31]. The encircled variables are random variables—the processes indexed at an arbitrary sequence of subsequent times $t_1 < t_2 < \cdots < t_9$. The arrows represent relationships of causality. In this hidden Markov model, the (hidden) state process $\tilde{s}_t$ is given by an integrator chain—i.e. nested stochastic differential equations $s_t^{(0)}, s_t^{(1)}, \ldots, s_t^{(n)}$. These processes $s_t^{(i)}, i \geq 0$, can respectively be seen as encoding the position, velocity, jerk etc. of the process $s_t$.
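    A minimal simulation of an integrator chain (4.3) with $n = 1$ and linear $f, f_i$ can be sketched as follows (Python/NumPy; all coefficients are ours, chosen only to make the chain stable and stationary).

        import numpy as np

        rng = np.random.default_rng(2)
        # two nested Ornstein-Uhlenbeck processes: s^(1) decays to zero and drives s^(0);
        # the observed process is s_t = f(s^(0)_t) with f the identity
        dt, n = 1e-3, 100_000
        s0 = s1 = 0.0
        path = np.empty(n)
        for k in range(n):
            s1 += -s1 * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()        # ds^(1)
            s0 += (s1 - s0) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal()  # ds^(0)
            path[k] = s0     # the pair (s^(0), s^(1)) is Markov; s^(0) alone is not
        print(path.mean(), path.std())   # stationary, mean-zero Gaussian statistics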

    More generally, the state process $\tilde{s}_t$ and the function $f$ need not be linear, which enables one to realize nonlinear, non-Gaussian processes $s_t$ [89,92,93]. Technically, this follows as Ornstein–Uhlenbeck processes are the only stationary Gaussian Markov processes. Note that stochastic realization theory is not as well developed in this general case [81,89,93–95].

    (ii) Stochastic control of integrator chains

    Henceforth, we assume that we can express $s_t$ as a function of a Markov process $\tilde{s}_t$ (4.2). Inserting (4.2) into (4.1), we now see that the expected autonomous states minimize how far they themselves and $f(\tilde{s}_t)$ are from their target value of zero:

    $$(a_t, \mu_t) \text{ minimizes } (a, \mu) \mapsto [f(\tilde{s}_t), a, \mu]\, K \begin{bmatrix} f(\tilde{s}_t) \\ a \\ \mu \end{bmatrix}. \tag{4.4}$$

    Furthermore, if the state process $\tilde{s}_t$ can be expressed as an integrator chain, as in (4.3), then we can interpret expected active and internal states as controlling each order of motion $s_t^{(i)}$. For example, if $f$ is linear, these processes control each order of motion $s_t^{(i)}$ towards its target value of zero.

    (iii) PID-like control

    PID control is a well-known control method in engineering [77,80]. More than 90% of controllers in engineered systems implement either PID or PI (no derivative) control. The goal of PID control is to keep a signal $s_t^{(1)}$, its integral $s_t^{(0)}$ and its derivative $s_t^{(2)}$ close to a pre-specified target value [77].

    This turns out to be exactly what happens here when we consider the stochastic control of an integrator chain (4.4) with three orders of motion ($n = 2$). When $f$ is linear, expected autonomous states control the integral, proportional and derivative processes $s_t^{(0)}, s_t^{(1)}, s_t^{(2)}$ towards their target value of zero. Furthermore, from $f$ and $K$ one can derive integral, proportional and derivative gains, which penalise deviations of $s_t^{(0)}, s_t^{(1)}, s_t^{(2)}$, respectively, from their target value of zero. Crucially, these control gains are simple by-products of the steady-state density and the stochastic realization problem.

    Why restrict ourselves to PID control when stochastic control of integrator chains is available? It turns out that when sensory states $s_t$ are expressed as a function of an integrator chain (4.3), one may get away with controlling an approximation of the true (sensory) process, obtained by truncating high orders of motion, as these have less effect on the dynamics; knowing when this truncation is warranted is a problem in approximation theory. This may explain why integral feedback control ($n = 0$), PI control ($n = 1$) and PID control ($n = 2$) are the most ubiquitous control methods in engineering applications. However, when simulating biological control—usually with highly nonlinear dynamics—it is not uncommon to consider generalized motion to fourth ($n = 4$) or sixth ($n = 6$) order [92,96].

    It is worth mentioning that PID control has been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as bacterial chemotaxis and robust homeostatic algorithms in biochemical networks [77,97,98]. We suggest that this kind of behaviour emerges in Markov blankets at non-equilibrium steady state. Indeed, stationarity means that autonomous states will look as if they respond adaptively to external perturbations to preserve the steady state, and we can identify these dynamics as implementations of various forms of stochastic control (including PID-like control).

    5. Discussion

    In this paper, we considered the consequences of a boundary mediating interactions between states internal and external to a system. On unpacking this notion, we found that the states internal to a Markov blanket look as if they perform variational Bayesian inference, optimizing beliefs about their external counterparts. When subdividing the blanket into sensory and active states, we found that autonomous states perform active inference and various forms of stochastic control (i.e. generalizations of PID control).

    (a) Interacting Markov blankets

    The sort of inference we have described could be nuanced by partitioning the external state-space into several systems that are themselves Markov blankets (such as Markov blankets nested at several different scales [1]). From the perspective of internal states, this leads to a more interesting inference problem, with a more complex generative model. It may be that the distinction between the sorts of systems we generally think of as engaging in cognitive, inferential dynamics [99] and simpler systems rests upon the level of structure of the generative models (i.e. steady-state densities) that describe their inferential dynamics.

    (b) Temporally deep inference

    This distinction may speak to a straightforward extension of the treatment on offer, from simply inferring an external state to inferring the trajectories of external states. This may be achieved by representing the external process in terms of its higher orders of motion, by solving the stochastic realization problem. By repeating the analysis above, internal states may be seen as inferring the position, velocity, jerk etc. of the external process, consistent with temporally deep inference in the sense of a Bayesian filter [91] (a special case of which is the extended Kalman–Bucy filter [100]).

    (c) Bayesian mechanics in non-Gaussian steady states

    The treatment in this paper extends easily to non-Gaussian steady states, in which internal states appear to perform approximate Bayesian inference over external states. Indeed, any arbitrary (smooth) steady-state density may be approximated by a Gaussian density at one of its modes using a so-called Laplace approximation. This Gaussian density affords a synchronization map in closed form⁴ that carries the expected internal state to an approximation of the expected external state. It follows that the system can be seen as performing approximate Bayesian inference over external states—precisely, an inferential scheme known as variational Laplace [101]. We refer the interested reader to a worked-out example involving two sparsely coupled Lorenz systems [30]. Note that variational Laplace has been proposed as an implementation of various cognitive processes in biological systems [25,52,58], accounting for several features of the brain's functional anatomy and neural message passing [53,70,99,102,103].

    (d) Modelling real systems

    The simulations presented here are as simple as possible and are intended to illustrate general principles that apply to all stationary processes with a Markov blanket (3.1). These principles have been used to account for synthetic data arising in more refined (and more specific) simulations of an interacting particle system [27] and synchronization between two sparsely coupled stochastic Lorenz systems [30]. Clearly, an outstanding challenge is to account for empirical data arising from more interesting and complex structures. To do this, one would have to collect time-series from an organism’s internal states (e.g. neural activity), its surrounding external states, and its interface, including sensory receptors and actuators. Then, one could test for conditional independence between internal, external and blanket states (3.1) [104]. One might then test for the existence of a synchronization map (using lemma 2.3). This speaks to modelling systemic dynamics using stochastic processes with a Markov blanket. For example, one could learn the volatility, solenoidal flow and steady-state density in a stochastic differential equation (3.2) from data, using supervised learning [105].
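    As a sketch of the conditional-independence test suggested above, under a Gaussian assumption one can estimate the precision matrix from data and inspect its $(\eta, \mu)$ entry; note that, in practice, the samples would be autocorrelated time-series rather than the i.i.d. draws used here, and the threshold below is an arbitrary choice of ours.

        import numpy as np

        rng = np.random.default_rng(3)
        Pi_true = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])  # blanket sparsity (2.3)
        X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(Pi_true), size=50_000)

        Pi_hat = np.linalg.inv(np.cov(X.T))     # empirical precision matrix
        # partial correlation of eta and mu given b; near zero supports eta ⊥ mu | b
        rho = -Pi_hat[0, 2] / np.sqrt(Pi_hat[0, 0] * Pi_hat[2, 2])
        print(abs(rho) < 0.05)                  # True, up to sampling error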

    6. Conclusion

    This paper outlines some of the key relationships between stationary processes, inference and control. These relationships rest upon partitioning the world into those things that are internal or external to a (statistical) boundary, known as a Markov blanket. When equipped with dynamics, the expected internal states appear to engage in variational inference, while the expected active states appear to be performing active inference and various forms of stochastic control.

    The rationale behind these findings is rather simple: if a Markov blanket derives from a steady-state density, the states of the system will look as if they are responding adaptively to external perturbations in order to recover the steady state. Conversely, well-known methods used to build adaptive systems implement the same kind of dynamics, implicitly so that the system maintains a steady state with its environment.

    Footnotes

    1 Note that $\Pi_\eta, \Pi_\mu$ are invertible, as principal submatrices of a positive-definite matrix.

    2 This mapping was derived independently of our work in [36, §3.2].

    3 Replacing $\mu^{-\dagger}(\mu)$ by any other element of (2.6) would lead to the same synchronization map, provided that the conditions of lemma 2.3 are satisfied.

    4 Another option is to empirically fit a synchronization map to data [27].

    Data accessibility

    All data and numerical simulations can be reproduced with code freely available at https://github.com/conorheins/bayesian-mechanics-sdes.

    Authors' contributions

    Conceptualization: L.D.C., K.F., C.H., G.A.P. Formal analysis: L.D.C., K.F., G.A.P. Software: L.D.C., C.H. Supervision: K.F., G.A.P. Writing-original draft: L.D.C. Writing-review and editing: K.F., C.H., G.A.P. All authors gave final approval for publication and agree to be held accountable for the work performed therein.

    Competing interests

    We have no competing interests.

    Funding

    L.D. is supported by the Fonds National de la Recherche, Luxembourg (Project code: 13568875). This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). K.F. was a Wellcome Principal Research Fellow (Ref: 088130/Z/09/Z). C.H. is supported by the U.S. Office of Naval Research (N00014-19-1-2556). The work of G.A.P. was partially funded by the EPSRC, grant no. EP/P031587/1, and by JPMorgan Chase & Co. Any views or opinions expressed herein are solely those of the authors listed, and may differ from the views and opinions expressed by JPMorgan Chase & Co. or its affiliates. This material is not a product of the Research Department of J.P. Morgan Securities LLC. This material does not constitute a solicitation or offer in any jurisdiction.

    Acknowledgements

    L.D. would like to thank Kai Ueltzhöffer, Toby St Clere Smithe and Thomas Parr for interesting discussions. We are grateful to our two anonymous reviewers for feedback which substantially improved the manuscript.

    Appendix A. Existence of synchronization map: proof

    We prove lemma 2.3.

    Proof.

    (i) ⟺ (ii) follows by definition of a function.

    (ii) ⟺ (iii) is as follows:

    $$\begin{aligned} &\forall b_1, b_2 \in B : \mu(b_1) = \mu(b_2) \Rightarrow \eta(b_1) = \eta(b_2) \\ \iff\ &\forall b_1, b_2 \in B : \Sigma_{\mu b}\Sigma_b^{-1}b_1 = \Sigma_{\mu b}\Sigma_b^{-1}b_2 \Rightarrow \Sigma_{\eta b}\Sigma_b^{-1}b_1 = \Sigma_{\eta b}\Sigma_b^{-1}b_2 \\ \iff\ &\forall b \in B : \Sigma_{\mu b}\Sigma_b^{-1}b = 0 \Rightarrow \Sigma_{\eta b}\Sigma_b^{-1}b = 0 \\ \iff\ &\ker \Sigma_{\mu b} \subseteq \ker \Sigma_{\eta b}. \end{aligned}$$

    (iii) ⟺ (iv): from [106, §0.7.3], using the Markov blanket condition (2.3), we can verify that

    $$\Pi_\mu\Sigma_{\mu b} = -\Pi_{\mu b}\Sigma_b \quad \text{and} \quad \Pi_\eta\Sigma_{\eta b} = -\Pi_{\eta b}\Sigma_b.$$

    Since $\Pi_\mu, \Pi_\eta, \Sigma_b$ are invertible, we deduce

    $$\ker \Sigma_{\mu b} \subseteq \ker \Sigma_{\eta b} \iff \ker \Pi_\mu\Sigma_{\mu b} \subseteq \ker \Pi_\eta\Sigma_{\eta b} \iff \ker \Pi_{\mu b}\Sigma_b \subseteq \ker \Pi_{\eta b}\Sigma_b \iff \ker \Pi_{\mu b} \subseteq \ker \Pi_{\eta b}. \qquad \blacksquare$$
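    The identities used in the last step are easy to check numerically; the following Python/NumPy snippet does so on a toy precision matrix of ours (diagonally dominant, with the blanket sparsity (2.3)).

        import numpy as np

        Pi = np.array([[3., .4, 1., 0., 0.],
                       [.4, 3., .5, 0., 0.],
                       [1., .5, 3., .8, .6],
                       [0., 0., .8, 3., .2],
                       [0., 0., .6, .2, 3.]])   # eta = 0-1, b = 2, mu = 3-4
        Sigma = np.linalg.inv(Pi)
        e, b, m = slice(0, 2), slice(2, 3), slice(3, 5)
        print(np.allclose(Pi[m, m] @ Sigma[m, b], -Pi[m, b] @ Sigma[b, b]))  # True
        print(np.allclose(Pi[e, e] @ Sigma[e, b], -Pi[e, b] @ Sigma[b, b]))  # True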

    Appendix B. The Helmholtz decomposition

    We consider a diffusion process $x_t$ on $\mathbb{R}^d$ satisfying an Itô stochastic differential equation (SDE) [39,107,108]

    $$dx_t = f(x_t)\,dt + \varsigma(x_t)\,dW_t, \tag{B1}$$

    where $W_t$ is an $m$-dimensional standard Brownian motion (a.k.a. Wiener process) [38,39] and $f : \mathbb{R}^d \to \mathbb{R}^d$, $\varsigma : \mathbb{R}^d \to \mathbb{R}^{d \times m}$ are smooth functions satisfying, for all $x, y \in \mathbb{R}^d$:

    Bounded, linear growth condition: $|f(x)| + |\varsigma(x)| \leq K(1 + |x|)$;

    Lipschitz condition: $|f(x) - f(y)| + |\varsigma(x) - \varsigma(y)| \leq K|x - y|$;

    for some constant $K$, where $|\varsigma|^2 = \sum_{ij}|\varsigma_{ij}|^2$. These are standard regularity conditions that ensure the existence and uniqueness of a strong solution to the SDE (B1) [108, theorem 5.2.1].

    We now recall an important result from the theory of stationary diffusion processes, known as the Helmholtz decomposition. It consists of splitting the dynamics into time-reversible (i.e. dissipative) and time-irreversible (i.e. conservative) components. The importance of this result in non-equilibrium thermodynamics was originally recognized by Graham in 1977 [109], and it has been of great interest in the field since [39,110–112]. Furthermore, the Helmholtz decomposition is widely used in statistical machine learning to generate Monte-Carlo sampling schemes [39,73,113–116].

    Lemma B.1. (Helmholtz decomposition)

    For a diffusion process (B1) and a smooth probability density p>0, the following are equivalent:

    (i)

    p is a steady state for xt.

    (ii)

    We can write the drift as

    $$f = f_{\mathrm{rev}} + f_{\mathrm{irrev}}, \qquad f_{\mathrm{rev}} := \Gamma\nabla\log p + \nabla\cdot\Gamma, \qquad f_{\mathrm{irrev}} := Q\nabla\log p + \nabla\cdot Q, \tag{B2}$$

    where $\Gamma = \varsigma\varsigma^\top/2$ is the diffusion tensor and $Q = -Q^\top$ is a smooth antisymmetric matrix field. $\nabla\cdot$ denotes the divergence of a matrix field, defined as $(\nabla\cdot Q)_i := \sum_j (\partial/\partial x_j)Q_{ij}$.

    Furthermore, $f_{\mathrm{rev}}$ is invariant under time-reversal, while $f_{\mathrm{irrev}}$ changes sign under time-reversal.

    In the Helmholtz decomposition of the drift (B2), the diffusion tensor $\Gamma$ mediates the dissipative flow, which flows towards the modes of the steady-state density but is counteracted by the random fluctuations $W_t$, so that the system's distribution remains unchanged—together these form the time-reversible part of the dynamics. In contrast, $Q$ mediates the solenoidal flow—whose direction is reversed under time-reversal—which consists of conservative (i.e. Hamiltonian) dynamics that flow on the level sets of the steady state. See figure 11 for an illustration. Note that the terms time-reversible and time-irreversible are meant in a probabilistic sense: time-reversibility denotes invariance under time-reversal. This is opposite to reversible and irreversible in the classical physics sense, which respectively mean energy-preserving (i.e. conservative) and entropy-creating (i.e. dissipative).

    Figure 11.

    Figure 11. Helmholtz decomposition. (a) A sample trajectory of a two-dimensional diffusion process (B1) on a heat map of the (Gaussian) steady-state density. (b) The Helmholtz decomposition of the drift into time-reversible and time-irreversible parts: the time-reversible part of the drift flows towards the peak of the steady-state density, while the time-irreversible part flows along the contours of the probability distribution. The lower panels plot sample paths of the time-reversible (c) and time-irreversible (d) parts of the dynamics. Purely conservative dynamics (d) are reminiscent of the trajectories of massive bodies (e.g. planets) whose random fluctuations are negligible, as in Newtonian mechanics. The lower panels help illustrate the meaning of time-irreversibility: if we were to reverse time (cf. (B3)), the trajectories of the time-reversible process would be, on average, no different, while the trajectories of the time-irreversible process would flow, say, clockwise instead of counterclockwise, which would clearly be distinguishable. Here, the full process (a) is a combination of both dynamics. As we can see, the time-reversible part affords the stochasticity, while the time-irreversible part characterizes non-equilibria and the accompanying wandering behaviour that characterizes life-like systems [11,117]. (Online version in colour.)
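    In the linear-Gaussian case, the decomposition can be checked numerically: for $dx_t = Bx_t\,dt + \varsigma\,dW_t$ at steady state $\mathcal{N}(0, \Sigma)$, (B2) gives $B = -(\Gamma + Q)\Pi$, so $\Gamma$ and $Q$ are the symmetric and antisymmetric parts of $-B\Sigma$. The following Python/NumPy sketch verifies this with the matrices of remark 3.2 (a check of ours, not part of the original proof).

        import numpy as np

        Pi = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])
        Sigma = np.linalg.inv(Pi)
        Gamma = 0.5 * np.eye(3)                        # varsigma = Id, so Gamma = Id/2
        Q = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
        B = -(Gamma + Q) @ Pi                          # drift matrix

        M = -B @ Sigma
        print(np.allclose(0.5 * (M + M.T), Gamma))     # time-reversible part
        print(np.allclose(0.5 * (M - M.T), Q))         # time-irreversible part
        # stationarity: the Lyapunov equation B Sigma + Sigma B^T + 2 Gamma = 0
        print(np.allclose(B @ Sigma + Sigma @ B.T + 2 * Gamma, np.zeros((3, 3))))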

    Proof.

    • '⟹': It is well known that when $x_t$ is stationary at $p$, its time-reversal is also a diffusion process that solves the following Itô SDE [118]:

      $$dx_t^- = f^-(x_t^-)\,dt + \varsigma(x_t^-)\,dW_t, \qquad f^- := -f + p^{-1}\nabla\cdot(2\Gamma p). \tag{B3}$$

      This enables us to write the drift $f$ as a sum of two terms: one that is invariant under time-reversal, another that changes sign under time-reversal:

      $$f = \frac{f + f^-}{2} + \frac{f - f^-}{2} =: f_{\mathrm{rev}} + f_{\mathrm{irrev}}.$$

      It is straightforward to identify the time-reversible term:

      $$f_{\mathrm{rev}} = p^{-1}\nabla\cdot(\Gamma p) = p^{-1}\Gamma\nabla p + \nabla\cdot\Gamma = \Gamma\nabla\log p + \nabla\cdot\Gamma. \tag{B4}$$

      For the remaining term, we first note that the steady state $p$ solves the stationary Fokker–Planck equation [39,119]

      $$\nabla\cdot(-fp + \nabla\cdot(\Gamma p)) = 0.$$

      Decomposing the drift into time-reversible and time-irreversible parts, we obtain that the time-irreversible part produces divergence-free (i.e. conservative) flow w.r.t. the steady-state distribution:

      $$\nabla\cdot(f_{\mathrm{irrev}}\,p) = 0.$$

      Now recall that any (smooth) divergence-free vector field is the divergence of a (smooth) antisymmetric matrix field $A = -A^\top$ [109,110,120]:

      $$f_{\mathrm{irrev}}\,p = \nabla\cdot A.$$

      We define a new antisymmetric matrix field $Q := p^{-1}A$. It follows from the product rule for divergences that we can rewrite the time-irreversible drift as required:

      $$f_{\mathrm{irrev}} = Q\nabla\log p + \nabla\cdot Q.$$

    • '⟸': From (B4), we can rewrite the time-reversible part of the drift as

      $$f_{\mathrm{rev}} = p^{-1}\nabla\cdot(\Gamma p). \tag{B5}$$

      In addition, we define the auxiliary antisymmetric matrix field $A := pQ$ and use the product rule for divergences to simplify the expression of the time-irreversible part:

      $$f_{\mathrm{irrev}} = p^{-1}\nabla\cdot A.$$

      Note that

      $$\nabla\cdot(f_{\mathrm{irrev}}\,p) = \nabla\cdot(\nabla\cdot A) = 0,$$

      as the matrix field $A$ is smooth and antisymmetric. It follows that the distribution $p$ solves the stationary Fokker–Planck equation:

      $$\nabla\cdot(-fp + \nabla\cdot(\Gamma p)) = \nabla\cdot(-f_{\mathrm{rev}}\,p - f_{\mathrm{irrev}}\,p + \nabla\cdot(\Gamma p)) = -\nabla\cdot(f_{\mathrm{irrev}}\,p) = 0. \qquad \blacksquare$$

    Appendix C. Free energy computations

    The free energy reads (3.6):

    $$F(b, \mu) = D_{\mathrm{KL}}[q_\mu(\eta)\,\|\,p(\eta \mid b)] - \log p(b, \mu).$$

    Recalling from (2.4) and (3.3) that $q_\mu(\eta)$ and $p(\eta \mid b)$ are Gaussian, the KL divergence between multivariate Gaussians is well known:

    $$q_\mu(\eta) = \mathcal{N}(\eta;\ \sigma(\mu),\ \Pi_\eta^{-1}) \ \text{and}\ p(\eta \mid b) = \mathcal{N}(\eta;\ \eta(b),\ \Pi_\eta^{-1}) \implies D_{\mathrm{KL}}[q_\mu(\eta)\,\|\,p(\eta \mid b)] = \tfrac{1}{2}(\sigma(\mu) - \eta(b))^\top\,\Pi_\eta\,(\sigma(\mu) - \eta(b)).$$

    Furthermore, we can compute the log-partition term:

    $$-\log p(b, \mu) = \tfrac{1}{2}[b, \mu]\,\Sigma_{b:\mu}^{-1}\begin{bmatrix} b \\ \mu \end{bmatrix} \quad \text{(up to a constant)}.$$

    Note that $\Sigma_{b:\mu}^{-1}$ is the inverse of a principal submatrix of $\Sigma$, which in general differs from $\Pi_{b:\mu}$, a principal submatrix of $\Pi$. Finally,

    $$F(b, \mu) = \tfrac{1}{2}(\sigma(\mu) - \eta(b))^\top\,\Pi_\eta\,(\sigma(\mu) - \eta(b)) + \tfrac{1}{2}[b, \mu]\,\Sigma_{b:\mu}^{-1}\begin{bmatrix} b \\ \mu \end{bmatrix} \quad \text{(up to a constant)}.$$

    Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.