Stochastic models of gene transcription with upstream drives: exact solution and sample path characterization

Gene transcription is a highly stochastic and dynamic process. As a result, the mRNA copy number of a given gene is heterogeneous both between cells and across time. We present a framework to model gene transcription in populations of cells with time-varying (stochastic or deterministic) transcription and degradation rates. Such rates can be understood as upstream cellular drives representing the effect of different aspects of the cellular environment. We show that the full solution of the master equation contains two components: a model-specific, upstream effective drive, which encapsulates the effect of the cellular drives (e.g., entrainment, periodicity or promoter randomness), and a downstream transcriptional Poissonian part, which is common to all models. Our analytical framework allows us to treat cell-to-cell and dynamic variability consistently, unifying several approaches in the literature. We apply the obtained solution to characterize several gene transcription models of experimental relevance, and to explain the influence on gene transcription of synchrony, stationarity, ergodicity, as well as the effect of time-scales and other dynamic characteristics of drives. We also show how the solution can be applied to the analysis of single-cell data, and to reduce the computational cost of sampling solutions via stochastic simulation.


Introduction
Gene transcription, the cellular mechanism through which DNA is copied into mRNA transcripts, is a complex, stochastic process involving small numbers of molecules [1]. As a result, the number of mRNA copies for most genes is highly heterogeneous over time within each cell, and across cells in a population [2-4]. Such fundamental randomness is biologically relevant: it underpins the cell-to-cell variability linked with phenotypic outcomes and cell decisions [5-9].
The full mathematical analysis of gene expression variability requires the solution of master equations. Given a gene transcription model, its master equation (ME) is a differential-difference equation that describes the evolution of P(n, t), the probability of having n mRNA molecules in a single cell at time t. However, MEs are problematic to solve, both analytically and numerically, due to the difficulties associated with discrete stochastic variables: the number of molecules n is an integer [10]. Most existing analytical solutions of the ME are specific to particular models and are typically obtained via the probability generating function under stationarity assumptions [11-15]. A few other solutions include the decaying time dependence describing the relaxation transient towards stationarity from a given initial distribution [16-19]. In the usual situation when analytical solutions are intractable, the first few moments of the distribution are approximated, usually at stationarity, although error bounds are difficult to obtain under closure schemes [20,21]. Alternatively, full stochastic simulations are used, yet the computational cost of sampling P(n, t) at each t is often impractical, and many methods lead to estimation bias in practice [22].

The emergence of accurate time-course measurements of mRNA counts in single cells [3,4,23-25] has revealed the high dynamic variability of gene expression at both the single-cell and population levels. This variability has several sources. Cells express genes heterogeneously [4,26], and hence models need to capture intercellular variability; but cells are also subjected to time-varying inputs of a stochastic and/or deterministic nature, either from their environment or from regulatory gene networks inside the cell. Therefore, standard ME models with stationary solutions, which also tacitly assume that gene expression is uncorrelated between cells, cannot fully capture such sources of variability.
Mathematically, ME models must be able to describe time-dependent gene transcription in single cells within an inhomogeneous population, i.e. they must allow a varying degree of synchrony and of cell-to-cell variability across the population. In addition, they must be able to account for non-stationary dynamic variability due to upstream biological drives, such as circadian rhythms and the cell cycle [27,28], external signalling [29], or stimulus-induced modulation or entrainment [30,31].
Recent techniques to model cell-to-cell correlations have used the marginalization of extrinsic components [32], mixed-effects models [33] or deterministic rate parameters [34]. Several of these results correspond to deterministic rates and are well known in queueing theory [35]. However, full solutions of the ME that capture temporal heterogeneity as well as variability in parameters, from the single-cell to the population level, are yet to be explored, and could help unravel, in conjunction with experiments, how the dynamics of upstream drives within a biological network affect gene transcription.
Here, we consider a simple, yet generic, framework for the solution of the ME of gene transcription and degradation for single cells under upstream drives, i.e. when the transcription and degradation parameters are time-dependent functions or stochastic variables. We show that the exact solution P(n, t) for such a model naturally decouples into two parts: a discrete transcriptional Poissonian downstream component, which is common to all transcription models of this kind, and a model-specific continuous component, which describes the dynamics of the parameters reflecting the upstream variation. To obtain the full solution P(n, t), one only needs to calculate the probability density for the model-specific upstream drive, which we show corresponds to a continuous variable satisfying a linear random differential equation directly related to traditional differential rate equations of chemical kinetics. Our results can thus be thought of as a generalization of the Poisson representation [36,37] (originally defined as an ansatz with constant rate parameters) to allow for both time-varying and stochastic rates in transcription-degradation systems. Our work also departs from the work of Jahnke & Huisinga [34] by allowing the presence of cell-to-cell variability (or uncertainty) in the dynamical drive.
Below, we present the full properties of the general solution, and we derive the relationship of the observable time-varying moments with the moments of the dynamic upstream component. Because our framework treats dynamic and population variability consistently, we clarify the different effects of variability in the drives by considering the Fano factor across the population and across time. To illustrate the utility of our approach, we present analytical and numerical analyses of several models from the literature, which are shown to simply correspond to different upstream drives, deterministic or stochastic. These examples highlight our modelling approach: a flexible solvable model with upstream dynamic variability, reflecting the generic hypothesis that experimentally observed outputs are usually driven by fluctuating, usually unmeasurable and uncertain, upstream intra- and extracellular signals. Our framework provides a means to characterize such upstream variability, dynamical and population-wide, and we provide examples of its use for computational biology and data analysis in relation to experiments.

To study gene expression in a single cell with time-dependent upstream drives, we consider the stochastic process N = {N_t : t ≥ 0}, where N_t is a discrete random variable describing the number of mRNA molecules in the cell. We look to obtain the probability mass function, P(n, t) ≔ Pr(N_t = n).
The mRNA copy number increases via transcription events and decreases via degradation events but, importantly, we acknowledge that the observed gene reflects the dynamic variability of intra- and extracellular processes, and that cells are heterogeneous. Thus, the transcription and degradation rates can depend on time and can be different for each cell (figure 1). To account for such variability, we describe the transcription and degradation rates as stochastic processes M and L, without specifying any functional form except requiring that M and L do not depend on the number of mRNA molecules already present. Deterministic time-varying transcription-degradation rates, with or without cell-to-cell correlations, are a particular case of this definition.
Following standard notation in the stochastic processes literature, M_t and L_t denote the random variables at time t. To simplify notation, however, we depart from standard notation and denote the sample paths (i.e. realizations) of M and L by {μ(t)}_{t≥0} and {λ(t)}_{t≥0}, respectively, thinking of them as particular functions of time describing the transcription and degradation rates under the changing cellular state and environmental conditions in an 'example' cell (figure 1). The sample paths of other random variables are denoted similarly, e.g. the sample paths of N_t are {n(t)}_{t≥0}.
The sample paths {μ(t)}_{t≥0} and {λ(t)}_{t≥0} represent cellular drives encapsulating the variability across time and across the population consistently. This formulation unifies several models in the literature, which implicitly or explicitly assume time-varying transcription and/or degradation processes [4,16,38-42], and which can be shown to correspond to particular types of dynamic upstream variability. In addition, the framework allows us to specify cell-to-cell correlations across the population, which we refer to as the 'degree of synchrony'. A population will be perfectly synchronous when the sample paths of the drives for every cell in the population are identical, i.e. if M_t and L_t have zero variance. If, however, transcription and/or degradation rates differ between cells, M_t and L_t themselves emerge from a probability density: the wider the density, the more asynchronous the cellular drives (figure 1).
Our aim is to obtain the probability distribution of the copy number N_t under upstream time-varying cellular drives M_t and L_t, themselves containing stochastic parameters reflecting cell-to-cell variability. We proceed in two steps: first, we obtain the solution for the perfectly synchronous system without cell-to-cell variability; then we consider the general asynchronous case.

Figure 1. Single-cell gene transcription under upstream drives. The transcription of each cell i takes place under particular cellular drives {μ_i(t)}_{t≥0} and {λ_i(t)}_{t≥0}, representing time-varying transcription and degradation rates. Both cellular drives are combined into the upstream effective drive {x_i(t)}_{t≥0}, which dictates the long-term probability distribution describing the stochastic gene expression {n_i(t)}_{t≥0} within each cell (2.10). When there is cell-to-cell variability in the population, the cellular drives are described by processes M and L leading to the upstream effective drive X. The probability distribution of the population corresponds to the mixture of the upstream process X and the Poissonian downstream transcriptional component, as given by (2.14). Increased synchrony in the population implies decreased ensemble variability of the random variables M_t, L_t, X_t and N_t. (Online version in colour.)

rsif.royalsocietypublishing.org J. R. Soc. Interface 14: 20160833

Perfectly synchronous population
As a first step towards the solution of the general case, consider a population of cells with perfectly synchronous transcription and degradation rate functions, M = {μ(t)}_{t≥0} and L = {λ(t)}_{t≥0}; i.e. the transcription and degradation processes are defined by the same sample path for the whole population, and the stochastic processes M and L have zero variance at all times (figure 1).
In the perfectly synchronous case, we have an immigration-death process with reaction diagram

∅ --μ(t)--> mRNA --λ(t)--> ∅,   (2.1)

and its ME is standard:

∂P/∂t = μ(t) [P(n − 1, t) − P(n, t)] + λ(t) [(n + 1) P(n + 1, t) − n P(n, t)],   (2.2)

where P(n, t | {μ(τ)}_{τ∈[0,t]}, {λ(τ)}_{τ∈[0,t]}) denotes the probability of having n mRNAs at time t for the given history of the cellular drives {μ(τ)}_{τ∈[0,t]} and {λ(τ)}_{τ∈[0,t]}. Using the probability generating function G(z, t) = Σ_{n≥0} P(n, t) z^n, we transform the ME (2.2) into

∂G/∂t = (z − 1) μ(t) G − (z − 1) λ(t) ∂G/∂z.

Without loss of generality, let us first consider an initial condition with n_0 mRNA molecules. Using the method of characteristics, we obtain the solution

G(z, t) = [1 + (z − 1) e^{−∫_0^t λ(τ) dτ}]^{n_0} exp[(z − 1) x(t)],   (2.3)

which is given in terms of

x(t) = ∫_0^t μ(τ) e^{−∫_τ^t λ(s) ds} dτ.   (2.4)

We will refer to the time-varying continuous function x(t) as the effective drive, as it integrates the effect of both cellular drives.
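As a numerical cross-check of (2.4) (a sketch in Python, not part of the original analysis; function and variable names are ours), the effective drive can be evaluated by direct quadrature and compared against the closed form x(t) = (μ/λ)(1 − e^{−λt}) that holds for constant rates:

```python
import math

def effective_drive(mu, lam, t, steps=2000):
    """x(t) = int_0^t mu(s) * exp(-int_s^t lam(u) du) ds  (eq. 2.4),
    evaluated by trapezoidal quadrature; mu and lam are callables."""
    dt = t / steps
    # L[i] = int_0^{i*dt} lam(u) du, accumulated by the trapezoid rule
    L = [0.0]
    for i in range(1, steps + 1):
        L.append(L[-1] + 0.5 * (lam((i - 1) * dt) + lam(i * dt)) * dt)
    total = L[-1]
    x = 0.0
    for i in range(steps + 1):
        w = dt / 2 if i in (0, steps) else dt  # trapezoid weights
        x += w * mu(i * dt) * math.exp(-(total - L[i]))
    return x

# constant rates: x(t) has the closed form (mu0/lam0) * (1 - exp(-lam0*t))
mu0, lam0, t1 = 10.0, 2.0, 1.5
x_num = effective_drive(lambda s: mu0, lambda s: lam0, t1)
x_exact = mu0 / lam0 * (1 - math.exp(-lam0 * t1))
```

For time-varying drives, the same routine applies with any positive, finite callables μ(t) and λ(t).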
Note that solution (2.3) can be rewritten as the product of two probability generating functions corresponding to a binomial and a Poisson distribution, respectively. Hence, for the perfectly synchronous case, the solution is given by

N_t | n_0 = (N_t^ic | n_0) + N_t^s,

where N_t^ic | n_0 is a binomial random variable with n_0 trials and success probability e^{−∫_0^t λ(τ) dτ}, and N_t^s is a Poisson random variable with parameter x(t). The physical interpretation of this breakdown is that N_t^ic describes the mRNA transcripts that were initially present in the cell and still remain at time t, whereas N_t^s describes the number of mRNAs transcribed since t = 0.
Since N_t^ic and N_t^s are independent, it is easy to read off the first two moments directly:

E[N_t | n_0] = n_0 e^{−∫_0^t λ(τ) dτ} + x(t)

and

Var[N_t | n_0] = n_0 e^{−∫_0^t λ(τ) dτ} (1 − e^{−∫_0^t λ(τ) dτ}) + x(t),

as well as the full distribution, given by the convolution of the binomial and Poisson components:

P(n, t | n_0) = Σ_{k=0}^{min(n, n_0)} (n_0 choose k) e^{−k ∫_0^t λ(τ) dτ} (1 − e^{−∫_0^t λ(τ) dτ})^{n_0 − k} · [x(t)^{n−k} / (n − k)!] e^{−x(t)}.   (2.8)
This mathematical form is well known when the rates are constant [37,43], and is a classical result in queueing theory [35]. We also remark that the solution with time-dependent rates (2.8) is the one-gene case of the main result in Jahnke & Huisinga [34].
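The convolution (2.8) is easy to verify numerically. The sketch below (our own illustration, with arbitrary parameter values) checks that it normalizes to one and reproduces the mean n_0 e^{−∫λ} + x(t):

```python
import math

def p_n_given_n0(n, n0, lam_int, x):
    """Convolution (2.8): binomial survival of the n0 initial transcripts
    (success prob. exp(-int_0^t lam)) convolved with Poisson(x) production."""
    p = math.exp(-lam_int)
    total = 0.0
    for k in range(min(n, n0) + 1):
        binom = math.comb(n0, k) * p**k * (1 - p)**(n0 - k)
        poiss = x**(n - k) / math.factorial(n - k) * math.exp(-x)
        total += binom * poiss
    return total

n0, lam_int, x = 5, 0.7, 3.2          # illustrative values (ours)
probs = [p_n_given_n0(n, n0, lam_int, x) for n in range(61)]
mean = sum(n * q for n, q in enumerate(probs))
mean_exact = n0 * math.exp(-lam_int) + x   # binomial mean + Poisson mean
```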
If the initial state is itself described by a random variable N_0 with its own probability distribution, we apply the law of total probability to obtain the solution in full generality as follows (see appendix A):

P(n, t) = Σ_{k=0}^{n} Pr(N_t^ic = k) Pr(N_t^s = n − k),

where N_t^s is distributed according to (2.5)-(2.7), and Pr(N_t^ic = k) is the mixture of the time-dependent binomial distribution (2.5) and the distribution of the initial condition N_0.

The initial transient 'burn in' period
For biologically realistic degradation rates {λ(t)}_{t≥0}, the contribution from the initial condition (N_t^ic) decreases exponentially. Hence, as time grows, the transcripts present at t = 0 degrade, and the population is expected to be composed of mRNAs transcribed after t = 0.
If the initial distribution of N_0 is not the stationary distribution of the ME (or, more generally, not equal to the attracting distribution of the ME, as defined in appendix A), there is an initial time dependence of P(n, t) lasting over a time scale T_ic (given by ∫_0^{T_ic} λ(τ) dτ ≈ 1), which corresponds to a 'burn-in' transient associated with the decay of the initial condition. We remark that the time dependence described in [16-19] corresponds only to this 'burn-in' transient (see also figure 7).
On the other hand, when the initial distribution of N_0 is the stationary distribution (or the attracting distribution) of the ME, the component containing the initial condition (N_t^ic) and the long-term component (N_t^s) balance each other at every point in time, maintaining stationarity (or the attracting distribution), as shown analytically in appendix A. In this work, we focus on the time dependence of P(n, t) induced through non-stationarity of the parameters and/or correlated behaviour of the cells within the population. Hence, for the remainder of the paper, we neglect the transient terms. Consequently, for perfectly synchronous cellular drives, the solution of the ME (2.2) is a Poisson random variable with time-dependent rate equal to the effective upstream drive x(t), i.e. [N_t | x(t)] ≈ [N_t^s | x(t)] ~ Poi(x(t)), with distribution

P(n, t | {μ(τ)}_{τ∈[0,t]}, {λ(τ)}_{τ∈[0,t]}) = x(t)^n e^{−x(t)} / n!,   (2.10)

which makes explicit the dependence on the history of the sample paths {μ(τ)}_{τ∈[0,t]} and {λ(τ)}_{τ∈[0,t]}, encapsulated in the value of the effective drive x(t) at time t. Indeed, from (2.4) it follows that the sample path {x(t)}_{t≥0} satisfies a first-order linear ordinary differential equation with time-varying coefficients:

dx/dt = μ(t) − λ(t) x,   (2.11)

which is the rate law for a chemical reaction with zeroth-order production at rate μ(t) and first-order degradation at rate λ(t) per mRNA molecule. For biologically realistic (i.e. positive and finite) cellular drives, x(t) is a continuous function.
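The effective drive can equally be obtained by integrating the rate law dx/dt = μ(t) − λ(t)x (2.11) directly. The sketch below (our illustration; the RK4 integrator and parameter values are ours) integrates the ODE for a sinusoidal transcription rate with λ = 1 and checks the long-time solution against the attracting sinusoid obtained analytically:

```python
import math

def rk4_effective_drive(mu, lam, x0, t_end, dt=1e-3):
    """Integrate the rate law dx/dt = mu(t) - lam(t)*x (eq. 2.11) with RK4."""
    f = lambda s, x: mu(s) - lam(s) * x
    x, s = x0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = f(s, x)
        k2 = f(s + dt / 2, x + dt / 2 * k1)
        k3 = f(s + dt / 2, x + dt / 2 * k2)
        k4 = f(s + dt, x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += dt
    return x

mu0, w = 8.0, 2.0
t_end = 20.0  # long enough for the burn-in transient to decay (lam = 1)
x_num = rk4_effective_drive(lambda s: mu0 * (1 + math.sin(w * s)),
                            lambda s: 1.0, 0.0, t_end)
# attracting solution: x(t) = mu0 + mu0/sqrt(1+w^2) * sin(w*t - arctan(w))
x_exact = mu0 + mu0 / math.sqrt(1 + w**2) * math.sin(w * t_end - math.atan(w))
```

After the burn-in transient, the Poisson rate of N_t at any time t is the value x(t) returned here.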

The general asynchronous case: cell-to-cell variability in the cellular drives
Consider now the general case where different sample paths for the cellular drives are possible, i.e. we allow explicitly for the transcription and degradation rates to vary from cell to cell. The cell population will then have some degree of asynchrony, so M_t and L_t have non-zero variance for at least some t ≥ 0. The transcription and degradation rates are then described by the stochastic processes M and L:

∅ --M_t--> mRNA --L_t--> ∅,   (2.12)

and the collection of all differential equations of the form (2.11) is represented formally by the random differential equation

dX_t/dt = M_t − L_t X_t.   (2.13)

Equations of this form appear in many sciences, and a large body of classical results allows us to determine f_{X_t}(x, t), the probability density function of the upstream process X_t [44-46]. Below, we use such results to obtain f_{X_t}(x, t) for biologically relevant models.

Note that, from equation (2.10) and the law of total probability, it follows that the probability mass function of the random variable N_t under cellular drives described by the random processes M and L is given by the Poisson mixture (or compound) distribution:

P_{X_t}(n, t) = ∫_0^∞ (x^n e^{−x} / n!) f_{X_t}(x, t) dx,   (2.14)

where the density f_{X_t}(x, t) of the effective drive X_t (to be determined) can be understood as a mixing density. The notation P_{X_t}(n, t) recalls explicitly the dependence of the solution on the density of X_t, but we drop this reference and use P(n, t) below to simplify notation. The problem of solving the full ME is thus reduced to finding the mixing density f_{X_t}(x, t). Note that, for synchronous drives, we have f_{X_t}(x, t) = δ(x(t) − x), where δ is the Dirac delta function, and (2.14) reduces to (2.10). Equation (2.14) also shows that there are two separate sources of variability in gene expression contributing to the distribution of N_t.
One source of variability is the Poisson nature of transcription and degradation, common to every model of the form considered here; the second source is the time variation or uncertainty in the cellular drives, encapsulated in the upstream process X t describing the 'degree of synchrony' between cells and/or their variability over time. In this sense, equation (2.14) connects with the concept of separable 'intrinsic' and 'extrinsic' components of gene expression noise pioneered by Swain et al. [47][48][49][50]. Yet rather than considering moments, the full distribution P(n, t) is separable into a model-dependent 'upstream component' given by f Xt ðx, tÞ, and a downstream transcriptional 'Poisson component' common to all models of this type.
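In practice, the Poisson mixture (2.14) is straightforward to evaluate by Monte Carlo: average the Poisson pmf over samples of X_t. A minimal sketch (our own illustration, with a toy uniform mixing density standing in for f_{X_t}):

```python
import math, random

random.seed(0)

def poisson_pmf_vec(x, n_max):
    """Poisson(x) pmf for n = 0..n_max, computed iteratively."""
    pmf = [math.exp(-x)]
    for n in range(1, n_max + 1):
        pmf.append(pmf[-1] * x / n)
    return pmf

def poisson_mixture_pmf(x_samples, n_max):
    """Estimate P(n, t) = E[ x^n e^{-x} / n! ] over samples of X_t (eq. 2.14)."""
    acc = [0.0] * (n_max + 1)
    for x in x_samples:
        for n, p in enumerate(poisson_pmf_vec(x, n_max)):
            acc[n] += p
    return [a / len(x_samples) for a in acc]

# toy mixing density for illustration: X_t ~ Uniform(2, 6), so E[X_t] = 4
xs = [random.uniform(2.0, 6.0) for _ in range(20000)]
pmf = poisson_mixture_pmf(xs, 40)
mean_n = sum(n * p for n, p in enumerate(pmf))  # close to E[X_t] = 4
```

The same routine applies whenever samples of the effective drive are available, e.g. from simulated sample paths of (2.13), which is far cheaper than full stochastic simulation of the discrete process.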

The effective upstream drive in gene transcription models
The generic model of gene transcription and degradation with time-dependent drives introduced above provides a unifying framework for several models previously considered in isolation. In this section, we exemplify the tools to obtain the density of the effective drive f Xt ðx, tÞ analytically or numerically through relevant examples.

Gene transcription under upstream drives with static randomness
In this first section, we consider models of gene transcription where the upstream drives are deterministic, yet with random parameters representing cell variability.
Random entrainment to upstream sinusoidal drives: random phase offset in transcription or degradation rates

Equation (2.13) can sometimes be solved directly to obtain f_{X_t}(x, t) from a transformation of the random variables M_t and L_t. We now show two such examples, where we explore the effect of entrainment of gene transcription and degradation to an upstream periodic drive [51].
First, consider a model of gene transcription of the form (2.12) with a transcription rate given by a sinusoidal function, where each cell has a random phase. This random entrainment (RE) model is a simple representation of a cell population with transcription entrained to an upstream rhythmic signal, yet with a random phase offset for each cell:

M_t ≔ μ [1 + sin(ωt + Φ)] and L_t ≔ 1,   (3.1)

where μ and ω are given constants and Φ is a (static) random variable describing cell-to-cell variability (or uncertainty).
Solving equation (2.13) in this case, we obtain

X_t = B + A sin(ωt + Φ*),

where B = μ, A = μ/√(1 + ω²) and Φ* = Φ − arctan ω. Inverting the sine with Φ* restricted to [−r, r], we obtain

f_{X_t}(x, t) = k(t) / [2r √(A² − (x − B)²)],   (3.2)

where k(t) ∈ {0, 1, 2} is the number of solutions of sin u = (x − B)/A for u ∈ (ωt − r, ωt + r). As the phase distribution of the drives becomes narrower, the upstream variability disappears: r → 0 ⇒ f_{X_t}(x, t) → δ((B + A sin ωt) − x). In this limit, all cells follow the entraining drive exactly, and P(n, t) becomes a Poisson distribution at all times. Figure 2 depicts f_{X_t} for r = 0 (no cell-to-cell phase variation, figure 2a) and for r = π/2 and r = π (increasingly wider uniform distributions of phases, figure 2b,c). The full distribution P(n, t) is obtained using (3.2) and (2.14).
Second, let us consider the same model of entrainment to an upstream sinusoidal signal with a random offset, but this time acting via the degradation rate:

M_t ≔ μ and L_t ≔ a + b sin(ωt + Φ),

where μ, a, b and ω are given constants, and Φ is a (static) random variable. Equation (2.13) can be solved approximately [51] and, for a phase offset uniformly distributed on [−r, r] with r ≤ π, the density of the effective drive takes the same form (3.2) as above.
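For the transcription-entrained RE model, the long-time effective drive of a cell with phase offset φ* is x(t) = B + A sin(ωt + φ*), with B = μ and A = μ/√(1 + ω²). The following Monte Carlo sketch (our illustration; parameter values are arbitrary) checks the ensemble mean of X_t against the closed form E[X_t] = B + A sin(ωt − arctan ω)·sin(r)/r for a uniform phase on [−r, r]:

```python
import math, random

random.seed(1)

# long-time RE response per cell (lam = 1): x(t; phi*) = B + A*sin(w*t + phi*)
mu0, w, r, t = 5.0, 1.0, math.pi / 2, 3.0
B, A = mu0, mu0 / math.sqrt(1 + w**2)
shift = math.atan(w)  # transcription phase lag absorbed into phi*

samples = [B + A * math.sin(w * t - shift + random.uniform(-r, r))
           for _ in range(100000)]
mc_mean = sum(samples) / len(samples)
# E[sin(c + U)] = sin(c) * sin(r)/r for U ~ Uniform(-r, r)
exact_mean = B + A * math.sin(w * t - shift) * math.sin(r) / r
```

A histogram of `samples` reproduces the arcsine-type density (3.2).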

Upstream Kuramoto promoters with varying degree of synchronization
As an illustrative computational example, we study a population of cells whose promoter strengths display a degree of synchronization across the population. To model this upstream synchronization, consider the Kuramoto promoter model, where the promoter strength of each cell i depends on an oscillatory phase θ_i(t), and cells are coupled via a Kuramoto model [52-54]. We then have a model of the form (2.12) with

M_t^{(i)} ≔ μ [1 + b sin θ_i(t)] and L_t ≔ 1.

Here μ and b are constants, and {θ_i(t)}_{i=1}^C are the phase variables for the C cells governed by the globally coupled Kuramoto model:

dθ_i/dt = ω_i + (K/C) Σ_{j=1}^C sin(θ_j − θ_i),

where K is the coupling parameter and the intrinsic frequency of each cell, ω_i, is drawn from the random distribution Ω ~ N(0, 0.05²). The Kuramoto model allows us to tune the degree of synchrony through the coupling K: for small K, the cells do not display synchrony, since they all have slightly different intrinsic frequencies; as K is increased, the population becomes more synchronized. This model is a simple representation of nonlinear synchronization processes in cell populations with intrinsic heterogeneity [55-58]. In figure 6a, we show how the sample paths change as the degree of synchrony increases, and we exemplify the use of (2.14) for the numerical solution of the gene expression of this model.
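A minimal simulation of the Kuramoto phases (our sketch; the Euler discretization and parameter values are ours, not from the original analysis) illustrates how the coupling K tunes the degree of synchrony, quantified by the order parameter R:

```python
import math, random

random.seed(2)

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of dtheta_i/dt = omega_i + (K/C) sum_j sin(theta_j - theta_i),
    using the equivalent mean-field form K*R*sin(psi - theta_i)."""
    C = len(theta)
    s = sum(math.sin(th) for th in theta)
    c = sum(math.cos(th) for th in theta)
    R, psi = math.hypot(s, c) / C, math.atan2(s, c)
    return [th + dt * (om + K * R * math.sin(psi - th))
            for th, om in zip(theta, omega)]

def order_parameter(theta):
    C = len(theta)
    return math.hypot(sum(math.sin(th) for th in theta),
                      sum(math.cos(th) for th in theta)) / C

C, dt, steps = 200, 0.05, 2000
omega = [random.gauss(0, 0.05) for _ in range(C)]
theta0 = [random.uniform(0, 2 * math.pi) for _ in range(C)]

theta_lo, theta_hi = theta0[:], theta0[:]
for _ in range(steps):
    theta_lo = kuramoto_step(theta_lo, omega, K=0.0, dt=dt)   # asynchronous
    theta_hi = kuramoto_step(theta_hi, omega, K=1.0, dt=dt)   # synchronized

r_lo, r_hi = order_parameter(theta_lo), order_parameter(theta_hi)
```

The per-cell promoter strength μ(1 + b sin θ_i(t)) then feeds into (2.13) and the Poisson mixture (2.14).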

Asynchronous transcription under stochastic multistate promoters
In the previous section, we obtained f_{X_t}(x, t) by capitalizing on the precise knowledge of the sample paths of M and L to solve (2.13) explicitly. In other cases, we can obtain f_{X_t}(x, t) by following the usual procedure of writing down an evolution equation for the probability density of an expanded state that is Markovian, followed by marginalization. More specifically, let the vector process Y prescribe the upstream drives, so that M = M(Y, t) and L = L(Y, t), and consider the expanded state (X_t, Y_t). Note that since Y is upstream, it prescribes X (and not vice versa). We can then write the evolution equation for the joint probability density f_{X_t,Y_t}(x, y, t):

∂f/∂t = −∂/∂x [(μ(y, t) − λ(y, t) x) f] + L_{Y_t}[f],   (3.6)

which follows from conservation of probability. In equation (3.6), the differential operator for X, which follows from (2.13), is the first jump moment [59] conditional upon Y_t = y (and hence upon M_t = μ(y, t) and L_t = λ(y, t)); the second term L_{Y_t}[·] is the infinitesimal generator of the upstream process. In particular, for continuous stochastic processes L_{Y_t}[·] is of Fokker-Planck type, and for Markov chains L_{Y_t}[·] is a transition-rate matrix. The desired density f_{X_t}(x, t) can then be obtained via marginalization.

Equation (3.6) can be employed to analyse the widely used class of transcription models with asynchronous, random promoter switching between discrete states, where each state has different transcription and degradation rates representing different levels of promoter activity due to, for example, transcription factor binding or chromatin remodelling [40]. A classic example is the random telegraph (RT) model, first used by Ko in 1991 [60] to explain cell-to-cell heterogeneity and bursty transcription (figure 3a).
In our framework, such random promoter switching can be understood as an upstream stochastic process driving transcription as follows. Let us assume that the promoter can attain D states s, and each state has constant transcription rate μ_s and constant degradation rate λ_s. The state of the promoter is described by a random process S = {S_t ∈ {1, 2, …, D} : t ≥ 0}, with sample paths denoted by {s(t)}_{t≥0}, and its evolution is governed by the D-state Markov chain with transition rate k_sr from state r to state s. The state of the promoter S_t = s prescribes that M_t = μ_s and L_t = λ_s. Hence, the sample paths of M and L are a succession of step functions with heights μ_s and λ_s, respectively, with transitions occurring at exponentially distributed random times.
As described above, we expand the state space of the cellular drives to include the promoter state, giving the joint process {X_t, S_t}. The evolution equation (3.6) then becomes a system of D coupled equations for the joint densities f_{X_t,S_t}(x, t, s), with the generator L_{Y_t}[·] given by the transition-rate matrix of the promoter Markov chain.

The random telegraph model (2 states)
Although the RT model has been solved by several methods [16,38,39], we briefly rederive its solution within the above framework to clarify its generalization to other promoter architectures.
Consider the standard RT model (figure 3a), with the promoter switching stochastically between the active state s_on = 1, with constant transcription rate μ_1 = μ, and the inactive state s_off = 0, where no transcription takes place, μ_0 = 0. The transition rates between the two states are k_10 = k_on and k_01 = k_off.
Without loss of generality, we assume a constant degradation rate λ(t) ≡ 1 and consider the normalized effective drive Z_t ≔ X_t/μ ∈ [0, 1]. At stationarity, it then follows [62] that

f_{Z_t}(z) = z^{k_on − 1} (1 − z)^{k_off − 1} / B(k_on, k_off),   (3.10)

where B(a, b) = Γ(a)Γ(b)/Γ(a + b) is the Beta function. In other words, at stationarity, the normalized effective drive is described by a Beta distribution: Z_t ~ Beta(k_on, k_off), ∀t. Using (3.10) and (2.14), we obtain that the full stationary solution is the Poisson-Beta mixture:

P(n) = (μ^n / n!) [B(k_on + n, k_off) / B(k_on, k_off)] ₁F₁(k_on + n; k_on + k_off + n; −μ),   (3.11)

where ₁F₁(a; b; z) is the confluent hypergeometric function [63].
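The Poisson-Beta mixture can also be evaluated numerically directly from (2.14). The sketch below (our illustration; quadrature scheme and parameter values are ours) integrates the Poisson pmf against the Beta(k_on, k_off) mixing density and checks the mean E[N] = μ k_on/(k_on + k_off):

```python
import math

def poisson_beta_pmf(mu, k_on, k_off, n, grid=4000):
    """P(n) = int_0^1 Poi(n; mu*z) * Beta(z; k_on, k_off) dz  (midpoint rule):
    the stationary Poisson-Beta mixture of the RT model."""
    logB = math.lgamma(k_on) + math.lgamma(k_off) - math.lgamma(k_on + k_off)
    total = 0.0
    for i in range(grid):
        z = (i + 0.5) / grid
        log_f = ((k_on - 1) * math.log(z) + (k_off - 1) * math.log(1 - z) - logB
                 + n * math.log(mu * z) - mu * z - math.lgamma(n + 1))
        total += math.exp(log_f) / grid
    return total

mu, k_on, k_off = 20.0, 2.0, 3.0      # illustrative values (ours)
pmf = [poisson_beta_pmf(mu, k_on, k_off, n) for n in range(120)]
mean_n = sum(n * p for n, p in enumerate(pmf))
# law of total expectation: E[N] = mu * k_on / (k_on + k_off) = 8
```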

The refractory promoter model (3 states)
In the standard RT model, the waiting times in each state are exponentially distributed. In recent years, time-course data have shown that the OFF time t_off does not conform to an exponential distribution, leading some authors to incorporate a second inactive (refractory) state, which needs to be cycled through before returning to the active state [42,64]. The net OFF time is then the sum of two exponentially distributed waiting times.
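The non-exponential character of the net OFF time is easy to check numerically (our sketch, with arbitrary rates): a sum of two exponential dwell times has squared coefficient of variation below 1, whereas a single exponential has CV² = 1.

```python
import random

random.seed(4)

# refractory model: OFF time = Exp(k1) dwell + Exp(k2) dwell
k1, k2 = 2.0, 0.5
t_off = [random.expovariate(k1) + random.expovariate(k2) for _ in range(200000)]

mean = sum(t_off) / len(t_off)
var = sum((t - mean) ** 2 for t in t_off) / len(t_off)
cv2 = var / mean ** 2
# exact: mean = 1/k1 + 1/k2 = 2.5; CV^2 = (1/k1^2 + 1/k2^2)/(1/k1 + 1/k2)^2 = 0.68
```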
In this refractory promoter model (figure 3b), the promoter switches through the states s_*, s_1 and s_2 with rates k_*, k_1 and k_2, respectively. Transcription takes place at constant rate μ only when the promoter is in the active state s_* and, without loss of generality, we assume a constant degradation rate λ(t) ≡ 1 for all states. This model is of the same form as (3.9), and is solved similarly.
Making the change of variables Z_t = λX_t/μ = X_t/μ and using the notation f_i(z, t) ≔ f_{Z_t,S_t}(z, t, s_i), we obtain a system of three coupled multistate Fokker-Planck-Kolmogorov equations of the form (3.6), one for each promoter state. At stationarity, the marginal density of the normalized effective drive is a combination of hypergeometric terms,

f_Z(z) = C_1 z^{k_1 − 1} ₂F₁(a_+^{(1)}, a_−^{(1)}; 1 + k_1 − k_2; z) + C_2 z^{k_2 − 1} ₂F₁(a_+^{(2)}, a_−^{(2)}; 1 − k_1 + k_2; z),

where the constants a_±^{(i)} depend on the switching rates and ₂F₁(a, b; c; z) is the Gauss hypergeometric function [63]. The full stationary solution P(n) is then obtained from (2.14). For a detailed derivation (including expressions for the integration constants C_1 and C_2), see appendix B.

Asynchronous multistate models with upstream promoter modulation
Finally, we consider a model of gene transcription that incorporates features of the models described in §§3.1 and 3.2. Such a situation is of biological interest and appears when individual cells exhibit correlated dynamics in response to upstream factors (e.g. changing environmental conditions, drives or stimulations), but still maintain asynchrony in internal processes, such as transcription factor binding [32,65].
To illustrate this concept, we consider the modulated random telegraph (MRT) model, a combination of the RE model (3.1) and the RT model (3.9), i.e. the promoter strength is modulated by an upstream sinusoidal drive with random phase Φ, as in the RE model, yet the promoter switches stochastically between active/inactive states with rates k_on and k_off, as in the RT model. In this model, the transcription rate is correlated across cells through the entrainment to the upstream sinusoidal drive, as a simple representation for, for example, circadian gene expression:

M_t ≔ R_t S_t with promoter strength R_t = μ [1 + sin(ωt + Φ)], and L_t ≔ 1,

where μ, ω > 0 are constants; Φ is the random phase across the cell population; and S = {S_t ∈ {0, 1} : t ≥ 0}, with exponential waiting times, describes the stochastic promoter switching (figure 4a).
The solution of this model follows from the RT probability density (3.10) conditioned on the random phase Φ, which prescribes the sample path {ρ(t; φ)}_{t≥0} of the promoter strength R. The resulting scaled Beta distribution

f_{X_t|Φ}(x, t | φ) = x^{k_on − 1} (ρ(t; φ) − x)^{k_off − 1} / [B(k_on, k_off) ρ(t; φ)^{k_on + k_off − 1}]

is then marginalized over the phase Φ to obtain the density f_{X_t}(x, t) of the effective drive. For instance, if the phases are normally distributed, Φ ~ N(0, σ²), we have (figure 4b)

f_{X_t}(x, t) = ∫ f_{X_t|Φ}(x, t | φ) [e^{−φ²/(2σ²)} / (σ√(2π))] dφ.

As σ → ∞, the population becomes asynchronous in the promoter strength, as well as in the state transitions, and the time dependence wanes (figure 4b).
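Numerically, the marginalization over the phase amounts to a one-dimensional quadrature for each value of x. The following sketch (our illustration; the promoter-strength form ρ(t; φ) = μ(1 + sin(ωt + φ)) and all parameter values are our assumptions) checks that the resulting marginal density integrates to one:

```python
import math

# MRT model: conditional on the phase, the effective drive follows a scaled
# Beta density on [0, rho(t; phi)], with (assumed form, for illustration)
# promoter strength rho(t; phi) = mu * (1 + sin(w*t + phi)).
mu, w, k_on, k_off, sigma, t = 10.0, 1.0, 2.0, 3.0, 0.3, 0.7
logB = math.lgamma(k_on) + math.lgamma(k_off) - math.lgamma(k_on + k_off)

def f_x_given_phi(x, phi):
    rho = mu * (1 + math.sin(w * t + phi))
    if not (0.0 < x < rho):
        return 0.0
    return math.exp((k_on - 1) * math.log(x) + (k_off - 1) * math.log(rho - x)
                    - logB - (k_on + k_off - 1) * math.log(rho))

def f_x(x, grid=400):
    """Marginal density of X_t: integrate the conditional density against the
    N(0, sigma^2) phase density, truncated at +-5 sigma (midpoint rule)."""
    lo, hi = -5 * sigma, 5 * sigma
    dphi = (hi - lo) / grid
    total = 0.0
    for i in range(grid):
        phi = lo + (i + 0.5) * dphi
        wgt = math.exp(-phi * phi / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        total += f_x_given_phi(x, phi) * wgt * dphi
    return total

# sanity check: the marginal density should integrate to ~1 over x in [0, 2*mu]
dx = 0.02
mass = sum(f_x((j + 0.5) * dx) * dx for j in range(int(2 * mu / dx)))
```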

Ensemble noise characteristics in time-varying populations
In the previous sections, we were concerned with the full time-dependent probability distribution P(n, t) for the mRNA copy number N. However, in many circumstances such detailed information is not required, and simpler characterizations based on ensemble averages (e.g. Fano factor, coefficient of variation) are of interest. Simple corollaries of the Poisson mixture expression (2.14) allow us to derive expressions for the ensemble moments and other noise characteristics, as shown below. We remark that, in this section, all expectations are taken over the distribution describing the cell population.

Time-dependent ensemble moments over the distribution of cells
To quantify noise characteristics of gene expression in a population, the ensemble moments E[N_t^k], k ∈ ℕ, t ≥ 0, are often determined via the probability generating function [13,38,66] or by integrating the ME [21,40,67]. However, stationarity is usually assumed, and the moments derived are not suitable for time-varying systems. Here, we use corollaries of the Poisson mixture expression (2.14) to derive expressions for the ensemble moments of time-varying systems under upstream drives.
From (2.14), the probability generating function of N_t is G(z, t) = E[e^{(z−1)X_t}], i.e. the moment generating function of X_t evaluated at z − 1. Therefore, the ensemble moments of the mRNA copy number E[N_t^k] can be obtained in terms of the moments of the effective drive X_t; in particular,

E[N_t] = E[X_t] and Var[N_t] = E[X_t] + Var[X_t].   (4.1)

Decomposing the sources of noise
From equation (4.1), it follows that the variability of the mRNA count N_t can be rewritten as

Var[N_t] / E[N_t]² = 1/E[N_t] + Var[X_t] / E[X_t]².   (4.2)

Expression (4.2) can be mapped onto the common decomposition into 'intrinsic' and 'extrinsic' components [48,49] if we note that, in our model, the 'intrinsic' components are the downstream processes of transcription and degradation, whose rates are affected by the 'extrinsic' variability due to upstream factors. Such upstream factors can be biologically diverse, and can be both intra- and extracellular. Therefore, throughout this paper, we refer to 'upstream/downstream' processes instead of 'intrinsic/extrinsic' noise, to emphasize that upstream processes can reflect variability that is internal to the cell as well as cell-to-cell variability. For example, the asynchronous stochastic promoter switching described in §3.2 is an upstream process here, which in the literature might have been classed as 'intrinsic' (although, in fact, asynchronicity implies an assumption about cell-to-cell variability). On the other hand, the modulated promoter switching in §3.3 includes both 'intrinsic' and 'extrinsic' sources of variability, as usually classed in the literature. In our framework, such processes are treated consistently as 'upstream' sources of variability.
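The decomposition Var[N_t] = E[X_t] + Var[X_t] can be checked by direct simulation (our sketch; the Gamma mixing density is an arbitrary choice for illustration): sample X_t, then sample N_t | X_t ~ Poi(X_t), and compare the empirical variance with the two upstream terms.

```python
import math, random

random.seed(5)

def poisson_sample(lam):
    """Knuth's multiplication algorithm; adequate for the moderate rates here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# upstream variability: X_t ~ Gamma(shape=3, scale=2), so E[X] = 6, Var[X] = 12
xs = [random.gammavariate(3.0, 2.0) for _ in range(100000)]
ns = [poisson_sample(x) for x in xs]

mean_n = sum(ns) / len(ns)
var_n = sum((n - mean_n) ** 2 for n in ns) / len(ns)
# decomposition: Var[N] = E[X] + Var[X] = 6 + 12 = 18
```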

Analysis of time-dependent moments
The relationship (4.1) between downstream and upstream moments, together with the dynamical equation (2.13), enables us to solve for the time dependence of the moments of the mRNA counts in terms of the moments of the drive:

d/dt E[X_t] = E[M_t] − λ E[X_t]    (4.3)

and, more generally,

d/dt E[X_t^k] = k E[M_t X_t^{k−1}] − k λ E[X_t^k],    (4.4)

where for simplicity we have assumed a constant degradation rate λ. (For the most general case with degradation rate Λ_t, see for example [45].) Therefore, the observed moments E[N_t^k] from the data can be used to infer the time-dependent moments of the (usually unobserved) upstream drives.
As a motivating example, we consider a recent experiment [29] measuring single-cell time courses of gene expression in response to an oscillatory cAMP signal. The experiment showed that the population-averaged mRNA expression was approximately sinusoidal. Hence, the data can be fitted to the function

E[N_t] = b + a sin(ωt − φ).    (4.5)

Assuming a constant degradation rate λ, equations (2.13) and (4.3) show that the upstream transcription rate is also sinusoidal with the same frequency, yet with a modified amplitude and a phase shift (figure 5):

E[M_t] = bλ + a √(ω² + λ²) sin(ωt − φ + arctan(ω/λ)).    (4.6)

This is similar to phase relationships in electrical and electronic circuits.

Figure 5. Analysis of single-cell temporal transcription of a gene in response to an upstream oscillatory cAMP signal, motivated by recent experiments [29]. Individual single-cell time courses n(t) of mRNA counts are highly variable, with no clear entrainment to the driving signal, whereas the time-dependent ensemble average n̄(t) oscillates with the same frequency as the external drive. This is consistent with equations (4.5)-(4.6), which also show that the total phase lag is the resultant of the signal transduction and transcription lags. For a signal with period T = 5 min and a gene with degradation rate λ = 0.04 min⁻¹ [29], the transcription phase lag is arctan(ω/λ) ≃ π/2, which corresponds to a delay of ≃1.25 min. Given a measured total mean lag of 9π/10, this implies that the signal transduction introduces a phase lag Δ ≃ 2π/5, equivalent to a delay of ≃1 min.
Consistent with equation (4.6), the cAMP signal and E[N_t] were measured experimentally to have the same frequency ω = 2π/5 ≈ 1.26 min⁻¹ [29]. The experiments also showed that E[N_t] had a mean phase lag of 9π/10 (equivalent to a delay of ≈2.25 min) behind the cAMP signal. Using the degradation rate λ = 0.04 min⁻¹ [69] for the gene csaA, it follows that the transcriptional phase lag is arctan(ω/λ) = 0.49π, and signal transduction introduces a phase lag Δ ≃ 9π/10 − π/2 = 2π/5, equivalent to a transduction delay Δ/ω ≃ 1 min. Hence our results can be used to disentangle the time scales linked to cAMP signal transduction within the cell.
Our model also clarifies the effect of the degradation rate λ and the frequency ω on the observed responses. Given the mRNA population average oscillating around a mean value b as in (4.5), the (unobserved) transcription rate oscillates with the same frequency ω around the value bλ, with its amplitude scaled by √(ω² + λ²). The transcriptional phase lag arctan(ω/λ) is bounded between 0 (when ω/λ → 0) and π/2 (when ω/λ → ∞). Hence, large degradation rates reduce the phase lag and the amplitude of the mRNA oscillations downstream (through the dimensionless factor ω/λ), and reduce the mean value of mRNA expression independently of ω.
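The phase-lag arithmetic above is elementary to reproduce. The following sketch uses the values quoted from the experiment in [29] to recover the transcription and transduction lags:

```python
import math

# Fitted downstream oscillation (4.5): E[N_t] = b + a*sin(w*t - phi_total).
# Under a constant degradation rate lam, (4.6) implies the transcription
# process contributes a phase lag arctan(w/lam).
T = 5.0                      # signal period (min)
w = 2 * math.pi / T          # angular frequency ~1.26 min^-1
lam = 0.04                   # degradation rate (min^-1)

transcription_lag = math.atan(w / lam)       # ~0.49*pi, i.e. close to pi/2
total_lag = 9 * math.pi / 10                 # measured total mean phase lag
transduction_lag = total_lag - transcription_lag

print(transcription_lag / math.pi)   # ~0.49 (fraction of pi)
print(transduction_lag / w)          # transduction delay ~1 min
```

This reproduces the decomposition of the measured 9π/10 lag into a transcriptional part (≈π/2) and a signal-transduction part (≈2π/5).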
A similar analysis for the correlation function E[M_t M_s] can be carried out by solving equation (4.4) numerically for given data.

Time-dependent ensemble Fano factor: a measure of synchrony in the population
A commonly used measure of variability in the population is the ensemble Fano factor:

Fano(N_t) := Var(N_t)/E[N_t].    (4.7)

The classic baseline is the stationary solution of the transcription of an unregulated gene with constant rates [70,71], which is Poisson; hence Fano(N_t) ≡ 1 for all t.
For time-varying systems, however, the ensemble Fano factor conveys how the dynamic variability in single cells combines at the population level. Indeed, Fano(N_t) can be thought of as a measure of synchrony in the population at time t. For instance, it follows from equation (2.10) that the ensemble Fano factor for a population with perfectly synchronous drives is always equal to one, Fano(N_t) ≡ 1 for all t: even if the upstream drive x(t) changes in time, the population remains synchronous and has a Poisson distribution at all times. On the other hand, under the assumptions of our model, when Fano(N_t) varies in time, it reflects a change in the degree of synchrony between cells, as captured by the upstream drive X_t. From (4.1) it follows that

Fano(N_t) = 1 + Var(X_t)/E[X_t] = 1 + Fano(X_t).    (4.8)

Hence, the greater the synchrony at time t, the closer Fano(N_t) is to unity, since the deviation from the Poisson value emanates from the ensemble Fano factor of the upstream drive X_t.
As an example, consider the Kuramoto promoter model (3.4)-(3.5) introduced in §3.1.2, where the cells in the population become more synchronized as the value of the coupling K is increased. Figure 6 shows simulation results for 100 cells over a range of couplings. The order parameter r(t) ∈ [0, 1] measures the phase coherence of the oscillators at time t; as r(t) approaches 1, the degree of synchrony increases. Using the Kuramoto numerics, we calculate the ensemble Fano factor Fano(N_t) for the transcription model. As seen in figure 6b,c, the more synchronous the system is, the closer the Fano factor is to the Poisson value, i.e. ⟨r(t)⟩ → 1 implies ⟨Fano(N_t)⟩ → 1. Figure 6 also illustrates the computational advantages of our method. The cost of approximating the time-varying ensemble moments is drastically reduced by using (4.1), because transcription and degradation events do not have to be simulated: the sample paths of the effective drive x_i(t) are used to estimate the time-varying moments (figure 6, black). These correspond to the numerical simulation of ODEs, and are far more efficient than sampling from realizations n_i(t) of the mRNA copy number.
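The computational shortcut can be illustrated with a minimal sketch that estimates Fano(N_t) directly from effective-drive sample paths. Here, randomly phased sinusoidal rates serve as a simple stand-in for an unsynchronized population of oscillating promoters (parameter values are ours, not those of figure 6):

```python
import numpy as np

rng = np.random.default_rng(0)
C, lam, dt, steps = 5000, 1.0, 0.01, 2000
phases = rng.uniform(0.0, 2 * np.pi, C)     # asynchronous population
x = np.zeros(C)                             # effective drives x_i(0) = 0
for k in range(steps):
    t = k * dt
    mu = 10.0 * (1.0 + np.sin(t + phases))  # upstream rates mu_i(t) >= 0
    x += dt * (mu - lam * x)                # Euler step of dx/dt = mu - lam*x

# Poisson mixture moments: E[N_t] = E[X_t], Var(N_t) = E[X_t] + Var(X_t),
# hence Fano(N_t) = 1 + Var(X_t)/E[X_t], with no event-level simulation.
fano_N = 1.0 + x.var() / x.mean()
print(fano_N)   # > 1 for an asynchronous population
```

Only C ordinary differential equations are integrated; no transcription or degradation events are sampled, which is the source of the speed-up noted in the text.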

Variability over time: stationarity and ergodicity
Our results up to now have not assumed stationarity; in general, the distribution (2.14) and the moments (4.1) depend on time. If the cells in the population are uncorrelated and both M and L are stationary (i.e. their statistics do not change over time), then f_{X_t}(x, t) tends to a stationary density f_X(x) [45], and the full solution P(n, t) also tends to a stationary distribution P(n). Under such assumptions leading to stationarity, any time dependence in P(n, t) only describes the 'burn-in' transient from an initial condition towards the attracting stationary distribution, as discussed in §2.1. Several examples of such transience have been studied in the literature, both in state-switching models with constant rate parameters [16-18] and in a model with state-dependent rates [19], to describe how the distribution P(n, t) settles to stationarity when the process is started from an initial Kronecker delta distribution P(n, 0) = δ_{n0}. Figure 7 and appendix A.2 analyse this transience explicitly for the RT model.
If, in addition to stationarity, we assume the cells to be indistinguishable, the population is ergodic. In this case, the distribution obtained from a single cell over a time T, as T → ∞, is equivalent to the distribution obtained from a time snapshot at stationarity of a population of C cells, as C → ∞, i.e.
P(n) = ⟨P(n)⟩,    (5.1)

where

P(n) := lim_{t→∞} P(n, t)    (5.2)

and

⟨P(n)⟩ := lim_{T→∞} (1/T) ∫₀^T [x(t)ⁿ e^{−x(t)} / n!] dt.    (5.3)

Here, ⟨·⟩ denotes time-averaging, and x(t) in equation (5.3) is the sample path of the effective drive for a randomly chosen cell. Therefore, under the assumption of ergodicity, the averages computed over single-cell sample paths can be used to estimate the stationary distribution of the population.
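As a concrete illustration of (5.3), the sketch below time-averages Poisson pmfs along a single hypothetical periodic effective drive x(t) = 10 + 5 sin(t) (an assumption for illustration, not a model from the paper); for a periodic drive, averaging over one period replaces the T → ∞ limit:

```python
import math

NMAX = 40
logfact = [0.0]
for m in range(1, NMAX):
    logfact.append(logfact[-1] + math.log(m))

def poisson_pmf(n, x):
    # Poisson(n; x) evaluated in log-space for numerical stability
    return math.exp(n * math.log(x) - x - logfact[n])

steps = 2000
avg = [0.0] * NMAX
for k in range(steps):                       # average over one period of 2*pi
    x = 10.0 + 5.0 * math.sin(2 * math.pi * k / steps)
    for n in range(NMAX):
        avg[n] += poisson_pmf(n, x) / steps

total = sum(avg)
mean = sum(n * p for n, p in enumerate(avg))
var = sum(n * n * p for n, p in enumerate(avg)) - mean**2
print(total, mean, var)   # ~1, ~10, ~22.5 (= E[x] + Var(x): over-dispersed)
```

The time-averaged mixture inherits the over-dispersion of the drive: its variance exceeds its mean by the temporal variance of x(t), consistent with (4.2).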

Ergodic systems: stochastic versus deterministic drives
It has been suggested that stochastic and periodic drives lead to distinct properties in the noise characteristics within a cell population [49]. We investigate the effect of different temporal drives on the full distribution (2.14) under ergodicity, using (5.1)-(5.3). Note that when x(t) is periodic with period T, the limit in equation (5.3) is not required: averaging over a single period suffices.
In figure 8, we show the time-averaged distribution ⟨P(n)⟩ for a cell under three different upstream drives μ(t): (i) a continuous sinusoidal form; (ii) a discontinuous square wave form; (iii) an RT form, which can be thought of as the stochastic analogue of the square wave. In all cases, the drive {μ(t)}_{t≥0} takes values in [0, 20] with the same period (or expected period) T. For simplicity, we set λ(t) ≡ 1.
As the period T is varied, the similarity between the distributions under the three upstream drives varies considerably (figure 8). At small T, the distributions under the sinusoidal and square wave forms are most similar, whereas at large T, the distributions under the square wave and RT forms become most similar. In general, the distribution of the RT model has longer tails (i.e. at both low and high n) as a consequence of long (random) waiting times that allow the system to reach equilibrium in the active and inactive states, although this effect is less pronounced when the promoter switching is fast relative to the time scales of transcription and degradation (e.g. T = 1/5). On the other hand, as T grows, the square wave and RT drives are slow and the system is able to reach equilibrium in both the active and inactive states. Hence the probability distributions under the square wave and RT drives become similar, with a more prominent bimodality.

The temporal Fano factor (TFF) is defined similarly to the ensemble version (4.7), but is calculated from the variance and mean of a single time series {n(t)}_{t≥0} over a time window W := (t₁, t₂):

TFF_{n(t)}(W) := Var_W[n(t)] / Mean_W[n(t)].    (5.4)

In fact, this is the original definition of the Fano factor [72], which is used in signal processing to estimate statistical fluctuations of a count variable over a time window. Although N_t is not a count variable (it decreases with degradation events), the TFF can be used to detect windows of stationarity in single-cell time courses. Figure 9a shows a single-cell sample path {n(t)}_{t≥0} generated by the (leaky) RT model with constant degradation rate λ and transcription rates μ₁ > μ₀ > 0 for the active and inactive promoter states, respectively. The leaky RT model behaves like the standard RT model, switching between two quasi-stationary regimes with expectations μ₁/λ and μ₀/λ.
In the time windows W between promoter switching events, {n(t)}_{t∈W} can be considered almost at stationarity and described by a Poisson distribution with parameter μ₀/λ (respectively, μ₁/λ) in the inactive (respectively, active) state. Hence TFF_{n(t)}(W) ≃ 1 across most of the sample path, except over the short transients W_trans when the system is switching between states, where TFF_{n(t)}(W_trans) > 1 (figure 9b).
Alternatively, this information can be extracted more robustly from the cumulative temporal Fano factor (cTFF):

cTFF_{n(t)}(t₁, t) := TFF_{n(t)}((t₁, t)),  t ≥ t₁,    (5.5)

which is computed over a growing window from a fixed starting time t₁. The cTFF is a cumulative moving average giving an integrated description of how the stationary regimes are attained between switching events, as indicated by the step-like structure of the heat map in figure 9c.
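The window statistics (5.4)-(5.5) are straightforward to compute from a discretely sampled time series. In the sketch below, i.i.d. Poisson draws stand in for near-stationary stretches of a leaky-RT-like path (an assumption for illustration, not a simulation of the model):

```python
import numpy as np

def tff(n, i1, i2):
    """Temporal Fano factor (5.4) over the window n[i1:i2]."""
    w = n[i1:i2]
    return w.var() / w.mean()

def ctff(n, i1):
    """Cumulative TFF (5.5) over growing windows (t1, t), t >= t1."""
    return np.array([tff(n, i1, i) for i in range(i1 + 2, len(n) + 1)])

rng = np.random.default_rng(3)
inactive = rng.poisson(5.0, 1000).astype(float)   # ~Poisson(mu0/lam) stretch
active = rng.poisson(20.0, 1000).astype(float)    # ~Poisson(mu1/lam) stretch
path = np.concatenate([inactive, active])

print(tff(path, 0, 1000))      # ~1 within a stationary window
print(tff(path, 0, 2000))      # > 1 across the switching transient
print(ctff(path, 0)[-1])       # cTFF endpoint equals the full-window TFF
```

A window confined to one promoter state gives a TFF near the Poisson value of 1, whereas a window straddling the switch mixes the two levels and inflates the TFF, which is how windows of stationarity are detected.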

Discussion
We have presented the solution of the ME for gene transcription with upstream dynamical variability in a setting that allows a unified treatment of a broad class of models, enabling quantitative biologists to go beyond stationary solutions when analysing noise sources in single-cell experiments. As a complementary approach to the explicit stochastic simulation of networks with many genes to account for the variability in data, our work uses a parsimonious transcription model of Poissonian type that includes explicitly the effect of dynamical and cell-to-cell upstream variability in the ME. We show that the solution to this gene transcription-degradation model can be expressed in the Poisson mixture form (2.14). This solution can be interpreted as the combination of an upstream component (dynamic or static; deterministic or stochastic) with a downstream Poissonian immigration-death process. Since only the upstream process is model-specific, different models are solved by obtaining the different mixing densities f_{X_t} of the upstream process. This generic mathematical structure can describe both time-dependent snapshots across the population and the dynamical variability over single-cell time courses in a coherent fashion.

The solution (2.14) can also be understood from the perspective of Gardiner and Chaturvedi's Poisson representation [36,37]. Originally, the Poisson representation was introduced as an ansatz for asymptotic expansions of stationary systems, and included only constant rate parameters. Hence the original Poisson representation ansatz has not been used widely for time-varying solutions [36]. In contrast, our time-dependent Poisson mixture (2.14) is obtained here as a solution to a non-stationary ME model, rather than backwards via basis expansions, and the mixing density f_{X_t} has a physical interpretation in terms of single-cell sample paths relatable to data.
In this respect, our preparatory result (2.8) for the perfectly synchronous population can be thought of as an extension of the 'Poisson representation' to include time-varying rate parameters. Note also that this perfectly synchronous solution (2.8) corresponds to a particular case of the multigene solution obtained by Jahnke & Huisinga [34]. However, (2.8) does not yet encapsulate the cell-to-cell variability. It is the full Poisson mixture solution (2.14) that extends the scope of the time-varying 'Poisson representation' a step further, by allowing for stochastic rate parameters that can describe cell-to-cell variability as well as dynamic variability.

Our solution confers two broad advantages. The first is pragmatic: since X_t is a continuous random variable satisfying a linear random differential equation, we can draw upon the rich theory and analytical results for f_{X_t}, even for non-stationary models, or we can use ODE and PDE solvers as further options to solve the differential equation for f_{X_t}. If simulations are still necessary, sampling P(n, t | M, L) directly using stochastic simulation algorithms becomes computationally expensive, particularly if the upstream processes M and L are time-varying [73]. Instead, we can sample f_{X_t}(x, t) directly using the random differential equation (2.13), and then obtain the full distribution via numerical integration using (2.14). This approach leads to a significant reduction in computational cost, as shown in figure 10.
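A minimal sketch of this two-step sampling strategy, with an illustrative telegraph-like upstream drive (the parameters are ours, not those of figure 10):

```python
import numpy as np

rng = np.random.default_rng(7)
C, lam, dt, steps = 2000, 1.0, 0.01, 1000
state = rng.integers(0, 2, C).astype(float)   # promoter on/off per cell
x = np.full(C, 1.0)                           # effective drives X_t > 0
for _ in range(steps):
    flip = rng.random(C) < 0.5 * dt           # random switching, rate 0.5
    state[flip] = 1.0 - state[flip]
    x += dt * (20.0 * state - lam * x)        # Euler step of dX/dt = M_t - lam*X

# Mixing step (2.14): P(n, t) = E_X[ x^n e^{-x} / n! ] over the sampled paths,
# evaluated in log-space; no Gillespie events are ever simulated.
ns = np.arange(60)
logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, 60)))))
logpmf = ns * np.log(x[:, None]) - x[:, None] - logfact
p = np.exp(logpmf).mean(axis=0)
print(p.sum())   # ~1: a proper distribution over n
```

The expensive discrete simulation is replaced by cheap ODE integration plus a deterministic mixing step, which is the source of the cost reduction reported in figure 10.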
Our approach can also be used to analyse noise characteristics in conjunction with biological hypotheses. If measurements of additional cellular variables (e.g. cell cycle) are available, they can be incorporated as a source of variability for gene regulation, so as to test biological hypotheses computationally against experimental data. Conversely, it is possible to discount the Poissonian component from observed data, so as to fit different promoter models to experimental data and perform model comparison [61]. Our discussion of a recent experiment on gene expression driven by cAMP signalling [29] exemplifies this approach.
The second advantage of our framework is conceptual. Through the natural decoupling of the solution into a discrete, Poisson component (downstream) and a continuous, mixing component (upstream), we derive time-dependent expressions for both ensemble and temporal moments, recasting the concept of 'intrinsic'/'extrinsic' noise for dynamic upstream cellular drives. Importantly, all upstream variability is effectively imbricated through the upstream effective drive X, which can be interpreted in terms of a biochemical differential rate equation. This analysis clarifies how upstream fluctuations combine to affect the probability distribution of the mRNA copy number, providing further intuition about the sources of noise and their temporal characteristics. Indeed, stripping the model down to its extrinsic component f_{X_t} can provide additional understanding of its structure and time scales [61].
Finally, although we have concentrated here on the amenable analytical solutions that can be obtained for the single-gene case, we remark that our solution could be extended to monomolecular multigene networks, by merging Jahnke & Huisinga's result [34] for synchronous networks with our mixture result for the asynchronous case. Such a generalization could be implemented computationally to reduce the cost of simulating stochastic networks, and will be the subject of future work. The solutions of higher-order reaction systems obtained through the Poisson representation ansatz could also be extended to include stochastic rates. This approach could lead to a deeper understanding of models with and without feedback [74].

Figure 10. Efficient sampling of the full distribution P(n, t) for transcription with upstream cellular drives. We consider upstream drives governed by the Kuramoto promoter model (3.4) for C = 10 000 coupled oscillatory cells. Sample paths of N are simulated directly with the Gillespie algorithm to approximate P(n, t) at time t₁ (bottom, blue). Alternatively, sample paths of X are used to estimate f_{X_t}, which is then mixed by performing the numerical integration (2.14) to obtain P(n, t) (top, red). The latter sampling through X is more regular and far less costly: the CPU time via N is ≈36 000 s, whereas the CPU time via X is ≈0.1 s. (Online version in colour.)

A.2. 'Burn-in' transience in the random telegraph model

A time-dependent solution of the probability generating function for the RT model appeared in [16], although the explicit expression for P(n, t) was omitted. As discussed above, the RT model represents asynchronous and stationary behaviour; hence the time dependence appears only through convergence to the stationary distribution from the initial condition. We include the derivation here for completeness and to complement figure 7. Consider the RT model depicted in figure 3a.
Assuming that every cell is initialized in the inactive state with n₀ = 0 mRNA molecules, the probability generating function for the cell population is [16]

G(z, t) = F_s(z, t) F̄_s(z, t) − [μ k_on e^{−(k_on + k_off)t} / ((k_on + k_off)(1 − k_on − k_off))] (1 − z) F_ns(z, t) F̄_ns(z, t),

where

F_s(z, t) := ₁F₁[−k_on, 1 − k_on − k_off; μ e^{−t}(1 − z)],
F̄_s(z, t) := ₁F₁[k_on, k_on + k_off; −μ(1 − z)],
F_ns(z, t) := ₁F₁[k_off, 1 + k_on + k_off; μ e^{−t}(1 − z)],
F̄_ns(z, t) := ₁F₁[1 − k_off, 2 − k_on − k_off; −μ(1 − z)].

Here ₁F₁(a, b; z) is the confluent hypergeometric function [63].
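As a quick consistency check (not part of the original derivation), the stationary limit of G(z, t) can be evaluated with a hand-rolled power series for ₁F₁: as t → ∞ the e^{−t} arguments vanish, ₁F₁(a, b; 0) = 1, and G reduces to the stationary generating function ₁F₁[k_on, k_on + k_off; −μ(1 − z)], whose derivative at z = 1 gives the stationary mean μ k_on/(k_on + k_off). Parameter values below are illustrative, with rates nondimensionalized by the degradation rate:

```python
def hyp1f1(a, b, x, terms=200):
    """Confluent hypergeometric function 1F1(a, b; x) via its power series."""
    s, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * x / ((b + k) * (k + 1))
        s += term
    return s

k_on, k_off, mu = 0.3, 0.7, 8.0
G = lambda z: hyp1f1(k_on, k_on + k_off, -mu * (1.0 - z))

h = 1e-6
mean = (G(1.0) - G(1.0 - h)) / h    # G'(1) by a backward difference
print(G(1.0), mean)                 # G(1) = 1 and mean ~ mu*k_on/(k_on+k_off)
```

Normalization, G(1) = 1, and the known stationary mean are both recovered, which is a useful sanity check when transcribing hypergeometric parameters.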