Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle
Abstract
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle, which originates in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases, using a discrete random variable and a probability density function (PDF), respectively. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and the asymptotic convergence of the PDF in both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
1. Introduction
Entropy is actively used in many fields of science, including physics, chemistry, computer science and biology [1].
Today there are many different types of entropy in use, which often become the centre of discussions in both statistical physics and thermodynamics. The most famous is the Shannon entropy [2]:
$$H = -\sum_{i=1}^{m} p_i \ln p_i .$$
In 1961, Rényi [3] introduced a generalization of the Shannon entropy as a one-parameter family of entropies
$$H_\beta = \frac{1}{1-\beta}\,\ln \sum_{i=1}^{m} p_i^{\beta},\qquad \beta>0,\ \beta\neq 1,$$
which recovers the Shannon entropy in the limit $\beta\to 1$. The extension of the Rényi entropy to the continuous case can be defined as
$$H_\beta = \frac{1}{1-\beta}\,\ln \int_{\Omega} p(x)^{\beta}\,\mathrm{d}x .$$
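As a quick numerical illustration of these definitions (the distribution below is an arbitrary example), the Rényi entropy can be computed directly from its formula, and its value as the parameter tends to 1 approaches the Shannon entropy:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i ln p_i (natural logarithm)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_entropy(p, beta):
    """Renyi entropy H_beta = (1/(1-beta)) ln sum p_i^beta, beta > 0, beta != 1."""
    return math.log(sum(pi ** beta for pi in p)) / (1.0 - beta)

p = [0.5, 0.25, 0.125, 0.125]
# As beta -> 1, the Renyi entropy approaches the Shannon entropy.
print(abs(renyi_entropy(p, 1.000001) - shannon_entropy(p)) < 1e-4)  # True
```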
The Rényi entropy is often taken as a typical measure of complexity to describe dynamical systems in physics, engineering and information theory [4]. The Rényi entropy applications are overviewed in [5].
A variety of physical systems obey the famous maximum entropy (MaxEnt) principle: their entropy achieves maximum under constraints caused by other physical laws. Since 1957, when the seminal works of Jaynes were published [6]–[8], and until now [9]–[11], the MaxEnt principle has caused lively interest among researchers. For example, the MaxEnt for the Rényi entropy has been successfully applied to generalize the Thomas–Fermi model in [12].
Although the states of maximum entropy are widely discussed in scientific articles and studies, the dynamics of evolution and transient behaviour of systems are still not well investigated.
In this paper, we propose a set of equations that describe the dynamics of the PDFs for non-stationary processes that follow the maximum of the Rényi entropy principle.
The speed-gradient (SG) principle [13]–[16] used here originates from control theory. It has already been applied in [13],[15],[17] to derive equations of dynamics for systems with a finite number of particles in the case of the maximum Shannon entropy principle. Systems with continuous probability distributions are considered in [18]. The dynamics of discrete systems for the Tsallis entropy is discussed in [19]. The SG principle for systems with a finite number of particles is studied in [15], where systems are emulated with the molecular dynamics method. We take a similar approach for systems with discrete and continuous probability distributions while studying the Rényi entropy. The derived equations describe the dynamics of non-stationary (transient) states and show the trajectory along which a system tends to the state with maximum Rényi entropy.
The well-known Fokker–Planck (FP) equations [20] describe the time evolution of the PDF. Jaynes's MaxEnt approach can also be applied to these equations [21],[22]. Another general form of time-evolution equations for non-equilibrium systems is known as GENERIC (general equation for the non-equilibrium reversible–irreversible coupling) [23],[24]. The relation between GENERIC and FP equations is established in [23]. It states that FP equations are a particular case of the GENERIC when a noise term is added into the GENERIC. Thus, if the FP equation is represented as a stochastic differential equation, from which fluctuations are eliminated, this equation matches the GENERIC equation [25].
Following this, we can also claim that the SG principle matches the GENERIC (and thus it matches the FP equation) if a goal function is set as entropy and constraints are specified by energy [19]. Moreover, the SG principle is a more general case of the GENERIC because almost every smooth function can be taken as a goal function, not only entropy. The GENERIC and the SG principle relations are examined in §3.
We propose an evolution law of the system in the following form:
A way to find the distribution achieving the extreme value (maximum or minimum) of entropy within a given variational distance from any given distribution is proposed in [26]. An approximation of the probability distribution is built there based on new bounds of entropy distance. These bounds are also applied for the entropy estimation in [27].
Jaynes's MaxEnt principle was successfully used for inductive inference in [28]. It is shown that, given information in the form of constraints on expected values, there is only one appropriate distribution which can be obtained by maximizing entropy. Inductive inference can be used in systems with non-stationary processes which satisfy the MaxEnt [29]. In [29], the notion of entropy dynamics (ED) is used. ED is a theoretical framework which combines inductive inference [28] and information geometry [30]. ED investigates the possibility of deriving dynamics from purely entropic arguments.
Besides the Rényi entropy, more general forms of relative entropies and divergences can also be studied from a perspective of the SG principle, for example the Cressie–Read and the Csiszár–Morimoto conditional entropies (f-divergences) [31].
This paper has the following structure. The next section describes the SG principle, accompanied by two examples. The third section examines the relations between the SG principle, the GENERIC and the FP equations. The fourth section introduces Jaynes's formalism; the Rényi distribution (RD) is derived from the maximum of the Rényi entropy principle. The fifth section gives an example of a dynamical system with a discrete distribution of parameters that follows the maximum of the Rényi entropy principle (we consider cases with one and two constraints and derive equations for the transient states). The sixth section extends the results obtained in the fifth section to the case of a continuous PDF. The asymptotic stability of the PDF dynamics is proved for the cases with one and two constraints.
2. The speed-gradient principle
There is a connection between the laws of control in technical systems and the laws of dynamics in physical systems. It is known that the methods for the synthesis of control algorithms allow one to derive the laws of dynamics for physical systems. In particular, the model of the dynamics for a number of physical systems can be derived based on the SG method with an appropriate choice of goal function.
Consider the class of open physical systems whose dynamics can be described by the system of differential equations
$$\dot x = f(x,u,t), \tag{2.1}$$
where $x$ is the system state vector, $u$ is the vector of input (free) variables and $t\ge 0$.
Such formulations are well known in physics. Variational principles for models of systems have long been recognized. They usually involve specifying an integral functional that characterizes the behaviour of the system [32]. Minimization of the functional defines the realizable trajectories of the system {x(t),u(t)} as points in the corresponding functional space. To specify the dynamics of the system explicitly, the well-developed apparatus of the calculus of variations is used.
Methods of optimal control (e.g. Bellman dynamic programming, the Pontryagin maximum principle, etc.) are the result of the development of classical variational calculus methods, and they can be used to build dynamical models of mechanical systems in nature and society.
Together with integral principles, differential local time principles have also been proposed, such as the Gauss principle of least constraint, the principle of minimum energy dissipation, etc. As noted by Planck [33], local principles have some advantages over integral ones, because they do not make the current state and the movement of the system dependent on later states and movements. Let us formulate another local variational principle based on the method of SG [14],[17], as follows.
The speed-gradient principle. Only those possible movements of the system are realized (among all possible movements) for which the input variables change proportionally to the speed gradient of a ‘goal’ functional.
The SG principle offers researchers the choice of two types of systems dynamics models:
(A) models which follow the algorithm of the SG in differential form:
$$\frac{\mathrm{d}u}{\mathrm{d}t} = -\Gamma\,\nabla_u \dot Q_t, \tag{2.2}$$
(B) models following the algorithm of the SG in finite form:
$$u = -\Gamma\,\nabla_u \dot Q_t, \tag{2.3}$$
where $\dot Q_t$ is the rate of change of the goal functional along the trajectories of the system (2.1) and $\Gamma=\Gamma^{\mathrm T}>0$ is a gain matrix. We describe the application of the SG principle in the simplest (and most important) case when the class of models of dynamics is given by the relation $\dot x = u$.
In accordance with the SG principle, the goal functional Q(x) has to be determined first. Selection of Q(x) should be based on the physics of the real system and reflect the presence of a tendency to decrease the current value of Q(x(t)). After that, the law of dynamics can be written in the form (2.2) or (2.3).
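For the simplest model class $\dot x = u$, the speed of the goal functional is $\dot Q = [\nabla Q(x)]^{\mathrm T}u$, so the finite-form SG law reduces to gradient descent on Q. The following sketch (the quadratic goal function, gain and step count are illustrative choices) integrates this law with explicit Euler steps and shows the state moving to the minimizer of Q:

```python
import numpy as np

def speed_gradient_finite(grad_Q, x0, gamma=0.1, steps=200):
    """Finite-form SG law for dx/dt = u: u = -gamma * grad_u(dQ/dt) = -gamma * grad_Q(x).
    Integrated here with explicit Euler steps of unit length."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - gamma * grad_Q(x)   # Euler step along the SG law
    return x

# Illustrative goal function Q(x) = |x|^2 / 2, so grad_Q(x) = x.
x_final = speed_gradient_finite(lambda x: x, [3.0, -2.0])
print(np.linalg.norm(x_final) < 1e-3)  # True: Q decreases toward its minimum at 0
```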
Let us illustrate the introduced SG principle with several examples.
(a) Example 1: the motion of a particle in a potential field
As a first example, consider the problem of describing the motion of a particle in a potential field. State variables here are the coordinates of the point x1,x2,x3 which form a vector x=(x1,x2,x3)T.
We choose a smooth ‘goal’ functional Q(x) as the potential energy of a particle and derive the SG law in the differential form. We calculate the speed
$$\dot Q = [\nabla Q(x)]^{\mathrm T}\dot x = [\nabla Q(x)]^{\mathrm T} u,$$
where $u=\dot x$. Hence $\nabla_u\dot Q = \nabla Q(x)$, and the SG law in differential form (2.2) with the gain matrix $\Gamma = m^{-1}I$ yields
$$m\ddot x = -\nabla Q(x),$$
i.e. Newton's second law for a particle in the potential field Q(x).
Note that the SG laws with non-diagonal gain matrices Γ can be incorporated if a non-Euclidean metric in the space of inputs is introduced by the matrix Γ−1. Admitting dependence of the metric matrix Γ on x, one can obtain evolution laws for complex mechanical systems described by Lagrangian or Hamiltonian formalism.
The SG principle applies not only to finite-dimensional systems but also to infinite-dimensional (distributed) ones. Particularly, x may be a vector of a functional space and f(x,u,t) may be a nonlinear differential operator (in such a case, the solutions of (2.1) should be understood as generalized ones). We will omit the mathematical details for simplicity.
(b) Example 2: the viscous fluid dynamics
Let the infinite-dimensional state vector be formed of two functions: x=(v(⋅,t),p(⋅,t))T, where v(r,t) is the velocity field of the three-dimensional fluid flow and p(r,t) is the pressure field.
We introduce the ‘goal’ functional in the following way:
Note that the differential form of the SG laws often corresponds to reversible processes, while the finite form generates irreversible ones. For modelling of more complex dynamics, a combination of finite and differential SG laws may be useful.
In a similar way, dynamical equations for many other mechanical, electrical and thermodynamic systems can be recovered. The SG principle applies to a broad class of physical systems subjected to potential and/or dissipative forces.
This paper is aimed at application of the SG principle to entropy-driven systems.
3. GENERIC and the speed-gradient principle
The GENERIC time-evolution equation is formulated as [23]:
$$\frac{\mathrm{d}x}{\mathrm{d}t} = L(x)\,\frac{\delta E}{\delta x} + M(x)\,\frac{\delta S}{\delta x}, \tag{3.1}$$
where $x$ is a set of state variables, $E$ and $S$ are the total energy and entropy functionals, $L(x)$ and $M(x)$ are the Poisson and friction matrices (operators), and $\delta/\delta x$ denotes a (functional) derivative.
Let us show the relation between the GENERIC and SG equations. Assume that we have to maximize the entropy function S(x) of a system that has an additional constraint for a total energy E(x)=E=const. The Lagrangian for this case can be defined as:
$$\Lambda(x) = S(x) + \lambda E(x), \tag{3.2}$$
where λ is a Lagrange multiplier.
Let us use the SG principle equation (2.3) for the Lagrangian (3.2), i.e. Qt=Λ:
$$u = -\Gamma\,\nabla_u\dot\Lambda_t. \tag{3.3}$$
According to (2.5) and (2.6), we can write equation (3.3) as:
$$u = -\Gamma\,\nabla_x S(x) - \Gamma\lambda\,\nabla_x E(x). \tag{3.4}$$
We can see that the dynamics equation obtained from the SG principle (3.4) coincides with the GENERIC equation (3.1) for dx/dt=u, L(x)=−Γλ and M(x)=−Γ.
GENERIC is based on two ‘potentials’ of total energy and entropy. The SG principle can use any smooth functional that has to be maximized (minimized) as a goal function. So it is not necessary to be only Lagrangian (3.2) or entropy functional. The SG principle can be treated as a more general approach to describe the dynamics of a system. Nevertheless, GENERIC is also a general equation. It uses parametrized matrices L(x) and M(x) that make it possible to use GENERIC for a wide range of time-evolution systems.
The relation between GENERIC and FP equations is established in [23], which means that there is also a relation between the FP equations and the SG principle. So many different methods allow the researcher to select the one that is most convenient or more fully describes the problem being solved.
4. Maximum entropy principle
(a) Jaynes's MaxEnt
Jaynes [6]–[8] proposed the approach which became one of the foundations for modern-day statistical physics.
Let p(x) be an unknown PDF of a multi-dimensional random variable x. Suppose that we have to define it on the basis of certain information about the system. Consider a continuous case and suppose that there is a priori known information about some average values:
$$\int h_k(x)\,p(x)\,\mathrm{d}x = \bar h_k,\qquad k=1,\dots,K. \tag{4.1}$$
Conditions (4.1) can be insufficient to derive p(x) in general. According to Jaynes, applying maximization of information entropy H(X) is the most objective method to define the PDF in this case.
Lagrange multipliers are used to perform the maximum entropy search with additional conditions (4.1). This leads to
$$p(x) = \frac{1}{Z}\exp\Big(-\sum_{k=1}^{K}\lambda_k h_k(x)\Big),\qquad Z=\int\exp\Big(-\sum_{k=1}^{K}\lambda_k h_k(x)\Big)\mathrm{d}x,$$
where the multipliers $\lambda_k$ are determined by conditions (4.1).
These formulae show the match between the maximum information entropy and the Gibbs entropy in the case of equilibrium. So the information entropy can be identified with the thermodynamic entropy in this case.
As the Rényi entropy is a generalization of the Shannon entropy, it seems natural to extend Jaynes's maximum entropy principle to the case of the Rényi entropy.
(b) The Rényi distribution
The RD that corresponds to the state with maximum Rényi entropy is proposed in [34],[35]. The RD is considered there for a system with a discrete probability distribution when two constraints are imposed: the normalization constraint (mass conservation law) $\sum_i p_i = 1$ and the fixed average energy (total energy) constraint $\sum_i p_i E_i = \langle E\rangle$.
If the Rényi entropy is used instead of the Shannon entropy in MaxEnt, then the equilibrium distribution (the RD) can be formulated as
$$p_i = \frac{1}{Z_R}\Big(1 - \frac{\beta-1}{\beta}\,\lambda\,(E_i - \langle E\rangle)\Big)^{1/(\beta-1)}, \tag{4.3}$$
where λ is the Lagrange multiplier associated with the energy constraint and $Z_R$ is a normalizing factor.
Following the constraints in form (4.1), the expression (4.3) for continuous distributions becomes
$$p(x) = \frac{1}{Z_R}\Big(1 - \frac{\beta-1}{\beta}\,\lambda\,\bigl(h(x) - \bar h\bigr)\Big)^{1/(\beta-1)}. \tag{4.4}$$
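The RD can be cross-checked numerically: maximizing the Rényi entropy under the two constraints with a generic constrained optimizer should produce a distribution in which $p_i^{\beta-1}$ depends affinely on $E_i$. In this sketch the energy levels, the prescribed mean energy and the choice β=2 are arbitrary illustrative values, and SciPy's SLSQP solver stands in for the analytical Lagrange-multiplier solution:

```python
import numpy as np
from scipy.optimize import minimize

beta = 2.0
E = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative, equally spaced energy levels
U = 1.2                              # prescribed average energy

def neg_renyi(p):
    """Negative Renyi entropy, to be minimized."""
    return -np.log(np.sum(p ** beta)) / (1.0 - beta)

cons = ({'type': 'eq', 'fun': lambda p: np.sum(p) - 1.0},   # normalization
        {'type': 'eq', 'fun': lambda p: p @ E - U})         # mean energy
res = minimize(neg_renyi, x0=np.full(4, 0.25), constraints=cons,
               bounds=[(1e-9, 1.0)] * 4)
p = res.x
# For the RD, p_i^(beta-1) is affine in E_i, so its second differences
# over equally spaced energies vanish.
print(np.allclose(np.diff(p ** (beta - 1), n=2), 0.0, atol=1e-4))
```

For β=2 the maximizer can also be found by hand (minimum of Σp² subject to the two linear constraints), which gives p = (0.34, 0.28, 0.22, 0.16) for the values above.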
5. The speed-gradient dynamics of the Rényi entropy maximization process
We extend the approach introduced in [13] to the case of the Rényi entropy. Similar to [13] for the Shannon entropy, we consider a discrete system which consists of N identical particles distributed over m cells.
In the case when the mass conservation constraint holds, it is true that
$$\sum_{i=1}^{m} N_i = N. \tag{5.1}$$
It can be normalized as $\sum_{i=1}^{m} p_i = 1$, where $p_i = N_i/N$.
Particles can move from one cell to another. The steady-state and the transient behaviour of the system are both interesting for us. According to the MaxEnt principle, the limit behaviour of the system maximizes its entropy for the steady state when nothing else is known [7],[8].
To obtain the transient mode behaviour, we apply the SG principle, choosing the Rényi entropy as the goal function to be maximized:
$$H_\beta = \frac{1}{1-\beta}\,\ln\sum_{i=1}^{m} p_i^{\beta},\qquad p_i = \frac{N_i}{N}.$$
Assume that the motion is continuous in time and the numbers Ni change continuously. Then the law of motion can be represented as $\dot N_i = u_i$, $i=1,\dots,m$, where ui=ui(t) are control functions which have to be determined.
The evaluation scheme is as follows. First, the speed of entropy change is evaluated:
$$\dot H_\beta = \frac{\beta}{1-\beta}\,\frac{\sum_{i=1}^{m} N_i^{\beta-1}u_i}{\sum_{i=1}^{m} N_i^{\beta}}.$$
Then the gradient of the speed is evaluated with respect to the vector of controls ui, considered as frozen parameters:
$$\frac{\partial \dot H_\beta}{\partial u_i} = \frac{\beta}{1-\beta}\,\frac{N_i^{\beta-1}}{\sum_{j=1}^{m} N_j^{\beta}}.$$
And finally, the actual controls are defined proportionally to the projection of the SG onto the surface of constraint (5.1):
$$u_i = \gamma\left(\frac{\partial \dot H_\beta}{\partial u_i} - \lambda'\right),\qquad \gamma>0,$$
where λ′ is chosen so that $\sum_{i=1}^{m} u_i = 0$.
Now we can evaluate λ′:
$$\lambda' = \frac{1}{m}\sum_{i=1}^{m}\frac{\partial \dot H_\beta}{\partial u_i} = \frac{\beta}{1-\beta}\,\frac{\sum_{i=1}^{m} N_i^{\beta-1}}{m\sum_{j=1}^{m} N_j^{\beta}}.$$
The final form of the system dynamics law is as follows:
$$\dot N_i = \frac{\gamma\beta}{(1-\beta)\sum_{j=1}^{m} N_j^{\beta}}\left(N_i^{\beta-1} - \frac{1}{m}\sum_{j=1}^{m} N_j^{\beta-1}\right). \tag{5.3}$$
Let us find the equilibrium mode which corresponds to the asymptotic behaviour of the variables Ni. In this mode $\dot N_i = 0$. Based on (5.3), this means that $N_i^{\beta-1} = \frac{1}{m}\sum_{j=1}^m N_j^{\beta-1}$ for all i; hence all Ni are equal. According to constraint (5.1), we have Ni=N/m. This result corresponds to RD (4.3) for the case of one constraint (i.e. λ=0, as there is no total energy constraint). It also corresponds to the maximum state of classical entropy and agrees with thermodynamics.
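The evaluation scheme above is easy to check numerically. The sketch below (with illustrative β, gain γ, step size and initial occupation numbers) integrates the projected speed-gradient law with explicit Euler steps; the total mass stays constant along the trajectory while the occupation numbers converge to the uniform limit Ni=N/m:

```python
import numpy as np

def sg_step(N, beta, gamma, dt):
    """One Euler step of the SG law: move N along the gradient of the entropy
    growth rate, projected so that sum(N) stays constant (sum of u is 0)."""
    g = (beta / (1.0 - beta)) * N ** (beta - 1) / np.sum(N ** beta)
    u = gamma * (g - g.mean())          # projection onto sum(u) = 0
    return N + dt * u

beta, gamma, dt = 0.5, 50.0, 1.0        # illustrative parameters
N = np.array([40.0, 30.0, 20.0, 10.0])  # illustrative initial occupations
total = N.sum()
for _ in range(5000):
    N = sg_step(N, beta, gamma, dt)
print(np.isclose(N.sum(), total))            # True: mass is conserved
print(np.allclose(N, total / 4, atol=1e-3))  # True: N_i -> N/m (uniform limit)
```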
(a) Equilibrium stability
Let us examine the stability of the equilibrium mode. We introduce the Lyapunov function
$$V(N) = \ln m - H_\beta(N) \ge 0,$$
where ln m is the maximum value of the Rényi entropy under constraint (5.1).
Evaluation of $\dot V$ yields
$$\dot V = -\dot H_\beta = -\gamma\left(\frac{\beta}{1-\beta}\right)^{2}\frac{1}{\bigl(\sum_{j} N_j^{\beta}\bigr)^{2}}\left(\sum_{i=1}^{m} N_i^{2(\beta-1)} - \frac{1}{m}\Bigl(\sum_{i=1}^{m} N_i^{\beta-1}\Bigr)^{2}\right).$$
Consider the Cauchy–Bunyakovsky–Schwarz (CBS) inequality for two vectors $a,b\in\mathbb{R}^m$:
$$\Bigl(\sum_{i=1}^{m} a_i b_i\Bigr)^{2} \le \Bigl(\sum_{i=1}^{m} a_i^{2}\Bigr)\Bigl(\sum_{i=1}^{m} b_i^{2}\Bigr).$$
According to the CBS inequality for the vectors $a=(N_1^{\beta-1},\dots,N_m^{\beta-1})^{\mathrm T}$ and b=(1,…,1)T, we have that $\bigl(\sum_i N_i^{\beta-1}\bigr)^{2} \le m\sum_i N_i^{2(\beta-1)}$, and therefore $\dot V\le 0$. Equality holds if and only if all values Ni are equal; this is the maximum entropy state. Thus law (5.3) provides global asymptotic stability of the maximum entropy state. The physical meaning of this law is nothing but moving along the direction of the maximum entropy production rate (the direction of the fastest entropy growth).
(b) Total energy constraint
The case of more than one constraint can be treated in the same way. Suppose that in addition to the mass conservation law (5.1) the energy conservation law also holds. Let Ei be the energy of a particle in the ith cell and suppose the total energy does not change. The total energy constraint is
$$\sum_{i=1}^{m} N_i E_i = E. \tag{5.5}$$
A new set of constraints for the controls can be formulated as
$$\sum_{i=1}^{m} u_i = 0,\qquad \sum_{i=1}^{m} E_i u_i = 0.$$
Then the evolution law should have the form
$$u_i = \gamma\left(\frac{\partial \dot H_\beta}{\partial u_i} - \lambda_1 - \lambda_2 E_i\right), \tag{5.7}$$
where λ1 and λ2 are chosen so that both conditions above hold. The solutions for λ1 and λ2 are given by the formulae
$$\lambda_2 = \frac{m\sum_i E_i g_i - \sum_i E_i\sum_i g_i}{m\sum_i E_i^{2} - \bigl(\sum_i E_i\bigr)^{2}},\qquad \lambda_1 = \frac{1}{m}\Bigl(\sum_i g_i - \lambda_2\sum_i E_i\Bigr), \tag{5.8}$$
where $g_i = \partial\dot H_\beta/\partial u_i$.
The general form of the evolution law can be obtained by substitution of λ1 and λ2 from (5.8) into equation (5.7). In abbreviated form, we represent this law as
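The two-constraint law can be simulated in the same way by projecting the speed gradient onto the subspace orthogonal to both constraint directions (here a Gram–Schmidt projection stands in for the explicit λ1, λ2 formulae; β=2 is chosen so that $N_i^{\beta-1}=N_i$, making the affine-in-energy equilibrium easy to verify; energies, gains and initial data are illustrative):

```python
import numpy as np

def project_out(u, constraints):
    """Remove from u its components along each constraint direction
    (orthonormalized by Gram-Schmidt), so the projected u is orthogonal to all."""
    basis = []
    for c in constraints:
        for b in basis:
            c = c - (c @ b) * b
        basis.append(c / np.linalg.norm(c))
    for b in basis:
        u = u - (u @ b) * b
    return u

beta, gamma, dt = 2.0, 50.0, 0.1
E = np.array([1.0, 2.0, 3.0, 4.0])      # illustrative, equally spaced cell energies
N = np.array([10.0, 30.0, 40.0, 20.0])
mass0, energy0 = N.sum(), N @ E
ones = np.ones_like(E)
for _ in range(20000):
    g = (beta / (1.0 - beta)) * N ** (beta - 1) / np.sum(N ** beta)
    N = N + dt * gamma * project_out(g, [ones, E])
# Both conservation laws hold along the whole trajectory:
print(np.isclose(N.sum(), mass0), np.isclose(N @ E, energy0))
# At equilibrium N_i^(beta-1) is affine in E_i (the RD property), so its
# second differences over equally spaced energies vanish:
print(np.allclose(np.diff(N ** (beta - 1), n=2), 0.0, atol=1e-6))
```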
As before, it can be shown that $V = H_\beta^{*} - H_\beta(N)$, where $H_\beta^{*}$ is the maximum value of the Rényi entropy under constraints (5.1) and (5.5), is a Lyapunov function, and that there is only one stable equilibrium state of the system in non-degenerate cases. We demonstrate this by evaluating $\dot V$:
We introduce a new scalar product function for two vectors $a,b\in\mathbb{R}^m$ as
$$\langle a,b\rangle = \sum_{i=1}^{m} a_i b_i - \frac{1}{m}\Bigl(\sum_{i=1}^{m} a_i\Bigr)\Bigl(\sum_{i=1}^{m} b_i\Bigr). \tag{5.12}$$
For scalar product (5.12), the CBS inequality is true:
$$\langle a,b\rangle^{2} \le \langle a,a\rangle\,\langle b,b\rangle. \tag{5.13}$$
Using inequality (5.13) for the vectors $f=(N_1^{\beta-1},\dots,N_m^{\beta-1})^{\mathrm T}$ and g=(E1,…,Em)T, we get for (5.11) that $\dot V \le 0$. And $\dot V = 0$ occurs only in the case when $N_i^{\beta-1} = \lambda_1' + \lambda_2' E_i$ for all i and some constants $\lambda_1', \lambda_2'$. Due to (5.7), at the equilibrium state of the system the following equalities hold:
(i) Final distribution and the Rényi distribution equivalence
Let us show that the distribution in (5.14) is the RD (4.3). If we multiply (5.14) by Ni and sum up over i, we obtain
Let us substitute Ni from (5.16) into (5.1). We get
It is evident that (5.19) satisfies the normalization constraint (5.1). Let us check that the second constraint for energy (5.5) is also satisfied.
Let us substitute Ni from (5.16) into (5.5). Then we get
6. System with continuous probability density function
Let us extend the same approach based on the SG principle to the case of continuous PDFs. Consider a system with a continuous distribution of possible states that evolves on a compact carrier Ω. The distribution over states is characterized by the PDF p(t,x), which is continuous everywhere except for a set of zero measure. It is true that
$$\int_{\Omega} p(t,x)\,\mathrm{d}x = 1. \tag{6.1}$$
The Rényi entropy for a continuous PDF is defined as
$$H_\beta(p) = \frac{1}{1-\beta}\,\ln\int_{\Omega} p(t,x)^{\beta}\,\mathrm{d}x. \tag{6.2}$$
Assume that the PDF evolves according to $\partial p/\partial t = u(t,x)$, where u is a control function to be determined. From constraint (6.1), it follows that
$$\int_{\Omega} u(t,x)\,\mathrm{d}x = 0.$$
According to the SG principle, we calculate the speed of the entropy change:
$$\dot H_\beta = \frac{\beta}{1-\beta}\,\frac{\int_{\Omega} p^{\beta-1}u\,\mathrm{d}x}{\int_{\Omega} p^{\beta}\,\mathrm{d}x}.$$
The gradient of $\dot H_\beta$ by u is equal to
$$\nabla_u \dot H_\beta = \frac{\beta}{1-\beta}\,\frac{p^{\beta-1}}{\int_{\Omega} p^{\beta}\,\mathrm{d}x}. \tag{6.4}$$
The SG principle of motion forms the evolution law:
$$u = \gamma\bigl(\nabla_u \dot H_\beta - \lambda\bigr),\qquad \gamma>0,$$
where λ is chosen so that $\int_\Omega u\,\mathrm{d}x = 0$. The final system dynamics equation has the following form:
$$\frac{\partial p}{\partial t} = \frac{\gamma\beta}{(1-\beta)\int_{\Omega} p^{\beta}\,\mathrm{d}x}\left(p^{\beta-1} - \frac{1}{|\Omega|}\int_{\Omega} p^{\beta-1}\,\mathrm{d}x\right), \tag{6.6}$$
where |Ω| is the measure of the carrier Ω.
Equation (6.6) can be represented in the more general form
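On a uniform grid the integrals become Riemann sums and the continuous SG law reduces to the discrete scheme of §5. A minimal sketch (Ω=[0,1], the initial PDF and all parameters are illustrative) shows that normalization is preserved and the PDF converges to the uniform density:

```python
import numpy as np

# Discretize the PDF on Omega = [0, 1]; integrals become Riemann sums.
m = 100
x = (np.arange(m) + 0.5) / m               # midpoint grid
dx = 1.0 / m
p = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # initial PDF, integrates to 1
beta, gamma, dt = 2.0, 1.0, 0.05           # illustrative parameters
for _ in range(5000):
    g = (beta / (1.0 - beta)) * p ** (beta - 1) / np.sum(p ** beta * dx)
    u = gamma * (g - np.sum(g * dx) / 1.0)  # project so integral of u is 0; |Omega| = 1
    p = p + dt * u
print(np.isclose(np.sum(p * dx), 1.0))      # True: normalization preserved
print(np.allclose(p, 1.0, atol=1e-3))       # True: p tends to the uniform density
```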
(a) Equilibrium stability
Let us investigate the stability of the equilibrium of the obtained equation (6.6). Consider the function $V(p) = \ln|\Omega| - H_\beta(p) \ge 0$, where $p^{*} = 1/|\Omega|$ is the uniform density maximizing the entropy under (6.1). The derivative of this function is
$$\dot V = -\dot H_\beta. \tag{6.8}$$
After substitution of the expression for u from (6.6) into (6.8), we obtain:
$$\dot V = -\gamma\left(\frac{\beta}{1-\beta}\right)^{2}\frac{1}{\bigl(\int_{\Omega} p^{\beta}\,\mathrm{d}x\bigr)^{2}}\left(\int_{\Omega} p^{2(\beta-1)}\,\mathrm{d}x - \frac{1}{|\Omega|}\Bigl(\int_{\Omega} p^{\beta-1}\,\mathrm{d}x\Bigr)^{2}\right). \tag{6.9}$$
Then we use the CBS inequality in integral form,
$$\Bigl(\int_{\Omega} fg\,\mathrm{d}x\Bigr)^{2} \le \int_{\Omega} f^{2}\,\mathrm{d}x\int_{\Omega} g^{2}\,\mathrm{d}x,$$
with $f=p^{\beta-1}$ and $g\equiv 1$: it yields $\bigl(\int_\Omega p^{\beta-1}\,\mathrm{d}x\bigr)^{2} \le |\Omega|\int_\Omega p^{2(\beta-1)}\,\mathrm{d}x$, and hence $\dot V \le 0$.
(b) Asymptotic convergence
To show an asymptotic convergence of all solutions to p*, we will use Barbalat's lemma [36].
Lemma 6.1 (Barbalat's lemma). If a differentiable function f(t) has a finite limit as $t\to\infty$ and its derivative $\dot f(t)$ is uniformly continuous, then $\dot f(t)\to 0$ as $t\to\infty$.
Theorem 6.2. For all PDFs defined by equation (6.6), it is true that $p(t,x)\to p^{*}$ as $t\to\infty$.
Proof. For the sake of simplicity, we define a notation for V in (6.9) as v(t)=V(p(t)). We use v(t) as the function f(t) in Barbalat's lemma to show that $\dot v(t)\to 0$. Because v(t)≥0 and $\dot v(t)\le 0$, the function v(t) has a finite limit as $t\to\infty$. It remains to show that $\dot v(t)$ is uniformly continuous; for this, consider the expression for $\ddot v(t)$.
Due to the constraint (6.1) and the compactness of the carrier Ω, it can be shown that the function $\ddot v(t)$ is bounded. This leads us to the fact that $\dot v(t)$ is uniformly continuous.
As all the necessary conditions of Barbalat's lemma for the differentiable function v(t) are satisfied, we have that $\dot v(t)\to 0$ as $t\to\infty$.
Taking into account that $\dot v(t)\to 0$ and $\dot V\le 0$, the expression for $\dot V$ from (6.9) shows that the CBS inequality used above tends to an equality. The degenerate case $\int_\Omega p^{\beta}\,\mathrm{d}x\to\infty$ conflicts with the constraint (6.1). Given (6.2), we obtain that $p^{\beta-1}$ tends to a constant function on Ω; this means that $\widehat{p^{\beta-1}}\to\hat 1$, where $\widehat{p^{\beta-1}}$ and $\hat 1$ are normalized values for $p^{\beta-1}$ and 1, respectively.
It follows that p(t,x) tends to the stationary distribution. As explained earlier, this distribution is unique. Thus, $p(t,x)\to p^{*}$ as $t\to\infty$. ▪
(c) Total energy constraint
The constraint (6.1) can be interpreted as the mass conservation law on the space Ω. Consider a system with an additional constraint of total energy conservation, i.e. a conservative case when the energy does not depend on time. Let h(x) be the energy density. The new constraint may be described as
$$\int_{\Omega} h(x)\,p(t,x)\,\mathrm{d}x = E. \tag{6.13}$$
The equation for the dynamics can be defined in the form
$$u = \gamma\bigl(\nabla_u \dot H_\beta - \lambda_1 - \lambda_2 h(x)\bigr). \tag{6.14}$$
Based on constraints (6.1) and (6.13), we can find expressions for the Lagrange multipliers λ1 and λ2 from the linear system
$$\lambda_1|\Omega| + \lambda_2\int_{\Omega} h\,\mathrm{d}x = \int_{\Omega} G\,\mathrm{d}x,\qquad \lambda_1\int_{\Omega} h\,\mathrm{d}x + \lambda_2\int_{\Omega} h^{2}\,\mathrm{d}x = \int_{\Omega} hG\,\mathrm{d}x, \tag{6.15}$$
where $G = \nabla_u\dot H_\beta$; hence
$$\lambda_2 = \frac{|\Omega|\int_\Omega hG\,\mathrm{d}x - \int_\Omega h\,\mathrm{d}x\int_\Omega G\,\mathrm{d}x}{|\Omega|\int_\Omega h^{2}\,\mathrm{d}x - \bigl(\int_\Omega h\,\mathrm{d}x\bigr)^{2}},\qquad \lambda_1 = \frac{1}{|\Omega|}\Bigl(\int_\Omega G\,\mathrm{d}x - \lambda_2\int_\Omega h\,\mathrm{d}x\Bigr).$$
The above equations are valid when the denominator is not equal to zero. If we use the CBS inequality for f=h and g≡1, then the following inequality becomes true:
$$\Bigl(\int_{\Omega} h\,\mathrm{d}x\Bigr)^{2} \le |\Omega|\int_{\Omega} h^{2}\,\mathrm{d}x.$$
This inequality becomes an equality only when h=const, which means that all energy levels coincide. This case is supposed to be degenerate and is not considered here. Thus $|\Omega|\int_\Omega h^{2}\,\mathrm{d}x - \bigl(\int_\Omega h\,\mathrm{d}x\bigr)^{2} > 0$ always holds.
The resulting equation for the dynamics can be obtained by substituting (6.4) and (6.15) into (6.14). It can be transformed to the brief form:
(i) Equilibrium stability
Let us examine the equilibrium of the obtained equation (6.14). We use the same Lyapunov function as in the previous section with only one constraint. For two constraints, the new expression for $\dot V$ is
Let us define a functional: for all f,g∈L2(Ω),
$$\langle f,g\rangle = |\Omega|\int_{\Omega} f(x)g(x)\,\mathrm{d}x - \int_{\Omega} f(x)\,\mathrm{d}x\int_{\Omega} g(x)\,\mathrm{d}x. \tag{6.20}$$
The new functional has several useful properties (the proof of each property is provided in appendix A):
(1) linearity in the first argument: ∀f1,f2,g∈L2(Ω) and a,b∈ℝ, 〈af1+bf2,g〉=a〈f1,g〉+b〈f2,g〉;
(2) symmetry ∀f,g∈L2(Ω)〈f,g〉=〈g,f〉;
(3) positiveness and the condition of zero value ∀f∈L2(Ω)〈f,f〉≥0, 〈f,f〉=0⇔f=μ=const.
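Assuming the centered form (6.20), $\langle f,g\rangle=|\Omega|\int_\Omega fg\,\mathrm{d}x-\int_\Omega f\,\mathrm{d}x\int_\Omega g\,\mathrm{d}x$ (our reading of the definition, consistent with properties 1–3), the three properties can be checked numerically on a grid over Ω=[0,1]:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000
dx = 1.0 / m                      # Omega = [0, 1], so |Omega| = 1

def inner(f, g):
    """Centered functional <f, g> = |Omega| * int(f g) - int(f) * int(g),
    with integrals approximated by Riemann sums on [0, 1]."""
    return np.sum(f * g * dx) - np.sum(f * dx) * np.sum(g * dx)

f, g = rng.normal(size=m), rng.normal(size=m)
a, b = 2.0, -3.0
print(np.isclose(inner(a * f + b * g, g), a * inner(f, g) + b * inner(g, g)))  # linearity
print(np.isclose(inner(f, g), inner(g, f)))                                    # symmetry
print(inner(f, f) >= 0)                                                        # positiveness
print(np.isclose(inner(np.full(m, 7.0), np.full(m, 7.0)), 0.0))  # zero on constants
```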
Let us prove inequality (6.19) based on properties 1–3.
It is obvious that for any f,g∈L2(Ω) and any λ∈ℝ, it is true that f−λg∈L2(Ω). This function has property 3: 〈f−λg,f−λg〉≥0. Using properties 1 and 2, we obtain the quadratic inequality with respect to λ:
$$\langle g,g\rangle\lambda^{2} - 2\langle f,g\rangle\lambda + \langle f,f\rangle \ge 0.$$
This inequality holds for any real λ. Hence the discriminant cannot be positive,
$$\langle f,g\rangle^{2} - \langle f,f\rangle\,\langle g,g\rangle \le 0. \tag{6.21}$$
If equality takes place in (6.21), then there exists a unique solution λ of the equation 〈f−λg,f−λg〉=0. But then, by property 3, we have $f-\lambda g=\mu=\mathrm{const}$.
Substituting f=pβ−1, g=h into inequality (6.21), we get
$$\langle p^{\beta-1},h\rangle^{2} \le \langle p^{\beta-1},p^{\beta-1}\rangle\,\langle h,h\rangle. \tag{6.19}$$
Note that equality in (6.19) holds if and only if $p^{\beta-1}-\lambda h = \mathrm{const}$ for some λ, i.e. $p^{\beta-1}$ is an affine function of h.
According to the SG for H (6.4), expression (6.14) can be rewritten for the case of equilibrium as
It can also be shown that p*(β) corresponds to the continuous form of the RD (4.4). To do this, we write expression (6.23) as
Following a similar approach to that used for a discrete case, expression (6.24) can be formulated as RD:
(ii) Asymptotic convergence
We will prove asymptotic convergence similarly to the case with one constraint (see theorem 6.2).
Theorem 6.3. For all PDFs defined by equation (6.14), it is true that $p(t,x)\to p^{*}(\beta)$ as $t\to\infty$.
Proof. To use Barbalat's lemma, we have to check that the function $\ddot v(t)$ is bounded. Based on the expression for $\dot V$ in (6.18) and following logic similar to that used in the proof of theorem 6.2, we can conclude that $\ddot v(t)$ is bounded on the compact carrier Ω. According to Barbalat's lemma, it is true that $\dot v(t)\to 0$ as $t\to\infty$.
We introduce a scalar product as the functional 〈f,g〉 defined in (6.20). Writing ∥f∥2=〈f,f〉, expression (6.18) can be rewritten as
Since $\dot v(t)\to 0$, consider the CBS inequality
Given (6.25), (6.26) and $\dot v(t)\to 0$, we have that the CBS inequality tends to an equality. This implies that $\widehat{p^{\beta-1}}\to\hat h$, where $\widehat{p^{\beta-1}}$ and $\hat h$ are normalized values for $p^{\beta-1}$ and h, respectively. Thus p tends to the unique stationary distribution $p^{*}(\beta)$, since h does not depend on time. ▪
7. Conclusion
The Rényi entropy is widely used in communication and coding theory, quantum information theory, signal processing, data mining and many other areas [4],[5]. Stationary states which maximize the Rényi entropy have already been well investigated. The MaxEnt principle defines the asymptotic behaviour of the system, but it does not answer the question of how the system moves towards this asymptotic behaviour.
We have investigated non-stationary states of processes that follow the MaxEnt principle for the Rényi entropy. We have derived equations (5.3), (5.9), (6.6) and (6.16) which describe the dynamics of the PDF for the system that tends to the state with maximum Rényi entropy. Systems with discrete probability distribution and continuous PDFs were considered under mass conservation and energy conservation constraints. We have shown that the limit PDF p*(β) is unique and corresponds to the RD. We have also proved the convergence of PDFs with dynamics described by equations (5.3), (5.9), (6.6) and (6.16) to PDF p*(β), which corresponds to the state with the maximum value of the Rényi entropy.
The key point of our approach is to use the SG method with the goal function chosen as the Rényi entropy of the process. The SG principle originates from control theory and it generates equations for the transient (non-stationary) states of the system's operation which help to track how the system evolves to the steady state.
There are many other generalizations of the Shannon entropy, e.g. the Tsallis entropy [37], the Khinchin entropy [38], the Burbea–Rao entropy [39], the Cressie–Read family [40], the Burg entropy [41], f-divergences [31], etc. Investigation of these entropies based on the SG principle seems to be promising for further investigations. A comparative analysis of the dynamics for different kinds of entropy may be performed.
Data accessibility
All methods and corresponding data are given in the manuscript and are fully reproducible.
Authors' contributions
A.L.F. designed the study of applying the SG principle to non-stationary processes which tend to maximize their entropy. D.S.S. derived the equations for the Rényi entropy and prepared the manuscript. Both authors gave final approval for publication.
Competing interests
We have no competing interests.
Funding
This work was supported by SPbU grant nos. 6.37.181.2014 and 6.38.230.2015. Formulation of the SG principle for the Rényi entropy (§5) was performed in IPME RAS and supported by the RSF (grant no. 14-29-00142).
Appendix A. Extra materials
Here, we prove the properties of the functional 〈f,g〉 (6.20) from §6c(i).
(1) Linearity in the first argument.
Proof. Using the linearity of the integral in each term of the definition (6.20), we obtain the required equality directly. ▪
(2) Symmetry: ∀f,g∈L2(Ω), 〈f,g〉=〈g,f〉.
Proof. Symmetry follows immediately from the definition, in which f and g enter symmetrically. ▪
(3) Positiveness and the condition of zero value: ∀f∈L2(Ω), 〈f,f〉≥0 and 〈f,f〉=0⇔f=μ=const.
Proof. Let us consider the ordinary scalar product $(f,g)=\int_\Omega f(x)g(x)\,\mathrm{d}x$. The CBS inequality holds for it: |(f,g)|2≤(f,f)(g,g). Substituting g≡1, we get $\bigl(\int_\Omega f\,\mathrm{d}x\bigr)^{2} \le |\Omega|\int_\Omega f^{2}\,\mathrm{d}x$. Thereby, it is true that 〈f,f〉≥0. Moreover, 〈f,f〉=0 only when equality is attained in the CBS inequality, i.e. when f is proportional to the constant function, i.e. f=μ=const. ▪
References
1. Martyushev L, Seleznev V. 2006 Maximum entropy production principle in physics, chemistry and biology. Phys. Rep. 426, 1–45. (doi:10.1016/j.physrep.2005.12.001)
2. Shannon C. 1948 A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423. (doi:10.1002/j.1538-7305.1948.tb01338.x)
3. Rényi A. 1961 On measures of entropy and information. Proc. Fourth Berkeley Symp. Math. Stat. Probab. 1, 547–561.
4. Wilde MM. 2015 Recoverability in quantum information theory. Proc. R. Soc. A 471, 20150338. (doi:10.1098/rspa.2015.0338)
5. Bercher J-F. 2008 On some entropy functionals derived from Rényi information divergence. Inform. Sci. 178, 2489–2506. (doi:10.1016/j.ins.2008.02.003)
6. Jaynes E. 1980 The minimum entropy production principle. Annu. Rev. Phys. Chem. 31, 579–601. (doi:10.1146/annurev.pc.31.100180.003051)
7. Jaynes E. 1957 Information theory and statistical mechanics I. Phys. Rev. 106, 620–630. (doi:10.1103/PhysRev.106.620)
8. Jaynes E. 1957 Information theory and statistical mechanics II. Phys. Rev. 108, 171–190. (doi:10.1103/PhysRev.108.171)
9. Ortega PA, Braun DA. 2013 Thermodynamics as a theory of decision-making with information-processing costs. Proc. R. Soc. A 469, 20120683. (doi:10.1098/rspa.2012.0683)
10. Lucia U. 2008 Probability, ergodicity, irreversibility and dynamical systems. Proc. R. Soc. A 464, 1089–1104. (doi:10.1098/rspa.2007.0304)
11. Martyushev LM. 2013 Entropy and entropy production: old misconceptions and new breakthroughs. Entropy 15, 1152–1170. (doi:10.3390/e15041152)
12. Nagy Á, Romera E. 2009 Maximum Rényi entropy principle and the generalized Thomas–Fermi model. Phys. Lett. A 373, 844–846. (doi:10.1016/j.physleta.2009.01.004)
13. Fradkov AL. 2008 Speed-gradient entropy principle for nonstationary processes. Entropy 10, 757–764. (doi:10.3390/e10040757)
14. Fradkov A, Miroshnik I, Nikiforov V. 1999 Nonlinear and adaptive control of complex systems. Dordrecht, The Netherlands: Kluwer Academic Publishers.
15. Fradkov A, Krivtsov A. 2011 Speed-gradient principle for description of transient dynamics in systems obeying maximum entropy principle. In Proc. of the 30th Int. Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Chamonix, France, 4–9 July 2010. AIP Conference Proceedings, vol. 1305, pp. 399–406. College Park, MD: American Institute of Physics.
16. Fradkov A. 1979 Speed-gradient scheme and its application in adaptive-control problems. Autom. Remote Control 40, 1333–1342.
17. Fradkov AL. 2007 Cybernetical physics: from control of chaos to quantum control. Berlin, Germany: Springer.
18. Fradkov AL, Shalymov DS. 2015 Dynamics of non-stationary nonlinear processes that follow the maximum of differential entropy principle. Commun. Nonlinear Sci. Numer. Simul. 29, 488–498. (doi:10.1016/j.cnsns.2015.06.001)
19. Fradkov AL, Shalymov DS. 2015 Speed-gradient and MaxEnt principles for Shannon and Tsallis entropies. Entropy 17, 1090–1102. (doi:10.3390/e17031090)
20.
21. Hick P, Stevens G. 1987 Minimum Kullback entropy approach to the Fokker–Planck equation. Astron. Astrophys. 172, 350.
22. Plastino AR, Miller HG, Plastino A. 1997 Minimum Kullback entropy approach to the Fokker–Planck equation. Phys. Rev. E 56, 3927. (doi:10.1103/PhysRevE.56.3927)
23. Grmela M, Öttinger H. 1997 Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Phys. Rev. E 56, 6620. (doi:10.1103/PhysRevE.56.6620)
24. Öttinger H, Grmela M. 1997 Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Phys. Rev. E 56, 6633. (doi:10.1103/PhysRevE.56.6633)
25. Öttinger H. 1998 General projection operator formalism for the dynamics and thermodynamics of complex fluids. Phys. Rev. E 57, 1416–1420. (doi:10.1103/PhysRevE.57.1416)
26. Ho S-W, Yeung RW. 2010 The interplay between entropy and variational distance. IEEE Trans. Inform. Theory 56, 5906–5929. (doi:10.1109/TIT.2010.2080452)
27. Ho S-W, Yeung RW. 2010 On information divergence measures and a unified typicality. IEEE Trans. Inform. Theory 56, 5893–5905. (doi:10.1109/TIT.2010.2080431)
28. Shore JE, Johnson RW. 1980 Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inform. Theory 26, 26–37. (doi:10.1109/TIT.1980.1056144)
29. Ali SA, Cafaro C, Giffin A, Kim DH. 2012 Complexity characterization in a probabilistic approach to dynamical systems through information geometry and inductive inference. Phys. Scr. 85, 025009. (doi:10.1088/0031-8949/85/02/025009)
30. Cafaro C. 2008 Information geometry, inference methods and chaotic energy levels statistics. Mod. Phys. Lett. B 22, 1879–1892. (doi:10.1142/S0217984908016558)
31. Morimoto T. 1963 Markov processes and the H-theorem. J. Phys. Soc. Jpn 12, 328–331.
32. Lanczos K. 1962 The variational principles of mechanics. Toronto, Canada: University of Toronto Press.
33.
34. Bashkirov AG. 2004 Maximum Rényi entropy principle for systems with power-law Hamiltonians. Phys. Rev. Lett. 93, 130601. (doi:10.1103/PhysRevLett.93.130601)
35. Bashkirov AG. 2004 On maximum entropy principle, superstatistics, power-law distribution and Rényi parameter. Physica A 340, 153–162. (doi:10.1016/j.physa.2004.04.002)
36.
37. Tsallis C. 1988 Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 52, 479–487. (doi:10.1007/BF01016429)
38.
39. Burbea J, Rao C. 1982 On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inform. Theory 28, 489–495. (doi:10.1109/TIT.1982.1056497)
40. Cressie N, Read T. 1984 Multinomial goodness of fit tests. J. R. Stat. Soc. Ser. B 46, 440–464.
41. Burg J. 1972 The relationship between maximum entropy spectra and maximum likelihood spectra. Geophysics 37, 375–376. (doi:10.1190/1.1440265)


