The Entropy of Conditional Markov Trajectories
To quantify the randomness of Markov trajectories with fixed initial and
final states, Ekroot and Cover proposed a closed-form expression for the
entropy of trajectories of an irreducible finite state Markov chain. Numerous
applications, including the study of random walks on graphs, require the
computation of the entropy of Markov trajectories conditioned on a set of
intermediate states. However, the expression of Ekroot and Cover does not allow
for computing this quantity. In this paper, we propose a method to compute the
entropy of conditional Markov trajectories through a transformation of the
original Markov chain into a Markov chain that exhibits the desired conditional
distribution of trajectories. Moreover, we express the entropy of Markov
trajectories, a global quantity, as a linear combination of local entropies
associated with the Markov chain states.
Comment: Accepted for publication in IEEE Transactions on Information Theory
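The local-entropy decomposition lends itself to a direct computation. Below is a minimal sketch (illustrative code, not the authors'; the chain P and the helper trajectory_entropy are made up for the example): the entropy of trajectories from state i until the first arrival at state j is obtained by weighting each state's local transition entropy by its expected number of visits, read off the fundamental matrix of the chain with j made absorbing.

```python
import numpy as np

def trajectory_entropy(P, i, j):
    """Entropy (in bits) of trajectories from state i until first arrival at j.

    Local-entropy decomposition: each visit to a state k contributes one
    transition drawn from P[k], so the total entropy is the expected number
    of visits to k (fundamental matrix with j absorbing) times H_k.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log2(P), 0.0)
    H_local = -(P * logs).sum(axis=1)            # local entropy of each state

    keep = [k for k in range(len(P)) if k != j]  # make j absorbing
    Q = P[np.ix_(keep, keep)]
    N = np.linalg.inv(np.eye(len(keep)) - Q)     # fundamental matrix
    visits = N[keep.index(i)]                    # expected visits starting from i
    return visits @ H_local[keep]

# Example: a small irreducible three-state chain.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
print(trajectory_entropy(P, i=0, j=2))
```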
Conditional reversibility in nonequilibrium stochastic systems
For discrete-state stochastic systems obeying Markovian dynamics, we
establish the counterpart of the conditional reversibility theorem obtained by
Gallavotti for deterministic systems [Ann. de l'Institut Henri Poincaré (A)
70, 429 (1999)]. Our result states that stochastic trajectories conditioned on
opposite values of entropy production are related by time reversal, in the
long-time limit. In other words, the probability of observing a particular
sequence of events, given a long trajectory with a specified entropy production
rate $\sigma$, is the same as the probability of observing the time-reversed
sequence of events, given a trajectory conditioned on the opposite entropy
production rate $-\sigma$, where both trajectories are sampled from the same
underlying Markov process. To obtain our result, we use an equivalence between
conditioned ("microcanonical") and biased ("canonical") ensembles of
nonequilibrium trajectories. We provide an example to illustrate our findings.
Comment: 13 pages, 1 figure
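To make the conditioning variable concrete, the sketch below (an assumed setup, not taken from the paper) samples a long trajectory of a driven three-state ring and evaluates its total entropy production, $\sum_t \log[P(x_t, x_{t+1}) / P(x_{t+1}, x_t)]$; reversing the trajectory flips the sign of this sum, which is the symmetry the theorem builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectory(P, x0, T):
    """Sample T steps of the Markov chain with transition matrix P."""
    traj = [x0]
    for _ in range(T):
        traj.append(rng.choice(len(P), p=P[traj[-1]]))
    return traj

def entropy_production(P, traj):
    """Total entropy production: sum_t log(P[x_t, x_{t+1}] / P[x_{t+1}, x_t])."""
    return sum(np.log(P[a, b] / P[b, a]) for a, b in zip(traj, traj[1:]))

# A three-state ring driven out of equilibrium (detailed balance is violated).
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
traj = sample_trajectory(P, x0=0, T=10_000)
print("entropy production rate:", entropy_production(P, traj) / 10_000)
print("time-reversed trajectory:", entropy_production(P, traj[::-1]) / 10_000)
```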
Inferring Microscopic Kinetic Rates from Stationary State Distributions
We present a principled approach for estimating the matrix of microscopic transition probabilities among states of a Markov process, given only its stationary state population distribution and a single average global kinetic observable. We adapt Maximum Caliber, a variational principle in which the path entropy is maximized over the distribution of all possible trajectories, subject to basic kinetic constraints and some average dynamical observables. We illustrate the method by computing the solvation dynamics of water molecules from molecular dynamics trajectories.
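A toy version of this inference problem can be set up numerically. In the sketch below, the stationary distribution pi, the per-jump observable A, and its target average are invented for illustration, and a generic constrained optimizer stands in for the analytic Maximum Caliber solution: maximize the path entropy rate over transition matrices subject to row normalization, stationarity of pi, and the single global kinetic constraint.

```python
import numpy as np
from scipy.optimize import minimize

n = 3
pi = np.array([0.5, 0.3, 0.2])      # prescribed stationary distribution
A = np.array([[0.0, 1.0, 2.0],      # hypothetical per-jump observable a(i, j)
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
a_target = 0.9                      # prescribed average of a per jump

def unpack(x):
    return x.reshape(n, n)

def neg_caliber(x):
    """Minus the path entropy rate, sum_ij pi_i p_ij log p_ij."""
    P = unpack(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    return np.sum(pi[:, None] * P * logs)

constraints = [
    {"type": "eq", "fun": lambda x: unpack(x).sum(axis=1) - 1.0},  # rows normalized
    {"type": "eq", "fun": lambda x: pi @ unpack(x) - pi},          # pi is stationary
    {"type": "eq",                                                 # kinetic constraint
     "fun": lambda x: np.sum(pi[:, None] * unpack(x) * A) - a_target},
]
res = minimize(neg_caliber, np.full(n * n, 1.0 / n),
               bounds=[(1e-9, 1.0)] * (n * n),
               constraints=constraints, method="SLSQP")
print(unpack(res.x))
```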
Markov processes follow from the principle of Maximum Caliber
Markov models are widely used to describe processes of stochastic dynamics.
Here, we show that Markov models are a natural consequence of the dynamical
principle of Maximum Caliber. First, we show that when there are different
possible dynamical trajectories in a time-homogeneous process, then the only
type of process that maximizes the path entropy, for any given singlet
statistics, is a sequence of independent, identically distributed (i.i.d.)
random variables, which is the simplest Markov process. If the data is in the
form of sequentially pairwise statistics, then maximizing the caliber dictates
that the process is Markovian with a uniform initial distribution. Furthermore,
if an initial non-uniform dynamical distribution is known, or multiple
trajectories are conditioned on an initial state, then the Markov process is
still the only one that maximizes the caliber. Second, given a model, MaxCal
can be used to compute the parameters of that model. We show that this
procedure is equivalent to the maximum-likelihood method of inference in the
theory of statistics.
Comment: 4 pages
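For pairwise statistics, the stated equivalence with maximum likelihood reduces to the familiar counting estimate $p_{ij} = n_{ij} / \sum_j n_{ij}$. A minimal sketch (illustrative function name and data):

```python
import numpy as np

def mle_transition_matrix(traj, n_states):
    """MLE (equivalently, the MaxCal result) from pairwise jump counts n_ij."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj, traj[1:]):
        counts[a, b] += 1
    totals = counts.sum(axis=1, keepdims=True)
    safe = np.where(totals > 0, totals, 1.0)
    # Rows never visited default to uniform.
    return np.where(totals > 0, counts / safe, 1.0 / n_states)

traj = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
print(mle_transition_matrix(traj, n_states=3))
```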
Non-equilibrium steady states: maximization of the Shannon entropy associated to the distribution of dynamical trajectories in the presence of constraints
Filyokov and Karpov [Inzhenerno-Fizicheskii Zhurnal 13, 624 (1967)] have
proposed a theory of non-equilibrium steady states in direct analogy with the
theory of equilibrium states: the principle is to maximize the Shannon entropy
associated with the probability distribution of dynamical trajectories in the
presence of constraints, including the macroscopic current of interest, via the
method of Lagrange multipliers. This maximization leads directly to a
generalized Gibbs distribution for the probability distribution of dynamical
trajectories, and to a fluctuation relation for the integrated current. The simplest
stochastic dynamics where these ideas can be applied are discrete-time Markov
chains, defined by transition probabilities $W(C \to C')$ between
configurations $C$ and $C'$: instead of choosing the dynamical rules a priori,
one determines the transition probabilities and the associated stationary
state that maximize the entropy of dynamical trajectories subject to the
other physical constraints that one wishes to impose. We give a self-contained
and unified presentation of this type of approach, both for discrete-time
Markov Chains and for continuous-time Master Equations. The obtained results
are in full agreement with the Bayesian approach introduced by Evans [Phys.
Rev. Lett. 92, 150601 (2004)] under the name 'Non-equilibrium Counterpart to
detailed balance', and with the 'invariant quantities' derived by Baule and
Evans [Phys. Rev. Lett. 101, 240601 (2008)], but provide a slightly different
perspective via the formulation in terms of an eigenvalue problem.
Comment: v4 = final version
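The eigenvalue formulation can be made concrete with the standard tilted-matrix construction (a sketch under assumed inputs; the chain P, the per-jump current J, and the bias s are illustrative): the Perron eigenvalue of $P_s(C, C') = P(C, C')\, e^{s\, j(C, C')}$ generates the statistics of the integrated current, and the Doob transform with the corresponding right eigenvector turns $P_s$ back into the stochastic matrix of the biased trajectory ensemble.

```python
import numpy as np

def biased_chain(P, J, s):
    """Effective Markov chain realizing the s-biased trajectory ensemble.

    Tilts P by the per-jump current J, P_s = P * exp(s * J); log of the
    Perron eigenvalue is the scaled cumulant generating function of the
    current, and the Doob transform with the right Perron eigenvector r
    turns P_s back into a stochastic matrix.
    """
    Ps = P * np.exp(s * J)
    eigvals, eigvecs = np.linalg.eig(Ps)
    k = np.argmax(eigvals.real)          # Perron eigenvalue is real and positive
    lam, r = eigvals[k].real, eigvecs[:, k].real
    r = r / r.sum()                      # fix the sign and scale of the eigenvector
    P_eff = Ps * r[None, :] / (lam * r[:, None])
    return np.log(lam), P_eff

# Driven three-state ring; J counts clockwise jumps minus counterclockwise ones.
P = np.array([[0.0, 0.7, 0.3],
              [0.3, 0.0, 0.7],
              [0.7, 0.3, 0.0]])
J = np.array([[0, 1, -1],
              [-1, 0, 1],
              [1, -1, 0]])
scgf, P_eff = biased_chain(P, J, s=0.5)
print(scgf)
print(P_eff.sum(axis=1))                 # rows of the Doob transform sum to 1
```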