The Entropy of Conditional Markov Trajectories
To quantify the randomness of Markov trajectories with fixed initial and
final states, Ekroot and Cover proposed a closed-form expression for the
entropy of trajectories of an irreducible finite state Markov chain. Numerous
applications, including the study of random walks on graphs, require the
computation of the entropy of Markov trajectories conditioned on a set of
intermediate states. However, the expression of Ekroot and Cover does not allow
for computing this quantity. In this paper, we propose a method to compute the
entropy of conditional Markov trajectories through a transformation of the
original Markov chain into a Markov chain that exhibits the desired conditional
distribution of trajectories. Moreover, we express the entropy of Markov
trajectories - a global quantity - as a linear combination of local entropies
associated with the Markov chain states. Comment: Accepted for publication in IEEE Transactions on Information Theory
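The linear-combination view mentioned in the abstract has a simple unconditional
analogue: the entropy rate of an irreducible chain is the stationary-weighted sum
of the per-state (local) row entropies. Below is a minimal Python sketch of that
decomposition on a toy transition matrix; the conditional-trajectory
transformation proposed in the paper is not reproduced here.

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def local_entropies(P):
    """Entropy (in bits) of each row of the transition matrix."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, -P * np.log2(P), 0.0)
    return terms.sum(axis=1)

# Toy irreducible chain (illustrative, not taken from the paper).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

pi = stationary_distribution(P)
H_local = local_entropies(P)
entropy_rate = pi @ H_local  # global quantity as a weighted sum of local entropies
print(entropy_rate)
```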
Conditional reversibility in nonequilibrium stochastic systems
For discrete-state stochastic systems obeying Markovian dynamics, we
establish the counterpart of the conditional reversibility theorem obtained by
Gallavotti for deterministic systems [Ann. de l'Institut Henri Poincaré (A)
70, 429 (1999)]. Our result states that stochastic trajectories conditioned on
opposite values of entropy production are related by time reversal, in the
long-time limit. In other words, the probability of observing a particular
sequence of events, given a long trajectory with a specified entropy production
rate, is the same as the probability of observing the time-reversed sequence of
events, given a trajectory conditioned on the opposite entropy production rate,
where both trajectories are sampled from the same underlying Markov process. To
obtain our result, we use an equivalence between
conditioned ("microcanonical") and biased ("canonical") ensembles of
nonequilibrium trajectories. We provide an example to illustrate our findings. Comment: 13 pages, 1 figure
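The result rests on the standard identification of trajectory entropy production
with the log-ratio of forward and time-reversed path probabilities. A minimal
Python sketch of that bookkeeping for a toy stationary chain follows; the
transition matrix is illustrative, and the conditioned ("microcanonical") versus
biased ("canonical") ensemble equivalence used in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state chain with nonzero entropy production (illustrative only).
P = np.array([[0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6],
              [0.6, 0.3, 0.1]])

# Stationary distribution (left eigenvector for eigenvalue 1).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

def sample_trajectory(P, pi, T):
    """Sample a length-T trajectory started from the stationary distribution."""
    x = [rng.choice(len(pi), p=pi)]
    for _ in range(T - 1):
        x.append(rng.choice(P.shape[0], p=P[x[-1]]))
    return x

def log_path_prob(x, P, pi):
    """Log probability (nats) of a trajectory under the stationary chain."""
    lp = np.log(pi[x[0]])
    for a, b in zip(x[:-1], x[1:]):
        lp += np.log(P[a, b])
    return lp

traj = sample_trajectory(P, pi, T=1000)
# Total entropy production along the trajectory: log P(forward) - log P(reversed).
sigma = log_path_prob(traj, P, pi) - log_path_prob(traj[::-1], P, pi)
print(sigma)
```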
On the Inability of Markov Models to Capture Criticality in Human Mobility
We examine the non-Markovian nature of human mobility by exposing the
inability of Markov models to capture criticality in human mobility. In
particular, the assumed Markovian nature of mobility was used to establish a
theoretical upper bound on the predictability of human mobility (expressed as a
minimum error probability limit), based on temporally correlated entropy. Since
its inception, this bound has been widely used and empirically validated using
Markov chains. We show that recurrent-neural architectures can achieve
significantly higher predictability, surpassing this widely used upper bound.
To explain this anomaly, we shed light on several underlying assumptions in
previous research that have resulted in this bias. By
evaluating the mobility predictability on real-world datasets, we show that
human mobility exhibits scale-invariant long-range correlations, bearing
similarity to a power-law decay, in contrast to the initial assumption that
human mobility follows an exponential decay. This exponential-decay assumption,
coupled with Lempel-Ziv-based entropy estimation fed into Fano's inequality, has
led to an inaccurate estimate of the predictability upper bound. We show that
this approach inflates the entropy estimate and consequently lowers
the upper bound on human mobility predictability. We finally highlight that
this approach tends to overlook long-range correlations in human mobility. This
explains why recurrent-neural architectures that are designed to handle
long-range structural correlations surpass the previously computed upper bound
on mobility predictability.
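The bound being critiqued is typically computed by plugging a Lempel-Ziv-style
entropy estimate into Fano's inequality and solving for the maximum
predictability. A minimal, naive Python sketch of that pipeline on a toy symbol
sequence is given below; the function names and toy data are illustrative, not
from the paper.

```python
import math

def lz_entropy(seq):
    """Lempel-Ziv entropy estimate in bits per symbol (naive O(n^3) scan)."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        # Length of the shortest substring starting at i not seen in seq[:i].
        k = 1
        while i + k <= n and _contains(seq[:i], seq[i:i + k]):
            k += 1
        lambdas.append(k)
    return n * math.log2(n) / sum(lambdas)

def _contains(history, pattern):
    m, k = len(history), len(pattern)
    return any(history[j:j + k] == pattern for j in range(m - k + 1))

def max_predictability(entropy, n_symbols):
    """Solve Fano's inequality H = H_b(p) + (1 - p) * log2(N - 1) for p."""
    def fano(p):
        hb = 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return hb + (1 - p) * math.log2(n_symbols - 1)
    lo, hi = 1.0 / n_symbols, 1.0
    for _ in range(60):  # binary search; fano() is decreasing on [1/N, 1]
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fano(mid) > entropy else (lo, mid)
    return (lo + hi) / 2

seq = list("abcabcabdabcabc")  # toy location sequence (illustrative)
H = lz_entropy(seq)
print(H, max_predictability(H, n_symbols=len(set(seq))))
```

The abstract's argument is that this entropy estimate is inflated for sequences
with long-range correlations, so the resulting value is not a true ceiling on
predictability.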
Markov processes follow from the principle of Maximum Caliber
Markov models are widely used to describe processes of stochastic dynamics.
Here, we show that Markov models are a natural consequence of the dynamical
principle of Maximum Caliber. First, we show that when there are different
possible dynamical trajectories in a time-homogeneous process, the only type of
process that maximizes the path entropy, for any given singlet statistics, is a
sequence of independent, identically distributed (i.i.d.) random variables,
which is the simplest Markov process. If the data are in the form of sequential
pairwise statistics, then maximizing the caliber dictates
that the process is Markovian with a uniform initial distribution. Furthermore,
if an initial non-uniform dynamical distribution is known, or multiple
trajectories are conditioned on an initial state, then the Markov process is
still the only one that maximizes the caliber. Second, given a model, MaxCal
can be used to compute the parameters of that model. We show that this
procedure is equivalent to the maximum-likelihood method of inference in the
theory of statistics. Comment: 4 pages
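The stated equivalence with maximum likelihood is easy to make concrete for
sequential pairwise data: the caliber-maximizing transition probabilities reduce
to the familiar count-and-normalize estimator. A minimal Python sketch on a toy
trajectory follows; the function name, the toy data, and the uniform fallback
for unvisited states are illustrative choices, not taken from the paper.

```python
import numpy as np

def ml_transition_matrix(seq, n_states):
    """Maximum-likelihood (count-and-normalize) estimate of a Markov
    transition matrix from an observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Unvisited states get a uniform row (an arbitrary sketch choice).
    return np.divide(counts, rows,
                     out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

# Toy trajectory over 3 states (illustrative data).
seq = [0, 1, 2, 1, 0, 0, 2, 1, 1, 0, 2, 2, 1, 0]
print(ml_transition_matrix(seq, n_states=3))
```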