
    Surprise probabilities in Markov chains

    In a Markov chain started at a state $x$, the hitting time $\tau(y)$ is the first time that the chain reaches another state $y$. We study the probability $\mathbf{P}_x(\tau(y) = t)$ that the first visit to $y$ occurs precisely at a given time $t$. Informally speaking, the event that a new state is visited at a large time $t$ may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$. 2) In a reversible chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{\sqrt{2n}}{t}$ for $t \ge 4n + 4$. 3) For random walk on a simple graph with $n \ge 2$ vertices, $\mathbf{P}_x(\tau(y) = t) \le \frac{4e \log n}{t}$. We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): for random walk on an $n$-vertex graph, for every initial vertex $x$, \[ \sum_y \left( \sup_{t \ge 0} p^t(x, y) \right) = O(\log n). \]
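The first bound above can be checked empirically. The following is a minimal simulation sketch (not from the paper): a lazy simple random walk on an $n$-cycle, with the empirical distribution of the hitting time compared against the bound $\mathbf{P}_x(\tau(y)=t) \le n/t$; the chain, states and trial counts are illustrative choices.

```python
import random

def hitting_time_on_cycle(n, x, y, rng):
    """Steps of a lazy simple random walk on the n-cycle until first visit to y."""
    state, t = x, 0
    while state != y:
        state = (state + rng.choice((-1, 0, 1))) % n  # laziness avoids periodicity
        t += 1
    return t

rng = random.Random(0)
n, x, y, trials = 10, 0, 5, 20000
counts = {}
for _ in range(trials):
    t = hitting_time_on_cycle(n, x, y, rng)
    counts[t] = counts.get(t, 0) + 1

# Empirical check of bound 1): P_x(tau(y) = t) <= n / t, with a small
# slack term for sampling noise.
for t, c in sorted(counts.items()):
    assert c / trials <= n / t + 0.02
```

The bound holds comfortably here; the point of the paper's estimates is that they need no more structural information about the chain than the state count.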

    Generalized Markov stability of network communities

    We address the problem of community detection in networks by introducing a general definition of Markov stability, based on the difference between the probability fluxes of a Markov chain on the network at different time scales. The specific implementation of the quality function and the resulting optimal community structure thus become dependent both on the type of Markov process and on the specific Markov times considered. For instance, if we use a natural Markov chain dynamics and discount its stationary distribution -- that is, we take as reference process the dynamics at infinite time -- we obtain the standard formulation of Markov stability. Notably, the possibility to use finite-time transition probabilities to define the reference process naturally allows detecting communities at different resolutions, without the need to consider a continuous-time Markov chain in the small-time limit. The main advantage of our general formulation of Markov stability based on dynamical flows is that we work with lumped Markov chains on network partitions, having the same stationary distribution as the original process. In this way the form of the quality function becomes invariant under partitioning, leading to a self-consistent definition of community structures at different aggregation scales.
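As a toy illustration of the general idea (not the paper's exact quality function), the standard discrete-time Markov stability of a partition compares $t$-step probability fluxes within communities against the stationary reference process. A minimal sketch, with an illustrative two-triangle graph:

```python
import numpy as np

def markov_stability(A, H, t):
    """Discrete-time Markov stability of partition indicator matrix H at Markov time t,
    using the stationary distribution of the natural random walk as reference."""
    d = A.sum(axis=1)
    P = A / d[:, None]                 # transition matrix of the natural walk
    pi = d / d.sum()                   # stationary distribution
    Pt = np.linalg.matrix_power(P, t)  # t-step transition probabilities
    flux = np.diag(pi) @ Pt - np.outer(pi, pi)
    return np.trace(H.T @ flux @ H)

# Two triangles joined by a single edge; the natural two-community split
# scores higher than the trivial all-in-one partition at t = 1.
A = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
              [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], float)
H_split = np.zeros((6, 2)); H_split[:3, 0] = 1; H_split[3:, 1] = 1
H_one = np.ones((6, 1))
print(markov_stability(A, H_split, 1) > markov_stability(A, H_one, 1))  # True
```

The trivial partition always scores zero (the fluxes cancel by stationarity), which is the sense in which the stationary process acts as the reference.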

    Information dynamics: patterns of expectation and surprise in the perception of music

    This is a postprint of an article submitted for consideration in Connection Science © 2009 [copyright Taylor & Francis]; Connection Science is available online at: http://www.tandfonline.com/openurl?genre=article&issn=0954-0091&volume=21&issue=2-3&spage=8

    PReMo: An Analyzer for Probabilistic Recursive Models

    This paper describes PReMo, a tool for analyzing Recursive Markov Chains and their controlled/game extensions: (1-exit) Recursive Markov Decision Processes and Recursive Simple Stochastic Games.

    Maximum entropy estimation of transition probabilities of reversible Markov chains

    In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are the Ising model, the Potts model and the Blume-Emery-Griffiths model.
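A related, standard construction (not the paper's spin-system estimator) is the maximal-entropy random walk on a graph, which is the reversible chain of maximal entropy rate: its transition probabilities are built from the Perron eigenpair of the adjacency matrix, $P_{ij} = A_{ij}\psi_j/(\lambda\psi_i)$. A minimal sketch on an illustrative 4-node graph:

```python
import numpy as np

# Maximal-entropy random walk: reversible chain of maximal entropy rate
# on a connected undirected graph with adjacency matrix A.
A = np.array([[0,1,1,0],[1,0,1,1],[1,1,0,1],[0,1,1,0]], float)
lam_all, vecs = np.linalg.eigh(A)
lam, psi = lam_all[-1], np.abs(vecs[:, -1])   # Perron eigenvalue and eigenvector
P = A * psi[None, :] / (lam * psi[:, None])   # P_ij = A_ij * psi_j / (lam * psi_i)
pi = psi**2 / np.sum(psi**2)                  # stationary distribution

assert np.allclose(P.sum(axis=1), 1.0)        # P is stochastic
assert np.allclose(pi @ P, pi)                # pi is stationary
# Reversibility: detailed balance pi_i P_ij = pi_j P_ji.
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)
```

The detailed-balance check is what makes this a reversible chain; the maximum-entropy principle picks out this specific $P$ among all walks on the graph.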

    A Bayesian Markov Chain Approach Using Proportions Labour Market Data for Greek Regions

    This paper focuses on Greek labour market dynamics at a regional level, comprising 16 provinces as defined by NUTS levels 1 and 2 (Eurostat, 2008), using Markov chains for proportions data for the first time in the literature. We apply a Bayesian approach, which employs a Monte Carlo integration procedure that uncovers the entire empirical posterior distribution of the transition probabilities from full employment to part employment, unemployment and economically unregistered unemployment, and vice versa. Our results show that there are disparities in the transition probabilities across regions, implying that the convergence of the Greek labour market at the regional level is far from complete. However, some common patterns are observed, as regions in the south of the country exhibit similar transition probabilities between different states of the labour market. Keywords: Greek Regions, Employment, Unemployment, Markov Chains.
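A minimal sketch of the generic Bayesian ingredient (not the paper's proportions-data procedure or its Greek regional data): with multinomially distributed transition counts and a Dirichlet prior, each row of the transition matrix has a Dirichlet posterior, so Monte Carlo draws recover the full empirical posterior of any transition probability. State labels and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative transition counts between three labour-market states:
# rows/cols = (full employment, part employment, unemployment).
counts = np.array([[80, 10, 10],
                   [15, 70, 15],
                   [20, 20, 60]])
alpha = 1.0  # symmetric Dirichlet prior concentration

# Posterior of row i is Dirichlet(counts[i] + alpha); draw 5000 samples per row.
draws = np.stack([rng.dirichlet(row + alpha, size=5000) for row in counts], axis=1)
# draws[s, i, j] is sample s of P(i -> j); summarise e.g. full -> unemployment.
lo, hi = np.quantile(draws[:, 0, 2], [0.025, 0.975])
print(f"95% credible interval for P(full -> unemployed): [{lo:.3f}, {hi:.3f}]")
```

Comparing such intervals across regions is what exposes the disparities the abstract describes.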

    A Markov Chain state transition approach to establishing critical phases for AUV reliability

    The deployment of complex autonomous underwater platforms for marine science comprises a series of sequential steps, each critical to the success of the mission. In this paper we present a state transition approach, in the form of a Markov chain, which models the sequence of steps from pre-launch to operation to recovery. The aim is to identify the states and state transitions that present higher risk to the vehicle, and hence to the mission, based on evidence and judgment. Developing a Markov chain consists of two separate tasks: the first defines the structure that encodes the sequence of events; the second assigns probabilities to each possible transition. Our model comprises eleven discrete states, and includes distance-dependent underway survival statistics. The integration of the Markov model with underway survival statistics allows us to quantify the likelihood of success during each state and transition, and consequently the likelihood of achieving the desired mission goals. To illustrate this generic process, the fault history of the Autosub3 autonomous underwater vehicle provides the information for different phases of operation. The method proposed here adds more detail to previous analyses; faults are discriminated according to the phase of the mission in which they took place.
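The generic structure can be sketched as an absorbing Markov chain over mission phases. The phase names and survival probabilities below are illustrative placeholders, not Autosub3 data, and the model has fewer states than the paper's eleven:

```python
import numpy as np

# A mission as a chain of sequential phases with an absorbing "lost" state.
phases = ["pre-launch", "launch", "descent", "survey", "ascent", "recovery"]
p_ok = [0.99, 0.98, 0.99, 0.95, 0.99, 0.98]   # P(advance to next phase) -- illustrative

n = len(phases)
# States 0..n-1 are phases, state n is "mission complete", state n+1 is "lost".
T = np.zeros((n + 2, n + 2))
for i, p in enumerate(p_ok):
    T[i, i + 1] = p          # phase survived: advance
    T[i, n + 1] = 1 - p      # vehicle lost during this phase
T[n, n] = 1.0                # absorbing: success
T[n + 1, n + 1] = 1.0        # absorbing: failure

# Propagate the start distribution through all phases; the mass reaching
# the success state equals the product of per-phase survival probabilities.
start = np.zeros(n + 2); start[0] = 1.0
final = start @ np.linalg.matrix_power(T, n)
print(f"P(mission success) = {final[n]:.4f}")
```

Reading off which phase transition removes the most probability mass is the chain-level analogue of identifying the highest-risk mission phase.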

    Labour Market Dynamics in Greek Regions: a Bayesian Markov Chain Approach Using Proportions Data

    This paper focuses on Greek labour market dynamics at a regional level, comprising 16 provinces as defined by NUTS levels 1 and 2 (Eurostat, 2008), using Markov chains for proportions data for the first time in the literature. We apply a Bayesian approach, which employs a Monte Carlo integration procedure that uncovers the entire empirical posterior distribution of the transition probabilities from full employment to part employment, unemployment and economically unregistered unemployment, and vice versa. Our results show that there are disparities in the transition probabilities across regions, implying that the convergence of the Greek labour market at the regional level is far from complete. However, some common patterns are observed, as regions in the south of the country exhibit similar transition probabilities between different states of the labour market. Keywords: Greek Regions, Employment, Unemployment, Markov Chains.

    The 3-dimensional random walk with applications to overstretched DNA and the protein titin

    We study the three-dimensional persistent random walk with drift. Then we develop a thermodynamic model that is based on this random walk without assuming the Boltzmann-Gibbs form for the equilibrium distribution. The simplicity of the model allows us to perform all calculations in closed form. We show that, despite its simplicity, the model can be used to describe different polymer stretching experiments. We study the reversible overstretching transition of DNA and the static force-extension relation of the protein titin.
    Comment: 9 pages, 10 figures
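A minimal sketch of a persistent random walk with drift on the cubic lattice (parameters are illustrative, not fitted to the DNA or titin experiments): with probability `persistence` the walker repeats its previous step direction, otherwise it draws a fresh direction with extra weight along +z.

```python
import numpy as np

rng = np.random.default_rng(1)
steps = [np.array(v) for v in
         [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]]

def walk(n_steps, persistence=0.7, drift=0.2):
    """End position of a 3D persistent random walk with drift along +z."""
    w = np.ones(6); w[4] += 6 * drift; w /= w.sum()  # biased fresh-direction weights
    pos = np.zeros(3)
    prev = steps[rng.choice(6, p=w)]
    for _ in range(n_steps):
        if rng.random() >= persistence:              # forget direction, redraw
            prev = steps[rng.choice(6, p=w)]
        pos = pos + prev                             # otherwise keep going
    return pos

# Average end-to-end vector over many walks: the drift axis dominates.
end = np.mean([walk(2000) for _ in range(200)], axis=0)
print("mean end-to-end vector:", end)
```

Persistence plays the role of the local stiffness and the drift that of the applied force, which is the mechanism that lets such a simple model reproduce force-extension curves.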