
    The exit problem for diffusions with time-periodic drift and stochastic resonance

    Full text link
    Physical notions of stochastic resonance for potential diffusions in periodically changing double-well potentials, such as the spectral power amplification, have proved to be defective: they are not robust under passage to their effective dynamics, i.e. continuous-time finite-state Markov chains describing the rough features of transitions between the domains of attraction of metastable points. In the framework of one-dimensional diffusions moving in periodically changing double-well potentials, we design a new notion of stochastic resonance which refines Freidlin's concept of quasi-periodic motion. It is based on exact exponential rates for the transition probabilities between the domains of attraction, which are robust with respect to the reduced Markov chains. The quality of periodic tuning is measured by the probability of transition during fixed time windows depending on a time-scale parameter; maximizing it in this parameter produces the stochastic resonance points.
    Comment: Published at http://dx.doi.org/10.1214/105051604000000530 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
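    The setting of this abstract can be illustrated with a minimal simulation sketch. The potential, parameter values, and function name below are illustrative assumptions, not taken from the paper: a one-dimensional diffusion in a periodically tilted double well, integrated with Euler-Maruyama, counting transitions between the two domains of attraction.

    ```python
    import numpy as np

    def simulate_transitions(eps=0.25, A=0.2, omega=0.01, T=2000.0, dt=0.01, seed=0):
        """Euler-Maruyama for dX = -V'(X, t) dt + sqrt(2*eps) dW, with an
        assumed periodically tilted double-well potential
        V(x, t) = x^4/4 - x^2/2 - A*x*cos(omega*t).
        Counts crossings between the wells near x = -1 and x = +1."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        x = -1.0          # start in the left well
        side = -1         # which domain of attraction we are currently in
        transitions = 0
        for k in range(n):
            t = k * dt
            drift = -(x**3 - x - A * np.cos(omega * t))
            x += drift * dt + np.sqrt(2 * eps * dt) * rng.standard_normal()
            if side == -1 and x > 1.0:
                side, transitions = 1, transitions + 1
            elif side == 1 and x < -1.0:
                side, transitions = -1, transitions + 1
        return transitions
    ```

    In this toy picture, the stochastic-resonance question is how the statistics of such transitions depend on the noise level `eps` relative to the forcing period.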

    Diffusion of Context and Credit Information in Markovian Models

    Full text link
    This paper studies the problem of ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and how it makes it very difficult to learn to represent long-term context in sequential data. This phenomenon hurts the forward propagation of long-term context information, as well as the learning of a hidden state representation of long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., when the transition probability matrices are sparse and the model essentially deterministic. The results apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm.
    Comment: See http://www.jair.org/ for any accompanying file
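    The diffusion-of-context effect described here is easy to demonstrate numerically. The two 2x2 matrices below are illustrative stand-ins chosen for this sketch: a "soft" chain forgets its initial state after a few steps, while a near-deterministic chain (entries close to 0 or 1) retains it far longer.

    ```python
    import numpy as np

    # A well-mixed transition matrix loses the initial condition quickly;
    # a near-deterministic (0/1-like) one preserves it for many steps.
    soft = np.array([[0.6, 0.4],
                     [0.4, 0.6]])
    hard = np.array([[0.99, 0.01],
                     [0.01, 0.99]])

    p0 = np.array([1.0, 0.0])        # start surely in state 0
    uniform = np.array([0.5, 0.5])   # stationary distribution of both chains

    def context_left(T, steps, p=p0):
        """Total-variation-style distance of the state distribution after
        `steps` transitions from the stationary distribution: how much
        information about the initial state survives."""
        return np.abs(p @ np.linalg.matrix_power(T, steps) - uniform).sum()

    d_soft = context_left(soft, 20)   # essentially zero: context diffused away
    d_hard = context_left(hard, 20)   # still large: context preserved
    ```

    The decay rate is governed by the second eigenvalue of the transition matrix (0.2 for `soft`, 0.98 for `hard`), which is exactly why near-deterministic chains propagate context and credit so much further.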

    Numerically optimized Markovian coupling and mixing in one-dimensional maps

    Get PDF
    Algorithms are introduced that produce optimal Markovian couplings for large finite-state-space discrete-time Markov chains with sparse transition matrices; these algorithms are applied to some toy models motivated by fluid-dynamical mixing problems at high Péclet number. An alternative definition of the time-scale of a mixing process is suggested. Finally, these algorithms are applied to the problem of coupling diffusion processes in an acute-angled triangle, and some of the simplifications that occur in continuum coupling problems are discussed.
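    For readers unfamiliar with Markovian coupling, here is a minimal sketch of the basic idea (this is a naive "same randomness" coupling on an assumed toy chain, not the numerically optimized couplings of the paper): two copies of a chain are run jointly until they coalesce, and the coalescence time bounds the mixing time.

    ```python
    import numpy as np

    def couple(P, i, j, rng, max_steps=10_000):
        """Run two copies of the chain with transition matrix P from states i
        and j, feeding both the same uniform random number each step (a simple
        grand coupling). Returns the step at which the copies coalesce."""
        cdf = np.cumsum(P, axis=1)
        t = 0
        while i != j and t < max_steps:
            u = rng.random()
            i = int(np.searchsorted(cdf[i], u))  # invert the row-wise CDF
            j = int(np.searchsorted(cdf[j], u))
            t += 1
        return t

    # Toy 3-state birth-death chain (an assumption for this sketch).
    P = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])
    rng = np.random.default_rng(1)
    times = [couple(P, 0, 2, rng) for _ in range(200)]
    ```

    An optimal coupling, in the sense studied in the paper, would choose the joint transition law to minimize the expected coalescence time rather than reusing the same random number naively.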

    Certified Reinforcement Learning with Logic Guidance

    Full text link
    This paper proposes the first model-free Reinforcement Learning (RL) framework to synthesise policies for unknown continuous-state Markov Decision Processes (MDPs) such that a given linear temporal property is satisfied. We convert the given property into a Limit Deterministic Büchi Automaton (LDBA), namely a finite-state machine expressing the property. Exploiting the structure of the LDBA, we shape a synchronous reward function on the fly, so that an RL algorithm can synthesise a policy resulting in traces that probabilistically satisfy the linear temporal property. This probability (certificate) is also calculated in parallel with policy learning when the state space of the MDP is finite: as such, the RL algorithm produces a policy that is certified with respect to the property. Under the assumption of a finite state space, theoretical guarantees are provided on the convergence of the RL algorithm to an optimal policy maximising the above probability. We also show that our method produces "best available" control policies when the logical property cannot be satisfied. In the general case of a continuous state space, we propose a neural network architecture for RL and we empirically show that the algorithm finds satisfying policies, if such policies exist. The performance of the proposed framework is evaluated via a set of numerical examples and benchmarks, where we observe an improvement of one order of magnitude in the number of iterations required for policy synthesis, compared to existing approaches whenever available.
    Comment: This article draws from arXiv:1801.08099, arXiv:1809.0782
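    The core construction, shaping a reward from an automaton synchronised with the MDP, can be sketched in a few lines. Everything below is a hypothetical toy, not the paper's LDBA or algorithm: a 4-state line MDP, a 2-state automaton for "eventually reach the goal", and tabular Q-learning on their product that receives reward only when the automaton accepts.

    ```python
    import random

    # Toy MDP: states 0..3 on a line, actions -1/+1; state 3 is the goal.
    def mdp_step(s, a):
        return max(0, min(3, s + a))

    # Toy 2-state automaton for "eventually goal": q = 1 is accepting, absorbing.
    def automaton_step(q, s):
        return 1 if (q == 1 or s == 3) else 0

    def q_learning(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
        """Tabular Q-learning on the product state (mdp_state, automaton_state).
        Reward 1 is given exactly when the automaton first accepts."""
        rng = random.Random(seed)
        Q = {}
        for _ in range(episodes):
            s, q = 0, 0
            for _ in range(20):
                if rng.random() < eps:
                    a = rng.choice([-1, 1])
                else:
                    a = max([-1, 1], key=lambda act: Q.get((s, q, act), 0.0))
                s2 = mdp_step(s, a)
                q2 = automaton_step(q, s2)
                r = 1.0 if (q2 == 1 and q == 0) else 0.0   # reward on acceptance
                best = max(Q.get((s2, q2, b), 0.0) for b in (-1, 1))
                Q[(s, q, a)] = Q.get((s, q, a), 0.0) + \
                    alpha * (r + gamma * best - Q.get((s, q, a), 0.0))
                s, q = s2, q2
        return Q

    Q = q_learning()
    ```

    The paper's actual construction handles full LDBAs, continuous state spaces via a neural architecture, and computes the satisfaction probability as a certificate; this sketch only shows the product-and-reward-shaping idea.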

    Computational Mechanics of Molecular Systems

    Get PDF