
    Maximum a Posteriori Estimation by Search in Probabilistic Programs

    We introduce an approximate search algorithm for fast maximum a posteriori probability (MAP) estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with a varying number of mutually dependent finite, countable, and continuous random variables. BaMC is an anytime MAP search algorithm applicable to any combination of random variables and dependencies. We compare BaMC to other MAP estimation algorithms and show that it is faster and more robust on a range of probabilistic models. (Comment: To appear in proceedings of SOCS1)
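
    The abstract stops short of pseudocode, so the following is a minimal, hedged sketch of an anytime MAP search over a model mixing a discrete and a continuous random variable. The toy posterior, the flip-or-perturb proposal scheme, and all parameter values are illustrative assumptions; this is not the BaMC algorithm itself.

```python
import math
import random

# Toy model (an illustrative assumption, not the paper's):
#   z  ~ Bernoulli(0.3)                            (discrete latent)
#   mu ~ Normal(0, 2) if z == 0, Normal(5, 2) if z == 1
#   y_i ~ Normal(mu, 1) for each observation
DATA = [4.2, 5.1, 4.8, 5.4]

def log_normal(x, mean, std):
    return -0.5 * math.log(2 * math.pi * std * std) - (x - mean) ** 2 / (2 * std * std)

def log_joint(z, mu):
    lp = math.log(0.3 if z == 1 else 0.7)
    lp += log_normal(mu, 5.0 if z == 1 else 0.0, 2.0)
    return lp + sum(log_normal(y, mu, 1.0) for y in DATA)

def anytime_map_search(iters=5000, step=0.5, seed=0):
    """Greedy stochastic ascent on the log-joint: propose a local move on the
    mixed state, accept if it improves, and keep the best assignment seen,
    so the search can be stopped at any time with a valid answer."""
    rng = random.Random(seed)
    z, mu = rng.randint(0, 1), rng.gauss(0.0, 2.0)
    current = log_joint(z, mu)
    best = (current, z, mu)
    for _ in range(iters):
        if rng.random() < 0.2:                   # flip the discrete choice
            z_new, mu_new = 1 - z, mu
        else:                                    # perturb the continuous one
            z_new, mu_new = z, mu + rng.gauss(0.0, step)
        proposed = log_joint(z_new, mu_new)
        if proposed > current:
            z, mu, current = z_new, mu_new, proposed
        if proposed > best[0]:
            best = (proposed, z_new, mu_new)
    return best

print(anytime_map_search())   # (best log-joint, z*, mu*); expect z* = 1, mu* near 4.9
```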

    Probabilistic Deterministic Finite Automata and Recurrent Networks, Revisited

    Reservoir computers (RCs) and recurrent neural networks (RNNs) can mimic any finite-state automaton in theory, and this capability has been demonstrated in practice. We test the ability of generalized linear models, RCs, and Long Short-Term Memory (LSTM) RNN architectures to predict the stochastic processes generated by a large suite of probabilistic deterministic finite-state automata (PDFA). PDFAs provide an excellent performance benchmark in that they can be systematically enumerated, the randomness and correlation structure of their generated processes are exactly known, and their optimal memory-limited predictors are easily computed. Unsurprisingly, LSTMs outperform RCs, which outperform generalized linear models. Surprisingly, each of these methods can fall short of the maximal predictive accuracy by as much as 50% after training and, even when optimized, tends to fall short by roughly 5%, even though previously available methods achieve maximal predictive accuracy with orders-of-magnitude less data. Thus, despite the representational universality of RCs and RNNs, using them can engender a surprising predictive gap for simple stimuli. One concludes that there is an important and underappreciated role for methods that infer "causal states" or "predictive state representations".
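
    As a concrete illustration of the benchmark setup, the sketch below defines one small PDFA (the "even process", an assumed example, not necessarily part of the paper's enumerated suite), samples a sequence from it, and measures the accuracy of the optimal memory-limited predictor, which knows the current causal state and guesses its most probable next symbol.

```python
import random

# Tiny PDFA (the "even process"): from state A emit 0 or 1 with prob 1/2,
# staying in A on 0 and moving to B on 1; from B emit 1 with prob 1 and return to A.
PDFA = {
    "A": [("0", 0.5, "A"), ("1", 0.5, "B")],
    "B": [("1", 1.0, "A")],
}

def sample(n, seed=0):
    rng, state, symbols, states = random.Random(seed), "A", [], []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for sym, p, nxt in PDFA[state]:
            acc += p
            if r < acc:
                symbols.append(sym)
                states.append(state)   # state from which sym was emitted
                state = nxt
                break
    return symbols, states

def optimal_accuracy(symbols, states):
    """Accuracy of the optimal memory-limited predictor: it tracks the true
    PDFA state and always guesses that state's most probable next symbol."""
    correct = 0
    for sym, st in zip(symbols, states):
        guess = max(PDFA[st], key=lambda t: t[1])[0]
        correct += (guess == sym)
    return correct / len(symbols)

syms, sts = sample(100_000)
print(optimal_accuracy(syms, sts))   # ~2/3 for the even process
```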

    Spectral learning with proper probabilities for finite state automata

    Probabilistic Finite Automata (PFA), Probabilistic Finite State Transducers (PFST), and Hidden Markov Models (HMM) are widely used in Automatic Speech Recognition (ASR), Text-to-Speech (TTS) systems, and Part-Of-Speech (POS) tagging for language modeling. Traditionally, unsupervised learning of these latent variable models is done by Expectation-Maximization (EM)-like algorithms, such as the Baum-Welch algorithm. In a recent alternative line of work, learning algorithms based on spectral properties of low-order moment matrices or tensors have been proposed. In comparison to EM, they are orders of magnitude faster and come with theoretical convergence guarantees. However, the returned models are not guaranteed to compute proper distributions: they often return negative values that do not sum to one, which limits their applicability and prevents them from serving as an initialization to EM-like algorithms. In this paper, we propose a new spectral algorithm able to learn a large range of models constrained to return proper distributions. We assess its performance on synthetic problems from the PAutomaC challenge and on real datasets extracted from Wikipedia. Experiments show that it outperforms previous spectral approaches as well as the Baum-Welch algorithm with random restarts, in addition to serving as an efficient initialization step for EM-like algorithms.
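
    For orientation, here is a hedged sketch of a standard unconstrained spectral construction that this line of work builds on: estimate a Hankel matrix of string probabilities, truncate its SVD, and read off one operator per symbol. The toy data source, prefix/suffix basis, and rank are assumptions, and the paper's actual contribution, constraining the learned model to a proper distribution, is not implemented here.

```python
import numpy as np
from collections import Counter

ALPHABET = ["a", "b"]
PREFIXES = ["", "a", "b"]
SUFFIXES = ["", "a", "b"]

def sample_strings(n, seed=0):
    """Toy 2-state probabilistic automaton over {a, b} (an assumption, used
    only to generate training strings)."""
    rng = np.random.default_rng(seed)
    pfa = {
        0: (0.3, [("a", 0.5, 0), ("b", 0.2, 1)]),   # (stop prob, transitions)
        1: (0.5, [("a", 0.1, 0), ("b", 0.4, 1)]),
    }
    out = []
    for _ in range(n):
        s, state = "", 0
        while True:
            stop, trans = pfa[state]
            r = rng.random()
            if r < stop:
                break
            acc = stop
            for sym, p, nxt in trans:
                acc += p
                if r < acc:
                    s, state = s + sym, nxt
                    break
        out.append(s)
    return out

def spectral_learn(strings, rank=2):
    m = len(strings)
    freq = Counter(strings)
    f = lambda x: freq[x] / m                       # empirical string probability
    H  = np.array([[f(u + v) for v in SUFFIXES] for u in PREFIXES])
    Hs = {a: np.array([[f(u + a + v) for v in SUFFIXES] for u in PREFIXES])
          for a in ALPHABET}
    hS = np.array([f(v) for v in SUFFIXES])         # empty-prefix row of H
    hP = np.array([f(u) for u in PREFIXES])         # empty-suffix column of H
    _, _, Vt = np.linalg.svd(H)
    V = Vt[:rank].T                                 # rank-truncated right factor
    pinv = np.linalg.pinv(H @ V)
    b1   = hS @ V                                   # initial weights
    binf = pinv @ hP                                # final weights
    B    = {a: pinv @ Hs[a] @ V for a in ALPHABET}  # one operator per symbol
    return b1, B, binf

def evaluate(model, x):
    b1, B, binf = model
    v = b1
    for a in x:
        v = v @ B[a]
    return float(v @ binf)

model = spectral_learn(sample_strings(50_000))
for s in ["", "a", "b", "ab", "bb", "abab"]:
    # estimates approximate string probabilities; nothing forces them to be
    # nonnegative or to sum to one -- the issue the paper's algorithm addresses
    print(s or "(empty)", round(evaluate(model, s), 4))
```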

    Generative learning for nonlinear dynamics

    Modern generative machine learning models demonstrate a surprising ability to create realistic outputs far beyond their training data, such as photorealistic artwork, accurate protein structures, or conversational text. These successes suggest that generative models learn to effectively parametrize and sample arbitrarily complex distributions. Beginning half a century ago, foundational works in nonlinear dynamics used tools from information theory to infer properties of chaotic attractors from time series, motivating the development of algorithms for parametrizing chaos in real datasets. In this perspective, we aim to connect these classical works to emerging themes in large-scale generative statistical learning. We first consider classical attractor reconstruction, which mirrors constraints on latent representations learned by state space models of time series. We next revisit early efforts to use symbolic approximations to compare minimal discrete generators underlying complex processes, a problem relevant to modern efforts to distill and interpret black-box statistical models. Emerging interdisciplinary works bridge nonlinear dynamics and learning theory, such as operator-theoretic methods for complex fluid flows, or detection of broken detailed balance in biological datasets. We anticipate that future machine learning techniques may revisit other classical concepts from nonlinear dynamics, such as transinformation decay and complexity-entropy tradeoffs. (Comment: 23 pages, 4 figures)
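
    As a small illustration of the classical attractor-reconstruction step mentioned above, the sketch below simulates the Lorenz system, observes a single coordinate, and applies a time-delay embedding. The step size, delay, and embedding dimension are illustrative choices, not values taken from the paper.

```python
import numpy as np

def lorenz(n=20_000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrated Lorenz system (a rough but sufficient toy integrator)."""
    xyz = np.empty((n, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(1, n):
        x, y, z = xyz[i - 1]
        xyz[i] = xyz[i - 1] + dt * np.array(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
        )
    return xyz

def delay_embed(series, dim=3, tau=10):
    """Map a scalar series s(t) to vectors (s(t), s(t+tau), ..., s(t+(dim-1)*tau))."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

x = lorenz()[:, 0]           # observe only one coordinate of the full state
embedded = delay_embed(x)    # delay-coordinate reconstruction of the attractor
print(embedded.shape)        # (n - 2*tau, 3)
```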

    Prediction and Dissipation in Nonequilibrium Molecular Sensors: Conditionally Markovian Channels Driven by Memoryful Environments.

    Biological sensors must often predict their input while operating under metabolic constraints. However, determining whether a particular sensor is evolved or designed to be accurate and efficient is challenging. This arises partly because the functional constraints are at cross purposes and partly because quantifying the prediction performance of even in silico sensors can require prohibitively long simulations, especially when highly complex environments drive sensors out of equilibrium. To circumvent these difficulties, we develop new expressions for the prediction accuracy and thermodynamic costs of the broad class of conditionally Markovian sensors subject to complex, correlated (unifilar hidden semi-Markov) environmental inputs in nonequilibrium steady state. Predictive metrics include the instantaneous memory and the total predictable information (the mutual information between present sensor state and input future), while dissipation metrics include power extracted from the environment and the nonpredictive information rate. Success in deriving these formulae relies on identifying the environment's causal states, the input's minimal sufficient statistics for prediction. Using these formulae, we study large random channels and the simplest nontrivial biological sensor model, that of a Hill molecule, whose cooperativity is characterized by the number of ligands that bind simultaneously. We find that the seemingly impoverished Hill molecule can capture an order of magnitude more predictable information than large random channels.
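
    A toy, simulation-based illustration of one predictive metric, the instantaneous memory I(sensor; input), is sketched below for a Hill-type sensor driven by a two-state ligand environment. The discrete-time dynamics, concentrations, and parameter values are assumptions made for illustration; the paper instead derives closed-form expressions for such quantities rather than estimating them by simulation.

```python
import numpy as np

def hill(c, K=1.0, n=1):
    """Hill binding probability as a function of ligand concentration c."""
    return c ** n / (K ** n + c ** n)

def simulate(n_steps=100_000, persist=0.95, concentrations=(0.5, 2.0), coop=1, seed=0):
    """Joint empirical distribution over (environment state, sensor state)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((2, 2))
    env = 0
    for _ in range(n_steps):
        if rng.random() > persist:                    # environment switches occasionally
            env = 1 - env
        sensor = int(rng.random() < hill(concentrations[env], n=coop))
        counts[env, sensor] += 1
    return counts / n_steps

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability table."""
    px, py = p_xy.sum(1, keepdims=True), p_xy.sum(0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

for coop in (1, 2, 4):
    # instantaneous memory I(env; sensor) grows with cooperativity in this toy
    print(coop, round(mutual_information(simulate(coop=coop)), 4))
```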

    Information Theory and Machine Learning

    The recent successes of machine learning, especially regarding systems based on deep neural networks, have encouraged further research activities and raised a new set of challenges in understanding and designing complex machine learning algorithms. New applications require learning algorithms to be distributed, have transferable learning results, use computation resources efficiently, converge quickly in online settings, have performance guarantees, satisfy fairness or privacy constraints, incorporate domain knowledge on model structures, and so on. A new wave of developments in statistical learning theory and information theory has set out to address these challenges. This Special Issue, "Machine Learning and Information Theory", aims to collect recent results in this direction, reflecting a diverse spectrum of visions and efforts to extend conventional theories and develop analysis tools for these complex machine learning systems.

    Methods of moments for the inference of rational linear sequential systems (Méthodes des moments pour l'inférence de systèmes séquentiels linéaires rationnels)

    Learning stochastic models that generate sequences has many applications in natural language processing, speech recognition, and bioinformatics. Multiplicity Automata (MA) are graphical latent variable models that encompass a wide variety of linear systems; in particular, they can model stochastic languages, stochastic processes, and controlled processes. Traditional learning algorithms such as Baum-Welch are iterative, slow, and may converge to local optima. A recent alternative is to use the Method of Moments (MoM) to design consistent and fast algorithms with pseudo-PAC guarantees. However, MoM-based algorithms have two main disadvantages. First, the PAC guarantees hold only if the size of the learned model matches the size of the target model. Second, although these algorithms learn a function close to the target distribution, most do not ensure that it is a proper distribution: a model learned from a finite number of examples may return negative values or values that do not sum to one. This thesis addresses both problems. First, we extend the theoretical guarantees to compressed models and propose a regularized spectral algorithm that adjusts the size of the model to the data. Then, an application in electronic warfare is proposed for sequencing the dwells of a superheterodyne receiver. Finally, we design new MoM-based learning algorithms that do not suffer from the negative-probability problem, and for one of them we establish pseudo-PAC guarantees.
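
    To make the negative-value problem concrete, the sketch below evaluates a small multiplicity automaton f(x) = alpha_1^T A_{x_1} ... A_{x_k} alpha_inf whose weights (chosen here purely for illustration, not learned by any MoM procedure) are close to, but not exactly, a probabilistic automaton; some strings consequently receive negative values, which is the pathology the thesis's constrained algorithms avoid.

```python
import numpy as np

# Illustrative (assumed) weights of a 2-state multiplicity automaton over {a, b}.
alpha1   = np.array([1.0, 0.0])           # initial weights
alphainf = np.array([0.3, -0.1])          # final weights (one entry negative)
A = {
    "a": np.array([[0.55, 0.20], [0.15, 0.30]]),
    "b": np.array([[-0.05, 0.20], [0.25, 0.10]]),   # slightly "improper" operator
}

def weight(x):
    """Compute f(x) = alpha1^T A_{x_1} ... A_{x_k} alphainf."""
    v = alpha1
    for sym in x:
        v = v @ A[sym]
    return float(v @ alphainf)

for s in ["", "a", "b", "ab", "bb", "bbb"]:
    # e.g. weight("b") is negative, so f is not a proper distribution
    print(s or "(empty)", round(weight(s), 5))
```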