
    Joint Structure Learning of Multiple Non-Exchangeable Networks

    Several methods have recently been developed for joint structure learning of multiple (related) graphical models or networks. These methods treat individual networks as exchangeable, such that each pair of networks is equally encouraged to have similar structures. However, in many practical applications, exchangeability in this sense may not hold, as some pairs of networks may be more closely related than others, for example due to group and sub-group structure in the data. Here we present a novel Bayesian formulation that generalises joint structure learning beyond the exchangeable case. In addition to a general framework for joint learning, we (i) provide a novel default prior over the joint structure space that requires no user input; (ii) allow for latent networks; (iii) give an efficient, exact algorithm for the case of time series data and dynamic Bayesian networks. We present empirical results on non-exchangeable populations, including a real data example from biology, where cell-line-specific networks are related according to genomic features. Comment: To appear in Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics (AISTATS).
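
    The non-exchangeable coupling can be pictured with a simple pairwise-penalty prior over the collection of networks. The Python sketch below is illustrative only: the functional form and the names log_structure_prior and relatedness are assumptions for exposition, not the default prior proposed in the paper.

    # Illustrative sketch only: an unnormalised pairwise-penalty prior over
    # several network structures, where relatedness[i, j] encodes how closely
    # networks i and j are believed to be related (the non-exchangeable case).
    import numpy as np

    def log_structure_prior(adjacencies, relatedness):
        """adjacencies: list of K binary (p x p) adjacency matrices.
        relatedness: symmetric (K x K) matrix of pairwise coupling weights."""
        K = len(adjacencies)
        log_prior = 0.0
        for i in range(K):
            for j in range(i + 1, K):
                # Penalise structural disagreement between networks i and j,
                # scaled by how strongly the pair is assumed to be related.
                disagreement = np.sum(adjacencies[i] != adjacencies[j])
                log_prior -= relatedness[i, j] * disagreement
        return log_prior  # unnormalised log prior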

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
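
    A rough flavour of the scheme can be given in a few lines of Python. The gradient-descent-on-prediction-error loop below is a simplified stand-in, not the exact Bayesian update equations derived in the paper, and all variable names and constants are illustrative assumptions.

    # Simplified sketch: recognition dynamics for a toy RNN generative model
    # x_{t+1} = tanh(W x_t), y_t = C x_t + noise. Latent states are updated
    # by descending the sensory prediction error, mimicking the exchange of
    # prediction and prediction-error messages described above.
    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden, n_obs = 8, 3
    W = rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
    C = rng.standard_normal((n_obs, n_hidden))

    def recognise(observations, lr=0.1, n_steps=20):
        """Infer latent RNN states from a sequence of observation vectors."""
        x = np.zeros(n_hidden)
        states = []
        for y in observations:
            x = np.tanh(W @ x)            # prediction from the generative RNN
            for _ in range(n_steps):
                eps = y - C @ x           # sensory prediction error
                x = x + lr * (C.T @ eps)  # adjust the estimate to explain it
            states.append(x.copy())
        return np.array(states)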

    Feature Dynamic Bayesian Networks

    Feature Markov Decision Processes (PhiMDPs) are well-suited for learning agents in general environments. Nevertheless, unstructured PhiMDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend PhiMDP to PhiDBN. The primary contribution is to derive a cost criterion that allows the most relevant features to be extracted automatically from the environment, leading to the "best" DBN representation. I discuss all building blocks required for a complete general learning algorithm. Comment: 7 pages.

    Dynamic Bayesian networks in molecular plant science: inferring gene regulatory networks from multiple gene expression time series

    To understand the processes of growth and biomass production in plants, we ultimately need to elucidate the structure of the underlying regulatory networks at the molecular level. The advent of high-throughput postgenomic technologies has spurred substantial interest in reverse engineering these networks from data, and several techniques from machine learning and multivariate statistics have recently been proposed. The present article discusses the problem of inferring gene regulatory networks from gene expression time series, and we focus our exposition on the methodology of Bayesian networks. We describe dynamic Bayesian networks and explain their advantages over other statistical methods. We introduce a novel information sharing scheme, which allows us to infer gene regulatory networks from multiple sources of gene expression data more accurately. We illustrate and test this method on a set of synthetic data, using three different measures to quantify the network reconstruction accuracy. The main application of our method is related to the problem of circadian regulation in plants, where we aim to reconstruct the regulatory networks of nine circadian genes in Arabidopsis thaliana from four gene expression time series obtained under different experimental conditions.
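
    For a concrete sense of how a dynamic Bayesian network is evaluated against expression time series, the sketch below scores one candidate parent set for one gene with a simple lagged-regression BIC score. This is an illustrative stand-in for the Bayesian scores and the information sharing scheme used in the article; the function name and score are assumptions.

    # Illustrative sketch: score a candidate parent set for one gene in a
    # dynamic Bayesian network by regressing its expression at time t on the
    # parents' expression at time t-1 (BIC used as a simple stand-in score).
    import numpy as np

    def lagged_bic_score(expression, gene, parents):
        """expression: (T x G) matrix of expression values over T time points."""
        y = expression[1:, gene]                       # target gene at time t
        X = expression[:-1, list(parents)]             # parents at time t-1
        X = np.column_stack([np.ones(len(y)), X])      # add an intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = max(float(np.mean(resid ** 2)), 1e-12)
        T = len(y)
        log_lik = -0.5 * T * (np.log(2 * np.pi * sigma2) + 1.0)
        return log_lik - 0.5 * X.shape[1] * np.log(T)  # penalise parent-set size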

    Non-Bayesian Social Learning, Second Version

    We develop a dynamic model of opinion formation in social networks. Relevant information is spread throughout the network in such a way that no agent has enough data to learn a payoff-relevant parameter. Individuals engage in communication with their neighbors in order to learn from their experiences. However, instead of incorporating the views of their neighbors in a fully Bayesian manner, agents use a simple updating rule which linearly combines their personal experience and the views of their neighbors (even though the neighbors’ views may be quite inaccurate). This non-Bayesian learning rule is motivated by the formidable complexity required to fully implement Bayesian updating in networks. We show that, under mild assumptions, repeated interactions lead agents to successfully aggregate information and to learn the true underlying state of the world. This result holds in spite of the apparent naïveté of agents’ updating rule, the agents’ need for information from sources (i.e., other agents) whose existence they may not be aware of, the possibility that the most persuasive agents in the network are precisely those least informed and with the worst prior views, and the assumption that no agent can tell whether their own views or their neighbors’ views are more accurate. Keywords: Social networks, learning, information aggregation
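
    The updating rule is easy to state concretely. The Python sketch below implements one round of a simplified version; the signal model, trust weights, and function name are illustrative assumptions rather than the paper's exact specification.

    # Simplified sketch: each agent mixes a Bayesian update on its own private
    # signal with a weighted average of its neighbours' current beliefs.
    import numpy as np

    def update_beliefs(beliefs, signals, likelihood, weights):
        """beliefs: (N x S) rows are agents' distributions over S states.
        signals: length-N array of observed signal indices.
        likelihood: (S x M) probability of each of M signals under each state.
        weights: (N x N) row-stochastic trust matrix; weights[i, i] is the
        weight agent i places on its own Bayesian posterior."""
        new_beliefs = np.zeros_like(beliefs)
        for i in range(len(beliefs)):
            posterior = beliefs[i] * likelihood[:, signals[i]]  # Bayes on own signal
            posterior /= posterior.sum()
            neighbour_avg = weights[i] @ beliefs - weights[i, i] * beliefs[i]
            new_beliefs[i] = weights[i, i] * posterior + neighbour_avg
        return new_beliefs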

    Dynamic Bayesian networks and variable length genetic algorithm for designing cue-based model for dialogue act recognition

    The automatic recognition of dialogue acts is a task of crucial importance for the processing of natural language dialogue at the discourse level. It is also one of the most challenging problems, as most often the dialogue act is not expressed directly in the speaker's utterance. In this paper, a new cue-based model for dialogue act recognition is presented. The model is, essentially, a dynamic Bayesian network induced from a manually annotated dialogue corpus via dynamic Bayesian machine learning algorithms. Furthermore, the dynamic Bayesian network's random variables are constituted from sets of lexical cues selected automatically by means of a variable length genetic algorithm, developed specifically for this purpose. To evaluate the proposed approaches of design, three stages of experiments have been conducted. In the initial stage, the dynamic Bayesian network model is constructed using sets of lexical cues selected manually from the dialogue corpus. The model is evaluated against two previously proposed models and the results confirm the potential of dynamic Bayesian networks for dialogue act recognition. In the second stage, the developed variable length genetic algorithm is used to select different sets of lexical cues to constitute the dynamic Bayesian networks' random variables. The developed approach is evaluated against some of the previously used ranking approaches and the results provide experimental evidence of its ability to avoid the drawbacks of the ranking approaches. In the third stage, the dynamic Bayesian network model is constructed using random variables constituted from the sets of lexical cues generated in the second stage, and the results confirm the effectiveness of the proposed approaches for designing a dialogue act recognition model.
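
    As an illustration of the cue-selection step, the sketch below shows a minimal variable-length genetic algorithm over sets of lexical cues. The fitness function is a placeholder for the dialogue act recognition accuracy of a DBN built from each candidate cue set, and all names, operators, and parameter values are assumptions rather than the algorithm developed in the paper.

    # Illustrative sketch: evolve variable-length chromosomes, each a set of
    # lexical cues of differing size.
    import random

    def evolve_cue_sets(vocabulary, fitness, pop_size=30, generations=50):
        """vocabulary: list of candidate cue words.
        fitness: callable scoring a cue set (e.g. DBN recognition accuracy)."""
        max_init = min(10, len(vocabulary))
        pop = [set(random.sample(vocabulary, random.randint(2, max_init)))
               for _ in range(pop_size)]
        for _ in range(generations):
            survivors = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                pool = sorted(a | b)                       # crossover: merge parents
                child = set(random.sample(pool, random.randint(2, len(pool))))
                if random.random() < 0.3:                  # mutation: add or drop a cue
                    if random.random() < 0.5 and len(child) > 2:
                        child.discard(random.choice(sorted(child)))
                    else:
                        child.add(random.choice(vocabulary))
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)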

    Feature Markov Decision Processes

    General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs). So far, extracting the right state representation from the bare observations, i.e. reducing the agent setup to the MDP framework, has been an art performed by human designers. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in a companion article. Comment: 7 pages.