    Extracting finite structure from infinite language

    This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map [T. McQueen, A. Hopgood, J. Tepper, T. Allen, A recurrent self-organizing map for temporal sequence processing, in: Proceedings of Fourth International Conference in Recent Advances in Soft Computing (RASC2002), Nottingham, 2002] with laterally interconnected neurons. A derivation of functional equivalence theory [J. Hopcroft, J. Ullman, Introduction to Automata Theory, Languages and Computation, vol. 1, Addison-Wesley, Reading, MA, 1979] is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model is able to learn the Reber grammar [A. Cleeremans, D. Schreiber, J. McClelland, Finite state automata and simple recurrent networks, Neural Computation, 1 (1989) 372–381] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
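    The Reber grammar cited above is a small finite-state grammar over the alphabet {B, T, P, S, X, V, E}. The sketch below (assuming the standard transition diagram used in the implicit-learning literature, not code from the paper) generates the kind of positive training strings such a model would be exposed to.

```python
import random

# Standard Reber grammar: each state maps to a list of (symbol, next_state)
# choices; strings start with B and end with E. (Assumed transition diagram.)
REBER = {
    0: [("B", 1)],
    1: [("T", 2), ("P", 3)],
    2: [("S", 2), ("X", 4)],
    3: [("T", 3), ("V", 5)],
    4: [("X", 3), ("S", 6)],
    5: [("P", 4), ("V", 6)],
    6: [("E", None)],
}

def generate_reber_string(rng=random):
    """Walk the grammar from the start state, choosing transitions uniformly."""
    state, symbols = 0, []
    while state is not None:
        symbol, state = rng.choice(REBER[state])
        symbols.append(symbol)
    return "".join(symbols)

if __name__ == "__main__":
    print([generate_reber_string() for _ in range(5)])   # e.g. 'BTXSE', 'BPVVE', ...
```

    A training set is simply a sample of such strings; generalization is then tested on grammatical strings longer than any seen during training.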

    Markovian Dynamics on Complex Reaction Networks

    Complex networks, comprised of individual elements that interact with each other through reaction channels, are ubiquitous across many scientific and engineering disciplines. Examples include biochemical, pharmacokinetic, epidemiological, ecological, social, neural, and multi-agent networks. A common approach to modeling such networks is by a master equation that governs the dynamic evolution of the joint probability mass function of the underlying population process and naturally leads to Markovian dynamics for that process. Due, however, to the nonlinear nature of most reactions, the computation and analysis of the resulting stochastic population dynamics is a difficult task. This review article provides a coherent and comprehensive coverage of recently developed approaches and methods to tackle this problem. After reviewing a general framework for modeling Markovian reaction networks and giving specific examples, the authors present numerical and computational techniques capable of evaluating or approximating the solution of the master equation, discuss a recently developed approach for studying the stationary behavior of Markovian reaction networks using a potential energy landscape perspective, and provide an introduction to the emerging theory of thermodynamic analysis of such networks. Three representative problems of opinion formation, transcription regulation, and neural network dynamics are used as illustrative examples. Comment: 52 pages, 11 figures; for freely available MATLAB software, see http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.htm
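    For a concrete instance of the master-equation framework (an illustration, not the authors' MATLAB software): a single-species birth-death network with constant birth rate k and per-molecule degradation rate g has a linear master equation dp/dt = A p on a truncated state space, which can be solved directly with a matrix exponential. The rates and truncation level below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Birth-death reaction network:  0 -> X at rate k,   X -> 0 at rate g * n.
# Master equation on the truncated state space n = 0..N:
#   dp_n/dt = k p_{n-1} + g (n+1) p_{n+1} - (k + g n) p_n
k, g, N = 5.0, 1.0, 60                      # illustrative rates and truncation level

A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n > 0:
        A[n, n - 1] = k                     # birth: state n-1 -> n
    if n < N:
        A[n, n + 1] = g * (n + 1)           # degradation: state n+1 -> n
    A[n, n] = -(g * n + (k if n < N else 0.0))  # total outflow; birth blocked at the cap

p0 = np.zeros(N + 1)
p0[0] = 1.0                                 # start with zero molecules
p_t = expm(2.0 * A) @ p0                    # probability mass function at t = 2
print("mean copy number at t = 2:", p_t @ np.arange(N + 1))
```

    The stationary distribution of this linear network is Poisson with mean k/g, which the computed mean approaches as t grows. For nonlinear reactions the same construction applies, but the state space, and hence A, quickly becomes too large to exponentiate directly, which is what motivates the approximation methods reviewed above.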

    Bayesian Learning for Neural Networks: an algorithmic survey

    The last decade witnessed a growing interest in Bayesian learning. Yet, the technicality of the topic and the multitude of ingredients involved therein, besides the complexity of turning theory into practical implementations, limit the use of the Bayesian learning paradigm, preventing its widespread adoption across different fields and applications. This self-contained survey engages and introduces readers to the principles and algorithms of Bayesian learning for neural networks. It provides an introduction to the topic from an accessible, practical-algorithmic perspective. Upon providing a general introduction to Bayesian neural networks, we discuss and present both standard and recent approaches for Bayesian inference, with an emphasis on solutions relying on variational inference and the use of natural gradients. We also discuss the use of manifold optimization as a state-of-the-art approach to Bayesian learning. We examine the characteristic properties of all the discussed methods and provide pseudo-code for their implementation, paying attention to practical aspects such as the computation of the gradients.
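    As a minimal, self-contained illustration of the variational-inference flavour of Bayesian learning discussed above (a PyTorch sketch with illustrative hyperparameters, not the survey's pseudo-code): a Gaussian posterior over a single regression weight is fitted by maximizing a one-sample Monte Carlo estimate of the ELBO via the reparameterization trick.

```python
import torch
import torch.nn.functional as F

# Toy data: y = 2x + noise; we fit q(w) = N(mu, sigma^2) over the single weight w.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)

mu = torch.zeros(1, requires_grad=True)         # variational mean
rho = torch.tensor([-3.0], requires_grad=True)  # sigma = softplus(rho) > 0
prior_std, noise_std = 1.0, 0.1
opt = torch.optim.Adam([mu, rho], lr=0.05)

for step in range(500):
    opt.zero_grad()
    sigma = F.softplus(rho)
    w = mu + sigma * torch.randn(1)              # reparameterization trick
    log_lik = -0.5 * ((y - w * x) ** 2).sum() / noise_std ** 2
    # KL(N(mu, sigma^2) || N(0, prior_std^2)) in closed form for Gaussians
    kl = (torch.log(prior_std / sigma)
          + (sigma ** 2 + mu ** 2) / (2 * prior_std ** 2) - 0.5).sum()
    loss = kl - log_lik                          # negative ELBO (one MC sample)
    loss.backward()
    opt.step()

print("posterior mean:", mu.item(), " posterior std:", F.softplus(rho).item())
```

    Natural-gradient and manifold-optimization variants, as covered in the survey, replace the plain Adam update on (mu, rho) with updates that respect the geometry of the Gaussian variational family.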

    Universal neural field computation

    Turing machines and Gödel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine, obtained from a Gödel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square that is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation shows that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. We call the resulting representation a dynamic field automaton. Comment: 21 pages, 6 figures. arXiv admin note: text overlap with arXiv:1204.546
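    The Gödel-encoding step can be made concrete with a small sketch (illustrative of the general idea, not the chapter's exact construction): with an alphabet of N symbols, a symbol sequence maps to a point of the unit interval, and pushing a symbol onto the sequence is an affine contraction; this is why the two tape halves of a machine configuration become coordinates in the unit square acted on by piecewise affine-linear maps.

```python
# Goedel-style encoding of symbol sequences into the unit interval.
# With a numbering g: symbol -> {0, ..., N-1}, the sequence s_1 s_2 s_3 ...
# is mapped to  x = sum_k g(s_k) * N**(-k)  in [0, 1).
ALPHABET = {"a": 0, "b": 1, "c": 2}     # illustrative 3-symbol repertoire
N = len(ALPHABET)

def encode(sequence):
    """Map a finite symbol sequence to its Goedel number in [0, 1)."""
    x = 0.0
    for k, s in enumerate(sequence, start=1):
        x += ALPHABET[s] * N ** (-k)
    return x

def push(symbol, x):
    """Prepend a symbol: the affine contraction x -> (g(symbol) + x) / N."""
    return (ALPHABET[symbol] + x) / N

# Prepending 'b' to the encoded tail "ac" equals encoding "bac" directly.
print(push("b", encode("ac")), encode("bac"))   # both ~ 0.40741
```

    In the symbologram, the left and right halves of the tape are encoded this way into an (x, y) point of the unit square, and one generalized-shift step of the machine then acts as an affine-linear map on the rectangle determined by the symbols around the head.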

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication, alongside the increasing size of datasets, is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed. Comment: v3, 33 pages; typos corrected and references added
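    One concrete reading of the data-uploading question is amplitude encoding, where a classical vector is normalized and used as the amplitude vector of an n-qubit state. The sketch below (plain NumPy bookkeeping with illustrative names, not a quantum SDK) shows only the classical pre-processing involved.

```python
import numpy as np

def amplitude_encode(x):
    """Pad a real vector to the next power-of-two length and L2-normalize it,
    so its entries can serve as amplitudes of an n-qubit state."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

state, n = amplitude_encode([3.0, 1.0, 2.0])
print(n, state, np.sum(state ** 2))   # 2 qubits; squared amplitudes sum to 1
```

    Actually preparing such a state on hardware is generally costly, which is part of why data uploading is flagged above as a practical question rather than a solved step.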

    Computation vs. Information Processing: Why Their Difference Matters to Cognitive Science

    Since the cognitive revolution, it’s become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and in connecting them together. Much confusion can be avoided by doing so.

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.
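    As a toy example of a continuous-time analog model (an illustration in the spirit of Shannon's General Purpose Analog Computer, not an example taken from the survey): functions such as sine and cosine are generated as solutions of polynomial ODEs, here integrated numerically with an assumed step size.

```python
import math

# GPAC-style generation of sin and cos as the solution of the polynomial ODE
#   s' = c,  c' = -s,   with  s(0) = 0, c(0) = 1,
# integrated with a plain Euler scheme (the step size is an illustrative choice).
def integrate(t_end, dt=1e-4):
    s, c, t = 0.0, 1.0, 0.0
    while t < t_end:
        s, c = s + dt * c, c - dt * s
        t += dt
    return s, c

s, c = integrate(1.0)
print(s, math.sin(1.0))   # Euler approximation vs. the exact value
print(c, math.cos(1.0))
```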