    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues in cognitive neuroscience, including the debate over whether symbolic processing or connectionism is a suitable representation of cognitive systems, and the problem of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
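
    A minimal sketch of the verification idea (not the authors' communicating-automata or Uppaal model; the perceptron, the AND training set, and the integer weight bounds are assumptions made here for illustration): a learner with bounded, discretised weights induces a finite transition system, and whether learning can reach a correctly classifying state becomes a reachability query answerable by exhaustive state-space exploration.

```python
from itertools import product

# Training set for logical AND (assumed here purely for illustration).
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sweep(state):
    """One pass of the perceptron rule; weights are clamped to [-3, 3] so the
    induced transition system has finitely many states (a prerequisite for
    exhaustive model checking)."""
    w1, w2, b = state
    for (x1, x2), target in DATA:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = target - out
        w1 = max(-3, min(3, w1 + err * x1))
        w2 = max(-3, min(3, w2 + err * x2))
        b = max(-3, min(3, b + err))
    return (w1, w2, b)

def correct(state):
    """Does this weight state classify every training example correctly?"""
    w1, w2, b = state
    return all((1 if w1 * x1 + w2 * x2 + b > 0 else 0) == t
               for (x1, x2), t in DATA)

def ef_correct(init):
    """Reachability ('EF correct'): does the learning trajectory from `init`
    ever reach a state that classifies all examples correctly?"""
    state, seen = init, set()
    while state not in seen:
        if correct(state):
            return True
        seen.add(state)
        state = sweep(state)
    return False

# Exhaustive exploration over all bounded initial weight states.
good = sum(ef_correct(init) for init in product(range(-3, 4), repeat=3))
print(f"{good} of {7 ** 3} initial weight states reach a correct classifier")
```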

    A bridge from neuroscientific models to recurrent neural networks. Derivation of continuous-time connectionist models from neuroscience computational principles

    In recent years, recurrent neural networks with continuous dynamics have been applied to model many neurobiological phenomena. However, the literature on the physiological foundations of these connectionist networks is practically non-existent, as they are closer to artificial neural networks than to neuroscientific computational models. In this article, we explicitly derive the equations of these recurrent connectionist systems from neuroscientific models, such as leaky integrate-and-fire (LIF) neurons and synaptic chemical kinetics. We specify the conditions under which this modelling holds, and we run simulations of networks wired like some simple neural circuits, such as those possessed by species like Tritonia Diomedea, Aplysia Californica, or lampreys, in order to show that they behave similarly. Finally, in the first annex we introduce some of the emergent properties of these networks, such as being universal approximators of dynamical systems, and we remark that this approach is congruent with the spontaneous synchronous activity that is known to take place in the cortex.
    In these pages we offer a completely new derivation of some neural rate models, starting from the integrate-and-fire formalism and from synaptic kinetics. The main purpose is to investigate the neuroscientific foundations of connectionist models, since the equations we obtain are recurrent neural networks. We then run simulations of real neural circuits using these equations, showing that they can fit the data recorded in three different experiments. Finally, we investigate an emergent property of the derived networks and show that this feature is in line with the correlations that have been observed between cortical neurons.
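
    A minimal sketch of the class of models discussed, assuming the standard continuous-time rate equation tau * dr/dt = -r + phi(W r + I) integrated with forward Euler; the sigmoid gain, the weights, and the two-unit mutual-inhibition motif below are illustrative choices, not the Tritonia, Aplysia, or lamprey circuits simulated in the article.

```python
import numpy as np

def phi(x):
    """Static gain function mapping net input to firing rate (here a sigmoid)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(W, I, tau=0.02, dt=0.001, T=1.0):
    """Forward-Euler integration of  tau * dr/dt = -r + phi(W @ r + I)."""
    n = W.shape[0]
    r = np.zeros(n)                        # firing rates
    trace = []
    for _ in range(int(T / dt)):
        dr = (-r + phi(W @ r + I)) / tau   # leaky dynamics toward phi(net input)
        r = r + dt * dr
        trace.append(r.copy())
    return np.array(trace)

# Two mutually inhibitory units with self-excitation: a toy half-centre-like
# motif (an illustrative assumption, not a circuit from the article).
W = np.array([[ 1.5, -2.0],
              [-2.0,  1.5]])
I = np.array([0.6, 0.4])

rates = simulate(W, I)
print(rates[-1])   # steady-state firing rates
```

    With dt much smaller than tau the explicit Euler step is a reasonable first approximation; stiffer networks would call for a smaller step or an implicit integrator.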

    Platonic model of mind as an approximation to neurodynamics

    A hierarchy of approximations involved in the simplification of microscopic theories, from the sub-cellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of the model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic-model points of view.
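
    A rough sketch of the neurofuzzy idea behind Feature Space Mapping, under the assumption (made here for illustration, not taken from the article) that objects are represented by separable Gaussian membership functions in a low-dimensional psychological feature space and that a stimulus is categorized by maximal membership.

```python
import numpy as np

def membership(x, center, width):
    """Separable Gaussian membership: product of per-feature factors."""
    return np.prod(np.exp(-((x - center) ** 2) / (2.0 * width ** 2)))

# Hypothetical prototypes in a 2-D feature space (e.g. size, brightness);
# centers and widths are illustrative assumptions.
prototypes = {
    "category_A": (np.array([0.2, 0.8]), 0.15),
    "category_B": (np.array([0.7, 0.3]), 0.20),
}

def categorize(stimulus):
    """Assign the stimulus to the prototype with the highest membership."""
    scores = {name: membership(stimulus, c, w)
              for name, (c, w) in prototypes.items()}
    return max(scores, key=scores.get), scores

label, scores = categorize(np.array([0.3, 0.7]))
print(label, scores)
```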

    Reinforcement Learning: A Survey

    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
    Comment: See http://www.jair.org/ for any accompanying file.
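
    One concrete member of the family surveyed, sketched here under illustrative assumptions (tabular Q-learning with epsilon-greedy exploration on a hypothetical chain-world MDP; the environment and hyperparameters are not taken from the paper), showing the exploration/exploitation trade-off and learning from delayed reinforcement via temporal-difference updates.

```python
import random

N_STATES, GOAL = 5, 4                  # states 0..4; reward only at the goal
ACTIONS = (-1, +1)                     # move left / move right

def step(state, action):
    """Deterministic chain dynamics with a single rewarded terminal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s, done = 0, False
    while not done:
        # exploration vs. exploitation trade-off
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r, done = step(s, a)
        # temporal-difference update: learn from delayed reinforcement
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print({s: greedy(s) for s in range(N_STATES)})   # learned policy (mostly +1)
```

    Random tie-breaking in the greedy choice keeps the untrained agent from getting stuck at the left boundary before any reward has propagated back through the Q-values.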