    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a highly dynamic area in both theory and application. This book brings together many aspects of current research in the several fields associated with RL, a body of work that has been growing rapidly and producing a wide variety of learning algorithms for different applications. Comprising 24 chapters, it covers a broad range of topics in RL and their application to autonomous systems. One set of chapters provides a general overview of RL, while the others focus on applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine and Industrial Logistics.
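
    For a concrete flavour of the learning algorithms such a collection surveys, below is a minimal tabular Q-learning sketch. It is purely illustrative and not taken from the book; the environment interface (reset() and step() returning next state, reward and a done flag) is an assumption.

    import random

    # Minimal tabular Q-learning sketch (illustrative; not from the book).
    # Assumes a small discrete environment exposing reset() and
    # step(action) -> (next_state, reward, done).
    def q_learning(env, n_states, n_actions, episodes=500,
                   alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = [[0.0] * n_actions for _ in range(n_states)]
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # epsilon-greedy action selection
                if random.random() < epsilon:
                    a = random.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda i: Q[s][i])
                s2, r, done = env.step(a)
                # temporal-difference update toward the bootstrapped target
                target = r + gamma * max(Q[s2]) * (not done)
                Q[s][a] += alpha * (target - Q[s][a])
                s = s2
        return Q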

    Application of spiking neural networks and the bees algorithm to control chart pattern recognition

    Statistical process control (SPC) is a method for improving the quality of products. Control charting plays a central role in SPC. SPC control charts are used for monitoring and detecting unnatural process behaviour. Unnatural patterns in control charts indicate unnatural causes for variations. Control chart pattern recognition is therefore important in SPC. Past research shows that although certain types of charts, such as the CUSUM chart, might have powerful detection ability, they lack robustness and do not function automatically. In recent years, neural network techniques have been applied to automatic pattern recognition. Spiking Neural Networks (SNNs) belong to the third generation of artificial neural networks, with spiking neurons as processing elements. In SNNs, time is an important feature for information representation and processing. This thesis proposes the application of SNN techniques to control chart pattern recognition. It presents an analysis of the existing SNN learning algorithms for pattern recognition and explains how and why spiking neurons have more computational power than the previous generations of neural networks. The thesis focuses on the architecture and the learning procedure of the network. Four new learning algorithms are presented, each with its specific architecture: Spiking Learning Vector Quantisation (S-LVQ), Enhanced Spiking Learning Vector Quantisation (NS-LVQ), S-LVQ with Bees and NS-LVQ with Bees. The latter two algorithms employ a new intelligent swarm-based optimisation method, the Bees Algorithm, to optimise the LVQ pattern recognition networks. Overall, the aim of the research is to develop a simple architecture for the proposed network, and one that is efficient when applied to control chart pattern recognition. Experiments show that the proposed architecture and learning procedure give high pattern recognition accuracies.
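
    Since all four algorithms build on Learning Vector Quantisation, a plain LVQ1 update is sketched below for orientation. This is the generic, non-spiking baseline under assumed parameter names, not the thesis's S-LVQ/NS-LVQ implementations.

    import numpy as np

    # Generic LVQ1 baseline (illustrative; the thesis's spiking variants
    # replace these real-valued prototypes with spiking neurons).
    def lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
        P = np.array(prototypes, dtype=float)
        for _ in range(epochs):
            for x, label in zip(X, y):
                # winner-take-all: find the nearest prototype
                i = np.argmin(np.linalg.norm(P - x, axis=1))
                # attract the winner if labels match, repel it otherwise
                sign = 1.0 if proto_labels[i] == label else -1.0
                P[i] += sign * lr * (x - P[i])
        return P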

    Synaptic consolidation: from synapses to behavioral modeling

    Synaptic plasticity, a key process for memory formation, manifests itself across different time scales, ranging from a few seconds for plasticity induction up to hours or even years for consolidation and memory retention. We developed a three-layered model of synaptic consolidation that accounts for data across a large range of experimental conditions. Consolidation occurs in the model through the interaction of the synaptic efficacy with a scaffolding variable, via a read-write process mediated by a tagging-related variable. Plasticity-inducing stimuli modify the efficacy, but the state of the tag and scaffold can change only if a write-protection mechanism is overcome. Our model links depotentiation protocols in vitro to behavioral results on the influence of novelty on inhibitory avoidance memory in rats.
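
    A minimal numerical sketch of this read-write cascade is given below. All variable names, time constants and the simplified dynamics are illustrative assumptions, not the paper's equations: the efficacy w follows the stimulus, a tag variable z records the recent mismatch between efficacy and scaffold, and the scaffold s is write-protected, moving only while the tag exceeds a threshold.

    # Illustrative three-variable consolidation cascade (simplified;
    # the published model's equations and constants differ).
    def simulate(stimulus, dt=0.1, tau_w=10.0, tau_z=100.0,
                 tau_s=1000.0, theta=0.5):
        w = z = s = 0.0            # efficacy, tag, scaffold
        trace = []
        for drive in stimulus:     # plasticity-inducing drive per step
            # efficacy: driven by the stimulus, relaxes toward the scaffold
            w += dt / tau_w * (s - w + drive)
            # tag: low-pass filter of the efficacy-scaffold mismatch
            z += dt / tau_z * (abs(w - s) - z)
            # scaffold: write-protected, updated only while the tag is set
            if z > theta:
                s += dt / tau_s * (w - s)
            trace.append((w, z, s))
        return trace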

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. This understanding can potentially be applied to the treatment of brain disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. The book is pioneering in taking this approach to brain function, considering both what is computed by many of our brain systems and how it is computed, and it updates the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press, with much new evidence, including on the connectivity of the human brain. Brain Computations and Connectivity will be of interest to all scientists interested in how the brain works, whether they come from neuroscience, from the medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.

    Short-Term Plasticity at the Schaffer Collateral: A New Model with Implications for Hippocampal Processing

    A new mathematical model of short-term synaptic plasticity (STP) at the Schaffer collateral is introduced. Like other models of STP, the new model relates short-term synaptic plasticity to an interaction between facilitative and depressive dynamic influences. Unlike previous models, the new model successfully simulates facilitative and depressive dynamics within the framework of the synaptic vesicle cycle. The novelty of the model lies in its description of a competitive interaction between calcium-sensitive proteins for binding sites on the vesicle release machinery. By attributing specific molecular causes to observable presynaptic effects, the new model of STP can predict the effects of specific alterations to the presynaptic neurotransmitter release mechanism. This understanding will guide further experiments into presynaptic functionality, and may contribute insights into the development of pharmaceuticals targeting illnesses that manifest aberrant synaptic dynamics, such as fragile X syndrome and schizophrenia. The new model of STP will also add realism to brain circuit models that simulate cognitive processes such as attention and memory. The hippocampal processing loop is an example of a brain circuit involved in memory formation. The hippocampus filters and organizes large amounts of spatio-temporal data in real time according to contextual significance. Synaptic dynamics are speculated to help keep the hippocampal system close to a region of instability that increases its encoding capacity and discriminating capability. In particular, synaptic dynamics at the Schaffer collateral are proposed to coordinate the output of the highly dynamic CA3 region of the hippocampus with the phase code in CA1 that modulates communication between the hippocampus and the neocortex.
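
    For context, the interaction of facilitative and depressive influences is often written in the classic Tsodyks-Markram form sketched below. This is the textbook baseline that such models start from, with assumed parameter values; it is not the vesicle-cycle model proposed in the paper.

    import math

    # Classic Tsodyks-Markram short-term plasticity baseline (not the
    # vesicle-cycle model above): u facilitates, x depletes resources.
    def tm_synapse(spike_times, U=0.2, tau_f=600.0, tau_d=200.0):
        u, x, t_prev = U, 1.0, None
        releases = []
        for t in spike_times:               # spike times in ms
            if t_prev is not None:
                dt = t - t_prev
                # facilitation decays back toward baseline U
                u = U + (u - U) * math.exp(-dt / tau_f)
                # depleted resources recover toward 1
                x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)
            u += U * (1.0 - u)              # spike-triggered facilitation
            releases.append(u * x)          # fraction released this spike
            x -= u * x                      # deplete resources
            t_prev = t
        return releases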

    Efficient Learning Machines

    Computer science.

    Synaptic plasticity across different time scales and its functional implications

    Humans and animals learn by modifying the synaptic strength between neurons, a phenomenon known as synaptic plasticity. These changes can be induced by rather short stimuli (lasting, for instance, only a few seconds), yet, in order to be useful for long-term memory, they should remain stable for months or years. Experimentalists study synaptic plasticity by applying a vast variety of protocols. In the present thesis we focus on protocols that fall into two main categories: (i) those that induce synaptic modifications lasting only a few hours (the "early phase" of plasticity); (ii) those that allow synapses to undergo a sequence of steps transforming the rapid changes of the "early phase" into a stable memory trace (the "late phase" of plasticity). The goal of this thesis is to better understand synaptic plasticity across these different phases, early and late, by creating compact mathematical models of the plasticity mechanisms. Our approach allows for a synthetic view of the field as well as the exploration of the functional consequences of learning. In this direction, we propose a model for the induction of synaptic plasticity that depends on the presynaptic spike time and nonlinearly on the postsynaptic voltage. The model is able to reproduce a broad range of experimental protocols, such as voltage-clamp and spike-timing experiments. Since the voltage is a key element of the model, we describe the neuronal activity with a compact neuron model that faithfully reproduces the voltage time course of pyramidal neurons. This induction model is then combined with a trigger process for protein synthesis and a final stabilization mechanism in order to describe the "late phase". In this combined form, it is able to explain the experimental phenomena known as tagging experiments and to make testable predictions. A study of the functional consequences of the induction model reveals input selectivity, computation of independent components, and a tight relation between connectivity and coding. In parallel, a top-down approach based on finding independent components is used to derive a rate-based learning rule that shows structural similarities with the induction model. Such a unified model across different time scales, allowing the stabilization of synapses, is crucial for understanding learning and memory processes in animals and humans, and is a necessary ingredient for any large-scale model of the brain.
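
    The flavor of such a voltage-based induction rule can be sketched as below. This is a schematic under assumed parameter names and thresholds, not the thesis's exact equations: depression is gated by presynaptic spikes together with a slow low-pass-filtered voltage, while potentiation accrues when the instantaneous voltage is strongly depolarized and a presynaptic trace is active.

    # Schematic voltage-based plasticity update (illustrative sketch,
    # not the thesis's equations). Units: voltages in mV, dt in ms.
    def update_weight(w, pre_spike, x_bar, u, u_bar_minus, u_bar_plus,
                      A_ltd=1e-4, A_ltp=1e-4,
                      theta_minus=-70.0, theta_plus=-45.0, dt=0.1):
        # depression: presynaptic spike while the slow voltage average
        # exceeds the depression threshold
        if pre_spike and u_bar_minus > theta_minus:
            w -= A_ltd * (u_bar_minus - theta_minus)
        # potentiation: presynaptic trace x_bar active while the momentary
        # voltage exceeds the potentiation threshold
        if u > theta_plus and u_bar_plus > theta_minus:
            w += (A_ltp * dt * x_bar
                  * (u - theta_plus) * (u_bar_plus - theta_minus))
        return w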