15 research outputs found

    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods, which are symbolic in nature, to the specification and verification of neural networks, which are sub-symbolic in nature. It describes a communicating automata [Bowman & Gomez, 2006] model of neural networks, implements the model using timed automata [Alur & Dill, 1994], and verifies the resulting models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues in cognitive neuroscience, including the debate over whether symbolic processing or connectionism is the more suitable representation of cognitive systems, and the problem of integrating symbolic techniques such as formal methods with complex neural networks. We argue that symbolic verification may offer theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
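
    As a toy illustration of the verification style the abstract describes, the sketch below exhaustively explores the finite state space of a discretized perceptron learner and checks a reachability property ("learning reaches a correct classifier from every initial weight state"). This is only a hand-rolled stand-in for what Uppaal automates over timed-automata models; the AND-gate task, weight range, and step bound are invented for the example.

```python
from itertools import product

# Hypothetical toy setup (not from the paper): a perceptron learning the AND
# gate, with weights and bias discretized to a small integer range so the
# whole learner becomes a finite automaton we can check exhaustively.
SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate
WEIGHTS = range(-2, 3)   # admissible discrete values for w1, w2, b
MAX_EPOCHS = 20          # bound on the learning run, as in bounded model checking

def classify(w1, w2, b, x):
    return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

def consistent(state):
    """Does this weight state classify every training sample correctly?"""
    return all(classify(*state, x) == y for x, y in SAMPLES)

def step(state, x, y):
    """One perceptron update, clamped so the state stays in the finite range."""
    w1, w2, b = state
    err = y - classify(w1, w2, b, x)
    clamp = lambda v: max(-2, min(2, v))
    return (clamp(w1 + err * x[0]), clamp(w2 + err * x[1]), clamp(b + err))

def converges(state):
    for _ in range(MAX_EPOCHS):
        if consistent(state):
            return True
        for x, y in SAMPLES:
            state = step(state, x, y)
    return consistent(state)

# The "model check": the reachability property must hold from every initial state.
ok = all(converges(s) for s in product(WEIGHTS, repeat=3))
print("Learning converges from all", len(WEIGHTS) ** 3, "initial states:", ok)
```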

    Analysis and design of a distributed k-winners-take-all model

    The k-winners-take-all (kWTA) problem is to find the k largest inputs from n inputs. In this paper, we design and propose a novel distributed kWTA model that requires no central unit to compute the k winners. As a result, the proposed model enjoys the general advantages of distributed models over centralized ones, such as better robustness to faults of agents. The global asymptotic convergence of the proposed distributed model is proven. In addition, two numerical examples on networks of agents, with static and with time-varying inputs, are presented to validate the performance of the proposed model.
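
    The paper's model is a continuous-time dynamical system with a proven convergence guarantee; as a much simpler illustration of the same decentralized idea, the sketch below solves kWTA by local message passing only (a "top-k flooding" scheme on a ring graph, an assumption of this example rather than the paper's method): each agent repeatedly merges the k largest (value, owner) pairs heard from its neighbours, and after diameter-many rounds can decide locally whether it is a winner.

```python
import heapq
import random

random.seed(1)
n, k = 8, 3
inputs = [random.uniform(0, 10) for _ in range(n)]          # one private input per agent
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring graph, no central unit
diameter = n // 2                                           # rounds for full information spread

# Each agent's belief: the k largest (value, owner-id) pairs it has seen so far.
beliefs = [[(inputs[i], i)] for i in range(n)]
for _ in range(diameter):
    beliefs = [
        heapq.nlargest(k, {p for j in [i, *neighbors[i]] for p in beliefs[j]})
        for i in range(n)
    ]

# An agent is a winner iff its own input survives in its final top-k belief.
winners = sorted(i for i in range(n) if any(owner == i for _, owner in beliefs[i]))
print("inputs :", [round(v, 2) for v in inputs])
print("winners:", winners)   # indices of the k largest inputs
```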

    Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations

    Neuronal plasticity has long been known to play a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli, even though such stimuli are the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations that make these computations realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary both for combating and for utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings.
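
    For concreteness, here is a hedged toy rendering of the two plasticity mechanisms the abstract names, in their generic textbook forms rather than the paper's equations: pair-based STDP changes a synapse according to the relative timing of pre- and postsynaptic spikes, and intrinsic plasticity adapts a neuron's excitability (here, its threshold) toward a target firing rate. All constants are illustrative.

```python
import numpy as np

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0  # STDP amplitudes and time constant (ms)
ETA_IP, TARGET_RATE = 0.001, 0.05         # intrinsic-plasticity gain and target rate

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:                             # pre fires before post: potentiation
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)     # post before pre: depression

def ip_dtheta(fired):
    """Threshold creeps up when the neuron fires, down when it is silent."""
    return ETA_IP * ((1.0 if fired else 0.0) - TARGET_RATE)

w = 0.5
for t_pre, t_post in [(10, 14), (30, 31), (52, 50)]:  # three spike pairings (ms)
    w += stdp_dw(t_post - t_pre)

theta = 1.0
for fired in [True, False, False, True, False]:       # postsynaptic firing record
    theta += ip_dtheta(fired)

print(f"weight = {w:.4f}, threshold = {theta:.4f}")
```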

    Memristor-Based HTM Spatial Pooler with On-Device Learning for Pattern Recognition

    This article investigates a hardware implementation of hierarchical temporal memory (HTM), a brain-inspired machine learning algorithm that mimics key functions of the neocortex and is applicable to many machine learning tasks. The spatial pooler (SP) is one of the two main components of HTM, designed to learn spatial information and obtain sparse distributed representations (SDRs) of input patterns; the other, temporal memory (TM), learns the temporal structure of inputs. The memristor, an appropriate synapse emulator for neuromorphic systems, can serve as the synapse in SP and TM circuits. In this article, a memristor-based SP (MSP) circuit structure is designed to accelerate execution of the SP algorithm. The presented MSP models both the synaptic permanence and the synaptic connection state within a single synapse, and supports on-device, parallel learning. Simulation results on statistical metrics and classification tasks over several real-world datasets substantiate the validity of the MSP.
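
    Since the MSP circuit implements the SP algorithm in analog hardware, a compact software rendering of that algorithm may help fix ideas. The sketch below is a generic spatial pooler (sizes, thresholds, and learning steps are illustrative choices, not the article's): each column's permanence vector defines its connected synapses, overlaps with the input are computed, the k best columns form the SDR, and only winning columns have their permanences reinforced or weakened.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_cols, k = 64, 128, 5              # input bits, SP columns, active columns
perm = rng.uniform(0, 1, (n_cols, n_in))  # synaptic permanences in [0, 1]
THRESH, INC, DEC = 0.5, 0.05, 0.02        # connection threshold, learning increments

def spatial_pooler(x, learn=True):
    connected = (perm >= THRESH).astype(int)  # permanence above threshold => connected
    overlap = connected @ x                   # active input bits seen per column
    active = np.argsort(overlap)[-k:]         # k-winners-take-all -> sparse SDR
    if learn:  # Hebbian-style update on the winning columns only
        perm[np.ix_(active, x == 1)] += INC   # strengthen synapses on active bits
        perm[np.ix_(active, x == 0)] -= DEC   # weaken synapses on inactive bits
        np.clip(perm, 0.0, 1.0, out=perm)
    sdr = np.zeros(n_cols, dtype=int)
    sdr[active] = 1
    return sdr

x = (rng.uniform(0, 1, n_in) < 0.2).astype(int)   # a sparse binary input pattern
print("active columns:", np.flatnonzero(spatial_pooler(x)))
```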

    Continuous-time recurrent neural networks for quadratic programming: theory and engineering applications.

    Liu Shubao. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 90-98). Abstracts in English and Chinese. Table of contents:
    - Abstract (in English and Chinese); Acknowledgement
    - Chapter 1: Introduction (1.1 Time-Varying Quadratic Optimization; 1.2 Recurrent Neural Networks: From Feedforward to Recurrent Networks, Computational Power and Complexity, Implementation Issues; 1.3 Thesis Organization)
    - Part I: Theory and Models
    - Chapter 2: Linearly Constrained QP (2.1 Model Description; 2.2 Convergence Analysis)
    - Chapter 3: Quadratically Constrained QP (3.1 Problem Formulation; 3.2 Model Description: Model 1 (Dual Model), Model 2 (Improved Dual Model))
    - Part II: Engineering Applications
    - Chapter 4: KWTA Network Circuit Design (4.1 Introduction; 4.2 Equivalent Reformulation; 4.3 KWTA Network Model; 4.4 Simulation Results; 4.5 Conclusions)
    - Chapter 5: Dynamic Control of Manipulators (5.1 Introduction; 5.2 Problem Formulation; 5.3 Simplified Dual Neural Network; 5.4 Simulation Results; 5.5 Concluding Remarks)
    - Chapter 6: Robot Arm Obstacle Avoidance (6.1 Introduction; 6.2 Obstacle Avoidance Scheme: Equality Constrained Formulation, Inequality Constrained Formulation; 6.3 Simplified Dual Neural Network Model: Existing Approaches, Model Derivation, Convergence Analysis, Model Comparison; 6.4 Simulation Results; 6.5 Concluding Remarks)
    - Chapter 7: Multiuser Detection (7.1 Introduction; 7.2 Problem Formulation; 7.3 Neural Network Architecture; 7.4 Simulation Results)
    - Chapter 8: Conclusions and Future Works (8.1 Concluding Remarks; 8.2 Future Prospects)
    - Bibliography
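
    As a flavour of the thesis's subject (a hedged generic sketch, not the simplified dual network derived in the thesis), the following Euler-integrated primal-dual dynamics solve an equality-constrained convex QP, minimize (1/2)x'Qx + c'x subject to Ax = b, the way a continuous-time recurrent network would, and the result is cross-checked against the KKT system. The problem data are invented for the example.

```python
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite => convex QP
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])               # single equality constraint: x1 + x2 = 1
b = np.array([1.0])

x, lam, dt = np.zeros(2), np.zeros(1), 0.01
for _ in range(20000):                       # Euler integration of the network ODE
    x = x + dt * -(Q @ x + c + A.T @ lam)    # primal descent on the Lagrangian
    lam = lam + dt * (A @ x - b)             # dual ascent on constraint violation

print("network:", np.round(x, 4), "  Ax - b =", np.round(A @ x - b, 6))
kkt = np.block([[Q, A.T], [A, np.zeros((1, 1))]])   # [Q A'; A 0][x; lam] = [-c; b]
print("KKT    :", np.round(np.linalg.solve(kkt, np.concatenate([-c, b]))[:2], 4))
```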

    Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting

    In lifelong learning systems, especially those based on artificial neural networks, one of the biggest obstacles is the severe inability to retain old knowledge as new information is encountered, a phenomenon known as catastrophic forgetting. In this article, we propose a new kind of connectionist architecture, the Sequential Neural Coding Network, that is robust to forgetting when learning from streams of data points and, unlike today's networks, does not learn via the immensely popular back-propagation of errors. Grounded in the neurocognitive theory of predictive processing, our model adapts its synapses in a biologically plausible fashion, while a complementary neural system rapidly learns to direct and control this cortex-like structure by mimicking the task-executive control functionality of the basal ganglia. In our experiments, we demonstrate that our self-organizing system experiences significantly less forgetting than standard neural models and outperforms a wide swath of previously proposed methods, even though it is trained across task datasets in a stream-like fashion. The promising performance of our complementary system on benchmarks such as Split MNIST, Split Fashion MNIST, and Split NotMNIST offers evidence that incorporating mechanisms prominent in real neuronal systems, such as competition, sparse activation patterns, and iterative input processing, opens a new possibility for tackling the grand challenge of lifelong machine learning.
    Comment: Key updates including results on standard benchmarks, e.g., Split MNIST/Fashion MNIST/NotMNIST. The task-selection/basal ganglia model has been integrated.
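
    To make the "no back-propagation" claim concrete, here is a hedged miniature of a generic predictive-coding update (illustrative of the family the model belongs to, not the Sequential Neural Coding Network itself): the layer first settles its latent state by reducing its local prediction error, then updates the weights with a purely local, Hebbian-like rule built from that same error. Dimensions and learning rates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_lat = 16, 8
W = rng.normal(0, 0.1, (d_in, d_lat))   # generative weights: latent -> input
x = rng.normal(0, 1, d_in)              # one observed input vector
z = np.zeros(d_lat)                     # latent state to be settled
eta_z, eta_w = 0.1, 0.01

for _ in range(50):                     # inference: relax the latent state
    err = x - W @ z                     # local prediction error at this layer
    z = z + eta_z * (W.T @ err)         # move z to reduce that error
W = W + eta_w * np.outer(err, z)        # learning: local outer-product update

print("residual error norm:", round(float(np.linalg.norm(x - W @ z)), 4))
```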

    Neuronal Auditory Machine Intelligence (NEURO-AMI) In Perspective

    Recent developments in soft computing would not be complete without noting the contributions of artificial neural machine learning systems that draw inspiration from real cortical tissue and processes that occur in the human brain. The universal approximation capability of such neural systems has led to their widespread use, and novel developments in this evolving technology have shown a bright future for such artificial intelligence (AI) techniques in the soft computing field. Indeed, the proliferation of large and very deep artificial neural networks, and the corresponding development of neural machine learning algorithms, have contributed immensely to the modern field of deep learning, as documented in the research works of LeCun, Bengio, and Hinton. However, the key requirements of end-user affordability, reduced complexity, and smaller training-data needs mean there remains a need to synthesize more cost-efficient and less data-hungry artificial neural systems. In this report, we present an overview of a competing bio-inspired continual learning neural tool, Neuronal Auditory Machine Intelligence (Neuro-AMI), as a predictor, detailing its functional and structural properties, important aspects of its applicability, some recent application use cases, and future research directions for current and prospective machine learning experts and data scientists.
    Comment: Journal submission in progress.

    A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence

    This review aims to contribute to the quest for artificial general intelligence by examining neuroscience and cognitive psychology methods for potential inspiration. Despite the impressive advances achieved by deep learning models in various domains, they still have shortcomings in abstract reasoning and causal understanding. Such capabilities should ultimately be integrated into artificial intelligence systems in order to surpass data-driven limitations and support decision making in a way more similar to human intelligence. This work is a vertical review that attempts a wide-ranging exploration of brain function, spanning from lower-level biological neurons, spiking neural networks, and neuronal ensembles to higher-level concepts such as brain anatomy, vector symbolic architectures, cognitive and categorization models, and cognitive architectures. The hope is that these concepts may offer insights for solutions in artificial general intelligence.
    Comment: 143 pages, 49 figures, 244 references.

    Cognitive Learning and Memory Systems Using Spiking Neural Networks

    Ph.D. (Doctor of Philosophy)