
    An investigation of the cortical learning algorithm

    Get PDF
    The fields of pattern recognition and machine learning have revolutionized countless industries and applications, from biometric security to modern industrial assembly lines, and they continue to accelerate as faster, more efficient processing hardware becomes commercially available. Despite this growth, computers are still unable to learn, reason, and perform rudimentary tasks that humans and animals find routine. Animals move fluidly, understand their environment, and maximize their chances of survival through adaptation; in short, animals demonstrate intelligence. A primary argument of this thesis is that pattern recognition and machine learning have not yet achieved a level of intelligence comparable to that of humans and animals, not because of a lack of computational power but because of a lack of understanding of how the cortical structures of the mammalian brain interact and operate. This thesis describes a cortical learning algorithm (CLA) that models how the cortical structures in the mammalian neocortex operate. Furthermore, a high-level account of how these cortical structures interact, store semantic patterns, and auto-recall those patterns for future predictions is presented. Finally, we demonstrate that the algorithm can build and maintain a model of its environment and provide feedback for actions and/or classification in a fashion similar to our understanding of cortical operation.
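
    To make the store-and-recall idea above concrete, the sketch below implements a toy first-order sequence memory over sparse binary patterns: it strengthens transitions between consecutive patterns and recalls the most strongly driven bits as a prediction. This is only an illustrative stand-in, not the thesis's CLA; the class name, pattern sizes, and parameters are hypothetical.

```python
# Toy stand-in for the "store semantic patterns and auto-recall them for
# prediction" idea described above. This is NOT the thesis's CLA: it is a
# minimal first-order transition memory over sparse binary patterns, with
# illustrative names and sizes.
import numpy as np

class ToySequenceMemory:
    def __init__(self, n_bits):
        # learned transition strengths between bits of consecutive patterns
        self.transitions = np.zeros((n_bits, n_bits))

    def learn(self, current, nxt):
        # strengthen connections from bits active now to bits active next
        self.transitions[np.ix_(current, nxt)] += 1.0

    def predict(self, current, top_k=4):
        # recall the bits most strongly driven by the currently active bits
        votes = self.transitions[current].sum(axis=0)
        return np.argsort(votes)[::-1][:top_k]

mem = ToySequenceMemory(n_bits=64)
a, b = [1, 7, 12, 40], [3, 7, 22, 51]     # two sparse patterns (active bits)
for _ in range(10):
    mem.learn(a, b)                        # repeatedly observe "a then b"
print(sorted(mem.predict(a)))              # recalls b's active bits
```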

    Using High-Order Prior Belief Predictions in Hierarchical Temporal Memory for Streaming Anomaly Detection

    Get PDF
    Autonomous streaming anomaly detection can have a significant impact in any domain where continuous, real-time data is common. In these domains, datasets are often too large or complex to label by hand. Algorithms that require expensive global training procedures and large training datasets impose strict demands on data and accordingly do not scale to real-time applications that are noisy and dynamic. Unsupervised algorithms that, like humans, learn continuously therefore offer increased applicability to these real-world scenarios. Hierarchical Temporal Memory (HTM) is a biologically constrained theory of machine intelligence inspired by the structure, activity, organization, and interaction of pyramidal neurons in the neocortex of the primate brain. At the core of HTM are spatio-temporal learning algorithms that store, learn, recall, and predict temporal sequences in an unsupervised and continuous fashion, meeting the demands of real-time tasks. Unlike traditional machine learning and deep learning methods, which amount to complex function approximation, HTM together with the proposed framework requires no offline training procedures, no massive stores of training data, and no data labels; it does not catastrophically forget previously learned information, and it needs only a single pass through the temporal data. This thesis proposes an algorithmic framework built upon HTM for intelligent streaming anomaly detection. Unlike earlier streaming anomaly detection work, the proposed framework uses high-order prior belief predictions in time in an effort to increase the fault tolerance and complex temporal anomaly detection capabilities of the underlying time-series model. Experimental results suggest that the framework, when built upon HTM, redefines state-of-the-art performance on a popular streaming anomaly benchmark. Comparative results with and without the framework on several third-party datasets collected from real-world scenarios also show a clear performance benefit. In principle, the proposed framework can be applied to any time-series modeling algorithm capable of producing high-order predictions.
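
    The following sketch illustrates, under stated assumptions, how high-order prior belief predictions can drive a streaming anomaly score: at each step the model issues belief sets for several future steps, and the score for the current observation is the fraction of previously issued beliefs about that step it violates. It is not the thesis implementation; `PriorBeliefAnomalyScorer` and `predict_fn` are hypothetical names, and the underlying predictor could be an HTM temporal memory or any model producing multi-step predictions.

```python
# Hedged sketch (not the thesis implementation): a generic streaming anomaly
# scorer in the spirit described above. The class and function names are
# illustrative; predict_fn(k) stands for any model that can return a set of
# plausible values k steps ahead (e.g. an HTM temporal memory).
from collections import defaultdict

class PriorBeliefAnomalyScorer:
    def __init__(self, horizon=3):
        self.horizon = horizon            # how many steps ahead beliefs reach
        self.beliefs = defaultdict(list)  # beliefs[t] = sets predicted for step t
        self.t = 0

    def update(self, observed, predict_fn):
        # score the current observation against beliefs issued earlier for it
        priors = self.beliefs.pop(self.t, [])
        score = (sum(1 for b in priors if observed not in b) / len(priors)
                 if priors else 0.0)
        # issue fresh high-order beliefs for the next `horizon` steps
        for k in range(1, self.horizon + 1):
            self.beliefs[self.t + k].append(predict_fn(k))
        self.t += 1
        return score

# Usage with a dummy predictor that always expects the values 0 or 1;
# the out-of-belief value 7 receives the maximum score of 1.0.
scorer = PriorBeliefAnomalyScorer(horizon=2)
for value in [0, 1, 0, 7, 1]:
    print(value, scorer.update(value, lambda k: {0, 1}))
```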

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    Get PDF
    The dynamic formation of groups of neurons, or neuronal assemblies, is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to obtain a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
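
    For reference, the sketch below simulates the standard Kuramoto dynamics that the network model is inspired by and tracks the order parameter r as a measure of synchrony. It is not the authors' network model, and all parameter values are illustrative.

```python
# Minimal sketch of the standard Kuramoto dynamics the network model is
# inspired by (not the authors' exact model):
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# The order parameter r (0 = incoherent, 1 = fully synchronized) serves as a
# crude proxy for assembly formation. All parameter values are illustrative.
import numpy as np

def simulate_kuramoto(n=50, coupling=2.0, dt=0.01, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)   # initial phases
    omega = rng.normal(0.0, 1.0, n)            # natural frequencies
    r_history = []
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta = theta + dt * (omega + (coupling / n) * np.sin(diff).sum(axis=1))
        r_history.append(np.abs(np.exp(1j * theta).mean()))
    return theta, np.array(r_history)

theta, r = simulate_kuramoto()
print(f"synchrony after the run: r = {r[-1]:.2f}")
```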

    Dynamic and Integrative Properties of the Primary Visual Cortex

    Get PDF
    The ability to derive meaning from complex, ambiguous sensory input requires the integration of information over both space and time, as well as cognitive mechanisms to dynamically shape that integration. We have studied these processes in the primary visual cortex (V1), where neurons have been proposed to integrate visual inputs along a geometric pattern known as the association field (AF). We first used cortical reorganization as a model to investigate the role that a specific network of V1 connections, the long-range horizontal connections, might play in temporal and spatial integration across the AF. When retinal lesions ablate sensory information from portions of the visual field, V1 undergoes a process of reorganization mediated by compensatory changes in the network of horizontal collaterals. The reorganization accompanies the brain's remarkable ability to perceptually "fill in", or "see", the lost visual input. We developed a computational model to simulate cortical reorganization and perceptual fill-in mediated by a plexus of horizontal connections that encode the AF. The model reproduces the major features of the perceptual fill-in reported by human subjects with retinal lesions, and it suggests that V1 neurons, empowered by their horizontal connections, underlie both perceptual fill-in and the normal integrative mechanisms that are crucial to our visual perception. These results motivated the second prong of our work, which was to experimentally study the normal integration of information in V1. Since psychophysical and physiological studies suggest that spatial interactions in V1 may be under cognitive control, we investigated the integrative properties of V1 neurons under different cognitive states. We performed extracellular recordings from single V1 neurons in macaques trained to perform a delayed-match-to-sample contour detection task. We found that the ability of V1 neurons to summate visual inputs from beyond the classical receptive field (cRF) imbues them with selectivity for complex contour shapes, and that neuronal shape selectivity in V1 changed dynamically according to the shapes the monkeys were cued to detect. Over the population, V1 encoded subsets of the AF, predicted by the computational model, that shifted as a function of the monkeys' expectations. These results support the major conclusions of the theoretical work; moreover, they reveal a sophisticated mode of form processing, whereby the selectivity of the whole network in V1 is reshaped by cognitive state.
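
    A toy illustration of the fill-in mechanism described above is sketched below, under the simplifying assumption that the plexus of horizontal connections can be stood in for by nearest-neighbor lateral averaging; the authors' model encodes the full association field, which this sketch does not.

```python
# Toy illustration (not the authors' model): perceptual fill-in as lateral
# propagation of activity into a region deprived of feedforward input. A
# nearest-neighbor average stands in for the association-field-shaped plexus
# of horizontal connections; grid size and lesion extent are illustrative.
import numpy as np

def fill_in(feedforward, lesion_mask, iterations=200):
    """feedforward: 2-D activity map; lesion_mask: True where input is lost."""
    activity = np.where(lesion_mask, 0.0, feedforward)
    for _ in range(iterations):
        lateral = 0.25 * (np.roll(activity, 1, 0) + np.roll(activity, -1, 0)
                          + np.roll(activity, 1, 1) + np.roll(activity, -1, 1))
        # deprived units are driven laterally; intact units keep their input
        activity = np.where(lesion_mask, lateral, feedforward)
    return activity

grid = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # smooth "visual" input
lesion = np.zeros((32, 32), dtype=bool)
lesion[12:20, 12:20] = True                          # simulated lesion zone
filled = fill_in(grid, lesion)
print(np.abs(filled - grid)[lesion].max())           # small residual error
```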

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    Full text link
    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for maturing the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
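
    As an example of the simulator-independent model description mentioned above, the sketch below defines a small network in PyNN. A software simulator backend (pyNN.nest) stands in for the hardware backend, whose module name is not given here, and the cell and synapse parameters are illustrative.

```python
# Hedged example of a backend-agnostic PyNN model description (the paper's
# point is that the same script can also target the hardware system). Here
# the software backend pyNN.nest stands in for the hardware backend;
# parameters are illustrative.
import pyNN.nest as sim

sim.setup(timestep=0.1)  # ms

stimulus = sim.Population(10, sim.SpikeSourcePoisson(rate=20.0))
neurons = sim.Population(20, sim.IF_cond_exp(tau_m=20.0))

sim.Projection(stimulus, neurons,
               sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0),
               receptor_type="excitatory")

neurons.record("spikes")
sim.run(1000.0)  # ms

segment = neurons.get_data().segments[0]
print(sum(len(st) for st in segment.spiketrains), "spikes recorded")
sim.end()
```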

    An Adaptive Locally Connected Neuron Model: Focusing Neuron

    Full text link
    This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive and differentiable local connectivity (plasticity) applicable to any domain. It requires no tool other than the backpropagation algorithm to learn its parameters, which control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks with one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrate that focusing neurons can move their receptive fields towards more informative inputs. In simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. Tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets.
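
    The sketch below gives one plausible reading of such a focusing layer, assuming a Gaussian window with a trainable center and aperture over a 1-D input topology; it is not the paper's exact formulation, and the class and parameter names are illustrative.

```python
# Hedged sketch (not the paper's exact formulation): a "focusing" layer whose
# units weight their inputs with a Gaussian window over a 1-D input topology;
# the window's center (mu) and aperture (sigma) are trainable, so plain
# backpropagation can move and resize each receptive field.
import torch
import torch.nn as nn

class FocusingLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # fixed, normalized positions of the inputs along the topology
        self.register_buffer("pos", torch.linspace(0.0, 1.0, in_features))
        # one trainable receptive-field center and log-aperture per unit
        self.mu = nn.Parameter(torch.rand(out_features, 1))
        self.log_sigma = nn.Parameter(torch.full((out_features, 1), -1.0))

    def forward(self, x):
        sigma = self.log_sigma.exp()
        focus = torch.exp(-0.5 * ((self.pos - self.mu) / sigma) ** 2)  # (out, in)
        return x @ (self.weight * focus).t() + self.bias

layer = FocusingLayer(in_features=784, out_features=32)  # e.g. flattened 28x28
print(layer(torch.randn(8, 784)).shape)                  # torch.Size([8, 32])
```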

    Spatial Learning and Action Planning in a Prefrontal Cortical Network Model

    Get PDF
    The interplay between the hippocampus and the prefrontal cortex (PFC) is fundamental to spatial cognition. Complementing hippocampal place coding, prefrontal representations provide more abstract and hierarchically organized memories suitable for decision making. We model a prefrontal network mediating distributed information processing for spatial learning and action planning. Specific connectivity and synaptic adaptation principles shape the recurrent dynamics of the network, which is arranged in cortical minicolumns. We show how the PFC columnar organization is suitable for learning sparse topological-metrical representations from redundant hippocampal inputs. The recurrent nature of the network supports multilevel spatial processing, allowing structural features of the environment to be encoded. An activation diffusion mechanism spreads the neural activity through the column population, leading to trajectory planning. The model provides a functional framework for interpreting the activity of PFC neurons recorded during navigation tasks. We illustrate the link from single-unit activity to behavioral responses. The results suggest plausible neural mechanisms subserving the cognitive "insight" capability originally attributed to rodents by Tolman & Honzik. Our time-course analysis of neural responses shows how the interaction between the hippocampus and the PFC can yield the encoding of manifold information pertinent to spatial planning, including prospective coding and distance-to-goal correlates.
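
    A minimal sketch of activation-diffusion planning on a topological map follows, assuming activity is injected at the goal and spread with decay over a place graph, with the trajectory read out by greedy ascent on the resulting gradient; the graph, decay constant, and function name are illustrative rather than taken from the model.

```python
# Hedged illustration (not the authors' column network): activation-diffusion
# planning on a topological map. Activity injected at the goal node spreads
# through the graph with decay; greedily climbing the resulting activation
# gradient from the start yields a trajectory toward the goal.
import numpy as np

def plan_by_diffusion(adjacency, start, goal, decay=0.7, iterations=50):
    n = adjacency.shape[0]
    activation = np.zeros(n)
    for _ in range(iterations):
        spread = decay * (adjacency @ activation) / np.maximum(adjacency.sum(1), 1)
        activation = np.maximum(activation, spread)
        activation[goal] = 1.0               # the goal node stays fully active
    path, current = [start], start
    for _ in range(n):                       # path length is bounded by |nodes|
        if current == goal:
            break
        neighbors = np.flatnonzero(adjacency[current])
        current = int(neighbors[np.argmax(activation[neighbors])])
        path.append(current)
    return path

# Six places on a ring with one shortcut (3 <-> 5)
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (3, 5)]:
    A[i, j] = A[j, i] = 1
print(plan_by_diffusion(A, start=0, goal=4))  # -> [0, 5, 4]
```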