1,260 research outputs found

    An Integrated Neural Network-Event-Related Potentials Model of Temporal and Probability Context Effects on Event Categorization

    Full text link
    We present a neural network that adapts and integrates several preexisting or new modules to categorize events in short-term memory (STM), encode temporal order in working memory, and evaluate timing and probability context in medium- and long-term memory. The model shows how processed contextual information modulates event recognition and categorization, focal attention, and incentive motivation. It is based on a compendium of Event-Related Potential (ERP) and behavioral results, either collected by the authors or compiled from the classical ERP literature. Its hallmark is, at the functional level, the interplay of memory registers endowed with widely different dynamical ranges and, at the structural level, the attempt to relate the different modules to known anatomical structures.
    Funding: INSERM; NATO; DGA/DRET (911470/A000/DRET/DS/DR
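    The interplay of memory registers with widely different dynamical ranges can be made concrete with a toy sketch: three leaky integrators whose time constants span the short-, medium-, and long-term regimes respond very differently to the same event train. The integrator form and time-constant values below are illustrative assumptions, not parameters of the model.

```python
import numpy as np

# Three memory registers as leaky integrators whose time constants span
# short-, medium-, and long-term ranges. Values are hypothetical.
TAUS = {"STM": 0.5, "WM": 5.0, "LTM": 500.0}   # seconds (illustrative)
DT = 0.1

def run_registers(events, t_max=60.0):
    """Drive each register with the same event train and return its trace."""
    steps = int(t_max / DT)
    traces = {name: np.zeros(steps) for name in TAUS}
    for name, tau in TAUS.items():
        x = 0.0
        for i in range(steps):
            drive = 1.0 if any(abs(i * DT - t) < DT for t in events) else 0.0
            x += DT * (-x / tau + drive)   # leaky integration
            traces[name][i] = x
    return traces

traces = run_registers(events=[1.0, 2.0, 30.0])
# The STM trace decays between events; the LTM trace accumulates context
# across the whole minute, giving each register a different "context" signal.
```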

    Lifelong 3D object recognition and grasp synthesis using dual memory recurrent self-organization networks

    Get PDF
    Humans learn to recognize and manipulate new objects in lifelong settings without forgetting previously gained knowledge, even under non-stationary and sequential conditions. Autonomous agents likewise need to continually learn new object categories and adapt to new environments. In most conventional deep neural networks this is not possible due to catastrophic forgetting, where newly gained knowledge overwrites existing representations. Furthermore, most state-of-the-art models excel either at recognizing objects or at grasp prediction, even though both tasks use visual input; combined architectures that tackle both tasks are very limited. In this paper, we propose a hybrid architecture consisting of a dynamically growing dual-memory recurrent neural network (GDM) and an autoencoder to tackle object recognition and grasping simultaneously. The autoencoder extracts a compact representation of a given object, which serves as input for GDM learning, and also predicts pixel-wise antipodal grasp configurations. The GDM part is designed to recognize the object at both the instance and category levels. We address catastrophic forgetting using intrinsic memory replay, in which the episodic memory periodically replays neural activation trajectories in the absence of external sensory information. To evaluate the proposed model extensively in a lifelong setting, we generate a synthetic dataset, owing to the lack of a sequential 3D object dataset. Experimental results demonstrate that the proposed model can learn both object representation and grasping simultaneously in continual learning scenarios.
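    A minimal sketch of the intrinsic-replay idea, assuming a toy prototype memory in place of the GDM networks: feature vectors stand in for the autoencoder's compact representations, and stored episodic traces are periodically rehearsed without external input so earlier categories are not overwritten. The `ReplayMemory` class and its learning-rate and radius values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayMemory:
    def __init__(self, lr=0.1, radius=3.0):
        self.prototypes, self.labels, self.episodes = [], [], []
        self.lr, self.radius = lr, radius

    def _update(self, x, label):
        # Move the best-matching prototype toward x, or grow a new node.
        if self.prototypes:
            d = [np.linalg.norm(x - p) for p in self.prototypes]
            i = int(np.argmin(d))
            if d[i] < self.radius:
                self.prototypes[i] += self.lr * (x - self.prototypes[i])
                return
        self.prototypes.append(x.copy())
        self.labels.append(label)

    def learn(self, x, label):
        self.episodes.append((x.copy(), label))   # store an episodic trace
        self._update(x, label)

    def replay(self, n=5):
        # Rehearse stored traces in the absence of external sensory input.
        picks = rng.choice(len(self.episodes),
                           size=min(n, len(self.episodes)), replace=False)
        for i in picks:
            self._update(*self.episodes[i])

mem = ReplayMemory()
for step in range(100):                    # sequential, non-stationary stream
    category = step // 50                  # the input distribution shifts
    mem.learn(rng.normal(3.0 * category, 0.5, size=16), category)
    if step % 10 == 9:
        mem.replay()                       # periodic intrinsic replay
```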

    Single-trial analysis of EEG during rapid visual discrimination: enabling cortically-coupled computer vision

    Get PDF
    We describe our work using linear discrimination of multi-channel electroencephalography for single-trial detection of neural signatures of visual recognition events. We demonstrate the approach as a methodology for relating neural variability to response variability, describing studies of response accuracy and response latency during visual target detection. We then show how the approach can be used to construct a novel type of brain-computer interface, which we term cortically-coupled computer vision. In this application, a large database of images is triaged using the detected neural signatures. We show how 'cortical triaging' improves image search over a strictly behavioral response.
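    A hedged sketch of the core mechanism on simulated data: a Fisher linear discriminant separates "target" from "non-target" multi-channel trials, and its projection is used as an interest score for ranking images, which is the essence of cortical triage. The dimensions, signal strength, and use of a plain Fisher discriminant are illustrative assumptions; the paper's discriminators operate on time windows of real EEG.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_trials = 32, 200

# Non-target trials: noise. Target trials: noise plus a fixed spatial signature.
signature = rng.normal(size=n_channels)
X0 = rng.normal(size=(n_trials, n_channels))
X1 = rng.normal(size=(n_trials, n_channels)) + 0.5 * signature

# Fisher linear discriminant: w = Sw^-1 (mu1 - mu0), Sw the pooled covariance.
Sw = np.cov(X0.T) + np.cov(X1.T)
w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

# "Interest scores" for triaging: trials whose projection is highest are
# ranked first, so target images surface early in the search.
scores = np.r_[X0 @ w, X1 @ w]
labels = np.r_[np.zeros(n_trials), np.ones(n_trials)]
ranked = labels[np.argsort(-scores)]
print("fraction of targets in top 10% of ranking:", ranked[: n_trials // 5].mean())
```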

    Delay Learning Architectures for Memory and Classification

    Full text link
    We present a neuromorphic spiking neural network, the DELTRON, that can remember and store patterns by changing the delays of every connection, as opposed to modifying the weights. The advantage of this architecture over traditional weight-based ones is a simpler hardware implementation, without multipliers or digital-analog converters (DACs), and suitability for time-based computing. The name derives from the similarity of its learning rule to that of an earlier architecture called the Tempotron. The DELTRON can remember more patterns than other delay-based networks by modifying only a few delays to capture the most 'salient', or synchronous, part of every spike pattern. We present simulations of the memory capacity and classification ability of the DELTRON for different random spatio-temporal spike patterns. The memory capacity for noisy spike patterns and missing spikes is also shown. Finally, we present SPICE simulation results for the core circuits involved in a reconfigurable mixed-signal implementation of this architecture.
    Comment: 27 pages, 20 figures
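    A minimal sketch of the weight-free storage principle, assuming a single coincidence-detecting output neuron: each connection delay is set so the spikes of the stored pattern arrive aligned at a common target time, and recall checks how many delayed spikes coincide. The constants and the "set every delay" rule are illustrative; the DELTRON itself modifies only a few delays per pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs, t_target, window = 50, 20.0, 1.0   # ms; illustrative values

pattern = rng.uniform(0, 15, size=n_inputs)  # input spike times (ms)
delays = t_target - pattern                  # store: align arrivals at t_target

def recall(spike_times, threshold=0.8 * n_inputs):
    # Output fires if enough delayed spikes coincide near the target time.
    arrivals = spike_times + delays
    coincident = np.sum(np.abs(arrivals - t_target) < window)
    return coincident >= threshold

print(recall(pattern))                                   # stored pattern: True
print(recall(pattern + rng.normal(0, 0.3, n_inputs)))    # jittered: likely True
print(recall(rng.uniform(0, 15, size=n_inputs)))         # random: likely False
```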

    Towards a Unified Theory of Neocortex: Laminar Cortical Circuits for Vision and Cognition

    Full text link
    A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
    Funding: National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
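    The Item-and-Order (Competitive Queuing) account cited above admits a compact sketch: a sequence is stored as a primacy gradient of item activations, and read-out repeatedly selects the most active item and then suppresses it. The gradient decay factor below is an illustrative assumption, not a LIST PARSE parameter.

```python
import numpy as np

items = ["A", "B", "C", "D", "E"]

# Storage: earlier items get stronger activation (primacy gradient).
activation = np.array([0.8**i for i in range(len(items))])

# Read-out: winner-take-all choice followed by suppression of the winner.
recalled = []
act = activation.copy()
for _ in range(len(items)):
    winner = int(np.argmax(act))
    recalled.append(items[winner])
    act[winner] = -np.inf          # inhibit the just-performed item
print(recalled)                    # ['A', 'B', 'C', 'D', 'E']
```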

    Gain control network conditions in early sensory coding

    Get PDF
    Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing the intensity of the sensory stimulus both recruits an increasing number of neurons that receive input and changes the input current that they receive. Using a mean-field approximation for the network activity, we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find, first, that the main parameters regulating network gain are the probabilities of connections from the inhibitory population to the excitatory population and of connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain-control conditions derived from the mean-field approximation hold in simulations of firing-rate models and Hodgkin-Huxley conductance-based models.
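    A minimal sketch of the first scenario, probing gain numerically in a random threshold-linear rate network: stimulus intensity scales the input current to a fixed subset of excitatory and inhibitory neurons, and mean excitatory activity is compared across intensities. The connection probabilities and weights below are illustrative, not the paper's derived gain-control condition; with arbitrary parameters the output generally grows with intensity, and the paper's contribution is precisely the parameter relationships that keep it flat.

```python
import numpy as np

rng = np.random.default_rng(3)
nE, nI, dt, T = 200, 50, 0.1, 200

def simulate(intensity, p_EI=0.5, p_II=0.5):
    # Random connectivity; p_XY is the probability of a connection Y -> X.
    W_EE = (rng.random((nE, nE)) < 0.1) * 0.02
    W_EI = (rng.random((nE, nI)) < p_EI) * -0.15
    W_IE = (rng.random((nI, nE)) < 0.3) * 0.05
    W_II = (rng.random((nI, nI)) < p_II) * -0.10
    rE, rI = np.zeros(nE), np.zeros(nI)
    driveE = np.zeros(nE); driveE[: nE // 2] = intensity  # fixed input subset
    driveI = np.zeros(nI); driveI[: nI // 2] = intensity
    for _ in range(T):
        inE = W_EE @ rE + W_EI @ rI + driveE
        inI = W_IE @ rE + W_II @ rI + driveI
        rE += dt * (-rE + np.maximum(inE, 0))   # threshold-linear rates
        rI += dt * (-rI + np.maximum(inI, 0))
    return rE.mean()

# Probe the network's gain: how does mean E activity scale with intensity?
for s in (1.0, 2.0, 4.0, 8.0):
    print(s, round(simulate(s), 3))
```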

    Dynamic signatures of stress

    Get PDF

    Optimization and improvement of a robotics gaze control system using LSTM networks

    Get PDF
    Gaze control is an important issue in the interaction between a robot and humans. Specifically, deciding whom to pay attention to in a multi-party conversation is one way to improve the naturalness of a robot in human-robot interaction. This control can be carried out by two different models that receive the stimuli produced by the participants in an interaction: either an on-center off-surround competitive network or a recurrent neural network. A system based on a competitive neural network is able to decide whom to look at, with a smooth transition in the focus of attention when significant changes in stimuli occur. An important aspect of this process is the configuration of the network's parameters: the weights of the different stimuli have to be computed to achieve human-like behavior. This article explains how these weights can be obtained by solving an optimization problem. In addition, a new model using a recurrent neural network with LSTM layers is presented. This model uses the same set of stimuli but does not require weighting them. The new model is easier to train, avoids manual configuration, and offers promising results in robot gaze control. The experiments carried out and some results are also presented.
    Funding: Ministerio de Ciencia, Innovación y Universidades (project TI2018-096652-B-I00); Junta de Castilla y León - Fondo Europeo de Desarrollo Regional (grant VA233P18)
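    A minimal sketch of the on-center off-surround competitive model, assuming standard shunting dynamics: each participant's activity is excited by its own stimulus and inhibited by everyone else's, and the robot attends to the most active participant, whose identity shifts smoothly when stimuli change. The constants and stimulus values are illustrative, not the article's optimized weights.

```python
import numpy as np

A, B, dt = 1.0, 1.0, 0.05
x = np.zeros(3)                                  # three participants

def step(x, s):
    # Shunting on-center off-surround: own stimulus excites toward ceiling B,
    # the other participants' stimuli inhibit toward zero.
    inhib = s.sum() - s
    return x + dt * (-A * x + (B - x) * s - x * inhib)

# Participant 0 speaks first; the salient stimulus then moves to participant 1.
stimuli = [np.array([1.0, 0.2, 0.1])] * 40 + [np.array([0.1, 1.0, 0.1])] * 40
for t, s in enumerate(stimuli):
    x = step(x, s)
    if t % 20 == 0:
        print(t, "focus -> participant", int(np.argmax(x)), np.round(x, 2))
```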

    Area-efficient Neuromorphic Silicon Circuits and Architectures using Spatial and Spatio-Temporal Approaches

    Get PDF
    In the field of neuromorphic VLSI, connectivity is a major bottleneck in implementing brain-inspired circuits, due to the large number of synapses needed for performing brain-like functions (e.g., pattern recognition, classification). In this thesis I address this problem using a two-pronged approach: spatial and temporal.

    Spatial: The real estate occupied by silicon synapses has been an impediment to implementing neuromorphic circuits. In recent years, memristors have emerged as a nano-scale analog synapse. Furthermore, these nano-devices can be integrated on top of CMOS chips, enabling the realization of dense neural networks. As a first step in realizing this vision, a programmable CMOS chip enabling direct integration of memristors was realized. In a collaborative MURI project, a CMOS memory platform was designed for the memristive memory array in a hybrid/3D architecture (CMOL architecture), and memristors were successfully integrated on top of it. After demonstrating the feasibility of post-CMOS integration of memristors, a second design containing an array of spiking CMOS neurons was designed on a 5 mm x 5 mm chip in a 180 nm CMOS process to explore the role of memristors as synapses in neuromorphic chips.

    Temporal: While physical miniaturization by integrating memristors is one facet of realizing area-efficient neural networks, on-chip routing between silicon neurons prevents the complete realization of complex networks containing large numbers of neurons. A promising solution to the connectivity problem is to employ spatio-temporal coding, encoding neuronal information in the arrival times of spikes. Temporal codes open up a whole new range of coding schemes which are not only energy efficient (computation with one spike) but also have much larger information capacity than their conventional counterparts. This can reduce the number of connections needed to perform tasks comparable to traditional rate-based methods.

    By choosing an efficient temporal coding scheme, we developed a system architecture in which pattern classification is done using a “winners-share-all” rather than a “winner-takes-all” mechanism. Winner-takes-all limits the code space to the number of output neurons: n output neurons can classify only n patterns. Winners-share-all exploits the code space provided by the temporal code by training different combinations of k out of n neurons to fire together in response to different patterns. Optimal values of k that maximize the information capacity of n output neurons were determined theoretically and used. An unsupervised three-layer network was trained to classify 14 patterns of 15 x 15 pixels using only 6 output neurons, demonstrating the power of the technique. The reduction in the number of output neurons reduces the number of training parameters, and thus the power, area, and memory required for the same functionality.
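    A worked check of the winners-share-all code space: with n output neurons and k co-active winners, the number of distinguishable patterns is C(n, k), versus only n for winner-takes-all. For the n = 6 output neurons used in the experiment, k = 3 maximizes the code space at 20 codes, which comfortably covers the 14 patterns.

```python
from math import comb

n = 6
for k in range(1, n + 1):
    print(f"k={k}: C({n},{k}) = {comb(n, k)} patterns")

# Pick the k that maximizes the number of k-out-of-n firing combinations.
best_k = max(range(1, n + 1), key=lambda k: comb(n, k))
print("optimal k:", best_k, "capacity:", comb(n, best_k))  # k=3 -> 20 >= 14
```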