
    A survey of visual preprocessing and shape representation techniques

    Get PDF
    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention)
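
    Since the survey is motivated by preprocessing images for a sparse distributed memory, a minimal sketch of Kanerva-style SDM storage and recall may help orient readers; the dimensions, activation radius, and noise level below are illustrative assumptions, not values taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 1000          # address width, number of hard locations (illustrative)
RADIUS = 111              # Hamming-distance activation radius (illustrative)

hard_addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard locations
counters = np.zeros((M, N), dtype=int)             # bipolar counters per location

def activate(address):
    """Return indices of hard locations within RADIUS of the query address."""
    dist = np.sum(hard_addresses != address, axis=1)
    return np.where(dist <= RADIUS)[0]

def write(address, data):
    """Add the bipolar form of `data` to every activated location's counters."""
    counters[activate(address)] += 2 * data - 1

def read(address):
    """Sum counters of activated locations and threshold back to binary."""
    s = counters[activate(address)].sum(axis=0)
    return (s > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                   # autoassociative storage
noisy = pattern.copy()
flip = rng.choice(N, size=20, replace=False)
noisy[flip] ^= 1                          # corrupt 20 bits of the cue
print(np.mean(read(noisy) == pattern))    # fraction of bits recovered from the noisy cue
```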

    Cortical region interactions and the functional role of apical dendrites

    Get PDF
    The basal and distal apical dendrites of pyramidal cells occupy distinct cortical layers and are targeted by axons originating in different cortical regions. Hence, apical and basal dendrites receive information from distinct sources. Physiological evidence suggests that this anatomically observed segregation of input sources may have functional significance. This possibility has been explored in various connectionist models that employ neurons with functionally distinct apical and basal compartments. A neuron in which separate sets of inputs can be integrated independently has the potential to operate in a variety of ways that are not possible for the conventional model of a neuron in which all inputs are treated equally. This article thus considers how functionally distinct apical and basal dendrites can contribute to the information processing capacities of single neurons and, in particular, how information from different cortical regions could have disparate effects on neural activity and learning
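
    As a hedged illustration of a neuron that integrates two input sets independently, the sketch below implements one common connectionist simplification in which apical input multiplicatively modulates the response driven by basal input; the specific functional form and parameters are assumptions for illustration, not the article's model.

```python
import numpy as np

def two_compartment_rate(basal_in, apical_in, w_basal, w_apical, k=1.0):
    """Basal drive sets *whether* the cell responds; apical drive scales *how strongly*.

    basal_in, apical_in : input vectors to the two dendritic compartments
    w_basal, w_apical   : separate weight vectors for each compartment
    k                   : strength of apical gain modulation (assumed parameter)
    """
    b = np.dot(w_basal, basal_in)                   # basal integration
    a = np.dot(w_apical, apical_in)                 # apical integration, kept separate
    drive = np.maximum(b, 0.0)                      # rectified feedforward response
    gain = 1.0 + k * np.tanh(np.maximum(a, 0.0))    # apical input only amplifies
    return drive * gain

rng = np.random.default_rng(1)
x_ff = rng.random(10)       # e.g. feedforward input from a lower cortical region
x_fb = rng.random(10)       # e.g. feedback input from a higher cortical region
w_b, w_a = rng.random(10), rng.random(10)
print(two_compartment_rate(x_ff, x_fb, w_b, w_a))           # modulated response
print(two_compartment_rate(x_ff, np.zeros(10), w_b, w_a))   # basal drive alone
```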

    Improving Associative Memory in a Network of Spiking Neurons

    Get PDF
    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first introduced over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories can be recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited by the simplicity of the hard-limiting nodes, which typically use a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based on many of the details now known about the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.
    The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the cortical pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry, we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and the effect of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular, we implement a spiking neural network proposed by Sommer and Wennekers (2001) and consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. First, we apply localised disynaptic inhibition, which scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition that helps synchronise network activity.
    Second, we add a persistent sodium channel to the cell body, which makes the activation threshold nonlinear: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely the high units) over the firing threshold. Finally, we incorporate spatial characteristics of the dendritic tree, which increases the probability that a modified synapse exists after 10% random connectivity has been applied throughout the network; we do this by scaling the conductance weights of excitatory synapses to simulate the attenuation, due to increased resistance, of synapses in the outer dendritic regions. To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and for their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, suggesting that inhibition and cellular dynamics play a pivotal role in learning and memory
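
    For orientation, the sketch below shows the kind of two-state autoassociative network the thesis takes as its starting point: Hebbian storage with clipped binary weights and one-step recall from a partial cue, in the spirit of Willshaw-style nets. Network size, sparsity, and threshold are illustrative choices, not the thesis parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
N, ACTIVE, PATTERNS = 100, 10, 15       # units, active units per pattern, stored patterns

# Each stored pattern is a sparse binary vector with ACTIVE ones.
patterns = np.zeros((PATTERNS, N), dtype=int)
for p in patterns:
    p[rng.choice(N, ACTIVE, replace=False)] = 1

# Hebbian storage with clipped (0/1) weights: w_ij = 1 if units i and j were ever coactive.
W = np.zeros((N, N), dtype=int)
for p in patterns:
    W |= np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(cue, threshold):
    """One-step recall: each unit sums input from active cue units and thresholds."""
    dendritic_sum = W @ cue
    return (dendritic_sum >= threshold).astype(int)

target = patterns[0]
cue = target.copy()
cue[np.where(cue == 1)[0][ACTIVE // 2:]] = 0     # keep only half of the active units
out = recall(cue, threshold=ACTIVE // 2)         # threshold = number of active cue units
print("correct active units:", np.sum(out * target),
      "spurious active units:", np.sum(out * (1 - target)))
```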

    Modeling the Bat Spatial Navigation System: A Neuromorphic VLSI Approach

    Get PDF
    Autonomously navigating robots have long been a tough challenge facing engineers. The recent push to develop micro-aerial vehicles for practical military, civilian, and industrial use has added significant power and time constraints to the challenge. In contrast, animals, from insects to humans, have been navigating successfully for millennia using a wide range of variants of the ultra-low-power computational system known as the brain. For this reason, we look to biological systems to inspire a solution suitable for autonomously navigating micro-aerial vehicles. In this dissertation, the focus is on studying the neurobiological structures involved in mammalian spatial navigation. The cell types in the mammalian brain widely believed to contribute directly to navigation tasks are head direction cells, grid cells, and place cells, found in the postsubiculum, the medial entorhinal cortex, and the hippocampus, respectively. In addition to studying the neurobiological structures involved in navigation, we investigate various neural models that seek to explain the operation of these structures and adapt them to neuromorphic VLSI circuits and systems. We choose the neuromorphic approach for our systems because we are interested in understanding the interaction between the real-time, physical implementation of the algorithms and the real-world problem (robot and environment). By utilizing both analog and asynchronous digital circuits to mimic similar computations in neural systems, we envision very low power VLSI implementations suitable for providing practical solutions for spatial navigation in micro-aerial vehicles
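
    As one concrete example of the neural models referred to, the standard descriptive account of a grid cell's hexagonal firing pattern as a thresholded sum of three plane waves 60 degrees apart can be written in a few lines; the spacing, orientation, and peak rate below are illustrative, and this is not the dissertation's VLSI implementation.

```python
import numpy as np

def grid_cell_rate(pos, spacing=0.5, orientation=0.0, phase=(0.0, 0.0), peak_rate=10.0):
    """Idealised grid-cell firing rate at 2-D position(s) `pos` (shape (..., 2)).

    The rate is a thresholded sum of three plane waves 60 degrees apart,
    a standard descriptive model of the hexagonal grid-cell firing pattern.
    """
    pos = np.asarray(pos, dtype=float) - np.asarray(phase)
    k = 4 * np.pi / (np.sqrt(3) * spacing)          # wave number for the chosen grid spacing
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    waves = [np.cos(k * (pos[..., 0] * np.cos(a) + pos[..., 1] * np.sin(a))) for a in angles]
    g = np.sum(waves, axis=0) / 3.0                 # peaks on a hexagonal lattice
    return peak_rate * np.maximum(g, 0.0) ** 2      # rectify and sharpen the firing fields

xs = np.linspace(0.0, 2.0, 5)
grid_pts = np.stack(np.meshgrid(xs, xs), axis=-1)   # small 5x5 patch of positions (metres)
print(np.round(grid_cell_rate(grid_pts), 1))        # hexagonally arranged firing fields
```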

    IST Austria Thesis

    Get PDF
    CA3 pyramidal neurons are thought to play a key role in memory storage and pattern completion through activity-dependent synaptic plasticity at CA3-CA3 recurrent excitatory synapses. To examine the induction rules of synaptic plasticity at CA3-CA3 synapses, we performed whole-cell patch-clamp recordings in acute hippocampal slices from rats (postnatal days 21-24) at room temperature. Compound excitatory postsynaptic potentials (EPSPs) were recorded by tract stimulation in stratum oriens in the presence of 10 µM gabazine. High-frequency stimulation (HFS) induced N-methyl-D-aspartate (NMDA) receptor-dependent long-term potentiation (LTP). Although LTP induced by HFS did not require postsynaptic spikes, it was blocked by Na+-channel blockers, suggesting that local active processes (e.g., dendritic spikes) may contribute to LTP induction without the requirement of a somatic action potential (AP). We next examined the properties of spike timing-dependent plasticity (STDP) at CA3-CA3 synapses. Unexpectedly, low-frequency pairing of EPSPs and backpropagated action potentials (bAPs) induced LTP, independent of temporal order. The STDP curve was symmetric and broad, with a half-width of ~150 ms. Consistent with these specific STDP induction properties, post-presynaptic sequences led to a supralinear summation of spine [Ca2+] transients. Furthermore, in autoassociative network models, storage and recall were substantially more robust with symmetric than with asymmetric STDP rules. In conclusion, we found associative forms of LTP at CA3-CA3 recurrent collateral synapses with distinct induction rules. LTP induced by HFS may be associated with dendritic spikes. In contrast, low-frequency pairing of pre- and postsynaptic activity induced LTP only when EPSPs and APs occurred close together in time. Together, these induction mechanisms of synaptic plasticity may contribute to memory storage in the CA3-CA3 microcircuit at different ranges of activity
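
    A small sketch of the contrast between the symmetric STDP window reported here and a classical asymmetric window may be useful; the symmetric curve is modeled as a Gaussian with the reported ~150 ms half-width, while all other amplitudes and time constants are illustrative assumptions.

```python
import numpy as np

def symmetric_stdp(dt_ms, a_plus=1.0, half_width_ms=150.0):
    """Symmetric STDP: potentiation depends only on |dt|, not on spike order.
    half_width_ms is the full width at half maximum (~150 ms reported for CA3-CA3)."""
    sigma = half_width_ms / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma
    return a_plus * np.exp(-np.asarray(dt_ms, float) ** 2 / (2.0 * sigma ** 2))

def asymmetric_stdp(dt_ms, a_plus=1.0, a_minus=0.5, tau_ms=20.0):
    """Classical asymmetric STDP: pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau_ms),
                    -a_minus * np.exp(dt_ms / tau_ms))

dts = np.array([-100.0, -20.0, 0.0, 20.0, 100.0])   # post-minus-pre spike time differences (ms)
print(np.round(symmetric_stdp(dts), 3))              # same sign and size for +/- dt
print(np.round(asymmetric_stdp(dts), 3))             # sign flips with spike order
```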

    Implementation of dynamical systems with plastic self-organising velocity fields

    Get PDF
    As an alternative to neural networks, dynamical systems whose vector fields are plastic and self-organising were recently introduced to describe learning. Such a system automatically modifies its velocity vector field in response to external stimuli. In the simplest case, under certain conditions, its vector field develops into the gradient of a multi-dimensional probability density distribution of the stimuli. We illustrate with examples how such a system carries out categorisation, pattern recognition, memorisation and forgetting without any supervision. [Continues.]
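
    A rough sketch of the idea, assuming (as the abstract states) that the velocity field tends toward the gradient of a probability density of the stimuli: here the density is a kernel estimate with decaying weights, so the state drifts to density peaks (memorisation) while old stimuli fade (forgetting). The kernel, learning scheme, and step sizes are illustrative assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(3)
BANDWIDTH, DECAY = 0.3, 0.999           # kernel width and forgetting rate (illustrative)
stored, weights = [], []                # stimuli seen so far, with slowly decaying weights

def observe(stimulus):
    """Store a stimulus and let the influence of older ones decay (forgetting)."""
    global weights
    weights = [w * DECAY for w in weights] + [1.0]
    stored.append(np.asarray(stimulus, float))

def velocity(x):
    """Velocity field proportional to the gradient of a Gaussian kernel density estimate
    of the stimuli, so the state drifts toward density peaks (memorised categories)."""
    x = np.asarray(x, float)
    num = np.zeros_like(x)
    for s, w in zip(stored, weights):
        diff = s - x
        num += w * np.exp(-diff @ diff / (2 * BANDWIDTH ** 2)) * diff
    return num / (sum(weights) * BANDWIDTH ** 2)

for _ in range(100):                    # stimuli drawn around two category centres
    observe(rng.normal([1.0, 1.0], 0.2))
    observe(rng.normal([-1.0, -1.0], 0.2))

x = np.array([0.6, 0.7])                # noisy cue; integrate dx/dt = v(x) with Euler steps
for _ in range(500):
    x += 0.01 * velocity(x)
print(np.round(x, 2))                   # settles near the closer category centre [1, 1]
```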

    The hippocampal formation from a machine learning perspective

    Get PDF
    Nowadays, sensor devices are able to generate huge amounts of data in short periods of time. In many situations, the data collected by different sensors translate a specific phenomenon but are presented in very different types and formats. In these cases, it is hard to determine how these distinct types of data are related to each other or whether they translate a certain condition. In this context, it would be of great importance to develop a system capable of analysing such data in the smallest amount of time possible to produce valid information. The animal brain is a biological organ capable of doing something similar with the information obtained through the senses. Inside the brain there is an element called the Hippocampus, situated in the temporal lobe area. Its main function is to analyse the sensorial data encoded by the Entorhinal Cortex in order to create new memories. Since the Hippocampus has evolved over a long time to perform these tasks, it is important to try to understand its functioning and to model it, i.e. to define a set of computer algorithms that approximates it. Since the removal of the Hippocampus from a patient suffering from seizures, the scientific community has believed that the Hippocampus is crucial for memory formation and for spatial navigation: without it, it would not be possible to memorize places or events that happened at a specific time or place. Such functionality is achieved with the help of a set of cells called Grid Cells, present in the Entorhinal Cortex area, but also with Place Cells, Head Direction Cells and Boundary Vector Cells. The combined information analysed by those cells allows the unique identification of places or events. The main objective of the work developed in this Thesis consists in describing the biological mechanisms present in the Hippocampus area and in defining potential computer models that allow the simulation of all, or the most critical, functions of both the Hippocampus and the Entorhinal Cortex areas

    Memristors for the Curious Outsiders

    Full text link
    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a 2-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide non-practitioners in the field of memristive circuits and their connection to machine learning and neural computation. Comment: Perspective paper for MDPI Technologies; 43 pages
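
    To make the definition concrete, the sketch below simulates a generic current-controlled memristive system, v = R(x) i with dx/dt = f(x, i), using the widely cited linear ion-drift (HP-style) model as the example; the component values are illustrative, and this is only one of several mechanisms the review covers.

```python
import numpy as np

# Generic memristive system:  v = R(x) * i,   dx/dt = f(x, i),   with x clipped to [0, 1].
# Below: the linear ion-drift ("HP") model often used as the textbook example.
R_ON, R_OFF = 100.0, 16e3        # fully doped / undoped resistances (ohms, illustrative)
MU_V, D = 1e-14, 1e-8            # ion mobility (m^2 s^-1 V^-1) and device thickness (m)

def simulate(i_of_t, t, x0=0.1):
    """Integrate the internal state x under a driving current i(t); return v(t) and x(t)."""
    x = x0
    xs, vs = [], []
    dt = t[1] - t[0]
    for tk in t:
        i = i_of_t(tk)
        r = R_ON * x + R_OFF * (1.0 - x)          # resistance depends on the internal state
        vs.append(r * i)
        x += dt * (MU_V * R_ON / D ** 2) * i      # state drifts with the charge passed
        x = min(max(x, 0.0), 1.0)                 # hard bounds on the doped-region fraction
        xs.append(x)
    return np.array(vs), np.array(xs)

t = np.linspace(0.0, 2.0, 20000)
v, x = simulate(lambda tk: 1e-4 * np.sin(2 * np.pi * tk), t)   # 0.1 mA sinusoidal drive
# The state (and hence the resistance) swings with the drive, which is what produces
# the characteristic pinched hysteresis loop in the v-i plane.
print(f"internal state range: {x.min():.2f} to {x.max():.2f}")
```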

    Computational Models of the Amygdala in Acquisition and Extinction of Conditioned Fear

    Get PDF
    The amygdala plays a central role in both acquisition and expression of conditioned fear associations, and dysregulation of the amygdala leads to fear and anxiety disorders such as posttraumatic stress disorder (PTSD). Computational modeling has served as an important tool for understanding the cellular and circuit mechanisms of fear acquisition and extinction. This review provides a critical appraisal of existing computational modeling studies of the amygdala and extended circuitry in acquisition and extinction of learned fear associations. It gives a broad overview of the computational techniques applied to amygdala modeling, with an emphasis on how computational models could shed light on the neural mechanisms of fear learning, inform experimental design, and lead to specific, experimentally testable hypotheses. It covers different types of published models, including rule-based models, connectionist-type models, phenomenological spiking neuronal models, and detailed biophysical conductance-based models. Specific attention is given to the evolution of amygdala models from simple rule-based and connectionist-type models to more sophisticated and biologically realistic models. Future directions for computational modeling of the amygdala and associated networks in emotional learning are also discussed
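
    As an illustration of the simplest, rule-based end of the modeling spectrum surveyed here, a Rescorla-Wagner style prediction-error update reproduces acquisition of a tone-shock association and its extinction when the shock is omitted; the learning rate and trial protocol below are illustrative assumptions.

```python
import numpy as np

def rescorla_wagner(trials, alpha=0.3, lam_paired=1.0, lam_omitted=0.0, v0=0.0):
    """Trial-by-trial associative strength V between a CS (tone) and US (shock).

    trials : sequence of booleans, True = CS paired with US, False = CS alone
    alpha  : learning rate;  lam_* : asymptotes for reinforced / non-reinforced trials
    """
    v = v0
    history = []
    for paired in trials:
        lam = lam_paired if paired else lam_omitted
        v += alpha * (lam - v)          # prediction-error update
        history.append(v)
    return np.array(history)

protocol = [True] * 10 + [False] * 10   # 10 acquisition trials, then 10 extinction trials
v = rescorla_wagner(protocol)
print(np.round(v, 2))                   # conditioned fear rises toward 1, then decays toward 0
```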

    Auditory Fear Circuits in the Amygdala – Insights from Computational Models

    Get PDF