183 research outputs found

    Modeling the formation process of grouping stimuli sets through cortical columns and microcircuits to feature neurons

    A computational model of a self-structuring neuronal net is presented in which repetitively applied pattern sets induce the formation of cortical columns and microcircuits that decode distinct patterns after a learning phase. In a case study, it is demonstrated how specific neurons in a feature classifier layer become orientation selective if they receive bar patterns of different slopes from an input layer. The input layer is mapped and intertwined by self-evolving neuronal microcircuits to the feature classifier layer. In this topical overview, several models are discussed which indicate that the net formation converges in its functionality to a mathematical transform that maps the input pattern space to a feature-representing output space. The self-learning of the mathematical transform is discussed and its implications are interpreted. Model assumptions are deduced which serve as a guide for applying model-derived repetitive stimulus pattern sets to in vitro cultures of neuron ensembles, conditioning them to learn and execute a mathematical transform.
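The emergence of orientation-selective feature neurons from repeated bar-pattern sets can be illustrated with a toy sketch. This is not the paper's model: the grid size, learning rate, and winner-take-all competitive Hebbian rule are all illustrative assumptions.

```python
# Toy sketch (illustrative, not the paper's model): competitive Hebbian
# learning on oriented bar patterns. Repeated presentation of a small
# pattern set drives feature units toward orientation selectivity.
import numpy as np

rng = np.random.default_rng(0)
N = 8  # input layer is an N x N pixel grid

def bar_pattern(angle_deg, n=N):
    """Binary image of a bar through the centre at the given slope."""
    img = np.zeros((n, n))
    c = (n - 1) / 2.0
    t = np.radians(angle_deg)
    for s in np.linspace(-c, c, 4 * n):
        r = int(round(c + s * np.sin(t)))
        q = int(round(c + s * np.cos(t)))
        if 0 <= r < n and 0 <= q < n:
            img[r, q] = 1.0
    return img.ravel()

angles = [0, 45, 90, 135]
patterns = [bar_pattern(a) for a in angles]

W = rng.random((len(angles), N * N)) * 0.1  # one feature unit per row
eta = 0.5
for _ in range(200):
    x = patterns[rng.integers(len(angles))]
    winner = int(np.argmax(W @ x))       # winner-take-all competition
    W[winner] += eta * (x - W[winner])   # Hebbian-style move toward x

# Which unit decodes each bar orientation after learning?
winners = [int(np.argmax(W @ p)) for p in patterns]
print(winners)
```

After training, at least one unit's weight vector has converged onto a bar pattern, so its response to that orientation far exceeds its response to the others.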

    A Stable Biologically Motivated Learning Mechanism for Visual Feature Extraction to Handle Facial Categorization

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) for extracting informative intermediate-level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following performance trends similar to those of humans in a psychophysical experiment using a face versus non-face rapid categorization task.

    How models of canonical microcircuits implement cognitive functions

    Major cognitive functions such as language, memory, and decision-making are thought to rely on distributed networks of a large number of fundamental neural elements, called canonical microcircuits. A mechanistic understanding of the interaction of these canonical microcircuits promises a better comprehension of cognitive functions as well as their potential disorders and corresponding treatment techniques. This thesis establishes a generative modeling framework that rests on canonical microcircuits and employs it to investigate composite mechanisms of cognitive functions. A generic, biologically plausible neural mass model was derived to parsimoniously represent conceivable architectures of canonical microcircuits. Time domain simulations and bifurcation and stability analyses were used to evaluate the model's capability for basic information processing operations in response to transient stimulations, namely signal flow gating and working memory. Analysis shows that these basic operations rest upon the bistable activity of a neural population and the selectivity for the stimulus's intensity, temporal consistency, and transiency. In the model's state space, this selectivity is marked by the distance of the system's working point to a saddle-node bifurcation and the existence of a Hopf separatrix. The local network balance, with regard to synaptic gains, is shown to modify the model's state space and thus its operational repertoire. Among the investigated architectures, only a three-population model that separates input-receiving and output-emitting excitatory populations exhibits the necessary state space characteristics. It is thus specified as the minimal canonical microcircuit. In this three-population model, facilitative feedback information modifies the retention of sensory feedforward information.
    Consequently, meta-circuits of two hierarchically interacting minimal canonical microcircuits feature a temporal processing history that enables state-dependent processing operations. The relevance of these composite operations is demonstrated for the neural operations of priming and structure-building. Structure-building, that is, the sequential and selective activation of neural circuits, is identified as an essential mechanism in a neural network for syntax parsing. This insight into cognitive processing proves the modeling framework's potential in neurocognitive research. This thesis substantiates the connectionist notion that higher processing operations emerge from the combination of minimal processing elements and advances the understanding of how cognitive functions are implemented in the neocortical matter of the brain.
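The gating and retention behaviour described in this abstract, a bistable population that holds on to a sufficiently strong transient stimulus, can be sketched with a single-population rate model. This is a deliberate simplification of the thesis's three-population neural mass model, and all parameter values are assumptions chosen only to place the system in a bistable regime.

```python
# Minimal sketch (illustrative, not the thesis model): a rate population
# with sigmoidal self-excitation is bistable, so a strong transient pulse
# switches it to a persistent high-activity state (working-memory-like
# retention), while a weak pulse decays back to baseline.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def simulate(pulse_amp, w=8.0, theta=4.0, tau=10.0, dt=0.1, steps=3000):
    """Euler-integrate tau*dr/dt = -r + sigmoid(w*r - theta + I(t))."""
    r = 0.0
    for k in range(steps):
        t = k * dt
        I = pulse_amp if 20.0 <= t < 40.0 else 0.0  # transient stimulus
        r += dt / tau * (-r + sigmoid(w * r - theta + I))
    return r  # activity long after the pulse has ended

low = simulate(pulse_amp=0.5)   # weak pulse: activity decays away
high = simulate(pulse_amp=6.0)  # strong pulse: persistent high state
print(round(low, 3), round(high, 3))
```

With these parameters the map r = sigmoid(8r - 4) has stable fixed points near 0.02 and 0.98 separated by an unstable point at 0.5, so only a pulse that pushes activity past the separatrix is retained, mirroring the intensity selectivity discussed above.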

    Layer-Dependent Attentional Processing by Top-down Signals in a Visual Cortical Microcircuit Model

    A vast amount of information about the external world continuously flows into the brain, whereas its capacity to process such information is limited. Attention enables the brain to allocate its information-processing resources to selected sensory inputs, reducing its computational load, and the effects of attention have been extensively studied in visual information processing. However, how the microcircuit of the visual cortex processes attentional information from higher areas remains largely unknown. Here, we explore the complex interactions between visual inputs and an attentional signal in a computational model of the visual cortical microcircuit. Our model not only successfully accounts for previous experimental observations of attentional effects on visual neuronal responses, but also predicts contrasting attentional effects of top-down signals across cortical layers: attention to a preferred stimulus of a column enhances neuronal responses in layers 2/3 and 5, the output stations of cortical microcircuits, whereas attention suppresses neuronal responses in layer 4, the input station of cortical microcircuits. We demonstrate that the specific modulation pattern of layer-4 activity, which emerges from inter-laminar synaptic connections, is crucial for a rapid shift of attention to a currently unattended stimulus. Our results suggest that top-down signals act differently on different layers of the cortical microcircuit.
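The predicted laminar pattern can be caricatured with a three-node rate model. The wiring and weights below are assumptions for illustration, not the authors' microcircuit model: top-down attentional input targets layer 2/3, which drives layer 5 and suppresses layer 4 through feedback inhibition.

```python
# Schematic laminar sketch (assumed weights, not the paper's model):
# attention adds input to L2/3; L2/3 drives L5 and inhibits L4, so the
# attended state shows enhanced L2/3 and L5 but suppressed L4 activity.
def steady_state(I_vis, I_att, w_inh=0.3, n_iter=200):
    """Fixed point of a three-node linear-threshold laminar circuit."""
    r4 = r23 = r5 = 0.0
    for _ in range(n_iter):
        r4 = max(0.0, I_vis - w_inh * r23)  # L4: sensory drive minus
                                            # feedback inhibition from L2/3
        r23 = max(0.0, r4 + I_att)          # L2/3: L4 drive plus attention
        r5 = max(0.0, r23)                  # L5: driven by L2/3
    return r4, r23, r5

base = steady_state(I_vis=1.0, I_att=0.0)  # unattended column
attn = steady_state(I_vis=1.0, I_att=0.5)  # attended column
print([round(v, 3) for v in base], [round(v, 3) for v in attn])
```

Even this caricature reproduces the qualitative prediction: the attended fixed point has higher layer-2/3 and layer-5 rates but a lower layer-4 rate than the unattended one.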

    A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex

    We describe a quantitative theory to account for the computations performed by the feedforward path of the ventral stream of visual cortex and the local circuits implementing them. We show that a model instantiating the theory is capable of performing recognition on datasets of complex images at the level of human observers in rapid categorization tasks. We also show that the theory is consistent with (and in some cases has predicted) several properties of neurons in V1, V4, IT and PFC. The theory seems sufficiently comprehensive, detailed and satisfactory to represent an interesting challenge for physiologists and modelers: either disprove its basic features or propose alternative theories of equivalent scope. The theory suggests a number of open questions for visual physiology and psychophysics.
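The core feedforward operations in this family of models alternate template matching by simple-cell-like "S" units with max pooling by complex-cell-like "C" units. A minimal sketch (templates, image, and pooling range are illustrative assumptions, not the full theory) shows how this yields responses that are selective for orientation yet tolerant to position:

```python
# Illustrative simple-to-complex sketch: template matching ("S" units)
# followed by max pooling over positions ("C" units). The pooled response
# is orientation selective but invariant to where the bar appears.
import numpy as np

def s_layer(img, template):
    """Valid cross-correlation: simple-cell-like template matching."""
    th, tw = template.shape
    H, W = img.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+th, j:j+tw] * template)
    return out

def c_layer(s):
    """Complex-cell-like pooling: max over all positions."""
    return s.max()

vert = np.zeros((3, 3)); vert[:, 1] = 1.0  # vertical bar template
horz = vert.T                              # horizontal bar template

img = np.zeros((8, 8)); img[1:6, 2] = 1.0  # a vertical bar
shifted = np.roll(img, 3, axis=1)          # same bar, shifted right

for image in (img, shifted):
    print(c_layer(s_layer(image, vert)), c_layer(s_layer(image, horz)))
```

The vertical-template response is identical for the original and shifted bars after pooling, while the horizontal-template response stays low in both cases.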

    Spike-Timing-Dependent Plasticity in the Intact Brain: Counteracting Spurious Spike Coincidences

    A computationally rich algorithm of synaptic plasticity has been proposed based on the experimental observation that the sign and amplitude of the change in synaptic weight are dictated by the temporal order and temporal contiguity between pre- and postsynaptic activities. For more than a decade, this spike-timing-dependent plasticity (STDP) has been studied mainly in brain slices of different brain structures and in cultured neurons. Although not yet compelling, evidence for the STDP rule in the intact brain, including primary sensory cortices, has recently been provided. From insects to mammals, the presentation of precisely timed sensory inputs drives synaptic and functional plasticity in the intact central nervous system, with timing requirements similar to those of the in vitro-defined STDP rule. The convergent evolution of this plasticity rule in species belonging to such distant phylogenetic groups points to the efficiency of STDP, as a mechanism for modifying synaptic weights, as the basis of activity-dependent development, learning and memory. In spite of the ubiquity of STDP phenomena, a number of significant variations of the rule are observed in different structures, neuronal types and even synapses on the same neuron, as well as between in vitro and in vivo conditions. In addition, the state of the neuronal network, its ongoing activity and the activation of ascending neuromodulatory systems in different behavioral conditions have dramatic consequences on the expression of spike-timing-dependent synaptic plasticity, and should be further explored.
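The canonical pair-based form of the STDP rule can be written down directly: exponential potentiation when the presynaptic spike precedes the postsynaptic spike, and depression for the reverse order. Parameter values below are common textbook choices, not the in vivo variants this review discusses.

```python
# Pair-based STDP window (illustrative parameters): the weight change
# depends on the spike time difference dt = t_post - t_pre, potentiating
# for pre-before-post (dt > 0) and depressing for post-before-pre (dt < 0).
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single spike pair (dt in ms)."""
    if dt > 0:    # pre fires before post: potentiation (LTP)
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:  # post fires before pre: depression (LTD)
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp(dt), 5))
```

The temporal-contiguity requirement mentioned in the abstract corresponds to the exponential decay: pairs separated by much more than tau contribute almost nothing.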

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    To appear in CVIU. Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models that were primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at the task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this extent, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for the much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.

    Effects of homeostatic constraints on associative memory storage and synaptic connectivity of cortical circuits

    The impact of learning and long-term memory storage on synaptic connectivity is not completely understood. In this study, we examine the effects of associative learning on synaptic connectivity in adult cortical circuits by hypothesizing that these circuits function in a steady state, in which the memory capacity of a circuit is maximal and learning must be accompanied by forgetting. Steady-state circuits should be characterized by unique connectivity features. To uncover such features we developed a biologically constrained, exactly solvable model of associative memory storage. The model is applicable to networks of multiple excitatory and inhibitory neuron classes and can account for homeostatic constraints on the number and the overall weight of functional connections received by each neuron. The results show that, in spite of a large number of neuron classes, functional connections between potentially connected cells are realized with less than 50% probability if the presynaptic cell is excitatory, and generally with much greater probability if it is inhibitory. We also find that constraining the overall weight of presynaptic connections leads to Gaussian connection weight distributions that are truncated at zero. In contrast, constraining the total number of functional presynaptic connections leads to non-Gaussian distributions, in which weak connections are absent. These theoretical predictions are compared with a large dataset of published experimental studies reporting amplitudes of unitary postsynaptic potentials and probabilities of connections between various classes of excitatory and inhibitory neurons in the cerebellum, neocortex, and hippocampus.
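A toy sketch suggests why sign constraints depress connection probability. This is an illustrative simplification, not the paper's exactly solvable model: a perceptron trained on random associations with its weights projected onto the non-negative orthant (as for an excitatory presynaptic population) ends up with a fraction of weights pinned at exactly zero, i.e., potential but unrealized connections.

```python
# Illustrative sketch (not the paper's model): perceptron learning with a
# non-negativity (excitatory sign) constraint. The projection step pins
# some weights at exactly zero, so the realized connection probability
# among potential connections drops below 1. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(7)
N, P, eta = 200, 100, 0.05

X = rng.choice([-1.0, 1.0], size=(P, N))  # random input patterns
y = rng.choice([-1.0, 1.0], size=P)       # desired binary outputs

w = np.zeros(N)
for _ in range(2000):
    i = rng.integers(P)
    if y[i] * (w @ X[i]) <= 0:            # misclassified: perceptron step
        w += eta * y[i] * X[i]
        np.maximum(w, 0.0, out=w)         # project back onto w >= 0

p_conn = float(np.mean(w > 0))            # realized connection probability
print(round(p_conn, 2))
```

The weights that remain at zero play the role of potential but silent connections, and the positive weights form a distribution cut off at zero, in the spirit of the truncated-Gaussian prediction above.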