
    Topographica: Building and Analyzing Map-Level Simulations from Python, C/C++, MATLAB, NEST, or NEURON Components

    Many neural regions are arranged into two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate tools. It has been particularly difficult to bridge across levels of detail, because simulators are typically geared to a specific level, while interfacing between simulators has been a major technical challenge. In this paper, we show that the Python-based Topographica simulator makes it straightforward to build systems that cross levels of analysis, while also providing a common framework for evaluating and comparing models implemented in other simulators. These results rely on the general-purpose abstractions around which Topographica is designed, along with the Python interfaces becoming available for many simulators. In particular, we present a detailed, general-purpose example of how to wrap an external spiking PyNN/NEST simulation as a Topographica component using only a dozen lines of Python code, making it possible to use any of Topographica's extensive input presentation, analysis, and plotting tools. Additional examples show how to interface easily with models in other types of simulators. Researchers simulating topographic maps externally should consider using Topographica's analysis tools (such as preference map, receptive field, or tuning curve measurement) to compare results consistently and to connect models at different levels. This seamless interoperability will help neuroscientists and computational scientists work together to understand how neurons in topographic maps organize and operate.
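
    To make the wrapping approach concrete, the following sketch drives a PyNN/NEST population from a two-dimensional activity pattern and reads back a firing-rate map, the two operations a map-level wrapper needs. It assumes a PyNN 0.9-style API; the SpikingSheet class and its methods are simplified stand-ins for illustration, not Topographica's actual Sheet interface.

```python
import numpy as np
import pyNN.nest as sim

class SpikingSheet:
    """Wraps a PyNN/NEST population so that a map-level simulator can
    drive it with a 2D activity pattern and read back a firing-rate map."""

    def __init__(self, shape=(10, 10)):
        self.shape = shape
        n = shape[0] * shape[1]
        sim.setup(timestep=0.1)
        # One Poisson source per map unit, driving conductance-based IF cells.
        self.sources = sim.Population(n, sim.SpikeSourcePoisson(rate=0.0))
        self.cells = sim.Population(n, sim.IF_cond_exp())
        sim.Projection(self.sources, self.cells, sim.OneToOneConnector(),
                       sim.StaticSynapse(weight=0.01, delay=1.0))
        self.cells.record('spikes')

    def activate(self, pattern, duration=100.0):
        """Present a 2D pattern (values in [0, 1]) for `duration` ms and
        return the evoked firing rates (Hz) in the same 2D shape."""
        self.sources.set(rate=1000.0 * pattern.ravel())
        sim.run(duration)
        trains = self.cells.get_data(clear=True).segments[-1].spiketrains
        rates = np.array([len(t) for t in trains]) / (duration / 1000.0)
        return rates.reshape(self.shape)
```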

    Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because the mechanisms responsible for map development drive receptive fields (RFs) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. The two sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this is the first model to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity.
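
    The learning mechanism in such models can be stated compactly. The following is a minimal sketch, with illustrative parameter values rather than the published ones, of the normalized Hebbian update that map models of this family typically apply to each projection:

```python
import numpy as np

def hebbian_update(w, pre, post, alpha=0.01):
    """Normalized Hebbian update for one projection: weights grow in
    proportion to correlated pre- and postsynaptic activity, then each
    unit's incoming weights are rescaled to a constant sum, so learning
    reshapes the receptive field instead of inflating it."""
    w = w + alpha * np.outer(post, pre)       # Hebbian growth
    return w / w.sum(axis=1, keepdims=True)   # divisive normalization

# Example: 4 postsynaptic units receiving from 9 presynaptic inputs.
rng = np.random.default_rng(0)
w = rng.random((4, 9))
w /= w.sum(axis=1, keepdims=True)
w = hebbian_update(w, pre=rng.random(9), post=rng.random(4))
```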

    Familiarization: A theory of repetition suppression predicts interference between overlapping cortical representations

    Repetition suppression refers to a reduction in the cortical response to a novel stimulus that results from repeated presentation of the stimulus. We demonstrate repetition suppression in a well-established computational model of cortical plasticity, according to which the relative strengths of lateral inhibitory interactions are modified by Hebbian learning. We present the model as an extension to the traditional account of repetition suppression offered by sharpening theory, which emphasises the contribution of afferent plasticity; our account instead attributes the effect primarily to plasticity of intra-cortical circuitry. In support, repetition suppression is shown to emerge in simulations with plasticity enabled only in intra-cortical connections. We show in simulation how an extended ‘inhibitory sharpening theory’ can explain the disruption of repetition suppression reported in studies that include an intermediate phase of exposure to additional novel stimuli composed of features similar to those of the original stimulus. The model suggests a re-interpretation of repetition suppression as a manifestation of the process by which an initially distributed representation of a novel object becomes a more localist representation. Thus, inhibitory sharpening may constitute a more general process by which representation emerges from cortical re-organisation.
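
    A toy simulation conveys the core claim: if lateral inhibition between co-active units is strengthened by Hebbian learning after each presentation, the total response to a repeated stimulus declines. The sketch below illustrates the principle with simplified rate dynamics and illustrative parameters; it is not the published model:

```python
import numpy as np

def settle(inp, w_inh, steps=30):
    """Settle a small rate network with fixed afferent drive and
    recurrent lateral inhibition (rectified-linear units)."""
    r = np.zeros_like(inp)
    for _ in range(steps):
        r = np.maximum(0.0, inp - w_inh @ r)
    return r

rng = np.random.default_rng(1)
n = 20
inp = rng.random(n)                # afferent response to the stimulus
w_inh = np.full((n, n), 0.02)
np.fill_diagonal(w_inh, 0.0)

for trial in range(5):
    r = settle(inp, w_inh)
    print(f"presentation {trial}: total response {r.sum():.2f}")
    # Hebbian plasticity of inhibition: units co-active for this stimulus
    # inhibit each other more strongly on the next presentation.
    w_inh += 0.005 * np.outer(r, r)
    np.fill_diagonal(w_inh, 0.0)
```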

    Computational Modeling of Contrast Sensitivity and Orientation Tuning in Schizophrenia

    Computational modeling is increasingly being used to understand schizophrenia but, to date, it has not been used to account for the common perceptual disturbances in the disorder. We manipulated schizophrenia-relevant parameters in the GCAL (gain control, adaptation, laterally connected) model (Stevens et al., 2013), run using the Topographica simulator (Bednar, 2012), to model low-level visual processing changes in the disorder. Our models incorporated: separate sheets for retinal, LGN, and V1 activity; gain control in the LGN; homeostatic adaptation in V1 based on a weighted sum of all inputs and limited by a logistic (sigmoid) nonlinearity; lateral excitation and inhibition in V1; and self-organization of synaptic weights based on Hebbian learning. The data indicated that (1) findings of increased contrast sensitivity for low-spatial-frequency stimuli in first-episode schizophrenia (FES) can be successfully modeled as a function of reduced retinal and LGN efferent activity within the context of normal LGN gain control and cortical mechanisms (see Figure 1); and (2) findings of reduced contrast sensitivity and broadened orientation tuning in chronic schizophrenia can be successfully modeled by a combination of reduced V1 lateral inhibition and an increase in the Hebbian learning rate at V1 synapses for LGN input (see Figures 1-3). These models are consistent with many current findings (Silverstein, 2016), and they predict relationships that have not yet been explored. They also have implications for understanding links between perceptual changes and psychotic symptom formation, and for understanding changes during the long-term course of the disorder.
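
    A sketch of the homeostatic adaptation component may help clarify the mechanism being manipulated: each V1 unit passes its weighted input sum through a logistic nonlinearity and slowly shifts its own threshold so that a running average of its activity stays near a fixed target. The parameter values below are illustrative, not the published GCAL settings:

```python
import numpy as np

TARGET = 0.024      # target mean activity per unit (illustrative)
SMOOTHING = 0.999   # exponential smoothing of the activity estimate
ETA = 0.01          # threshold adaptation rate

def logistic(x, gain=4.0, theta=0.0):
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def v1_step(net_input, theta, avg):
    """One activation step for a sheet of V1 units: the weighted input
    sum is passed through a logistic nonlinearity, and each unit's
    threshold drifts so that its smoothed activity stays near the
    homeostatic target."""
    y = logistic(net_input, theta=theta)
    avg = SMOOTHING * avg + (1.0 - SMOOTHING) * y
    theta = theta + ETA * (avg - TARGET)   # more active -> higher threshold
    return y, theta, avg
```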

    Modeling the Emergence of Whisker Direction Maps in Rat Barrel Cortex

    Based on measured responses to rat whiskers as they are mechanically stimulated, one recent study suggests that barrel-related areas in layer 2/3 of rat primary somatosensory cortex (S1) contain a pinwheel map of whisker motion directions. Because this map is reminiscent of the topographic organization for visual direction in primary visual cortex (V1) of higher mammals, we asked whether the S1 pinwheels could be explained by an input-driven developmental process, as is often suggested for V1. We developed a computational model to capture how whisker stimuli are conveyed to supragranular S1, and simulated lateral cortical interactions using an established self-organizing algorithm. Inputs to the model each represent the deflection of a subset of 25 whiskers as they are contacted by a moving stimulus object. The subset of deflected whiskers corresponds to the shape of the stimulus, and the deflection direction corresponds to the movement direction of the stimulus. If these two features of the inputs are correlated during the training of the model, a somatotopically aligned map of direction emerges for each whisker in S1. Immediately testable predictions of the model include (1) that somatotopic pinwheel maps of whisker direction exist in adult layer 2/3 barrel cortex for every large whisker on the rat's face, even peripheral whiskers; and (2) that, in the adult, neurons with similar directional tuning are interconnected by a network of horizontal connections spanning distances of many whisker representations. We also propose specific experiments for testing the predictions of the model by manipulating the patterns of whisker input experienced during early development. The results suggest that similar intracortical mechanisms guide the development of primate V1 and rat S1.
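
    To make the training regime concrete, the sketch below generates one input of the kind described: a bar-shaped stimulus contacts a subset of a 5 x 5 whisker array, and every contacted whisker is deflected in the bar's movement direction. The names and geometry are illustrative rather than the published model's exact stimulus code:

```python
import numpy as np

rng = np.random.default_rng(2)
GRID = 5   # 5 x 5 array standing in for the 25 large whiskers

def training_input(correlated=True):
    """One training pattern: a bar-shaped stimulus contacts a subset of
    whiskers, and every contacted whisker is deflected in the bar's
    movement direction. With correlated=True the movement direction is
    perpendicular to the bar, as for an object sweeping across the pad."""
    angle = rng.uniform(0.0, np.pi)       # bar orientation
    direction = (angle + np.pi / 2.0) if correlated \
        else rng.uniform(0.0, 2.0 * np.pi)
    ys, xs = np.mgrid[0:GRID, 0:GRID] - (GRID - 1) / 2.0
    # Whiskers within half a grid unit of the bar's axis are contacted.
    dist = np.abs(xs * np.sin(angle) - ys * np.cos(angle))
    deflection = np.where(dist < 0.5, direction, np.nan)  # NaN = undeflected
    return deflection
```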

    Integration of continuous-time dynamics in a spiking neural network simulator

    Contemporary modeling approaches to the dynamics of neural networks consider two main classes of models: biologically grounded spiking neurons and functionally inspired rate-based units. The unified simulation framework presented here supports the combination of the two for multi-scale modeling approaches, the quantitative validation of mean-field approaches by spiking network simulations, and an increase in reliability through use of the same simulation code and the same network model specifications for both model classes. While the most efficient spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures the flexibility of the framework. We further demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
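
    The kind of update such a framework must support for rate models can be sketched as follows: an exponential-Euler step of a delayed, time-continuous rate equation, in contrast to the discrete spike events the simulator otherwise communicates. This is a generic illustration, not the simulator's internal code:

```python
import numpy as np

def simulate_rates(w, delay_steps, t_sim=100.0, dt=0.1, tau=10.0):
    """Integrate the delayed rate network
        tau * dx/dt = -x + tanh(w @ x(t - d))
    with the exponential-Euler step commonly used when time-continuous
    rate dynamics are embedded in an event-driven simulator."""
    n = w.shape[0]
    steps = int(t_sim / dt)
    x = np.zeros((steps + 1, n))
    x[0] = 0.1 * np.random.default_rng(3).standard_normal(n)
    decay = np.exp(-dt / tau)
    for t in range(steps):
        src = x[max(t - delay_steps, 0)]   # delayed presynaptic rates
        x[t + 1] = decay * x[t] + (1.0 - decay) * np.tanh(w @ src)
    return x

# Example: a random network of 50 units with a 1 ms transmission delay.
rng = np.random.default_rng(4)
traces = simulate_rates(rng.standard_normal((50, 50)) / np.sqrt(50),
                        delay_steps=10)
```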

    Spatiotemporal properties of evoked neural response in the primary visual cortex

    Understanding how neurons in the primary visual cortex (V1) of primates respond to visual patterns has been a major focus of research in neuroscience for many decades. Numerous different experimental techniques have been used to provide data about how the spatiotemporal patterns of light projected from the visual environment onto the retina relate to the spatiotemporal patterns of neural activity evoked in the visual cortex, across disparate spatial and temporal scales. However, despite the variety of data sources available (or perhaps because of it), there is still no unified explanation for how the circuitry in the eye, the subcortical visual pathways, and the visual cortex responds to these patterns. This thesis outlines a research project to build computational models of V1 that incorporate observations and constraints from an unprecedented range of experimental data sources, reconciling each data source with the others into a consistent proposal for the underlying circuitry and computational mechanisms. The final mechanistic model is the first shown to be compatible with measurements of: (1) temporal firing-rate patterns in single neurons over tens of milliseconds, obtained using single-unit electrophysiology; (2) spatiotemporal patterns in membrane voltages in cortical tissue spanning several square millimeters over similar time scales, obtained using voltage-sensitive dye imaging; and (3) spatial patterns in neural activity over several square millimeters of cortex, measured over the course of weeks of early development using optical imaging of intrinsic signals. Reconciling these data was not trivial, in part because single-unit studies suggested short, transient neural responses, while population measurements suggested gradual, sustained responses. The fundamental principles of the resulting models are (a) that the spatial and temporal patterns of neural responses are determined not only by the particular properties of a visual stimulus and the internal response properties of individual neurons, but by the collective dynamics of an entire network of interconnected neurons; (b) that these dynamics account both for the fast time course of neural responses to individual stimuli and for the gradual emergence of structure in this network via activity-dependent Hebbian modification of synaptic connections over days; and (c) that the differences between single-unit and population measurements are primarily due to extensive and wide-ranging forms of diversity in neural responses, which become crucial when estimating population responses from a series of individual measurements. The final model is the first to include all the types of diversity necessary to show how realistic single-unit responses can add up to the very different population-level evoked responses measured using voltage-sensitive dye imaging over large cortical areas. Additional contributions of this thesis include (1) a comprehensive solution for doing exploratory yet reproducible computational research, implemented as a set of open-source tools; (2) a general-purpose metric for evaluating the biological realism of model orientation maps; and (3) a demonstration that the earlier developmental model on which the models in this thesis are based is the only developmental model so far that produces realistic orientation maps. These analytical results, computational models, and research tools together provide a systematic approach for understanding neural responses to visual stimuli across time scales from milliseconds to weeks and spatial scales from microns to centimeters.
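
    The role of response diversity in point (c) can be illustrated with a toy computation: sharp, transient single-unit responses with heterogeneous latencies and widths average into a much more gradual, sustained population signal of the kind measured with voltage-sensitive dye imaging. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0.0, 200.0, 1.0)              # time in ms
latencies = rng.uniform(30.0, 120.0, 500)   # per-neuron onset latency
widths = rng.uniform(5.0, 40.0, 500)        # per-neuron response width

# Each row is one neuron's brief, transient response.
units = np.exp(-0.5 * ((t[None, :] - latencies[:, None])
                       / widths[:, None]) ** 2)
population = units.mean(axis=0)             # VSD-like population average

half = population > 0.5 * population.max()
print(f"mean single-unit width ~{widths.mean():.0f} ms; "
      f"population response width ~{half.sum()} ms")
```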

    The state of MIIND

    MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks, and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson–Cowan and Ornstein–Uhlenbeck types, and population density techniques for leaky integrate-and-fire neurons driven by Poisson input. One design principle of MIIND is to support detailing: the refinement of an originally simple model into a form that includes more biological detail. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to be able to apply suitably adapted neuronal mechanisms of attention to these artificial models.
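
    For readers unfamiliar with the model types mentioned above, the sketch below integrates a standard excitatory/inhibitory Wilson–Cowan pair, the kind of network process DynamicLib provides. It is a generic Python illustration; MIIND itself is C++ and its parameterization differs:

```python
import numpy as np

def wilson_cowan(t_sim=200.0, dt=0.1):
    """Integrate a standard excitatory/inhibitory Wilson-Cowan pair;
    e and i are the mean activities of the two populations.
    Parameter values are illustrative only."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))   # population response function
    tau_e, tau_i = 10.0, 20.0                # time constants (ms)
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 9.0, 3.0
    p, q = 1.5, 0.5                          # external drive
    e, i = 0.1, 0.1
    trace = []
    for _ in range(int(t_sim / dt)):
        de = (-e + f(w_ee * e - w_ei * i + p)) / tau_e
        di = (-i + f(w_ie * e - w_ii * i + q)) / tau_i
        e, i = e + dt * de, i + dt * di
        trace.append((e, i))
    return np.array(trace)
```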