11 research outputs found

    A compact spike-timing-dependent-plasticity circuit for floating gate weight implementation

    Spike timing dependent plasticity (STDP) forms the basis of learning within neural networks. STDP allows for the modification of synaptic weights based upon the relative timing of pre- and post-synaptic spikes. A compact circuit is presented which can implement STDP, including the critical plasticity window that determines synaptic modification. A physical model to predict the time window for plasticity to occur is formulated, and the effects of process variations on the window are analyzed. The STDP circuit is implemented using two dedicated circuit blocks, one for potentiation and one for depression, where each block consists of 4 transistors and a polysilicon capacitor. SpectreS simulations of the back-annotated layout of the circuit and experimental results indicate that STDP with biologically plausible critical timing windows over the range from 10 µs to 100 ms can be implemented. A floating-gate weight storage capability with drive circuits is also presented, and a detailed analysis correlating weight changes with charging time is given.
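
    The abstract does not give the weight-update rule in equation form, but the pair-based STDP rule that such a circuit implements in analogue hardware can be sketched in a few lines of Python. The amplitudes and time constants below (a_plus, a_minus, tau_plus, tau_minus) are illustrative placeholders, not values from the paper.

        import math

        # Pair-based STDP: the weight change depends on the time difference
        # dt = t_post - t_pre between a pre- and a post-synaptic spike.
        def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
            """Return the synaptic weight change for a spike-time difference dt (seconds)."""
            if dt > 0:       # pre before post -> potentiation
                return a_plus * math.exp(-dt / tau_plus)
            if dt < 0:       # post before pre -> depression
                return -a_minus * math.exp(dt / tau_minus)
            return 0.0

        # A pre-spike 5 ms before a post-spike strengthens the synapse;
        # the reverse ordering weakens it.
        print(stdp_dw(5e-3))    # > 0, potentiation
        print(stdp_dw(-5e-3))   # < 0, depression

    Weight changes for spike pairs far outside the critical timing window decay towards zero in this rule, which mirrors the role the hardware circuit's timing window plays in gating synaptic modification.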

    Data-driven Neuroscience: Enabling Breakthroughs Via Innovative Data Management

    Scientists in all disciplines increasingly rely on simulations to develop a better understanding of the subject they are studying. For example, the neuroscientists we collaborate with in the Blue Brain project have started to simulate the brain on a supercomputer. The level of detail of their models is unprecedented, as they model details at the subcellular level (e.g., neurotransmitters). This level of detail, however, also leads to a true data deluge, and the neuroscientists have only a few tools to efficiently analyze the data. This demonstration showcases three innovative spatial data management solutions that have a substantial impact on computational neuroscience and other disciplines in that they allow scientists to build, analyze, and simulate bigger and more detailed models. In particular, we visualize the novel query execution strategy of FLAT, an index for the scalable and efficient execution of range queries on increasingly detailed spatial models; FLAT is used to build and analyze models of the brain. We furthermore demonstrate how SCOUT uses previous query results to prefetch spatial data with high accuracy and therefore speeds up the analysis of spatial models. Finally, we demonstrate TOUCH, a novel in-memory spatial join that speeds up the model-building process.
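
    FLAT's internals are not described in this abstract, so the sketch below only illustrates the operation it accelerates: a 3D range query that returns every spatial element whose bounding box intersects a query box. The brute-force scan and the Box layout are stand-ins for the real index and data structures.

        from dataclasses import dataclass

        @dataclass
        class Box:
            # Axis-aligned 3D bounding box: lo = (xmin, ymin, zmin), hi = (xmax, ymax, zmax)
            lo: tuple
            hi: tuple

            def intersects(self, other):
                return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                           for i in range(3))

        def range_query(elements, query):
            """Return all elements whose bounding box overlaps the query box.
            An index such as FLAT answers this kind of query without scanning
            every element; the linear scan here is only for illustration."""
            return [e for e in elements if e.intersects(query)]

        # Example: only the first box overlaps the query region.
        data = [Box((0, 0, 0), (1, 1, 1)), Box((5, 5, 5), (6, 6, 6))]
        print(range_query(data, Box((0.5, 0.5, 0.5), (2, 2, 2))))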

    Spatial Data Management Challenges in the Simulation Sciences

    Scientists in many disciplines have progressively been using simulations to better understand the natural systems they study. Faster hardware and increasingly precise instruments allow the construction and simulation of progressively more advanced models of various systems. Governed by algorithms and equations, the spatial models at the core of simulations are changed and updated at every simulation step through spatial queries that implement massive updates. The efficient execution of these numerous spatial queries is therefore essential. Two reasons render current spatial indexes inadequate for simulation applications. First, to ensure quick access to data, most of the spatial models in simulations are stored in memory; most spatial access methods, however, have been optimized for use on disk and are not efficient in memory. Second, in every time step of a simulation almost all spatial elements change their position, challenging the update mechanisms of spatial indexes. In this paper we discuss how these challenges create opportunities for exciting data management research.
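
    As a concrete illustration of the update problem, one simple in-memory strategy is to bulk-rebuild a uniform grid from scratch at every time step rather than incrementally repairing a disk-oriented tree when nearly every element has moved. The cell size, data layout, and motion model below are assumptions made for the sketch, not details from the paper.

        from collections import defaultdict

        def build_grid(points, cell=1.0):
            """Bucket 3D points into a uniform grid, rebuilt from scratch each step.
            Rebuilding avoids the per-element delete/re-insert cost that disk-oriented,
            tree-based indexes pay when almost every element moves."""
            grid = defaultdict(list)
            for idx, (x, y, z) in enumerate(points):
                grid[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
            return grid

        # One simulation step: move every element, then rebuild the index.
        points = [(0.2, 0.1, 0.0), (1.7, 0.4, 2.3)]
        for _ in range(3):
            points = [(x + 0.05, y, z) for (x, y, z) in points]  # placeholder motion
            grid = build_grid(points, cell=1.0)
        print(dict(grid))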

    TRANSFORMERS: Robust spatial joins on non-uniform data distributions

    Spatial joins are becoming increasingly ubiquitous in many applications, particularly in the scientific domain. While several approaches have been proposed for joining spatial datasets, each of them is strongest for a particular density ratio between the joined datasets. More generally, no single proposed method can efficiently join two spatial datasets in a manner that is robust to their data distributions. Some approaches do well for datasets with contrasting densities, while others do better with similar densities; none of them does well when the datasets have locally divergent data distributions. In this paper we develop TRANSFORMERS, an efficient and robust spatial join approach that is indifferent to such variations of distribution among the joined data. TRANSFORMERS achieves this by departing from the state of the art: it adapts the join strategy and data layout to local density variations among the joined data. It employs a join method based on data-oriented partitioning when joining areas of substantially different local densities, whereas it uses big partitions (as in space-oriented partitioning) when the densities are similar, switching seamlessly between these two strategies at runtime. We experimentally demonstrate that TRANSFORMERS outperforms state-of-the-art approaches by a factor of 2 to 8.
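
    The paper's partitioning machinery is more involved than this, but the runtime decision it describes, choosing between data-oriented and space-oriented partitioning based on local densities, can be illustrated with a short sketch. The density-ratio threshold is an invented placeholder, not a value from the paper.

        def choose_strategy(density_a, density_b, ratio_threshold=4.0):
            """Pick a join strategy for one region from the local densities of the
            two inputs. The threshold is illustrative only."""
            hi = max(density_a, density_b)
            lo = max(min(density_a, density_b), 1e-9)  # avoid division by zero
            if hi / lo >= ratio_threshold:
                return "data-oriented partitioning"    # substantially different densities
            return "space-oriented partitioning"       # similar densities

        # A sparse region joined with a dense one vs. two regions of similar density.
        print(choose_strategy(10.0, 100.0))   # data-oriented partitioning
        print(choose_strategy(50.0, 60.0))    # space-oriented partitioning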

    Statistical connectivity provides a sufficient foundation for specific functional connectivity in neocortical neural microcircuits

    It is well-established that synapse formation involves highly selective chemospecific mechanisms, but how neuron arbors are positioned before synapse formation remains unclear. Using 3D reconstructions of 298 neocortical cells of different types (including nest basket, small basket, large basket, bitufted, pyramidal, and Martinotti cells), we constructed a structural model of a cortical microcircuit in which cells of different types were independently and randomly placed. We compared the positions of physical appositions resulting from the incidental overlap of axonal and dendritic arbors in the model (statistical structural connectivity) with the positions of putative functional synapses (functional synaptic connectivity) in 90 synaptic connections reconstructed from cortical slice preparations. Overall, we found that statistical connectivity predicted an average of 74 ± 2.7% (mean ± SEM) of the synapse location distributions for nine types of cortical connections. This finding suggests that chemospecific attractive and repulsive mechanisms generally do not result in pairwise-specific connectivity. In some cases, however, the predicted distributions did not match precisely, indicating that chemospecific steering and aligning of the arbors may occur for some types of connections. These results suggest that random alignment of axonal and dendritic arbors provides a sufficient foundation for specific functional connectivity to emerge in local neural microcircuits.
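
    The statistical comparison behind the 74% figure is not spelled out in this abstract. Purely as an illustration of the kind of check involved, the sketch below scores how much a predicted distribution of synapse locations (binned, for example, by distance along the dendrite) overlaps an observed one; the bins, the data, and the overlap measure are all invented for the example.

        def distribution_overlap(predicted, observed):
            """Overlap of two histograms over the same bins after normalisation
            (1.0 = identical distributions, 0.0 = disjoint). A stand-in for the
            study's actual statistical comparison."""
            p_total, o_total = sum(predicted), sum(observed)
            return sum(min(p / p_total, o / o_total)
                       for p, o in zip(predicted, observed))

        # Predicted vs. observed synapse counts per distance bin (made-up numbers).
        predicted = [5, 20, 40, 25, 10]   # from appositions in the structural model
        observed = [8, 18, 35, 28, 11]    # from reconstructed functional synapses
        print(round(distribution_overlap(predicted, observed), 2))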

    Identifying, tabulating, and analyzing contacts between branched neuron morphologies

    Simulating neural tissue requires the construction of models of the anatomical structure and physiological function of neural microcircuitry. The Blue Brain Project is simulating the microcircuitry of a neocortical column with very high structural and physiological precision. This paper describes how we model anatomical structure by identifying, tabulating, and analyzing contacts between 10⁴ neurons in a morphologically precise model of a column. A contact occurs when one element touches another, providing the opportunity for the subsequent creation of a simulated synapse. The architecture of our application divides the problem of detecting and analyzing contacts among thousands of processors on the IBM Blue Gene/L™ supercomputer. Data required for contact tabulation are encoded with the geometrical data used for contact detection and exchanged among processors. Each processor selects a subset of neurons and then iteratively 1) divides the points that represent each neuron among column subvolumes, 2) detects contacts in a subvolume, 3) tabulates arbitrary categories of local contacts, 4) aggregates and analyzes global contacts, and 5) revises the contents of a column to achieve a statistical objective. Computing, analyzing, and optimizing local data in parallel across distributed global data objects involve problems common to other domains (such as three-dimensional image processing and registration). Thus, we discuss the generic nature of the application architecture.
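
    The production implementation runs in parallel across thousands of Blue Gene/L processors; the serial sketch below only illustrates steps 1 and 2 of the loop described above: bin the sample points of each neuron into column subvolumes, then report contacts (points of different neurons closer than a touch distance) within each subvolume. The subvolume size and touch distance are placeholder values, and contacts straddling subvolume boundaries, which the real system must handle by exchanging data between processors, are ignored here.

        import math
        from collections import defaultdict

        def detect_contacts(neurons, subvolume=10.0, touch_distance=1.0):
            """neurons: {neuron_id: [(x, y, z), ...]} point samples of each morphology.
            Returns (neuron_a, neuron_b, point_a, point_b) for point pairs of
            different neurons closer than touch_distance, checked per subvolume."""
            bins = defaultdict(list)
            for nid, points in neurons.items():            # step 1: divide points
                for p in points:
                    key = tuple(int(c // subvolume) for c in p)
                    bins[key].append((nid, p))
            contacts = []
            for members in bins.values():                  # step 2: detect contacts
                for i, (nid_a, pa) in enumerate(members):
                    for nid_b, pb in members[i + 1:]:
                        if nid_a != nid_b and math.dist(pa, pb) <= touch_distance:
                            contacts.append((nid_a, nid_b, pa, pb))
            return contacts

        # Two neurons whose sampled points approach within the touch distance.
        neurons = {1: [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], 2: [(2.5, 0.0, 0.0)]}
        print(detect_contacts(neurons))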

    Synaptic weight modification and storage in hardware neural networks

    In 2011 the International Technology Roadmap for Semiconductors (ITRS 2011) outlined how the semiconductor industry should proceed to pursue Moore's Law past the 18 nm generation. It envisioned a concept of 'More than Moore', in which existing semiconductor technologies are exploited to enable the fabrication of diverse systems, in particular systems which integrate non-digital and biologically based functionality. At the same time, a rapid expansion of, and growing interest in, the fields of microbiology, electrophysiology, and computational neuroscience has provided significant understanding and insight into the function and structure of the human brain, leading to the creation of systems which mimic the operation of the biological nervous system. As these systems expand, small-area, low-power devices which replicate the important biological features of neural networks are needed to implement large-scale networks. This thesis presents work which focuses on the modification and storage of synaptic weights in hardware neural networks. Test devices were incorporated on 3 chip runs; each chip was fabricated in a 0.35 µm process from Austria MicroSystems (AMS) and used for parameter extraction, in accordance with the theoretical analysis presented. A compact circuit is presented which can implement STDP and has an advantage over current implementations in that the critical timing window for synaptic modification is implemented within the circuit. The duration of the critical timing window is set by the subthreshold current controlled by the voltage, Vleak, applied to the transistor Mleak in the circuit. A physical model to predict the time window for plasticity to occur is formulated, and the effects of process variations on the window are analysed. The STDP circuit is implemented using two dedicated circuit blocks, one for potentiation and one for depression, where each block consists of 4 transistors and a polysilicon capacitor and occupies an area of 980 µm². SpectreS simulations of the back-annotated layout of the circuit and experimental results indicate that STDP with biologically plausible critical timing windows over the range 10 µs to 100 ms can be implemented. Theoretical analysis using parameters extracted from MOS test devices is used to describe the operation of each device and circuit presented; simulation results and results obtained from fabricated devices confirm the validity of these designs and approaches. Both the WP and WD circuits have a power consumption of approximately 2.4 mW during a weight update. If no weight update occurs, the resting currents within the devices are in the nA range, so each circuit has a power consumption of approximately 1 µW. A floating-gate (FG) device fabricated using a standard CMOS process is also presented; it is to be integrated with both the WP and WD STDP circuits. The FG device is designed to store negative charge on a floating gate to represent the synaptic weight of the associated synapse, and charge is added to or removed from the FG via Fowler-Nordheim tunnelling. This thesis outlines the design criteria and theoretical operation of this device, and a model of its charge storage characteristics is presented and verified using HFCV and PCV experimental results. Limited-precision weights (LPW) and their potential use in hardware neural networks are also considered; LPW offers a potential solution in the quest to design a compact FG device for use with CTS. The algorithms presented in this thesis show that LPW allows the synaptic weight storage device to be reduced while still permitting the network to function as intended.
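
    The abstract does not reproduce the LPW algorithms, so the sketch below only illustrates the underlying idea of limited-precision weights: constraining each synaptic weight to a small set of discrete levels so that a compact storage device with few distinguishable charge levels suffices. The 16-level (roughly 4-bit) resolution and the update value are assumptions made for the example.

        def quantise(weight, levels=16, w_min=0.0, w_max=1.0):
            """Snap a continuous weight to one of `levels` evenly spaced values
            (e.g. 16 levels, roughly 4 bits of storage on the floating gate)."""
            step = (w_max - w_min) / (levels - 1)
            clipped = min(max(weight, w_min), w_max)
            return w_min + round((clipped - w_min) / step) * step

        # Apply an STDP-style update, then re-quantise so the stored weight always
        # lies on a level the limited-precision storage device can represent.
        w = 0.5
        w = quantise(w + 0.013)    # potentiation event with a small analogue increment
        print(w)                   # nearest representable level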

    Emergent Properties of in silico Synaptic Transmission in a Model of the Rat Neocortical Column

    The cerebral cortex occupies nearly 80% of the entire volume of the mammalian brain and is thought to subserve higher cognitive functions like memory, attention and sensory perception. The neocortex is the newest part in the evolution of the cerebral cortex and is perhaps the most intricate brain region ever studied. The neocortical microcircuit is the smallest 'ecosystem' of the neocortex and consists of a rich assortment of neurons, which are diverse in both their morphological and electrical properties. In the neocortical microcircuit, neurons are horizontally arranged in 6 distinct sheets called layers. The fundamental operating unit of the neocortical microcircuit is believed to be the Neocortical Column (NCC). Functionally, a single NCC is a vertical arrangement of thousands of neurons spanning all 6 layers. The structure of the entire neocortex arises from a repeated and stereotypical arrangement of several thousand such columns, in which neurons transmit information to each other through specialized points of information transfer called synapses. The dynamics of synaptic transmission can be as diverse as the neurons defining a connection and are crucial for fostering the functional properties of the neocortical microcircuit. The Blue Brain Project (BBP) is the first comprehensive endeavour to build a unifying model of the NCC by systematic data integration and biologically detailed simulations. Over the past 5 years, the BBP has built a facility for a data-constraint-driven approach to modelling and integrating biological information across multiple levels of complexity. Guided by fundamental principles derived from biological experiments, the BBP simulation toolchain has undergone a process of continuous refinement to facilitate the frequent construction of detailed in silico models of the NCC. The focus of this thesis lies in characterizing the functional properties of in silico synaptic transmission by incorporating principles of synaptic communication derived from biological experiments. In order to study in silico synaptic transmission, it is crucial to understand the key players influencing the manner in which synaptic signals are processed in the neocortical microcircuit: ion channel kinetics and distribution profiles, single-neuron models, and the dynamics of synaptic pathways. First, by means of an exhaustive literature survey, I identified ion channel kinetics and their distribution profiles on neocortical neurons to build in silico ion channel models. Thereafter, I developed a prototype framework to analyze the somatic and dendritic features of single-neuron models constrained by ion channel kinetics. Finally, within a simulation framework integrating the ion channels, single-neuron models and dynamics of synaptic transmission, I replicated in vitro experimental protocols in silico to characterize the transmission properties of monosynaptic connections. These synaptic connections, arising from the axo-dendritic apposition of neuronal arbours, were sampled across many instances of in silico NCC models constructed a priori through the BBP simulation toolchain. In this thesis, I show that when principles of synaptic transmission derived from in vitro experiments are incorporated to model in silico synaptic connections, the resulting anatomy and physiology of connections built from elementary biological rules closely match in vitro data. This thesis also demonstrates that the average synaptic response properties in silico are robust to perturbations in the anatomical and physiological properties of the modelled connections in the local neocortical microcircuit. A fundamental contribution of this thesis is an insight into the function of the local neocortical microcircuit obtained by examining the effect of morphological diversity on in silico synaptic transmission: I demonstrate that intrinsic morphological diversity confers an invariance on the average in silico synaptic response properties in the local neocortical microcircuit, termed "microcircuit-level robustness and invariance".
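
    The abstract does not name the synapse formulation used; the Tsodyks-Markram model of short-term synaptic dynamics, widely used in detailed neocortical microcircuit simulations, is one plausible instance of the 'dynamics of synaptic pathways' referred to above. The sketch below is a minimal discrete-event version of that model with illustrative parameters, not fitted values from the thesis.

        import math

        def tsodyks_markram(spike_times, U=0.5, tau_rec=0.8, tau_facil=0.02):
            """Relative efficacy of each spike in a train under Tsodyks-Markram
            short-term dynamics: resources R recover with tau_rec (depression),
            utilisation u facilitates with tau_facil. Parameters are placeholders."""
            R, u = 1.0, U
            efficacies, t_last = [], None
            for t in spike_times:
                if t_last is not None:
                    dt = t - t_last
                    R = 1.0 - (1.0 - R * (1.0 - u)) * math.exp(-dt / tau_rec)
                    u = U + u * (1.0 - U) * math.exp(-dt / tau_facil)
                efficacies.append(R * u)   # proportional to the postsynaptic response
                t_last = t
            return efficacies

        # A 20 Hz spike train produces progressively depressing responses
        # for these (depression-dominated) parameter values.
        print(tsodyks_markram([0.05 * i for i in range(5)]))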