
    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond what is already required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.

    A research agenda for improving national Ecological Footprint accounts


    Distorted and compensated simulations of the feedforward synfire chain on the ESS.

    <p>(A) Synapse loss after mapping the model with different numbers of neurons onto the BrainScaleS System. (B) (,) state space on the ESS with default parameters, 20% weight noise, and 27.4% synapse loss. (C) After compensation for all distortion mechanisms, different separatrices are possible by setting different values of the inhibitory weight. (D) Compensated state space belonging to the blue separatrix in C.</p>

    Compensation of homogeneous synaptic loss in the L2/3 model.

    <p>Unless explicitly stated otherwise, the default network model (9HC9MC) was used. Here, we use the following color code: blue for the original model, red for the distorted case (50% synapse loss), green for the compensation via increased synaptic weights and purple for the compensation by scaling down the size of the PYR populations. (A–D) Raster plots of spiking activity. The MCs are ordered such that those belonging to the same attractor (and <i>not</i> those within the same HC) are grouped together. Synapse loss weakens the interactions within and among MCs, causing shorter dwell times and longer competition times. Both compensation methods successfully counter these effects. These phenomena can also be observed in subplots H–P. (E–G) Star plots of average PYR voltages from a sample of 5 PYR cells per MC. Synapse loss leads to a less pronounced difference between the average PYR membrane potential within and outside of active attractors. After compensation, the differences between UP and DOWN states become more pronounced again. These phenomena can also be observed in subplots R and S. (H–K) Average dwell time for various network sizes. (L–O) Fraction of time spent in competitive states (i.e., no active attractors) for various network sizes. (P) Distributions of dwell times. (Q) Average firing rate of PYR cells within an active period of their parent attractor. (R) Average voltage of PYR cells before, during and after their parent attractor is active (UP state). (S) Average voltage of PYR cells before, during and after an attractor they do not belong to is active (DOWN state). For subplots Q, R and S, the abscissa has been subdivided into multiples of the average attractor dwell time in the respective simulations. In subplots R and S the dotted line indicates the leak potential of the PYR cells. (T) Pattern completion in a 25HC25MC network. Estimated activation probability from 25 trials per abscissa value. Synapse loss shifts the curve to the right, i.e., more MCs need to be stimulated to achieve the same probability of activating their parent attractor. Both compensation methods restore the original behavior to a large extent. (U) Attentional blink in a 25HC25MC network: iso-probability contours, measured over 14 trials per (, #MCs) pair. Synapse loss suppresses attentional blink, as inhibition from active attractors becomes too weak to prevent the activation of other stimulated attractors. Compensation by increasing the weight of the remaining synapses alleviates this effect, but scaling down the PYR population sizes directly reduces the percentage of lost synapses and is therefore more effective in restoring attentional blink.</p>
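The weight-scaling compensation described in the caption above can be illustrated with a minimal sketch. This is not code from the paper; the function name and array layout are assumptions. The idea is that for homogeneous synapse loss with survival fraction 1 − p, dividing the surviving weights by 1 − p preserves the expected total synaptic input per neuron:

```python
import numpy as np

def compensate_synapse_loss(weights, mask):
    """Illustrative compensation for homogeneous synapse loss.

    weights: 2D array of synaptic weights (post x pre)
    mask: boolean array, True where a synapse survived the mapping
    """
    p_loss = 1.0 - mask.mean()          # fraction of synapses lost
    surviving = weights * mask          # lost synapses contribute 0
    # Scale up the remaining weights so that the expected total
    # input per neuron is preserved: w -> w / (1 - p_loss)
    return surviving / (1.0 - p_loss)

rng = np.random.default_rng(0)
w = rng.uniform(0.5, 1.5, size=(100, 100))
mask = rng.random(w.shape) > 0.5        # ~50% synapse loss, as in the figure
w_comp = compensate_synapse_loss(w, mask)
# Mean total input per neuron is approximately restored
print(np.allclose(w.sum(axis=1).mean(), w_comp.sum(axis=1).mean(), rtol=0.05))
```

Note that this only restores the input statistics on average; as the caption points out, shrinking the PYR populations instead reduces the loss fraction itself and can be the more effective compensation.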

    Effects of fixed axonal delays on the L2/3 model.

    <p>Unless explicitly stated otherwise, the default network model (9HC9MC) was used. Data from the regular and distorted models is depicted (or highlighted) in blue and red, respectively. (A) Average firing rate of PYR cells within an active period of their parent attractor. (B, C) Average dwell time for various network sizes. (D, E) Fraction of time spent in competitive states (i.e., no active attractors) for various network sizes. (F) Distributions of dwell times. (G) Average voltage of PYR cells before, during and after their parent attractor is active (UP state). (H) Average voltage of PYR cells before, during and after an attractor they do not belong to is active (DOWN state). For subplots A, G and H, the abscissa has been subdivided into multiples of the average attractor dwell time in the respective simulations. In subplots G and H the dotted line indicates the leak potential of the PYR cells.</p>

    Architecture of the BrainScaleS wafer-scale hardware system.

    <p>Left: The HICANN building block has two symmetric halves with synapse arrays and neuron circuits. Neural activity is transported horizontally (blue) and vertically (red) via asynchronous buses that span the entire wafer. Exemplary spike paths are shown in yellow on the HICANN: the incoming spike packet is routed to the synapse drivers. In the event that a neuron spikes, it emits a spike packet back into the routing network. Right: Off-wafer connectivity is established by a hierarchical packet-based network via DNCs and FPGAs, which interfaces with the on-wafer routing buses on the HICANN building blocks. Several wafer modules can be interconnected using routing functionality between the FPGAs.</p>

    Projection-wise synapse loss of the L2/3 model after the mapping process.

    <p>Projection-wise synapse loss in % for the default (9HC×9MC) and large-scale (25HC×25MC) network. See text for the respective differences between the distorted (dist.) and compensated (comp.) networks.</p>