Design and Development of an Adaptive-Optics-Corrected Laser Scanning Retina Tomograph
Laser scanning ophthalmoscopy is a valuable imaging technique for diagnosing diseases of the ocular fundus, but its resolution is limited by the aberrations of the eye. This thesis investigates the use of an adaptive-optics system consisting of a micromechanical piston-mirror array and a Hartmann-Shack wavefront sensor. The interplay of Hartmann-Shack sensor, micromirror, and scanner is studied both theoretically and in numerical simulations. Further simulations establish upper bounds on the achievable performance of micromirror devices. An experimental setup is developed that integrates a Hartmann-Shack sensor into a laser scanning ophthalmoscope. With this setup, wavefront measurements averaged over scan angles of two to ten degrees are performed on several subjects, in part with the eye paralyzed.
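As background to the wavefront measurements described above, the Hartmann-Shack principle can be sketched in a few lines: each lenslet's spot displacement encodes the local wavefront slope, and a least-squares fit recovers low-order aberration coefficients. The function name and the tilt-plus-defocus basis below are illustrative assumptions, not the thesis setup:

```python
import numpy as np

def reconstruct_wavefront(x, y, slopes_x, slopes_y, focal_length=1.0):
    """Fit tilt-x, tilt-y, and defocus from measured spot displacements."""
    sx = slopes_x / focal_length  # spot displacement -> local wavefront slope
    sy = slopes_y / focal_length
    # Wavefront model W(x, y) = a*x + b*y + c*(x^2 + y^2)
    # => dW/dx = a + 2*c*x,  dW/dy = b + 2*c*y
    A = np.block([
        [np.ones_like(x)[:, None], np.zeros_like(x)[:, None], 2 * x[:, None]],
        [np.zeros_like(y)[:, None], np.ones_like(y)[:, None], 2 * y[:, None]],
    ])
    rhs = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs  # [tilt_x, tilt_y, defocus]
```

With synthetic slopes generated from known coefficients, the fit recovers them exactly, since the model is linear in its parameters.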
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the microelectronic emulation of neural computation, such models are highly scalable without loss of speed. However, the communities of software-simulator users and of neuromorphic engineering in neuroscience are still largely disjoint. We present a software concept for establishing such hardware devices as valuable modeling tools. It is based on integrating the hardware interface into a simulator-independent language, which allows unified experiment descriptions that can be run on various simulation platforms without modification. This makes experiments portable and greatly simplifies the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired with both the hardware system and a software simulator are demonstrated.
Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms
Advancing the size and complexity of neural network models leads to an ever
increasing demand for computational resources for their simulation.
Neuromorphic devices offer a number of advantages over conventional computing
architectures, such as high emulation speed or low power consumption, but this
usually comes at the price of reduced configurability and precision. In this
article, we investigate the consequences of several such factors that are
common to neuromorphic devices, more specifically limited hardware resources,
limited parameter configurability and parameter variations. Our final aim is to
provide an array of methods for coping with such inevitable distortion
mechanisms. As a platform for testing our proposed strategies, we use an
executable system specification (ESS) of the BrainScaleS neuromorphic system,
which has been designed as a universal emulation back-end for neuroscientific
modeling. We address the most essential limitations of this device in detail
and study their effects on three prototypical benchmark network models within a
well-defined, systematic workflow. For each network model, we start by defining
quantifiable functionality measures by which we then assess the effects of
typical hardware-specific distortion mechanisms, both in idealized software
simulations and on the ESS. For those effects that cause unacceptable
deviations from the original network dynamics, we suggest generic compensation
mechanisms and demonstrate their effectiveness. Both the suggested workflow and
the investigated compensation mechanisms are largely back-end independent and
do not require additional hardware configurability beyond that required to
emulate the benchmark networks in the first place. We hereby provide a generic
methodological environment for configurable neuromorphic devices that are
targeted at emulating large-scale, functional neural networks.
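The workflow this abstract describes (define a functionality measure, quantify the distortion, then compensate it) can be illustrated with a deliberately simple rate model. The function names and the per-neuron gain-mismatch model below are assumptions for illustration, not the ESS interface:

```python
import numpy as np

def firing_rates(weights, inputs):
    """Toy rate model: rectified weighted sum of the inputs."""
    return np.maximum(weights @ inputs, 0.0)

def functionality_measure(rates, target_rates):
    """Quantifiable functionality measure: deviation from target dynamics."""
    return float(np.mean(np.abs(rates - target_rates)))

def compensate(weights, gains):
    """Pre-divide weights so the substrate's gain mismatch cancels out."""
    return weights / gains[:, None]

rng = np.random.default_rng(42)
w = rng.uniform(0.5, 1.5, size=(4, 8))        # ideal model weights
x = rng.uniform(0.0, 1.0, size=8)
target = firing_rates(w, x)                   # reference software result

gains = rng.normal(1.0, 0.2, size=4)          # fixed-pattern gain mismatch
distorted = firing_rates(gains[:, None] * w, x)
compensated = firing_rates(gains[:, None] * compensate(w, gains), x)
```

The compensated weights yield the reference dynamics again because the calibrated gains cancel exactly, which is the back-end-independent idea: measure the distortion once, then invert it in the configuration.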
Accelerated physical emulation of Bayesian inference in spiking neural networks
The massively parallel nature of biological information processing plays an
important role for its superiority to human-engineered computing devices. In
particular, it may hold the key to overcoming the von Neumann bottleneck that
limits contemporary computer architectures. Physical-model neuromorphic devices
seek to replicate not only this inherent parallelism, but also aspects of its
microscopic dynamics in analog circuits emulating neurons and synapses.
However, these machines require network models that are not only adept at
solving particular tasks, but that can also cope with the inherent
imperfections of analog substrates. We present a spiking network model that
performs Bayesian inference through sampling on the BrainScaleS neuromorphic
platform, where we use it for generative and discriminative computations on
visual data. By illustrating its functionality on this platform, we implicitly
demonstrate its robustness to various substrate-specific distortive effects, as
well as its accelerated capability for computation. These results showcase the
advantages of brain-inspired physical computation and provide important
building blocks for large-scale neuromorphic applications.
Comment: This preprint was published 14 November 2019. Please cite as: Kungl A. F. et al. (2019) Accelerated Physical Emulation of Bayesian Inference in Spiking Neural Networks. Front. Neurosci. 13:1201. doi: 10.3389/fnins.2019.0120
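At an abstract level, the sampling principle used in this work corresponds to Gibbs sampling from a Boltzmann distribution over binary units, where each unit's "membrane potential" determines its probability of being active. The following is a plain-software sketch of that principle, not the BrainScaleS implementation:

```python
import numpy as np

def gibbs_sample(W, b, n_sweeps, rng):
    """Gibbs-sample binary states z from p(z) ~ exp(z.T @ W @ z / 2 + b @ z)."""
    n = len(b)
    z = rng.integers(0, 2, n).astype(float)
    samples = []
    for _ in range(n_sweeps):
        for k in range(n):
            # abstract "membrane potential" of unit k given all other units
            u = W[k] @ z - W[k, k] * z[k] + b[k]
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))  # logistic activation
        samples.append(z.copy())
    return np.array(samples)
```

For a small network the empirical state frequencies can be checked against the exactly enumerable Boltzmann distribution, which is also how sampling quality is typically assessed on hardware.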
Demonstrating Advantages of Neuromorphic Computation: A Pilot Study
Neuromorphic devices represent an attempt to mimic aspects of the brain's
architecture and dynamics with the aim of replicating its hallmark functional
capabilities in terms of computational power, robust learning and energy
efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic
system to implement a proof-of-concept demonstration of reward-modulated
spike-timing-dependent plasticity in a spiking network that learns to play the
Pong video game by smooth pursuit. This system combines an electronic
mixed-signal substrate for emulating neuron and synapse dynamics with an
embedded digital processor for on-chip learning, which in this work also serves
to simulate the virtual environment and learning agent. The analog emulation of
neuronal membrane dynamics enables a 1000-fold acceleration with respect to
biological real time, with the entire chip operating on a power budget of 57 mW.
Compared to an equivalent simulation using state-of-the-art software, the
on-chip emulation is at least one order of magnitude faster and three orders of
magnitude more energy-efficient. We demonstrate how on-chip learning can
mitigate the effects of fixed-pattern noise, which is unavoidable in analog
substrates, while making use of temporal variability for action exploration.
Learning compensates imperfections of the physical substrate, as manifested in
neuronal parameter variability, by adapting synaptic weights to match
the respective excitability of individual neurons.
Comment: Added measurements with noise in the NEST simulation and a notice about journal publication. Frontiers in Neuromorphic Engineering (2019)
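The reward-modulated plasticity described above follows the generic three-factor pattern: spike pairings build an eligibility trace, and a scalar reward gates its conversion into weight changes. A minimal per-time-step sketch with simplified coincidence-based pairing and hypothetical parameter names (not the on-chip rule):

```python
import numpy as np

def rstdp_step(w, eligibility, pre_spike, post_spike, reward,
               lr=0.01, tau_e=20.0, a_plus=1.0, a_minus=0.6):
    """One time step of a generic three-factor (reward-gated) plasticity rule."""
    eligibility *= np.exp(-1.0 / tau_e)   # eligibility trace decays over time
    if pre_spike and post_spike:          # coincident pre/post pairing (simplified)
        eligibility += a_plus
    elif pre_spike:                       # pre spike without post response
        eligibility -= a_minus
    w += lr * reward * eligibility        # reward converts the trace into a weight change
    return w, eligibility
```

Because the reward factor multiplies the trace, the same pairing history strengthens the synapse under positive reward and weakens it under negative reward, which is what allows trial-and-error learning of a task like Pong.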
Emulating insect brains for neuromorphic navigation
Bees display the remarkable ability to return home in a straight line after
meandering excursions to their environment. Neurobiological imaging studies
have revealed that this capability emerges from a path integration mechanism
implemented within the insect's brain. In the present work, we emulate this
neural network on the neuromorphic mixed-signal processor BrainScaleS-2 to
guide bees, virtually embodied on a digital co-processor, back to their home
location after randomly exploring their environment. To realize the underlying
neural integrators, we introduce single-neuron spike-based short-term memory
cells with axo-axonic synapses. All entities, including environment, sensory
organs, brain, actuators, and the virtual body, run autonomously on a single
BrainScaleS-2 microchip. The functioning network is fine-tuned for better
precision and reliability through an evolution strategy. As BrainScaleS-2
emulates neural processes 1000 times faster than biology, 4800 consecutive bee
journeys distributed over 320 generations occur within only half an hour on a
single neuromorphic core.
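The path-integration principle underlying the emulated network can be stated compactly: the agent accumulates its own displacement vector while exploring, and the home vector is simply its negation. A sketch of this abstract algorithm (the chip itself realizes the integration with spiking memory cells, not floating-point sums):

```python
import numpy as np

def integrate_path(headings, speeds):
    """Accumulate egocentric motion; return the vector pointing back home."""
    dx = np.sum(speeds * np.cos(headings))  # total displacement, x component
    dy = np.sum(speeds * np.sin(headings))  # total displacement, y component
    return np.array([-dx, -dy])             # home vector = negated displacement
```

Adding the home vector to the final position returns the agent to the origin regardless of how convoluted the outbound path was, which is exactly the straight-line homing behavior observed in bees.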
Hotspots of land use change in Europe
The secondary publication of this article was supervised by students of the project seminar "Open Access Publizieren an der HU" in the summer semester of 2017. Reused in accordance with the licensor's CC terms or a CC license contained in the document itself.
Assessing changes in the extent and management intensity of land use is crucial to understanding land-system dynamics and their environmental and social outcomes. Yet, changes in the spatial patterns of land management intensity, and thus how they might relate to changes in the extent of land uses, remain unclear for many world regions. We compiled and analyzed high-resolution, spatially explicit land-use change indicators capturing changes in both the extent and management intensity of cropland, grazing land, forests, and urban areas for all of Europe for the period 1990–2006. Based on these indicators, we identified hotspots of change and explored the spatial concordance of area versus intensity changes. We found a clear East–West divide with regard to agriculture, with stronger cropland decline and lower management intensity in the East than in the West. Yet, these patterns were not uniform: diverging patterns emerged, with intensification in areas highly suitable for farming, and disintensification and cropland contraction in more marginal areas. Despite the moderate overall rates of change, many regions in Europe fell into at least one land-use change hotspot during 1990–2006, often related to a spatial reorganization of land use (i.e., co-occurring area decline and intensification, or co-occurring area increase and disintensification). Our analyses highlight the diverse spatial patterns and heterogeneity of land-use changes in Europe and the importance of jointly considering changes in the extent and management intensity of land use, as well as feedbacks among land-use sectors. Given this spatial differentiation of land-use change, and thus of its environmental impacts, spatially explicit assessments of land-use dynamics are important for context-specific, regionalized land-use policy making.
Peer Reviewed
Cortical oscillations implement a backbone for sampling-based computation in spiking neural networks
Brains need to deal with an uncertain world. Often, this requires visiting
multiple interpretations of the available information or multiple solutions to
an encountered problem. This gives rise to the so-called mixing problem: since
all of these "valid" states represent powerful attractors, but between
themselves can be very dissimilar, switching between such states can be
difficult. We propose that cortical oscillations can be effectively used to
overcome this challenge. By acting as an effective temperature, background
spiking activity modulates exploration. Rhythmic changes induced by cortical
oscillations can then be interpreted as a form of simulated tempering. We
provide a rigorous mathematical discussion of this link and study some of its
phenomenological implications in computer simulations. This identifies a new
computational role of cortical oscillations and connects them to various
phenomena in the brain, such as sampling-based probabilistic inference, memory
replay, multisensory cue combination, and place cell flickering.
Comment: 30 pages, 11 figures
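The proposed link between oscillations and tempering can be illustrated with a toy Metropolis sampler on a double-well energy landscape, where a sinusoidally oscillating temperature stands in for rhythmically modulated background activity. This is an illustrative sketch, not the paper's network model:

```python
import numpy as np

def tempered_walk(n_steps, rng, barrier=3.0, step=0.5):
    """Metropolis sampler on a double well with oscillating temperature."""
    def energy(z):
        return barrier * (z * z - 1.0) ** 2   # wells at z = -1 and z = +1

    x, trace = 1.0, []
    for t in range(n_steps):
        temp = 2.75 + 2.25 * np.sin(0.005 * t)  # "oscillation": T cycles 0.5 .. 5.0
        prop = x + step * rng.normal()
        d_e = energy(prop) - energy(x)
        # Metropolis acceptance; high-temperature phases flatten the barrier
        if d_e <= 0 or rng.random() < np.exp(-d_e / temp):
            x = prop
        trace.append(x)
    return np.array(trace)
```

During high-temperature phases the chain readily crosses the barrier between the two modes, so both wells are visited, which is precisely the mixing behavior that a fixed low temperature would suppress.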
Pattern representation and recognition with accelerated analog neuromorphic systems
Despite being originally inspired by the central nervous system, artificial
neural networks have diverged from their biological archetypes as they have
been remodeled to fit particular tasks. In this paper, we review several
possibilities to reverse-map these architectures onto biologically more realistic
spiking networks with the aim of emulating them on fast, low-power neuromorphic
hardware. Since many of these devices employ analog components, which cannot be
perfectly controlled, finding ways to compensate for the resulting effects
represents a key challenge. Here, we discuss three different strategies to
address this problem: the addition of auxiliary network components for
stabilizing activity, the utilization of inherently robust architectures and a
training method for hardware-emulated networks that functions without perfect
knowledge of the system's dynamics and parameters. For all three scenarios, we
corroborate our theoretical considerations with experimental results on
accelerated analog neuromorphic platforms.
Comment: accepted at ISCAS 201