Neuronal simulation system of biological neural networks
Neuroscientists use computer simulations of neural systems in their efforts
to understand the processes that underlie neural function. As experimental
data increase, it becomes clear that detailed physiological data alone are
not enough to infer how neural circuits work. Experimentalists appear to be
recognizing the need for a quantitative approach to exploring the functional
consequences of particular neural features, which modelling provides. A
number of computer simulation programs have been designed as tools for the
development and simulation of realistic models of single neurons and neural
networks. The currently available packages for modelling biological neural
networks are often dedicated Unix-based simulation packages, which demand
considerable computational power from workstations, typically Unix systems.
The widely distributed packages, such as Genesis [8] and Neuron [4], have
their own interpreted scripting languages, in which users define the
components and running parameters of their simulations. In the hands of
experienced users with access to a compatible computer system, these
modelling packages are powerful research tools. However, they suffer several
drawbacks for non-expert users: they provide either no Graphical User
Interface (GUI) or only a very simple one, and consequently cannot visually
represent the simulation process. Moreover, the formal structure of the
scripting language is difficult and time-consuming to learn, and at least
basic knowledge of and skills with the Unix system are required.
Getting High: High Fidelity Simulation of High Granularity Calorimeters with High Speed
Accurate simulation of physical processes is crucial for the success of
modern particle physics. However, simulating the development and interaction of
particle showers with calorimeter detectors is a time consuming process and
drives the computing needs of large experiments at the LHC and future
colliders. Recently, generative machine learning models based on deep neural
networks have shown promise in speeding up this task by several orders of
magnitude. We investigate the use of a new architecture -- the Bounded
Information Bottleneck Autoencoder -- for modelling electromagnetic showers in
the central region of the Silicon-Tungsten calorimeter of the proposed
International Large Detector. Combined with a novel second post-processing
network, this approach achieves an accurate simulation of differential
distributions, including, for the first time, the shape of the
minimum-ionizing-particle peak, relative to a full GEANT4 simulation for a
high-granularity calorimeter with 27k simulated channels. The results are
validated by comparing to established architectures. Our results further
strengthen the case of using generative networks for fast simulation and
demonstrate that physically relevant differential distributions can be
described with high accuracy.
Comment: 17 pages, 12 figures
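The encoder-bottleneck-decoder data flow that underlies autoencoder-based shower generation can be sketched in a few lines of plain Python. This is only an illustration of the shape of the computation, not the paper's model: the layers are linear, the weights are random, all names and sizes are invented, and the actual Bounded Information Bottleneck Autoencoder adds information-bottleneck constraints, adversarial critics, and the post-processing network mentioned above.

```python
import random

random.seed(0)

def linear(x, w):
    """Apply a weight matrix (list of rows) to a vector: returns w @ x."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def make_weights(n_out, n_in):
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
            for _ in range(n_out)]

# Toy "shower": a flattened 8x8 grid of calorimeter cell energies (64 cells).
shower = [random.random() for _ in range(64)]

# Encoder compresses the shower to an 8-dimensional latent code;
# decoder maps the code back to the full 64-cell grid.
enc_w = make_weights(8, 64)
dec_w = make_weights(64, 8)

latent = linear(shower, enc_w)
reconstruction = linear(latent, dec_w)

print(len(latent), len(reconstruction))  # 8 64
```

The speed-up claim in the abstract comes from exactly this asymmetry: once trained, generating a shower is a handful of matrix multiplications rather than a full GEANT4 particle-transport simulation.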
The state of MIIND
MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly
modular multi-level C++ framework that aims to shorten the development time
for models in Cognitive Neuroscience (CNS). It offers reusable code modules
(libraries of classes and functions) aimed at solving problems that occur
repeatedly in modelling, but tries not to impose a specific modelling
philosophy or methodology. At the lowest level, it offers support for the
implementation of sparse networks. For example, the library
SparseImplementationLib supports sparse random networks and the library
LayerMappingLib can be used for sparse regular networks of filter-like
operators. The library DynamicLib, which builds on top of
SparseImplementationLib, offers a generic framework for simulating network
processes. Presently, several specific network process implementations are
provided in MIIND: the Wilson–Cowan and Ornstein–Uhlenbeck types, and
population density techniques for leaky-integrate-and-fire neurons driven by
Poisson input. A design principle of MIIND is to support detailing: the
refinement of an originally simple model into a form where more biological
detail is included. Another design principle is extensibility: the reuse of
an existing model in a larger, more extended one. One of the main uses of
MIIND so far has been the instantiation of neural models of visual
attention. Recently, we have added a library for implementing
biologically-inspired models of artificial vision, such as HMAX and its
recent successors. In the long run we hope to be able to apply suitably
adapted neuronal mechanisms of attention to these artificial models.
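The setting that MIIND's population density techniques target, a leaky-integrate-and-fire neuron driven by Poisson input, can be illustrated with a direct single-neuron simulation. This is a hedged sketch, not MIIND code: the parameter values and the simple Euler scheme are chosen purely for illustration.

```python
import random

random.seed(1)

# Leaky integrate-and-fire neuron driven by Poisson input, integrated with
# a forward-Euler scheme. All parameter values are illustrative.
dt = 0.1          # time step, ms
tau = 10.0        # membrane time constant, ms
v_reset = 0.0     # reset potential (dimensionless units)
v_thresh = 1.0    # firing threshold
rate = 0.5        # expected input spikes per ms (Poisson process)
weight = 0.2      # jump in v per incoming spike

v = v_reset
spikes = 0
for _ in range(10000):                 # 1 s of simulated time
    if random.random() < rate * dt:    # did an input spike arrive this step?
        v += weight
    v -= v / tau * dt                  # leak toward rest
    if v >= v_thresh:                  # threshold crossing: spike and reset
        spikes += 1
        v = v_reset

print(spikes)
```

A population density method evolves the probability distribution of `v` over a whole population of such neurons instead of simulating each one, which is what makes it efficient for large networks.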
Data-driven Flood Emulation: Speeding up Urban Flood Predictions by Deep Convolutional Neural Networks
Computational complexity has been the bottleneck of applying physically-based
simulations on large urban areas with high spatial resolution for efficient and
systematic flooding analyses and risk assessments. To address this issue of
long computational time, this paper proposes that the prediction of maximum
water depth rasters can be considered as an image-to-image translation problem
where the results are generated from input elevation rasters using the
information learned from data rather than by conducting simulations, which can
significantly accelerate the prediction process. The proposed approach was
implemented by a deep convolutional neural network trained on flood simulation
data of 18 designed hyetographs on three selected catchments. Multiple tests
with both designed and real rainfall events were performed, and the results
show that the neural network's flood predictions require only 0.5% of the
computation time of physically-based approaches, with promising accuracy and
generalization ability. The proposed neural network can also potentially be
applied to different but related problems, including flood prediction for
urban layout planning.
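The image-to-image framing of the flood problem (elevation raster in, maximum water-depth raster out) can be illustrated with a single hand-written convolution. A minimal sketch, not the paper's trained network: the kernel below is a hand-picked placeholder that merely responds to local depressions, whereas the actual model learns many convolutional layers from the 18-hyetograph simulation data.

```python
def conv2d(grid, kernel):
    """Valid 3x3 convolution over a 2D list-of-lists grid."""
    h, w = len(grid), len(grid[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(grid[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(s)
        out.append(row)
    return out

# Toy 6x6 elevation raster (metres) with a depression in the middle.
elevation = [
    [5, 5, 5, 5, 5, 5],
    [5, 4, 3, 3, 4, 5],
    [5, 3, 1, 1, 3, 5],
    [5, 3, 1, 1, 3, 5],
    [5, 4, 3, 3, 4, 5],
    [5, 5, 5, 5, 5, 5],
]

# Placeholder kernel: mean of the 8 neighbours minus the centre cell, so
# cells lying below their surroundings produce a larger "depth" response.
k = 1.0 / 8.0
kernel = [[k, k, k], [k, -1.0, k], [k, k, k]]

depth = conv2d(elevation, kernel)
print(len(depth), len(depth[0]))  # 4 4
```

Because every operation is a local stencil over the raster, inference cost scales with the number of cells rather than with simulated time steps, which is the source of the large speed-up over physically-based solvers.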