A Distributed System for Parallel Simulations
We present the technologies and algorithms used to build a web-based visualization and steering system that monitors the dynamics of remote parallel simulations executed on a Linux cluster. We also propose a polynomial-time algorithm that optimally utilizes distributed computing resources over a network to achieve the maximum frame rate. Keeping up with advancements in modern web technologies, we have developed an Ajax-based web frontend that allows users to remotely access and control ongoing computations via a web browser, aided by real-time visual feedback. Experimental results are given for sample runs mapped to distributed computing nodes and initiated by users at different geographical locations. Our preliminary frame-rate results show that system performance is affected by the network conditions along the chosen mapping loop, including available network bandwidth and computing capacity. The underlying programming framework of our system supports a mixed-programming mode and is flexible enough to integrate most serial or parallel simulation codes written in different programming languages, such as Fortran, C, and Java.
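The frame-rate optimization described in the abstract can be sketched as a capacity check over candidate mappings: each node's achievable frame rate is capped by both its network bandwidth and its rendering speed. The node names, bandwidths, and frame sizes below are invented for illustration; this is a minimal sketch of the idea, not the paper's actual polynomial-time algorithm.

```python
def achievable_fps(bandwidth_mbps, frame_mbits, render_fps):
    """Frame rate is limited by both network transfer and rendering speed."""
    network_fps = bandwidth_mbps / frame_mbits
    return min(network_fps, render_fps)

def best_mapping(nodes):
    """Scan candidate nodes and keep the one with the highest frame rate."""
    return max(nodes, key=lambda n: achievable_fps(n["bw"], n["frame"], n["fps"]))

# Hypothetical candidate nodes (all numbers are assumptions):
nodes = [
    {"name": "node-a", "bw": 100.0, "frame": 8.0, "fps": 30.0},  # network-bound
    {"name": "node-b", "bw": 400.0, "frame": 8.0, "fps": 24.0},  # compute-bound
]
chosen = best_mapping(nodes)
```

In this toy setting, node-a is network-bound at 12.5 fps while node-b is compute-bound at 24 fps, so the mapping picks node-b; the real system must additionally account for the whole mapping loop rather than a single node.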
The Brain on Low Power Architectures - Efficient Simulation of Cortical Slow Waves and Asynchronous States
Efficient brain simulation is a scientific grand challenge, a
parallel/distributed coding challenge and a source of requirements and
suggestions for future computing architectures. Indeed, the human brain
includes about 10^15 synapses and 10^11 neurons activated at a mean rate of
several Hz. Full brain simulation poses Exascale challenges even if simulated
at the highest abstraction level. The WaveScalES experiment in the Human Brain
Project (HBP) has the goal of matching experimental measures and simulations of
slow waves during deep-sleep and anesthesia and the transition to other brain
states. The focus is the development of dedicated large-scale
parallel/distributed simulation technologies. The ExaNeSt project designs an
ARM-based, low-power HPC architecture scalable to millions of cores and
develops a dedicated scalable interconnect system; SWA/AW simulations are
included among its driving benchmarks. At the junction of the two projects is the INFN
proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation
engine. DPSNN can be configured to stress either the networking or the
computation features available on the execution platforms. The simulation
stresses the networking component when the neural net, composed of a
relatively small number of neurons, each projecting thousands of synapses,
is distributed over a large number of hardware cores. As the number of
neurons per core grows, computation becomes the dominant component for
short-range connections. This paper reports on preliminary performance
results obtained on an ARM-based HPC prototype developed in the framework of
the ExaNeSt project. Furthermore, a comparison is given of instantaneous power,
total energy consumption, execution time and energetic cost per synaptic event
of SWA/AW DPSNN simulations when executed on either ARM- or Intel-based server
platforms.
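The networking-versus-computation trade-off the abstract describes can be illustrated with a back-of-envelope model: per core, spike messages to remote cores scale with the fraction of synaptic targets living elsewhere, while local synaptic events scale with the fraction living on the same core. This is not DPSNN code; the uniform-connectivity assumption, synapse count, and firing rate are invented stand-ins loosely based on the abstract (thousands of synapses per neuron, a mean rate of a few Hz).

```python
def comm_vs_compute(neurons_per_core, total_neurons,
                    synapses_per_neuron=1000, rate_hz=3.0):
    """Return (remote spike messages/s, local synaptic events/s) per core.

    Crude stand-in for the connectivity: the fraction of a neuron's
    targets that live on the same core grows with neurons per core.
    """
    local_fraction = neurons_per_core / total_neurons
    spikes_per_s = neurons_per_core * rate_hz
    remote_msgs = spikes_per_s * synapses_per_neuron * (1.0 - local_fraction)
    local_events = spikes_per_s * synapses_per_neuron * local_fraction
    return remote_msgs, local_events

# Few neurons spread over many cores: communication dominates the workload.
few = comm_vs_compute(1_000, 10_000_000)
# Many neurons per core: a much larger share of synaptic events stays local.
many = comm_vs_compute(1_000_000, 10_000_000)
```

The ratio of local events to remote messages grows with neurons per core, matching the abstract's observation that computation starts to dominate when the neuron count per core increases.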
Allocation of Distributed Generation for Maximum Reduction of Energy Losses in Distribution Systems
The analysis of actual distribution systems with penetration of distributed generation requires powerful tools with capabilities that until very recently were not available in distribution software; for instance, probabilistic and time-mode simulations. This chapter presents the application of parallel computing to the allocation of distributed generation for maximum reduction of energy losses in distribution systems when the system is evaluated over a given period (e.g., the target is to minimize energy losses over periods equal to or longer than one year). The simulations have been carried out using OpenDSS, a freely available software tool for distribution system studies, driven as a COM DLL from MATLAB on a multicore installation. The chapter details a MATLAB–OpenDSS procedure for the allocation of photovoltaic (PV) generation in distribution systems using a parallel Monte Carlo approach and assuming that loads are voltage-dependent. The main goals are to check the viability of a Monte Carlo method in studies to which parallel computing can be advantageously applied and to propose a simple procedure for the minimization of energy losses in distribution systems.
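The Monte Carlo allocation loop in the abstract can be sketched as follows. The chapter drives OpenDSS from MATLAB; here a toy quadratic loss model stands in for the year-long power-flow simulation, and the bus names and loss figures are invented for illustration. Each trial is independent, which is what makes the parallel (e.g., worker-pool) version of the loop straightforward.

```python
import random

def energy_losses(pv_bus, pv_kw):
    """Toy stand-in for a year-long OpenDSS time-mode simulation (kWh).
    Losses fall as PV offsets local load, then rise again when reverse
    power flows appear at large PV sizes."""
    base = {"bus1": 950.0, "bus2": 820.0, "bus3": 760.0}[pv_bus]
    return base - 0.4 * pv_kw + 0.0005 * pv_kw ** 2

def monte_carlo_allocation(trials=2000, seed=7):
    """Randomly sample PV placements and sizes; keep the minimum-loss one.
    Trials are independent, so this loop maps directly onto a process pool."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        bus = rng.choice(["bus1", "bus2", "bus3"])
        kw = rng.uniform(0.0, 500.0)
        losses = energy_losses(bus, kw)
        if best is None or losses < best[0]:
            best = (losses, bus, kw)
    return best

losses, bus, kw = monte_carlo_allocation()
```

With this toy model the optimum sits near 400 kW at the lowest-loss bus; in the real procedure each trial would instead run a voltage-dependent-load time-series simulation in OpenDSS.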
Rigorous results on spontaneous symmetry breaking in a one-dimensional driven particle system
We study spontaneous symmetry breaking in a one-dimensional driven
two-species stochastic cellular automaton with parallel sublattice update and
open boundaries. The dynamics are symmetric with respect to interchange of
particles. Starting from an empty initial lattice, the system enters a symmetry
broken state after some time T_1 through an amplification loop of initial
fluctuations. It remains in the symmetry broken state for a time T_2 through a
traffic jam effect. Applying a simple martingale argument, we obtain rigorous
asymptotic estimates for the expected times, ⟨T_1⟩ ~ L ln(L) and ln(⟨T_2⟩) ~ L,
where L is the system size. The actual value of T_1 depends strongly on the
initial fluctuation in the amplification loop. Numerical simulations suggest
that T_2 is exponentially distributed with a mean that grows exponentially in
system size. For the phase transition line we argue and confirm by simulations
that the flipping time between sign changes of the difference of particle
numbers approaches an algebraic distribution as the system size tends to
infinity. Comment: 23 pages, 7 figures
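The amplification-loop mechanism in the abstract can be illustrated with a toy model that is much simpler than the paper's two-species cellular automaton: a particle-number difference that drifts toward its own sign amplifies any initial fluctuation until the system is macroscopically symmetry-broken. The bias parameter and dynamics below are invented purely for illustration.

```python
import random

def time_to_break(L, bias=0.1, seed=1):
    """Steps until the particle-number difference reaches |delta| = L,
    starting from a single initial fluctuation (delta = 1).
    The walk drifts toward its current sign, mimicking amplification."""
    rng = random.Random(seed)
    delta, t = 1, 0
    while abs(delta) < L:
        p = 0.5 + bias * (1 if delta > 0 else -1)
        delta += 1 if rng.random() < p else -1
        t += 1
    return t

t_break = time_to_break(50)
```

In this caricature the breaking time scales roughly linearly with L because of the constant drift; the paper's rigorous martingale argument for the actual automaton yields the sharper ⟨T_1⟩ ~ L ln(L) estimate.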
Output feedback robust distributed model predictive control for parallel systems in process networks with competitive characteristics
The parallel structure is one of the basic system architectures found in process networks. This paper formulates control strategies for such parallel systems when the states are unmeasured. Competitive coupling and competitive constraints are addressed in the control design. A distributed buffer and pre-estimator are proposed to solve problems relating to coupling and timely communication, whilst a distributed moving-horizon estimator is employed to further improve the estimation accuracy in the presence of the constraints. An output feedback robust distributed model predictive control algorithm is then developed for such parallel systems. The Lyapunov method is used for the theoretical analysis, which produces tractable linear matrix inequalities (LMIs). Simulation and experimental results are provided to validate the effectiveness of the proposed approach.
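Behind the LMI analysis mentioned in the abstract sits a standard discrete-time Lyapunov feasibility test: the closed loop is stable if there is a P ≻ 0 with Aᵀ P A − P ≺ 0. The sketch below checks this condition for an invented 2×2 closed-loop matrix using the series solution of the discrete Lyapunov equation; it is a minimal stand-in for the paper's full robust, distributed LMI formulation.

```python
import numpy as np

def discrete_lyapunov(A, Q, terms=200):
    """P = sum_k (A^T)^k Q A^k solves A^T P A - P = -Q for Schur-stable A.
    Truncated series; accurate when the spectral radius of A is < 1."""
    P = np.zeros_like(Q, dtype=float)
    Ak = np.eye(A.shape[0])
    for _ in range(terms):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

# Invented closed-loop matrix standing in for the parallel-system model:
A_cl = np.array([[0.8, 0.1],
                 [0.0, 0.7]])
P = discrete_lyapunov(A_cl, np.eye(2))

# Stability certificate: P positive definite, A^T P A - P negative definite.
residual = A_cl.T @ P @ A_cl - P
```

A convex solver generalizes this from checking a single A to verifying the condition over a whole uncertainty set of closed-loop matrices, which is what makes the LMIs in the paper "tractable".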
A Distributed Algebra System for Time Integration on Parallel Computers
We present a distributed algebra system for efficient and compact
implementation of numerical time integration schemes on parallel computers and
graphics processing units (GPU). The software implementation combines the time
integration library Odeint from Boost with the OpenFPM framework for scalable
scientific computing. Implementing multi-stage, multi-step, or adaptive time
integration methods in distributed-memory parallel codes or on GPUs is
challenging. The present algebra system addresses this by making the time
integration methods from Odeint available in a concise template-expression
language for numerical simulations distributed and parallelized using OpenFPM.
This allows using state-of-the-art time integration schemes, or switching
between schemes, by changing one line of code, while maintaining parallel
scalability. This enables scalable time integration with compact code and
facilitates rapid rewriting and deployment of simulation algorithms. We
benchmark the present software for exponential and sigmoidal dynamics and
present an application example to the 3D Gray-Scott reaction-diffusion problem
on both CPUs and GPUs in only 60 lines of code.
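The "switch schemes by changing one line" idea can be sketched in Python (the actual system composes Boost's Odeint steppers with OpenFPM's C++ template expressions; the stepper names below are our own minimal stand-ins, tried here on the exponential dynamics the abstract benchmarks).

```python
def euler_step(f, y, t, dt):
    """Explicit Euler: first-order accurate."""
    return y + dt * f(y, t)

def rk4_step(f, y, t, dt):
    """Classical Runge-Kutta: fourth-order accurate."""
    k1 = f(y, t)
    k2 = f(y + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(y + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(y + dt * k3, t + dt)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t0, t1, dt, step=rk4_step):  # <- swap the scheme here
    """March from t0 to t1 with a fixed step; the stepper is pluggable."""
    y, t = y0, t0
    while t < t1 - 1e-12:
        y = step(f, y, t, dt)
        t += dt
    return y

# Exponential decay dy/dt = -y; exact value at t = 1 is exp(-1).
y_rk4 = integrate(lambda y, t: -y, 1.0, 0.0, 1.0, 0.01)
y_euler = integrate(lambda y, t: -y, 1.0, 0.0, 1.0, 0.01, step=euler_step)
```

Because the stepper is just a parameter, switching from Euler to RK4 (or to an adaptive scheme) touches a single argument, which is the property the distributed algebra system preserves while additionally keeping the state distributed and the evaluation parallel.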