AER Spiking Neuron Computation on GPUs: The Frame-to-AER Generation
Neuro-inspired processing seeks to imitate the nervous system and may solve
complex problems, such as visual recognition. The spike-based philosophy builds
on the Address-Event-Representation (AER), a neuromorphic inter-chip
communication protocol that allows massive connectivity between neurons.
Some AER-based systems achieve very high performance in real-time
applications. This philosophy is very different from standard image processing,
which considers the visual information as a succession of frames. These frames
need to be processed in order to extract a result. This usually requires very
expensive operations and high computing-resource consumption. Because the field
is still young, AER systems currently lack cost-effective tools such as
emulators, simulators, testers, and debuggers. This paper presents the first
results of a CUDA-based tool focused on the functional processing of AER spikes,
with the aim of aiding the design and testing of the filters and bus management
of these systems.
Ministerio de Educación y Ciencia TEC2009-10639-C04-0
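The frame-to-AER generation step can be illustrated with a simple rate-coding scheme, in which brighter pixels emit more address events. This is a minimal sketch of the general idea only, not the paper's CUDA tool; the function name, the event tuple layout, and the intensity-to-spike mapping are all assumptions made for illustration.

```python
import random

def frame_to_aer(frame, max_events_per_pixel=8):
    """Convert a grayscale frame (2D list, values 0-255) into a list of
    address events using a simple rate-coding scheme: brighter pixels
    emit more events. Each event pairs an (x, y) pixel address with a
    pseudo-random timestamp, mimicking AER's asynchronous event stream.
    (Illustrative sketch; a real tool would use hardware timestamps.)"""
    events = []
    for y, row in enumerate(frame):
        for x, intensity in enumerate(row):
            # Number of spikes is proportional to pixel intensity.
            n_spikes = round(intensity / 255 * max_events_per_pixel)
            for _ in range(n_spikes):
                timestamp = random.random()  # arbitrary time within the frame period
                events.append((timestamp, x, y))
    events.sort()  # AER streams are delivered in time order
    return events

frame = [[0, 128], [255, 64]]
events = frame_to_aer(frame)
```

A dark pixel (intensity 0) emits no events at all, which is the key efficiency argument for spike-based processing over frame-based processing.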
On-Line Dependability Enhancement of Multiprocessor SoCs by Resource Management
This paper describes a new approach towards dependable design of homogeneous multi-processor SoCs in an example satellite-navigation application. First, the NoC dependability is functionally verified via embedded software. Then the Xentium processor tiles are periodically verified via on-line self-testing techniques, using a new IIP Dependability Manager. Based on the Dependability Manager results, faulty tiles are electronically excluded and replaced by fault-free spare tiles via on-line resource management. This integrated approach enables fast electronic fault detection, diagnosis, and repair, and hence high system availability. The dependability application runs in parallel with the actual application, resulting in a very dependable system. All parts have been verified by simulation.
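The test-and-repair loop described above can be sketched abstractly: periodically self-test active tiles and swap any faulty one for a fault-free spare. This toy model only illustrates the resource-management logic; the class and field names are hypothetical, and the actual Xentium self-test and electronic exclusion mechanisms are not modeled.

```python
def self_test(tile):
    # Placeholder for the on-line self-test; here a tile dict simply
    # carries a simulated 'healthy' flag.
    return tile["healthy"]

class DependabilityManager:
    """Toy model of on-line resource management: periodically self-test
    active tiles and replace each faulty one with a fault-free spare."""

    def __init__(self, active, spares):
        self.active = active
        self.spares = spares

    def test_and_repair(self):
        """Run one test cycle; return (faulty_id, spare_id) pairs repaired."""
        repaired = []
        for i, tile in enumerate(self.active):
            if not self_test(tile):
                # Find the first fault-free spare and swap it in.
                while self.spares:
                    spare = self.spares.pop(0)
                    if self_test(spare):
                        self.active[i] = spare
                        repaired.append((tile["id"], spare["id"]))
                        break
        return repaired

mgr = DependabilityManager(
    active=[{"id": "T0", "healthy": True}, {"id": "T1", "healthy": False}],
    spares=[{"id": "S0", "healthy": True}],
)
repairs = mgr.test_and_repair()
```

Because the manager runs alongside the actual application, repair happens on-line, without taking the system down.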
Network emulation focusing on QoS-Oriented satellite communication
This chapter proposes network emulation basics and a complete case study of QoS-oriented Satellite Communication
Teaching Concurrent Software Design: A Case Study Using Android
In this article, we explore various parallel and distributed computing topics
from a user-centric software engineering perspective. Specifically, in the
context of mobile application development, we study the basic building blocks
of interactive applications in the form of events, timers, and asynchronous
activities, along with related software modeling, architecture, and design
topics.
Comment: Submitted to CDER NSF/IEEE-TCPP Curriculum Initiative on Parallel and
Distributed Computing - Core Topics for Undergraduates
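The article targets Android, but the same building blocks it names — timers and asynchronous activities coordinating through an event loop — can be sketched in Python's asyncio. The scenario below (a periodic UI-style timer running concurrently with a background download) is an illustrative assumption, not an example from the article.

```python
import asyncio

async def periodic_timer(interval, ticks, results):
    """Timer building block: fire at a fixed interval, like a UI timer callback."""
    for i in range(ticks):
        await asyncio.sleep(interval)
        results.append(f"tick {i}")

async def background_task(results):
    """Asynchronous activity: long-running work kept off the 'UI' coroutine."""
    await asyncio.sleep(0.005)  # stand-in for a network fetch
    results.append("download done")

async def main():
    results = []
    # Both activities interleave on one event loop, the way event-driven
    # mobile apps interleave timers and async work on the main thread.
    await asyncio.gather(
        periodic_timer(0.002, 3, results),
        background_task(results),
    )
    return results

log = asyncio.run(main())
```

The key teaching point carries over directly: neither activity blocks the other, yet both run in a single thread of control.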
When should I use network emulation?
The design and development of a complex system require an adequate methodology and efficient instrumental support in order to detect and correct anomalies early in the functional and non-functional properties of the tested protocols. Among the various tools used to provide experimental support for such developments, network emulation relies on real-time production of impairments on real traffic according to a communication model, whether realistic or not. This paper aims to present to newcomers in network emulation (students, engineers, ...) the basic principles and practices, illustrated with a few commonly used tools. The motivation is to fill a gap in terms of introductory and pragmatic papers in this domain. The study particularly considers centralized approaches, which allow cheap and easy implementation in the context of research labs or industrial developments. In addition, an architectural model for emulation systems is proposed, defining three complementary levels, namely the hardware, impairment, and model levels. With the help of this architectural framework, various existing tools are situated and described. Various approaches for modeling the emulation actions are studied, such as impairment-based scenarios and virtual architectures, real-time discrete simulation, and trace-based systems. These modeling approaches are described and compared in terms of services, and we study their ability to respond to various designer needs in order to assess when emulation is needed.
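The impairment level of the proposed architecture — applying delay, jitter, and loss to real traffic according to a model — can be sketched as a toy emulator over a recorded packet stream. This is an assumed illustration of the principle only, not one of the tools surveyed in the paper, and all names and default parameter values are invented.

```python
import random

def emulate_link(packets, delay_ms=50.0, jitter_ms=10.0, loss_rate=0.1, rng=None):
    """Impairment-level emulation sketch: apply delay, jitter, and loss
    to a stream of (send_time_ms, payload) packets, as a centralized
    emulator would on real traffic. Returns (arrival_time_ms, payload)
    pairs sorted by arrival time."""
    rng = rng or random.Random(42)  # seeded for reproducible experiments
    delivered = []
    for send_time, payload in packets:
        if rng.random() < loss_rate:
            continue  # packet dropped by the loss model
        delay = delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        delivered.append((send_time + delay, payload))
    # Jitter larger than the inter-packet gap can reorder packets,
    # so sort by arrival time as a real receiver would observe them.
    delivered.sort()
    return delivered

pkts = [(i * 10.0, f"p{i}") for i in range(5)]
out = emulate_link(pkts, delay_ms=50.0, jitter_ms=0.0, loss_rate=0.0)
```

In practice this role is played by dedicated impairment tools operating on live traffic; the sketch only shows where such a component sits in the three-level model.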
DFT and BIST of a multichip module for high-energy physics experiments
Engineers at Politecnico di Torino designed a multichip module for high-energy physics experiments conducted on the Large Hadron Collider. An array of these MCMs handles multichannel data acquisition and signal processing. Testing the MCM from board to die level required a combination of DFT strategies.
Efficient transfer entropy analysis of non-stationary neural time series
Information theory allows us to investigate information processing in neural
systems in terms of information transfer, storage and modification. Especially
the measure of information transfer, transfer entropy, has seen a dramatic
surge of interest in neuroscience. Estimating transfer entropy from two
processes requires the observation of multiple realizations of these processes
to estimate associated probability density functions. To obtain these
observations, available estimators assume stationarity of processes to allow
pooling of observations over time. This assumption, however, is a major obstacle
to applying these estimators in neuroscience, as observed processes
are often non-stationary. As a solution, Gomez-Herrero and colleagues
theoretically showed that the stationarity assumption may be avoided by
estimating transfer entropy from an ensemble of realizations. Such an ensemble
is often readily available in neuroscience experiments in the form of
experimental trials. Thus, in this work we combine the ensemble method with a
recently proposed transfer entropy estimator to make transfer entropy
estimation applicable to non-stationary time series. We present an efficient
implementation of the approach that deals with the increased computational
demand of the ensemble method's practical application. In particular, we use a
massively parallel implementation for a graphics processing unit to handle the
computationally most heavy aspects of the ensemble method. We test the
performance and robustness of our implementation on data from simulated
stochastic processes and demonstrate the method's applicability to
magnetoencephalographic data. While we mainly evaluate the proposed method for
neuroscientific data, we expect it to be applicable in a variety of fields that
are concerned with the analysis of information transfer in complex biological,
social, and artificial systems.
Comment: 27 pages, 7 figures, submitted to PLOS ONE
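The ensemble idea — pooling realizations over trials at a fixed time index, rather than over time — can be sketched with a simple plug-in estimator on discretized data. This toy illustrates only the pooling principle, not the paper's GPU estimator (which works on continuous data); the function name, the use of binary symbols, and the single-sample history are all assumptions.

```python
from collections import Counter
from math import log2

def transfer_entropy_ensemble(trials_x, trials_y, t):
    """Estimate TE(X -> Y) at time index t by pooling realizations
    across trials (the ensemble method) instead of across time, so no
    stationarity over time is assumed. Plug-in estimator on discrete
    (e.g. binned) data with one-sample histories."""
    joint = Counter()  # counts of (y_next, y_now, x_now) across trials
    for x, y in zip(trials_x, trials_y):
        joint[(y[t + 1], y[t], x[t])] += 1
    n = sum(joint.values())
    # Marginal counts needed for the two conditional probabilities.
    c_yx = Counter()  # (y_now, x_now)
    c_yy = Counter()  # (y_next, y_now)
    c_y = Counter()   # (y_now,)
    for (yn, yc, xc), cnt in joint.items():
        c_yx[(yc, xc)] += cnt
        c_yy[(yn, yc)] += cnt
        c_y[yc] += cnt
    # TE = sum p(yn, yc, xc) * log2[ p(yn | yc, xc) / p(yn | yc) ]
    te = 0.0
    for (yn, yc, xc), cnt in joint.items():
        p_joint = cnt / n
        p_cond_full = cnt / c_yx[(yc, xc)]
        p_cond_hist = c_yy[(yn, yc)] / c_y[yc]
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te
```

When Y copies X with one sample of lag and X is balanced binary, the estimate approaches one bit; for a constant Y it is zero. The paper's contribution is making this trial-pooling scheme computationally feasible for continuous-valued estimators via a GPU implementation.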