Ultrafast processing of pixel detector data with machine learning frameworks
Modern photon science performed at high repetition rate free-electron laser
(FEL) facilities and beyond relies on 2D pixel detectors operating at
increasing frequencies (towards 100 kHz at LCLS-II) and producing rapidly
increasing amounts of data (towards TB/s). This data must be rapidly stored for
offline analysis and summarized in real time. While at LCLS all raw data has
been stored, at LCLS-II this would lead to a prohibitive cost; instead,
enabling real time processing of pixel detector raw data allows reducing the
size and cost of online processing, offline processing and storage by orders of
magnitude while preserving full photon information, by taking advantage of the
compressibility of sparse data typical for LCLS-II applications. We
investigated if recent developments in machine learning are useful in data
processing for high speed pixel detectors and found that typical deep learning
models and autoencoder architectures failed to yield useful noise reduction
while preserving full photon information, presumably because of the very
different statistics and feature sets between computer vision and radiation
imaging. However, we redesigned in Tensorflow mathematically equivalent
versions of the state-of-the-art, "classical" algorithms used at LCLS. The
novel Tensorflow models resulted in elegant, compact and hardware agnostic
code, gaining 1 to 2 orders of magnitude faster processing on an inexpensive
consumer GPU, reducing by 3 orders of magnitude the projected cost of online
analysis at LCLS-II. Computer vision a decade ago was dominated by hand-crafted
filters; their structure inspired the deep learning revolution resulting in
modern deep convolutional networks; similarly, our novel Tensorflow filters
provide inspiration for designing future deep learning architectures for
ultrafast and efficient processing and classification of pixel detector images
at FEL facilities.
Comment: 9 pages, 9 figures
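The abstract's key move is expressing the "classical" LCLS corrections (pedestal subtraction plus photon thresholding) as whole-array tensor operations. A minimal sketch of that idea, using NumPy as a stand-in for the paper's TensorFlow implementation, with all pixel values, thresholds, and occupancies invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw detector frames: pedestal offset + Gaussian noise + sparse photons.
# (All numbers here are illustrative, not LCLS calibration values.)
pedestal = 100.0 * np.ones((64, 64))
noise = rng.normal(0.0, 3.0, size=(8, 64, 64))
photons = (rng.random((8, 64, 64)) < 0.001) * 150.0  # ~0.1% occupied pixels
raw = pedestal + noise + photons

def classical_correction(frames, pedestal, threshold_adu=50.0):
    """Pedestal subtraction followed by photon thresholding, expressed as
    whole-array operations so the same code maps onto CPU, GPU, or an ML
    framework's tensor ops without per-pixel loops."""
    corrected = frames - pedestal          # pedestal subtraction
    mask = corrected > threshold_adu       # keep only photon candidates
    return np.where(mask, corrected, 0.0)  # sparse output, compresses well

sparse = classical_correction(raw, pedestal)
occupancy = np.count_nonzero(sparse) / sparse.size  # fraction of nonzero pixels
```

Because every step is a vectorized tensor op, porting this to TensorFlow (or any array framework with a GPU backend) is mechanical, which is the hardware-agnostic property the abstract emphasizes.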
Design and implementation of a multi-octave-band audio camera for realtime diagnosis
Noise pollution investigation takes advantage of two common methods of
diagnosis: measurement using a Sound Level Meter and acoustical imaging. The
former enables a detailed analysis of the surrounding noise spectrum whereas
the latter is rather used for source localization. The two approaches complement
each other, and merging them into a single system working in realtime would
offer new possibilities of dynamic diagnosis. This paper describes the design
of a complete system for this purpose: imaging in realtime the acoustic field
at different octave bands, with a convenient device. The acoustic field is
sampled in time and space using an array of MEMS microphones. This recent
technology enables a compact and fully digital design of the system. However,
performing realtime imaging with a resource-intensive algorithm on a large
amount of measured data poses a technical challenge. This is overcome by
executing the whole process on a Graphics Processing Unit (GPU), which has
recently become an attractive device for parallel computing.
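The core algorithm behind an acoustic camera is beamforming: steer the array toward candidate directions and measure output power. A minimal narrowband delay-and-sum sketch for a linear microphone array (array geometry, frequency, and angles are all invented for illustration; a real multi-octave-band system would repeat this per band):

```python
import numpy as np

c = 343.0                            # speed of sound (m/s)
f = 2000.0                           # one narrowband frequency within an octave band (Hz)
mics = np.linspace(-0.15, 0.15, 8)   # 8-mic linear array, positions in metres

def steering_vector(angle_rad):
    """Phase delays of a plane wave from `angle_rad` across the array."""
    delays = mics * np.sin(angle_rad) / c
    return np.exp(-2j * np.pi * f * delays)

# Simulate one frequency-domain snapshot: a plane wave arriving from 20 degrees.
src_angle = np.deg2rad(20.0)
snapshot = steering_vector(src_angle)

# Delay-and-sum: scan candidate angles, coherently sum, take output power.
angles = np.deg2rad(np.linspace(-90, 90, 181))
power = np.array([np.abs(np.vdot(steering_vector(a), snapshot)) ** 2
                  for a in angles])
est_deg = np.rad2deg(angles[np.argmax(power)])  # peak locates the source
```

The scan over candidate angles is embarrassingly parallel (one independent inner product per direction per frequency band), which is why the whole pipeline maps well onto a GPU.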
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems which implement
more biological-like models of neurons and synapses together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed
neuromorphic computing platforms and systems
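The neuron models the survey refers to co-locate memory (membrane state) and processing (spike generation) in one unit. A minimal leaky integrate-and-fire sketch, the canonical model implemented by many of the surveyed processors; all time constants and currents are illustrative, not taken from any particular chip:

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron. The membrane potential is local
    state (memory) updated by the same element that decides when to spike
    (processing) -- the co-location the survey contrasts with von Neumann
    architectures. Returns the indices of time steps where spikes occur."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        v += dt / tau * (v_rest - v + current)  # leaky integration
        if v >= v_thresh:
            spikes.append(i)
            v = v_reset                          # reset after a spike
    return spikes

# Constant suprathreshold drive produces regular spiking.
spikes = lif_simulate(np.full(200, 2.0))
```

In a neuromorphic processor this update runs in parallel in every neuron circuit (analog or digital), with no shared memory bus between "storage" and "compute".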
High-resolution distributed sampling of bandlimited fields with low-precision sensors
The problem of sampling a discrete-time sequence of spatially bandlimited
fields with a bounded dynamic range, in a distributed,
communication-constrained, processing environment is addressed. A central unit,
having access to the data gathered by a dense network of fixed-precision
sensors, operating under stringent inter-node communication constraints, is
required to reconstruct the field snapshots to maximum accuracy. Both
deterministic and stochastic field models are considered. For stochastic
fields, results are established in the almost-sure sense. The feasibility of
having a flexible tradeoff between the oversampling rate (sensor density) and
the analog-to-digital converter (ADC) precision, while achieving an exponential
accuracy in the number of bits per Nyquist-interval per snapshot is
demonstrated. This exposes an underlying "conservation of bits" principle:
the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed
along the amplitude axis (sensor-precision) and space (sensor density) in an
almost arbitrary discrete-valued manner, while retaining the same (exponential)
distortion-rate characteristics. Achievable information scaling laws for field
reconstruction over a bounded region are also derived, characterizing how the
maximum pointwise distortion decays to zero with the number of one-bit sensors
per Nyquist-interval, the number of Nyquist-intervals, and the total network
bitrate (equivalently, the per-sensor bitrate). This is shown to be possible
with only nearest-neighbor communication, distributed coding, and appropriate
interpolation algorithms. For a fixed, nonzero target distortion, the number of
fixed-precision sensors and the network rate needed are always finite.
Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal
Processing and re-submitted to the IEEE Transactions on Information Theory
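The density-vs-precision tradeoff at the heart of this abstract can be seen in a toy simulation: many one-bit sensors with independent dither can jointly estimate an amplitude to high accuracy. This is only a sketch of the general principle, not the paper's coding scheme (which uses distributed coding and interpolation); the dither model and sensor counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_bit_estimate(x, n_sensors):
    """Each low-precision sensor reports a single bit: sign(x + dither),
    with dither uniform on [-1, 1]. Since P(bit = 1) = (1 + x) / 2, the
    average of the bits gives an unbiased estimate of x, and accuracy
    improves with sensor density -- trading amplitude resolution for
    spatial oversampling."""
    dither = rng.uniform(-1.0, 1.0, size=n_sensors)
    bits = (x + dither > 0).astype(float)   # 1-bit measurements
    return 2.0 * bits.mean() - 1.0          # unbiased estimate of x

x = 0.3                                      # true amplitude, |x| < 1
err_few = abs(one_bit_estimate(x, 16) - x)       # sparse array: coarse
err_many = abs(one_bit_estimate(x, 16384) - x)   # dense array: accurate
```

Plain averaging only improves as the square root of the sensor count; the exponential distortion-rate behavior claimed in the abstract requires the paper's distributed coding across neighboring sensors, not this naive estimator.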
Three Realizations and Comparison of Hardware for Piezoresistive Tactile Sensors
Tactile sensors are basically arrays of force sensors that are intended to emulate the skin in applications such as assistive robotics. Local electronics are usually implemented to reduce errors and interference caused by long wires. Realizations based on standard microcontrollers, Programmable Systems on Chip (PSoCs) and Field Programmable Gate Arrays (FPGAs) have been proposed by the authors for the case of piezoresistive tactile sensors. The solution employing FPGAs is especially relevant since their performance is closer to that of Application Specific Integrated Circuits (ASICs) than that of the other devices. This paper presents an implementation of such an idea for a specific sensor. For the purpose of comparison, the circuitry based on the other devices is also made for the same sensor. This paper discusses the implementation issues, provides details regarding the design of the hardware based on the three devices, and compares them.
This work has been partially funded by the Spanish Government under contracts TEC2006-12376 and TEC2009-14446
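A piezoresistive taxel is read out as a resistance change, typically by addressing the array row by row through a voltage divider. A minimal idealized model of that scan loop (drive voltage, reference resistor, and taxel resistances are invented for illustration, and row/column crosstalk is ignored):

```python
import numpy as np

V_DRIVE = 3.3   # row drive voltage (V), illustrative
R_REF = 10e3    # reference resistor on each column (ohm), illustrative

def scan_array(r_sensor):
    """Row-by-row scan of a piezoresistive array: drive one row, read each
    column voltage through a divider formed by R_REF and the taxel
    resistance. This is the kind of acquisition the microcontroller, PSoC,
    and FPGA front ends in the paper each implement in their own way."""
    rows, cols = r_sensor.shape
    v_out = np.empty_like(r_sensor)
    for r in range(rows):
        # Column voltage for the addressed row: V = V_DRIVE * R_REF / (R_REF + R)
        v_out[r] = V_DRIVE * R_REF / (R_REF + r_sensor[r])
    return v_out

# Pressing a taxel lowers its resistance, raising the read-out voltage.
r = np.full((4, 4), 50e3)   # unloaded taxels: 50 kOhm (illustrative)
r[2, 1] = 5e3               # one pressed taxel
v = scan_array(r)
```

The per-column divider readouts within a row are independent, which is what an FPGA exploits: it samples all columns concurrently instead of multiplexing them through one ADC channel as a microcontroller typically must.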
MarinEye - A tool for marine monitoring
This work presents an autonomous system for integrated physical-chemical and biological marine monitoring: the MarinEye system. It comprises a set of sensors providing diverse and relevant information for oceanic environment characterization and marine biology studies. It consists of a physical-chemical water properties sensor suite, a water filtration and sampling system for DNA collection, a plankton imaging system, and an acoustic biomass assessment system. The MarinEye system has onboard computational and logging capabilities, allowing either autonomous operation or integration into other marine observing systems (such as observatories or robotic vehicles). It was designed to collect integrated multi-trophic monitoring data. Validation in an operational environment at three marine observatories on the coast of Portugal (RAIA, BerlengasWatch, and Cascais) is also discussed.
Real-time refocusing using an FPGA-based standard plenoptic camera
Plenoptic cameras are receiving increased attention in scientific and commercial applications because they capture the entire structure of light in a scene, enabling optical transforms (such as focusing) to be applied computationally after the fact, rather than once and for all at the time a picture is taken. In many settings, real-time interactive performance is also desired, which in turn requires significant computational power due to the large amount of data required to represent a plenoptic image. Although GPUs have been shown to provide acceptable performance for real-time plenoptic rendering, their cost and power requirements make them prohibitive for embedded uses (such as in-camera). On the other hand, the computation to accomplish plenoptic rendering is well structured, suggesting the use of specialized hardware. Accordingly, this paper presents an array of switch-driven finite impulse response filters, implemented on an FPGA, to accomplish high-throughput spatial-domain rendering. The proposed architecture provides a power-efficient rendering hardware design suitable for full-video applications as required in broadcasting or cinematography. A benchmark assessment of the proposed hardware implementation shows that real-time performance can readily be achieved, with a one-order-of-magnitude performance improvement over a GPU implementation and a three-orders-of-magnitude performance improvement over a general-purpose CPU implementation.
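Spatial-domain refocusing reduces to shift-and-sum: each sub-aperture view is translated in proportion to its position on the aperture, then the views are averaged, so the per-view shifts behave like taps of an FIR filter, which is what makes a fixed-function FPGA pipeline feasible. A toy sketch with an invented 3x3 light field and integer-pixel shifts (a real renderer would use fractional shifts via interpolation):

```python
import numpy as np

def refocus(subaperture_views, offsets, alpha):
    """Shift each sub-aperture view by alpha times its aperture offset,
    then average. Varying alpha moves the synthetic focal plane; the
    shifted-view sum is structurally an FIR filter over the views."""
    acc = np.zeros_like(subaperture_views[0], dtype=float)
    for view, (du, dv) in zip(subaperture_views, offsets):
        shifted = np.roll(view, (round(alpha * du), round(alpha * dv)),
                          axis=(0, 1))
        acc += shifted
    return acc / len(subaperture_views)

# Toy light field: a point source whose image shifts by 1 px per unit of
# aperture offset (disparity = 1), as seen from a 3x3 grid of views.
offsets = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
views = []
for du, dv in offsets:
    img = np.zeros((16, 16))
    img[8 + du, 8 + dv] = 1.0
    views.append(img)

# Choosing alpha to cancel the disparity brings the point into sharp focus.
in_focus = refocus(views, offsets, alpha=-1.0)
```

With `alpha = -1.0` the nine copies of the point land on the same pixel and sum to full intensity; any other `alpha` spreads the energy across neighboring pixels, producing the characteristic refocus blur.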