Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Signals composed of a stream of short pulses appear in many applications,
including bio-imaging and radar. The recent finite rate of innovation
framework has paved the way to low-rate sampling of such pulses by noticing
that only a small number of parameters per unit time are needed to fully
describe these signals. Unfortunately, for high rates of innovation, existing
sampling schemes are numerically unstable. In this paper we propose a general
sampling approach which leads to stable recovery even in the presence of many
pulses. We begin by deriving a condition on the sampling kernel which allows
perfect reconstruction of periodic streams from the minimal number of samples.
We then design a class of compactly supported filters satisfying this
condition. The periodic solution is extended to finite and infinite streams,
and is shown to be numerically stable even for a large number of pulses. High
noise robustness is also demonstrated when the delays are sufficiently
separated. Finally, we process ultrasound imaging data using our techniques,
and show that substantial rate reduction with respect to traditional ultrasound
sampling schemes can be achieved.
Comment: 14 pages, 13 figures
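For readers new to the framework, the "small number of parameters per unit
time" observation can be made concrete with the classical annihilating-filter
recovery at the heart of FRI theory. The sketch below is a minimal
illustration of that principle only, not the compactly supported kernels
proposed in the paper, and all variable names are illustrative: it recovers
the delays and amplitudes of a tau-periodic stream of K Diracs from the
minimal 2K+1 Fourier coefficients.

```python
import numpy as np

tau, K = 1.0, 3                                # period and number of pulses
rng = np.random.default_rng(0)
t_true = np.sort(rng.uniform(0.0, tau, K))     # unknown delays
a_true = rng.uniform(0.5, 1.5, K)              # unknown amplitudes

# 2K+1 Fourier coefficients fully determine the stream:
# X[m] = (1/tau) * sum_k a_k * exp(-2j*pi*m*t_k/tau), m = -K..K
m = np.arange(-K, K + 1)
u = np.exp(-2j * np.pi * t_true / tau)
X = (u[None, :] ** m[:, None]) @ a_true / tau

# Annihilating filter: h of length K+1 with (h * X)[m] = 0; its null
# vector is found from a (K+1)x(K+1) Toeplitz system built from X.
A = np.array([[X[mm - i + K] for i in range(K + 1)] for mm in range(K + 1)])
h = np.linalg.svd(A)[2][-1].conj()             # right null vector of A

# Delays come from the roots u_k = exp(-2j*pi*t_k/tau) of the filter.
t_est = np.sort(np.mod(-np.angle(np.roots(h)) * tau / (2 * np.pi), tau))

# Amplitudes follow by least squares on the Vandermonde system.
V = np.exp(-2j * np.pi * np.outer(m, t_est) / tau) / tau
a_est = np.linalg.lstsq(V, X, rcond=None)[0].real

print(np.allclose(t_est, t_true), np.allclose(a_est, a_true))  # True True
```

In practice the Fourier coefficients are obtained from low-rate samples of
the filtered signal, which is where the choice of sampling kernel, the
paper's focus, enters.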
Search for gravitational-wave bursts in LIGO data from the fourth science run
The fourth science run of the LIGO and GEO 600 gravitational-wave detectors,
carried out in early 2005, collected data with significantly lower noise than
previous science runs. We report on a search for short-duration
gravitational-wave bursts with arbitrary waveform in the 64-1600 Hz frequency
range appearing in all three LIGO interferometers. Signal consistency tests,
data quality cuts, and auxiliary-channel vetoes are applied to reduce the rate
of spurious triggers. No gravitational-wave signals are detected in 15.5 days
of live observation time; we set a frequentist upper limit of 0.15 per day (at
90% confidence level) on the rate of bursts with large enough amplitudes to be
detected reliably. The amplitude sensitivity of the search, characterized using
Monte Carlo simulations, is several times better than that of previous
searches. We also provide rough estimates of the distances at which
representative supernova and binary black hole merger signals could be detected
with 50% efficiency by this analysis.
Comment: Corrected amplitude sensitivities (7% change on average); 30 pages,
submitted to Classical and Quantum Gravity
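For intuition, the quoted limit matches the standard zero-event frequentist
Poisson construction for 15.5 days of live time. The snippet below reproduces
the arithmetic under that assumption; the paper's actual procedure, which
folds in detection efficiency, may differ in detail.

```python
import math

# With zero events observed in T days of live time, the rate R excluded at
# confidence level CL satisfies P(0 events | R*T) = exp(-R*T) = 1 - CL,
# hence R = -ln(1 - CL) / T. (Assumes the standard zero-event Poisson
# construction; efficiency corrections are ignored here.)
T, CL = 15.5, 0.90
R_upper = -math.log(1.0 - CL) / T
print(f"90% upper limit: {R_upper:.2f} bursts/day")  # ~0.15, as quoted
```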
Coherent network analysis technique for discriminating gravitational-wave bursts from instrumental noise
Existing coherent network analysis techniques for detecting
gravitational-wave bursts simultaneously test data from multiple observatories
for consistency with the expected properties of the signals. These techniques
assume the output of the detector network to be the sum of a stationary
Gaussian noise process and a gravitational-wave signal, and they may fail in
the presence of transient non-stationarities, which are common in real
detectors. In order to address this problem we introduce a consistency test
that is robust against noise non-stationarities and allows one to distinguish
between gravitational-wave bursts and noise transients. This technique does not
require any a priori knowledge of the putative burst waveform.
Comment: 18 pages, 11 figures; corrected corrupted figure
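The abstract does not spell out the test itself, so the sketch below is only
a generic illustration of the idea behind coherent consistency checks, not
the paper's statistic: for two co-aligned detectors, a genuine
gravitational-wave burst appears identically in both data streams and cancels
in their difference (the "null stream"), while an instrumental transient in a
single detector leaves excess null energy. All waveform and noise parameters
are invented.

```python
import numpy as np

def null_energy_ratio(x1, x2):
    """Null-stream energy over total energy: near 0 when both streams carry
    the same (astrophysical) signal, near 1 for an uncorrelated transient."""
    null = x1 - x2
    return np.sum(null**2) / (np.sum(x1**2) + np.sum(x2**2))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)
# Invented sine-Gaussian burst, a common stand-in for unmodeled signals.
burst = 50 * np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 300 * t)

# Case 1: the burst appears coherently in both co-aligned detectors.
x1 = burst + rng.normal(size=t.size)
x2 = burst + rng.normal(size=t.size)

# Case 2: a transient of the same shape appears in one detector only.
g1 = burst + rng.normal(size=t.size)
g2 = rng.normal(size=t.size)

print(null_energy_ratio(x1, x2))  # small: consistent with a real burst
print(null_energy_ratio(g1, g2))  # near 1: flagged as a noise transient
```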
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions about the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a
large number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation formats, symbol rates, and coding schemes) enabled
by the use of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
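To give a concrete flavor of one application class such surveys cover, the
sketch below trains a hypothetical quality-of-transmission (QoT) classifier
that predicts, before a lightpath is provisioned, whether a candidate
configuration will meet a performance threshold. All features, labels, and
the rule generating them are synthetic and purely illustrative, not drawn
from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
length_km  = rng.uniform(50, 2000, n)        # total lightpath length
num_spans  = (length_km / 80).astype(int)    # ~80 km amplifier spans
mod_order  = rng.choice([2, 4, 16, 64], n)   # BPSK .. 64-QAM
symbol_gbd = rng.choice([32, 64], n)         # symbol rate, GBd

# Synthetic ground truth: longer paths, denser constellations, and higher
# symbol rates erode the performance margin.
margin = (30 - 0.01 * length_km - 2.5 * np.log2(mod_order)
          - 0.05 * symbol_gbd + rng.normal(0, 1, n))
ok = (margin > 0).astype(int)

X = np.column_stack([length_km, num_spans, mod_order, symbol_gbd])
X_tr, X_te, y_tr, y_te = train_test_split(X, ok, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```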