Towards low-latency real-time detection of gravitational waves from compact binary coalescences in the era of advanced detectors
Electromagnetic (EM) follow-up observations of gravitational wave (GW) events
will help shed light on the nature of the sources, and more can be learned if
the EM follow-ups can start as soon as the GW event becomes observable. In this
paper, we propose a computationally efficient time-domain algorithm capable of
detecting gravitational waves (GWs) from coalescing binaries of compact objects
with nearly zero time delay. When the signal is strong enough, our
algorithm can also trigger EM observations before the merger.
The key to the efficiency of our algorithm arises from the use of chains of
so-called Infinite Impulse Response (IIR) filters, which filter time-series
data recursively. Computational cost is further reduced by a template
interpolation technique that requires filtering to be done only for a much
coarser template bank than would otherwise be required to recover near-optimal
signal-to-noise ratios. For future detectors with sensitivity extending to
lower frequencies, our algorithm's computational cost is shown to increase
only modestly compared to the conventional time-domain correlation
method. Moreover, at latencies below hundreds to thousands of seconds,
this method is expected to be computationally more efficient than the
straightforward frequency-domain method.

Comment: 19 pages, 6 figures, for PR
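The recursive filtering idea in the abstract above can be illustrated with a minimal sketch: a chirp template is approximated by a bank of damped sinusoids, each realized as a single first-order complex IIR filter y[n] = a·y[n-1] + x[n], so each sample costs O(1) work per filter. All function names, parameters, and the single-pole form below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def iir_chain_snr(x, omegas, taus, dt):
    """Approximate time-domain matched filtering with a chain of
    first-order complex IIR filters (one per template segment).

    Each filter y[n] = a*y[n-1] + x[n], with a = exp((-1/tau + 1j*omega)*dt),
    tracks one quasi-monochromatic piece of the template; summing the
    filter outputs approximates correlation with the full template.
    """
    total = np.zeros(len(x), dtype=complex)
    for omega, tau in zip(omegas, taus):
        a = np.exp((-1.0 / tau + 1j * omega) * dt)  # pole of this filter
        y = 0.0 + 0.0j
        out = np.empty(len(x), dtype=complex)
        for n, xn in enumerate(x):
            y = a * y + xn          # single recursive update: O(1) per sample
            out[n] = y
        total += out
    return np.abs(total)            # magnitude plays the role of an SNR series
```

Because each filter keeps only one complex state variable, the output is available essentially as soon as each new sample arrives, which is what permits the near-zero-latency operation described above.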
Cross-Scale Cost Aggregation for Stereo Matching
Human beings process stereoscopic correspondence across multiple scales.
However, this bio-inspiration is ignored by state-of-the-art cost aggregation
methods for dense stereo correspondence. In this paper, a generic cross-scale
cost aggregation framework is proposed to allow multi-scale interaction in cost
aggregation. We first reformulate cost aggregation from a unified
optimization perspective and show that different cost aggregation methods
essentially differ in their choices of similarity kernels. Then, an inter-scale
regularizer is introduced into the optimization, and solving this new optimization
problem leads to the proposed framework. Since the regularization term is
independent of the similarity kernel, various cost aggregation methods can be
integrated into the proposed general framework. We show that the cross-scale
framework is important as it effectively and efficiently expands
state-of-the-art cost aggregation methods and leads to significant
improvements when evaluated on the Middlebury, KITTI, and New Tsukuba datasets.

Comment: To appear in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2014 (poster, 29.88%
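A minimal sketch of the inter-scale regularization idea described above: if c_s is the per-scale aggregated cost at scale s, minimizing sum_s ||z_s − c_s||² + λ·sum_s ||z_s − z_{s−1}||² couples the scales through a tridiagonal linear system along the scale axis. The quadratic form and the dense solver below are illustrative assumptions, not the paper's exact formulation or solver.

```python
import numpy as np

def cross_scale_aggregate(costs, lam):
    """Couple per-scale aggregated matching costs with an inter-scale
    regularizer of strength lam (illustrative sketch).

    Minimizing  sum_s ||z_s - c_s||^2 + lam * sum_s ||z_s - z_{s-1}||^2
    yields a tridiagonal system across the scale axis, solved here
    independently for every cost-volume entry.
    """
    costs = np.asarray(costs, dtype=float)   # shape (S, ...) stacked by scale
    S = costs.shape[0]
    A = np.zeros((S, S))                     # tridiagonal coupling matrix
    for s in range(S):
        A[s, s] = 1.0
        if s > 0:
            A[s, s] += lam
            A[s, s - 1] = -lam
        if s < S - 1:
            A[s, s] += lam
            A[s, s + 1] = -lam
    flat = costs.reshape(S, -1)
    z = np.linalg.solve(A, flat).reshape(costs.shape)
    return z
```

Note the key property claimed in the abstract: the coupling matrix A is independent of how each c_s was produced, so any single-scale aggregation method can be plugged in to generate the per-scale costs.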
Distributed Hybrid Simulation of the Internet of Things and Smart Territories
This paper deals with the use of hybrid simulation to build and compose
heterogeneous simulation scenarios that can be proficiently exploited to model
and represent the Internet of Things (IoT). Hybrid simulation is a methodology
that combines multiple modalities of modeling/simulation. Complex scenarios are
decomposed into simpler ones, each one being simulated through a specific
simulation strategy. All these simulation building blocks are then synchronized
and coordinated. This methodology is well suited to representing IoT
setups, which are usually very demanding due to the heterogeneity of possible
scenarios arising from the massive deployment of an enormous number of sensors
and devices. We present a use case concerned with the distributed simulation of
smart territories, a novel view of decentralized geographical spaces that,
thanks to the use of IoT, builds ICT services to manage resources in a way that
is sustainable and not harmful to the environment. Three different simulation
models are combined: an adaptive agent-based parallel and
distributed simulator, an OMNeT++-based discrete-event simulator, and a
MATLAB-based script-language simulator. Results from a performance analysis
confirm the viability of using hybrid simulation to model complex IoT
scenarios.

Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
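The decompose-synchronize-coordinate pattern described above can be sketched in a few lines: each sub-simulator (standing in for the agent-based, OMNeT++, or MATLAB components) processes only the events up to an agreed time barrier, and a coordinator advances all of them in lockstep windows. This conservative-synchronization toy is an assumption for illustration, not the paper's actual architecture.

```python
import heapq

class SubSimulator:
    """Minimal discrete-event sub-simulator (illustrative stand-in for
    one heterogeneous simulation building block)."""
    def __init__(self, name):
        self.name = name
        self.now = 0.0
        self.events = []          # (timestamp, payload) min-heap
        self.log = []

    def schedule(self, t, payload):
        heapq.heappush(self.events, (t, payload))

    def advance_to(self, barrier):
        # Conservative synchronization: process only events up to the
        # agreed time barrier, then yield back to the coordinator.
        while self.events and self.events[0][0] <= barrier:
            t, payload = heapq.heappop(self.events)
            self.now = t
            self.log.append((self.name, t, payload))
        self.now = barrier

def run_hybrid(simulators, horizon, step):
    """Coordinator: advances all sub-simulators in lockstep time windows."""
    t = 0.0
    while t < horizon:
        t = min(t + step, horizon)
        for sim in simulators:
            sim.advance_to(t)
```

The window size `step` trades synchronization overhead against how stale one component's view of another can become, which is the central tuning knob in distributed hybrid simulation.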
Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline
In this paper, we show how absolute orientation measurements provided by
low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion
pipeline. We show that this integration improves the runtime, robustness, and
quality of the 3D reconstruction. In particular, we use this orientation data
to seed and regularize the ICP registration technique. We also present a
technique to filter the pairs of 3D matched points based on the distribution of
their distances. This filter is implemented efficiently on the GPU. Estimating
the distribution of the distances helps control the number of iterations
necessary for the convergence of the ICP algorithm. Finally, we show
experimental results that highlight improvements in robustness, a speed-up of
almost 12%, and a gain in tracking quality of 53% for the ATE metric on the
Freiburg benchmark.

Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications Based on Location Clues 201
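The pair-filtering step described above can be sketched as follows: compute the distance of every matched 3D point pair, estimate the distribution's mean and spread, and reject pairs that are outliers. The mean-plus-k-sigma threshold and the parameter k are assumptions for illustration; the paper implements its filter on the GPU.

```python
import numpy as np

def filter_pairs_by_distance(src, dst, k=2.0):
    """Reject matched 3D point pairs whose separation is an outlier
    of the pair-distance distribution (illustrative CPU sketch).

    src, dst: (N, 3) arrays of corresponding points.
    Returns the surviving pairs and the boolean keep mask.
    """
    d = np.linalg.norm(src - dst, axis=1)      # per-pair distances
    mu, sigma = d.mean(), d.std()              # distribution estimate
    keep = d <= mu + k * sigma                 # drop far-off correspondences
    return src[keep], dst[keep], keep
```

Estimating the distribution also gives a cheap convergence signal: as ICP converges, the distance distribution tightens, which is how this filter helps control the number of iterations.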
Sliding coherence window technique for hierarchical detection of continuous gravitational waves
A novel hierarchical search technique is presented for all-sky surveys for
continuous gravitational-wave sources, such as rapidly spinning nonaxisymmetric
neutron stars. Analyzing yearlong detector data sets over realistic ranges of
parameter space using fully coherent matched-filtering is computationally
prohibitive. Thus more efficient, so-called hierarchical techniques are
essential. Traditionally, the standard hierarchical approach consists of
dividing the data into nonoverlapping segments of which each is coherently
analyzed and subsequently the matched-filter outputs from all segments are
combined incoherently. The present work proposes to break the data into
subsegments shorter than the desired maximum coherence time span (size of the
coherence window). Then matched-filter outputs from the different subsegments
are efficiently combined by sliding the coherence window in time: subsegments
whose timestamps are closer together than the coherence-window size are combined
coherently, and incoherently otherwise. Compared to the standard scheme at the same coherence
time baseline, data sets longer by about 50-100% would have to be analyzed to
achieve the same search sensitivity as with the sliding coherence window
approach. Numerical simulations attest to the analytically estimated
improvement.

Comment: 11 pages, 4 figure
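The combination rule above can be sketched concretely: given complex matched-filter outputs F[j] from consecutive subsegments, subsegments falling inside the same window position are summed coherently (preserving phase), and the resulting powers from successive window positions are added incoherently. The statistic below is an illustrative simplification, not the paper's exact detection statistic.

```python
import numpy as np

def sliding_coherence_statistic(F, window):
    """Combine complex matched-filter outputs F[j] from consecutive
    subsegments with a sliding coherence window (illustrative sketch).

    Subsegments within one window position are summed coherently;
    the powers from different window positions add incoherently.
    """
    F = np.asarray(F, dtype=complex)
    N = len(F)
    stat = 0.0
    for j in range(N - window + 1):
        coherent = F[j:j + window].sum()       # coherent sum within window
        stat += np.abs(coherent) ** 2          # incoherent sum across positions
    return stat
```

With window = 1 this reduces to the standard nonoverlapping-segment scheme (a purely incoherent power sum), while larger windows reward signals whose phase stays coherent across neighboring subsegments, which is the source of the sensitivity gain quoted above.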