Analysis-by-Synthesis-based Quantization of Compressed Sensing Measurements
We consider a resource-constrained scenario in which a compressed sensing (CS) based sensor acquires a small number of measurements that are quantized at a low rate and then transmitted or stored. For this scenario, we develop a new quantizer design that aims to attain high-quality reconstruction of a sparse source signal based on an analysis-by-synthesis framework. Through simulations, we compare the performance of the proposed quantization algorithm with that of existing quantization methods.

Comment: 5 pages, published in ICASSP 201
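To make the analysis-by-synthesis idea concrete, here is a minimal sketch in Python. The setup is illustrative, not taken from the paper: a random vector-quantizer codebook over the measurement space, a ridge-regularized decoder standing in for a proper sparse-recovery algorithm, and an encoder assumed to know the source signal. The point is only that the encoder picks the codeword whose decoded reconstruction is best, rather than the codeword closest to the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_signal(n, k):
    """Random k-sparse source of length n."""
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    return x

def decode(y_hat, A, lam=0.1):
    """Toy decoder: ridge-regularized least squares, standing in for a
    sparse-recovery algorithm such as OMP or l1 minimization."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y_hat)

n, m, k, codebook_size = 32, 12, 3, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix
x = sparse_signal(n, k)
y = A @ x                                      # sensor's measurements

# Hypothetical random VQ codebook over the measurement space.
codebook = rng.standard_normal((codebook_size, m))

# Analysis-by-synthesis: pick the index whose decoded reconstruction is
# closest to the source (assumes the encoder can evaluate this error).
j_abs = min(range(codebook_size),
            key=lambda j: np.linalg.norm(x - decode(codebook[j], A)))

# Baseline: plain nearest-neighbour quantization in the measurement domain.
j_nn = int(np.argmin(np.linalg.norm(codebook - y, axis=1)))

for name, j in [("analysis-by-synthesis", j_abs), ("nearest-neighbour", j_nn)]:
    err = np.linalg.norm(x - decode(codebook[j], A))
    print(f"{name}: reconstruction error {err:.3f}")
```

By construction, the analysis-by-synthesis index can never yield a larger reconstruction error than the nearest-neighbour index under the same decoder, which is the intuition the abstract appeals to.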
Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image
We consider the problem of dense depth prediction from a sparse set of depth
measurements and a single RGB image. Since depth estimation from monocular
images alone is inherently ambiguous and unreliable, to attain a higher level
of robustness and accuracy, we introduce additional sparse depth samples, which
are either acquired with a low-resolution depth sensor or computed via visual
Simultaneous Localization and Mapping (SLAM) algorithms. We propose the use of
a single deep regression network to learn directly from the RGB-D raw data, and
explore the impact of the number of depth samples on prediction accuracy. Our
experiments show that, compared to using only RGB images, the addition of 100
spatially random depth samples reduces the prediction root-mean-square error by
50% on the NYU-Depth-v2 indoor dataset. It also boosts the percentage of
reliable prediction from 59% to 92% on the KITTI dataset. We demonstrate two
applications of the proposed algorithm: a plug-in module in SLAM to convert
sparse maps to dense maps, and super-resolution for LiDARs. Software and video
demonstration are publicly available.

Comment: accepted to ICRA 2018. 8 pages, 8 figures, 3 tables. Video at https://www.youtube.com/watch?v=vNIIT_M7x7Y. Code at https://github.com/fangchangma/sparse-to-dens
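As a hedged illustration of the interface described above (not the paper's actual architecture), the sketch below forms a 4-channel RGB-plus-sparse-depth input, runs it through a toy regression network, and computes the root-mean-square error used as the headline metric. The helper sample_sparse_depth, the class TinyDepthRegressor, and all tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def sample_sparse_depth(depth, num_samples):
    """Keep num_samples spatially random depth values, zero elsewhere,
    mimicking the sparse-depth input described in the abstract."""
    h, w = depth.shape
    flat = torch.zeros(h * w)
    idx = torch.randperm(h * w)[:num_samples]
    flat[idx] = depth.reshape(-1)[idx]
    return flat.reshape(1, 1, h, w)

class TinyDepthRegressor(nn.Module):
    """Toy stand-in for the paper's deep regression network: a 4-channel
    RGB-plus-sparse-depth tensor in, a dense depth map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        return self.net(torch.cat([rgb, sparse_depth], dim=1))

rgb = torch.rand(1, 3, 64, 64)          # dummy RGB image
gt = torch.rand(1, 1, 64, 64) * 10.0    # dummy ground-truth depth (metres)
sd = sample_sparse_depth(gt[0, 0], num_samples=100)

pred = TinyDepthRegressor()(rgb, sd)
rmse = torch.sqrt(torch.mean((pred - gt) ** 2))   # the paper's headline metric
print(f"RMSE: {rmse.item():.3f} m")
```

Concatenating the sparse depth as an extra input channel is one simple way to fuse the two modalities; the untrained toy network above only demonstrates the data flow, not the reported accuracy.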
Frequency-modulated continuous-wave LiDAR compressive depth-mapping
We present an inexpensive architecture for converting a frequency-modulated
continuous-wave LiDAR system into a compressive-sensing based depth-mapping
camera. Instead of raster scanning to obtain depth-maps, compressive sensing is
used to significantly reduce the number of measurements. Ideally, our approach
requires two difference detectors, but it can operate with only one at the cost of doubling the number of measurements. Due to the large flux entering the detectors, the signal amplification from heterodyne detection, and the effects of background subtraction from compressive sensing, the system can obtain higher signal-to-noise ratios than detector-array based schemes while scanning a scene faster than is possible through raster scanning. We also show how a single total-variation minimization and two fast least-squares minimizations, instead of a single complex nonlinear minimization, can efficiently recover high-resolution depth maps with minimal computational overhead. Moreover, by efficiently storing only m ≪ n data points from m measurements of an n-pixel scene, we can easily extract depths by solving only two linear equations with efficient convex-optimization methods.
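To illustrate the storage saving in the last sentence, here is a minimal single-pixel-style sketch: m ≪ n random-pattern measurements of an n-pixel depth map are stored, and the scene is recovered with a single linear solve. The ridge-regularized least squares below is a deliberate simplification standing in for the paper's total-variation minimization, and the pattern matrix and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

side = 16
n = side * side          # n-pixel scene
m = n // 2               # m < n compressive measurements to store

# Piecewise-constant ground-truth depth map (the kind TV regularization favours).
depth = np.full((side, side), 2.0)
depth[4:10, 5:12] = 5.0
x = depth.reshape(-1)

# Random +/-1 patterns, as a DMD would project in single-pixel-style setups.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x                # the m stored data points

# Simplified recovery: ridge-regularized least squares via the normal
# equations; the paper instead uses a total-variation minimization,
# which exploits the piecewise-constant structure far better.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"stored {m}/{n} values, relative recovery error {rel_err:.2f}")
```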
Rate-distortion Balanced Data Compression for Wireless Sensor Networks
This paper presents a data compression algorithm with error bound guarantee
for wireless sensor networks (WSNs) using compressing neural networks. The
proposed algorithm minimizes data congestion and reduces energy consumption by
exploring spatio-temporal correlations among data samples. The adaptive
rate-distortion feature balances the compressed data size (data rate) with the
required error bound guarantee (distortion level). This compression relieves
the strain on energy and bandwidth resources while collecting WSN data within
tolerable error margins, thereby increasing the scale of WSNs. The algorithm is
evaluated using real-world datasets and compared with conventional methods for
temporal and spatial data compression. The experimental validation reveals that
the proposed algorithm outperforms several existing WSN data compression
methods in terms of compression efficiency and signal reconstruction. Moreover,
an energy analysis shows that compressing the data can reduce the energy
expenditure, and hence extend the service lifespan severalfold.

Comment: arXiv admin note: text overlap with arXiv:1408.294
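The adaptive rate-distortion idea can be sketched with a linear stand-in for the paper's compressing neural network: learn a basis from past windows, then transmit the fewest coefficients that keep the per-sample error within the required bound, falling back to raw samples if the bound cannot be met. The PCA basis, window length, and error bounds below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated temporally correlated sensor readings (e.g. a temperature trace).
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 100) + 0.05 * rng.standard_normal(1000)
windows = signal.reshape(-1, 50)        # 20 windows of 50 samples each

# Linear "compressing network": a PCA basis learned from past windows,
# standing in for the paper's autoencoder.
mean = windows.mean(axis=0)
_, _, Vt = np.linalg.svd(windows - mean, full_matrices=False)

def compress(w, error_bound):
    """Adaptive rate-distortion: send the fewest coefficients whose
    reconstruction stays within the required error bound."""
    for k in range(1, len(Vt) + 1):
        code = Vt[:k] @ (w - mean)
        recon = mean + Vt[:k].T @ code
        if np.max(np.abs(recon - w)) <= error_bound:   # distortion check
            return code
    return w                                           # fall back to raw samples

for bound in (0.5, 0.1, 0.02):
    code = compress(windows[7], bound)
    print(f"bound {bound}: {len(code)}/{windows.shape[1]} values transmitted")
```

Tightening the bound increases the number of coefficients sent, which is exactly the data-rate versus distortion-level balance the abstract describes.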
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras, such as those involving low latency, high speed, or high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present the working principle of event cameras, the sensors that are currently available, and the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
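Because the event stream itself is the unconventional output the survey highlights, a small sketch may help: each event carries a timestamp, a pixel location, and a polarity, and a common preprocessing step is to accumulate polarities over a time window into an event frame that frame-based algorithms can consume. The synthetic stream, resolution, and field layout below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# A synthetic event stream: each event is (timestamp in us, x, y, polarity),
# the kind of output produced by sensors such as the DVS.
num_events, width, height = 10_000, 64, 48
events = np.zeros(num_events, dtype=[("t", np.int64), ("x", np.int16),
                                     ("y", np.int16), ("p", np.int8)])
events["t"] = np.sort(rng.integers(0, 100_000, num_events))  # microseconds
events["x"] = rng.integers(0, width, num_events)
events["y"] = rng.integers(0, height, num_events)
events["p"] = rng.choice([-1, 1], num_events)                # sign of change

def accumulate(events, t0, t1, shape):
    """Sum event polarities per pixel over [t0, t1): a simple event-frame
    representation often used to feed frame-based vision algorithms."""
    frame = np.zeros(shape, dtype=np.int32)
    window = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

frame = accumulate(events, 0, 10_000, (height, width))
print("net polarity per pixel ranges from", frame.min(), "to", frame.max())
```

Accumulation discards the fine temporal resolution that makes event cameras attractive, which is why the survey also covers methods that process events asynchronously, one by one.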
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
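For readers unfamiliar with the "de-facto standard formulation" mentioned above, SLAM is commonly posed as maximum-a-posteriori estimation over a factor graph; a sketch of the usual equations follows (the notation is generic, not copied from the paper).

```latex
% MAP estimation over the robot/landmark variables X given measurements Z:
X^{\star} = \arg\max_{X} \; p(X \mid Z)
          = \arg\max_{X} \; p(X) \prod_{k} p(z_k \mid X_k)

% With Gaussian measurement models z_k = h_k(X_k) + \epsilon_k, where
% \epsilon_k \sim \mathcal{N}(0, \Omega_k^{-1}), this becomes a
% nonlinear least-squares problem:
X^{\star} = \arg\min_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```

Modern SLAM back-ends typically solve this nonlinear least-squares problem with iterative methods such as Gauss-Newton or Levenberg-Marquardt, exploiting the sparsity of the underlying factor graph.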