BLADE: Filter Learning for General Purpose Computational Photography
The Rapid and Accurate Image Super Resolution (RAISR) method of Romano,
Isidoro, and Milanfar is a computationally efficient image upscaling method
using a trained set of filters. We describe a generalization of RAISR, which we
name Best Linear Adaptive Enhancement (BLADE). This approach is a trainable
edge-adaptive filtering framework that is general, simple, computationally
efficient, and useful for a wide range of problems in computational
photography. We show applications to operations that may appear in a camera
pipeline, including denoising, demosaicing, and stylization.
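
For intuition, the selection-plus-filtering idea behind RAISR/BLADE can be sketched in a few lines: each output pixel hashes local gradient statistics (orientation, strength, and coherence from the 2x2 structure tensor) to select one linear filter from a trained bank. The sketch below is an illustrative reimplementation with assumed bucket counts and an assumed filter-bank layout, not the authors' code.

```python
# Illustrative sketch of RAISR/BLADE-style edge-adaptive filtering (not the
# authors' code). Per pixel, local gradient statistics are hashed to select
# one linear filter from a trained bank `filters` of assumed shape
# (q_theta, 3, 3, 2r+1, 2r+1); bucket counts here are arbitrary choices.
import numpy as np

def local_features(img, y, x, r=2):
    """Orientation, strength, coherence from the patch's 2x2 structure tensor."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    gy, gx = np.gradient(patch)
    a, b, d = (gx * gx).sum(), (gx * gy).sum(), (gy * gy).sum()
    tr, det = a + d, a * d - b * b
    disc = np.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, max(tr / 2 - disc, 0.0)  # eigenvalues, l1 >= l2
    theta = 0.5 * np.arctan2(2 * b, a - d) % np.pi   # dominant orientation
    coherence = (np.sqrt(l1) - np.sqrt(l2)) / (np.sqrt(l1) + np.sqrt(l2) + 1e-12)
    return theta, np.sqrt(l1), coherence

def apply_blade(img, filters, q_theta=8, s_bins=(0.01, 0.05), c_bins=(0.25, 0.5), r=2):
    """Hash features to a filter index and apply the selected trained filter."""
    out = img.copy()
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            theta, s, c = local_features(img, y, x, r)
            i_t = int(theta / np.pi * q_theta) % q_theta
            h = filters[i_t, np.digitize(s, s_bins), np.digitize(c, c_bins)]
            out[y, x] = (h * img[y - r:y + r + 1, x - r:x + r + 1]).sum()
    return out
```

In practice the filter bank would be trained by least squares per bucket (the "best linear" part of the name); the hashing step is what makes the filtering edge-adaptive while remaining cheap at inference time.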
Weighted Mean Curvature
In image processing tasks, spatial priors are essential for robust
computations, regularization, algorithmic design and Bayesian inference. In
this paper, we introduce weighted mean curvature (WMC) as a novel image prior
and present an efficient computation scheme for its discretization in practical
image processing applications. We first demonstrate the favorable properties of
WMC, such as sampling invariance, scale invariance, and contrast invariance
under a Gaussian noise model, and we show the relation of WMC to area
regularization. We further propose an efficient computation scheme for
discretized WMC, which is demonstrated herein to process over 33.2
gigapixels per second on a GPU. This scheme also lends itself to a
convolutional neural
network representation. Finally, WMC is evaluated on synthetic and real
images, demonstrating its quantitative superiority over total-variation and
mean-curvature priors.
Comment: 12 pages
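
For intuition on curvature-based priors, the gradient-weighted mean curvature of the image level sets, |grad u| div(grad u / |grad u|), admits a simple finite-difference discretization. The sketch below illustrates that standard quantity only; it is not the paper's WMC definition or its fast GPU scheme, which should be taken from the paper itself.

```python
# Illustrative sketch: gradient-weighted mean curvature of image level sets,
#   |grad u| * div(grad u / |grad u|)
#     = (u_xx u_y^2 - 2 u_x u_y u_xy + u_yy u_x^2) / |grad u|^2,
# discretized with central finite differences. This is a standard textbook
# quantity for illustration, not the paper's WMC scheme.
import numpy as np

def weighted_curvature(u, eps=1e-8):
    uy, ux = np.gradient(u)        # derivatives along rows (y) and columns (x)
    uyy, _ = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
    return num / (ux**2 + uy**2 + eps)
```

Because the whole computation reduces to a handful of small stencil convolutions and pointwise arithmetic, it maps naturally onto GPUs and onto convolutional layers, consistent with the abstract's claim.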
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, and high dynamic range.
However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, through the sensors that
are available, to the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
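
To make the sensor output concrete, the sketch below (with assumed field names and an assumed 180x240 resolution) shows a minimal event record and one common preprocessing step for learning-based pipelines: accumulating event polarities over a time window into a signed frame.

```python
# Illustrative sketch: events as (t, x, y, polarity) records accumulated
# into a signed 2D histogram ("event frame"). Field names and the sensor
# resolution are assumptions for the example.
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # timestamp in seconds (microsecond resolution in practice)
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 brightness increase, -1 brightness decrease

def events_to_frame(events, height=180, width=240):
    """Sum event polarities per pixel over a time window."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.polarity
    return frame

# Example: two ON events and one OFF event at the same pixel net to +1.
evs = [Event(1e-6, 10, 20, +1), Event(2e-6, 10, 20, +1), Event(3e-6, 10, 20, -1)]
assert events_to_frame(evs)[20, 10] == 1
```

Note that this frame-based view discards the fine timing that gives event cameras their microsecond temporal resolution; many of the surveyed methods instead process the event stream asynchronously.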
Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping
Modern 3D laser-range scanners have a high data rate, making online
simultaneous localization and mapping (SLAM) computationally challenging.
Recursive state estimation techniques are efficient but commit to a state
estimate immediately after a new scan is made, which may lead to misalignments
of measurements. We present a 3D SLAM approach that allows for refining
alignments during online mapping. Our method is based on efficient local
mapping and a hierarchical optimization back-end. Measurements of a 3D laser
scanner are aggregated in local multiresolution maps by means of surfel-based
registration. The local maps are used in a multi-level graph for allocentric
mapping and localization. In order to incorporate corrections when refining the
alignment, the individual 3D scans in the local map are modeled as a sub-graph
and graph optimization is performed to account for drift and misalignments in
the local maps. Furthermore, in each sub-graph, a continuous-time
representation of the sensor trajectory allows for correcting measurements between
scan poses. We evaluate our approach in multiple experiments with qualitative
results, and we quantify the map quality with an entropy-based measure.
Comment: In: Proceedings of the International Conference on Robotics and
Automation (ICRA)
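
As a toy illustration of the continuous-time idea, the sketch below looks up the sensor pose at an arbitrary measurement timestamp between two scan poses, using linear interpolation for translation and SLERP for rotation. The function names are hypothetical, and this is not the paper's trajectory representation, only the general mechanism it enables.

```python
# Toy sketch of continuous-time pose lookup between scan poses: linear
# interpolation for translation, SLERP for rotation. Names are hypothetical;
# the paper's actual trajectory representation may differ.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def make_pose_interpolator(times, translations, rotations):
    """times: (N,) increasing; translations: (N, 3); rotations: Rotation of length N."""
    slerp = Slerp(times, rotations)
    def pose_at(t):
        t = float(np.clip(t, times[0], times[-1]))
        i = min(np.searchsorted(times, t, side='right') - 1, len(times) - 2)
        w = (t - times[i]) / (times[i + 1] - times[i])
        return (1 - w) * translations[i] + w * translations[i + 1], slerp(t)
    return pose_at

# Example: a lidar measurement stamped halfway between two scan poses.
times = np.array([0.0, 1.0])
trans = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
rots = Rotation.from_euler('z', [0.0, 90.0], degrees=True)
pose_at = make_pose_interpolator(times, trans, rots)
t_mid, r_mid = pose_at(0.5)   # translation (1, 0, 0), yaw 45 degrees
```

With such a lookup, each range measurement can be deskewed with the pose at its own timestamp rather than the pose of the whole scan, which is what lets graph optimization correct misalignments within as well as between scans.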