On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
Current approaches for visual-inertial odometry (VIO) are able to attain
highly accurate state estimation via nonlinear optimization. However, real-time
optimization quickly becomes infeasible as the trajectory grows over time; this
problem is exacerbated by the fact that inertial measurements arrive at high
rate, leading to fast growth in the number of variables in the
optimization. In this paper, we address this issue by preintegrating inertial
measurements between selected keyframes into single relative motion
constraints. Our first contribution is a \emph{preintegration theory} that
properly addresses the manifold structure of the rotation group. We formally
discuss the generative measurement model as well as the nature of the rotation
noise and derive the expression for the \emph{maximum a posteriori} state
estimator. Our theoretical development enables the computation of all necessary
Jacobians for the optimization and a-posteriori bias correction in analytic
form. The second contribution is to show that the preintegrated IMU model can
be seamlessly integrated into a visual-inertial pipeline under the unifying
framework of factor graphs. This enables the application of
incremental-smoothing algorithms and the use of a \emph{structureless} model
for visual measurements, which avoids optimizing over the 3D points, further
accelerating the computation. We perform an extensive evaluation of our
monocular VIO pipeline on real and simulated datasets. The results confirm
that our modelling effort leads to accurate state estimation in real-time,
outperforming state-of-the-art approaches.

Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions
on Robotics (TRO) 201
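The core idea of the paper — compounding high-rate IMU samples between two keyframes into a single relative-motion constraint — can be sketched as follows. This is a minimal illustration assuming bias-free, gravity-compensated measurements; the paper's full theory additionally propagates noise on the SO(3) manifold and provides analytic bias-correction Jacobians, and the function names here are illustrative, not from the paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Exponential map: rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3) + skew(phi)  # first-order approximation near identity
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Compound IMU samples between two keyframes into one relative-motion
    constraint (dR, dv, dp), expressed in the first keyframe's frame.
    Biases, noise propagation, and gravity compensation are omitted."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2  # position delta
        dv = dv + (dR @ a) * dt                     # velocity delta
        dR = dR @ so3_exp(w * dt)                   # rotation delta on SO(3)
    return dR, dv, dp
```

For example, one second of samples at 100 Hz with constant acceleration along x and zero rotation yields dv = (1, 0, 0) and dp = (0.5, 0, 0), exactly the constant-acceleration kinematics over the keyframe interval.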
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
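For reference, the "de-facto standard formulation" mentioned above is maximum a posteriori estimation over a factor graph, commonly written as (the notation here is generic, not quoted from the paper):

\[
X^{\star} \;=\; \arg\max_{X}\, p(X \mid Z)
\;=\; \arg\min_{X} \sum_{k} \big\lVert h_k(X_k) - z_k \big\rVert^{2}_{\Sigma_k},
\]

where each factor $k$ relates a subset of variables $X_k$ (poses, landmarks) to a measurement $z_k$ through a measurement model $h_k$ with covariance $\Sigma_k$.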
Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
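The bounded keyframe window described above relies on marginalizing out old states rather than discarding them. On the normal equations of the nonlinear least-squares problem, that step is a Schur complement; the sketch below shows the generic operation (this is standard textbook marginalization, not the authors' code):

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m variables out of the normal equations H x = b
    via the Schur complement, leaving a dense prior on the remaining variables.
    This is how states leaving the keyframe window are absorbed instead of dropped."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    Hmm_inv = np.linalg.inv(Hmm)            # fine for small blocks; factorize in practice
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr     # Schur complement of Hmm in H
    b_prior = b[m:] - Hrm @ Hmm_inv @ b[:m]
    return H_prior, b_prior
```

Solving the reduced system `H_prior x_r = b_prior` yields exactly the same estimate for the remaining variables as solving the full system, which is what makes marginalization information-preserving (at the linearization point).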
TOMOBFLOW: feature-preserving noise filtering for electron tomography
Background: Noise filtering techniques are needed in electron tomography to allow proper interpretation of datasets. The standard linear filtering techniques are characterized by a tradeoff between the amount of noise reduced and the blurring of the features of interest. On the other hand, sophisticated anisotropic nonlinear filtering techniques allow noise reduction with good preservation of structures. However, these techniques are computationally intensive and difficult to tune to the problem at hand.

Results: TOMOBFLOW is a program for noise filtering with capabilities for preservation of biologically relevant information. It is an efficient implementation of the Beltrami flow, a nonlinear filtering method that locally tunes the strength of the smoothing according to an edge indicator based on geometric properties. The fact that this method has no free parameters that are hard to tune makes TOMOBFLOW a user-friendly filtering program equipped with the power of diffusion-based filtering methods. Furthermore, TOMOBFLOW can deal with different types and formats of images, which makes it useful for electron tomography in particular and bioimaging in general.

Conclusion: TOMOBFLOW allows efficient noise filtering of bioimaging datasets with preservation of the features of interest, thereby yielding data better suited for post-processing, visualization and interpretation. It is available at http://www.ual.es/%7ejjfdez/SW/tomobflow.html.
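To give a flavor of the Beltrami-flow idea, here is a minimal 2-D NumPy sketch of one explicit diffusion step. It is only an illustration of the principle: TOMOBFLOW itself operates efficiently on 3-D tomograms, and the parameter names here are mine, not the program's.

```python
import numpy as np

def beltrami_step(img, dt=0.1, beta=1.0):
    """One explicit Euler step of a simplified Beltrami flow:
    I_t = (1/sqrt(g)) div((1/sqrt(g)) grad I), with g = 1 + beta^2 |grad I|^2.
    The edge indicator 1/sqrt(g) is ~1 in flat regions (strong smoothing)
    and ~0 at strong edges (smoothing suppressed), so features are preserved."""
    gy, gx = np.gradient(img)                       # derivatives along rows, columns
    g = 1.0 + beta**2 * (gx**2 + gy**2)             # induced metric determinant
    inv = 1.0 / np.sqrt(g)                          # geometry-based edge indicator
    div = np.gradient(inv * gy, axis=0) + np.gradient(inv * gx, axis=1)
    return img + dt * inv * div
```

Iterating `beltrami_step` a few times reduces noise in flat areas while leaving sharp intensity transitions comparatively untouched, which is the feature-preserving behavior the abstract describes.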
Feature-preserving image restoration and its application in biological fluorescence microscopy
This thesis presents a new investigation of image restoration and its application to
fluorescence cell microscopy. The first part of the work is to develop advanced image
denoising algorithms to restore images from noisy observations by using a novel
feature-preserving diffusion approach. I have applied these algorithms to different types of
images, including biometric, biological and natural images, and demonstrated their
superior performance for noise removal and feature preservation, compared to several
state-of-the-art methods. In the second part of my work, I explore a novel, simple and
inexpensive super-resolution restoration method for quantitative microscopy in cell
biology. In this method, a super-resolution image is restored, through an inverse process,
by using multiple diffraction-limited (low) resolution observations, which are acquired
from conventional microscopes whilst translating the sample parallel to the image plane,
hence referred to as translation microscopy (TRAM). A key to this new development is the
integration of a robust feature detector, developed in the first part, to the inverse process
to restore high resolution images well above the diffraction limit in the presence of strong
noise. TRAM is a post-image acquisition computational method and can be implemented
with any microscope. Experiments show a nearly 7-fold increase in lateral spatial
resolution in noisy biological environments, delivering multi-colour image resolution of
~30 nm.
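To give a flavor of the inverse process — recovering a fine grid from several translated low-resolution observations — here is a deliberately naive shift-and-add sketch. It assumes a point-sampling camera model with known shifts on the high-resolution grid; the TRAM method described above instead solves a regularized inverse problem with a robust feature detector, and these function names are illustrative only.

```python
import numpy as np

def shift_and_add(observations, shifts, factor):
    """Naive multi-frame super-resolution: place each low-resolution frame
    onto a grid `factor` times finer according to its known sub-pixel
    translation (given in high-res pixels, 0 <= shift < factor), then average.
    Shows how translated diffraction-limited frames jointly constrain a finer image."""
    h, w = observations[0].shape
    hi = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hi)
    for obs, (oy, ox) in zip(observations, shifts):
        hi[oy::factor, ox::factor] += obs       # scatter frame onto the fine grid
        count[oy::factor, ox::factor] += 1.0
    count[count == 0] = 1.0                     # avoid division by zero in uncovered cells
    return hi / count
```

With all `factor**2` distinct shifts and a noise-free point-sampling model, the fine grid is recovered exactly; real reconstructions must additionally deal with the point-spread function and strong noise, which is where the robust feature detector comes in.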