Sparse Recovery from Combined Fusion Frame Measurements
Sparse representations have emerged as a powerful tool in signal and
information processing, culminating in the success of new acquisition and
processing techniques such as Compressed Sensing (CS). Fusion frames are
rich new signal-representation methods that use collections of subspaces
instead of vectors to represent signals. This work combines these exciting
fields to introduce a new sparsity model for fusion frames. Signals that are
sparse under the new model can be compressively sampled and uniquely
reconstructed in ways similar to sparse signals using standard CS. The
combination provides a promising new set of mathematical tools and signal
models useful in a variety of applications. With the new model, a sparse signal
has energy in very few of the subspaces of the fusion frame, although it does
not need to be sparse within each of the subspaces it occupies. This sparsity
model is captured using a mixed l1/l2 norm for fusion frames.
A signal sparse in a fusion frame can be sampled using very few random
projections and exactly reconstructed using a convex optimization that
minimizes this mixed l1/l2 norm. The provided sampling conditions generalize
coherence and RIP conditions used in standard CS theory. It is demonstrated
that they are sufficient to guarantee sparse recovery of any signal sparse in
our model. Moreover, a probabilistic analysis is provided using a stochastic
model on the sparse signal that shows that under very mild conditions the
probability of recovery failure decays exponentially with increasing dimension
of the subspaces.
Visual-Inertial Mapping with Non-Linear Factor Recovery
Cameras and inertial measurement units are complementary sensors for
ego-motion estimation and environment mapping. Their combination makes
visual-inertial odometry (VIO) systems more accurate and robust. For globally
consistent mapping, however, combining visual and inertial information is not
straightforward. To estimate motion and geometry from a set of images, large
baselines are required. Because of this, most systems operate on keyframes
with large time intervals between them. Inertial data, on the other hand,
quickly degrades with the duration of the intervals; after several seconds
of integration it typically contains little useful information.
In this paper, we propose to extract relevant information for visual-inertial
mapping from visual-inertial odometry using non-linear factor recovery. We
reconstruct a set of non-linear factors that make an optimal approximation of
the information on the trajectory accumulated by VIO. To obtain a globally
consistent map we combine these factors with loop-closing constraints using
bundle adjustment. The VIO factors make the roll and pitch angles of the global
map observable, and improve the robustness and the accuracy of the mapping. In
experiments on a public benchmark, we demonstrate superior performance of our
method over the state-of-the-art approaches.
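The idea of combining odometry-derived factors with loop-closing constraints in a global optimization can be illustrated with a toy 1-D pose graph. This is a heavily simplified sketch, not the paper's non-linear factor recovery: the drift rate, factor weights, and loop-closure measurement below are invented for illustration, and all factors are linear.

```python
import numpy as np

# Five 1-D poses. Odometry factors say each step moves +1.02 (2% drift),
# but a loop-closure factor reports the sensor returned to its start.
n_poses = 5
odom = [1.02] * (n_poses - 1)          # relative-motion factors from odometry
loop = (0, n_poses - 1, 0.0)           # (i, j, measured p_j - p_i)

# Stack every factor as one row of a weighted linear least-squares problem.
rows, rhs, w = [], [], []
for i, z in enumerate(odom):
    r = np.zeros(n_poses); r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(z); w.append(1.0)       # odometry weight
i, j, z = loop
r = np.zeros(n_poses); r[j], r[i] = 1.0, -1.0
rows.append(r); rhs.append(z); w.append(10.0)          # trust the loop closure more

# Gauge fix: anchor the first pose at zero.
r = np.zeros(n_poses); r[0] = 1.0
rows.append(r); rhs.append(0.0); w.append(100.0)

A = np.array(rows) * np.array(w)[:, None]
b = np.array(rhs) * np.array(w)
p, *_ = np.linalg.lstsq(A, b, rcond=None)

drift_endpoint = sum(odom)             # dead reckoning ends at 4.08
corrected_endpoint = p[-1]             # optimization pulls it back near 0
```

Dead reckoning alone accumulates the drift, while the joint optimization distributes the loop-closure correction over all odometry factors, which is the essence of building a globally consistent map from locally accumulated information.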