Trifocal Relative Pose from Lines at Points and its Efficient Solution
We present a new minimal problem for relative pose estimation mixing point
features with lines incident at points observed in three views and its
efficient homotopy continuation solver. We demonstrate the generality of the
approach by analyzing and solving an additional problem with mixed point and
line correspondences in three views. The minimal problems include
correspondences of (i) three points and one line and (ii) three points and two
lines through two of the points, which is reported and analyzed here for the
first time. These problems are difficult to solve, as they have 216 and, as
shown here, 312 solutions, but they cover important practical situations when line and point
features appear together, e.g., in urban scenes or when observing curves. We
demonstrate that even such difficult problems can be solved robustly using a
suitable homotopy continuation technique and we provide an implementation
optimized for minimal problems that can be integrated into engineering
applications. Our simulated and real experiments demonstrate our solvers in the
camera geometry computation task in structure from motion. We show that the new
solvers allow for reconstructing challenging scenes where the standard two-view
initialization of structure from motion fails.
Comment: This material is based upon work supported by the National Science
Foundation under Grant No. DMS-1439786 while most authors were in residence
at Brown University's Institute for Computational and Experimental Research
in Mathematics -- ICERM, in Providence, R
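The homotopy continuation technique mentioned above tracks the known solutions of a simple start system along a deformation path to the solutions of the hard target system. A minimal 1-D predictor-corrector sketch of that idea, with toy polynomials (not the authors' trifocal solver, which tracks many paths of a multivariate system):

```python
import numpy as np

def homotopy_solve(f, df, g, dg, g_roots, steps=100, newton_iters=5):
    """Track roots of the start system g to roots of the target system f
    along the straight-line homotopy H(x, t) = (1 - t) * g(x) + t * f(x).
    A toy 1-D tracker; real minimal-problem solvers track hundreds of
    paths of multivariate polynomial systems."""
    roots = []
    for x in map(complex, g_roots):
        for k in range(steps):
            t0, t1 = k / steps, (k + 1) / steps
            # Euler predictor: dx/dt = -(f(x) - g(x)) / H_x(x, t)
            Hx = (1 - t0) * dg(x) + t0 * df(x)
            x = x - (t1 - t0) * (f(x) - g(x)) / Hx
            # Newton corrector at t = t1
            for _ in range(newton_iters):
                H = (1 - t1) * g(x) + t1 * f(x)
                Hx = (1 - t1) * dg(x) + t1 * df(x)
                x = x - H / Hx
        roots.append(x)
    return roots

# Target: f(x) = x^2 - 2; start: g(x) = x^2 - 1 with known roots +/- 1.
f  = lambda x: x**2 - 2
df = lambda x: 2 * x
g  = lambda x: x**2 - 1
dg = lambda x: 2 * x
print(sorted(round(r.real, 6) for r in homotopy_solve(f, df, g, dg, [1, -1])))
# prints [-1.414214, 1.414214]
```

Each start root is carried continuously to a target root; the same predictor-corrector loop scales to the 216- and 312-solution systems by replacing the scalar derivative with a Jacobian.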
SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion
Active depth cameras suffer from several limitations, which cause incomplete
and noisy depth maps, and may consequently affect the performance of RGB-D
Odometry. To address this issue, this paper presents a visual odometry method
based on point and line features that leverages both measurements from a depth
sensor and depth estimates from camera motion. Depth estimates are generated
continuously by a probabilistic depth estimation framework for both types of
features to compensate for the lack of depth measurements and inaccurate
feature depth associations. The framework explicitly models the uncertainty of
triangulating depth from both point and line observations to validate and
obtain precise estimates. Furthermore, depth measurements are exploited by
propagating them through a depth map registration module and using a
frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D
reprojection errors, independently. Results on RGB-D sequences captured on
large indoor and outdoor scenes, where depth sensor limitations are critical,
show that the combination of depth measurements and estimates through our
approach is able to overcome the absence and inaccuracy of depth measurements.
Comment: IROS 201
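A probabilistic depth framework of the kind described above propagates measurement noise into a depth variance and fuses independent estimates by their uncertainties. A minimal sketch of those two ingredients, with hypothetical function names and a first-order noise model (not the paper's actual filter, which also handles line features):

```python
def triangulated_depth(focal_px, baseline_m, disparity_px, sigma_disp_px=0.5):
    """Depth from triangulation, z = f * b / d, with first-order
    propagation of disparity noise: sigma_z ~= z^2 / (f * b) * sigma_d.
    Larger depths get quadratically larger uncertainty."""
    z = focal_px * baseline_m / disparity_px
    sigma_z = z * z / (focal_px * baseline_m) * sigma_disp_px
    return z, sigma_z

def fuse(z1, s1, z2, s2):
    """Inverse-variance (Gaussian) fusion of two independent depth
    estimates, e.g. a sensor measurement and a triangulated estimate."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    return (w1 * z1 + w2 * z2) / (w1 + w2), (w1 + w2) ** -0.5

# Fuse a triangulated depth with a (noisier) sensor reading.
z_tri, s_tri = triangulated_depth(500.0, 0.1, 10.0)   # z = 5.0 m
z_fused, s_fused = fuse(z_tri, s_tri, 5.2, 0.5)
```

The fused variance is always smaller than either input variance, which is what lets depth estimates from motion compensate for missing or unreliable sensor depth.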
Accurate Optical Flow via Direct Cost Volume Processing
We present an optical flow estimation approach that operates on the full
four-dimensional cost volume. This direct approach shares the structural
benefits of leading stereo matching pipelines, which are known to yield high
accuracy. To this day, such approaches have been considered impractical due to
the size of the cost volume. We show that the full four-dimensional cost volume
can be constructed in a fraction of a second due to its regularity. We then
exploit this regularity further by adapting semi-global matching to the
four-dimensional setting. This yields a pipeline that achieves significantly
higher accuracy than state-of-the-art optical flow methods while being faster
than most. Our approach outperforms all published general-purpose optical flow
methods on both the Sintel and KITTI 2015 benchmarks.
Comment: Published at the Conference on Computer Vision and Pattern
Recognition (CVPR 2017)
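The regularity that makes the full 4-D cost volume fast to build is that every 2-D displacement corresponds to one shifted array product. A toy NumPy construction using negative feature correlation as the matching cost (an illustrative stand-in; the paper's learned features and exact cost differ):

```python
import numpy as np

def flow_cost_volume(feat1, feat2, max_disp):
    """Build the full 4-D cost volume C[y, x, v, u]: matching cost between
    feat1 at (y, x) and feat2 at (y + v, x + u) for all displacements up
    to max_disp. Out-of-image displacements are left at +inf."""
    H, W, D = feat1.shape
    R = 2 * max_disp + 1
    cost = np.full((H, W, R, R), np.inf)
    for v in range(-max_disp, max_disp + 1):
        for u in range(-max_disp, max_disp + 1):
            # overlapping region of the two images under this shift
            y0, y1 = max(0, -v), min(H, H - v)
            x0, x1 = max(0, -u), min(W, W - u)
            a = feat1[y0:y1, x0:x1]
            b = feat2[y0 + v:y1 + v, x0 + u:x1 + u]
            cost[y0:y1, x0:x1, v + max_disp, u + max_disp] = -np.sum(a * b, axis=-1)
    return cost
```

The inner assignment is a single vectorized slice per displacement, so the volume's size, not per-entry work, dominates the cost; semi-global matching then regularizes along the two displacement dimensions.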
3D Object Discovery and Modeling Using Single RGB-D Images Containing Multiple Object Instances
Unsupervised object modeling is important in robotics, especially for
handling a large set of objects. We present a method for unsupervised 3D object
discovery, reconstruction, and localization that exploits multiple instances of
an identical object contained in a single RGB-D image. The proposed method does
not rely on segmentation, scene knowledge, or user input, and thus is easily
scalable. Our method aims to find recurrent patterns in a single RGB-D image by
utilizing appearance and geometry of the salient regions. We extract keypoints
and match them in pairs based on their descriptors. We then generate triplets
of mutually matching keypoints, using several geometric criteria to
minimize false matches. The relative poses of the matched triplets are computed
and clustered to discover sets of triplet pairs with similar relative poses.
Triplets belonging to the same set are likely to belong to the same object and
are used to construct an initial object model. Detection of remaining instances
with the initial object model using RANSAC allows us to further expand and refine
the model. The automatically generated object models are both compact and
descriptive. We show quantitative and qualitative results on RGB-D images with
various objects including some from the Amazon Picking Challenge. We also
demonstrate the use of our method in an object picking scenario with a robotic
arm.
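Computing the relative pose between two matched keypoint triplets amounts to fitting a rigid transform to point correspondences. A minimal sketch using the standard Kabsch (SVD) algorithm (a generic building block, not the authors' exact pipeline):

```python
import numpy as np

def rigid_transform(A, B):
    """Best-fit rotation R and translation t with B ~= R @ A + t (Kabsch).
    A, B: 3 x N arrays of corresponding 3-D points, e.g. matched keypoint
    triplets from two instances of the same object."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - cb) @ (A - ca).T)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = cb - R @ ca
    return R, t
```

Clustering the resulting (R, t) pairs, e.g. by rotation angle and translation distance, then groups triplets that agree on the same instance-to-instance pose, as the abstract describes.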
Linear Global Translation Estimation with Feature Tracks
This paper derives a novel linear position constraint for cameras seeing a
common scene point, which leads to a direct linear method for global camera
translation estimation. Unlike previous solutions, this method deals with
collinear camera motion and weak image association at the same time. The final
linear formulation does not involve the coordinates of scene points, which
makes it efficient even for large-scale data. We solve the linear equation
based on the L1 norm, which makes our system more robust to outliers in
essential matrices and feature correspondences. We evaluate this method on
both sequentially captured images and unordered Internet images. The
experiments demonstrate its strength in robustness, accuracy, and efficiency.
Comment: Changes: 1. Adopt BMVC2015 style; 2. Combine sections 3 and 5; 3.
Move "Evaluation on synthetic data" out to supplementary file; 4. Divide
subsection "Evaluation on general data" to subsections "Experiment on
sequential data" and "Experiment on unordered Internet data"; 5. Change Fig.
1 and Fig. 8; 6. Move Fig. 6 and Fig. 7 to supplementary file; 7. Change some
symbols; 8. Correct some typos
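Solving an overdetermined linear system under an L1-style objective for outlier robustness is commonly approximated with iteratively reweighted least squares (IRLS). A toy sketch of that generic idea (not the paper's exact formulation, whose constraint matrix is built from feature tracks):

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Approximate argmin_x ||A x - b||_1 by iteratively reweighted least
    squares: rows are rescaled by 1 / sqrt(|residual|), so rows with
    large residuals (outliers) are progressively down-weighted."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # L2 warm start
    for _ in range(iters):
        w = 1.0 / np.sqrt(np.maximum(np.abs(A @ x - b), eps))
        x = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)[0]
    return x
```

With a majority of clean equations, the L1 solution interpolates them exactly and ignores the gross outliers, which is the behavior the abstract claims for outlier-contaminated essential matrices and correspondences.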