Fast and Accurate Camera Covariance Computation for Large 3D Reconstruction
Estimating uncertainty of camera parameters computed in Structure from Motion
(SfM) is an important tool for evaluating the quality of the reconstruction and
guiding the reconstruction process. Yet, the quality of the estimated
parameters of large reconstructions has been rarely evaluated due to the
computational challenges. We present a new algorithm which exploits the sparsity
of the uncertainty propagation and speeds up the computation by about ten times
compared to previous approaches. Our computation is accurate and does not use any
approximations. We can compute uncertainties of thousands of cameras in tens of
seconds on a standard PC. We also demonstrate that our approach can be
effectively used for reconstructions of any size by applying it to smaller
sub-reconstructions.
Comment: ECCV 201
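The camera covariance computation described above can be loosely sketched as follows. This is an illustrative dense-algebra sketch of the standard approach (marginalizing point parameters from the bundle-adjustment information matrix via the Schur complement), not the paper's accelerated sparse algorithm; all names are hypothetical:

```python
import numpy as np

def camera_covariances(J_c, J_p, sigma=1.0):
    """Covariance of camera parameters from a bundle-adjustment Jacobian.
    J_c: residual Jacobian w.r.t. camera parameters (m x c).
    J_p: residual Jacobian w.r.t. 3D point parameters (m x p).
    Point parameters are marginalized out via the Schur complement."""
    U = J_c.T @ J_c          # camera block of the information matrix
    V = J_p.T @ J_p          # point block (block-diagonal in practice)
    W = J_c.T @ J_p          # camera-point cross term
    # Reduced camera system: marginalize the points.
    S = U - W @ np.linalg.inv(V) @ W.T
    # Gauge freedom makes S rank-deficient in real SfM, hence pinv.
    return sigma**2 * np.linalg.pinv(S)
```

In a real pipeline `V` is block-diagonal (3x3 per point) and `S` is sparse, which is exactly the structure a fast method exploits instead of the dense inverses used here.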
Calibration Wizard: A Guidance System for Camera Calibration Based on Modelling Geometric and Corner Uncertainty
It is well known that the accuracy of a calibration depends strongly on the
choice of camera poses from which images of a calibration object are acquired.
We present a system -- Calibration Wizard -- that interactively guides a user
towards taking optimal calibration images. For each new image to be taken, the
system computes, from all previously acquired images, the pose that leads to
the globally maximum reduction of expected uncertainty on intrinsic parameters
and then guides the user towards that pose. We also show how to incorporate
uncertainty in corner point position in a novel, principled manner, for both
calibration and computation of the next best pose. Synthetic and real-world
experiments are performed to demonstrate the effectiveness of Calibration
Wizard.
Comment: Oral presentation at ICCV 201
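The next-best-pose selection described above can be sketched as a small information-theoretic loop; this is a minimal sketch under the assumption that each candidate pose contributes a measurement Jacobian on the intrinsics, not the system's actual implementation:

```python
import numpy as np

def next_best_pose(info, candidate_jacobians):
    """Pick the candidate pose whose observations most reduce the
    expected uncertainty of the intrinsic parameters.
    info: current information matrix (sum of J^T J over past images).
    candidate_jacobians: dict mapping pose id -> predicted Jacobian J."""
    def expected_cov_trace(J):
        # Expected covariance after adding this pose's observations.
        return np.trace(np.linalg.inv(info + J.T @ J))
    return min(candidate_jacobians,
               key=lambda p: expected_cov_trace(candidate_jacobians[p]))
```

Minimizing the trace of the expected covariance (A-optimality) is one common criterion; other designs minimize the determinant or the largest eigenvalue instead.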
Direct Monocular Odometry Using Points and Lines
Most visual odometry algorithms for monocular cameras focus on points,
either by matching features or by directly aligning pixel intensities, while
ignoring a common but important geometric entity: edges. In this paper, we
propose an odometry algorithm that combines points and edges to benefit from
the advantages of both direct and feature-based methods. It works better in
texture-less environments and is also more robust to lighting changes and fast
motion by increasing the convergence basin. We maintain a depth map for the
keyframe; in the tracking part, the camera pose is recovered by minimizing
both the photometric error and the geometric error to the matched edges in a
probabilistic framework. In the mapping part, edges are used to speed up
stereo matching and to increase its accuracy. On various public datasets, our algorithm
achieves performance better than or comparable to state-of-the-art monocular
odometry methods. In some challenging texture-less environments, our algorithm
reduces the state estimation error by over 50%.
Comment: ICRA 201
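The joint photometric-plus-geometric objective described above can be sketched as a weighted negative log-likelihood; this is an illustrative sketch under an independent-Gaussian-noise assumption, with hypothetical noise levels, not the paper's exact formulation:

```python
import numpy as np

def tracking_cost(photo_residuals, edge_residuals,
                  sigma_photo=10.0, sigma_edge=2.0):
    """Joint tracking objective combining photometric residuals
    (intensity differences, in grey levels) and geometric residuals
    (pixel distances to matched edges). Each term is weighted by its
    assumed noise standard deviation, so the sum is the negative
    log-likelihood under independent Gaussian noise (up to a constant)."""
    photo = np.asarray(photo_residuals, dtype=float)
    edge = np.asarray(edge_residuals, dtype=float)
    return 0.5 * (np.sum((photo / sigma_photo) ** 2)
                  + np.sum((edge / sigma_edge) ** 2))
```

The camera pose would then be the minimizer of this cost over the pose parameters, typically via Gauss-Newton on the residual stack.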
Multi-Scale 3D Scene Flow from Binocular Stereo Sequences
Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
Funding: National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108
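The geometric core of binocular scene flow can be sketched per pixel: back-project the pixel at time t using its disparity, back-project the flow-displaced pixel at time t+1, and difference the two 3D points. This is only the deterministic skeleton, without the probabilistic fusion and regularization the paper performs; the camera parameters are hypothetical:

```python
def scene_flow_point(u, v, d0, flow, d1, fx, fy, cx, cy, baseline):
    """3D motion of one pixel from a rectified stereo pair.
    (u, v): pixel at time t; d0: its disparity; flow: (du, dv) optical
    flow to time t+1; d1: disparity at the displaced pixel.
    fx, fy, cx, cy: pinhole intrinsics; baseline: stereo baseline (m)."""
    def backproject(u, v, d):
        Z = fx * baseline / d            # depth from disparity
        return ((u - cx) * Z / fx, (v - cy) * Z / fy, Z)
    p0 = backproject(u, v, d0)
    p1 = backproject(u + flow[0], v + flow[1], d1)
    # Scene flow = 3D displacement between the two reconstructions.
    return tuple(b - a for a, b in zip(p0, p1))
```

Treating `d0`, `d1`, and `flow` as distributions rather than point estimates is what lets the uncertainty-aware formulation above outperform this naive chaining.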
Perception-aware Path Planning
In this paper, we give a double twist to the problem of planning under
uncertainty. State-of-the-art planners seek to minimize the localization
uncertainty by only considering the geometric structure of the scene. In this
paper, we argue that motion planning for vision-controlled robots should be
perception aware in that the robot should also favor texture-rich areas to
minimize the localization uncertainty during a goal-reaching task. Thus, we
describe how to optimally incorporate the photometric information (i.e.,
texture) of the scene, in addition to the geometric one, to compute the
uncertainty of vision-based localization during path planning. To avoid the
caveats of feature-based localization systems (i.e., dependence on feature type
and user-defined thresholds), we use dense, direct methods. This allows us to
compute the localization uncertainty directly from the intensity values of
every pixel in the image. We also describe how to compute trajectories online,
considering also scenarios with no prior knowledge about the map. The proposed
framework is general and can easily be adapted to different robotic platforms
and scenarios. The effectiveness of our approach is demonstrated with extensive
experiments in both simulated and real-world environments using a
vision-controlled micro aerial vehicle.
Comment: 16 pages, 20 figures, revised version. Conditionally accepted for IEEE Transactions on Robotics
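For direct methods, the localization information a view provides is driven by image gradients, so a texture-rich area scores high and a blank wall scores near zero. A minimal proxy for such a perception-aware score, purely illustrative and not the paper's actual uncertainty propagation, could look like:

```python
import numpy as np

def texture_score(image):
    """Rough proxy for how well a view localizes a dense direct method:
    the total squared image-gradient magnitude. Texture-rich views
    score high; textureless regions score near zero."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)            # row- and column-wise gradients
    return float(np.sum(gx ** 2 + gy ** 2))
```

A perception-aware planner would fold a score like this (propagated through the projection model) into the path cost, trading path length against expected localization uncertainty.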
3D ShapeNets: A Deep Representation for Volumetric Shapes
3D shape is a crucial but heavily underutilized cue in today's computer
vision systems, mostly due to the lack of a good generic shape representation.
With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft
Kinect), it is becoming increasingly important to have a powerful 3D shape
representation in the loop. Apart from category recognition, recovering full 3D
shapes from view-based 2.5D depth maps is also a critical part of visual
understanding. To this end, we propose to represent a geometric 3D shape as a
probability distribution of binary variables on a 3D voxel grid, using a
Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the
distribution of complex 3D shapes across different object categories and
arbitrary poses from raw CAD data, and discovers hierarchical compositional
part representations automatically. It naturally supports joint object
recognition and shape completion from 2.5D depth maps, and it enables active
object recognition through view planning. To train our 3D deep learning model,
we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive
experiments show that our 3D deep representation enables significant
performance improvements over the state of the art in a variety of tasks.
Comment: to appear in CVPR 201
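The volumetric input representation described above (binary variables on a 3D voxel grid) can be produced from a point sample of a shape with a few lines; this is a generic voxelization sketch, not the paper's data pipeline, and the grid resolution is arbitrary:

```python
import numpy as np

def voxelize(points, resolution=30):
    """Binary occupancy grid for a 3D point set: scale the points into
    a resolution^3 grid and mark each cell containing a point."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scale = (resolution - 1) / np.maximum(hi - lo, 1e-9)
    idx = np.floor((pts - lo) * scale).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A grid like this, flattened or kept as a volume, is the kind of tensor a convolutional network over voxels consumes; generative models then place a probability distribution over exactly these binary cells.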
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion methods has been one of the most fruitful outcomes of
computer vision. These advances combined with the growing popularity of Micro
Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools
ubiquitous for a large number of Architecture, Engineering and Construction
applications among audiences mostly unskilled in computer vision. However, to
obtain high-resolution and accurate reconstructions from a large-scale object
using SfM, there are many critical constraints on the quality of image data,
which often become sources of inaccuracy as the current 3D reconstruction
pipelines do not help users determine the fidelity of the input data
during image acquisition. In this paper, we present and advocate a
closed-loop interactive approach that performs incremental reconstruction in
real-time and gives users online feedback about quality parameters such as
Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against state-of-the-art methods.
Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
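One of the quality parameters mentioned above, Ground Sampling Distance, has a standard pinhole-camera approximation that is easy to state concretely (the parameter names here are ours, not the paper's):

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             flying_height_m, image_width_px):
    """Ground Sampling Distance for a nadir view under the pinhole
    approximation: metres of ground covered by one pixel."""
    return (sensor_width_mm * flying_height_m) / (focal_length_mm * image_width_px)
```

For example, a full-frame 36 mm sensor with a 36 mm lens at 100 m altitude and a 4000-pixel-wide image gives a GSD of 2.5 cm per pixel; online feedback lets the operator fly lower or closer wherever the mesh reports an insufficient GSD.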
Predicting the Next Best View for 3D Mesh Refinement
3D reconstruction is a core task in many applications such as robot
navigation or site inspection. Finding the best poses from which to capture
part of the scene is one of the most challenging topics, and it goes under
the name of Next Best View. Recently, many volumetric methods have been proposed; they choose
the Next Best View by reasoning over a 3D voxelized space and by finding which
pose minimizes the uncertainty decoded into the voxels. Such methods are
effective, but they do not scale well since the underlying representation
requires a huge amount of memory. In this paper we propose a novel mesh-based
approach which focuses on the worst reconstructed region of the environment
mesh. We define a photo-consistent index to evaluate the 3D mesh accuracy, and
an energy function over the worst regions of the mesh which takes into account
the mutual parallax with respect to the previous cameras, the angle of
incidence of the viewing ray to the surface and the visibility of the region.
We test our approach on a well-known dataset and achieve state-of-the-art
results.
Comment: 13 pages, 5 figures, to be published in IAS-1
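A candidate-view score of the kind described above, combining parallax with respect to a previous camera, the incidence angle of the viewing ray, and visibility, can be sketched heuristically. This is an illustrative scoring rule, not the paper's energy function; inputs are 3D numpy vectors and the weighting is arbitrary:

```python
import numpy as np

def view_score(cam, prev_cam, patch_center, patch_normal, visible):
    """Heuristic score for a candidate camera position observing a
    poorly reconstructed mesh patch: favour good parallax w.r.t. the
    previous camera and a viewing ray close to the patch normal;
    zero if the patch is not visible from the candidate."""
    if not visible:
        return 0.0
    ray = patch_center - cam
    prev_ray = patch_center - prev_cam
    cosang = lambda a, b: np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    parallax = np.arccos(np.clip(cosang(ray, prev_ray), -1.0, 1.0))
    incidence = abs(cosang(-ray, patch_normal))   # 1 means head-on view
    return float(np.sin(parallax) * incidence)
```

The Next Best View would then be the candidate maximizing a score of this form over the worst-reconstructed patches, subject to reachability constraints.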