MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
In this paper, a statistically optimal solution to the Perspective-n-Point
(PnP) problem is presented. Many solutions to the PnP problem are geometrically
optimal, but do not consider the uncertainties of the observations. In
addition, it would be desirable to have an internal estimation of the accuracy
of the estimated rotation and translation parameters of the camera pose. Thus,
we propose a novel maximum likelihood solution to the PnP problem that
incorporates image observation uncertainties and remains real-time capable at
the same time. Further, the presented method is general, as it works with 3D
direction vectors instead of 2D image points and is thus able to cope with
arbitrary central camera models. This is achieved by projecting (and thus
reducing) the covariance matrices of the observations to the corresponding
vector tangent space.

Comment: Submitted to the ISPRS congress (2016) in Prague. Oral presentation. Published in ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-3, 131-13
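The covariance reduction the abstract describes can be illustrated with a small sketch: given a unit bearing vector and a 3x3 covariance on it, the covariance is projected onto a 2D basis of the tangent plane at that vector. The basis construction below (via a pivoted cross product) is a common choice and an assumption here, not necessarily the one used in MLPnP.

```python
import numpy as np

def tangent_basis(v):
    """Orthonormal basis of the tangent plane of the unit sphere at v.

    v is assumed to be a unit-norm 3D bearing vector; the construction via
    a pivoted cross product is illustrative, not taken from the paper.
    """
    # Pick a helper axis that is not parallel to v to avoid degeneracy.
    helper = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    r = np.cross(v, helper)
    r /= np.linalg.norm(r)
    s = np.cross(v, r)
    return np.column_stack([r, s])           # 3x2 matrix J = [r s]

def reduce_covariance(v, cov3):
    """Project a 3x3 bearing-vector covariance onto the 2D tangent space."""
    J = tangent_basis(v)
    return J.T @ cov3 @ J                    # 2x2 reduced covariance

v = np.array([0.0, 0.0, 1.0])
cov3 = np.diag([1e-4, 2e-4, 0.0])            # toy covariance, no radial component
cov2 = reduce_covariance(v, cov3)
print(cov2)
```

Because a unit bearing vector has only two degrees of freedom, the reduced 2x2 covariance discards the uninformative radial component while keeping the observation uncertainty that actually constrains the pose.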
Video Registration in Egocentric Vision under Day and Night Illumination Changes
With the spread of wearable devices and head-mounted cameras, a wide range of
applications requiring precise user localization is now possible. In this paper
we propose to treat the problem of obtaining the user position with respect to
a known environment as a video registration problem. Video registration, i.e.
the task of aligning an input video sequence to a pre-built 3D model, relies on
a matching process of local keypoints extracted on the query sequence to a 3D
point cloud. The overall registration performance is strictly tied to the
actual quality of this 2D-3D matching, and can degrade if environmental
conditions such as steep changes in lighting like the ones between day and
night occur. To effectively register an egocentric video sequence under these
conditions, we propose to tackle the source of the problem: the matching
process. To overcome the shortcomings of standard matching techniques, we
introduce a novel embedding space that allows us to obtain robust matches by
jointly taking into account local descriptors, their spatial arrangement and
their temporal robustness. The proposal is evaluated using unconstrained
egocentric video sequences both in terms of matching quality and resulting
registration performance using different 3D models of historical landmarks. The
results show that the proposed method can outperform state-of-the-art
registration algorithms, in particular when dealing with the challenges of
night and day sequences.
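The idea of matching in an embedding that jointly accounts for descriptors, spatial arrangement, and temporal robustness can be sketched as follows. This is an illustrative construction, not the embedding from the paper: the weights, the use of normalized keypoint positions, and the per-keypoint track-length score are all assumptions made for the sketch.

```python
import numpy as np

def joint_embedding(desc, xy, persistence, w_spatial=0.1, w_temporal=0.5):
    """Concatenate local descriptors with weighted spatial and temporal cues.

    desc:        (N, D) array of local descriptors
    xy:          (N, 2) array of normalized keypoint positions
    persistence: (N,) score of how long each keypoint was tracked
    The weights are made up for this sketch.
    """
    return np.hstack([desc,
                      w_spatial * xy,
                      w_temporal * persistence[:, None]])

def match(query, model, ratio=0.8):
    """Nearest-neighbour matching with a ratio test in the joint space."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(model - q, axis=1)
        j, k = np.argsort(d)[:2]
        if d[j] < ratio * d[k]:              # accept only unambiguous matches
            matches.append((i, int(j)))
    return matches

emb = joint_embedding(np.eye(3), np.zeros((3, 2)), np.zeros(3))
print(match(emb, emb))
```

Augmenting the descriptor with spatial and temporal terms means two keypoints match only when their appearance, position, and stability agree, which is what makes the 2D-3D matching robust to drastic appearance changes such as day-to-night transitions.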
Multiscale Design for Solid Freeform Fabrication
One of the advantages of solid freeform fabrication is the ability to fabricate complex
structures on multiple scales, from the macroscale features of an overall part to the
mesoscale topology of its internal architecture and even the microstructure or
composition of the constituent material. This manufacturing freedom poses the challenge
of designing across these scales, especially when a part with designed mesostructure is
part of a larger system with changing requirements that propagate across scales. A
set-based multiscale design method is presented for coordinating design across scales and
reducing iterative redesign of SFF parts and their mesostructures. The method is applied
to design a miniature unmanned aerial vehicle system. The system is decomposed into
disciplinary subsystems and constituent parts, including wings with honeycomb
mesostructures that are topologically tailored for stiffness and strength and fabricated
with selective laser sintering. The application illustrates how the design of freeform parts
can be coordinated more efficiently with the design of parent systems.
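The core of a set-based approach can be illustrated with a minimal sketch: each disciplinary subsystem proposes a feasible interval for a shared design parameter, and the method keeps the intersection rather than committing to a point design early. The parameter and the ranges below are hypothetical; the paper's method coordinates richer design sets than simple intervals.

```python
def intersect_ranges(ranges):
    """Intersect feasible intervals proposed by different disciplines.

    Each range is a (lo, hi) tuple. Keeping the whole intersection delays
    commitment to a single value; an empty intersection (None) signals that
    the subsystems must renegotiate their requirements.
    """
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo <= hi else None

# Hypothetical feasible wall-thickness ranges (mm) from three subsystems.
aero      = (0.4, 1.2)   # aerodynamic weight budget
structure = (0.6, 2.0)   # stiffness/strength of the honeycomb mesostructure
process   = (0.5, 1.5)   # selective laser sintering resolution limits
print(intersect_ranges([aero, structure, process]))
```

When a parent-system requirement changes, only the affected interval is updated and the intersection recomputed, which is how set-based coordination reduces iterative redesign across scales.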
Large Scale SfM with the Distributed Camera Model
We introduce the distributed camera model, a novel model for
Structure-from-Motion (SfM). This model describes image observations in terms
of light rays with ray origins and directions rather than pixels. As such, the
proposed model is capable of describing a single camera or multiple cameras
simultaneously as the collection of all light rays observed. We show how the
distributed camera model is a generalization of the standard camera model and
describe a general formulation and solution to the absolute camera pose problem
that works for standard or distributed cameras. The proposed method computes a
solution that is up to 8 times more efficient and robust to rotation
singularities in comparison with gDLS. Finally, this method is used in a novel
large-scale incremental SfM pipeline where distributed cameras are accurately
and robustly merged together. This pipeline is a direct generalization of
traditional incremental SfM; however, instead of incrementally adding one
camera at a time to grow the reconstruction, the reconstruction is grown by
adding a distributed camera. Our pipeline produces highly accurate
reconstructions efficiently by avoiding the need for many bundle adjustment
iterations and is capable of computing a 3D model of Rome from over 15,000
images in just 22 minutes.

Comment: Published at the 2016 3DV Conference
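The shift from pixels to light rays can be sketched with a small helper that lifts a pixel observation to a world-frame ray with an origin and a direction; a multi-camera rig then becomes just the union of rays from all its cameras. This is a sketch of the idea under standard pinhole conventions (intrinsic matrix K, world-to-camera pose R, t), not code from the paper's pipeline.

```python
import numpy as np

def pixel_to_ray(pixel, K, R, t):
    """Lift a pixel observation to a world-frame light ray (origin, direction).

    K is the 3x3 intrinsic matrix and (R, t) the world-to-camera pose, so the
    camera centre in world coordinates is -R.T @ t.
    """
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    d_cam = np.linalg.inv(K) @ uv1            # direction in the camera frame
    d_world = R.T @ d_cam                     # rotate into the world frame
    d_world /= np.linalg.norm(d_world)
    origin = -R.T @ t                         # camera centre in world coords
    return origin, d_world

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
origin, direction = pixel_to_ray((320.0, 240.0), K, np.eye(3), np.zeros(3))
print(origin, direction)
```

Since every observation is reduced to the same (origin, direction) representation, a single pinhole camera and a rig of many cameras feed the pose solver identically, which is what lets one absolute-pose formulation cover both cases.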