Minimal Solvers for Single-View Lens-Distorted Camera Auto-Calibration
This paper proposes minimal solvers that use combinations of imaged
translational symmetries and parallel scene lines to jointly estimate lens
undistortion with either affine rectification or focal length and absolute
orientation. We use constraints provided by orthogonal scene planes to recover
the focal length. We show that solvers using feature combinations can recover
more accurate calibrations than solvers using only one feature type on scenes
that have a balance of lines and texture. We also show that the proposed
solvers are complementary and can be used together in a RANSAC-based estimator
to improve auto-calibration accuracy. State-of-the-art performance is
demonstrated on a standard dataset of lens-distorted urban images. The code is
available at https://github.com/ylochman/single-view-autocalib
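The abstract notes that complementary minimal solvers can be combined inside one RANSAC-based estimator. A minimal sketch of that idea follows; the solver callables, sample sizes, and scoring function are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical hybrid RANSAC loop: each iteration picks one of several
# complementary minimal solvers, draws that solver's minimal sample, and
# keeps the best-scoring model. Not the paper's code; a generic sketch.
import random

def hybrid_ransac(solvers, data, score_fn, iters=100, seed=0):
    """solvers: list of (sample_size, solve_fn) pairs, where solve_fn maps
    a minimal sample to a list of candidate models."""
    rng = random.Random(seed)
    best_model, best_score = None, float("-inf")
    for _ in range(iters):
        sample_size, solve = rng.choice(solvers)  # pick a solver type
        sample = rng.sample(data, sample_size)    # draw its minimal sample
        for model in solve(sample):               # a solver may return several roots
            s = score_fn(model, data)             # e.g. inlier count
            if s > best_score:
                best_model, best_score = model, s
    return best_model, best_score
```

The same loop structure applies whether the candidate models are lens-distortion parameters, rectifying homographies, or focal lengths; only the solvers and the scoring function change.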
Radially-Distorted Conjugate Translations
This paper introduces the first minimal solvers that jointly solve for
affine-rectification and radial lens distortion from coplanar repeated
patterns. Even with imagery from moderately distorted lenses, plane
rectification using the pinhole camera model is inaccurate or invalid. The
proposed solvers incorporate lens distortion into the camera model and extend
accurate rectification to wide-angle imagery, which is now common from consumer
cameras. The solvers are derived from constraints induced by the conjugate
translations of an imaged scene plane, which are integrated with the division
model for radial lens distortion. The hidden-variable trick with ideal
saturation is used to reformulate the constraints so that the solvers generated
by the Gröbner-basis method are stable, small and fast.
Rectification and lens distortion are recovered from either one conjugately
translated affine-covariant feature or two independently translated
similarity-covariant features. The proposed solvers are used in a RANSAC-based
estimator, which gives accurate rectifications after few iterations. The
proposed solvers are evaluated against the state-of-the-art and demonstrate
significantly better rectifications on noisy measurements. Qualitative results
on diverse imagery demonstrate high-accuracy undistortions and rectifications.
The source code is publicly available at https://github.com/prittjam/repeats
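The abstract's solvers are built on the one-parameter division model for radial lens distortion. A minimal sketch of that model, assuming points are centered on the distortion center; variable names are ours, not from the paper's code:

```python
# Division model: an undistorted point p_u is recovered from a distorted
# point p_d as p_u = p_d / (1 + lam * ||p_d||^2), with a single distortion
# parameter lam (negative for typical barrel distortion).
import math

def undistort(x, y, lam):
    """Map a distorted point to its pinhole (undistorted) position."""
    r2 = x * x + y * y
    s = 1.0 + lam * r2
    return x / s, y / s

def distort(xu, yu, lam):
    """Invert the model: solve lam*ru*rd^2 - rd + ru = 0 for the
    distorted radius rd, taking the root that tends to ru as lam -> 0."""
    ru = math.hypot(xu, yu)
    if ru == 0.0 or lam == 0.0:
        return xu, yu
    disc = 1.0 - 4.0 * lam * ru * ru
    rd = (1.0 - math.sqrt(disc)) / (2.0 * lam * ru)
    scale = rd / ru
    return xu * scale, yu * scale
```

Because distortion enters through a single rational factor, constraints such as the conjugate-translation ones mentioned above stay low-degree when this model is substituted into them, which is what keeps the resulting minimal solvers small.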
Rectification from Radially-Distorted Scales
This paper introduces the first minimal solvers that jointly estimate lens
distortion and affine rectification from repetitions of rigidly transformed
coplanar local features. The proposed solvers incorporate lens distortion into
the camera model and extend accurate rectification to wide-angle images that
contain nearly any type of coplanar repeated content. We demonstrate a
principled approach to generating stable minimal solvers by the Gröbner-basis
method, which is accomplished by sampling feasible monomial bases to maximize
numerical stability. Synthetic and real-image experiments confirm that the
solvers give accurate rectifications from noisy measurements when used in a
RANSAC-based estimator. The proposed solvers demonstrate superior robustness to
noise compared to the state-of-the-art. The solvers work on scenes without
straight lines and, in general, relax the strong assumptions on scene content
made by the state-of-the-art. Accurate rectifications on imagery captured with
lenses ranging from narrow focal lengths to near fish-eye demonstrate the wide
applicability of the proposed method. The method is fully automated, and the
code is publicly available at https://github.com/prittjam/repeats
Content Authoring Using Single Image in Urban Environments for Augmented Reality
© 2016 IEEE. Content authoring is one of the essentials of Augmented Reality (AR): it emplaces augmented content on a part of a real scene to enhance the user's visual experience. For single 2D street-view images, the task is challenging because of cluttered environments and an unknown camera pose. Although existing methods based on 2D feature-point matching or vanishing-point registration may recover the camera pose, their robustness suffers from unreliable feature-point detection on texture-less regions and from displaced vanishing points caused by irregular lines detected in the scene. By exploiting the characteristics of man-made objects (e.g. buildings) widely seen in street views, this paper proposes a simple yet efficient content-authoring approach. The dominant building plane on which the virtual object will be emplaced is detected and projected to a frontal-parallel view, where the virtual object can be reliably emplaced. Once the virtual object and the true scene are embedded into each other in the frontal-parallel view, they can be converted back to the original view by the inverse projection without any distortion. Experiments on public databases show that the proposed method recovers the camera pose and performs content emplacement with promising performance.
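The warp-emplace-invert idea in this abstract can be sketched with a plane homography: warp the dominant plane to a frontal-parallel view, place the content there, then map its corners back with the inverse homography. The matrix `H` below is a made-up example, not one estimated by the paper's method.

```python
# Sketch of frontal-parallel emplacement via a plane homography.
# H maps original-image coordinates to the frontal-parallel view;
# content placed in the frontal view is sent back with H^{-1}.
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T   # lift to homogeneous coordinates
    return ph[:, :2] / ph[:, 2:3]       # perspective divide

# Example homography (hypothetical, not estimated from an image).
H = np.array([[1.2, 0.1, 5.0],
              [0.0, 1.1, 3.0],
              [0.001, 0.0, 1.0]])
# A square of virtual content authored in the frontal-parallel view...
square_frontal = np.array([[10, 10], [60, 10], [60, 60], [10, 60]], float)
# ...mapped back into the original perspective view.
square_original = apply_homography(np.linalg.inv(H), square_frontal)
```

Because the forward and inverse warps are exact inverses, the round trip introduces no distortion beyond resampling, which is the property the abstract relies on.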
Omnidirectional Stereo Vision for Autonomous Vehicles
Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
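The abstract does not specify its projection model, but the basic operation behind any omnidirectional stereo pipeline is mapping a panorama pixel to a viewing ray. An illustrative sketch for the common equirectangular layout (an assumption on our part, not the paper's model):

```python
# Map a pixel in an equirectangular 360-degree panorama to a unit viewing
# ray. Longitude spans [-pi, pi) across the image width, latitude spans
# [-pi/2, pi/2] down the height. Illustrative only.
import math

def panorama_pixel_to_ray(u, v, width, height):
    """u, v: pixel coordinates; returns a unit direction (x, y, z)."""
    lon = (u / width) * 2.0 * math.pi - math.pi   # longitude
    lat = math.pi / 2.0 - (v / height) * math.pi  # latitude
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z
```

Given such rays from two panoramas and a known baseline, depth follows by triangulation, exactly as in conventional stereo but without the perspective camera's field-of-view limit.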