Parameterized Synthetic Image Data Set for Fisheye Lens
Based on different projection geometries, a fisheye image can be represented as a parameterized non-rectilinear image. Deep neural networks (DNNs) are one solution for extracting parameters that describe fisheye image features. However, training a reasonable DNN prediction model requires a large number of images. In this paper, we propose to extend the scale of the training dataset using parameterized synthetic images. This effectively boosts the diversity of the images and avoids the limitation on dataset scale. To simulate different viewing angles and distances, we apply controllable parameterized projection processes during the transformation. The reliability of the proposed method is verified on test images captured by our fisheye camera. The synthetic dataset is the first that can be extended to a large-scale labeled fisheye image dataset. It is accessible via: http://www2.leuphana.de/misl/fisheye-data-set/.
Comment: 2018 5th International Conference on Information Science and Control Engineering
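The abstract does not specify which projection parameterization the synthetic images use; as a rough illustration of a parameterized fisheye projection, here is a sketch of the common equidistant model r = f·θ (function name, focal length, and principal point are illustrative, not from the paper):

```python
import numpy as np

def project_equidistant(points, f, cx, cy):
    """Project 3D camera-frame points with the equidistant fisheye model r = f*theta.

    points: (N, 3) array of 3D points; f: focal length in pixels;
    (cx, cy): principal point. Illustrative sketch, not the paper's model.
    """
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# A point on the optical axis lands exactly on the principal point.
uv = project_equidistant(np.array([[0.0, 0.0, 1.0]]), f=300.0, cx=320.0, cy=240.0)
```

Varying f, the principal point, and the pose of rendered content under such a model is one way a controllable synthetic fisheye generator could be parameterized.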
Rectification from Radially-Distorted Scales
This paper introduces the first minimal solvers that jointly estimate lens
distortion and affine rectification from repetitions of rigidly transformed
coplanar local features. The proposed solvers incorporate lens distortion into
the camera model and extend accurate rectification to wide-angle images that
contain nearly any type of coplanar repeated content. We demonstrate a
principled approach to generating stable minimal solvers by the Gröbner basis
method, which is accomplished by sampling feasible monomial bases to maximize
numerical stability. Synthetic and real-image experiments confirm that the
solvers give accurate rectifications from noisy measurements when used in a
RANSAC-based estimator. The proposed solvers demonstrate superior robustness to
noise compared to the state-of-the-art. The solvers work on scenes without
straight lines and, in general, relax the strong assumptions on scene content
made by the state of the art. Accurate rectifications on imagery taken with lenses ranging from narrow-angle to near fisheye demonstrate the wide applicability of the proposed method. The method is fully automated, and the code is publicly available at https://github.com/prittjam/repeats.
Comment: pre-print
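The minimal solvers above plug into a RANSAC loop as the model-fitting step. As a generic illustration of that estimator pattern (not the authors' rectification solver), here is a minimal sketch with a toy 2-point line solver standing in for the minimal solver; all names and thresholds are illustrative:

```python
import numpy as np

def ransac(data, fit_minimal, residuals, sample_size, thresh, iters=200, seed=None):
    """Generic RANSAC loop: a minimal solver plugs in as `fit_minimal`.
    Keeps the model with the largest inlier set. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iters):
        sample = data[rng.choice(len(data), sample_size, replace=False)]
        model = fit_minimal(sample)
        if model is None:
            continue  # degenerate sample
        inliers = residuals(model, data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy use: 2-point minimal solver for a line y = a*x + b, with gross outliers.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 1.0
y[:20] += rng.normal(0.0, 5.0, 20)      # contaminate 20 points
pts = np.stack([x, y], axis=1)

def fit_line(s):
    (x1, y1), (x2, y2) = s
    if abs(x2 - x1) < 1e-12:
        return None
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def line_res(m, d):
    a, b = m
    return np.abs(d[:, 1] - (a * d[:, 0] + b))

model, inl = ransac(pts, fit_line, line_res, sample_size=2, thresh=0.05, seed=1)
```

The paper's contribution sits in making `fit_minimal` stable, small, and fast for the joint distortion-plus-rectification problem, which is what keeps the number of RANSAC iterations low.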
Universal Geometric Camera Calibration with Statistical Model Selection
We propose a new universal camera calibration approach that uses statistical information criteria for automatic camera model selection. It requires the camera to observe a planar pattern from different positions; closed-form estimates of the intrinsic and extrinsic parameters are then computed, followed by nonlinear optimization. In lieu of modeling radial distortion, the lens projection of the camera is modeled, and in addition we include decentering distortion. This approach is particularly advantageous for wide-angle (fisheye) camera calibration because it often reduces the complexity of the model compared to modeling radial distortion. We then apply statistical information criteria to automatically select the complexity of the camera model for any lens type. The complete algorithm is evaluated on synthetic and real data for several different lens projections, and we compare against existing methods that model radial distortion.
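The abstract does not say which information criterion is used; as a sketch of how such a criterion can arbitrate camera-model complexity, here is the Bayesian Information Criterion under a Gaussian residual assumption (the residual sums and parameter counts below are made-up illustrative values, not the paper's results):

```python
import numpy as np

def bic(rss, n, k):
    """BIC for a Gaussian residual model: n*ln(RSS/n) + k*ln(n).
    Lower is better; k*ln(n) penalizes extra camera-model parameters."""
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical reprojection residuals for two candidate camera models:
# extra parameters must reduce error enough to justify the penalty.
n = 500                              # number of reprojection residuals
rss_simple, k_simple = 120.0, 4      # e.g. a minimal projection model
rss_complex, k_complex = 118.0, 9    # e.g. + decentering/distortion terms
chosen = ("simple"
          if bic(rss_simple, n, k_simple) <= bic(rss_complex, n, k_complex)
          else "complex")
```

Here the tiny error reduction does not pay for five extra parameters, so the simpler model is selected; with a larger error gap the decision flips.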
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the
past decade, often restricted to the type of camera optics, or the underlying
motion manifold observed. We envision robots to be able to learn and perform
these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.
Comment: Conference paper; submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures, 2 tables
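A Mixture Density Network head outputs mixture parameters rather than a point estimate; as a minimal sketch of evaluating the resulting density (1-D for clarity; the shapes and names are illustrative, not the paper's architecture):

```python
import numpy as np

def mdn_density(y, logits, mu, log_sigma):
    """Evaluate a 1-D Gaussian mixture density p(y) from MDN head outputs.

    logits: (K,) unnormalized mixing weights; mu, log_sigma: (K,) component
    means and log standard deviations. Illustrative sketch only.
    """
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                                   # softmax over components
    sigma = np.exp(log_sigma)
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.sum(pi * comp))

# Two equally weighted unit-variance components at 0 and 3;
# the density at y=0 is dominated by the first component.
p = mdn_density(0.0, np.zeros(2), np.array([0.0, 3.0]), np.zeros(2))
```

In an ego-motion MDN the network would emit such parameters conditioned on optical-flow input, so multi-modal motion hypotheses can be represented instead of a single regression output.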
Radially-Distorted Conjugate Translations
This paper introduces the first minimal solvers that jointly solve for
affine-rectification and radial lens distortion from coplanar repeated
patterns. Even with imagery from moderately distorted lenses, plane
rectification using the pinhole camera model is inaccurate or invalid. The
proposed solvers incorporate lens distortion into the camera model and extend
accurate rectification to wide-angle imagery, which is now common from consumer
cameras. The solvers are derived from constraints induced by the conjugate
translations of an imaged scene plane, which are integrated with the division
model for radial lens distortion. The hidden-variable trick with ideal
saturation is used to reformulate the constraints so that the solvers generated
by the Gröbner-basis method are stable, small, and fast.
Rectification and lens distortion are recovered from either one conjugately
translated affine-covariant feature or two independently translated
similarity-covariant features. The proposed solvers are used in a RANSAC-based
estimator, which gives accurate rectifications after few iterations. The
proposed solvers are evaluated against the state-of-the-art and demonstrate
significantly better rectifications on noisy measurements. Qualitative results
on diverse imagery demonstrate high-accuracy undistortions and rectifications.
The source code is publicly available at https://github.com/prittjam/repeats
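The one-parameter division model named in the abstract has the well-known form x_u = x_d / (1 + λ r²) about the distortion center; a minimal sketch of applying it (the center and λ values below are illustrative):

```python
import numpy as np

def undistort_division(pts, lam, center=(0.0, 0.0)):
    """Map distorted points to undistorted ones with the one-parameter
    division model: x_u = c + (x_d - c) / (1 + lam * r^2), r = |x_d - c|."""
    c = np.asarray(center, dtype=float)
    d = pts - c
    r2 = np.sum(d * d, axis=1, keepdims=True)   # squared radius per point
    return c + d / (1.0 + lam * r2)

# lam = 0 reduces to the pinhole case: points pass through unchanged.
pts = np.array([[0.1, 0.2], [0.5, -0.3]])
out = undistort_division(pts, 0.0)
```

Negative λ models the barrel distortion typical of wide-angle lenses; the solvers in the paper estimate λ jointly with the rectifying transform rather than assuming it known.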