6,595 research outputs found
3D Reconstruction with Low Resolution, Small Baseline and High Radial Distortion Stereo Images
In this paper we analyze and compare approaches for 3D reconstruction from
low-resolution (250×250 pixels), high-radial-distortion stereo images acquired
with a small baseline (approximately 1 mm). These images are captured with the
NanEye Stereo system manufactured by CMOSIS/AWAIBA. These stereo
cameras also have small apertures, which means that high levels of illumination
are required. The goal was to develop an approach yielding accurate
reconstructions, with a low computational cost, i.e., avoiding non-linear
numerical optimization algorithms. In particular we focused on the analysis and
comparison of radial distortion models. To perform the analysis and comparison,
we defined a baseline method based on available software and methods, such as
the Bouguet toolbox [2] or the Computer Vision Toolbox from MATLAB. The
approaches tested were based on the use of the polynomial model of radial
distortion, and on the application of the division model. The issue of the
center of distortion was also addressed within the framework of the application
of the division model. We concluded that the division model with a single
radial distortion parameter has limitations
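The two distortion models compared in this abstract can be sketched in a few lines. Everything below (function names, parameter values, the choice of normalized coordinates centered at the distortion center) is an illustrative assumption, not the paper's code: the polynomial model maps undistorted to distorted coordinates via a power series in the squared radius, while the single-parameter division model undistorts with a simple division.

```python
def undistort_division(xd, yd, lam):
    """Single-parameter division model: x_u = x_d / (1 + lam * r_d^2).

    xd, yd are distorted normalized coordinates relative to the
    (assumed) center of distortion; lam is the distortion parameter.
    """
    r2 = xd * xd + yd * yd
    s = 1.0 + lam * r2
    return xd / s, yd / s


def distort_polynomial(xu, yu, k1, k2=0.0):
    """Polynomial model: x_d = x_u * (1 + k1 * r^2 + k2 * r^4).

    xu, yu are undistorted normalized coordinates; k1, k2 are the
    first two radial distortion coefficients.
    """
    r2 = xu * xu + yu * yu
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * s, yu * s
```

Note the asymmetry: the division model is trivially invertible in the undistortion direction, which is one reason it is attractive when avoiding nonlinear numerical optimization, as the abstract describes.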
On the Issue of Camera Calibration with Narrow Angular Field of View
This paper considers the issue of calibrating a
camera with a narrow angular field of view using standard perspective
methods from computer vision. In doing so, the significance
of perspective distortion for both camera calibration and
pose estimation is revealed. Since cameras with a narrow angular field of view
make it difficult to obtain images rich in perspective cues,
the accuracy of the calibration results is expectedly low.
To address this, we propose an alternative method that compensates for
this loss by utilizing the pose readings of a robotic manipulator.
It facilitates accurate pose estimation through nonlinear optimization,
simultaneously minimizing reprojection errors and errors in the manipulator
transformations. Accurate pose estimation in
turn enables accurate parametrization of a perspective camera
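The joint optimization this abstract describes can be sketched as a least-squares cost combining the two error terms. The pinhole projection, function names, and the scalar weight below are illustrative assumptions, not the paper's implementation:

```python
def reproject(f, cx, cy, X, Y, Z):
    """Project a camera-frame 3D point with an assumed pinhole model
    (focal length f, principal point (cx, cy))."""
    return f * X / Z + cx, f * Y / Z + cy


def joint_cost(points_3d, points_2d, f, cx, cy, pose_residuals, weight=1.0):
    """Sum of squared reprojection errors plus a weighted penalty on
    manipulator-transformation residuals, minimized together as in the
    abstract's description (weights and residual form are assumptions)."""
    cost = 0.0
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        up, vp = reproject(f, cx, cy, X, Y, Z)
        cost += (u - up) ** 2 + (v - vp) ** 2
    cost += weight * sum(r * r for r in pose_residuals)
    return cost
```

In practice such a cost would be handed to a nonlinear least-squares solver; the sketch only shows how the two error sources enter a single objective.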
Algorithms for trajectory integration in multiple views
This thesis addresses the problem of deriving a coherent and accurate localization
of moving objects from partial visual information when data are generated by cameras
placed at different view angles with respect to the scene. The framework is built around
applications of scene monitoring with multiple cameras. Firstly, we demonstrate how a
geometric-based solution exploits the relationships between corresponding feature points
across views and improves accuracy in object location. Then, we improve the estimation
of objects' locations with geometric transformations that account for lens distortions.
Additionally, we study the integration of the partial visual information generated by each
individual sensor and their combination into one single frame of observation that considers
object association and data fusion. Our approach is fully image-based, only relies on 2D
constructs and does not require any complex computation in 3D space. We exploit the
continuity and coherence of objects' motion when crossing the cameras' fields of view. Additionally,
we work under the assumption of a planar ground plane and a wide baseline (i.e.
cameras' viewpoints are far apart). The main contributions are: i) the development of a
framework for distributed visual sensing that accounts for inaccuracies in the geometry
of multiple views; ii) the reduction of trajectory mapping errors using a statistical-based
homography estimation; iii) the integration of a polynomial method for correcting inaccuracies
caused by the cameras' lens distortion; iv) a global trajectory reconstruction
algorithm that associates and integrates fragments of trajectories generated by each camera
…
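The trajectory-mapping step underlying these contributions, warping each camera's 2D track points onto a common reference plane with a homography, can be sketched as follows. The matrix and point values are illustrative assumptions; the thesis's statistical homography estimation is not reproduced here.

```python
def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography H (nested lists),
    dividing by the homogeneous coordinate w."""
    mapped = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
        v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
        mapped.append((u, v))
    return mapped


# Usage: a pure-translation homography shifts every point by (2, 3).
H = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]
```

Because the approach is fully image-based, a mapping of this kind per camera is all that is needed to bring trajectory fragments into one frame of observation before association and fusion.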