
    Catadioptric stereo-vision system using a spherical mirror

    In the computer vision field, the reconstruction of target surfaces is usually achieved with 3D optical scanners that integrate digital cameras and light emitters. However, these solutions are limited by a narrow field of view, which requires multiple acquisitions from different views to reconstruct complex free-form geometries. The combination of mirrors and lenses (catadioptric systems) can be adopted to overcome this issue. In this work, a stereo catadioptric optical scanner has been developed by assembling two digital cameras, a spherical mirror and a multimedia white-light projector. The adopted configuration defines a non-single-viewpoint system, so a non-central catadioptric camera model has been developed. An analytical solution to compute the projection of a scene point onto the image plane (forward projection) and vice versa (backward projection) is presented. The proposed optical setup enables omnidirectional stereo vision, allowing the reconstruction of target surfaces with a single acquisition. Preliminary results, obtained by measuring a hollow specimen, demonstrate the effectiveness of the described approach.
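
    As a rough illustration of the backward projection mentioned above, the Python/NumPy sketch below back-projects a pixel to a viewing ray, intersects the ray with the spherical mirror, and reflects it about the local surface normal. It assumes pinhole intrinsics K, the camera centre at the origin, and a mirror given by its centre and radius; it is a minimal sketch, not the paper's analytical solution.

        import numpy as np

        def backward_project(pixel, K, mirror_center, mirror_radius):
            # Viewing ray from the camera centre (taken as the origin) through the pixel.
            d = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
            d /= np.linalg.norm(d)
            # Intersect the ray t*d with the spherical mirror |p - c| = r.
            c = np.asarray(mirror_center, dtype=float)
            r = float(mirror_radius)
            b = -2.0 * d.dot(c)
            disc = b * b - 4.0 * (c.dot(c) - r * r)
            if disc < 0.0:
                return None                        # the ray misses the mirror
            t = (-b - np.sqrt(disc)) / 2.0         # nearest intersection
            p = t * d                              # reflection point on the mirror
            n = (p - c) / r                        # outward surface normal
            d_out = d - 2.0 * d.dot(n) * n         # reflected viewing direction
            return p, d_out                        # non-central: each pixel has its own ray origin p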

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement for many autonomous-vehicle and robotics applications. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
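
    The abstract does not state the projection model itself; purely as a hedged placeholder, the Python/NumPy sketch below shows one common 360° panoramic (equirectangular) mapping from a 3D point in the camera frame to pixel coordinates on a width-by-height panorama. The axis conventions are assumptions, not the paper's model.

        import numpy as np

        def equirect_project(point, width, height):
            # Normalise the point to a bearing direction on the unit sphere.
            p = np.asarray(point, dtype=float)
            x, y, z = p / np.linalg.norm(p)
            lon = np.arctan2(x, z)                 # azimuth in (-pi, pi]
            lat = np.arcsin(y)                     # elevation in [-pi/2, pi/2]
            u = (lon / (2.0 * np.pi) + 0.5) * width
            v = (lat / np.pi + 0.5) * height
            return u, v                            # pixel coordinates on the panorama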

    A Novel Omnidirectional Stereo Vision System with a Single Camera

    The omnidirectional vision system has received increasing attention in recent years in many engineering research areas, such as computer vision and mobile robotics, since it offers a wide field of view (FOV). A general method for 360° omnidirectional image acquisition is the catadioptric approach, which uses a coaxially aligned convex mirror and a conventional camera.
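
    For a coaxially aligned convex mirror and a conventional camera, one standard central approximation is the unified (sphere) camera model; the Python/NumPy sketch below illustrates that projection with a mirror parameter xi and pinhole intrinsics K. Both are assumptions for illustration, not parameters taken from this paper.

        import numpy as np

        def unified_sphere_project(point, xi, K):
            # Step 1: project the 3D point onto the unit sphere.
            Xs = np.asarray(point, dtype=float)
            Xs = Xs / np.linalg.norm(Xs)
            # Step 2: perspective projection from a centre shifted by xi along the mirror axis.
            x, y, z = Xs
            m = np.array([x / (z + xi), y / (z + xi), 1.0])
            # Step 3: apply the camera intrinsics to obtain pixel coordinates.
            uv = K @ m
            return uv[0], uv[1]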

    Numerical estimation of epipolar curves for omnidirectional sensors

    The epipolar geometry of pairs of omnidirectional sensors is often difficult to express analytically. We propose an algorithm to numerically estimate epipolar curves for omnidirectional stereo-vision pairs. The algorithm is not limited to this type of sensor and works, for example, with a combination of a panoramic sensor and a traditional camera. Although its computational load is heavy, it works with every kind of sensor, provided that the stereo pair is fully calibrated (all parameters determined), and in particular with catadioptric sensors that do not respect the single-viewpoint constraint.
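
    The numerical procedure sketched in the abstract can be phrased as: back-project a pixel from the first calibrated sensor to a 3D ray, sample points along that ray, and forward-project each sample into the second sensor; the connected projections approximate the epipolar curve. Below is a minimal Python sketch; backproject_1 and project_2 are hypothetical placeholders standing for the two calibrated sensor models.

        import numpy as np

        def epipolar_curve(pixel, backproject_1, project_2, depths):
            # backproject_1 / project_2: calibrated sensor models (placeholders).
            origin, direction = backproject_1(pixel)   # ray from sensor 1 in world coordinates
            curve = []
            for t in depths:                           # sample 3D points along the ray
                X = origin + t * direction
                uv = project_2(X)                      # forward projection into sensor 2
                if uv is not None:                     # keep samples that land in the image
                    curve.append(uv)
            return np.asarray(curve)                   # polyline approximating the epipolar curve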

    Refractive Structure-From-Motion Through a Flat Refractive Interface

    Recovering 3D scene geometry from underwater images involves the Refractive Structure-from-Motion (RSfM) problem, where the image distortions caused by light refraction at the interface between different propagation media invalidate the single-viewpoint assumption. Direct use of the pinhole camera model in RSfM leads to inaccurate camera pose estimation and, consequently, to drift. RSfM methods have been thoroughly studied for the case of a thick glass interface, which assumes two refractive interfaces between the camera and the viewed scene. On the other hand, when the camera lens is in direct contact with the water, there is only one refractive interface. By explicitly considering a refractive interface, we develop a succinct derivation of the refractive fundamental matrix in the form of the generalised epipolar constraint for an axial camera. We use the refractive fundamental matrix to refine initial pose estimates obtained by assuming the pinhole model. This strategy allows us to robustly estimate underwater camera poses where other methods suffer from poor noise sensitivity. We also formulate a new four-view constraint enforcing camera pose consistency along a video, which leads to a novel RSfM framework. For validation, we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate performance in laboratory settings and for applications in endoscopy.
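
    At a single flat interface the ray bending underlying this setup is governed by Snell's law; the Python/NumPy helper below refracts a unit direction at an interface with unit normal n, given the two refractive indices. It is a generic geometric-optics sketch, not the paper's refractive fundamental matrix derivation.

        import numpy as np

        def refract(d, n, n1, n2):
            # d: incoming unit direction; n: interface unit normal pointing towards the incident side.
            d = d / np.linalg.norm(d)
            n = n / np.linalg.norm(n)
            eta = n1 / n2                              # e.g. air-to-water: 1.0 / 1.33 (assumed values)
            cos_i = -float(d.dot(n))
            sin2_t = eta * eta * (1.0 - cos_i * cos_i)
            if sin2_t > 1.0:
                return None                            # total internal reflection, no transmitted ray
            cos_t = np.sqrt(1.0 - sin2_t)
            return eta * d + (eta * cos_i - cos_t) * n # transmitted unit direction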