16 research outputs found

    Robust Invariants From Functionally Constrained Motion

    We address the problem of control-based recovery of robot pose and environmental layout. Panoramic sensors provide a 1D projection of characteristic features of a 2D operation map. The trajectories of these projections contain information about the positions of a priori unknown landmarks in the environment. We introduce the notion of spatiotemporal signatures of projection trajectories. These signatures are global measures, such as area, and are considerably more robust to noise and outliers than the commonly applied point correspondences. By modeling the 2D motion plane as the complex plane, we show by means of complex analysis that our method can be embedded in the well-known affine reconstruction paradigm.
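
    As an illustration of a global, area-based signature, the sketch below computes the signed area enclosed by a sampled 2D trajectory using the shoelace formula in the complex plane. This is a minimal, generic example, not the authors' formulation; the paper's signatures are defined on 1D projection trajectories.

```python
import numpy as np

def trajectory_area_signature(points_xy):
    """Signed area enclosed by a sampled 2D trajectory, computed in the
    complex plane via the shoelace formula. Illustrative only: 'area' is
    just one example of a global measure of a trajectory.
    """
    z = points_xy[:, 0] + 1j * points_xy[:, 1]   # embed the motion plane in C
    z_next = np.roll(z, -1)                      # close the polygon
    return 0.5 * np.imag(np.conj(z) @ z_next)    # shoelace formula in complex form

# Example: a unit square traversed counter-clockwise has signed area +1.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(trajectory_area_signature(square))         # -> 1.0
```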

    Experimental Assessment of Techniques for Fisheye Camera Calibration

    Fisheye lens cameras increase the Field of View (FOV) and have consequently been widely used in applications such as robotics. Using this type of camera in close-range photogrammetry for high-accuracy applications requires rigorous calibration. The main aim of this work is to present the calibration results of a Fuji Finepix S3PRO camera with a Samyang 8mm fisheye lens using rigorous mathematical models. Mathematical models based on the perspective, stereographic, equidistant, orthogonal and equisolid-angle projections were implemented and used in the experiments. Fisheye lenses are generally designed following one of the latter four models, and the Bower-Samyang 8mm lens is based on the stereographic projection. These models were used in combination with symmetric radial, decentering and affinity distortion models. Experiments were performed to verify which set of IOPs (Interior Orientation Parameters) better describes the camera's inner geometry. The collinearity model, which is based on the perspective projection, presented the least accurate results, as expected because fisheye lenses are not designed following the perspective projection. The stereographic, equidistant, orthogonal and equisolid-angle projections presented similar results, even though the Bower-Samyang fisheye lens is built on the stereographic projection. The experimental results also demonstrated a small correlation between IOPs and EOPs (Exterior Orientation Parameters) for the Bower-Samyang lens.
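
    For reference, the ideal radial mappings r(θ) behind the five projection models compared above are standard and are sketched below; the distortion terms used in the paper's full calibration (symmetric radial, decentering, affinity) are omitted.

```python
import numpy as np

# Ideal radial mappings r(theta) for the projection models compared in the paper,
# where theta is the incidence angle (rad) and f the focal length.
PROJECTIONS = {
    "perspective":   lambda theta, f: f * np.tan(theta),
    "stereographic": lambda theta, f: 2 * f * np.tan(theta / 2),
    "equidistant":   lambda theta, f: f * theta,
    "orthogonal":    lambda theta, f: f * np.sin(theta),
    "equisolid":     lambda theta, f: 2 * f * np.sin(theta / 2),
}

# Example: radial image distance of a ray at 60 degrees for an 8 mm lens.
theta, f = np.deg2rad(60.0), 8.0
for name, proj in PROJECTIONS.items():
    print(f"{name:14s} r = {proj(theta, f):.2f} mm")
```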

    Single Cone Mirror Omni-Directional Stereo

    Omni-directional views and stereo information for scene points are both crucial in many computer vision applications. In some demanding applications, such as autonomous robots, both must be acquired in real time without sacrificing too much image resolution. This work describes a novel method that meets these stringent demands with a relatively simple setup and off-the-shelf equipment. Only one simple reflective surface and two regular (perspective) camera views are needed. We first describe the novel stereo method, and then discuss some variations in practical implementation and their respective tradeoffs.
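
    The abstract does not detail the geometry, but catadioptric setups of this kind typically begin by unwrapping the circular omnidirectional view into a panorama. The sketch below shows such a generic polar unwarping step; it is an illustrative assumption, not the paper's stereo method.

```python
import numpy as np

def unwrap_omni(image, center, r_min, r_max, width=720, height=180):
    """Resample a circular omnidirectional view into a cylindrical panorama
    by nearest-neighbour sampling along rays from the image center.
    Generic utility, not the paper's stereo algorithm.
    """
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radii = np.linspace(r_min, r_max, height)
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = np.clip(xs, 0, image.shape[1] - 1)
    ys = np.clip(ys, 0, image.shape[0] - 1)
    return image[ys, xs]

# Example with a synthetic 480x640 grayscale frame.
frame = np.random.rand(480, 640)
panorama = unwrap_omni(frame, center=(320, 240), r_min=40, r_max=230)
print(panorama.shape)  # (180, 720)
```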

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistence performance of the proposed system in real changing environments, including an analysis of its long-term stability.
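
    One plausible reading of a multi-store update rule is sketched below: features repeatedly re-observed in a short-term store are promoted to a long-term store, and long-term features that stop being matched are eventually forgotten. The thresholds and exact rules are assumptions for illustration, not the authors' mechanism.

```python
def update_stores(short_term, long_term, matched_ids, promote_after=3, forget_after=5):
    """Sketch of a multi-store update for reference-view features.
    short_term maps feature id -> consecutive observation count;
    long_term maps feature id -> consecutive miss count.
    Thresholds are illustrative assumptions.
    """
    for fid in matched_ids:
        if fid in long_term:
            long_term[fid] = 0                    # re-observed: reset miss counter
        else:
            short_term[fid] = short_term.get(fid, 0) + 1
            if short_term[fid] >= promote_after:  # stable feature: promote
                long_term[fid] = 0
                del short_term[fid]
    for fid in list(long_term):
        if fid not in matched_ids:
            long_term[fid] += 1                   # count consecutive misses
            if long_term[fid] >= forget_after:    # stale feature: forget
                del long_term[fid]
    return short_term, long_term

# Example: feature 1 is matched often enough to reach the long-term store.
st, lt = {}, {}
for frame_matches in [[1, 2], [1, 2], [1], [1], [3]]:
    st, lt = update_stores(st, lt, frame_matches)
print(sorted(lt))  # -> [1]
```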

    Dense panoramic stereovision: camera calibration and image rectification

    Panoramic stereovision is a very attractive capability, particularly in robotics. However, the particular geometry of panoramic cameras gives rise to epipolar curves with non-trivial equations. In this paper, we present a way to warp the images of a panoramic stereovision bench so that the epipolar lines become parallel straight lines, thus allowing the use of an optimized, fast pixel-correlation-based stereovision algorithm. The paper first introduces a geometric image-formation model for a catadioptric camera composed of parabolic and spherical mirrors, from which both the intrinsic parameters of the system (mirror surfaces and intrinsic camera parameters) and the alignment errors between the mirrors are estimated. It then presents the warping equations used to generate rectified images. Calibration and stereovision results are presented.
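
    For a vertically aligned panoramic stereo bench, epipolar curves become parallel columns once each image is resampled onto an (azimuth, elevation) grid. The sketch below illustrates this kind of rectification warp; the elevation-to-radius mapping would come from the calibrated mirror model described in the paper and is replaced here by a placeholder assumption.

```python
import numpy as np

def rectify_panoramic(image, center, radius_of_elevation, n_az=1024, elevations=None):
    """Warp a catadioptric image onto an (azimuth, elevation) grid so that,
    for a vertically aligned stereo bench, epipolar curves become parallel
    columns. `radius_of_elevation` maps an elevation angle (rad) to an image
    radius and would come from the calibrated mirror model; here it is a stand-in.
    """
    if elevations is None:
        elevations = np.linspace(np.deg2rad(-30), np.deg2rad(30), 256)
    cx, cy = center
    az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
    r = radius_of_elevation(elevations)[:, None]                       # (n_el, 1)
    xs = np.clip((cx + r * np.cos(az)).round().astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(az)).round().astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]                                               # rows: elevation, cols: azimuth

# Placeholder equiangular mapping, not the calibrated mirror model:
rect = rectify_panoramic(np.random.rand(480, 640), (320, 240),
                         lambda el: 150 + 120 * el / np.deg2rad(30))
print(rect.shape)  # (256, 1024)
```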

    Omnistereo: panoramic stereo imaging


    Blinking cubes: a method for polygon-based scene reconstruction

    Thesis (S.B. and M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. By John Andrew Harvey. Includes bibliographical references (p. 50-51).

    3-D scene data recovery using omnidirectional multibaseline stereo

    A traditional approach to extracting geometric information from a large scene is to compute multiple 3-D depth maps from stereo pairs or direct range finders, and then to merge the 3-D data. However, the resulting merged depth maps may be subject to merging errors if the relative poses between depth maps are not known exactly. In addition, the 3-D data may also have to be resampled before merging, which adds complexity and potential sources of error. This paper provides a means of directly extracting 3-D data covering a very wide field of view, thus bypassing the need to merge numerous depth maps. In our work, cylindrical images are first composited from sequences of images taken while the camera is rotated 360° about a vertical axis. By taking such image panoramas at different camera locations, we can recover 3-D data of the scene using a set of simple techniques: feature tracking, an 8-point structure-from-motion algorithm, and multibaseline stereo. We also investigate the effect of median filtering on the recovered 3-D point distributions, and show the results of our approach applied to both synthetic and real scenes.
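
    The 8-point structure-from-motion step mentioned above can be illustrated with the textbook normalized 8-point estimate of the fundamental matrix; this is a standard formulation, not necessarily the exact variant used in the paper.

```python
import numpy as np

def eight_point_fundamental(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix from N >= 8
    correspondences (x2^T F x1 = 0). Textbook formulation, shown only to
    illustrate the structure-from-motion step named in the abstract.
    """
    def normalize(pts):
        # Hartley normalization: center points and scale mean distance to sqrt(2).
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return (T @ homog.T).T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank 2, then undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1
```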