
    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robotics. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
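
    The abstract does not give the projection model itself, so the following is only a generic sketch of how a vertically stacked omnistereo pair yields range: assuming a shared vertical axis, a known baseline, and a scene point seen at the same azimuth in both panoramas, the horizontal range follows from the two elevation angles.

# Minimal sketch (not from the paper): triangulating range from a pair of
# vertically stacked 360-degree panoramas sharing one vertical axis.
import math

def range_from_elevations(baseline_m, elev_lower_rad, elev_upper_rad):
    """Horizontal range to a point seen at elevation `elev_lower_rad` in the
    lower panorama and `elev_upper_rad` in the upper one (angles above the
    horizon are positive). Derived from
        tan(elev_lower) = h / rho
        tan(elev_upper) = (h - baseline) / rho
    where h is the point's height above the lower camera and rho the range."""
    angular_disparity = math.tan(elev_lower_rad) - math.tan(elev_upper_rad)
    if abs(angular_disparity) < 1e-9:
        return float("inf")  # point effectively at infinity
    return baseline_m / angular_disparity

# Example: 20 cm baseline, point seen 10 deg above the horizon from the lower
# camera and 5 deg above the horizon from the upper camera.
rho = range_from_elevations(0.20, math.radians(10.0), math.radians(5.0))
print(f"estimated horizontal range: {rho:.2f} m")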

    Panoramic Stereovision and Scene Reconstruction

    With the advancement of research in robotics and computer vision, an increasingly large number of applications require the understanding of a scene in three dimensions, and a variety of systems are deployed to this end. This thesis explores a novel 3D imaging technique that uses catadioptric cameras in a stereoscopic arrangement. A secondary system stabilizes the setup in the event that the cameras are misaligned during operation. The system's main advantage is that it is a cost-effective alternative to current state-of-the-art systems that achieve the same goal of 3D imaging; the compromise lies in the quality of depth estimation, which can be improved with a different imager and calibration. The result is a panoramic disparity map generated by the system.
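
    As a hedged illustration of the final step only (not the thesis' code), OpenCV's semi-global block matcher can turn a pair of already-unwrapped, row-aligned panoramas into a panoramic disparity map; the file names and matcher parameters below are placeholders, and prior unwrapping and rectification are assumed.

# Sketch: panoramic disparity from two rectified cylindrical panoramas.
import cv2

left = cv2.imread("left_panorama.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_panorama.png", cv2.IMREAD_GRAYSCALE)
assert left is not None and right is not None, "supply real unwrapped panoramas"

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be a multiple of 16
    blockSize=7,
    P1=8 * 7 * 7,        # smoothness penalties scaled by block size
    P2=32 * 7 * 7,
    uniquenessRatio=10,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("panoramic_disparity.png", vis)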

    Vision-based Navigation and Mapping Using Non-central Catadioptric Omnidirectional Camera

    Omnidirectional catadioptric cameras find use in navigation and mapping owing to their wide field of view. A wider, potentially 360-degree, field of view allows the user to see and move more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The position of the system in an environment was then determined using conditions derived from the reflective properties of the mirror. Object control points were set up and experiments were performed at different sites to test the mathematical models and the location and mapping accuracy achieved by the system. The obtained positions were then used to map the environment.
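
    A minimal geometric sketch of the back-projection step implied here (an assumption on my part, not code from the thesis): once calibration gives the mirror's pose, each camera ray is reflected off the mirror surface with the law of reflection to obtain the outgoing viewing ray.

# Sketch: reflect a camera ray about the local mirror normal (law of reflection).
import numpy as np

def reflect_ray(incoming_dir, surface_normal):
    """Reflected direction r = d - 2 (d . n) n, with d and n normalised."""
    d = incoming_dir / np.linalg.norm(incoming_dir)
    n = surface_normal / np.linalg.norm(surface_normal)
    return d - 2.0 * np.dot(d, n) * n

# Example: a ray travelling along +z hits a mirror facet tilted 45 degrees,
# so the reflected ray leaves along +x.
d = np.array([0.0, 0.0, 1.0])
n = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)   # local facet normal
print(reflect_ray(d, n))   # -> approximately [1, 0, 0]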

    Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of carrying large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision applications under different circumstances.
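
    As a rough illustration of panorama generation (not the authors' code), an annular catadioptric image can be unwrapped into a panorama by resampling along rays from the image centre; in the paper the radial sampling follows the SVP-compliant projection model, whereas the sketch below assumes a simple linear ramp, and the centre and radii are placeholder values.

# Sketch: polar unwrapping of an annular (donut) catadioptric image.
import cv2
import numpy as np

def unwrap_to_panorama(omni, cx, cy, r_min, r_max, pano_w=1024, pano_h=256):
    theta = np.linspace(0.0, 2.0 * np.pi, pano_w, endpoint=False)
    radius = np.linspace(r_min, r_max, pano_h)
    theta_grid, radius_grid = np.meshgrid(theta, radius)
    # For each panorama pixel, look up the source pixel on the annulus.
    map_x = (cx + radius_grid * np.cos(theta_grid)).astype(np.float32)
    map_y = (cy + radius_grid * np.sin(theta_grid)).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, interpolation=cv2.INTER_LINEAR)

omni = cv2.imread("omni_frame.png")   # placeholder input image
assert omni is not None, "supply a real catadioptric frame"
pano = unwrap_to_panorama(omni, cx=640, cy=480, r_min=120, r_max=460)
cv2.imwrite("panorama.png", pano)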

    Single Cone Mirror Omni-Directional Stereo

    Omni-directional view and stereo information for scene points are both crucial in many computer vision applications. In some demanding applications, such as autonomous robots, we need to acquire both in real time without sacrificing too much image resolution. This work describes a novel method that meets all these stringent demands with a relatively simple setup and off-the-shelf equipment. Only one simple reflective surface and two regular (perspective) camera views are needed. First we describe the novel stereo method; then we discuss some variations in practical implementation and their respective tradeoffs.
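
    A generic sketch of the stereo step (an assumption, not the paper's derivation): once the cone mirror assigns each pixel a 3D viewing ray in each of the two perspective views, a scene point can be triangulated as the midpoint of the shortest segment between the two, generally skew, back-projected rays.

# Sketch: midpoint triangulation of two back-projected rays.
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Rays p1(s) = o1 + s*d1 and p2(t) = o2 + t*d2; returns the midpoint of
    the shortest segment between them (rays must not be parallel)."""
    o1, d1 = np.asarray(o1, dtype=float), np.asarray(d1, dtype=float)
    o2, d2 = np.asarray(o2, dtype=float), np.asarray(d2, dtype=float)
    # Normal equations for minimising |p1(s) - p2(t)|^2 over s and t.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Example: two rays from viewpoints 10 cm apart, converging one metre ahead.
p = triangulate_midpoint([0, 0, 0], [0.0, 0.0, 1.0],
                         [0.1, 0, 0], [-0.1, 0.0, 1.0])
print(p)   # -> approximately [0, 0, 1]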

    Design of a training tool for improving the use of hand-held detectors in humanitarian demining

    Purpose - The purpose of this paper is to introduce the design of a training tool intended to improve deminers' technique during close-in detection tasks. Design/methodology/approach - Following an introduction that highlights the impact of mines and improvised explosive devices (IEDs), and the importance of training for enhancing the safety and efficiency of deminers, this paper considers the utilization of a sensory tracking system to study the skill of expert hand-held detector operators. From the compiled information, critical performance variables can be extracted, assessed, and quantified, so that they can afterwards be used as reference values for the training task. In a second stage, the sensory tracking system is used for analysing the trainees' skills. The experimentation phase aims to test the effectiveness of the elements that compose the sensory system for tracking the hand-held detector during the training sessions. Findings - The proposed training tool will be able to evaluate the deminers' efficiency during scanning tasks and will provide important information for improving their competences. Originality/value - This paper highlights the need to introduce emerging technologies for enhancing current training techniques for deminers and proposes a sensory tracking system that can be successfully utilised for evaluating trainees' performance with hand-held detectors. © Emerald Group Publishing Limited. The authors acknowledge funding from the European Community's Seventh Framework Programme (FP7/2007‐2013 TIRAMISU) under Grant Agreement No. 284747 and partial funding under Robocity2030 S‐0505/DPI‐0176 and FORTUNA A1/039883/11 (Agencia Española de Cooperación Internacional para el Desarrollo – AECID). Dr Roemi Fernández acknowledges support from CSIC under grant JAE‐DOC. Dr Héctor Montes acknowledges support from Universidad Tecnológica de Panamá and from CSIC under grant JAE‐DOC. Peer reviewed.

    Real Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can thus be estimated robustly at the correct scale, thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
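
    The following is a heavily simplified, hedged sketch of the plane-sweeping idea for altitude estimation (it ignores the fisheye model and is not the paper's implementation): hypothesise candidate altitudes for the ground plane, warp one view onto the other with the plane-induced homography, and keep the altitude giving the best photo-consistency. The intrinsics K, the relative pose (R, t), and the two images are assumed inputs.

# Sketch: plane sweep over candidate altitudes using the ground-plane homography.
import cv2
import numpy as np

def ground_plane_homography(K, R, t, altitude):
    """Homography mapping view-1 pixels to view-2 pixels for the plane
    n . X1 = altitude with n = (0, 0, 1), i.e. a camera roughly facing the
    ground along its optical axis; (R, t) take view-1 points into view 2."""
    n = np.array([0.0, 0.0, 1.0])
    H = K @ (R + np.outer(t, n) / altitude) @ np.linalg.inv(K)
    return H / H[2, 2]

def sweep_altitude(img1, img2, K, R, t, candidate_altitudes):
    """Return the candidate altitude with the lowest photo-consistency cost."""
    best_alt, best_cost = None, np.inf
    for altitude in candidate_altitudes:
        H = ground_plane_homography(K, R, t, altitude)
        warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
        cost = np.mean(np.abs(warped.astype(np.float32) -
                              img2.astype(np.float32)))  # mean absolute difference
        if cost < best_cost:
            best_alt, best_cost = altitude, cost
    return best_alt

# Usage (placeholders): altitude = sweep_altitude(img1, img2, K, R, t,
#                                                 np.linspace(0.5, 5.0, 50))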