146 research outputs found

Omnidirectional Image Processing Using Geodesic Metric

    Due to the distortions of catadioptric sensors, omnidirectional images cannot be treated as classical images. Although the equivalence between central catadioptric images and spherical images is now well known and widely used, spherical analysis often leads to complex methods that are particularly tricky to employ. In this paper, we propose to derive omnidirectional image treatments using the geodesic metric. We demonstrate that this approach efficiently adapts classical image processing to omnidirectional images.
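As a rough illustration of the geodesic metric the abstract refers to, the sketch below lifts pixels onto the unit sphere with the unified central catadioptric model and measures the arc length between them. The calibration values (`f`, `xi`, image centre) are placeholders, not taken from the paper.

```python
import numpy as np

def backproject_to_sphere(u, v, f=200.0, xi=1.0, cx=320.0, cy=240.0):
    """Lift an image point onto the unit sphere with the unified
    (Geyer/Barreto) catadioptric model. f, xi, cx, cy are hypothetical
    calibration values, not the paper's."""
    x, y = (u - cx) / f, (v - cy) / f
    r2 = x * x + y * y
    # Inverse projection of the unified model.
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    s = np.array([eta * x, eta * y, eta - xi])
    return s / np.linalg.norm(s)

def geodesic_distance(p1, p2):
    """Arc length on the unit sphere between two back-projected pixels."""
    s1, s2 = backproject_to_sphere(*p1), backproject_to_sphere(*p2)
    return np.arccos(np.clip(np.dot(s1, s2), -1.0, 1.0))
```

Any classical operator that compares pixel positions can then substitute this arc length for the Euclidean pixel distance.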

    Adaptative Markov Random Fields for Omnidirectional Vision

    Images obtained with catadioptric sensors contain significant deformations which prevent the direct use of classical image treatments. Thus, Markov Random Fields (MRF), whose usefulness is now well established for projective image processing, cannot be used directly on catadioptric images because of the inadequacy of the neighborhood. In this paper, we propose to define a new neighborhood for MRF using the equivalence theorem developed for central catadioptric sensors. We show the importance of this adaptation for a motion detection application.
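A minimal sketch of such an adapted neighborhood: instead of the usual 8-connected grid, each pixel's MRF neighbors are the pixels whose back-projections are geodesically closest on the unit sphere. The lifting that fills `sphere_map` is assumed to come from a calibrated central catadioptric model; this is an illustration, not the paper's exact construction.

```python
import numpy as np

def geodesic_neighbors(sphere_map, u, v, k=8):
    """Pick the k pixels geodesically closest on the unit sphere to
    pixel (u, v), replacing the usual 8-connected grid neighborhood.
    sphere_map has shape (H, W, 3): each pixel's unit back-projection."""
    h, w, _ = sphere_map.shape
    s0 = sphere_map[v, u]
    # Geodesic distance from (u, v) to every pixel at once.
    d = np.arccos(np.clip(sphere_map.reshape(-1, 3) @ s0, -1.0, 1.0))
    d[v * w + u] = np.inf            # exclude the pixel itself
    idx = np.argpartition(d, k)[:k]  # indices of the k smallest distances
    return [(i % w, i // w) for i in idx]
```

Near the image centre this reduces to the usual 8-neighborhood, while near the mirror border it automatically stretches with the distortion.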

    Fast Central Catadioptric Line Extraction

    Lines are particularly important features for tasks such as calibration, structure from motion, and 3D reconstruction in computer vision. However, line detection in catadioptric images is not trivial because the projection of a 3D line is a conic, possibly degenerate. If the sensor is calibrated, it has already been demonstrated that each such conic can be described by two parameters. Accordingly, some methods based on adapting conventional line detection techniques have been proposed. However, most of these methods suffer from the same disadvantages as in the perspective case (computing time, accuracy, robustness, etc.). In this paper, we therefore propose a new method for line detection in central catadioptric images, comparable to the polygonal approximation approach. With this method, only two points of a chain are needed to extract a catadioptric line with very high accuracy. Moreover, this algorithm is particularly fast and applicable in real time. We also present experimental results with quantitative and qualitative evaluations in order to show the quality of the results and the perspectives of this method.
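The two-point description mentioned above follows from the sphere model: for a calibrated central catadioptric sensor, a 3D line projects onto the unit sphere as a great circle, fully determined by the normal of its plane. A hedged sketch, not the paper's full algorithm:

```python
import numpy as np

def catadioptric_line_from_two_points(s1, s2):
    """A 3D line projects on the unit sphere as a great circle; the
    circle is fully defined by its plane normal, computed from just two
    back-projected chain points (the two-parameter description)."""
    n = np.cross(s1, s2)
    return n / np.linalg.norm(n)

def on_line(n, s, tol=1e-2):
    """A sphere point s lies on the line iff it lies on the plane n.s = 0."""
    return abs(np.dot(n, s)) < tol
```

A polygonal-approximation-style scan can then test each chain point against the plane instead of fitting a full conic.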

    Robust Attitude Estimation with Catadioptric Vision

    Attitude (roll and pitch) is essential data for the navigation of a UAV. Rather than using inertial sensors, we propose a catadioptric vision system allowing fast, robust, and accurate estimation of these angles. We show that optimizing a sky/ground partitioning criterion, combined with the specific geometric characteristics of the catadioptric sensor, provides very interesting results. Experimental results obtained on real sequences are presented and compared with inertial sensor measurements.
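One way such a sky/ground partition can yield roll and pitch: the horizon great circle's plane normal estimates the vertical direction in the camera frame, and the two angles follow from its components. The axis convention below (x right, y forward, z up) is an assumption, not taken from the paper.

```python
import numpy as np

def roll_pitch_from_vertical(n):
    """Roll and pitch (rad) of the camera from an estimated vertical
    direction n in the camera frame, e.g. the normal of the sky/ground
    great circle. Axis convention is a hypothetical choice."""
    n = n / np.linalg.norm(n)
    roll = np.arctan2(n[0], n[2])
    pitch = np.arctan2(n[1], n[2])
    return roll, pitch
```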

    Central catadioptric image processing with geodesic metric

    Because of the distortions produced by the insertion of a mirror, catadioptric images cannot be processed like classical perspective images. Although the equivalence between such images and spherical images is well known, the use of spherical harmonic analysis often leads to image processing methods that are difficult to implement. In this paper, we propose to define catadioptric image processing from the geodesic metric on the unit sphere. We show that this definition allows classical image processing methods to be adapted very simply. We focus in particular on image gradient estimation, interest point detection, and matching. More generally, the proposed approach extends traditional image processing techniques based on the Euclidean metric to central catadioptric images. We show the efficiency of the approach through various experimental results and quantitative evaluations.
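A sketch of geodesic-metric gradient estimation in the spirit of this abstract: each finite difference is normalised by the arc length actually separating the neighbouring pixels on the sphere, rather than by a constant pixel pitch. This is an illustration under that assumption, not the paper's exact operator.

```python
import numpy as np

def geodesic_gradient(img, sphere_map, u, v):
    """Central-difference gradient at pixel (u, v) where each finite
    difference is divided by the geodesic arc length between the two
    neighbours on the unit sphere. sphere_map (H, W, 3) holds each
    pixel's unit back-projection from a calibrated model."""
    def arc(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    du = arc(sphere_map[v, u - 1], sphere_map[v, u + 1])
    dv = arc(sphere_map[v - 1, u], sphere_map[v + 1, u])
    gu = (float(img[v, u + 1]) - float(img[v, u - 1])) / du
    gv = (float(img[v + 1, u]) - float(img[v - 1, u])) / dv
    return gu, gv
```

Near the mirror border, where neighbouring pixels subtend larger arcs, the same intensity difference thus produces a smaller gradient, as the metric dictates.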

    Introduction


    Dynamic Programming and Skyline Extraction in Catadioptric Infrared Images

    Unmanned Aerial Vehicles (UAV) are the subject of increasing interest in many applications, and a key requirement for autonomous navigation is the attitude/position stabilization of the vehicle. Previous works have suggested using catadioptric vision, instead of traditional perspective cameras, in order to gather much more information from the environment and therefore improve the robustness of the UAV attitude/position estimation. This paper belongs to a series of recent publications of our research group concerning catadioptric vision for UAVs. Currently, we focus on the extraction of the skyline in catadioptric images, since it provides important information about the attitude/position of the UAV. For example, DEM-based methods can match the extracted skyline against a Digital Elevation Map (DEM) through a registration process, which makes it possible to estimate the attitude and position of the camera. Like standard cameras, catadioptric systems cannot work in low-luminosity situations because they are based on visible light. To overcome this important limitation, in this paper we propose using a catadioptric infrared camera and extending one of our skyline detection methods to catadioptric infrared images. The task of extracting the best skyline in images is usually converted into an energy minimization problem that can be solved by dynamic programming. The major contribution of this paper is the extension of dynamic programming to catadioptric images using an adapted neighborhood and an appropriate scanning direction. Finally, we present experimental results to demonstrate the validity of our approach.
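The energy-minimization step can be illustrated with the classical dynamic program over columns. The catadioptric adaptation described in the paper would replace the plain left-to-right scan with an angular scan and an adapted neighborhood; this sketch keeps the standard grid for brevity, and the cost matrix and smoothness weight are placeholders.

```python
import numpy as np

def extract_skyline(cost, smooth=1.0):
    """Dynamic-programming skyline extraction: choose one row per
    column minimizing the sum of per-pixel costs plus a smoothness
    penalty |r - r_prev| between adjacent columns."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    rows = np.arange(h)
    for c in range(1, w):
        # trans[r, r_prev]: accumulated cost of reaching row r via r_prev
        trans = acc[:, c - 1][None, :] + smooth * np.abs(rows[:, None] - rows[None, :])
        back[:, c] = np.argmin(trans, axis=1)
        acc[:, c] += trans[rows, back[:, c]]
    # Backtrack the optimal path from the cheapest final row.
    sky = np.zeros(w, dtype=int)
    sky[-1] = int(np.argmin(acc[:, -1]))
    for c in range(w - 2, -1, -1):
        sky[c] = back[sky[c + 1], c + 1]
    return sky
```

In an infrared image the per-pixel cost could, for instance, penalise weak vertical temperature gradients, but the choice of energy is orthogonal to the optimisation shown here.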

    Control of a PTZ camera in a hybrid vision system

    In this paper, we propose a new approach to steer a PTZ camera in the direction of a detected object visible from another fixed camera equipped with a fisheye lens. This heterogeneous association of two cameras with different characteristics is called a hybrid stereo-vision system. The presented method employs epipolar geometry in a smart way in order to reduce the search range for the desired region of interest. Furthermore, we propose a target recognition method designed to cope with illumination problems, the distortion of the omnidirectional image, and the inherent dissimilarity of resolution and color response between the two cameras. Experimental results with synthetic and real images show the robustness of the proposed method.
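The epipolar reduction of the search range can be sketched as follows, assuming a fundamental matrix `F` relating the two (rectified) views is available; handling the raw fisheye geometry would require the hybrid model described in the paper.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l = F x in the second (PTZ) view of a point x seen
    in the first (fisheye) view, in homogeneous pixel coordinates.
    F is a hypothetical fundamental matrix between rectified views."""
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.linalg.norm(l[:2])  # normalise so l . p is point-line distance

def search_band(points, l, band=2.0):
    """Keep only candidate points within `band` pixels of the epipolar
    line, mimicking the reduced region of interest."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return points[np.abs(pts @ l) < band]
```

The target recognition step then only has to score the candidates surviving this band, rather than the whole PTZ image.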

    Joint 3D Shape and Motion Estimation from Rolling Shutter Light-Field Images

    Full text link
    In this paper, we propose an approach to address the problem of 3D reconstruction of scenes from a single image captured by a light-field camera equipped with a rolling shutter sensor. Our method leverages the 3D information cues present in the light-field and the motion information provided by the rolling shutter effect. We present a generic model for the imaging process of this sensor and a two-stage algorithm that minimizes the re-projection error while considering the position and motion of the camera in a motion-shape bundle adjustment estimation strategy. Thereby, we provide an instantaneous 3D shape-and-pose-and-velocity sensing paradigm. To the best of our knowledge, this is the first study to leverage this type of sensor for this purpose. We also present a new benchmark dataset composed of different light-fields showing rolling shutter effects, which can be used as a common base to improve evaluation and track progress in the field. We demonstrate the effectiveness and advantages of our approach through several experiments conducted for different scenes and types of motions. The source code and dataset are publicly available at: https://github.com/ICB-Vision-AI/RSL
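A toy version of the rolling-shutter imaging model underlying this idea: the camera translation depends on the row at which a point is observed, so projection becomes a small fixed-point problem. Constant translational velocity and identity rotation are simplifying assumptions; the paper's generic model is richer.

```python
import numpy as np

def project_rolling_shutter(X, pose0, vel, f, row_time, h, iters=5):
    """Reproject a 3D point under a constant-velocity rolling-shutter
    model: the camera translation at row v is pose0 + vel * v * row_time,
    so the projected row depends on the pose and is solved by fixed-point
    iteration. Rotation velocity is ignored in this sketch."""
    v = h / 2.0                       # initial row guess
    for _ in range(iters):
        t = pose0 + vel * v * row_time
        Xc = X - t                    # camera at t, identity rotation
        u = f * Xc[0] / Xc[2]
        v = f * Xc[1] / Xc[2]
    return u, v
```

A motion-shape bundle adjustment would minimise the squared difference between such predictions and the observed sub-aperture image points over shape, pose, and velocity jointly.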

    Real Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can thus be estimated robustly at the correct scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
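The plane-sweeping idea can be sketched generically: candidate ground-plane distances are scored through the plane-induced homography, and the best score gives the altitude. The `score` callback and the calibration inputs below are placeholders, not the paper's implementation.

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane with unit normal n at distance d,
    for relative pose (R, t) and intrinsics K."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def sweep_altitude(score, altitudes):
    """Plane sweep over candidate ground-plane distances: score(d) is
    assumed to warp one view onto the other through the plane-induced
    homography and return a photometric dissimilarity; the minimiser
    is the estimated altitude."""
    costs = [score(d) for d in altitudes]
    return altitudes[int(np.argmin(costs))]
```

In the hybrid rig, the warp inside `score` would go between the fisheye and perspective views, which is exactly what makes explicit feature matching unnecessary.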