
    OMNIDIRECTIONAL IMAGE PROCESSING USING GEODESIC METRIC

    Due to the distortions of catadioptric sensors, omnidirectional images cannot be processed as classical images. Although the equivalence between central catadioptric images and spherical images is now well known and widely used, spherical analysis often leads to complex methods that are particularly tricky to employ. In this paper, we propose to derive omnidirectional image treatments using the geodesic metric. We demonstrate that this approach allows classical image processing to be adapted efficiently to omnidirectional images.
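
    A minimal sketch of the core idea, with assumed helper names and calibration values (not taken from the paper's code): lift catadioptric pixels onto the unit sphere with the unified central catadioptric model and compare them with the geodesic (arc-length) metric rather than the Euclidean pixel metric.

```python
# Sketch (assumed names and calibration values): geodesic distances between
# catadioptric pixels measured on the viewing sphere.
import numpy as np

def lift_to_sphere(u, v, xi, fx, fy, cx, cy):
    """Lift an image point to the unit sphere using the unified central
    catadioptric model; xi is the mirror parameter (calibrated sensor assumed)."""
    x, y = (u - cx) / fx, (v - cy) / fy
    r2 = x * x + y * y
    lam = (xi + np.sqrt(1 + (1 - xi * xi) * r2)) / (1 + r2)
    p = np.array([lam * x, lam * y, lam - xi])
    return p / np.linalg.norm(p)

def geodesic_distance(p, q):
    """Arc length between two points on the unit sphere."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

# Two pixels one pixel apart in the image can be at very different geodesic
# distances depending on where they lie on the mirror.
p1 = lift_to_sphere(320, 240, xi=0.96, fx=300, fy=300, cx=320, cy=240)
p2 = lift_to_sphere(321, 240, xi=0.96, fx=300, fy=300, cx=320, cy=240)
print(geodesic_distance(p1, p2))
```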

    Adaptative Markov Random Fields for Omnidirectional Vision

    Images obtained with catadioptric sensors contain significant deformations which prevent the direct use of classical image treatments. Thus, Markov Random Fields (MRF), whose usefulness is now obvious for projective image processing, cannot be applied directly to catadioptric images because of the inadequacy of the neighborhood. In this paper, we propose to define a new MRF neighborhood by using the equivalence theorem developed for central catadioptric sensors. We show the importance of this adaptation for a motion detection application.
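
    One way the adapted neighborhood can be sketched, under assumed names and calibration values (not the authors' code): a pixel's MRF clique is the set of pixels whose spherical projections lie within a fixed geodesic radius, so the neighborhood grows or shrinks in the image plane with the local mirror distortion.

```python
# Sketch (assumed unified-model calibration): geodesic-radius MRF neighborhood
# replacing the usual fixed 4/8-connectivity.
import numpy as np

def lift(u, v, xi=0.96, fx=300.0, fy=300.0, cx=320.0, cy=240.0):
    x, y = (u - cx) / fx, (v - cy) / fy
    r2 = x * x + y * y
    lam = (xi + np.sqrt(1 + (1 - xi * xi) * r2)) / (1 + r2)
    p = np.array([lam * x, lam * y, lam - xi])
    return p / np.linalg.norm(p)

def adapted_neighborhood(u0, v0, radius=0.02, window=10):
    """Pixel offsets whose geodesic distance to (u0, v0) on the sphere is
    below `radius` radians."""
    p0 = lift(u0, v0)
    offsets = []
    for du in range(-window, window + 1):
        for dv in range(-window, window + 1):
            if du == dv == 0:
                continue
            p = lift(u0 + du, v0 + dv)
            if np.arccos(np.clip(p0 @ p, -1.0, 1.0)) < radius:
                offsets.append((du, dv))
    return offsets

# The same geodesic radius covers more pixels near the image border than near
# the center of the mirror.
print(len(adapted_neighborhood(320, 240)), len(adapted_neighborhood(600, 240)))
```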

    Robust Attitude Estimation with Catadioptric Vision

    Attitude (roll and pitch) is essential data for the navigation of a UAV. Rather than using inertial sensors, we propose a catadioptric vision system allowing a fast, robust and accurate estimation of these angles. We show that the optimization of a sky/ground partitioning criterion, combined with the specific geometric characteristics of the catadioptric sensor, provides very interesting results. Experimental results obtained on real sequences are presented and compared with inertial sensor measurements.
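
    An illustrative sketch only: the function names, the roll/pitch convention and the variance-based separation criterion below are assumptions, not the paper's exact criterion. The idea shown is to score candidate (roll, pitch) pairs by how well the corresponding horizon plane splits the viewing sphere into bright "sky" and darker "ground" pixels, and to keep the best-scoring pair.

```python
# Sketch (assumed criterion and conventions): attitude by maximising a
# sky/ground partition score over candidate roll/pitch angles.
import numpy as np

def horizon_normal(roll, pitch):
    """Gravity direction in the camera frame for given roll/pitch (radians);
    one possible angle convention."""
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    return np.array([-sp, sr * cp, cr * cp])

def partition_score(normal, rays, intensities):
    """Otsu-like between-class variance of intensities for the sky/ground
    split induced by the horizon plane."""
    sky = rays @ normal > 0
    if sky.all() or not sky.any():
        return 0.0
    w = sky.mean()
    return w * (1 - w) * (intensities[sky].mean() - intensities[~sky].mean()) ** 2

def estimate_attitude(rays, intensities, step_deg=1.0):
    """Exhaustive search over roll/pitch; `rays` are unit viewing directions
    from the calibrated catadioptric model, `intensities` the pixel values."""
    angles = np.deg2rad(np.arange(-30, 30 + step_deg, step_deg))
    best = max(((partition_score(horizon_normal(r, p), rays, intensities), r, p)
                for r in angles for p in angles), key=lambda t: t[0])
    return best[1], best[2]  # roll, pitch in radians
```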

    Central catadioptric image processing with geodesic metric

    Because of the distortions produced by the insertion of a mirror, catadioptric images cannot be processed like classical perspective images. Although the equivalence between such images and spherical images is well known, the use of spherical harmonic analysis often leads to image processing methods which are more difficult to implement. In this paper, we propose to define catadioptric image processing from the geodesic metric on the unit sphere. We show that this definition allows classical image processing methods to be adapted very simply. We focus more particularly on image gradient estimation, interest point detection, and matching. More generally, the proposed approach extends traditional image processing techniques based on the Euclidean metric to central catadioptric images. We show the efficiency of the approach through various experimental results and quantitative evaluations.
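
    For the gradient-estimation part, a minimal sketch of one possible geodesic-metric formulation (names and calibration values assumed, not taken from the paper): normalize finite intensity differences by the geodesic distance between neighbouring pixels on the viewing sphere instead of by a constant one-pixel spacing.

```python
# Sketch (assumed formulation): central differences normalised by geodesic,
# not pixel, spacing.
import numpy as np

def lift(u, v, xi=0.96, fx=300.0, fy=300.0, cx=320.0, cy=240.0):
    """Unified central catadioptric model (assumed calibration values)."""
    x, y = (u - cx) / fx, (v - cy) / fy
    r2 = x * x + y * y
    lam = (xi + np.sqrt(1 + (1 - xi * xi) * r2)) / (1 + r2)
    p = np.array([lam * x, lam * y, lam - xi])
    return p / np.linalg.norm(p)

def geodesic(p, q):
    return np.arccos(np.clip(p @ q, -1.0, 1.0))

def geodesic_gradient(img, u, v):
    """Image gradient at (u, v) with geodesic step sizes."""
    du = geodesic(lift(u - 1, v), lift(u + 1, v))
    dv = geodesic(lift(u, v - 1), lift(u, v + 1))
    gu = (float(img[v, u + 1]) - float(img[v, u - 1])) / du
    gv = (float(img[v + 1, u]) - float(img[v - 1, u])) / dv
    return np.array([gu, gv])
```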

    A MESH TREE OPTIMIZER FOR WI-FI 8

    Proposed herein is an artificial intelligence/machine learning (AIML) method to predict the goodput/throughput through a given mesh access point (MAP) in a wireless local area network (WLAN), where the MAP can be positioned at different possible spots in a mesh tree. The proposed method may communicate these predictions to MAPs and their clients, thus allowing the mesh tree to form (and reconfigure itself) into the most efficient structure. Further, the proposed method may allow clients to choose the MAP that will provide the best goodput/throughput for the mesh tree.
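
    A purely illustrative sketch, not the proposal's actual model: the feature set, the training data and the regressor choice below are assumptions. It shows the general pattern of training a regressor that maps per-MAP link features to predicted goodput and letting a client pick the candidate MAP with the highest prediction.

```python
# Sketch (assumed features, data and model): per-MAP goodput prediction and
# MAP selection.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training rows: [hop_count, rssi_dbm, channel_utilisation, n_children]
X = np.array([[1, -45, 0.2, 3], [2, -60, 0.5, 5], [3, -70, 0.7, 8],
              [1, -50, 0.4, 4], [2, -55, 0.3, 2]], dtype=float)
y = np.array([220.0, 120.0, 40.0, 180.0, 160.0])  # measured goodput (Mbps)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def best_map(candidates):
    """Index of the candidate MAP with the highest predicted goodput;
    `candidates` uses the same feature layout as X."""
    return int(np.argmax(model.predict(np.asarray(candidates, dtype=float))))

print(best_map([[1, -48, 0.25, 3], [2, -65, 0.6, 6]]))  # likely 0
```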

    Two View Line-Based Motion and Structure Estimation for Planar Scenes

    We present an algorithm for the reconstruction of piecewise planar scenes from only two views, based on a minimum of line correspondences. We first recover the camera rotation by matching vanishing points with methods that already exist in the literature, and then recover the camera translation by searching among a family of hypothesized planes passing through one line. Unlike algorithms based on line segments, the presented algorithm does not require an overlap between two line segments, or more than one line correspondence across more than two views, to recover the translation; instead, it exploits photometric constraints of the surface around the line. Experimental results on real images demonstrate the functionality of the algorithm.
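
    A sketch of the plane-hypothesis scoring step under assumed interfaces (not the authors' code): with the rotation R already recovered from vanishing points, each hypothesized plane through the reference line induces a homography, and the hypothesis whose warp gives the most photometrically consistent patch around the line is retained.

```python
# Sketch (assumed interfaces): photometric scoring of plane/translation
# hypotheses through one line.
import numpy as np

def induced_homography(K, R, t, n, d):
    """Homography mapping view-1 pixels to view-2 pixels for the plane
    n^T X = d (camera-1 frame), with X2 = R X1 + t."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def photometric_cost(img1, img2, H, samples):
    """Sum of squared intensity differences over pixel samples (u, v) taken
    along and around the reference line in image 1."""
    cost = 0.0
    for u, v in samples:
        p = H @ np.array([u, v, 1.0])
        u2, v2 = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
        if 0 <= v2 < img2.shape[0] and 0 <= u2 < img2.shape[1]:
            cost += (float(img1[v, u]) - float(img2[v2, u2])) ** 2
        else:
            cost += 1e6  # penalise hypotheses that warp outside the image
    return cost

def best_hypothesis(img1, img2, K, R, hypotheses, samples):
    """`hypotheses` is a list of (t, n, d) plane/translation candidates."""
    costs = [photometric_cost(img1, img2, induced_homography(K, R, t, n, d),
                              samples) for t, n, d in hypotheses]
    return hypotheses[int(np.argmin(costs))]
```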

    Fast Central Catadioptric Line Extraction

    Lines are particularly important features for tasks such as calibration, structure from motion, and 3D reconstruction in computer vision. However, line detection in catadioptric images is not trivial because the projection of a 3D line is a conic, possibly degenerate. If the sensor is calibrated, it has already been demonstrated that each such conic can be described by two parameters. Accordingly, some methods based on the adaptation of conventional line detection methods have been proposed. However, most of these methods suffer from the same disadvantages as in the perspective case (computing time, accuracy, robustness, ...). In this paper, we therefore propose a new method for line detection in central catadioptric images, comparable to the polygonal approximation approach. With this method, only two points of a chain are needed to extract a catadioptric line with very high accuracy. Moreover, this algorithm is particularly fast and applicable in real time. We also present experimental results with quantitative and qualitative evaluations in order to show the quality of the results and the perspectives of this method.
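
    A sketch of the two-point principle with assumed helpers and calibration values (not the paper's implementation): on a calibrated central catadioptric sensor a 3D line projects onto a great circle of the viewing sphere, so two lifted chain points are enough to hypothesise the line, and the rest of the chain only has to be verified against it.

```python
# Sketch (assumed helpers): catadioptric line hypothesis from two chain points.
import numpy as np

def lift(u, v, xi=0.96, fx=300.0, fy=300.0, cx=320.0, cy=240.0):
    x, y = (u - cx) / fx, (v - cy) / fy
    r2 = x * x + y * y
    lam = (xi + np.sqrt(1 + (1 - xi * xi) * r2)) / (1 + r2)
    p = np.array([lam * x, lam * y, lam - xi])
    return p / np.linalg.norm(p)

def line_normal(pt1, pt2):
    """Normal of the great circle through two lifted chain points; a
    two-parameter description of the catadioptric line."""
    n = np.cross(lift(*pt1), lift(*pt2))
    return n / np.linalg.norm(n)

def chain_supports_line(chain, n, tol=0.01):
    """True if every chain pixel lies on the hypothesised great circle
    (|n . p| below tol), i.e. the chain images a single 3D line."""
    return all(abs(n @ lift(u, v)) < tol for u, v in chain)

chain = [(100 + i, 240) for i in range(30)]  # hypothetical edge chain
n = line_normal(chain[0], chain[-1])         # line from two points only
print(chain_supports_line(chain, n))
```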

    Real Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can be estimated robustly at the correct scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
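
    A sketch of the altitude plane sweep under simplifying assumptions (both cameras treated as pinhole views with known relative pose; the real rig's fisheye would need its own projection model, and all names below are assumptions): hypothesise altitudes, build the ground-plane homography for each, warp one view onto the other and keep the altitude with the best photoconsistency.

```python
# Sketch (assumed rectified pinhole rig with known extrinsics R_rel, t_rel):
# altitude estimation by sweeping ground-plane hypotheses.
import numpy as np

def ground_homography(K1, K2, R_rel, t_rel, n, altitude):
    """Homography induced by the ground plane n^T X = altitude (camera-1
    frame) between the two cameras of the rig (X2 = R_rel X1 + t_rel)."""
    return K2 @ (R_rel + np.outer(t_rel, n) / altitude) @ np.linalg.inv(K1)

def sweep_altitude(img1, img2, K1, K2, R_rel, t_rel, n, candidates, samples):
    """Return the candidate altitude with the lowest SSD over sample pixels."""
    best, best_cost = None, np.inf
    for h in candidates:
        H = ground_homography(K1, K2, R_rel, t_rel, n, h)
        cost = 0.0
        for u, v in samples:
            p = H @ np.array([u, v, 1.0])
            u2, v2 = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= v2 < img2.shape[0] and 0 <= u2 < img2.shape[1]:
                cost += (float(img1[v, u]) - float(img2[v2, u2])) ** 2
            else:
                cost += 1e6  # hypothesis warps outside the second view
        if cost < best_cost:
            best, best_cost = h, cost
    return best

# Hypothetical usage, sweeping altitudes between 1 m and 20 m:
# altitude = sweep_altitude(imgA, imgB, K1, K2, R_rel, t_rel,
#                           np.array([0, 0, 1.0]), np.linspace(1, 20, 40), pts)
```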