
    A visual servoing path-planning strategy for cameras obeying the unified model

    Recently, a unified camera model has been introduced in visual control systems in order to describe, through a single mathematical model, conventional perspective cameras, fisheye cameras, and catadioptric systems. In this paper, a path-planning strategy for visual servoing is proposed for any camera obeying this unified model. The proposed strategy is based on the projection of the available image projections onto a virtual plane. This has two benefits. First, it allows camera pose estimation and 3D object reconstruction to be performed with methods for conventional cameras that are not valid for other cameras. Second, it allows image path-planning for multi-constraint satisfaction to be performed with a simplified but equivalent projection model, which in this paper is addressed by introducing polynomial parametrizations of the rotation and translation. The planned image trajectory is then tracked using an IBVS controller. The proposed strategy is validated through simulations with image noise and calibration errors typical of real experiments. It is worth remarking that visual servoing path-planning for non-conventional perspective cameras has not yet been proposed in the literature. © 2010 IEEE. Part of the 2010 IEEE Multi-Conference on Systems and Control; presented at the 2010 IEEE International Symposium on Computer-Aided Control System Design (CACSD), Yokohama, Japan, 8-10 September 2010. In Proceedings of CACSD, 2010, p. 1795-180
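    To make the central idea concrete, the sketch below lifts a pixel from a camera obeying the unified (sphere) projection model back onto the unit sphere and re-projects it onto a virtual perspective plane, which is the kind of mapping the planning strategy relies on. This is a generic illustration of the unified model, not the authors' implementation; the intrinsic parameter names (fx, fy, cx, cy) and the mirror parameter xi are assumptions.

```python
import numpy as np

def lift_to_virtual_plane(u, v, fx, fy, cx, cy, xi):
    """Map a pixel from a unified-model camera (mirror parameter xi)
    onto a virtual perspective image plane (xi = 0).

    Steps: undo the intrinsics, lift the normalized point back onto the
    unit sphere (standard inverse of the unified projection), then apply
    a plain pinhole projection of the sphere point.
    """
    # Normalized image coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y

    # Lift onto the unit sphere: Xs = eta * (x, y, 1) - (0, 0, xi)
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    Xs = np.array([eta * x, eta * y, eta - xi])

    # Perspective projection onto the virtual plane (valid for Xs[2] > 0)
    return Xs[:2] / Xs[2]
```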

    Fisheye-Lens-Based Visual Sun Compass for Perception of Spatial Orientation

    In complex aeronautical and space engineering systems, conventional sensors used for environment perception fail to determine orientation because of the influence of these special environments, in which geomagnetic fields are anomalous and the Global Positioning System is unavailable. This paper presents a fisheye-lens-based visual sun compass that efficiently determines orientation in such applications. The mathematical model is described, and the absolute orientation is identified by image processing techniques. For robust detection of the sun in the image of the visual sun compass, a modified maximally stable extremal region (MSER) algorithm and a method named constrained least squares with pruning are proposed. In comparison with conventional sensors, the proposed visual sun compass can provide absolute orientation with small size and light weight in such special environments. Experiments carried out with a prototype validate the efficiency of the proposed visual sun compass.
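    As a rough illustration of the detection stage, the following sketch finds a bright sun candidate with OpenCV's stock MSER detector and fits a circle to it by ordinary least squares. It is only a baseline: the paper's modified MSER and its constrained least squares with pruning are not reproduced, and the function and variable names are invented for illustration.

```python
import cv2
import numpy as np

def detect_sun_region(bgr_image):
    """Baseline sun detection in a fisheye image using plain OpenCV MSER.
    Generic sketch only; the paper's modified MSER and its 'constrained
    least squares with pruning' refinement are not reproduced here.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    if not regions:
        return None

    # Choose the region with the highest mean intensity (the sun is brightest)
    def mean_intensity(pts):
        return gray[pts[:, 1], pts[:, 0]].mean()
    sun = max(regions, key=mean_intensity)

    # Ordinary least-squares circle fit: x^2 + y^2 + a*x + b*y + c = 0
    x = sun[:, 0].astype(float)
    y = sun[:, 1].astype(float)
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c)
    return (cx, cy), radius
```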

    Photometric visual servoing for omnidirectional cameras

    2D visual servoing consists in using data provided by a vision sensor to control the motion of a dynamic system. Most visual servoing approaches rely on geometric features that have to be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras. In this paper, we propose to extend this technique to central cameras. This generalization makes it possible to apply this kind of method to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera to control a 6-degree-of-freedom (DOF) robot and with a catadioptric camera for a mobile-robot navigation task.
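    For reference, a minimal sketch of the underlying photometric control law for a conventional perspective camera is given below: the feature is the whole image intensity, and the camera velocity is computed as v = -lambda * L+ (I - I*), where L stacks one luminance interaction-matrix row per pixel. The per-pixel interaction matrix here is the classical one for normalized point coordinates at an assumed depth Z; the paper's generalization to central/omnidirectional cameras is not reproduced, and all names are illustrative.

```python
import numpy as np

def photometric_control(I, I_star, Ix, Iy, xs, ys, Z, lam=1.0):
    """One step of a photometric visual-servoing law for a conventional
    perspective camera: v = -lambda * pinv(L_I) @ (I - I*).

    Sketch under assumptions: xs, ys are normalized image coordinates per
    pixel, Ix, Iy the image gradients, Z an assumed (e.g. constant) depth.
    """
    n = I.size
    e = (I - I_star).reshape(n)

    x = xs.reshape(n)
    y = ys.reshape(n)
    gx = Ix.reshape(n)
    gy = Iy.reshape(n)

    # Interaction matrix of a normalized point (x, y) at depth Z, per pixel
    Lx = np.stack([-np.ones(n) / Z, np.zeros(n), x / Z,
                   x * y, -(1.0 + x ** 2), y], axis=1)
    Ly = np.stack([np.zeros(n), -np.ones(n) / Z, y / Z,
                   1.0 + y ** 2, -x * y, -x], axis=1)

    # Luminance interaction matrix: L_I = -(Ix * Lx + Iy * Ly), one row per pixel
    L = -(gx[:, None] * Lx + gy[:, None] * Ly)

    # Camera velocity screw (vx, vy, vz, wx, wy, wz)
    return -lam * np.linalg.pinv(L) @ e
```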

    The Double Sphere Camera Model

    Vision-based motion estimation and 3D reconstruction, which have numerous applications (e.g., autonomous driving, navigation systems for airborne devices, and augmented reality), are receiving significant research attention. To increase accuracy and robustness, several researchers have recently demonstrated the benefit of using large field-of-view cameras for such applications. In this paper, we provide an extensive review of existing models for large field-of-view cameras. For each model we provide projection and unprojection functions and the subspace of points that result in valid projections. We then propose the Double Sphere camera model, which fits large field-of-view lenses well, is computationally inexpensive, and has a closed-form inverse. We evaluate the model using a calibration dataset with several different lenses and compare the models using metrics relevant for visual odometry, i.e., reprojection error, as well as computation time for the projection and unprojection functions and their Jacobians. We also provide qualitative results and discuss the performance of all models.
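    The closed-form projection of the Double Sphere model can be sketched in a few lines: a 3D point is projected through two unit spheres whose centers are shifted by xi along the optical axis, followed by a pinhole-style projection controlled by alpha. The code below follows that formulation with the parameter set (fx, fy, cx, cy, xi, alpha); the paper's description of the valid-projection subspace is omitted here.

```python
import numpy as np

def double_sphere_project(X, fx, fy, cx, cy, xi, alpha):
    """Double Sphere projection of a 3D point X = (x, y, z) to pixel (u, v).

    Sketch only: the check that the point lies in the valid-projection
    subspace is omitted.
    """
    x, y, z = X
    d1 = np.sqrt(x * x + y * y + z * z)   # distance used by the first sphere
    zs = xi * d1 + z                      # z shifted by the second sphere offset
    d2 = np.sqrt(x * x + y * y + zs * zs)

    denom = alpha * d2 + (1.0 - alpha) * zs
    u = fx * x / denom + cx
    v = fy * y / denom + cy
    return u, v
```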

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    We explore low-cost solutions for efficiently improving the 3D pose estimation of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements and motivates the solutions presented in this thesis. To deliver the portability goal with a single off-the-shelf camera, we take two approaches. The first, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack because of their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is possible for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information) under the photometric error minimization at the heart of the "direct" tracking algorithm. Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges that we attempt to solve have not been considered previously with the level of detail needed to successfully perform VO with a single camera as the ultimate goal in both real-life and simulated scenes.
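    To illustrate what direct multichannel tracking means in practice, the sketch below stacks per-channel photometric residuals between a reference image and the current image warped by a candidate pose. It is a generic sketch under assumed inputs (already-warped (H, W, C) arrays and optional channel weights); the thesis' actual channel set, warping, and optimizer are not reproduced.

```python
import numpy as np

def multichannel_photometric_residual(ref, cur_warped, weights=None):
    """Stacked photometric residual over several image channels (e.g. gray,
    color, gradient magnitude) for a direct multichannel tracking objective.

    Generic sketch: `ref` and `cur_warped` are (H, W, C) arrays already
    aligned by the candidate pose; a least-squares solver would minimize
    the returned residual vector over the pose parameters.
    """
    channels = ref.shape[-1]
    if weights is None:
        weights = np.ones(channels)

    # Per-pixel, per-channel photometric error, weighted channel by channel
    res = (cur_warped.astype(float) - ref.astype(float)) * weights[None, None, :]

    # Flatten into one residual vector for a least-squares pose update
    return res.reshape(-1)
```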

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement for many applications of autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.