
    A Theory of Catadioptric Image Formation

    Conventional video cameras have limited fields of view, which makes them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. When designing a catadioptric sensor, the shape of the mirror(s) should ideally be selected to ensure that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed image(s). In this paper, we derive and analyze the complete class of single-lens single-mirror catadioptric sensors which satisfy the fixed viewpoint constraint. Some of the solutions turn out to be degenerate with no practical value, while other solutions lead to realizable sensors. We also derive an expression for the spatial resolution of a catadioptric sensor, and include a preliminary analysis of the defocus blur caused by the use of a curved mirror.
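    As a hedged illustration of the fixed viewpoint constraint (the paper derives the complete solution class; only its best-known practical member is recalled here), a hyperboloidal mirror whose two foci lie a distance $c$ apart on the optical axis yields a single effective viewpoint when the pinhole is placed at the far focus:

        \frac{\left(z - \tfrac{c}{2}\right)^{2}}{a^{2}} - \frac{x^{2} + y^{2}}{b^{2}} = 1, \qquad a^{2} + b^{2} = \left(\tfrac{c}{2}\right)^{2}

    with the effective viewpoint at the focus $z = 0$ and the pinhole at the focus $z = c$. Every scene ray aimed at the first focus reflects into a ray through the second, which is exactly the property that allows pure perspective images to be generated from the sensed image.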

    Face tracking using a hyperbolic catadioptric omnidirectional system

    In the first part of this paper, we present a brief review of catadioptric omnidirectional systems. The special case of the hyperbolic omnidirectional system is analysed in depth. The literature shows that a hyperboloidal mirror has two clear advantages over alternative geometries. Firstly, a hyperboloidal mirror has a single projection centre [1]. Secondly, the image resolution is uniformly distributed along the mirror’s radius [2]. In the second part of this paper, we show empirical results for the detection and tracking of faces from the omnidirectional images using the Viola-Jones method. Both panoramic and perspective projections, extracted from the omnidirectional image, were used for that purpose. The omnidirectional image size was 480x480 pixels, in greyscale. The tracking method used regions of interest (ROIs) set from the face detections obtained on a panoramic projection of the image. In order to avoid losing or duplicating detections, the panoramic projection was extended horizontally. Duplications were eliminated based on the ROIs established by previous detections. After a confirmed detection, faces were tracked from perspective projections (called virtual cameras), each one associated with a particular face. The zoom, pan and tilt of each virtual camera were determined by the ROIs previously computed on the panoramic image. The results show that, when using a careful combination of the two projections, good frame rates can be achieved in the task of tracking faces reliably.
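    The abstract does not give implementation details, but a minimal sketch of the detection stage it describes might look as follows, assuming OpenCV's stock Viola-Jones cascade and a centred circular mirror image; the file name, mirror centre/radius and overlap width are hypothetical placeholders.

        # Unwrap the catadioptric image into a panorama, extend it horizontally,
        # and run Viola-Jones face detection on the extended strip.
        import cv2
        import numpy as np

        omni = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)   # 480x480 catadioptric frame (hypothetical path)
        center = (240.0, 240.0)                                     # assumed mirror centre (pixels)
        max_radius = 240.0                                          # assumed mirror radius (pixels)

        # Polar unwrapping: rows = azimuth, columns = radius; rotate so the
        # azimuth runs horizontally, giving a 180x720 panoramic strip.
        pano = cv2.warpPolar(omni, (180, 720), center, max_radius,
                             cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
        pano = cv2.rotate(pano, cv2.ROTATE_90_COUNTERCLOCKWISE)

        # Extend the panorama so faces crossing the 0/360 degree seam are not split;
        # duplicate detections in the overlap are merged afterwards from the ROIs.
        overlap = 90
        pano_ext = np.hstack([pano, pano[:, :overlap]])

        # Viola-Jones detection with OpenCV's pretrained frontal-face cascade.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(pano_ext, scaleFactor=1.1, minNeighbors=5)
        print("candidate face ROIs (x, y, w, h):", faces)

    Each confirmed ROI would then drive the pan, tilt and zoom of a perspective (virtual camera) projection for the tracking stage described above.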

    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm was considered for a regular camera. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the absolute position and orientation of the camera. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The utilization of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of this algorithm is established through lab experimentation with two kinds of omnidirectional acquisition systems: the first is a polydioptric camera, while the second is a catadioptric camera.
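    The abstract does not spell out the DTM constraint, but a hedged sketch of one ingredient such methods rely on, intersecting a viewing ray with a digital elevation map by marching along it, is given below; the grid spacing, step size and bilinear lookup are illustrative assumptions rather than the paper's actual formulation.

        import numpy as np

        def terrain_height(dem, cell_size, x, y):
            """Bilinearly interpolate the DEM height at metric position (x, y)."""
            i, j = y / cell_size, x / cell_size
            i0, j0 = int(np.floor(i)), int(np.floor(j))
            di, dj = i - i0, j - j0
            return (dem[i0, j0] * (1 - di) * (1 - dj) +
                    dem[i0 + 1, j0] * di * (1 - dj) +
                    dem[i0, j0 + 1] * (1 - di) * dj +
                    dem[i0 + 1, j0 + 1] * di * dj)

        def ray_dem_intersection(origin, direction, dem, cell_size,
                                 step=1.0, max_range=5000.0):
            """March along origin + t * direction until the ray drops below the terrain."""
            direction = direction / np.linalg.norm(direction)
            t = 0.0
            while t < max_range:
                p = origin + t * direction
                if p[2] <= terrain_height(dem, cell_size, p[0], p[1]):
                    return p        # approximate ground point seen along this ray
                t += step
            return None             # the ray leaves the map without hitting the ground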

    "FlyVIZ": A Novel Display Device to Provide Humans with 360° Vision by Coupling Catadioptric Camera with HMD.

    Have you ever dreamed of having eyes in the back of your head? In this paper we present a novel display device called FlyVIZ which enables humans to experience a real-time 360° vision of their surroundings for the first time. To do so, we combine a panoramic image acquisition system (positioned on top of the user's head) with a Head-Mounted Display (HMD). The omnidirectional images are transformed to fit the characteristics of the HMD screens. As a result, the user can see his/her surroundings, in real time, with 360° images mapped into the HMD field of view. We foresee potential applications in fields that could benefit from an augmented human capacity (an extended field of view), such as surveillance, security, or entertainment. FlyVIZ could also be used in novel perception and neuroscience studies.
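    As a hedged sketch of the final display step (the abstract gives no implementation details), one simple option is to rescale the unwrapped 360° panorama to the HMD's per-eye resolution and show the same image to both eyes; the resolution below is an illustrative assumption.

        import cv2
        import numpy as np

        def panorama_to_hmd(panorama, eye_size=(960, 540)):
            """Squeeze a full 360-degree panorama into each eye's viewport (assumed sizes)."""
            eye = cv2.resize(panorama, eye_size, interpolation=cv2.INTER_LINEAR)
            return np.hstack([eye, eye])   # side-by-side frame sent to the HMD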

    Omnidirectional vision on UAV for attitude computation

    Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications. Autonomy is one of the major advantages of these vehicles. It is therefore necessary to develop dedicated sensors that provide efficient navigation functions. In this paper, we propose a method for attitude computation from catadioptric images. We first demonstrate the advantages of the catadioptric vision sensor for this application: the geometric properties of the sensor make it possible to compute the roll and pitch angles easily. The method consists in separating the sky from the earth in order to detect the horizon. For this segmentation, we propose an adaptation of Markov Random Fields to catadioptric images. The second step consists in estimating the parameters of the horizon line with a robust estimation algorithm. We also present the angle estimation algorithm and, finally, we show experimental results on synthetic and real images captured from an airplane.
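    The abstract does not reproduce the robust estimator, so the sketch below uses a generic RANSAC line fit to candidate horizon pixels, with roll read from the fitted line's orientation, as an illustrative stand-in; the thresholds, the input file and the angle convention are assumptions.

        import numpy as np

        def ransac_line(points, n_iter=500, inlier_tol=2.0, seed=0):
            """Robustly fit a 2D line to noisy horizon points (RANSAC + PCA refinement)."""
            rng = np.random.default_rng(seed)
            best_inliers = None
            for _ in range(n_iter):
                p1, p2 = points[rng.choice(len(points), 2, replace=False)]
                d = p2 - p1
                n = np.array([-d[1], d[0]])
                if np.linalg.norm(n) < 1e-9:
                    continue
                n = n / np.linalg.norm(n)
                dist = np.abs((points - p1) @ n)
                inliers = points[dist < inlier_tol]
                if best_inliers is None or len(inliers) > len(best_inliers):
                    best_inliers = inliers
            mean = best_inliers.mean(axis=0)
            _, _, vt = np.linalg.svd(best_inliers - mean)
            return mean, vt[0]                     # a point on the line, the line direction

        horizon_pts = np.loadtxt("horizon_pixels.txt")   # (x, y) sky/earth boundary pixels (hypothetical file)
        mean, direction = ransac_line(horizon_pts)
        roll_deg = np.degrees(np.arctan2(direction[1], direction[0]))   # roll from line orientation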

    Methods for Reliable Robot Vision with a Dioptric System

    Image processing.

    Robust Attitude Estimation with Catadioptric Vision

    Attitude (roll and pitch) is essential data for the navigation of a UAV. Rather than using inertial sensors, we propose a catadioptric vision system allowing a fast, robust and accurate estimation of these angles. We show that the optimization of a sky/ground partitioning criterion, combined with the specific geometric characteristics of the catadioptric sensor, provides very interesting results. Experimental results obtained on real sequences are presented and compared with inertial sensor measurements.
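    The partitioning criterion itself is not given in the abstract; as a hedged stand-in, the sketch below maximises the between-class variance of the grey-level histogram (Otsu's criterion) over the mirror region to split the image into sky and ground. The file name and the annulus radii are illustrative assumptions.

        import cv2
        import numpy as np

        omni = cv2.imread("catadioptric_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical path

        # Restrict the criterion to the annular mirror region (assumed centre and radii).
        yy, xx = np.mgrid[:omni.shape[0], :omni.shape[1]]
        r = np.hypot(xx - omni.shape[1] / 2, yy - omni.shape[0] / 2)
        mirror_mask = (r > 30) & (r < omni.shape[0] / 2)

        # Otsu threshold on the masked pixels; brighter pixels are labelled as sky.
        thresh, _ = cv2.threshold(omni[mirror_mask].reshape(1, -1), 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        sky = (omni > thresh) & mirror_mask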

    A flexible technique for accurate omnidirectional camera calibration and structure from motion

    In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor is a specific model of the omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic camera having a field of view greater than 200° in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-camera model in a structure from motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible.
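    A minimal sketch of the kind of back-projection implied by this Taylor-series model is shown below: an image point at radial distance rho from the centre is mapped to the ray (u, v, f(rho)), with f a polynomial whose coefficients come from the calibration. The coefficient values, polynomial degree and distortion centre are illustrative assumptions, not calibrated values.

        import numpy as np

        poly = np.array([-180.0, 0.0, 1.5e-3, 1.0e-6])   # a0..a3 (hypothetical coefficients)
        center = np.array([320.0, 240.0])                # distortion centre in pixels (assumed)

        def pixel_to_ray(pix):
            """Back-project a pixel to a unit viewing ray with the polynomial model."""
            u, v = pix - center
            rho = np.hypot(u, v)
            w = np.polyval(poly[::-1], rho)              # f(rho) = a0 + a1*rho + a2*rho**2 + ...
            ray = np.array([u, v, w])
            return ray / np.linalg.norm(ray)

        print(pixel_to_ray(np.array([400.0, 300.0])))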