
    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and vehicle classification have become indispensable security tasks in areas such as shopping centers, government buildings and army camps. The main challenge is monitoring the underframes of vehicles. In this paper, we present a novel solution consisting of three main parts: monitoring, detection and classification. In the first part we design a new catadioptric camera system in which a perspective camera points downwards at a catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the direction of the camera's optical axis can be viewed. In the second part we use speeded-up robust features (SURF) in an object recognition algorithm. In the third part, the fast appearance-based mapping algorithm (FAB-MAP) is exploited to classify the vehicles. The proposed technique was implemented in a laboratory environment.
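    FAB-MAP-style appearance recognition works by reducing each image's local descriptors (SURF, here) to a bag-of-visual-words histogram over a fixed vocabulary and then comparing histograms. A minimal sketch of that quantization-and-matching step, with a toy 2-D vocabulary and cosine similarity standing in for FAB-MAP's probabilistic observation model (all names and data are illustrative, not the paper's):

```python
import math

def nearest_word(desc, vocabulary):
    """Index of the vocabulary word closest (squared Euclidean) to a descriptor."""
    return min(range(len(vocabulary)),
               key=lambda i: sum((d - v) ** 2 for d, v in zip(desc, vocabulary[i])))

def bow_histogram(descriptors, vocabulary):
    """Quantize an image's descriptors into a bag-of-visual-words histogram."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[nearest_word(d, vocabulary)] += 1
    return hist

def cosine(a, b):
    """Cosine similarity between two histograms (1.0 = same word distribution)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy 2-D "descriptors"; real SURF descriptors are 64- or 128-dimensional.
vocabulary = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
query = bow_histogram([(0.1, 0.1), (1.9, 0.1)], vocabulary)
reference = bow_histogram([(0.0, 0.2), (2.1, -0.1)], vocabulary)
```

    Both images quantize to the same two words, so their histograms match with similarity 1.0; FAB-MAP replaces this similarity with a generative model that also accounts for word co-occurrence and perceptual aliasing.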

    Experimental Assessment of Techniques for Fisheye Camera Calibration

    Fisheye lens cameras increase the field of view (FOV) and consequently have been widely used in applications such as robotics. Using this type of camera in close-range photogrammetry for high-accuracy applications requires rigorous calibration. The main aim of this work is to present calibration results for a Fuji Finepix S3PRO camera with a Samyang 8mm fisheye lens using rigorous mathematical models. Mathematical models based on the perspective, stereographic, equidistant, orthogonal and equisolid-angle projections were implemented and used in the experiments. Fisheye lenses are generally designed following one of the latter four models, and the Bower-Samyang 8mm lens is based on the stereographic projection. These models were used in combination with symmetric radial, decentering and affinity distortion models. Experiments were performed to verify which set of IOPs (interior orientation parameters) best described the camera's inner geometry. The collinearity model, which is based on the perspective projection, presented the least accurate results; this was expected because fisheye lenses are not designed following the perspective projection. The stereographic, equidistant, orthogonal and equisolid-angle projections presented similar results, even though the Bower-Samyang fisheye lens was built on the stereographic projection. The experimental results also demonstrated a small correlation between IOPs and EOPs (exterior orientation parameters) for the Bower-Samyang lens.
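    The five projection models compared above differ only in how the incidence angle θ (between an incoming ray and the optical axis) maps to the radial distance r in the image plane. A compact sketch of the standard closed forms (the focal length value in the checks is illustrative):

```python
import math

def radial_distance(theta, f, model):
    """Radial image distance r(theta) for common lens projection models.

    theta: angle between the incoming ray and the optical axis (radians)
    f: focal length (any length unit; r is returned in the same unit)
    """
    if model == "perspective":      # r = f * tan(theta); diverges at 90 deg
        return f * math.tan(theta)
    if model == "stereographic":    # r = 2f * tan(theta / 2)
        return 2.0 * f * math.tan(theta / 2.0)
    if model == "equidistant":      # r = f * theta
        return f * theta
    if model == "orthogonal":       # r = f * sin(theta); saturates at 90 deg
        return f * math.sin(theta)
    if model == "equisolid":        # r = 2f * sin(theta / 2)
        return 2.0 * f * math.sin(theta / 2.0)
    raise ValueError("unknown model: " + model)

# At a 90-degree half-angle an 8mm stereographic lens (the Samyang design
# model) images the ray at r = 2 * 8 * tan(45 deg) = 16mm from the center.
rim_mm = radial_distance(math.pi / 2.0, 8.0, "stereographic")
```

    The perspective (collinearity) model is the only one that diverges as θ approaches 90°, which is why it fits a 180°-FOV fisheye poorly, consistent with the calibration results reported above.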

    Single Cone Mirror Omni-Directional Stereo

    An omni-directional view and stereo information for scene points are both crucial in many computer vision applications. Some demanding applications, such as autonomous robots, need to acquire both in real time without sacrificing too much image resolution. This work describes a novel method that meets these stringent demands with a relatively simple setup and off-the-shelf equipment: only one simple reflective surface and two regular (perspective) camera views are needed. First we describe the novel stereo method; then we discuss some variations in practical implementation and their respective tradeoffs.

    Dynamic 3D-Vision


    Realtime Color Stereovision Processing

    Recent developments in aviation have made micro air vehicles (MAVs) a reality. These featherweight, palm-sized, radio-controlled flying saucers embody the future of air-to-ground combat. No one has ever successfully implemented an autonomous control system for MAVs. Because MAVs are physically small with limited energy supplies, video signals offer superiority over radar for navigational applications. This research takes a step forward in real-time machine vision processing. It investigates techniques for implementing a real-time stereovision processing system using two miniature color cameras. The effects of poor-quality optics are overcome by a robust algorithm, which operates in real time and achieves frame rates of up to 10 fps under ideal conditions. The vision system implements innovative work in five areas of vision processing: fast image-registration preprocessing, object detection, feature correspondence, distortion-compensated ranging, and multi-scale nominal-frequency-based object recognition. Results indicate that the system can provide adequate obstacle-avoidance feedback for autonomous vehicle control. However, typical relative position errors are about 10%, too high for surveillance applications. The range of operation is also limited to 6-30 m. The root of this limitation is imprecise feature correspondence: with perfect feature correspondence, the range would extend to 0.5-30 m. Stereo camera separation limits the near range, while optical resolution limits the far range. Image frames are 160x120 pixels; increasing this size will improve far-range characteristics but will also decrease the frame rate. Image preprocessing proved less appropriate than precision camera alignment in this application. A proof of concept for object recognition shows promise for applications with more precise object detection. Future recommendations are offered in all five areas of vision processing.
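    The baseline/resolution trade-off described above follows directly from the rectified-stereo range equation Z = f·B/d: the smallest disparity the optics can resolve (roughly one pixel) caps the far range, while the largest disparity the matcher can handle sets the near range. A sketch with illustrative numbers (the focal length, baseline and disparity limits here are assumptions chosen to reproduce a 0.5-30 m envelope, not values from the thesis):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo range: Z = f * B / d, with disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

focal_px = 150.0   # assumed focal length in pixels for a 160x120 image
baseline_m = 0.2   # assumed camera separation

# Far limit: optical resolution (smallest measurable disparity, ~1 px).
far_m = depth_from_disparity(focal_px, baseline_m, 1.0)    # 30.0 m

# Near limit: the largest disparity the matcher tolerates (assumed 60 px).
near_m = depth_from_disparity(focal_px, baseline_m, 60.0)  # 0.5 m
```

    Doubling the image width doubles focal_px and the maximum disparity alike, which pushes the far limit out without helping the near limit; widening the baseline does the opposite, matching the limits attributed above to camera separation and optical resolution.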

    3D display size matters: Compensating for the perceptual effects of S3D display scaling

    Introduction: In recent years the consumer electronics market has been flooded with a variety of S3D products, which rely on a variety of display and image-segregation technologies. For each display system, the ideal viewing conditions (e.g. viewing angle) can be defined in order to obtain the desired 3D experience. SMPTE and THX [1, 2] have provided standards and guidelines for the ideal viewing angle for theatre and television. However, screen dimension is an uncontrolled variable, since the same content could be displayed on a mobile autostereoscopic device, a 3D monitor, an HD 3DTV or in a 3D movie theatre. (The range of viewing distances typically used is correlated with the size of the display, with audiences moving closer as screens get smaller. If the field of view is constant, it is often the distance that matters more; since the two normally co-vary, here we focus on screen size and the related disparity-scaling issues, pointing out the role of viewing distance where warranted.) Adapting an S3D film to a variety of screen sizes is necessary for most, if not all, popular movies if distributors are to maximize their exposure and therefore earnings. However, unlike 2D film, the S3D scaling process is complicated by a variety of computational and perceptual issues that can significantly impact the audience experience. As outlined below, the existing approaches to scaling S3D content for a variety of delivery form factors can be divided into two main categories: those applied during acquisition and those applied during post-production or display. The most common strategy is some combination of pre- and post-production approaches. However, inevitably some degree of perceptual and geometric distortion will remain. A better understanding of these distortions and their perceptual consequences will provide S3D content creators with insight and context for using sophisticated scaling approaches based on both acquisition and post-production techniques. This paper reviews the principal issues related to S3D content scaling, some of the technical solutions available to content makers/providers, and the perceptual consequences for audiences.
    Stereoscopic Geometry: As was shown by Spottiswoode in the early 1950s [3], displaying stereoscopic 3D content at different sizes may dramatically influence the audience's S3D experience. Given the interdependence of acquisition and display parameters, most filmmakers, while trying to protect for different screen dimensions, will have a target viewing condition when they begin filming. Figures 1 and 2 depict stereoscopic viewing and acquisition geometry, respectively.
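    The screen-size dependence can be made concrete with the usual thin-geometry model of stereoscopic viewing: a fused point with on-screen parallax P, viewed from distance V by eyes separated by interocular distance I, is perceived at depth Z = V·I/(I − P). Because P scales linearly with screen width while I does not, the same content yields different perceived depths on different displays. A sketch (the viewing distances, parallax values and IPD below are illustrative assumptions):

```python
def perceived_depth(view_dist_m, parallax_m, ipd_m=0.065):
    """Perceived distance of a fused point: Z = V * I / (I - P).

    parallax_m is the on-screen separation of the left/right images
    (positive = uncrossed disparity, i.e. behind the screen plane).
    """
    if parallax_m >= ipd_m:
        raise ValueError("parallax >= IPD: eyes would have to diverge")
    return view_dist_m * ipd_m / (ipd_m - parallax_m)

# The same frame shown on a small and a 4x-wider screen: on-screen
# parallax scales with screen width, and viewers sit farther back.
small = perceived_depth(view_dist_m=0.5, parallax_m=0.010)  # ~0.59 m
large = perceived_depth(view_dist_m=2.0, parallax_m=0.040)  # 5.2 m
```

    Relative to the screen plane, the point sits ~18% beyond the small screen but 160% beyond the large one, illustrating why naive linear scaling of S3D content distorts depth and why parallax approaching the interocular distance (forcing eye divergence) is the hard limit content scaling must respect.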

    Omnistereo: panoramic stereo imaging
