
    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of multiple-camera calibration in the presence of a homogeneous scene, and without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs featuring a long-focal-length analysis camera as well as a short-focal-length registration camera. Thus, we are able to propose an accurate solution which does not require intrinsic variation models, as in the case of zooming cameras. Moreover, the availability of the two views simultaneously in each rig allows for pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
    Comment: 13 pages, 6 figures, submitted to Machine Vision and Applications
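    The registration step this abstract relies on, re-estimating the pose between rigs from salient scene features, rests on standard essential-matrix machinery. A minimal sketch, assuming matched feature points and a known intrinsic matrix K; this is the generic building block, not the authors' exact pipeline:

```python
# Hedged sketch: relative pose between two wide-field registration cameras
# from matched scene features via the essential matrix. The intrinsic
# matrix K and the matched point arrays are assumed inputs (not taken
# from the paper).
import numpy as np
import cv2

def relative_pose(pts_a, pts_b, K):
    """pts_a, pts_b: Nx2 float arrays of matched pixel coordinates."""
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E and pick the physically valid pose via the cheirality check.
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t  # rotation and unit-norm translation from camera A to B
```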

    Automated archiving of archaeological aerial images

    The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the center point and/or the outline of the image footprint. The paper proposes a new image-archiving workflow. The new pipeline is based on the parameters that are logged by a commercial but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components make it possible to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which are automatically stored as vector files. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (means between 0.0° and 0.21°, with standard deviations of 0.17–0.46°) and better than 2.5° for yaw (means between 0.0° and −0.14°, with standard deviations of 0.58–0.94°). This proved sufficient to enable fast and almost automatic GIS-based archiving of all of the imagery.
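    The footprint retrieval the abstract describes reduces, under a flat-terrain assumption, to projecting the four image corners through the exterior orientation onto the ground plane. A minimal sketch with illustrative names and a simple pinhole model, not the authors' in-house software:

```python
# Hedged sketch: footprint of an (oblique) aerial image on a flat ground
# plane, given intrinsics K, a world-to-camera rotation R, and the camera
# position from GNSS/IMU. Assumes the camera views the ground (rays hit
# the plane z = ground_z).
import numpy as np

def image_footprint(K, R, cam_pos, ground_z=0.0, width=4000, height=3000):
    """Returns the 4x2 ground coordinates (x, y) of the image corners."""
    corners_px = np.array([[0, 0], [width, 0],
                           [width, height], [0, height]], dtype=float)
    K_inv = np.linalg.inv(K)
    footprint = []
    for u, v in corners_px:
        ray_cam = K_inv @ np.array([u, v, 1.0])    # viewing ray, camera frame
        ray_world = R.T @ ray_cam                  # rotate into world frame
        s = (ground_z - cam_pos[2]) / ray_world[2] # intersect plane z = ground_z
        footprint.append((cam_pos + s * ray_world)[:2])
    return np.array(footprint)
```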

    An effective strategy of real-time vision-based control for a Stewart platform

    A Stewart platform is a parallel robot that can be used for a wide variety of technological and industrial applications. In this paper, a Stewart platform designed and assembled at the Universitat Politècnica de Catalunya (UPC) by our research group is presented. The main objective is to overcome the enormous difficulties that arise when real-time vision-based control of a fast-moving object placed on such a mechanism is required. In addition, a description of the platform's geometric characteristics and calibration process is given, together with an illustrative experiment that demonstrates the good behavior of the platform.
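    Any controller for such a platform ultimately commands leg lengths, so the standard inverse-kinematics step is worth sketching. A minimal version, assuming known joint coordinates in the base and platform frames; the geometry here is illustrative, not the UPC platform's:

```python
# Hedged sketch: Stewart-platform inverse kinematics. Given a desired
# platform pose (R, t), each leg length is the distance between its base
# joint and the transformed platform joint. Joint layouts are assumed
# inputs, not taken from the paper.
import numpy as np

def leg_lengths(base_pts, plat_pts, R, t):
    """base_pts, plat_pts: 6x3 joint positions in the base and platform
    frames; R, t: desired platform rotation and translation.
    Returns the six actuator lengths."""
    return np.linalg.norm((plat_pts @ R.T) + t - base_pts, axis=1)
```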

    A distributed camera system for multi-resolution surveillance

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this command is issued by writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its suitability for multi-camera intelligent-surveillance systems.
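    The repository pattern described above, trackers inserting observations and a supervisor writing dispatch demands into another table, can be sketched compactly. A minimal sketch using sqlite3 as a stand-in for the SQL server; table and column names are invented for illustration, not taken from the paper:

```python
# Hedged sketch of the central-repository pattern: one table for tracking
# observations, one for PTZ demands, with all coordination done through
# ordinary SQL reads and writes.
import sqlite3
import time

db = sqlite3.connect("surveillance.db")
db.execute("""CREATE TABLE IF NOT EXISTS tracks
              (ts REAL, camera TEXT, target INTEGER, x REAL, y REAL)""")
db.execute("""CREATE TABLE IF NOT EXISTS ptz_demands
              (ts REAL, camera TEXT, target INTEGER)""")

# A static-view tracker client pushes an observation...
db.execute("INSERT INTO tracks VALUES (?, ?, ?, ?, ?)",
           (time.time(), "static0", 7, 312.5, 148.0))

# ...and the supervisor, polling the table, dispatches the PTZ camera by
# writing a demand row that the PTZ process in turn polls for.
row = db.execute("SELECT target FROM tracks ORDER BY ts DESC LIMIT 1").fetchone()
if row is not None:
    db.execute("INSERT INTO ptz_demands VALUES (?, ?, ?)",
               (time.time(), "ptz0", row[0]))
db.commit()
```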

    Video Guidance, Landing, and Imaging system (VGLIS) for space missions

    The feasibility of an autonomous video guidance system that is capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was demonstrated. The system was breadboarded and "flown" on a physical simulator consisting of a control panel and monitor, a dynamic simulator, and a PDP-9 computer. The breadboard VGLIS consisted of an image dissector camera and the appropriate processing logic. Results are reported.
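    The abstract does not detail the site-selection logic, but one plausible criterion a system like this could use is local intensity variance as a roughness proxy. A purely illustrative sketch under that assumption:

```python
# Hedged sketch: rank candidate windows in a descent image by intensity
# variance and pick the smoothest. This is one plausible criterion, not
# the VGLIS processing logic, which the abstract does not describe.
import numpy as np

def smoothest_window(img, win=32):
    """img: 2-D grayscale array. Returns the (row, col) corner of the
    lowest-variance win x win window, stepped on a coarse grid."""
    best, best_rc = np.inf, (0, 0)
    for r in range(0, img.shape[0] - win, win):
        for c in range(0, img.shape[1] - win, win):
            v = img[r:r + win, c:c + win].var()
            if v < best:
                best, best_rc = v, (r, c)
    return best_rc
```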

    Calibration and removal of lateral chromatic aberration in images

    This paper addresses the problem of compensating for lateral chromatic aberration in digital images through colour-plane realignment. Two main contributions are made: the derivation of a model for lateral chromatic aberration in images, and the subsequent calibration of this model from a single view of a chess pattern. These advances lead to a practical and accurate alternative for the compensation of lateral chromatic aberrations. Experimental results validate the proposed models and calibration algorithm. The effects of colour-channel correlations resulting from the camera's colour-filter-array interpolation are examined and found to have a negligible magnitude relative to the chromatic aberration. Results with real data show how the removal of lateral chromatic aberration significantly improves the colour quality of the image.
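    Colour-plane realignment of this kind is typically applied as a per-channel radial warp about a distortion centre. A minimal sketch, assuming a polynomial radial model whose coefficients would come from a chessboard calibration as in the paper; the model form and placeholder names are assumptions, not the paper's exact formulation:

```python
# Hedged sketch: resample one colour plane at radially scaled coordinates
# r' = r * (1 + k1*r^2 + k2*r^4) about the centre (cx, cy), realigning it
# with the reference (typically green) channel. Coefficients k1, k2 are
# placeholders standing in for calibrated values.
import numpy as np
import cv2

def realign_channel(chan, cx, cy, k1, k2):
    h, w = chan.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(chan, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```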

    A relative-intensity two-color phosphor thermography system

    The NASA LaRC has developed a relative-intensity two-color phosphor thermography system. This system has become a standard technique for acquiring aerothermodynamic data in the LaRC Hypersonic Facilities Complex (HFC). The relative-intensity theory and its application to the LaRC phosphor thermography system are discussed, along with the investment casting technique, which is critical to the utilization of the phosphor method for aerothermodynamic studies. Various approaches to obtaining quantitative heat-transfer data using thermographic phosphors are addressed, and comparisons between thin-film data and thermographic phosphor data on an orbiter-like configuration are presented. In general, data from these two techniques are in good agreement. A discussion is given on the application of phosphors to integration-based heat-transfer data-reduction techniques (the thin-film method), and preliminary heat-transfer data obtained on a calibration sphere using thin-film equations are presented. Finally, plans for a new phosphor system which uses target-recognition software are discussed.
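    The relative-intensity principle is that the ratio of the two phosphor emission bands varies monotonically with temperature, so a calibration curve inverts it per pixel. A minimal sketch; the calibration arrays are placeholders, not LaRC data:

```python
# Hedged sketch: per-pixel temperature from the two-band intensity ratio,
# inverted through a 1-D calibration curve. cal_ratio must be
# monotonically increasing for np.interp to be valid.
import numpy as np

def temperature_from_ratio(img_band1, img_band2, cal_ratio, cal_temp):
    """img_band1, img_band2: same-shape intensity images of the two
    emission bands; cal_ratio, cal_temp: 1-D calibration arrays.
    Returns a per-pixel temperature map."""
    ratio = img_band1.astype(float) / np.clip(img_band2.astype(float),
                                              1e-6, None)
    return np.interp(ratio, cal_ratio, cal_temp)
```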

    Calibration with concurrent PT axes

    The introduction of active (pan-tilt-zoom, or PTZ) cameras in Smart Rooms, in addition to fixed static cameras, makes it possible to improve resolution in volumetric reconstruction, adding the capability to track smaller objects with higher precision in actual 3D world coordinates. To accomplish this goal, precise camera calibration data should be available for any pan, tilt, and zoom settings of each PTZ camera. The PTZ calibration method proposed in this paper introduces a novel solution to the problem of computing extrinsic and intrinsic parameters for active cameras. We first determine the rotation center of the camera, expressed with respect to an arbitrary world coordinate origin. Then, we obtain an equation relating any rotation of the camera to the movement of the principal point, to define extrinsic parameters for any value of pan and tilt. Once this position is determined, we compute how intrinsic parameters change as a function of zoom. We validate our method by evaluating the re-projection error and its stability for points inside and outside the calibration set.
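    The composition step behind such a method, extrinsics for an arbitrary pan/tilt given the rotation centre, can be sketched under the simplifying assumption that the optical centre coincides with the rotation centre (the paper explicitly models the offset of the principal point, which this sketch omits). Axis conventions and names are illustrative:

```python
# Hedged sketch: extrinsics (R, t) for a PTZ camera at a given pan/tilt,
# given its rotation centre C in world coordinates and a reference
# orientation R0. Simplification: optical centre == rotation centre,
# which the paper's full model refines.
import numpy as np

def rot(axis, angle):
    c, s = np.cos(angle), np.sin(angle)
    if axis == "y":   # pan about the vertical axis
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])  # tilt about x

def extrinsics(pan, tilt, R0, C):
    """Returns (R, t) mapping a world point X to the camera frame as
    R @ X + t."""
    R = rot("x", tilt) @ rot("y", pan) @ R0
    t = -R @ C        # camera centre held fixed at the rotation centre C
    return R, t
```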