
    Machine-Vision Aids for Improved Flight Operations

    The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to exploit the information sources already available for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by a camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this synthesized image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family and Algorithms 5 through 7 to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Computer simulation results are presented to demonstrate the performance of all seven algorithms developed.
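    A minimal sketch of the second family of methods, assuming a simple pinhole camera with known focal length: the known 3D positions of the airport lights are projected through a candidate pose to synthesize an image, and the pose is refined by least-squares minimization of the mismatch with the acquired image. The Euler-angle attitude parameterization and all names below are illustrative assumptions, not the abstract's specific algorithms.

        import numpy as np
        from scipy.optimize import least_squares

        def euler_to_matrix(rpy):
            # Roll-pitch-yaw Euler angles to a rotation matrix (illustrative convention).
            r, p, y = rpy
            Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
            Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
            Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def synthesize_image(pose, lights_world, focal):
            # Pinhole camera model: map known 3D light positions into the image plane.
            R, t = euler_to_matrix(pose[:3]), pose[3:]
            cam = (R @ lights_world.T).T + t          # lights in camera coordinates
            return focal * cam[:, :2] / cam[:, 2:3]   # perspective projection

        def residuals(pose, lights_world, lights_image, focal):
            # Mismatch between the synthesized image and the acquired image of the lights.
            return (synthesize_image(pose, lights_world, focal) - lights_image).ravel()

        # Hypothetical usage: refine an initial pose guess (attitude + position) against
        # measured light centroids, e.g. seeded from the attitude sensors and GPS.
        # pose = least_squares(residuals, pose0, args=(lights_world, lights_image, 1000.0)).x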

    Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    The papers, abstracts, and presentations collected here were presented at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

    Integration of 3D vision based structure estimation and visual robot control

    Enabling robot manipulators to manipulate and/or recognise arbitrarily placed 3D objects under sensory control is one of the key issues in robotics. Such robot sensors should be capable of providing 3D information about objects in order to accomplish the above mentioned tasks, and should also provide the means for multisensor or multimeasurement integration. Finally, such 3D information should be used efficiently to perform the desired tasks. This work develops a novel computational framework for solving some of these problems. A vision (camera) sensor is used in conjunction with a robot manipulator, in the framework of active vision, to estimate the 3D structure (3D geometrical model) of a class of objects. This information is then used for visual robot control, in the framework of model based vision. One part of this dissertation is devoted to system calibration. The camera and eye/hand calibration is presented, and several contributions intended to improve existing calibration procedures are introduced, resulting in more efficient and accurate calibration. Experimental results are presented. The second part of this work is devoted to methods of image processing and image representation; methods for extracting and representing the image features that comprise the vision based measurements are given. The third part of this dissertation is devoted to 3D geometrical model reconstruction for a class of objects (polyhedral objects). A new technique for 3D model reconstruction from an image sequence is introduced. This algorithm estimates a 3D model of an object in terms of 3D straight-line segments (a wire-frame model) by integrating pertinent information over an image sequence obtained from a moving camera mounted on a robot arm. Experimental results are presented. The fourth part of this dissertation is devoted to robot visual control. A new visual control strategy is introduced: the homogeneous transformation matrix the robot gripper requires in order to grasp an arbitrarily placed 3D object is estimated, posed as a problem of 3D displacement (motion) estimation between the reference model of an object and the actual model of the object. The basic algorithm is further extended to handle multiple object manipulation and recognition. Experimental results are presented.
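    As a sketch of the displacement-estimation step only: assuming point correspondences between the reference wire-frame model and the actual model are already available, the standard SVD-based least-squares method recovers the rigid transform, and hence a candidate homogeneous transformation matrix for the gripper. The function below is a generic illustration under that assumption, not the dissertation's algorithm.

        import numpy as np

        def estimate_displacement(ref_pts, act_pts):
            # Least-squares rigid transform mapping reference-model points onto
            # corresponding actual-model points (SVD method; ref_pts, act_pts are Nx3).
            ref_c, act_c = ref_pts.mean(axis=0), act_pts.mean(axis=0)
            H = (ref_pts - ref_c).T @ (act_pts - act_c)   # cross-covariance of centered sets
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
            R = Vt.T @ D @ U.T
            t = act_c - R @ ref_c
            T = np.eye(4)                 # homogeneous transformation matrix
            T[:3, :3], T[:3, 3] = R, t
            return T

    Applying the returned matrix to the gripper's reference grasp pose would then yield a candidate grasp for the arbitrarily placed object.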