
    Flexible Stereo: Constrained, Non-rigid, Wide-baseline Stereo Vision for Fixed-wing Aerial Platforms

    This paper proposes a computationally efficient method to estimate the time-varying relative pose between two visual-inertial sensor rigs mounted on the flexible wings of a fixed-wing unmanned aerial vehicle (UAV). The estimated relative poses are used to generate highly accurate depth maps in real time and can be employed for obstacle avoidance in low-altitude flight or landing maneuvers. The approach is structured as follows: initially, a wing model is identified by fitting a probability density function to measured deviations from the nominal relative baseline transformation. At run time, the prior knowledge about the wing model is fused in an Extended Kalman filter (EKF) together with relative pose measurements obtained from solving a relative perspective-n-point (PnP) problem, and with the linear accelerations and angular velocities measured by the two inertial measurement units (IMUs) rigidly attached to the cameras. Results from extensive synthetic experiments demonstrate that the proposed framework estimates highly accurate baseline transformations and depth maps. Comment: Accepted for publication in IEEE International Conference on Robotics and Automation (ICRA), 2018, Brisbane.
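    As a rough illustration of the fusion step described above (a minimal sketch, not the authors' implementation), the snippet below runs an EKF over only the 3D baseline translation between the two wing-mounted rigs: the wing model is reduced to a random-walk prior whose process noise would come from the fitted deviation PDF, and the relative PnP solution enters as a direct position measurement. The class name, noise matrices, and nominal baseline are assumed placeholders.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): an EKF that tracks only the
# 3D baseline translation between the two wing-mounted camera rigs. The wing
# model is reduced to a random-walk prior whose process noise Q_wing would be
# derived from the fitted deviation PDF; R_pnp is the covariance of the
# relative translation recovered from the PnP solution. All names and values
# are illustrative assumptions.

class BaselineEKF:
    def __init__(self, t_nominal, Q_wing, R_pnp):
        self.x = np.asarray(t_nominal, dtype=float)   # state: relative translation [m]
        self.P = np.eye(3) * 1e-4                     # initial state covariance
        self.Q = np.asarray(Q_wing)                   # process noise from wing model
        self.R = np.asarray(R_pnp)                    # PnP measurement noise

    def predict(self):
        # Random-walk prediction: the flexible wing lets the baseline drift
        # slowly around its nominal value, so F = I and only P grows.
        self.P = self.P + self.Q

    def update(self, t_pnp):
        # The PnP translation is a direct measurement of the state, so H = I.
        y = np.asarray(t_pnp, dtype=float) - self.x   # innovation
        S = self.P + self.R                           # innovation covariance
        K = self.P @ np.linalg.inv(S)                 # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K) @ self.P
        return self.x

# Usage with made-up numbers: a nominal 3 m baseline along the wing (y-axis).
ekf = BaselineEKF(t_nominal=[0.0, 3.0, 0.0],
                  Q_wing=np.diag([1e-6, 1e-6, 1e-5]),
                  R_pnp=np.diag([1e-4, 1e-4, 1e-3]))
ekf.predict()
print(ekf.update([0.01, 3.02, -0.01]))
```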

    Automated Visual Database Creation For A Ground Vehicle Simulator

    This research focuses on extracting road models from stereo video sequences taken from a moving vehicle. The proposed method combines color-histogram-based segmentation, active contours (snakes), and morphological processing to extract road boundary coordinates for conversion into Matlab or Multigen OpenFlight compatible polygonal representations. Color segmentation uses an initial truth frame to develop a color probability density function (PDF) of the road versus the terrain. Subsequent frames are segmented using a maximum a posteriori (MAP) criterion, and the resulting templates are used to update the PDFs. Color segmentation worked well where there was minimal shadowing and occlusion by other cars. A snake algorithm was used to find the road edges, which were converted to 3D coordinates using stereo disparity and vehicle position information. The resulting 3D road models were accurate to within 1 meter.
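    The MAP segmentation step lends itself to a short sketch. The following is a hedged illustration, not the thesis code: road and terrain color PDFs are built as normalized hue histograms from a labelled truth frame, and each pixel of a new frame is assigned the class with the larger posterior. The bin count, priors, and synthetic data are assumptions.

```python
import numpy as np

# Hedged sketch of the MAP road/terrain segmentation described above.
# The histograms stand in for the color PDFs learned from the truth frame;
# bin count, priors, and the synthetic samples are illustrative assumptions.

N_BINS = 32

def build_pdf(hue_samples):
    """Normalized hue histogram (0..179, OpenCV convention) with smoothing."""
    hist, _ = np.histogram(hue_samples, bins=N_BINS, range=(0, 180))
    return (hist + 1e-6) / (hist.sum() + 1e-6 * N_BINS)

def map_segment(hue_image, pdf_road, pdf_terrain, p_road=0.4):
    """Label each pixel road (True) or terrain (False) by the MAP rule."""
    bins = np.clip((hue_image.astype(int) * N_BINS) // 180, 0, N_BINS - 1)
    post_road = pdf_road[bins] * p_road
    post_terrain = pdf_terrain[bins] * (1.0 - p_road)
    return post_road > post_terrain

# Example with synthetic data: road hues cluster near 20, terrain near 60.
pdf_road = build_pdf(np.random.normal(20, 5, 5000))
pdf_terrain = build_pdf(np.random.normal(60, 10, 5000))
frame_hue = np.random.normal(25, 8, (120, 160))
mask = map_segment(frame_hue, pdf_road, pdf_terrain)
print("fraction classified as road:", round(float(mask.mean()), 2))
```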

    Homography-Based Passive Vehicle Speed Measuring

    An apparatus for passively measuring vehicle speed includes at least one video camera for acquiring images of a roadway upon which at least one moving vehicle travels, each of the images comprising a plurality of pixels. A computer processes pixel data associated with the plurality of pixels, including using an adaptive background subtraction model to perform background subtraction on the pixel data to identify a plurality of foreground pixels, extracting a plurality of blobs from the foreground pixels, and rectifying the blobs to form a plurality of rectified blobs using a homography matrix. The homography matrix is obtained by comparing at least one known distance in the roadway with distances between the pixels. Using a planar homography transform, the moving vehicle is identified from the plurality of rectified blobs, wherein respective ones of the plurality of rectified blobs include vehicle data associated with the moving vehicle. The speed of the moving vehicle is computed from the vehicle data.
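    To make the homography step concrete, here is a minimal sketch under assumed values (the homography matrix, frame rate, and sample centroids are hypothetical, not from the patent text): blob centroids in pixel coordinates are mapped onto the road plane in metres, and speed follows from the displacement between consecutive frames.

```python
import numpy as np

# Minimal sketch of the homography rectification and speed computation.
# H, the frame rate, and the centroid positions are illustrative assumptions.

def to_road_plane(H, pts_img):
    """Apply a planar homography to Nx2 pixel coordinates -> Nx2 road-plane coords (m)."""
    pts = np.hstack([pts_img, np.ones((len(pts_img), 1))])   # homogeneous coordinates
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]                     # dehomogenize

def speed_kmh(p_prev, p_curr, fps):
    """Speed from two road-plane positions (metres) one frame apart."""
    return np.linalg.norm(p_curr - p_prev) * fps * 3.6

# Hypothetical homography, as if calibrated from known lane-marking distances.
H = np.array([[0.02, 0.0,   -5.0],
              [0.0,  0.05, -20.0],
              [0.0,  0.001,  1.0]])
centroids = np.array([[320.0, 400.0], [322.0, 392.0]])        # same blob, two frames
road_pts = to_road_plane(H, centroids)
print(round(speed_kmh(road_pts[0], road_pts[1], fps=30), 1), "km/h")
```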

    Guidance for benthic habitat mapping: an aerial photographic approach

    This document, Guidance for Benthic Habitat Mapping: An Aerial Photographic Approach, describes proven technology that can be applied in an operational manner by state-level scientists and resource managers. This information is based on the experience gained by NOAA Coastal Services Center staff and state-level cooperators in the production of a series of benthic habitat data sets in Delaware, Florida, Maine, Massachusetts, New York, Rhode Island, the Virgin Islands, and Washington, as well as during Center-sponsored workshops on coral remote sensing and on seagrass and aquatic habitat assessment. The original benthic habitat document, NOAA Coastal Change Analysis Program (C-CAP): Guidance for Regional Implementation (Dobson et al.), was published by the Department of Commerce in 1995. That document summarized procedures that were to be used by scientists throughout the United States to develop consistent and reliable coastal land cover and benthic habitat information. Advances in technology and new methodologies for generating these data created the need for this updated report, which builds upon the foundation of its predecessor. (PDF contains 39 pages)

    Towards End-to-end Car License Plate Location and Recognition in Unconstrained Scenarios

    Benefiting from the rapid development of convolutional neural networks, the performance of car license plate detection and recognition has been largely improved. Nonetheless, challenges still exist, especially for real-world applications. In this paper, we present an efficient and accurate framework to solve the license plate detection and recognition tasks simultaneously. It is a lightweight and unified deep neural network that can be optimized end-to-end and run in real time. Specifically, for unconstrained scenarios, an anchor-free method is adopted to efficiently detect the bounding box and four corners of a license plate, which are used to extract and rectify the target region features. Then, a novel convolutional neural network branch is designed to further extract features of characters without segmentation. Finally, the recognition task is treated as a sequence labelling problem, which is solved directly by Connectionist Temporal Classification (CTC). Several public datasets, including images collected from different scenarios under various conditions, are chosen for evaluation. A large number of experiments indicate that the proposed method significantly outperforms previous state-of-the-art methods in both speed and precision.
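    The CTC-based recognition branch can be sketched in a few lines. This is a hedged illustration rather than the paper's network: a per-time-step classifier stands in for the CNN branch, and PyTorch's CTC loss trains it without character segmentation; alphabet size, sequence length, and plate length are assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of CTC-based sequence labelling for plate recognition.
# The linear layer stands in for the paper's CNN branch; sizes are assumptions.

NUM_CLASSES = 37          # 36 characters + 1 CTC blank (index 0), assumed
T, N, FEAT = 20, 2, 64    # time steps, batch size, feature width, assumed

head = nn.Linear(FEAT, NUM_CLASSES)
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

features = torch.randn(T, N, FEAT)                     # (time, batch, features)
log_probs = head(features).log_softmax(dim=2)

targets = torch.randint(1, NUM_CLASSES, (N, 7))        # 7-character plates
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 7, dtype=torch.long)

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                        # gradients flow into the head

# Greedy decoding at inference: argmax per step, collapse repeats, drop blanks.
best = log_probs.argmax(dim=2)                         # (T, N)
decoded = []
for n in range(N):
    seq, prev = [], -1
    for t in range(T):
        c = int(best[t, n])
        if c != prev and c != 0:
            seq.append(c)
        prev = c
    decoded.append(seq)
print(float(loss), decoded)
```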

    Integrated Stereovision for an Autonomous Ground Vehicle Competing in the Darpa Grand Challenge

    The DARPA Grand Challenge (DGC) 2005 was a competition, in the form of a desert race for autonomous ground vehicles, arranged by the U.S. Defense Advanced Research Projects Agency (DARPA). Its purpose was to encourage research and development of related technology. The objective of the race was to cover a distance of 131.6 miles in less than 10 hours without any human interaction. Only public GPS signals and terrain sensors were allowed for navigation and obstacle detection. One of the teams competing in the DGC was Team Caltech from the California Institute of Technology, consisting primarily of undergraduate students. The vehicle representing Team Caltech was a 2005 Ford E-350 van, named Alice. Alice had been modified for off-road driving and equipped with multiple sensors, computers, and actuators. One type of terrain sensor used on Alice was stereovision: two camera pairs were used for short- and long-range obstacle detection. This master thesis concerns the development, testing, and integration of the stereovision sensors during the final four months leading up to the race. At the outset, the stereovision system on Alice was not ready for use and had not undergone any testing. The work described in this thesis enabled operation of stereovision and improved its capability such that it increased the overall performance of Alice. Reliability was demonstrated through multiple desert field tests, and obstacle avoidance and navigation using only stereovision were successfully demonstrated. The completed work includes the design and implementation of algorithms to improve camera focus and exposure control, increase processing speed, and remove noise; hardware and software parameters were also configured to achieve the best possible operation. Alice qualified for the race as one of the top ten vehicles. However, she was only able to complete about 8 miles before running over a concrete barrier and out of the course, as a result of hardware failures and state estimation errors.
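    For the short-range stereovision mentioned above, a generic disparity-to-depth sketch (not the thesis code) looks roughly as follows; the focal length, baseline, and range threshold are placeholder assumptions, and the simple range test only flags nearby returns rather than performing Alice's full obstacle classification.

```python
import numpy as np
import cv2

# Illustrative sketch: block-matching disparity, conversion to depth, and a
# simple range threshold that flags nearby returns as obstacle candidates.
# Focal length, baseline, and threshold are assumed placeholder values.

FOCAL_PX = 700.0      # focal length in pixels (assumed)
BASELINE_M = 1.5      # stereo baseline in metres (assumed)

def depth_map(left_gray, right_gray):
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                     # invalid matches
    return FOCAL_PX * BASELINE_M / disp          # Z = f * B / d  (metres)

def candidate_mask(depth, max_range_m=30.0):
    """Mark pixels with valid depth closer than max_range_m."""
    return np.nan_to_num(depth, nan=np.inf) < max_range_m

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is not None and right is not None:
    mask = candidate_mask(depth_map(left, right))
    print("candidate pixels:", int(mask.sum()))
```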

    Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing

    Multi-camera systems are being deployed in a variety of vehicles and mobile robots today. To eliminate the need for cost- and labor-intensive maintenance and calibration, continuous self-calibration is highly desirable. In this book, we present such an approach for the self-calibration of multi-camera systems for vehicle surround sensing. In an extensive evaluation, we assess our algorithm quantitatively using real-world data.