4 research outputs found

    Mapping and Localization in Urban Environments Using Cameras

    In this work we present a system to fully automatically create a highly accurate visual feature map from image data acquired from within a moving vehicle. We also present a system for high-precision self-localization, as well as a method to automatically learn a visual descriptor. The map-relative self-localization is centimeter-accurate and enables autonomous driving.
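    The abstract gives no implementation details, but the pipeline it describes (match features from a live camera image against a prebuilt, georeferenced feature map, then solve for the camera pose) can be sketched roughly as below. This is a minimal illustration only: ORB stands in for the paper's learned descriptor, and every function name, data layout, and parameter here is an assumption rather than something taken from the paper.

```python
import numpy as np
import cv2

def localize_against_feature_map(image, map_points_3d, map_descriptors, K):
    """Toy map-relative localization sketch (illustrative names only).

    image          : grayscale live image from the vehicle camera
    map_points_3d  : (N, 3) array of landmark positions in map coordinates
    map_descriptors: (N, 32) uint8 binary descriptors stored with the landmarks
    K              : 3x3 camera intrinsic matrix
    """
    # ORB is used here as a stand-in for the paper's learned descriptor.
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return None

    # Match live descriptors against the map (Hamming distance for binary descriptors).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 6:
        return None

    obj_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])
    img_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])

    # Robust PnP recovers the map-relative camera pose despite outlier matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None,
                                                 reprojectionError=3.0)
    return (rvec, tvec) if ok else None
```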

    Generation and Exploitation of Local Orthographic Imagery for Road Vehicle Localisation

    This paper is about road vehicle localisation based on vision using synthesised local orthographic imagery. We exploit state-of-the-art stereo visual odometry (VO) on our survey vehicle to generate high-precision synthetic orthographic images of the road surface as would be seen from overhead. The fidelity and detail of these images far exceed that of aerial photographs. When undertaking subsequent passes of the same route, the vehicle is localised against the survey vehicle's trajectory by maximising the mutual information between the synthetic orthographic images and live image streams. Thus we explicitly leverage the gross appearance of the workspace rather than a discrete set of point features. We test our technique on data gathered from a road vehicle and show that centimetre-level precision is possible without the complexity and instability of contemporary feature-based techniques. © 2012 IEEE
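    As a rough illustration of the matching step, the toy sketch below maximises mutual information between the survey vehicle's synthetic orthographic image and a live orthographic patch over a small grid of pixel offsets. The paper optimises over the vehicle pose along the survey trajectory; the exhaustive pixel search, histogram-based MI estimate, and all names below are simplifying assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def localize_by_mi(survey_ortho, live_ortho, search=10):
    """Find the (dx, dy) pixel offset of a live orthographic patch within the
    survey orthographic image that maximises mutual information.

    Assumes survey_ortho is larger than live_ortho by 2*search pixels in each
    dimension, with the nominal alignment at offset (search, search)."""
    h, w = live_ortho.shape
    best, best_offset = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = search + dy, search + dx
            window = survey_ortho[y0:y0 + h, x0:x0 + w]
            mi = mutual_information(window, live_ortho)
            if mi > best:
                best, best_offset = mi, (dx, dy)
    return best_offset
```

    The design point the paper emphasises is that this objective uses the gross appearance of the road surface, so it does not depend on repeatably detecting and matching discrete point features.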

    Robust Localization in 3D Prior Maps for Autonomous Driving.

    In order to navigate autonomously, many self-driving vehicles require precise localization within an a priori known map that is annotated with exact lane locations, traffic signs, and additional metadata that govern the rules of the road. This approach transforms the extremely difficult and unpredictable task of online perception into a more structured localization problem, where exact localization in these maps provides the autonomous agent a wealth of knowledge for safe navigation. This thesis presents several novel localization algorithms that leverage a high-fidelity three-dimensional (3D) prior map that together provide a robust and reliable framework for vehicle localization. First, we present a generic probabilistic method for localizing an autonomous vehicle equipped with a 3D light detection and ranging (LIDAR) scanner. This proposed algorithm models the world as a mixture of several Gaussians, characterizing the z-height and reflectivity distribution of the environment, which we rasterize to facilitate fast and exact multiresolution inference. Second, we propose a visual localization strategy that replaces the expensive 3D LIDAR scanners with significantly cheaper, commodity cameras. In doing so, we exploit a graphics processing unit to generate synthetic views of our belief environment, resulting in a localization solution that achieves a similar order of magnitude error rate with a sensor that is several orders of magnitude cheaper. Finally, we propose a visual obstacle detection algorithm that leverages knowledge of our high-fidelity prior maps in its obstacle prediction model. This not only provides obstacle awareness at high rates for vehicle navigation, but also improves our visual localization quality as we are cognizant of static and non-static regions of the environment. All of these proposed algorithms are demonstrated to be real-time solutions for our self-driving car.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133410/1/rwolcott_1.pd
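    As a toy illustration of the first contribution, the sketch below scores a candidate 2D pose by evaluating LIDAR z-heights against per-cell Gaussian statistics stored in a rasterized map. It is a single-resolution, z-height-only simplification of the thesis's multiresolution Gaussian-mixture formulation, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def score_pose(scan_xyz, pose_xy, z_mean_grid, z_var_grid, cell_size=0.2):
    """Log-likelihood of a LIDAR scan under a rasterized Gaussian z-height map.

    scan_xyz    : (N, 3) LIDAR points in the vehicle frame
    pose_xy     : candidate (x, y) translation of the vehicle in the map frame
    z_mean_grid : (H, W) per-cell mean z-height of the prior map
    z_var_grid  : (H, W) per-cell z-height variance of the prior map
    """
    xy = scan_xyz[:, :2] + np.asarray(pose_xy)          # shift scan into map frame
    cols = (xy[:, 0] / cell_size).astype(int)
    rows = (xy[:, 1] / cell_size).astype(int)
    h, w = z_mean_grid.shape
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)

    score = 0.0
    for r, c, z in zip(rows[valid], cols[valid], scan_xyz[valid, 2]):
        var = max(z_var_grid[r, c], 1e-3)               # guard against degenerate cells
        # Gaussian log-density of the observed z-height under the map cell.
        score += -0.5 * np.log(2 * np.pi * var) - 0.5 * (z - z_mean_grid[r, c]) ** 2 / var
    return score
```

    A localizer would evaluate this score over a small grid of candidate poses (and headings) and pick the maximum; the thesis additionally folds in reflectivity and uses a multiresolution search for speed.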

    Cross-Layer System Design for Autonomous Driving

    Autonomous driving has gained tremendous popularity and has recently become one of the most prominent emerging applications, allowing a vehicle to drive by itself without help from a human. Demand for this application continues to grow, leading to ever-increasing investment from industry over the last decade. Unfortunately, autonomous driving systems remain unavailable to the public and are still under development, even with the considerable advances recently achieved in our community. Several key challenges are observed across the stack of autonomous driving systems and must be addressed to bridge the gap. This dissertation investigates cross-layer autonomous driving systems, from hardware architecture and software algorithms to human-vehicle interaction. In the hardware architecture layer, we investigate and present the design constraints of autonomous driving systems. With an end-to-end autonomous driving system framework we built, we accelerate the identified computational bottlenecks and thoroughly investigate the implications and trade-offs across various accelerator platforms. In the software algorithm layer, we propose an acceleration technique for object recognition, one of the critical bottlenecks in autonomous driving systems. We exploit the similarity across frames in streaming video from autonomous vehicles and reuse intermediate outputs computed in the algorithm to reduce the computation required and improve performance. In the human-vehicle interaction layer, we design a conversational in-vehicle interface framework that enables drivers to interact with vehicles using natural human language, improving the usability of autonomous driving features. We also integrate this framework into a commercially available vehicle and conduct a real-world driving study.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149951/1/shihclin_1.pd
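    The frame-reuse idea in the software algorithm layer can be sketched roughly as follows: skip the expensive recognition pass whenever the incoming frame is nearly identical to the last one that was actually processed. The dissertation reuses intermediate outputs inside the recognition algorithm rather than whole results and uses a more principled similarity test; the mean-pixel-difference check, threshold, and names below are assumptions for illustration.

```python
import numpy as np

def detect_with_reuse(frames, run_detector, diff_threshold=4.0):
    """Run an object detector over a video stream, reusing cached results
    when consecutive frames are nearly identical.

    frames        : iterable of grayscale or RGB frames as numpy arrays
    run_detector  : callable taking a frame and returning detection results
    diff_threshold: mean absolute pixel difference below which results are reused
    """
    cached_frame, cached_result = None, None
    results = []
    for frame in frames:
        if cached_frame is not None:
            # Cheap inter-frame similarity proxy: mean absolute pixel difference.
            diff = np.mean(np.abs(frame.astype(np.float32) -
                                  cached_frame.astype(np.float32)))
            if diff < diff_threshold:
                results.append(cached_result)    # reuse the expensive detector output
                continue
        cached_result = run_detector(frame)      # full (expensive) recognition pass
        cached_frame = frame
        results.append(cached_result)
    return results
```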