3 research outputs found

    Construction of a Continuous Task Execution System for Humanoid Robots Based on Accumulated State Estimation (積算状態推定に基づくヒューマノイドロボットの継続的タスク実行システムの構成法)

    Degree type: Doctoral degree (course doctorate). Examination committee: (Chair) Associate Professor Kei Okada (岡田 慧), Professor Yoshihiko Nakamura (中村 仁彦), Professor Masayuki Inaba (稲葉 雅幸), Professor Yasuo Kuniyoshi (國吉 康夫), and Associate Professor Wataru Takano (高野 渉), all of The University of Tokyo. University of Tokyo (東京大学)

    Direct Superpixel Labeling for Mobile Robot Navigation Using Learned General Optical Flow Templates

    © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), 14-18 September 2014, Chicago, IL. DOI: 10.1109/IROS.2014.6942685
    Towards the goal of autonomous obstacle avoidance for mobile robots, we present a method for superpixel labeling using optical flow templates. Optical flow provides a rich source of information that complements image appearance and point clouds in determining traversability. While much past work uses optical flow for traversability in a heuristic manner, the method we present here instead classifies flow according to several optical flow templates that are specific to the typical environment shape. Our first contribution over prior work in superpixel labeling using optical flow templates is a large improvement in accuracy and efficiency, achieved by inferring labels directly from spatiotemporal gradients instead of from independently computed optical flow, and by improved optical flow modeling for obstacles. Our second contribution is extending superpixel labeling methods to arbitrary camera optics without the need to calibrate the camera, by developing and demonstrating a method for learning optical flow templates from unlabeled video. Our experiments demonstrate successful obstacle detection on an outdoor mobile robot dataset.
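    The abstract above describes labeling each superpixel by testing candidate optical flow templates directly against spatiotemporal image gradients rather than against a precomputed flow field. The fragment below is a minimal, illustrative sketch of that idea using the brightness-constancy residual; the names (label_superpixel, templates, Ix, Iy, It) are assumptions made for the example and the paper's actual inference, obstacle flow modeling, and template learning are more involved.

        import numpy as np

        def label_superpixel(Ix, Iy, It, mask, templates):
            """Pick the best-matching flow template for one superpixel.

            Ix, Iy, It : (H, W) spatial and temporal image gradients.
            mask       : (H, W) boolean mask selecting the superpixel's pixels.
            templates  : dict mapping a label to a predicted flow field (u, v),
                         each component of shape (H, W).

            Scores each template by the mean brightness-constancy residual
            |Ix*u + Iy*v + It| inside the superpixel, so no explicit optical
            flow field has to be estimated first; a lower residual is a
            better match.
            """
            scores = {}
            for label, (u, v) in templates.items():
                residual = np.abs(Ix[mask] * u[mask] + Iy[mask] * v[mask] + It[mask])
                scores[label] = float(residual.mean())
            return min(scores, key=scores.get)

    A practical version would likely normalize the gradients and include an outlier or "no match" class along the lines of the obstacle modeling the abstract mentions.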

    Robust Localization in 3D Prior Maps for Autonomous Driving.

    In order to navigate autonomously, many self-driving vehicles require precise localization within an a priori known map that is annotated with exact lane locations, traffic signs, and additional metadata that govern the rules of the road. This approach transforms the extremely difficult and unpredictable task of online perception into a more structured localization problem, where exact localization in these maps provides the autonomous agent a wealth of knowledge for safe navigation. This thesis presents several novel localization algorithms that leverage a high-fidelity three-dimensional (3D) prior map and that together provide a robust and reliable framework for vehicle localization. First, we present a generic probabilistic method for localizing an autonomous vehicle equipped with a 3D light detection and ranging (LIDAR) scanner. This proposed algorithm models the world as a mixture of several Gaussians characterizing the z-height and reflectivity distribution of the environment, which we rasterize to facilitate fast and exact multiresolution inference. Second, we propose a visual localization strategy that replaces the expensive 3D LIDAR scanners with significantly cheaper, commodity cameras. In doing so, we exploit a graphics processing unit to generate synthetic views of our belief environment, resulting in a localization solution that achieves a similar order of magnitude error rate with a sensor that is several orders of magnitude cheaper. Finally, we propose a visual obstacle detection algorithm that leverages knowledge of our high-fidelity prior maps in its obstacle prediction model. This not only provides obstacle awareness at high rates for vehicle navigation, but also improves our visual localization quality as we are cognizant of static and non-static regions of the environment. All of these proposed algorithms are demonstrated to be real-time solutions for our self-driving car.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133410/1/rwolcott_1.pd
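    As a rough illustration of the first contribution (LIDAR localization against a rasterized Gaussian prior map), the sketch below scores one candidate pose by the log-likelihood of observed z-heights under a single Gaussian per map cell. This is a deliberate simplification: the thesis models each cell as a mixture of Gaussians over both z-height and reflectivity and performs multiresolution inference, and every name here (score_pose, grid_mean, grid_var, origin, resolution) is an assumption for the example only.

        import numpy as np

        def score_pose(points_xyz, grid_mean, grid_var, origin, resolution):
            """Log-likelihood of a LIDAR scan against a rasterized Gaussian map.

            points_xyz : (N, 3) scan points already transformed by the candidate pose.
            grid_mean  : (H, W) per-cell mean z-height of the prior map.
            grid_var   : (H, W) per-cell z-height variance of the prior map.
            origin     : (2,) world x, y coordinates of cell (0, 0).
            resolution : map cell size in meters.
            """
            # Map each point to its grid cell and keep only in-bounds points.
            ij = np.floor((points_xyz[:, :2] - origin) / resolution).astype(int)
            inside = ((ij >= 0).all(axis=1)
                      & (ij[:, 0] < grid_mean.shape[0])
                      & (ij[:, 1] < grid_mean.shape[1]))
            ij, z = ij[inside], points_xyz[inside, 2]

            mean = grid_mean[ij[:, 0], ij[:, 1]]
            var = np.maximum(grid_var[ij[:, 0], ij[:, 1]], 1e-6)
            # Gaussian log-density of each observed z under its cell's model.
            logp = -0.5 * (np.log(2.0 * np.pi * var) + (z - mean) ** 2 / var)
            return float(logp.sum())

    A simple localizer could evaluate score_pose over a grid of candidate (x, y, heading) offsets around the current belief and keep the maximum, coarse-to-fine, in the spirit of the multiresolution inference the abstract mentions.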