
    The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping

    Many tasks performed by autonomous vehicles, such as road marking detection, object tracking, and path planning, are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D top-down view. However, owing to the camera's finite resolution, this leads to unnatural blurring and stretching of objects at greater distances, limiting applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g. road markings) and more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout more clearly. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach.

    Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 201
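    As background to the classical technique the paper boosts: under a flat-road assumption, IPM reduces to a planar homography built from the camera intrinsics and its pitch and mounting height. The sketch below shows this in plain numpy with made-up toy calibration values (intrinsics, pitch, camera height are illustrative assumptions, not taken from the paper):

    ```python
    import numpy as np

    def ipm_homography(K, pitch_rad, cam_height):
        """Homography mapping image pixels to ground-plane (X, Z) coordinates.

        Assumes a flat road and a pinhole camera (x right, y down, z forward)
        pitched down by `pitch_rad`, mounted `cam_height` metres above the road.
        """
        c, s = np.cos(pitch_rad), np.sin(pitch_rad)
        R = np.array([[1.0, 0.0, 0.0],
                      [0.0,   c,  -s],
                      [0.0,   s,   c]])  # pitch: rotation about the camera x-axis
        # A ground point (X, cam_height, Z) in camera-aligned coordinates projects
        # as K @ (X*r1 + Z*r3 + cam_height*r2), giving the ground-to-image homography:
        H_g2i = K @ np.column_stack([R[:, 0], R[:, 2], cam_height * R[:, 1]])
        return np.linalg.inv(H_g2i)  # invert: image -> ground plane (IPM)

    def pixel_to_ground(H, u, v):
        """Map one pixel (u, v) to metric ground coordinates (X, Z)."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]  # dehomogenise

    # Toy intrinsics: focal lengths fx = fy = 800 px, principal point (320, 240).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    H = ipm_homography(K, pitch_rad=0.1, cam_height=1.5)
    X, Z = pixel_to_ground(H, 320.0, 400.0)  # a pixel below the horizon
    print(X, Z)
    ```

    Warping every pixel of the image through such a homography is what produces the blurring and stretching at distance that the paper's learned approach addresses.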

    Toward Online Probabilistic Path Replanning

    In this talk we present work on sensor-based motion planning in initially unknown dynamic environments. Motion detection and probabilistic motion modeling are combined with a smooth navigation function to perform on-line path planning and replanning in cluttered dynamic environments such as public exhibitions. Human behavior is unforeseeable in most situations that involve human-robot interaction, e.g. service robots or robotic companions. This makes motion prediction problematic (humans rarely move, for instance, with constant velocity along straight lines), especially in settings with large numbers of humans. Additionally, the robot is usually required to react swiftly rather than optimally; in other words, the time required to compute the plan becomes part of the optimality criterion. The "Probabilistic Navigation Function" (PNF) is an approach for planning in such cluttered dynamic environments. It relies on probabilistic worst-case computations of the collision risk and weighs regions based on that estimate. The PNF is intended to be used for gradient-descent control of a vehicle, where the gradient indicates the best trade-off between risk and detour. An underlying reactive collision avoidance provides the tight perception-action loop needed to cope with the remaining collision probability. As this is work in progress, we present the approach, describe the finished components, and give an outlook on remaining implementation issues. Two algorithmic building blocks have been developed and tested: on-line motion detection from a mobile platform is performed by the SLIP scan alignment method to separate static from dynamic objects (it also helps with pose estimation); the interface between motion detection and path planning is a probabilistic co-occurrence estimation that measures the risk of future collisions given environment constraints and worst-case scenarios, unifying dynamic and static elements. The risk is translated into traversal costs for an E* path planner, which produces smooth navigation functions that can incorporate new environmental information in near real time.
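    The risk-to-cost pipeline can be illustrated with a minimal grid-world sketch: per-cell collision risk is folded into traversal costs, a wavefront expansion builds a navigation function over the grid, and the robot follows it downhill. This uses plain Dijkstra rather than the E* interpolated planner the talk describes, and the risk values are made-up toy numbers:

    ```python
    import heapq

    def navigation_function(cost, goal):
        """Dijkstra wavefront: value[r][c] = minimal accumulated traversal cost
        from cell (r, c) to the goal, where cost[r][c] = base cost + risk penalty."""
        rows, cols = len(cost), len(cost[0])
        value = [[float('inf')] * cols for _ in range(rows)]
        value[goal[0]][goal[1]] = 0.0
        pq = [(0.0, goal)]
        while pq:
            v, (r, c) = heapq.heappop(pq)
            if v > value[r][c]:
                continue  # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nv = v + cost[nr][nc]
                    if nv < value[nr][nc]:
                        value[nr][nc] = nv
                        heapq.heappush(pq, (nv, (nr, nc)))
        return value

    def descend(value, start):
        """Greedy descent on the navigation function, i.e. always step to the
        neighbouring cell with the lowest value until the goal (value 0) is reached."""
        path, (r, c) = [start], start
        while value[r][c] > 0.0:
            r, c = min(((r + dr, c + dc)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < len(value) and 0 <= c + dc < len(value[0])),
                       key=lambda p: value[p[0]][p[1]])
            path.append((r, c))
        return path

    # Toy 4x4 grid: base traversal cost 1 everywhere, plus a high-risk band
    # (e.g. predicted human motion) in column 2.
    risk = [[0, 0, 5, 0],
            [0, 0, 5, 0],
            [0, 0, 0, 0],
            [0, 0, 5, 0]]
    cost = [[1 + risk[r][c] for c in range(4)] for r in range(4)]
    value = navigation_function(cost, goal=(0, 3))
    path = descend(value, start=(0, 0))
    print(path)  # the descent detours through the low-risk gap in row 2
    ```

    In the PNF setting the costs come from probabilistic worst-case co-occurrence estimates rather than a static map, and E* additionally interpolates across cells so the resulting navigation function is smooth enough for gradient-descent vehicle control.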