4 research outputs found

    Belief Space-Guided Navigation for Robots and Autonomous Vehicles

    Get PDF
    Navigating through the environment is a fundamental capability for mobile robots, yet it remains very challenging today. Many robotic applications, such as mining, disaster response, and agriculture, require robots to move and perform tasks in environments that are stochastic and sometimes unpredictable. A robot often cannot directly observe its current state; instead, it estimates a distribution over the set of possible states from sensor measurements that are both noisy and partial. Due to actuation noise, the robot's actual position after applying a motion command also differs from its prediction. Classic navigation algorithms must therefore be adapted to settings where the environment behaves stochastically and motion execution is highly uncertain. To solve such challenging problems, we propose to guide the robot's navigation in the belief space. Belief space-guided navigation differs fundamentally from planning without uncertainty, where the robot's state is assumed to be known precisely. The robot senses its environment, estimates its current state under perception uncertainty, and decides whether a new (or a priori) action is appropriate. Based on that determination, it drives its actuators to move through the environment under motion uncertainty. This inspires us to connect robot perception and motion planning and to reason about uncertainty to improve plan quality, so that the robot can follow a collision-free, kinodynamically feasible, and task-optimal trajectory. In this dissertation, we explore belief space-guided robotic navigation problems, including belief space-based scene understanding for autonomous vehicles, and introduce belief space-guided robotic planning.
    We first investigate how belief space can facilitate scene understanding in the context of lane marking quality assessment for autonomous driving. We pose a new problem: measuring the quality of roads to ensure they are ready for autonomous driving. We develop three quality metrics for lane markings (LMs), a correctness metric, a shape metric, and a visibility metric, together with algorithms to assess LM quality and facilitate scene understanding. As another example of using belief space for better scene understanding, we utilize crowdsourced images from multiple vehicles to help verify LMs for high-definition (HD) map maintenance. An LM is deemed consistent if the belief functions from the map and from the image pass a statistical hypothesis test. We further extend the Bayesian belief model into a sequential belief update using crowdsourced images: LMs with a high probability of existence are kept in the HD map, whereas those with a low probability of existence are removed.
    Belief space can also help us tightly connect perception and motion planning. As an example, we develop a motion planning strategy for autonomous vehicles. Named the virtual lane boundary approach, this framework considers obstacle avoidance, trajectory smoothness (to satisfy vehicle kinodynamic constraints), trajectory continuity (to avoid sudden movements), global positioning system (GPS) following quality (to execute the global plan), and lane following or partial direction following (to meet human expectations). Consequently, the resulting vehicle motion is more human-compatible than that of existing approaches.
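    To make the sequential belief update for HD map maintenance described above concrete, here is a minimal sketch in Python. It assumes a simple binary sensor model for whether an LM is detected in each crowdsourced image; the prior, likelihood values, and keep/remove threshold below are illustrative assumptions, not the dissertation's actual parameters.

```python
def update_existence_belief(prior, detected,
                            p_detect_given_exists=0.9,
                            p_detect_given_absent=0.2):
    """One Bayesian update of P(LM exists) from a single image observation."""
    if detected:
        likelihood_exists = p_detect_given_exists
        likelihood_absent = p_detect_given_absent
    else:
        likelihood_exists = 1.0 - p_detect_given_exists
        likelihood_absent = 1.0 - p_detect_given_absent
    numerator = likelihood_exists * prior
    return numerator / (numerator + likelihood_absent * (1.0 - prior))


def maintain_lane_marking(observations, prior=0.5, keep_threshold=0.5):
    """Sequentially fold in crowdsourced observations, then keep or remove the LM."""
    belief = prior
    for detected in observations:
        belief = update_existence_belief(belief, detected)
    return ("keep" if belief >= keep_threshold else "remove"), belief


# Example: four images detect the LM, one misses it.
print(maintain_lane_marking([True, True, False, True, True]))
```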
    As another example of how belief space can help guide robots in different tasks, we propose to use it for the probabilistic boundary coverage of unknown target fields (UTFs). We employ a Gaussian process as a local belief function to approximate the field boundary distribution in an ellipse-shaped local region. The local belief function allows us to predict the UTF boundary trend and establish an adjacent ellipse for further exploration. The process is governed by a depth-first search until the UTF is approximately enclosed by connected ellipses, at which point the boundary coverage process ends. We formally prove that our boundary coverage process guarantees enclosure above a given coverage ratio with a preset probability threshold.
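    A minimal sketch of the Gaussian-process local belief idea, assuming scikit-learn is available: boundary samples observed inside the current ellipse are fit with a GP, and the extrapolated trend (with its uncertainty) suggests where to place the next ellipse. The local frame, kernel choice, and step size are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def propose_next_ellipse_center(boundary_xy, step=1.0):
    """boundary_xy: (N, 2) boundary samples inside the current ellipse, expressed
    in a local frame whose x-axis follows the boundary's main direction."""
    x = boundary_xy[:, 0:1]
    y = boundary_xy[:, 1]
    # Fit a GP as a local belief over the boundary: y ~ f(x) + noise.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(x, y)
    # Extrapolate slightly beyond the explored region; the predictive standard
    # deviation quantifies how uncertain the predicted boundary trend is.
    x_next = np.array([[x.max() + step]])
    y_next, y_std = gp.predict(x_next, return_std=True)
    return np.array([x_next[0, 0], y_next[0]]), y_std[0]
```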

    Efficient Caging Planning Capable of Handling Uncertainty Using the Configuration Space of the Target Object and Finger Placements

    Get PDF
    Degree type: Doctorate by coursework (課程博士), University of Tokyo (東京大学)

    Coverage Diameters of Polygons

    No full text
    This paper formalizes and proposes an algorithm to compute coverage diameters of polygons in 2D. Roughly speaking, the coverage diameter of a polygon is the longest possible distance between two points such that the polygon cannot pass between them. The primary use of the coverage diameter is to form a cage for transporting an object, not necessarily convex, with multiple disc-shaped robots. The main idea of the computation is to convert the problem into a graph structure and then search for a solution path in that graph. The proposed algorithm runs in O(n² log n) time for an input polygon with n vertices.
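    The paper's graph-based algorithm for general polygons is not reproduced here, but a small sketch can illustrate the kind of quantity involved in the simplest case. Assuming, as a simplification not taken from the paper, that for a convex polygon the coverage diameter is closely related to the polygon's minimum width (the narrowest gap it cannot slide through in any orientation), the following computes that minimum width by checking every edge direction.

```python
import math


def min_width_convex(vertices):
    """Minimum width of a convex polygon given as a CCW list of (x, y) vertices.

    For each edge, the width is the largest perpendicular distance from the
    edge's supporting line to any vertex; the minimum over all edges is the
    polygon's minimum width. Runs in O(n^2), purely for illustration.
    """
    n = len(vertices)
    best = float("inf")
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        ex, ey = bx - ax, by - ay
        edge_len = math.hypot(ex, ey)
        if edge_len == 0.0:
            continue  # skip degenerate (zero-length) edges
        width = max(abs((px - ax) * ey - (py - ay) * ex) / edge_len
                    for px, py in vertices)
        best = min(best, width)
    return best


# Example: a 1 x 3 axis-aligned rectangle has minimum width 1.
print(min_width_convex([(0, 0), (3, 0), (3, 1), (0, 1)]))  # -> 1.0
```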