
    Visibility maintenance via controlled invariance for leader-follower Dubins-like vehicles

    The paper studies the visibility maintenance problem (VMP) for a leader-follower pair of Dubins-like vehicles with input constraints, and proposes an original solution based on the notion of controlled invariance. The nonlinear model describing the relative dynamics of the vehicles is interpreted as a linear uncertain system, with the leader robot acting as an external disturbance. The VMP is then reformulated as a linear constrained regulation problem with additive disturbances (DLCRP). Positive D-invariance conditions for linear uncertain systems with a parametric disturbance matrix are introduced and used to solve the VMP when box bounds on the state, control input, and disturbance are considered. The proposed design procedure is shown to be easily adaptable to more general working scenarios. Extensive simulation results are provided to illustrate the theory and show the effectiveness of our approach. Comment: 17 pages, 24 figures; extended version of the authors' journal paper submitted to Automatica.
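
    As a toy illustration of the invariance machinery (not the paper's continuous-time conditions), the sketch below tests robust controlled invariance of a box for a discrete-time uncertain system by checking linear-program feasibility at the box vertices; all matrices and bounds are invented for the example.

```python
# Minimal sketch: vertex-wise feasibility test for robust controlled
# invariance of a box under x+ = A(w) x + B u + E d with box-bounded
# state, input, parameter, and disturbance (all values illustrative).
import itertools
import numpy as np
from scipy.optimize import linprog

A0 = np.array([[1.0, 0.1], [0.0, 1.0]])   # nominal dynamics (assumed)
A1 = np.array([[0.0, 0.05], [0.0, 0.0]])  # parametric uncertainty direction
B  = np.array([[0.0], [0.1]])
E  = np.array([[0.05], [0.0]])

x_box = [(-1.0, 1.0), (-1.0, 1.0)]   # state box X
u_box = [(-2.0, 2.0)]                # input box U
w_box = [(-1.0, 1.0)]                # parameter box W
d_box = [(-1.0, 1.0)]                # disturbance box D

def vertices(box):
    return [np.array(v) for v in itertools.product(*box)]

def box_is_invariant():
    # By convexity it suffices to find, at each vertex of X, one input
    # u in U keeping the successor state in X for every uncertainty
    # vertex; each such test is a small linear-program feasibility check.
    lb = np.array([b[0] for b in x_box])
    ub = np.array([b[1] for b in x_box])
    for x in vertices(x_box):
        A_ub, b_ub = [], []
        for w in vertices(w_box):
            Aw = A0 + w[0] * A1
            for d in vertices(d_box):
                c0 = Aw @ x + E @ d            # constant part of x+
                A_ub.append(B);  b_ub.append(ub - c0)   # x+ <= ub
                A_ub.append(-B); b_ub.append(c0 - lb)   # x+ >= lb
        res = linprog(np.zeros(B.shape[1]), A_ub=np.vstack(A_ub),
                      b_ub=np.hstack(b_ub), bounds=u_box)
        if not res.success:
            return False
    return True

print("box robustly controlled invariant:", box_is_invariant())
```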

    Positional estimation techniques for an autonomous mobile robot

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
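
    For flavor, here is a minimal sketch of one classical ingredient of camera-plus-world-model localization: a least-squares position fix from bearings to known landmarks. The known-heading assumption and all coordinates are illustrative, not taken from the techniques surveyed above.

```python
# Minimal sketch: triangulating a robot's 2-D position from camera
# bearings to landmarks taken from a known world model.
import numpy as np

def fix_position(landmarks, bearings):
    """Least-squares position from global bearings to known landmarks.

    landmarks : (N, 2) array of map coordinates from the world model
    bearings  : (N,) global bearing angles (robot heading + camera pan)
    """
    s, c = np.sin(bearings), np.cos(bearings)
    # Each bearing constrains the robot to a line through the landmark:
    #   sin(t)*x - cos(t)*y = sin(t)*Lx - cos(t)*Ly
    A = np.column_stack([s, -c])
    b = s * landmarks[:, 0] - c * landmarks[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

landmarks = np.array([[0.0, 10.0], [10.0, 10.0], [10.0, 0.0]])
true_pos  = np.array([3.0, 4.0])
bearings  = np.arctan2(landmarks[:, 1] - true_pos[1],
                       landmarks[:, 0] - true_pos[0])
print(fix_position(landmarks, bearings))   # ~ [3. 4.]
```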

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lags, and (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works in a two-level manner. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses an RL method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minima limitations faced by RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs and replan subgoal sequences with SSG, since LSPI can cope with such uncertainties by exploiting its generalization ability. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps with relatively low action-switching frequencies and shorter path lengths, and that it can handle small changes in the environment. We further demonstrate that the design of reward functions and the types of training environments are important factors for learning feasible policies. Comment: 20 pages.
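
    The LSTD-Q evaluation step at the core of LSPI is compact enough to sketch; the version below is the generic textbook form with assumed feature matrices, not the SG-RL implementation.

```python
# Minimal sketch of the LSTD-Q step inside LSPI (generic form):
# solve A w = b built from sampled transitions.
import numpy as np

def lstdq(phi_sa, phi_next_sa, rewards, gamma=0.95):
    """One policy-evaluation step of LSPI.

    phi_sa      : (N, k) features of sampled (state, action) pairs
    phi_next_sa : (N, k) features of (next state, policy's next action)
    rewards     : (N,) sampled rewards
    Returns weights w with Q(s, a) ~= phi(s, a) @ w.
    """
    A = phi_sa.T @ (phi_sa - gamma * phi_next_sa)
    b = phi_sa.T @ rewards
    # Small ridge term keeps A invertible for degenerate samples.
    return np.linalg.solve(A + 1e-8 * np.eye(A.shape[1]), b)
```

    Policy improvement then picks, in each sampled state, the action maximizing phi(s, a) @ w and repeats the evaluation until the weights stabilize.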

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide, over a wide field of view, the focused measurement capability that allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, and address issues such as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
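
    A minimal sketch of the uncertainty-based measurement selection idea in EKF terms: fixate the visible feature whose predicted innovation covariance has the largest volume. The matrices below are random placeholders rather than the stereo-head model from the paper.

```python
# Minimal sketch: pick the feature whose measurement is currently the
# most informative, i.e., with the largest predicted innovation volume.
import numpy as np

def select_feature(P, Hs, R):
    """Pick the feature index maximizing predicted innovation volume.

    P  : (n, n) joint state covariance (robot + map)
    Hs : list of (m, n) measurement Jacobians, one per visible feature
    R  : (m, m) measurement noise covariance
    """
    scores = [np.linalg.det(H @ P @ H.T + R) for H in Hs]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
P = 0.1 * np.eye(5)                                  # placeholder covariance
Hs = [rng.standard_normal((2, 5)) for _ in range(3)] # placeholder Jacobians
print(select_feature(P, Hs, 0.01 * np.eye(2)))
```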

    Simple yet stable bearing-only navigation

    This article describes a simple monocular navigation system for a mobile robot based on the map-and-replay technique. The presented method is robust, easy to implement, requires neither sensor calibration nor a structured environment, and its computational complexity is independent of the environment size. The method can navigate a robot while sensing only one landmark at a time, making it more robust than other monocular approaches. These properties allow even low-cost robots to act effectively in large outdoor and indoor environments with natural landmarks only. The basic idea is to use monocular vision to correct only the robot's heading, leaving distance measurements to the odometry. The heading correction itself can suppress the odometric error and prevent the overall position error from diverging. The influence of map-based heading estimation and odometric errors on the overall position uncertainty is examined, and we claim that, for closed polygonal trajectories, the position error of this type of navigation does not diverge. The claim is defended mathematically and experimentally: the method has been tested in a set of indoor and outdoor experiments, during which the average position errors stayed below 0.3 m for paths more than 1 km long.
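
    The heading-correction idea lends itself to a few concrete lines; the proportional gain and the pixel-offset interface below are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the map-and-replay heading correction: steer in
# proportion to the horizontal pixel offset between a landmark's current
# image position and the position remembered during the mapping run.
def heading_correction(u_mapped: float, u_seen: float,
                       gain: float = 0.002) -> float:
    """Return a steering command (rad) from one matched landmark.

    u_mapped : landmark's image column recorded on the mapping run
    u_seen   : landmark's image column in the current frame
    """
    # Odometry handles distance along the segment; vision only trims
    # heading, which is what keeps the position error bounded on
    # closed polygonal paths.
    return gain * (u_mapped - u_seen)

print(heading_correction(u_mapped=320.0, u_seen=355.0))  # steer left/right
```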

    Pause-and-Go Self-Balancing Formation Control of Autonomous Vehicles Using Vision and Ultrasound Sensors

    In this work, we implement a decentralized and noncooperative state estimation and control algorithm to autonomously balance a team of robots in a circular formation pattern. The group of robots includes a leader, periodically moving at a constant steering angle, and a set of followers that, leveraging only intermittent and noisy proximity measurements, independently run a fully decentralized state estimation and control algorithm to determine and adjust their relative positions with respect to their closest neighbors. The algorithm proceeds in a pause-and-go sequence: during the pause, each robot stops to gather and process the measurements, estimate its relative phase with respect to the others, and identify its closest pursuant; during the go, each robot accelerates to open the spacing from its closest pursuant and then moves at a constant speed once the desired spacing is achieved. The algorithm is tested in an unprecedented experiment on a custom-made, low-cost, caster-wheeled robotic framework featuring sonar and vision sensors mounted on a rotating platform to estimate the proximity distance to the closest neighbors. The control scheme, which requires no cooperation and copes with uncertain and intermittent sensor feedback, is shown to be effective in balancing the robots on the circle even when, at steady state, no feedback sensor data are available.
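
    A rough sketch of the pause-and-go logic under stated assumptions: the chord length measured to the closest pursuant is converted to a phase gap on a circle of known radius, with median filtering standing in for the paper's estimator; all constants are invented.

```python
# Minimal sketch of the pause-and-go spacing rule: during the pause,
# turn noisy chord-length readings into a phase-gap estimate; during
# the go, speed up until the desired gap to the pursuant is open.
import math
import statistics

def phase_gap(chord_samples, R):
    """Estimate angular separation from intermittent range samples."""
    d = statistics.median(chord_samples)   # robust to noise and dropouts
    d = min(d, 2 * R)                      # clamp physically invalid readings
    return 2 * math.asin(d / (2 * R))      # chord length -> phase angle

def go_speed(gap, desired_gap, v_cruise=0.2, v_fast=0.35):
    """During 'go': accelerate to open the gap, then hold cruise speed."""
    return v_fast if gap < desired_gap else v_cruise

samples = [0.48, 0.52, 0.50, 0.55]         # sonar/vision ranges (m), invented
gap = phase_gap(samples, R=1.0)
print(gap, go_speed(gap, desired_gap=2 * math.pi / 5))
```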