4,476 research outputs found

    Automated Mobile Robots under the Influence of Random Disturbances

    The dynamics of sensor-operated devices such as Automated Mobile Robots, and more generally of automated target-seeking devices, is studied in the presence of noise. We introduce a simple and analytically tractable class of dynamics that permits classifying, qualitatively and to some extent quantitatively, the approach to the targets when fluctuations corrupt the ideal trajectories. Our model constitutes a first evaluation of the feasibility of an efficient approach when the parameters of the model (statistics of the noise, lengths of the path and of the progressing steps, and heading velocity) are known.
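
    A minimal simulation sketch of the kind of noisy target-seeking dynamics described above, assuming a discrete-step model in which the heading toward the target is corrupted by Gaussian noise; the step size, noise level, and tolerance are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def noisy_approach(target, step=0.1, noise_std=0.2, max_steps=1000, tol=0.05, rng=None):
    """Simulate a target-seeking agent whose heading is corrupted by Gaussian noise.

    Returns the trajectory and the number of steps taken to come within `tol`
    of the target (or max_steps if it never does).
    """
    rng = np.random.default_rng() if rng is None else rng
    pos = np.zeros(2)
    traj = [pos.copy()]
    for k in range(max_steps):
        to_target = target - pos
        if np.linalg.norm(to_target) < tol:
            return np.array(traj), k
        heading = np.arctan2(to_target[1], to_target[0]) + rng.normal(0.0, noise_std)
        pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
        traj.append(pos.copy())
    return np.array(traj), max_steps

# Example: how much does heading noise lengthen the approach, on average?
target = np.array([1.0, 0.0])
print([noisy_approach(target, noise_std=s)[1] for s in (0.0, 0.3, 0.6)])
```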

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path-planning framework called SG-RL (Subgoal Graphs + Reinforcement Learning) that plans rational paths for agents maneuvering in continuous and uncertain environments. By "rational" we mean (1) efficient path planning that eliminates first-move lags, and (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works at two levels. At the first level, it uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, it uses an RL method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible, collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minimum-trap limitations faced by RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs or replan subgoal sequences with SSG, since LSPI can deal with uncertainties by exploiting its generalization ability to handle changes in the environment. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps, with relatively low action-switching frequencies and shorter path lengths, and that it copes with small changes in the environment. We further demonstrate that the design of reward functions and the types of training environments are important factors for learning feasible policies.
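
    A schematic sketch of the two-level structure described above, assuming hypothetical `plan_subgoals` (the geometric SSG step) and `local_policy` (the learned LSPI policy) callables; the names and interfaces are illustrative, not taken from the paper.

```python
from typing import Callable, List, Tuple

State = Tuple[float, float]   # (x, y); a real system would also carry heading, velocity, etc.

def sg_rl_execute(start: State,
                  goal: State,
                  plan_subgoals: Callable[[State, State], List[State]],
                  local_policy: Callable[[State, State], State],
                  reached: Callable[[State, State], bool],
                  max_steps_per_leg: int = 500) -> List[State]:
    """Two-level execution: a global subgoal sequence (SSG level) is tracked
    leg by leg by a learned local motion policy (LSPI level)."""
    subgoals = plan_subgoals(start, goal)        # level 1: abstract path / subgoal sequence
    trajectory = [start]
    state = start
    for sg in subgoals:                          # level 2: policy drives the agent between subgoals
        for _ in range(max_steps_per_leg):
            if reached(state, sg):
                break
            state = local_policy(state, sg)      # one kinematically feasible step toward sg
            trajectory.append(state)
    return trajectory
```

    Because the local policy generalizes across states, small environment changes are absorbed at the second level without recomputing the subgoal sequence, which is the division of labor the abstract describes.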

    Trust Repair in Human-Swarm Teams

    Swarm robots are coordinated via simple control laws to generate emergent behaviors such as flocking, rendezvous, and deployment. Human-swarm teaming has been widely proposed for scenarios such as human-supervised teams of unmanned aerial vehicles (UAVs) for disaster rescue, UAV and ground-vehicle cooperation for building security, and soldier-UAV teaming in combat. Effective cooperation requires an appropriate level of trust between the human and the swarm. When a UAV swarm is deployed in a real-world environment, its performance is subject to real-world factors such as system reliability and wind disturbances. Degraded performance of a robot can cause undesired swarm behaviors, decreasing human trust. This loss of trust, in turn, can trigger human intervention in the UAVs' task execution, which decreases cooperation effectiveness when the intervention is unwarranted. Therefore, to promote effective cooperation, we propose and test a trust-repairing method (Trust-repair) that restores swarm performance and human trust to an appropriate level by correcting undesired swarm behaviors. Faulty swarms caused by both external and internal factors were simulated to evaluate how well the Trust-repair algorithm repairs swarm performance and restores human trust. Results show that Trust-repair is effective in restoring trust to a level intermediate between the normal and faulty conditions.
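
    A toy sketch of the kind of trust dynamics such a study manipulates, assuming a simple performance-driven trust update and a repair event that restores degraded performance; the asymmetric update rule and all gains are illustrative assumptions, not the paper's model.

```python
def update_trust(trust: float, performance: float, expected: float = 1.0,
                 gain_up: float = 0.05, gain_down: float = 0.15) -> float:
    """Trust rises slowly when performance meets expectation and falls faster
    when it does not (an asymmetry commonly assumed in trust models)."""
    error = performance - expected
    rate = gain_up if error >= 0 else gain_down
    return min(1.0, max(0.0, trust + rate * error))

# Example: a fault at step 20 degrades swarm performance; a repair at step 40 restores it.
trust = 0.8
for step in range(60):
    performance = 1.0 if step < 20 or step >= 40 else 0.4   # fault window, then repair
    trust = update_trust(trust, performance)
print(round(trust, 3))  # trust partially recovers after the repair, not all the way back
```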

    Beyond Reynolds: A Constraint-Driven Approach to Cluster Flocking

    In this paper, we present an original set of flocking rules based on an ecologically inspired paradigm for the control of multi-robot systems. We translate these rules into a constraint-driven optimal control problem in which the agents minimize energy consumption subject to safety and task constraints. We prove several properties of the feasible space of the optimal control problem and show that velocity consensus is an optimal solution. We also motivate the inclusion of slack variables in constraint-driven problems when each agent can observe the global state only partially. Finally, we analyze the case where the communication topology is fixed and connected, and prove that our proposed flocking rules achieve velocity consensus.
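
    A heavily simplified sketch in the spirit of the constraint-driven idea: each agent applies a bounded, low-energy velocity-consensus control and strips any component that would push it further into a neighbor's safety radius. This is not the paper's optimal control formulation; the gains, safety radius, and actuation bound are illustrative.

```python
import numpy as np

def constrained_consensus_step(pos, vel, dt=0.1, k=1.0, d_safe=0.5, u_max=1.0):
    """One step of a simplified constraint-driven flocking update."""
    n = len(pos)
    u = np.zeros_like(vel)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # consensus term: steer toward the neighbors' mean velocity
        u_i = -k * (vel[i] - vel[others].mean(axis=0))
        u_i = np.clip(u_i, -u_max, u_max)                 # crude energy/actuation budget
        for j in others:                                  # crude safety constraint
            r = pos[i] - pos[j]
            if np.linalg.norm(r) < d_safe and np.dot(u_i, r) < 0:
                u_i -= np.dot(u_i, r) / np.dot(r, r) * r  # remove the approach component
        u[i] = u_i
    vel_next = vel + dt * u
    pos_next = pos + dt * vel_next
    return pos_next, vel_next

# Two agents heading toward each other are pulled toward a common velocity.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.5, 0.0], [-0.5, 0.0]])
pos, vel = constrained_consensus_step(pos, vel)
print(vel)
```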

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element of many tasks in everyday environments. A universal service robot that is employed to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers these four steps of reasoning within the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods with control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks, which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and to predict task quality from the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. Finally, the robot is able to compensate for possible disturbances as it plans additional recovery motions, effectively closing the cognitive control loop. Among others, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the opportunities that emerge with the availability of cognition-enabled service robots.
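
    A minimal sketch of the Cartesian impedance law that underlies compliant execution of this kind (not the dissertation's hierarchical whole-body controller): the commanded wrench follows a spring-damper relation around the desired tool motion, with stiffness and damping chosen per task. The gains and the wiping example below are illustrative.

```python
import numpy as np

def cartesian_impedance_wrench(x, x_des, xdot, xdot_des, K, D):
    """Spring-damper (impedance) law: wrench = K (x_des - x) + D (xdot_des - xdot).
    Lower stiffness yields more compliant contact, e.g. for wiping tasks."""
    return K @ (x_des - x) + D @ (xdot_des - xdot)

# Example: soft in the pressing direction (z), stiffer in the wiping plane (x, y).
K = np.diag([800.0, 800.0, 150.0])        # N/m, illustrative task-dependent stiffness
D = 2.0 * np.sqrt(K)                      # near-critical damping, assuming unit mass
x, x_des = np.zeros(3), np.array([0.0, 0.0, -0.005])      # press 5 mm into the surface
xdot, xdot_des = np.zeros(3), np.array([0.05, 0.0, 0.0])  # wipe along x at 5 cm/s
print(cartesian_impedance_wrench(x, x_des, xdot, xdot_des, K, D))
```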