7 research outputs found

    On Blocking Collisions between People, Objects and other Robots

    Full text link
    Intentional or unintentional contacts are bound to occur increasingly often as autonomous systems are deployed in human environments. In this paper, we devise methods to computationally predict imminent collisions between objects, robots, and people, and use an upper-body humanoid robot to block them if they are likely to happen. We employ statistical methods for effective collision prediction, followed by sensor-based trajectory generation and real-time control, to attempt to stop the likely collisions using the most favorable part of the blocking robot. We thoroughly investigate collisions in various types of experimental setups involving objects, robots, and people. Overall, the main contribution of this paper is to devise sensor-based prediction, trajectory generation, and control processes for highly articulated robots to prevent collisions against people, and to conduct numerous experiments to validate this approach.
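    The abstract does not spell out the prediction model. As a rough illustration of predicting an imminent collision from tracked positions, the sketch below assumes a constant-velocity fit over recent samples and a hypothetical protected sphere around the person; the paper's actual statistical method and parameters are not given here.

```python
import numpy as np

def fit_constant_velocity(times, positions):
    """Least-squares fit of a constant-velocity model p(t) = p0 + v * t
    to recent 3-D position samples (times: shape (N,), positions: shape (N, 3))."""
    A = np.column_stack([np.ones_like(times), times])        # design matrix [1, t]
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)   # rows: p0, v
    return coeffs[0], coeffs[1]

def time_to_collision(p0, v, center, radius):
    """Earliest t >= 0 at which the extrapolated point enters the sphere
    (center, radius), or None if the predicted path misses it."""
    d = p0 - center
    a = float(v @ v)
    b = 2.0 * float(d @ v)
    c = float(d @ d) - radius ** 2
    if a < 1e-9:                                  # object (nearly) at rest
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                                # line of motion misses the sphere
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

# Example: an object tracked for 0.5 s, flying toward a person's chest at ~3 m/s.
t = np.linspace(0.0, 0.5, 6)
p = np.array([[3.5 - 3.0 * ti, 0.1, 1.0] for ti in t])
p0, v = fit_constant_velocity(t - t[-1], p)       # fit relative to "now"
ttc = time_to_collision(p0, v, center=np.array([0.0, 0.0, 1.0]), radius=0.4)
if ttc is not None and ttc < 1.0:                 # hypothetical reaction horizon
    print(f"collision predicted in {ttc:.2f} s -> trigger blocking motion")
```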

    Cognitive Task Planning for Smart Industrial Robots

    Get PDF
    This research presents a novel Cognitive Task Planning framework for Smart Industrial Robots. The framework makes an industrial mobile manipulator robot cognitive by applying Semantic Web Technologies. It also introduces a novel Navigation Among Movable Obstacles (NAMO) algorithm for robots navigating and manipulating inside a factory. The objective of Industrie 4.0 is the creation of Smart Factories: modular plants equipped with cyber-physical systems able to heavily customize products under conditions of highly flexible mass production. Such systems should communicate and cooperate in real time with each other and with humans via the Internet of Things. They should intelligently adapt to changing surroundings and autonomously navigate inside a factory while moving obstacles that occlude free paths, even if encountered for the first time. Finally, in order to accomplish all these tasks efficiently, they should learn from their own actions and from those of other agents. Most existing industrial mobile robots navigate along pre-generated trajectories: they follow electrified wires embedded in the ground or lines painted on the floor. When no environment changes are expected and cycle times are critical, this kind of planning is adequate. When workspaces and tasks change frequently, it is better to plan dynamically: robots should navigate autonomously without relying on modifications of their environment. Consider human behavior: humans reason about the environment and consider moving obstacles if a certain goal cannot be reached otherwise or if moving objects may significantly shorten the path to it. This problem is named Navigation Among Movable Obstacles and is mostly known in rescue robotics. This work transposes the problem to an industrial scenario and addresses its two main challenges: the high dimensionality of the state space and the treatment of uncertainty. The proposed NAMO algorithm aims to focus exploration on less explored areas. For this reason it extends the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration algorithm. The extension does not impose obstacle avoidance: it assigns an importance to each cell by combining the effort necessary to reach it and the effort needed to free it from obstacles. The resulting algorithm is scalable because it is independent of the size of the map and of the number, shape, and pose of obstacles. It does not impose restrictions on the actions to be performed: the robot can both push and grasp every object. Currently, the algorithm assumes full world knowledge, but the environment is reconfigurable and the algorithm can easily be extended to solve NAMO problems in unknown environments. The algorithm handles sensor feedback and corrects uncertainties. Robotics usually treats Motion Planning and Manipulation as separate problems; NAMO forces their combined treatment by introducing the need to manipulate multiple, often unknown, objects while navigating. Adopting standard precomputed grasps is not sufficient to deal with the large variety of existing objects. A Semantic Knowledge Framework is proposed in support of the algorithm, giving robots the ability to learn to manipulate objects and to disseminate the information gained during the fulfillment of tasks. The Framework is composed of an Ontology and an Engine. The Ontology extends the IEEE Standard Ontologies for Robotics and Automation and contains descriptions of learned manipulation tasks and detected objects. It is accessible from any robot connected to the Cloud. It can be considered a data store for the efficient and reliable execution of repetitive tasks, and a Web-based repository for the exchange of information between robots and for speeding up the learning phase. No other manipulation ontology exists that respects the IEEE Standard and, regardless of the standard, the proposed ontology differs from existing ones in the type of features saved and the efficient way in which they can be accessed: through a very fast Cascade Hashing algorithm. The Engine computes and stores manipulation actions when they are not present in the Ontology. It is based on Reinforcement Learning techniques that avoid massive training on large-scale databases and favor human-robot interaction. The overall system is flexible and easily adaptable to different robots operating in different industrial environments. It is characterized by a modular structure in which each software block is completely reusable. Every block is based on the open-source Robot Operating System (ROS). Not all industrial robot controllers are designed to be ROS-compliant; this thesis also presents the method adopted during this research to open industrial robot controllers and create a ROS-Industrial interface for them.
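    The exact weighting used to rank exploration cells is not given in the abstract. The following minimal sketch only illustrates the stated idea of combining the effort to reach a cell with the effort to clear it of obstacles; the cost terms, weights, and numbers are assumptions, not the thesis implementation.

```python
import heapq

def cell_importance(reach_cost, clear_cost, coverage, w_reach=1.0, w_clear=2.0):
    """Hypothetical scoring: cells that are cheap to reach, cheap to clear of
    obstacles, and rarely visited come first, biasing exploration toward
    less explored regions of the state space."""
    return (w_reach * reach_cost + w_clear * clear_cost) * (1 + coverage)

# Candidate cells as (effort to reach, effort to clear, times already visited);
# the numbers are illustrative only.
candidates = [(1.0, 0.0, 3), (2.0, 0.5, 0), (4.0, 0.5, 1)]

frontier = []
for reach, clear, visits in candidates:
    heapq.heappush(frontier, (cell_importance(reach, clear, visits), reach, clear, visits))

score, reach, clear, visits = heapq.heappop(frontier)   # cell expanded next
print(f"expand cell with importance {score:.1f} (reach={reach}, clear={clear}, visits={visits})")
```

    With these illustrative values the unexplored cell wins over a slightly cheaper but already well-visited one, which is the behavior the abstract describes.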

    Navigation among movable obstacles in unknown environments

    Get PDF
    This work presents a new class of algorithms that extend the domain of Navigation Among Movable Obstacles (NAMO) to unknown environments. Efficient real-time algorithms for solving NAMO problems even when no initial environment information is available to the robot are presented and validated. The algorithms yield optimal solutions and are evaluated for real-time performance on a series of simulated domains with more than 70 obstacles. In contrast to previous NAMO algorithms that required a pre-specified environment model, this work considers the realistic domain where the robot is limited by its sensor range. It must navigate to a goal position in an environment of static and movable objects. The robot can move objects if the goal cannot be reached otherwise or if moving the object significantly shortens the path. The robot gains information about the world by bringing distant objects into its sensor range. The first practical planner for this exponentially complex domain is presented. The planner reduces the search space through a collection of techniques, such as upper-bound calculations and the maintenance of sorted lists with underestimates. Further, the algorithm only considers manipulation actions if these actions create a new opening in the environment. In addition to the evaluation of the planner itself, each of these techniques is also validated independently. M.S. thesis. Committee Chair: Mike Stilman; Committee Member: Henrik Christensen; Committee Member: Irfan Ess
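    As a rough illustration of the pruning pattern described above (upper-bound calculations, sorted lists of underestimates, and manipulation actions considered only when they create a new opening), the sketch below shows a generic best-first search with those ingredients; the node representation, cost functions, and action naming are placeholders, not the thesis code.

```python
import heapq
import itertools

def namo_best_first(start, goal, successors, underestimate, creates_opening):
    """Best-first search sketch over (robot pose, world state) nodes.

    successors(node)         -> iterable of (action, next_node, step_cost)
    underestimate(node)      -> admissible lower bound on remaining cost
    creates_opening(a, node) -> True if manipulation action `a` opens a new passage
    """
    tie = itertools.count()                       # tie-breaker for the heap
    best_upper = float("inf")                     # cost of the best complete plan so far
    best_plan = None
    frontier = [(underestimate(start), next(tie), 0.0, start, [])]
    while frontier:
        optimistic, _, cost, node, plan = heapq.heappop(frontier)
        if optimistic >= best_upper:              # even the underestimate is too expensive
            continue
        if node == goal:
            best_upper, best_plan = cost, plan    # tighten the upper bound
            continue
        for action, nxt, step in successors(node):
            # Only consider manipulation actions that create a new opening.
            if action.startswith("move_") and not creates_opening(action, node):
                continue
            new_cost = cost + step
            heapq.heappush(frontier,
                           (new_cost + underestimate(nxt), next(tie),
                            new_cost, nxt, plan + [action]))
    return best_plan, best_upper
```

    Keeping the frontier sorted by cost-plus-underestimate and discarding nodes whose optimistic cost already exceeds the best known plan is what keeps the exponentially large search space manageable.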

    Navigation Among Movable Obstacles in unknown environments

    No full text
    This paper explores the Navigation Among Movable Obstacles (NAMO) problem in an unknown environment. We consider the realistic scenario in which the robot has to navigate to a goal position in an unknown environment consisting of static and movable objects. The robot may move objects if the goal cannot be reached otherwise or if moving the object may significantly shorten the path to the goal. We consider real situations in which the robot only has limited sensing information and where action selection can therefore only be based on the partial knowledge of the environment acquired up to that point. This paper introduces an algorithm that significantly reduces the calculations necessary to accomplish this task compared to a direct approach. We present an efficient implementation for the case of planar, axis-aligned environments and report experimental results on challenging scenarios with more than 50 objects.
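    The decision rule below is only a hypothetical reading of the stated criterion (move an object if the goal is otherwise unreachable, or if clearing it significantly shortens the path); the threshold and cost terms are assumptions, not taken from the paper.

```python
def should_move_obstacle(detour_length, cleared_path_length, manipulation_cost,
                         shortcut_factor=1.5):
    """Hypothetical reading of the manipulation criterion.

    detour_length:        length of the best path that avoids the obstacle,
                          or None if no such path exists
    cleared_path_length:  path length assuming the obstacle has been moved
    manipulation_cost:    estimated effort to push/grasp the obstacle aside
    shortcut_factor:      assumed threshold for a 'significant' shortcut
    """
    if detour_length is None:                     # goal unreachable otherwise
        return True
    return detour_length > shortcut_factor * (cleared_path_length + manipulation_cost)

# The detour around a box is 12 m; clearing it yields a 4 m path for ~2 m of effort.
print(should_move_obstacle(12.0, 4.0, 2.0))       # -> True: worth moving the box
```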

    Reactive Planning With Legged Robots In Unknown Environments

    Get PDF
    Unlike the problem of safe task and motion planning in a completely known environment, the setting where the obstacles in a robot's workspace are not initially known and are incrementally revealed online has so far received little theoretical interest, with existing algorithms usually demanding constant deliberative replanning in the presence of unanticipated conditions. Moreover, even though recent advances show that legged platforms are becoming better at traversing rough terrain, legged robots are still mostly used as locomotion research platforms, with applications restricted to domains where interaction with the environment is usually not needed and is actively avoided. In order to accomplish challenging tasks with such highly dynamic robots in unexplored environments, this research suggests, with formal arguments and empirical demonstration, the effectiveness of a hierarchical control structure that we believe is the first provably correct deliberative/reactive planner to engage an unmodified general-purpose mobile manipulator in physical rearrangements of its environment. To this end, we develop the mobile manipulation maneuvers to accomplish each task at hand, successfully anchor the useful kinematic unicycle template to control our legged platforms, and integrate perceptual feedback with low-level control to coordinate each robot's movement. At the same time, this research builds toward a useful abstraction for task planning in unknown environments, and provides an avenue for incorporating partial prior knowledge within a deterministic framework well suited to existing vector field planning methods, by exploiting recent developments in semantic SLAM and in object pose and triangular mesh extraction using convolutional neural network architectures. Under specific sufficient conditions, formal results guarantee collision avoidance and convergence to designated (fixed or slowly moving) targets, both for a single robot and for a robot gripping and manipulating objects, in previously unexplored workspaces cluttered with non-convex obstacles. We encourage the application of our methods by providing accompanying software with open-source implementations of our algorithms.
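    The abstract mentions anchoring a kinematic unicycle template and building on vector field planning. The sketch below maps a generic attractive/repulsive planar field to unicycle commands as a rough illustration of that control pattern; it is not the paper's provably correct construction, and the gains, influence radius, and field shape are assumptions.

```python
import numpy as np

def reactive_unicycle_cmd(pose, goal, obstacles, k_att=1.0, k_rep=0.5,
                          influence=1.0, k_v=0.8, k_w=2.0):
    """Map an attractive/repulsive planar vector field to unicycle commands.

    pose      = (x, y, theta); goal = (x, y)
    obstacles = list of sensed discs (x, y, radius)
    Returns (v, omega). Generic potential-field sketch, not the paper's planner.
    """
    x, y, theta = pose
    p = np.array([x, y])
    field = k_att * (np.array(goal) - p)               # attraction toward the goal
    for ox, oy, r in obstacles:
        d_vec = p - np.array([ox, oy])
        d = np.linalg.norm(d_vec) - r                   # distance to the disc boundary
        if 0.0 < d < influence:                         # repulsion inside the influence zone
            field += k_rep * (1.0 / d - 1.0 / influence) * d_vec / (d ** 2)
    desired_heading = np.arctan2(field[1], field[0])
    heading_err = np.arctan2(np.sin(desired_heading - theta),
                             np.cos(desired_heading - theta))
    v = k_v * np.linalg.norm(field) * max(np.cos(heading_err), 0.0)  # slow down when misaligned
    omega = k_w * heading_err
    return v, omega

# One control step: robot at the origin facing +x, goal ahead, one obstacle to the side.
print(reactive_unicycle_cmd((0.0, 0.0, 0.0), (3.0, 0.0), [(1.5, 0.6, 0.3)]))
```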