820 research outputs found

    Planning manipulation movements of a dual-arm system considering obstacle removing

    Get PDF
    The paper deals with the problem of planning the movements of two hand-arm robotic systems, considering the possibility of using the robot hands to remove potential obstacles in order to obtain free access to grasp a desired object. The approach is based on a variation of a Probabilistic Road Map that does not rule out samples implying collisions with removable objects but instead classifies them according to the collided obstacle(s), allowing the search for free paths together with an indication of which objects must be removed from the workspace to make the path actually valid; we call it Probabilistic Road Map with Obstacles (PRMwO). The proposed system includes a task assignment module that distributes the task among the robots, using for that purpose a precedence graph built from the results of the PRMwO. The approach has been implemented on a real dual-arm robotic system, and some simulated and real running examples are presented in the paper. (C) 2014 Elsevier B.V. All rights reserved.
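    The core PRMwO idea lends itself to a short sketch: samples and edges that collide only with removable objects are kept and labelled with the offending object(s), and the query searches for the path requiring the fewest removals. The Python sketch below is illustrative only; the sampling, distance, and collision interfaces are assumptions rather than the authors' implementation (which also handles task assignment via a precedence graph), and start/goal are assumed to be node indices already inserted into the roadmap.

```python
# Minimal PRMwO-style sketch: keep samples/edges blocked only by removable
# objects, label them, and search for the path needing the fewest removals.
import heapq
import itertools

class PRMwO:
    def __init__(self, sample_fn, collisions_fn, removable_ids, dist_fn, k=10):
        # collisions_fn(q1, q2) -> set of obstacle ids hit while moving from q1
        # to q2 with the local planner (call with q1 == q2 to check a sample).
        self.sample_fn = sample_fn
        self.collisions = collisions_fn
        self.removable = set(removable_ids)
        self.dist = dist_fn
        self.k = k
        self.nodes, self.edges = [], {}

    def build(self, n_samples):
        # Keep samples that are free or collide only with removable objects.
        for _ in range(n_samples):
            q = self.sample_fn()
            if self.collisions(q, q) - self.removable:
                continue                      # touches a fixed obstacle: discard
            self.nodes.append(q)
        # Connect each node to its k nearest neighbours and label every edge
        # with the removable objects blocking it (empty set = free edge).
        for i, q in enumerate(self.nodes):
            nbrs = sorted(range(len(self.nodes)),
                          key=lambda j: self.dist(q, self.nodes[j]))[1:self.k + 1]
            for j in nbrs:
                blocking = self.collisions(q, self.nodes[j])
                if not blocking - self.removable:
                    self.edges.setdefault(i, []).append((j, frozenset(blocking)))

    def query(self, start, goal):
        # Dijkstra over (node, removed-objects) states, minimizing the number
        # of distinct objects that must be removed to make the path valid.
        tie = itertools.count()
        frontier = [(0, next(tie), start, frozenset(), [start])]
        seen = set()
        while frontier:
            _, _, i, removed, path = heapq.heappop(frontier)
            if i == goal:
                return path, removed          # waypoint indices + objects to remove
            if (i, removed) in seen:
                continue
            seen.add((i, removed))
            for j, blocking in self.edges.get(i, []):
                nxt = removed | blocking
                heapq.heappush(frontier, (len(nxt), next(tie), j, nxt, path + [j]))
        return None, None
```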

    Manipulation Planning for Forceful Human-Robot-Collaboration

    Get PDF
    This thesis addresses the problem of manipulation planning for forceful human-robot collaboration. In particular, the focus is on the scenario where a human applies a sequence of changing external forces through forceful operations (e.g. cutting a circular piece off a board) on an object that is grasped by a cooperative robot. We present a range of planners that 1) enable the robot to stabilize and position the object under the human-applied forces by exploiting supports from both the object-robot and object-environment contacts; 2) improve task efficiency by minimizing the configuration and grasp changes required by the changing external forces; and 3) improve human comfort during the forceful interaction by optimizing defined comfort criteria.

    We first focus on the case of using only robotic grasps, where the robot is expected to grasp/regrasp the object multiple times to keep it stable under the changing external forces. We introduce a planner that can generate an efficient manipulation plan by intelligently deciding when the robot should change its grasp on the object as the human applies the forces, and by choosing subsequent grasps so that they minimize the number of regrasps required in the long term. The planner searches for such an efficient plan by first finding a minimal sequence of grasp configurations that are able to keep the object stable under the changing forces, and then generating connecting trajectories to switch between the planned configurations, i.e. planning regrasps. We perform the search for such a grasp (configuration) sequence by sampling stable configurations for the external forces, building an operation graph from these stable configurations, and then searching the operation graph to minimize the number of regrasps. We also solve the problem of bimanual regrasp planning under the assumption of no support surface, enabling the robot to regrasp an object in the air by finding intermediate configurations at which both the bimanual and unimanual grasps can hold the object stable under gravity. We present a variety of experiments showing the performance of our planner, particularly in minimizing the number of regrasps for forceful manipulation tasks and in planning stable regrasps.

    We then explore the problem of using both the object-environment contacts and the object-robot contacts, which enlarges the set of stable configurations and thus boosts the robot's capability to stabilize the object under external forces. We present a planner that can intelligently exploit the stabilization capabilities of the environment and the robot within a unified planning framework to search for a minimal number of stable contact configurations. A major computational bottleneck in this planner is the static stability analysis of a large number of candidate configurations. We introduce a containment relation between different contact configurations to efficiently prune the stability-checking process. We present a set of real-robot and simulated experiments illustrating the effectiveness of the proposed framework, together with a detailed analysis of the proposed containment relation, particularly its role in improving planning efficiency.

    We further present a planning algorithm to improve the cooperative robot behaviour with respect to human comfort during the forceful interaction. In particular, we are interested in empowering the robot with the capability of grasping and positioning the object not only to ensure object stability against the human-applied forces, but also to improve human experience and comfort during the interaction. We address human comfort as the muscular activation level required to apply a desired external force, together with human spatial perception, i.e. the so-called peripersonal-space comfort. We propose to maximize both comfort metrics to optimize the robot and object configuration so that the human can perform a forceful operation comfortably. We present a set of human-robot drilling and cutting experiments which verify the efficiency of the proposed metrics in improving the overall comfort and HRI experience without compromising force stability.

    In addition to the above planning work, we present a conic formulation that approximates the distribution of a forceful operation in the wrench space with a polyhedral cone, enabling the planner to efficiently assess the stability of a system configuration even in the presence of the force uncertainties inherent in human-applied forceful operations. We also develop a graphical user interface with which human users can easily specify various forceful tasks, i.e. sequences of forceful operations on selected objects, in an interactive manner. The user interface ties together human task specification, on-demand manipulation planning, and robot-assisted fabrication. We present a set of human-robot experiments using the interface that demonstrate the feasibility of our system.

    In short, in this thesis we present a series of planners for object manipulation under changing external forces. We show that object contacts with the robot and the environment enable the robot to manipulate an object under external forces, and that making the most of these contacts can eliminate redundant changes during manipulation, e.g. regrasps, thus improving task efficiency and smoothness. We also show the necessity of optimizing human comfort when planning forceful human-robot manipulation tasks. We believe the work presented here can be a key component of a human-robot collaboration framework.
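    The grasp-sequence step of the first planner above can be sketched compactly: given an ordered list of forceful operations and a pool of sampled candidate grasps, repeatedly pick a grasp that stays stable for the longest run of subsequent operations, which keeps the number of regrasps low. This is a simplified greedy stand-in for the operation-graph search described in the thesis; the stability test and candidate grasp set are assumed interfaces.

```python
# Greedy sketch: cover a sequence of forceful operations with as few grasp
# changes as possible. is_stable(grasp, operation) is an assumed predicate.
def plan_grasp_sequence(operations, candidate_grasps, is_stable):
    """Return [(grasp, first_op_index), ...] covering all operations."""
    plan, i = [], 0
    while i < len(operations):
        best_grasp, best_end = None, i
        for g in candidate_grasps:
            # How many consecutive operations, starting at i, can g stabilize?
            j = i
            while j < len(operations) and is_stable(g, operations[j]):
                j += 1
            if j > best_end:
                best_grasp, best_end = g, j
        if best_grasp is None:
            raise ValueError(f"no candidate grasp stabilizes operation {i}")
        plan.append((best_grasp, i))
        i = best_end
    return plan
```

    For consecutive coverage this longest-run greedy choice minimizes the number of grasp changes, although it ignores the cost of the connecting regrasp trajectories that the thesis also plans.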

    Collision-Free Humanoid Reaching: Past, Present and Future

    Get PDF

    Offline and Online Planning and Control Strategies for the Multi-Contact and Biped Locomotion of Humanoid Robots

    Get PDF
    In the past decades, research on humanoid robots has made progress toward accomplishing exceptionally dynamic and agile motions. Starting from the DARPA Robotics Challenge in 2015, humanoid platforms have been successfully employed to perform more and more challenging tasks, with the eventual aim of assisting or replacing humans in hazardous and stressful working situations. However, the deployment of these complex machines in realistic domestic and working environments still represents a high-level challenge for robotics. Such environments are characterized by unstructured and cluttered settings with continuously varying conditions due to the dynamic presence of humans and other mobile entities, which can not only compromise the operation of the robotic system but can also pose severe risks both to the people around it and to the robot itself due to unexpected interactions and impacts. The ability to react to these unexpected interactions is therefore a paramount requirement for enabling the robot to adapt its behavior to the needs of the task and the characteristics of the environment. Furthermore, the capability to move in a complex and varying environment is an essential skill for a humanoid robot in the execution of any task. Indeed, human instructions may often require the robot to move and reach a desired location, e.g., for bringing an object or for inspecting a specific place of an infrastructure. In this context, a flexible and autonomous walking behavior is an essential skill, the study of which represents one of the main topics of this thesis, considering disturbances and infeasibilities coming both from the environment and from the dynamic obstacles that populate realistic scenarios.

    Locomotion planning strategies are still an open topic in humanoid and legged robot research and can be classified into sample-based and optimization-based planning algorithms. The former explore the configuration space, finding a feasible path between the start and goal configurations of the robot with different logic depending on the algorithm. They suffer from a high computational cost that often makes their online implementation difficult, if not impossible, but, compared to their optimization-based counterparts, they do not need any simplification of the environment or the robot to find a solution, and they are probabilistically complete, meaning that a feasible solution is certain to be found if at least one exists. The goal of this thesis is to merge the two approaches in a coupled offline-online planning framework: an offline global trajectory is generated with a sample-based approach to cope with cluttered and complex environments of any kind, and it is then locally refined online during execution using a faster optimization-based algorithm that is better suited to an online implementation. The performance of the offline planner is improved by planning in the robot contact space instead of the whole-body configuration space, which requires an algorithm that maps between the two state spaces.

    The framework proposes a methodology to generate whole-body trajectories for the motion of humanoid and legged robots in realistic and dynamically changing environments. This thesis focuses on the design and testing of each component of this planning framework, whose validation is carried out on the real robotic platforms CENTAURO and COMAN+ in various loco-manipulation task scenarios.
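    The coupled offline-online scheme can be summarized in a few lines: a sample-based planner produces a global plan once, offline, and a faster optimization-based solver refines a receding window of it during execution. The sketch below is schematic, under assumed interfaces (sample_based_planner, local_optimizer, get_robot_state, execute); it is not the thesis framework itself.

```python
# Schematic coupled offline-online loop: plan globally once (slow, sample-
# based), then refine and execute a sliding window online (fast, optimization-
# based). All callables are assumed interfaces for illustration.
def offline_online_locomotion(start, goal, environment,
                              sample_based_planner, local_optimizer,
                              get_robot_state, execute, window=5):
    # Offline: global plan in the contact space (run once).
    global_plan = sample_based_planner(start, goal, environment)

    # Online: refine a local window of the plan at every step and execute it.
    i = 0
    while i < len(global_plan):
        segment = global_plan[i:i + window]
        state = get_robot_state()                               # current whole-body state
        refined = local_optimizer(state, segment, environment)  # fast local refinement
        execute(refined[0])                                     # apply the first refined step
        i += 1
```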

    MaestROB: A Robotics Framework for Integrated Orchestration of Low-Level Control and High-Level Reasoning

    Full text link
    This paper describes a framework called MaestROB. It is designed to make robots perform complex tasks with high precision from simple high-level instructions given by natural language or demonstration. To realize this, it handles a hierarchical structure by using knowledge stored in the form of an ontology and rules for bridging between different levels of instructions. Accordingly, the framework has multiple layers of processing components: perception and actuation control at the low level; a symbolic planner and Watson APIs for cognitive capabilities and semantic understanding; and, at its core, the orchestration of these components by a new open-source robot middleware called Project Intu. We show how this framework can be used in a complex scenario where multiple actors (a human, a communication robot, and an industrial robot) collaborate to perform a common industrial task. A human teaches an assembly task to Pepper (a humanoid robot from SoftBank Robotics) using natural language conversation and demonstration. Our framework helps Pepper perceive the human demonstration and generate a sequence of actions for UR5 (a collaborative robot arm from Universal Robots), which ultimately performs the assembly (e.g. insertion) task.
    Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018. Video: https://www.youtube.com/watch?v=19JsdZi0TW
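    The layered structure described above can be pictured with a small schematic: a semantic layer turns an instruction into a goal, a symbolic planner expands the goal into actions, and each action is dispatched to a low-level controller. All names in this sketch are hypothetical and do not correspond to the MaestROB or Project Intu APIs.

```python
# Hypothetical schematic of layered orchestration: language understanding,
# symbolic planning, and low-level skill execution wired together.
class Orchestrator:
    def __init__(self, semantic_parser, symbolic_planner, controllers):
        self.parse = semantic_parser          # natural language -> goal predicate
        self.plan = symbolic_planner          # (world state, goal) -> symbolic actions
        self.controllers = controllers        # action name -> low-level skill object

    def run(self, instruction, world_state):
        goal = self.parse(instruction)
        for action in self.plan(world_state, goal):
            skill = self.controllers[action.name]
            skill.execute(action.parameters)  # perception/actuation at the low level
```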

    Hierarchical Experience-informed Navigation for Multi-modal Quadrupedal Rebar Grid Traversal

    Full text link
    This study focuses on a layered, experience-based, multi-modal contact planning framework for agile quadrupedal locomotion over a constrained rebar environment. To this end, our hierarchical planner incorporates locomotion-specific modules into the high-level contact sequence planner and solves kinodynamically-aware trajectory optimization as the low-level motion planner. Through quantitative analysis of the experience accumulation process and experimental validation of the kinodynamic feasibility of the generated locomotion trajectories, we demonstrate that the experience planning heuristic offers an effective way of providing candidate footholds for a legged contact planner. Additionally, we introduce a guiding torso path heuristic at the global planning level to enhance the navigation success rate in the presence of environmental obstacles. Our results indicate that torso-path-guided experience accumulation requires significantly fewer offline trials to successfully reach the goal compared to regular experience accumulation. Finally, our planning framework is validated in both dynamics simulations and real hardware implementations on a quadrupedal robot provided by Skymul Inc.
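    The experience-planning heuristic can be illustrated with a small sketch: footholds from successful offline trials are stored under a discretized torso-pose key and proposed again, after a validity check, along the guiding torso path. The keying scheme and interfaces below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative experience database for foothold proposals along a torso path.
from collections import defaultdict

class ExperienceDB:
    def __init__(self, resolution=0.05):
        self.res = resolution
        self.table = defaultdict(list)        # discretized torso pose -> foothold sets

    def _key(self, torso_pose):
        return tuple(round(v / self.res) for v in torso_pose)

    def record(self, torso_pose, footholds):
        # Store footholds from a successful offline trial.
        self.table[self._key(torso_pose)].append(footholds)

    def candidates(self, torso_path, is_valid_foothold):
        # Walk the guiding torso path and propose previously successful,
        # still-valid footholds for the low-level contact planner.
        proposals = []
        for pose in torso_path:
            for footholds in self.table.get(self._key(pose), []):
                valid = [f for f in footholds if is_valid_foothold(f)]
                if valid:
                    proposals.append((pose, valid))
        return proposals
```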

    Control strategies for cleaning robots in domestic applications: A comprehensive review

    Get PDF
    Service robots are built and developed for various applications to support humans as companions, caretakers, or domestic support. As the number of elderly people grows, service robots will be in increasing demand. In particular, one of the main tasks performed by elderly people, and others, is the complex task of cleaning. Therefore, cleaning tasks such as sweeping floors, washing dishes, and wiping windows have been developed for the domestic environment using service robots or robot manipulators with several control approaches. This article is primarily focused on the control methodologies used for cleaning tasks. Specifically, this work mainly discusses classical control and learning-based control methods. The classical control approaches, which consist of position control, force control, and impedance control, are commonly used for cleaning purposes in highly controlled environments. However, classical control methods cannot be generalized to cluttered environments, so learning-based control methods can be an alternative solution. Learning-based control methods for cleaning tasks encompass three approaches: learning from demonstration (LfD), supervised learning (SL), and reinforcement learning (RL). These control approaches have their own capabilities to generalize cleaning tasks to new environments. For example, LfD, which many research groups have used for cleaning tasks, can generate complex cleaning trajectories based on human demonstration. SL can support the prediction of dirt areas and cleaning motions using large datasets. Finally, RL allows the robot itself to learn cleaning actions and to interact with a new environment. In this context, this article aims to provide a general overview of robotic cleaning tasks based on different types of control methods using manipulators. It also suggests future directions for cleaning tasks based on the evaluation of the control approaches
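    As a concrete example of the classical approaches mentioned above, a Cartesian impedance controller makes the end-effector behave like a mass-spring-damper around a desired wiping trajectory, yielding compliant contact with the surface. The sketch below uses illustrative gains and interfaces that are not taken from the review.

```python
# Minimal Cartesian impedance law: M (x_dd - x_des_dd) + D (x_d - x_des_d)
# + K (x - x_des) = f_ext, solved for the commanded acceleration with
# x_des_dd = 0 for simplicity. Gains are illustrative placeholders.
import numpy as np

def impedance_accel(x, x_dot, x_des, x_des_dot, f_ext,
                    M=np.eye(3), D=40.0 * np.eye(3), K=400.0 * np.eye(3)):
    e = x - x_des
    e_dot = x_dot - x_des_dot
    return np.linalg.solve(M, f_ext - D @ e_dot - K @ e)

# Example: a measured contact force pushes the wiping motion compliantly
# away from the nominal path instead of fighting it rigidly.
x = np.array([0.50, 0.10, 0.02])
x_dot = np.zeros(3)
x_des = np.array([0.50, 0.10, 0.00])
f_ext = np.array([0.0, 0.0, 5.0])      # normal force from the surface
x_ddot = impedance_accel(x, x_dot, x_des, np.zeros(3), f_ext)
```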