24,638 research outputs found

    Human control strategies for multi-robot teams

    The challenge of expanding the human span of control to teams of robots presents an obstacle to the wider deployment of robots for practical tasks in a variety of areas. One difficulty is that many different types of human interaction may be necessary to maintain and control a robot team. We have developed a taxonomy of human-robot tasks, based on complexity of control, that helps explicate the forms of control likely to be needed and the demands they pose to human operators. In this paper we use research from two of these areas to illustrate our taxonomy and its utility in characterizing and improving human-robot interaction.

    A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks

    Exploiting interaction with the environment is a promising and powerful way to enhance the stability and robustness of humanoid robots while executing locomotion and manipulation tasks. Recently, some works have started to show advances in this direction, considering humanoid locomotion with multi-contacts, but to fully develop such abilities in a more autonomous way, we first need to understand and classify the variety of poses a humanoid robot can adopt to balance. To this end, we propose adapting a successful idea widely used in the field of robot grasping to the field of humanoid balance with multi-contacts: a whole-body pose taxonomy classifying the set of whole-body robot configurations that use the environment to enhance stability. We have revised the classification criteria used to develop grasping taxonomies, focusing on structuring and simplifying the large number of possible poses the human body can adopt. We propose a taxonomy of 46 poses in three main categories, considering the number and type of supports as well as possible transitions between poses. The taxonomy induces a classification of motion primitives based on the pose used for support, and a set of rules to store and generate new motions. We present preliminary results that apply known segmentation techniques to motion data from the KIT whole-body motion database. Using motion capture data with multi-contacts, we can identify support poses, providing a segmentation that can distinguish between the locomotion and manipulation parts of an action.
    Comment: 8 pages, 7 figures, 1 table with a full-page landscape figure; 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems
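    The support-pose idea above can be sketched in a few lines: classify each frame of a motion by its set of active contacts, and start a new segment whenever the category changes. This is a minimal sketch under assumed contact labels and category names, not the paper's actual 46-pose taxonomy.

```python
# Hypothetical sketch of support-pose classification and segmentation.
# Contact labels and category names are illustrative assumptions.

def classify_support_pose(contacts):
    """Map a set of active contact links to a coarse taxonomy category."""
    feet = {c for c in contacts if c in ("left_foot", "right_foot")}
    hands = {c for c in contacts if c in ("left_hand", "right_hand")}
    if feet and not hands:
        return "standing"                    # foot supports only
    if feet and hands:
        return "standing_with_hand_support"  # environment used for balance
    if hands and not feet:
        return "hanging_or_crawling"
    return "unsupported"                     # e.g. flight phase

def segment_by_support(contact_sequence):
    """A new segment starts whenever the support-pose category changes."""
    segments = []
    for t, contacts in enumerate(contact_sequence):
        label = classify_support_pose(contacts)
        if not segments or segments[-1][0] != label:
            segments.append((label, t))
    return segments
```

    A manipulation phase would then show up as a run of hand-support poses between two locomotion (feet-only) segments.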

    A taxonomy of preferences for physically assistive robots

    Assistive devices and technologies are becoming common, and some commercial products are starting to become available. However, deploying robots able to physically interact with a person in an assistive manner is still a challenging problem. Beyond design and control, the robot must be able to adapt to the user it is attending in order to become a useful tool for caregivers. This behavior adaptation comes through the definition of user preferences for the task, so that the robot can act in the user's desired way. This article presents a taxonomy of user preferences for assistive scenarios, including physical interactions, that may be used to improve robot decision-making algorithms. The taxonomy categorizes the preferences based on their semantics and possible uses. We propose a categorization along two levels of application (global and specific) as well as two types (primary and modifier). Examples of real preference classifications are presented for three assistive tasks: feeding, shoe fitting and coat dressing.
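    The two-axis categorization described above (level: global vs. specific; type: primary vs. modifier) can be sketched as a small data structure. Field names and the example preferences are illustrative assumptions, not the paper's actual preference set.

```python
from dataclasses import dataclass

# Minimal sketch of a preference record along the two proposed axes.
@dataclass(frozen=True)
class Preference:
    name: str
    level: str   # "global" (applies across tasks) or "specific" (one task)
    kind: str    # "primary" (core choice) or "modifier" (adjusts a primary)
    value: object

prefs = [
    Preference("interaction_speed", "global", "primary", "slow"),
    Preference("feeding_bite_size", "specific", "modifier", "small"),
]

def applicable(prefs, task):
    """Select preferences relevant to a task: all global ones plus
    task-specific ones (keyed here by a simple name-prefix convention)."""
    return [p for p in prefs
            if p.level == "global" or p.name.startswith(task)]
```

    A decision-making module could then filter by `kind` as well, applying primary preferences first and modifiers on top.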

    Online Robot Introspection via Wrench-based Action Grammars

    Robot failure is all too common in unstructured robot tasks. Despite well-designed controllers, robots often fail due to unexpected events. How do robots measure unexpected events? Many do not. Most robots are driven by the sense-plan-act paradigm, though more recently robots have begun to adopt a sense-plan-act-verify paradigm. In this work, we present a principled methodology to bootstrap online robot introspection for contact tasks. In effect, we enable the robot to answer the questions: What did I do? Is my behavior as expected or not? To this end, we analyze noisy wrench data and postulate that it inherently contains patterns that can be effectively represented by a vocabulary. The vocabulary is generated by segmenting and encoding the data. When the wrench information represents a sequence of sub-tasks, the vocabulary forms a sentence (a set of words with grammar rules) for a given sub-task, allowing the latter to be uniquely represented. The grammar, which can also include unexpected events, was classified in offline and online scenarios as well as in simulated and real robot experiments. Multiclass Support Vector Machines (SVMs) were used offline, while online probabilistic SVMs were used to give temporal confidence to the introspection result. The contribution of our work is a generalizable online semantic scheme that enables a robot to understand its high-level state, whether nominal or abnormal. It is shown to work in offline and online scenarios for a particularly challenging contact task: snap assemblies. We perform the snap assembly in simulated and real one-arm experiments and a simulated two-arm experiment. This verification mechanism can be used by high-level planners or reasoning systems to enable intelligent failure recovery or to determine the next most optimal manipulation skill.
    Comment: arXiv admin note: substantial text overlap with arXiv:1609.0494
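    The segment-and-encode step can be illustrated with a toy example: discretize a wrench-magnitude signal into a small symbol alphabet and collapse consecutive repeats into a "word" for the sub-task. The alphabet and thresholds below are made-up assumptions; the paper's actual encoding is richer than this sketch.

```python
# Hedged sketch: turning noisy wrench magnitudes into a symbolic word.
# Thresholds and symbols are illustrative, not the paper's encoding.

def encode_wrench(samples, low=1.0, high=5.0):
    """Map each wrench-magnitude sample to a symbol:
    c(ontact-free), t(ouch), i(mpact)."""
    symbols = []
    for s in samples:
        if s < low:
            symbols.append("c")
        elif s < high:
            symbols.append("t")
        else:
            symbols.append("i")
    return symbols

def to_word(symbols):
    """Collapse consecutive repeats so each segment contributes one letter."""
    word = []
    for s in symbols:
        if not word or word[-1] != s:
            word.append(s)
    return "".join(word)

# A snap assembly might read as: approach (c), touch (t), snap (i), hold (t).
signal = [0.2, 0.3, 2.0, 2.5, 7.0, 6.5, 3.0, 2.8]
word = to_word(encode_wrench(signal))  # "ctit"
```

    A classifier (e.g. an SVM over such words or their features) can then label a word as a nominal sub-task or an unexpected event.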

    Planning hand-arm grasping motions with human-like appearance

    Finalist for the IROS Best Application Paper Award at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This paper addresses the problem of obtaining human-like motions on hand-arm robotic systems performing pick-and-place actions. The focus is on the coordinated movements of the robotic arm and the anthropomorphic mechanical hand with which the arm is equipped. For this, human movements performing different grasps are captured and mapped to the robot in order to compute the human hand synergies. These synergies are used to reduce the complexity of the planning phase by reducing the dimension of the search space. In addition, the paper proposes a sampling-based planner that guides the motion planning following the synergies. The introduced approach is tested in an application example and thoroughly compared with other state-of-the-art planning algorithms, obtaining better results.
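    The dimensionality-reduction idea above can be sketched as a linear synergy map: the planner samples a few synergy coefficients, and full hand postures are reconstructed as a mean posture plus a linear combination of synergy vectors. The mean and synergy matrix below are illustrative placeholders (4 joints, 2 synergies), not data from the paper.

```python
# Hypothetical synergy subspace for a 4-joint hand; real hands have ~20 DoF.
MEAN = [0.1, 0.1, 0.1, 0.1]        # assumed mean hand posture
SYNERGIES = [
    [0.5, 0.5, 0.5, 0.5],          # synergy 1: overall closing
    [0.5, 0.5, -0.5, -0.5],        # synergy 2: differential flexion
]

def synergy_to_joints(coeffs):
    """Map low-dimensional synergy coefficients to a full joint vector,
    so a planner can search in 2-D instead of the full joint space."""
    joints = list(MEAN)
    for c, syn in zip(coeffs, SYNERGIES):
        for j in range(len(joints)):
            joints[j] += c * syn[j]
    return joints
```

    In practice the mean and synergy vectors would come from a principal-component analysis of captured human grasping data, as the abstract describes.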

    A first approach to a taxonomy-based classification framework for hand grasps

    Many solutions have been proposed to help amputees regain lost functionality. In order to interact with the outer world and the objects that populate it, it is crucial for these subjects to be able to perform essential grasps. In this paper we propose a preliminary solution for the online classification of 8 basic hand grasps from physiological signals, namely surface electromyography (sEMG), exploiting a quantitative taxonomy of the considered movements. The hierarchical organization of the taxonomy allows the classification phase to be decomposed into decisions between pairs of movement groups. The idea is that the closer a decision is to the root, the harder the classification, but at the same time a misclassification error is less problematic, since the two movements will be close to each other. The proposed solution is subject-independent: signals from many different subjects are used by the probabilistic framework to model the input signals. The information was modeled offline using a Gaussian Mixture Model (GMM) and then tested online on an unseen subject using a Gaussian-based classifier. To process the signal online, an accurate preprocessing phase is needed; in particular, we apply the Wavelet Transform (WT) to the electromyography (EMG) signal. Thanks to this approach we are able to develop a robust and general solution that can adapt quickly to new subjects, with no need for a long and draining training phase. In this preliminary study we reached a mean accuracy of 76.5%, with up to 97.29% at the higher levels of the taxonomy.
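    The hierarchical decomposition described above can be sketched as a cascade of binary Gaussian-likelihood decisions down a taxonomy tree: each node sends the sample toward one of two movement groups, and leaves are individual grasps. The tiny two-level tree, the single scalar feature, and all means/variances below are illustrative assumptions, not the trained GMMs from the paper.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of a scalar feature under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Each node holds two branches: (child name, (mean, variance)).
# Names and parameters are made up for illustration.
TREE = {
    "root":      {"left": ("power", (0.2, 0.05)),       "right": ("precision", (0.8, 0.05))},
    "power":     {"left": ("cylindrical", (0.1, 0.02)), "right": ("spherical", (0.4, 0.02))},
    "precision": {"left": ("pinch", (0.7, 0.02)),       "right": ("tripod", (0.95, 0.02))},
}

def classify(x, node="root"):
    """Walk the taxonomy: pick the likelier branch at each node until a leaf.
    A mistake near the root only confuses grasps that remain similar."""
    lname, lparams = TREE[node]["left"]
    rname, rparams = TREE[node]["right"]
    branch = lname if gaussian_loglik(x, *lparams) >= gaussian_loglik(x, *rparams) else rname
    return classify(x, branch) if branch in TREE else branch
```

    A real pipeline would replace the scalar feature with wavelet-transformed sEMG features and each node's Gaussian with a mixture model fitted on multi-subject data.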