
    Geometry-aware Manipulability Learning, Tracking and Transfer

    Body posture influences human and robot performance in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor to analyze, control and design robot dexterity as a function of the joint configuration. This descriptor can be shaped according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel \emph{manipulability transfer} framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. Learning is coupled with a geometry-aware tracking controller that allows robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach. Comment: Accepted for publication in the Intl. Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices
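As background for the descriptor used in this abstract: the velocity manipulability ellipsoid is obtained from the manipulator Jacobian as M = J J^T, whose eigenvectors and eigenvalues give the ellipsoid axes. A minimal numpy sketch for a planar 2-link arm (link lengths and the test configuration are arbitrary illustration values, not taken from the paper):

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Position Jacobian of a planar 2-link arm (standard textbook form)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def velocity_manipulability(q):
    """M = J J^T; a symmetric positive definite matrix away from singularities."""
    J = jacobian_2link(q)
    return J @ J.T

M = velocity_manipulability(np.array([0.3, 1.2]))
evals, evecs = np.linalg.eigh(M)  # ellipsoid semi-axis lengths are sqrt(evals)
```

The SPD structure of M is exactly what motivates the manifold-aware learning scheme described above.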

    Analysis and Transfer of Human Movement Manipulability in Industry-like Activities

    Humans exhibit outstanding learning, planning and adaptation capabilities while performing different types of industrial tasks. Given some knowledge about the task requirements, humans are able to plan their limb motions in anticipation of the execution of specific skills. For example, when an operator needs to drill a hole in a surface, the posture of her limbs varies to guarantee a stable configuration that is compatible with the drilling task specifications, e.g. exerting a force orthogonal to the surface. Therefore, we are interested in analyzing human arm motion patterns in industrial activities. To do so, we build our analysis on the so-called manipulability ellipsoid, which captures a posture-dependent ability to perform motion and exert forces along different task directions. Through a thorough analysis of human movement manipulability, we found that the ellipsoid shape is task-dependent and often provides more information about the human motion than classical manipulability indices. Moreover, we show how manipulability patterns can be transferred to robots by learning a probabilistic model and employing a manipulability tracking controller that acts on the task planning and execution according to predefined control hierarchies. Comment: Accepted for publication in IROS'20. Website: https://sites.google.com/view/manipulability/home . Video: https://youtu.be/q0GZwvwW9A
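The contrast drawn above between ellipsoid shape and classical scalar indices can be made concrete: Yoshikawa's index w = sqrt(det(J J^T)) summarizes the ellipsoid volume in one number, discarding the directional information that the axis ratio retains. A sketch (the 2-link arm and configuration are illustrative assumptions, not data from the paper):

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def yoshikawa_index(J):
    """Classical scalar manipulability index w = sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

def ellipsoid_axis_ratio(J):
    """Shape (anisotropy) information the scalar index discards."""
    ev = np.linalg.eigvalsh(J @ J.T)
    return np.sqrt(ev[-1] / ev[0])

J = jacobian_2link(np.array([0.2, 1.0]))
w, ratio = yoshikawa_index(J), ellipsoid_axis_ratio(J)
```

For this arm, w reduces to l1*l2*|sin(q2)|, so two very differently shaped ellipsoids can share the same index value, which is the limitation the abstract alludes to.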

    Decentralized Ability-Aware Adaptive Control for Multi-robot Collaborative Manipulation

    Multi-robot teams can achieve tasks that are more dexterous, complex and heavier in payload than a single robot can, yet effective collaboration is required. Multi-robot collaboration is extremely challenging due to the different kinematic and dynamic capabilities of the robots, the limited communication between them, and the uncertainty of the system parameters. In this paper, a Decentralized Ability-Aware Adaptive Control is proposed to address these challenges based on two key features. Firstly, the common manipulation task is represented by the proposed nominal task ellipsoid, which is used to maximize each robot's force capability online by optimizing its configuration. Secondly, a decentralized adaptive controller is designed to be Lyapunov stable in spite of the heterogeneous actuation constraints of the robots and uncertain physical parameters of the object and environment. In the proposed framework, decentralized coordination and load distribution between the robots are achieved without communication, while only the control deficiency is broadcast if any of the robots reaches its force limits. In this case, the object reference trajectory is modified in a decentralized manner to guarantee stable interaction. Finally, we perform several numerical and physical simulations to analyse and verify the proposed method with heterogeneous multi-robot teams in collaborative manipulation tasks. Comment: The article has been submitted to IEEE Robotics and Automation Letters (RA-L) with ICRA 2021 conference option; the article has been accepted for publication in RA-L
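To make configuration-dependent force capability concrete: with joint torques bounded as ||tau|| <= 1 and tau = J^T f, the achievable end-effector force along a unit direction d has magnitude 1/sqrt(d^T J J^T d). The sketch below pairs this with a brute-force search over configurations of a planar 2-link arm, a crude stand-in for the paper's online configuration optimization (which the abstract does not detail); the grid bounds and arm parameters are assumptions:

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def force_capability(q, d):
    """Max force along unit direction d under ||tau|| <= 1:
    from tau = J^T f it follows that the bound is 1 / sqrt(d^T J J^T d)."""
    J = jacobian_2link(q)
    return 1.0 / np.sqrt(d @ (J @ J.T) @ d)

def best_configuration(d, n=60):
    """Grid search over joint angles; the elbow range avoids singularities."""
    qs = [np.array([a, b]) for a in np.linspace(-np.pi, np.pi, n)
                           for b in np.linspace(0.2, np.pi - 0.2, n)]
    return max(qs, key=lambda q: force_capability(q, d))

d = np.array([0.0, 1.0])          # push straight up
q_star = best_configuration(d)    # most force-capable posture on the grid
```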

    A probabilistic framework for learning geometry-based robot manipulation skills

    Programming robots to perform complex manipulation tasks is difficult because many tasks require sophisticated controllers that may rely on data such as manipulability ellipsoids, stiffness/damping and inertia matrices. Such data are naturally represented as Symmetric Positive Definite (SPD) matrices to capture specific geometric characteristics of the data, which makes hard-coding them complex. To alleviate this difficulty, the Learning from Demonstration (LfD) paradigm can be used to learn robot manipulation skills with specific geometric constraints encapsulated in SPD matrices. Learned skills often need to be adapted when they are applied to new situations. While existing techniques can adapt Cartesian and joint space trajectories described by various desired points, the adaptation of motion skills encapsulated in SPD matrices remains an open problem. In this paper, we introduce a new LfD framework that can learn robot manipulation skills encapsulated in SPD matrices from expert demonstrations and adapt them to new situations defined by new start-, via- and end-matrices. The proposed approach leverages Kernelized Movement Primitives (KMPs) to generate SPD-based robot manipulation skills that smoothly adapt the demonstrations to conform to new constraints. We validate the proposed framework in several simulations as well as in a real experimental scenario.
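A standard geometry-aware tool for moving between SPD-valued waypoints, such as the start- and end-matrices above, is geodesic interpolation under the affine-invariant metric. Whether this framework uses exactly this metric is not stated in the abstract, so the following is only a sketch of the underlying idea: interpolation that stays on the SPD manifold, unlike naive entrywise blending.

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def spd_geodesic(A, B, t):
    """Affine-invariant geodesic from A (t=0) to B (t=1):
    A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}.
    Every intermediate point remains symmetric positive definite."""
    As = np.real(sqrtm(A))
    Ais = np.linalg.inv(As)
    inner = fractional_matrix_power(Ais @ B @ Ais, t)
    return np.real(As @ inner @ As)

# two SPD "skill" matrices, e.g. stiffness profiles at start and end
A = np.diag([4.0, 1.0])
B = np.diag([1.0, 4.0])
mid = spd_geodesic(A, B, 0.5)   # for these commuting matrices, mid = 2 * I
```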

    Representation and control of coordinated-motion tasks for human-robot systems

    It is challenging for robots to perform various tasks in a human environment. This is because many human-centered tasks require coordination of both hands and may often involve cooperation with another human. Although human-centered tasks require different types of coordinated movements, most of the existing methodologies have focused only on specific types of coordination. This thesis aims at the description and control of coordinated-motion tasks for human-robot systems, i.e., humanoid robots as well as multi-robot and human-robot systems. First, for bimanually coordinated-motion tasks in dual-manipulator systems, we propose the Extended-Cooperative-Task-Space (ECTS) representation, which extends the existing Cooperative-Task-Space (CTS) representation based on kinematic models of human bimanual movements from biomechanics. The proposed ECTS representation can represent the whole spectrum of dual-arm motion/force coordination using two sets of ECTS motion/force variables in a unified manner. The type of coordination can be easily chosen by two meaningful coefficients, and during coordinated-motion tasks, each set of variables directly describes two different aspects of coordinated motion and force behaviors. Thus, the operator can specify coordinated-motion/force tasks more intuitively in high-level descriptions, and the specified tasks can be easily reused in other situations with greater flexibility. Moreover, we present consistent procedures for using the ECTS representation for task specifications in the upper-body and lower-body subsystems of humanoid robots in order to perform manipulation and locomotion tasks, respectively. In addition, we propose and discuss performance indices derived from the ECTS representation, which can be used to evaluate and optimize the performance of any type of dual-arm manipulation task.
We show that using the ECTS representation for specifying both dual-arm manipulation and biped locomotion tasks can greatly simplify the motion planning process, allowing the operator to focus on high-level descriptions of those tasks. Both upper-body and lower-body task specifications are demonstrated by specifying whole-body task examples on a Hubo II+ robot carrying out dual-arm manipulation as well as biped locomotion tasks in a simulation environment. We also present results from experiments on a dual-arm robot (Baxter) teleoperating various types of coordinated-motion tasks using a single 6D mouse interface. The specified upper- and lower-body tasks can be considered as coordinated motions with constraints. In order to express various constraints imposed across the whole body, we discuss the modeling of whole-body structure and the computations for robotic systems having multiple kinematic chains. Then we present a whole-body controller formulated as a quadratic program, which can take different types of constraints into account in a prioritized manner. We validate the whole-body controller based on simulation results on a Hubo II+ robot performing specified whole-body task examples with a number of motion and force constraints as well as actuation limits. Lastly, we discuss an extension of the ECTS representation, called the Hierarchical Extended-Cooperative-Task-Space (H-ECTS) framework, which uses tree-structured graphical representations for coordinated-motion tasks of multi-robot and human-robot systems. The H-ECTS framework is validated by experimental results on two Baxter robots cooperating with each other as well as with an additional human partner.
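The cooperative-task-space idea of describing dual-arm motion through absolute and relative variables can be sketched as follows. The alpha/beta parameterization below is an illustrative assumption standing in for the thesis's two coordination coefficients, not its actual definition; alpha = 0.5 recovers the familiar CTS midpoint variable.

```python
import numpy as np

def cooperative_task_variables(x1, x2, alpha=0.5, beta=1.0):
    """Sketch of ECTS-style variables for two end-effector positions:
    an 'absolute' variable blending the two arms, and a 'relative'
    variable describing their offset (e.g. the grasp of a shared object).
    alpha/beta are hypothetical coordination coefficients for illustration."""
    x_abs = (1.0 - alpha) * x1 + alpha * x2
    x_rel = beta * (x2 - x1)
    return x_abs, x_rel

# two end-effectors holding a 0.4 m object along x
x1, x2 = np.array([0.0, 0.0, 0.0]), np.array([0.4, 0.0, 0.0])
x_abs, x_rel = cooperative_task_variables(x1, x2)  # alpha=0.5 -> CTS midpoint
```

Commanding x_abs alone moves the object as a whole while x_rel keeps the grasp fixed, which is the intuition behind specifying coordination types with a small number of coefficients.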

    Context-Aware Body Regulation for Redundant Robots

    In the past few decades, the dominance of classical 6-degrees-of-freedom manipulators has been challenged by the rise of 7-degrees-of-freedom redundant robots. Similarly, with the increased availability of humanoid robots in academic research, roboticists suddenly have access to highly dexterous platforms with multiple kinematic chains, capable of undertaking multiple tasks simultaneously. The execution of lower-priority tasks, however, is often done in a task- or scenario-specific fashion. Consequently, these systems are not scalable, and slight changes in the application often imply re-engineering the entire control system and deployment, which impedes the development process over time. This thesis introduces an alternative, systematic method of addressing secondary tasks and redundancy resolution, called context-aware body regulation. Contexts consist of one or multiple tasks; however, unlike in conventional definitions, the tasks within a context are not rigidly defined and maintain some level of abstraction. For instance, following a particular trajectory constitutes a concrete task, while performing a Cartesian motion with the end-effector represents an abstraction of the same task and is more appropriate for context formulation. Furthermore, contexts are often made up of multiple abstract tasks that collectively describe a recurring situation. Body regulation is an umbrella term for a collection of schemes for addressing the robot's redundancy when a particular context occurs. Context-aware body regulation offers several advantages over traditional methods, most notably reusability, scalability and composability of contexts and body regulation schemes. These three fundamental concerns are realized theoretically through in-depth study and mathematical analysis of contexts and regulation strategies, and practically through a component-based software architecture that complements the theoretical aspects.
The findings of the thesis are applicable to any redundant manipulator and humanoid, allowing them to be used in real-world applications. The proposed methodology presents an alternative approach to robot control and offers a new perspective for the future deployment of robotic solutions.
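The redundancy-resolution machinery that body regulation builds on can be illustrated with the classical null-space projection, where a secondary (context-dependent) joint velocity is filtered so it cannot disturb the primary task. A minimal numpy sketch with a toy Jacobian (the thesis's actual regulation schemes are richer than this single projection):

```python
import numpy as np

def resolve_redundancy(J, x_dot, q_dot_secondary):
    """Classical redundancy resolution:
    q_dot = J^+ x_dot + (I - J^+ J) q_dot_secondary.
    The primary task is tracked via the pseudoinverse; the secondary
    velocity is projected into the null space of J, so it shapes the
    body posture without affecting the end-effector motion."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot + N @ q_dot_secondary

# toy 2-dimensional task for a 3-joint robot (one redundant DoF)
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
q_dot = resolve_redundancy(J, np.array([0.1, 0.0]), np.array([0.0, 0.0, 0.5]))
```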

    Shared control for natural motion and safety in hands-on robotic surgery

    In hands-on robotic surgery, the surgeon controls the tool's motion by applying forces and torques to the robot holding the tool, allowing the robot-environment interaction to be felt through the tool itself. To further improve results, shared control strategies are used to combine the strengths of the surgeon with those of the robot. One such strategy is active constraints, which prevent motion into regions deemed unsafe or unnecessary. While research on active constraints for rigid anatomy is well established, limited work has been done on dynamic active constraints (DACs) for deformable soft tissue, particularly on strategies that handle multiple sensing modalities. In addition, attaching the tool to the robot imposes the end-effector dynamics on the surgeon, reducing dexterity and increasing fatigue. Current control policies on these systems only compensate for gravity, ignoring other dynamic effects. This thesis presents several research contributions to shared control in hands-on robotic surgery, which create a more natural motion for the surgeon and extend the use of DACs to point clouds. A novel null-space-based optimization technique has been developed which minimizes the end-effector friction, mass and inertia of redundant robots, creating a more natural motion, one closer to the feeling of the tool unattached to the robot. By operating in the null space, the surgeon is left in full control of the procedure. A novel DACs approach has also been developed which operates on point clouds. This allows its application to various sensing technologies, such as 3D cameras or CT scans, and therefore to various surgeries. Experimental validation in point-to-point motion trials and a virtual-reality ultrasound scenario demonstrates a reduction in work when maneuvering the tool and improvements in accuracy and speed when performing virtual ultrasound scans.
Overall, the results suggest that these techniques could increase ease of use for the surgeon and improve patient safety.
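The point-cloud DACs idea can be sketched with a nearest-neighbor query against the cloud. The linear velocity-scaling law and the distance thresholds below are illustrative assumptions, not the thesis's actual constraint formulation:

```python
import numpy as np
from scipy.spatial import cKDTree

def constrained_velocity(tool_pos, v_cmd, cloud_tree, d_safe=0.02, d_stop=0.005):
    """Illustrative dynamic active constraint over a point cloud:
    scale the commanded tool velocity down as the tool approaches the
    nearest cloud point, and stop it entirely inside d_stop (meters)."""
    d, _ = cloud_tree.query(tool_pos)          # distance to nearest point
    if d <= d_stop:
        return np.zeros_like(v_cmd)
    scale = min(1.0, (d - d_stop) / (d_safe - d_stop))
    return scale * v_cmd

# toy "anatomy" cloud: a flat patch at z = 0, as if from a 3D camera
pts = np.array([[x, y, 0.0] for x in np.linspace(-0.1, 0.1, 11)
                            for y in np.linspace(-0.1, 0.1, 11)])
tree = cKDTree(pts)

# far above the patch: command passes through unchanged
v = constrained_velocity(np.array([0.0, 0.0, 0.05]),
                         np.array([0.0, 0.0, -0.1]), tree)
```

A KD-tree keeps the nearest-neighbor query fast enough to run inside a control loop, which is what makes a raw point cloud usable as a constraint surface regardless of the sensor that produced it.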