166 research outputs found

    Recognition of Haptic Interaction Patterns in Dyadic Joint Object Manipulation

    The development of robots that can physically cooperate with humans has attracted interest in recent decades. This effort requires a deep understanding of the intrinsic properties of interaction. Up to now, many researchers have focused on inferring human intent in terms of intermediate or terminal goals in physical tasks. However, to work side by side with people, an autonomous robot additionally needs detailed information about the underlying haptic interaction patterns that are typically encountered during human-human cooperation. To our knowledge, no study has yet focused on characterizing such information; in this sense, this work is a pioneering effort to gain a deeper understanding of interaction patterns involving two or more humans in a physical task. We present a labeled human-human-interaction dataset that captures the interaction of two humans who collaboratively transport an object in a haptics-enabled virtual environment. In light of the information gained by studying this dataset, we propose that the actions of cooperating partners can be examined under three interaction types: in any cooperative task, the interacting humans either 1) work in harmony, 2) cope with conflicts, or 3) remain passive during interaction. In line with this conception, we present a taxonomy of human interaction patterns and propose five feature sets, comprising force-, velocity-, and power-related information, for the classification of these patterns. Our evaluation shows that a multi-class support vector machine (SVM) classifier achieves a correct classification rate of 86 percent for the identification of interaction patterns, an accuracy obtained by fusing a set of the most informative features selected with the Minimum Redundancy Maximum Relevance (mRMR) method.
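    A minimal sketch of the classification pipeline this abstract describes, using scikit-learn. The data shapes, feature count, and labels are placeholders, and mRMR itself is not in scikit-learn, so a mutual-information filter stands in for it here; this is an illustration of the approach, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per interaction window, columns = force/velocity/power features.
# y: interaction pattern labels (e.g., 0 = harmony, 1 = conflict, 2 = passive).
X = np.random.rand(300, 40)          # placeholder data
y = np.random.randint(0, 3, 300)     # placeholder labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # stand-in for mRMR selection
    SVC(kernel="rbf", decision_function_shape="ovr"),  # multi-class SVM
)
print(cross_val_score(clf, X, y, cv=5).mean())
```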

    A Novel Haptic Feature Set for the Classification of Interactive Motor Behaviors in Collaborative Object Transfer

    Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, haptics remains underrepresented in the development of intelligent systems. This study explores the value of haptic data for extracting information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes, which describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset consisting of data collected from 12 human dyads. Using these data, we verify the salience of the haptic features by achieving a correct classification rate of over 91% with a Random Forest classifier.
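    A minimal sketch of the Random Forest evaluation step, assuming haptic features (e.g., interaction force and power measures) have already been extracted per time window; the four placeholder labels follow the behavior classes named above, and the data here are random stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(480, 12)        # placeholder haptic feature vectors
y = np.random.randint(0, 4, 480)   # 4 classes: harmony/conflict x transl./rot.

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```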

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping, or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely learning force-based manipulation tasks. In such scenarios, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions.

    The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the "what to imitate?" problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A mutual information analysis is used to select the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the proposed perception selection module automatically chooses the data the robot needs to learn a given task.

    Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance, sequential information, uncertainty, and constraints. This is the next problem addressed in this thesis. Here, a probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model to manage perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is therefore able to robustly encode and reproduce different manipulation tasks.

    This thesis then goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing this system is estimated, allowing the robot to shape its compliance. This approach permits extending the learning paradigm to fields beyond common trajectory following. The proposed frameworks are tested in three scenarios, namely (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results evidence the importance of using force perceptions as well as the usefulness and strengths of the methods.
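    A toy sketch of the perception-selection idea described above: rank the sensory channels by their mutual information with the robot's output motion and keep the most informative ones. It uses scikit-learn's MI estimator; the channel names and data are hypothetical placeholders, not the dissertation's actual sensor set.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

channels = ["force_x", "force_y", "force_z", "torque_z", "vision_dist"]
X = np.random.rand(1000, len(channels))   # placeholder sensor streams
y = np.random.rand(1000)                  # placeholder robot action signal

mi = mutual_info_regression(X, y)                      # MI of each input with output
ranked = sorted(zip(channels, mi), key=lambda p: -p[1])
selected = [name for name, score in ranked[:3]]        # keep most informative inputs
print(selected)
```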

    Perception-intention-action cycle as a human acceptable way for improving human-robot collaborative tasks

    © ACM 2023. This is the author's version of the work, posted here for personal use and not for redistribution. The definitive Version of Record was published in HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, https://doi.org/10.1145/3568294.3580149.

    In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair until it is extended to the Perception-Intention-Action (PIA) cycle, giving the human's intention a key role at the same level as the robot's perception, rather than as a sub-block of it. Although part of the human's intention can be perceived or inferred by the other agent, this is prone to misunderstandings, so in some cases the true intention has to be explicitly communicated to fulfill the task. Here, we explore both types of intention and combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task, showing that its usage can increase trust in the robot.
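    A minimal structural sketch of the PIA cycle as described: intention enters the loop as a first-class input alongside perception, and explicitly communicated intent overrides the error-prone inferred one before the situation-awareness fusion. All names here are hypothetical placeholders, not the authors' implementation.

```python
from typing import Optional

def pia_step(percept: dict, inferred_intent: str,
             explicit_intent: Optional[str]) -> str:
    # Explicitly communicated intent takes precedence over inference,
    # since inference is prone to misunderstandings.
    intent = explicit_intent if explicit_intent is not None else inferred_intent
    # Situation awareness fuses perception with intention before acting.
    situation = {"percept": percept, "intent": intent}
    return "lift" if situation["intent"] == "raise_object" else "hold"

print(pia_step({"force_z": 5.0}, "raise_object", None))  # -> "lift"
```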

    Cooperative SLAM-based object transportation by two humanoid robots in a cluttered environment

    In this work, we tackle the problem of making two humanoid robots navigate in a cluttered environment while transporting a very large object that simply cannot be moved by a single robot. We present a complete navigation scheme, from the incremental construction of a map of the environment and the computation of collision-free trajectories to the control needed to execute those trajectories. We present experiments conducted on real Nao robots, equipped with RGB-D sensors mounted on their heads, moving an object around obstacles. Our experiments show that a significantly large object can be transported without changing the robots' main hardware, thereby demonstrating the capabilities of humanoid robots in real-life situations.

    Trust-Based Control of Robotic Manipulators in Collaborative Assembly in Manufacturing

    Human-robot interaction (HRI) is widely addressed in the field of automation and manufacturing. Most of the HRI literature in manufacturing has explored physical human-robot interaction (pHRI) and invested in finding means of ensuring safety and optimized effort sharing among a team of humans and robots. The recent emergence of safe, lightweight, and human-friendly robots has opened a new realm for human-robot collaboration (HRC) in collaborative manufacturing. For such robots with the new HRI functionalities to interact closely and effectively with a human coworker, new human-centered controllers that integrate both physical and social interaction are demanded. Social human-robot interaction (sHRI) has been demonstrated in robots with affective abilities in education, social services, health care, and entertainment; nonetheless, sHRI should not be limited to those areas. In particular, we focus on human trust in the robot as a basis of social interaction. Human trust in the robot and the robot's anthropomorphic features have a high impact on sHRI. Trust is one of the key factors in sHRI and a prerequisite for effective HRC; it characterizes the reliance and tendency of humans in using robots. Factors within the robotic system (e.g., performance, reliability, or attributes), the task, and the surrounding environment can all impact trust dynamically. Over-reliance or under-reliance might occur due to improper trust, which results in poor team collaboration and hence higher task load and lower overall task performance.

    The goal of this dissertation is to develop intelligent control algorithms for manipulator robots that integrate both physical and social HRI factors in collaborative manufacturing. First, the evolution of human trust in a collaborative robot is modeled and verified through a series of human-in-the-loop experiments. This model serves as a computational trust model estimating an objective criterion for the evolution of human trust in the robot, rather than estimating an individual's actual level of trust. Second, an HRI-based framework is developed for controlling the speed of a robot performing pick-and-place tasks, and the impact of considering different levels of interaction in the robot controller on overall efficiency and on HRI criteria such as human perceived workload, trust, and robot usability is studied using a series of human-in-the-loop experiments. Third, an HRI-based framework is developed for planning and controlling the robot motion in hand-over tasks to the human; again, a series of human-in-the-loop studies is conducted to evaluate the impact of the framework on overall efficiency and the same HRI criteria. Finally, another framework is proposed for the cooperative manipulation of a common object by a human-robot team, introducing a trust-based role allocation strategy for adjusting the proactive behavior of the robot performing a cooperative manipulation task in HRC scenarios. For all these frameworks, the experimental results show that integrating HRI in the robot controller leads to a lower human workload while maintaining a threshold level of human trust in the robot without degrading robot usability or efficiency.
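    An illustrative sketch of a discrete-time computational trust model of the kind described above, in which trust evolves with robot performance and is eroded by human workload. The linear form, coefficients, and signals here are assumptions for illustration only, not the dissertation's identified model.

```python
def update_trust(trust: float, robot_perf: float, human_workload: float,
                 a: float = 0.85, b: float = 0.10, c: float = 0.05) -> float:
    # Trust tracks recent robot performance; higher workload erodes it.
    t = a * trust + b * robot_perf - c * human_workload
    return min(max(t, 0.0), 1.0)   # clamp to [0, 1]

trust = 0.5
for perf, load in [(0.9, 0.2), (0.8, 0.3), (0.4, 0.7)]:
    trust = update_trust(trust, perf, load)
    print(f"trust = {trust:.2f}")
```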

    Progress and Prospects of the Human-Robot Collaboration

    Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interactions with humans and unstructured environments. An important application area for the integration of robots with such advanced interaction capabilities is human-robot collaboration. This aspect carries high socio-economic impact and maintains the sense of purpose of the involved people, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to the implementation of various methodologies to achieve intuitive and seamless human-robot-environment interactions by incorporating the collaborative partners' superior capabilities, e.g. the human's cognitive abilities and the robot's physical power generation capacity. The main purpose of this paper is to review the state of the art on intermediate (bi-directional) human-robot interfaces, robot control modalities, system stability, benchmarking, and relevant use cases, and to extend views on the required future developments in the realm of human-robot collaboration.

    Learning Controllers for Reactive and Proactive Behaviors in Human–Robot Collaboration

    Designed to safely share the same workspace as humans and assist them in a variety of tasks, the new collaborative robots are targeting manufacturing and service applications that were once considered unattainable. The large diversity of tasks to carry out, the unstructured environments, and the close interaction with humans call for collaborative robots to seamlessly adapt their behaviors so as to cooperate with users successfully under different and possibly new situations (characterized, for example, by the positions of objects or landmarks in the environment, or by the user's pose). This paper investigates how controllers capable of reactive and proactive behaviors in collaborative tasks can be learned from demonstrations. The proposed approach exploits the temporal coherence and dynamic characteristics of the task observed during the training phase to build a probabilistic model that enables the robot both to react to the user's actions and to lead the task when needed. The method is an extension of the Hidden Semi-Markov Model in which the duration probability distribution is adapted according to the interaction with the user. This Adaptive Duration Hidden Semi-Markov Model (ADHSMM) is used to retrieve a sequence of states governing a trajectory optimization that provides the reference and gain matrices to the robot controller. A proof-of-concept evaluation is first carried out in a pouring task. The proposed framework is then tested in a collaborative task using a 7-DOF backdrivable manipulator.
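    A toy sketch of the duration-adaptation idea behind the ADHSMM: each state carries a Gaussian duration model, and the expected duration is stretched or shortened by an interaction signal (e.g., sensed user engagement), so the robot can wait for a passive user or proceed at the nominal pace. The parameter values and the scaling rule are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

durations_mu = np.array([20.0, 35.0, 15.0])   # nominal steps per state
durations_sd = np.array([3.0, 5.0, 2.0])      # duration std. dev. per state

def adapted_duration(state: int, interaction: float,
                     rng: np.random.Generator) -> int:
    # interaction in [0, 1]: 0 = user inactive (state is extended, robot
    # waits), 1 = user fully engaged (nominal duration, robot leads).
    scale = 1.0 + (1.0 - interaction)          # up to 2x longer if passive
    d = rng.normal(durations_mu[state] * scale, durations_sd[state])
    return max(1, int(round(d)))

rng = np.random.default_rng(0)
seq = [adapted_duration(s, i, rng) for s, i in [(0, 1.0), (1, 0.2), (2, 0.8)]]
print(seq)  # per-state durations that would govern the trajectory optimization
```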