
    Force-based Perception and Control Strategies for Human-Robot Shared Object Manipulation

    Physical Human-Robot Interaction (PHRI) is essential for the future integration of robots in human-centered environments. In these settings, robots are expected to share the same workspace, interact physically, and collaborate with humans to achieve a common task. One of the primary tasks that require human-robot collaboration is object manipulation. The main challenges that must be addressed to achieve seamless cooperative object manipulation relate to uncertainties in the human's trajectory, grasp position, and intention. The object's motion trajectory intended by the human is not always defined for the robot, and the human may grasp any part of the object depending on the desired trajectory. In addition, state-of-the-art object-manipulation control schemes suffer from the translation/rotation problem, where the human cannot move the object in all degrees of freedom independently and thus needs to exert extra effort to accomplish the task. To address these challenges, we first propose an estimation method for identifying the human grasp position. We extend the conventional contact point estimation method by formulating a new identification model with the human-applied torque as an unknown parameter and employing empirical conditions to estimate the human grasp position. The proposed method is compared with conventional contact point estimation using experimental data collected for various collaboration scenarios. Second, given the human grasp position, a control strategy is proposed to transport the object independently in all degrees of freedom. We employ the concept of “the instantaneous center of zero velocity” to reduce the human effort by minimizing the exerted human force. The stability of the interaction is evaluated using a passivity-based analysis of the closed-loop system, including the object and the robotic manipulator. The performance of the proposed control scheme is validated through simulation of scenarios containing rotations and translations of the object. Our study indicates that the torque exerted by the human has a significant effect on the human grasp position estimate. Moreover, knowledge of the human grasp position can be used in the control scheme design to avoid the translation/rotation problem and reduce the human effort.
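    As a rough illustration of the identification idea described above (grasp position estimated from measured wrenches, with the human-applied torque treated as an unknown parameter), the sketch below stacks the relation τ = r × f + m into a linear least-squares problem over simulated samples. The model form, data, and function names are assumptions for illustration, not the thesis's exact formulation.

```python
# Minimal sketch (assumed formulation): estimate the human grasp position r and a
# constant human-applied torque m from wrenches (f_i, tau_i) measured at the robot's
# force/torque sensor. Assumed model: tau_i = r x f_i + m  =>  tau_i = -[f_i]_x r + m.
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_grasp(forces, torques):
    """forces, torques: (N, 3) sensor-frame measurements.
    Returns (r_hat, m_hat): grasp position and constant human torque estimates."""
    A_rows, b_rows = [], []
    for f, tau in zip(forces, torques):
        A_rows.append(np.hstack([-skew(f), np.eye(3)]))  # tau = -[f]_x r + m
        b_rows.append(tau)
    A, b = np.vstack(A_rows), np.hstack(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Synthetic check: grasp at r_true with a constant applied torque m_true.
rng = np.random.default_rng(0)
r_true, m_true = np.array([0.3, 0.1, 0.0]), np.array([0.0, 0.0, 0.5])
F = rng.normal(size=(50, 3))
T = np.cross(np.tile(r_true, (50, 1)), F) + m_true
r_hat, m_hat = estimate_grasp(F, T)
print(r_hat, m_hat)
```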

    Force-based control for human-robot cooperative object manipulation

    In Physical Human-Robot Interaction (PHRI), humans and robots share the workspace and physically interact and collaborate to perform a common task. However, robots do not have human levels of intelligence or the capacity to adapt in performing collaborative tasks. Moreover, the presence of humans in the vicinity of the robot requires ensuring their safety, both in terms of software and hardware. One of the aspects related to safety is the stability of the human-robot control system, which can be placed in jeopardy due to several factors such as internal time delays. Another aspect is the mutual understanding between humans and robots to prevent conflicts in performing a task. The kinesthetic transmission of the human intention is, in general, ambiguous when an object is involved, and the robot cannot distinguish the human intention to rotate from the intention to translate (the translation/rotation problem). This thesis examines the aforementioned issues related to PHRI. First, the instability arising due to a time delay is addressed. For this purpose, the time delay in the system is modeled with the exponential function, and the effect of system parameters on the stability of the interaction is examined analytically. The proposed method is compared with the state-of-the-art criteria used to study the stability of PHRI systems with similar setups and high human stiffness. Second, the unknown human grasp position is estimated by exploiting the interaction forces measured by a force/torque sensor at the robot end effector. To address cases where the human interaction torque is non-zero, the unknown parameter vector is augmented to include the human-applied torque. The proposed method is also compared via experimental studies with the conventional method, which assumes a contact point (i.e., that the human torque is equal to zero). Finally, the translation/rotation problem in shared object manipulation is tackled by proposing and developing a new control scheme based on the identification of the ongoing task and the adaptation of the robot's role, i.e., whether it is a passive follower or an active assistant. This scheme allows the human to transport the object independently in all degrees of freedom and also reduces human effort, which is an important factor in PHRI, especially for repetitive tasks. Simulation and experimental results clearly demonstrate that the force the human must apply is significantly reduced once the task is identified.
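    To give a flavor of the delay-induced instability question raised above, the sketch below checks the phase margin of a simple admittance-controlled loop (virtual mass M, damping B) coupled to a stiff human (stiffness Kh), with the loop delay modeled as e^{-sT}. The specific loop structure and parameter values are illustrative assumptions, not the thesis's actual model or criterion.

```python
# Illustrative only: stability of L(s) = Kh * exp(-s*T) / (M s^2 + B s) via phase margin.
import numpy as np
from scipy.optimize import brentq

def phase_margin_deg(M, B, Kh, T):
    # Gain-crossover frequency: |L(jw)| = 1  =>  Kh = w * sqrt((M*w)^2 + B^2)
    gain = lambda w: w * np.sqrt((M * w) ** 2 + B ** 2) - Kh
    wc = brentq(gain, 1e-6, 1e6)
    # Phase of L(jwc): -90 deg (integrator) - atan(M*wc/B) (lag) - wc*T (delay)
    phase = -np.pi / 2 - np.arctan(M * wc / B) - wc * T
    return np.degrees(np.pi + phase)

for T in (0.0, 0.01, 0.05, 0.1):
    pm = phase_margin_deg(M=5.0, B=20.0, Kh=2000.0, T=T)
    print(f"delay {T*1000:5.1f} ms -> phase margin {pm:6.1f} deg "
          f"({'stable' if pm > 0 else 'unstable'})")
```

    With these made-up parameters the margin shrinks as the delay grows, which is the qualitative effect the thesis analyzes: a sufficiently large internal delay combined with high human stiffness drives the coupled system unstable.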

    Dyadic collaborative manipulation formalism for optimizing human-robot teaming

    Dyadic Collaborative Manipulation (DcM) is a term we use to refer to a team of two individuals, the agent and the partner, jointly manipulating an object. The two individuals partner together to form a distributed system, augmenting their manipulation abilities. Effective collaboration between the two individuals during joint action depends on: (i) the breadth of the agent’s action repertoire, (ii) the level of model acquaintance between the two individuals, (iii) the ability to adapt one’s own actions online to the actions of the partner, and (iv) the ability to estimate the partner’s intentions and goals. Key to the successful completion of co-manipulation tasks with changing goals is the agent’s ability to change grasp-holds, especially in large-object co-manipulation scenarios. Hence, in this work we developed a Trajectory Optimization (TO) method to enhance the repertoire of actions of robotic agents, by enabling them to plan and execute hybrid motions, i.e. motions that include discrete contact transitions, continuous trajectories and force profiles. The effectiveness of the TO method is investigated numerically and in simulation, in a number of manipulation scenarios with both a single and a bimanual robot; a toy sketch of this force-profile optimization idea follows below. In addition, the transition from free motion to contact is a challenging problem in robotics, in part due to its hybrid nature. Additionally, disregarding the effects of impacts at the motion planning level often results in intractable impulsive contact forces. To address this challenge, we introduce an impact-aware multi-mode TO method that combines hybrid dynamics and hybrid control in a coherent fashion. A key concept in our approach is the incorporation of an explicit contact force transmission model into the TO method. This allows the simultaneous optimization of the contact forces, contact timings, continuous motion trajectories and compliance, while satisfying task constraints. To demonstrate the benefits of our method, we compared it against standard compliance control and an impact-agnostic TO method in physical simulations. We also experimentally validated the proposed method with a robot manipulator on the task of halting a large-momentum object. Further, we propose a principled formalism to address the joint planning problem in DcM scenarios, and we solve the joint problem holistically via model-based optimization by representing the human's behavior as task-space forces. The task of finding the partner-aware contact points, forces and the respective timing of grasp-hold changes is carried out by a TO method using non-linear programming. Using simulations, the capability of the optimization method is investigated in terms of robot policy changes (trajectories, timings, grasp-holds) in response to potential changes of the collaborative partner's policies. We also realized, in hardware, effective co-manipulation of a large object by the human and the robot, including grasp changes as well as optimal dyadic interactions to realize the joint task. To address the online adaptation challenge of joint motion plans in dyads, we propose an efficient bilevel formulation that combines graph search methods with trajectory optimization, enabling robotic agents to adapt their policy on the fly in accordance with changes of the dyadic task.
This method is the first to empower agents with the ability to plan online in hybrid spaces, optimizing over discrete contact locations, contact sequence patterns, continuous trajectories, and force profiles for co-manipulation tasks. This is particularly important in large-object co-manipulation tasks that require on-the-fly plan adaptation. We demonstrate in simulation and with robot experiments the efficacy of the bilevel optimization by investigating the effect of robot policy changes in response to real-time alterations of the goal. This thesis provides insight into joint manipulation setups performed by human-robot teams. In particular, it studies computational models of joint action and exploits the uncharted hybrid action space, which is especially relevant in general manipulation and co-manipulation tasks. It contributes towards developing a framework for DcM, capable of planning motions in the contact-force space, realizing these motions while considering impacts and joint-action relations, as well as adapting these motion plans on the fly with respect to changes of the co-manipulation goals.
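    The toy sketch referenced above conveys only the basic flavor of trajectory optimization with force profiles as decision variables (direct transcription over positions, velocities, and forces); it does not capture the hybrid contact-transition, impact-aware, or bilevel aspects of the methods described in the abstract. The task, discretization, and bounds are made-up placeholders.

```python
# Toy direct-transcription sketch: move a 1 kg object 0.5 m in about 1 s while
# minimizing integrated squared force. Decision variables: positions x_k, velocities v_k,
# and forces u_k at N knots; Euler dynamics are imposed as equality constraints.
import numpy as np
from scipy.optimize import minimize

N, dt, m_obj, goal = 25, 0.04, 1.0, 0.5

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]            # x, v, u

def effort(z):
    _, _, u = unpack(z)
    return dt * np.sum(u ** 2)                     # minimize control effort

def dynamics(z):
    x, v, u = unpack(z)
    res = []
    res.extend(x[1:] - (x[:-1] + dt * v[:-1]))             # x_{k+1} = x_k + dt * v_k
    res.extend(v[1:] - (v[:-1] + dt * u[:-1] / m_obj))     # v_{k+1} = v_k + dt * u_k / m
    res.extend([x[0], v[0], x[-1] - goal, v[-1]])           # start at rest, end at goal at rest
    return np.array(res)

z0 = np.zeros(3 * N)
sol = minimize(effort, z0, constraints={'type': 'eq', 'fun': dynamics},
               bounds=[(None, None)] * 2 * N + [(-20.0, 20.0)] * N, method='SLSQP')
x_opt, v_opt, u_opt = unpack(sol.x)
print("final position:", x_opt[-1], "peak force:", np.abs(u_opt).max())
```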

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in Robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses such a topic and its application to an unexplored field, namely learning force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions. The first issue this thesis tackles is to extract the input information that is relevant for learning the task at hand, which is also known as the “what to imitate?” problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A Mutual Information analysis is used for selecting the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task. Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance, sequential information, uncertainty, constraints, etc. This issue is the next problem addressed in this thesis. Here, a probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of such a framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. Therefore, the resulting learning structure is able to robustly encode and reproduce different manipulation tasks. The thesis then goes a step further by proposing a novel whole framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information for encoding the data compactly, and it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing such a system is estimated, allowing the robot to shape its compliance.
This approach makes it possible to extend the learning paradigm beyond common trajectory following. The proposed frameworks are tested in three scenarios, namely, (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results evidence the importance of using force perceptions as well as the usefulness and strengths of the methods.
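    As a small illustration of the mutual-information-based perception selection (the “what to imitate?” step described above), the sketch below ranks candidate sensory channels by their mutual information with a demonstrated command signal and keeps the strongest ones. The data, channel names, and threshold are hypothetical placeholders, not the thesis's experimental setup.

```python
# Rank hypothetical sensory channels by mutual information with the demonstrated output.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
T = 500
force_z = np.sin(np.linspace(0, 6 * np.pi, T)) + 0.1 * rng.normal(size=T)
torque_x = 0.05 * rng.normal(size=T)                       # irrelevant channel
pose_err = np.cumsum(0.01 * rng.normal(size=T))            # weakly relevant drift
velocity_cmd = 0.8 * force_z + 0.1 * rng.normal(size=T)    # demonstrated robot command

X = np.column_stack([force_z, torque_x, pose_err])
names = ["force_z", "torque_x", "pose_err"]

mi = mutual_info_regression(X, velocity_cmd, random_state=0)
for name, score in sorted(zip(names, mi), key=lambda p: -p[1]):
    print(f"{name:10s} MI = {score:.3f}")
selected = [n for n, s in zip(names, mi) if s > 0.1]       # keep high-MI perceptions
print("selected inputs:", selected)
```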

    Reusing a robot's behavioral mechanisms to model and manipulate human mental states

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 125-129). In a task domain characterized by physical actions and where information has value, competing teams gain advantage by spying on and deceiving an opposing team, while cooperating teammates can help the team by secretly communicating new information. For a robot to thrive in this environment, it must be able to perform actions in a manner that deceives opposing agents as well as to communicate secretly with friendly agents. It must further be able to extract information from observing the actions of other agents. The goal of this research is to expand on current human-robot interaction by creating a robot that can operate in the above scenario. To enable these behaviors, an architecture is created that provides the robot with mechanisms to work with hidden human mental states. The robot attempts to infer these hidden states from observable factors and uses them to better understand and predict behavior. It also takes steps to alter them in order to change the future behavior of the other agent. It utilizes the knowledge that the human is performing analogous inferences about the robot's own internal states to predict the effect of its actions on the human's knowledge and perceptions of the robot. The research focuses on the implicit communication that is made possible by two embodied agents interacting in a shared space through nonverbal interaction. While the processes used by a robot differ significantly from the cognitive mechanisms employed by humans, each faces the similar challenge of completing the loop from sensing to acting. This architecture employs a self-as-simulator strategy, reusing the robot's behavioral mechanisms to model aspects of the human's mental states. This reuse allows the robot to model human actions and the mental states behind them using the grammar of its own representations and actions. By Jesse Vail Gray. Ph.D.