
    Force-based control for human-robot cooperative object manipulation

    In Physical Human-Robot Interaction (PHRI), humans and robots share the workspace and physically interact and collaborate to perform a common task. However, robots do not have human levels of intelligence or the capacity to adapt in performing collaborative tasks. Moreover, the presence of humans in the vicinity of the robot requires ensuring their safety, both in terms of software and hardware. One of the aspects related to safety is the stability of the human-robot control system, which can be placed in jeopardy due to several factors such as internal time delays. Another aspect is the mutual understanding between humans and robots to prevent conflicts in performing a task. The kinesthetic transmission of the human intention is, in general, ambiguous when an object is involved, and the robot cannot distinguish the human intention to rotate from the intention to translate (the translation/rotation problem). This thesis examines the aforementioned issues related to PHRI. First, the instability arising due to a time delay is addressed. For this purpose, the time delay in the system is modeled with an exponential function, and the effect of system parameters on the stability of the interaction is examined analytically. The proposed method is compared with the state-of-the-art criteria used to study the stability of PHRI systems with similar setups and high human stiffness. Second, the unknown human grasp position is estimated by exploiting the interaction forces measured by a force/torque sensor at the robot end effector. To address cases where the human interaction torque is non-zero, the unknown parameter vector is augmented to include the human-applied torque. The proposed method is also compared via experimental studies with the conventional method, which assumes a contact point (i.e., that human torque is equal to zero). Finally, the translation/rotation problem in shared object manipulation is tackled by proposing and developing a new control scheme based on the identification of the ongoing task and the adaptation of the robot's role, i.e., whether it is a passive follower or an active assistant. This scheme allows the human to transport the object independently in all degrees of freedom and also reduces human effort, which is an important factor in PHRI, especially for repetitive tasks. Simulation and experimental results clearly demonstrate that the force the human must apply is significantly reduced once the task is identified.
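
    The conventional contact-point estimate mentioned above assumes the human applies a pure force (zero couple) at the grasp, so the measured torque satisfies tau = p x f and the minimum-norm grasp position p follows in closed form from a single wrench sample. The sketch below is a minimal NumPy illustration of that conventional baseline; the function name and numerical example are illustrative and not taken from the thesis.

        import numpy as np

        def contact_point_estimate(force, torque):
            # Minimum-norm point p satisfying torque = p x force, i.e. the
            # conventional contact-point estimate from a wrist force/torque
            # measurement under the zero-human-torque assumption.  The component
            # of the true grasp position along the force's line of action is
            # unobservable from a single sample.
            f = np.asarray(force, dtype=float)
            tau = np.asarray(torque, dtype=float)
            f_sq = f @ f
            if f_sq < 1e-9:                  # force too small: estimate undefined
                return None
            return np.cross(f, tau) / f_sq

        # Illustrative check: a 10 N force along -z applied 0.3 m along x.
        p_true = np.array([0.3, 0.0, 0.0])
        f = np.array([0.0, 0.0, -10.0])
        tau = np.cross(p_true, f)
        print(contact_point_estimate(f, tau))   # ~[0.3, 0.0, 0.0]

    The thesis's augmented formulation additionally treats the human-applied torque as an unknown parameter, which a single-sample closed-form estimate like this cannot recover.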

    Force-based Perception and Control Strategies for Human-Robot Shared Object Manipulation

    Physical Human-Robot Interaction (PHRI) is essential for the future integration of robots in human-centered environments. In these settings, robots are expected to share the same workspace, interact physically, and collaborate with humans to achieve a common task. One of the primary tasks that require human-robot collaboration is object manipulation. The main challenges that need to be addressed to achieve seamless cooperative object manipulation are related to uncertainties in human trajectory, grasp position, and intention. The object’s motion trajectory intended by the human is not always defined for the robot, and the human may grasp any part of the object depending on the desired trajectory. In addition, the state-of-the-art object-manipulation control schemes suffer from the translation/rotation problem, where the human cannot move the object in all degrees of freedom independently and thus needs to exert extra effort to accomplish the task. To address these challenges, we first propose an estimation method for identifying the human grasp position. We extend the conventional contact point estimation method by formulating a new identification model with the human-applied torque as an unknown parameter and employing empirical conditions to estimate the human grasp position. The proposed method is compared with conventional contact point estimation using experimental data collected for various collaboration scenarios. Second, given the human grasp position, a control strategy is proposed to transport the object independently in all degrees of freedom. We employ the concept of “the instantaneous center of zero velocity” to reduce the human effort by minimizing the exerted human force. The stability of the interaction is evaluated using a passivity-based analysis of the closed-loop system, including the object and the robotic manipulator. The performance of the proposed control scheme is validated through simulation of scenarios containing rotations and translations of the object. Our study indicates that the exerted torque of the human has a significant effect on the human grasp position estimation. Moreover, knowledge of the human grasp position can be used in the control scheme design to avoid the translation/rotation problem and reduce the human effort.
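
    To make the "instantaneous center of zero velocity" concept concrete: for planar rigid-body motion, the point of the object with zero velocity can be located from the twist of any reference point by solving v_ref + omega x r = 0. The sketch below covers only this planar case; the function name and example values are illustrative and not taken from the thesis.

        import numpy as np

        def instantaneous_center(v_ref, omega_z, eps=1e-6):
            # Planar instantaneous center of zero velocity, expressed relative
            # to a reference point with linear velocity v_ref = (vx, vy) and
            # angular rate omega_z.  Solving v_ref + omega x r = 0 gives
            # r = z_hat x v_ref / omega_z; for (near-)pure translation the
            # center is at infinity, so None is returned.
            if abs(omega_z) < eps:
                return None
            vx, vy = v_ref
            return np.array([-vy, vx]) / omega_z   # z_hat x (vx, vy, 0) = (-vy, vx, 0)

        # Example: v_ref = (0, 0.5) m/s and omega_z = 1 rad/s place the center
        # of zero velocity 0.5 m along -x from the reference point.
        print(instantaneous_center((0.0, 0.5), 1.0))   # ~[-0.5, 0.0]

    How the thesis uses this point within the controller to minimize the exerted human force, and the accompanying passivity analysis, are not reproduced here.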

    Robust Cooperative Manipulation without Force/Torque Measurements: Control Design and Experiments

    This paper presents two novel control methodologies for the cooperative manipulation of an object by N robotic agents. Firstly, we design an adaptive control protocol which employs quaternion feedback for the object orientation to avoid potential representation singularities. Secondly, we propose a control protocol that guarantees predefined transient and steady-state performance for the object trajectory. Both methodologies are decentralized, since the agents calculate their own signals without communicating with each other, as well as robust to external disturbances and model uncertainties. Moreover, we consider that the grasping points are rigid, and avoid the need for force/torque measurements. Load distribution is also included via a grasp matrix pseudo-inverse to account for potential differences in the agents' power capabilities. Finally, simulation and experimental results with two robotic arms verify the theoretical findings.
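
    As a rough illustration of the load-distribution step, the sketch below builds a grasp matrix for N rigid grasps under the standard cooperative-manipulation convention and distributes a desired object wrench among the agents through a (weighted) pseudo-inverse. It is a minimal NumPy sketch; the particular weighting used to encode differing power capabilities is an assumption, not the paper's scheme.

        import numpy as np

        def skew(p):
            # Skew-symmetric matrix S(p) with S(p) @ v = p x v.
            return np.array([[0.0, -p[2], p[1]],
                             [p[2], 0.0, -p[0]],
                             [-p[1], p[0], 0.0]])

        def grasp_matrix(grasp_points):
            # 6 x 6N grasp matrix for N rigid grasps at the given points
            # (object-frame coordinates).  Maps stacked agent wrenches
            # [f_1; tau_1; ...; f_N; tau_N] to the resultant object wrench.
            blocks = []
            for p in grasp_points:
                G_i = np.eye(6)
                G_i[3:, :3] = skew(np.asarray(p, dtype=float))
                blocks.append(G_i)
            return np.hstack(blocks)

        def distribute_load(w_obj, grasp_points, weights=None):
            # Weighted minimum-norm distribution h = W^-1 G^T (G W^-1 G^T)^-1 w_obj.
            # weights: one positive value per wrench component (length 6N);
            # a larger weight makes that component carry less of the load
            # (an illustrative way to reflect weaker agents).
            G = grasp_matrix(grasp_points)
            W_inv = (np.eye(G.shape[1]) if weights is None
                     else np.diag(1.0 / np.asarray(weights, dtype=float)))
            return W_inv @ G.T @ np.linalg.solve(G @ W_inv @ G.T, w_obj)

        # Two arms grasping 0.2 m on either side of the object's centre, sharing
        # a 20 N lift plus a 2 Nm torque about z; prints one 6-vector per arm.
        w = np.array([0.0, 0.0, 20.0, 0.0, 0.0, 2.0])
        print(distribute_load(w, [(0.2, 0.0, 0.0), (-0.2, 0.0, 0.0)]).reshape(2, 6))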

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
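
    For reference, the de facto standard formulation alluded to here is maximum-a-posteriori estimation over a factor graph, which for Gaussian measurement models reduces to nonlinear least squares. A sketch in the usual notation (X the robot poses and landmarks, z_k the measurements, h_k their models, Omega_k the information matrices; the symbols are the customary ones, not taken from this abstract):

        X^{\star} \;=\; \operatorname*{arg\,max}_{X}\; p(X \mid Z)
                  \;=\; \operatorname*{arg\,min}_{X}\; \sum_{k} \bigl\lVert h_k(X_k) - z_k \bigr\rVert^{2}_{\Omega_k}

    In practice this least-squares problem is solved with iterative methods such as Gauss-Newton or Levenberg-Marquardt, exploiting the sparsity of the factor graph.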

    Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression

    We design a new approach that allows robot learning of new activities from unlabeled human example videos. Given videos of humans executing the same activity from a human's viewpoint (i.e., first-person videos), our objective is to make the robot learn the temporal structure of the activity as its future regression network, and learn to transfer such a model to its own motor execution. We present a new deep learning model: we extend the state-of-the-art convolutional object detection network for the representation/estimation of human hands in training videos, and newly introduce the concept of using a fully convolutional network to regress (i.e., predict) the intermediate scene representation corresponding to the future frame (e.g., 1-2 seconds later). Combining these allows direct prediction of future locations of human hands and objects, which enables the robot to infer the motor control plan using our manipulation network. We experimentally confirm that our approach makes learning of robot activities from unlabeled human interaction videos possible, and demonstrate that our robot is able to execute the learned collaborative activities in real time directly based on its camera input.
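
    As a rough illustration of the future-regression idea (not the paper's architecture), the sketch below uses a small fully convolutional network that maps the intermediate scene representation of the current frame, e.g. hand/object detection features, to a predicted representation of a frame one to two seconds ahead. The PyTorch framework choice, the layer sizes, and the random stand-in tensors are all assumptions.

        import torch
        import torch.nn as nn

        class FutureRegressionFCN(nn.Module):
            # Minimal fully convolutional future-regression sketch: maps a CxHxW
            # intermediate representation of the current frame to a predicted
            # representation of a future frame.  Illustrative layer sizes only.
            def __init__(self, channels=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(channels, 128, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(128, 128, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(128, channels, kernel_size=3, padding=1),
                )

            def forward(self, current_repr):
                return self.net(current_repr)

        # Training-step sketch: regress the representation extracted from the
        # frame observed ~1-2 s later.  Random tensors stand in for features
        # produced by a pretrained detection backbone.
        model = FutureRegressionFCN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        current, future = torch.randn(8, 64, 28, 28), torch.randn(8, 64, 28, 28)
        loss = nn.functional.mse_loss(model(current), future)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(float(loss))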