
    Object Motion Guided Human Motion Synthesis

    Full text link
    Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours. (Comment: SIGGRAPH Asia 2023)
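
    The two-stage design lends itself to a simple cascade: one denoising model maps object motion to hand positions, the hand positions are projected onto the object to enforce contact, and a second denoising model maps the constrained hands to full-body poses. The Python sketch below is a minimal illustration of that flow, not the authors' released code: the class `DenoisingModel`, the feature shapes, and the nearest-surface-point projection (applied to every frame for simplicity) are all assumptions.

        # Hypothetical sketch of a two-stage, contact-constrained generation cascade
        # (names and shapes are illustrative, not the authors' released code).
        import numpy as np

        class DenoisingModel:
            """Stand-in for a trained conditional diffusion model."""
            def __init__(self, out_dim):
                self.out_dim = out_dim

            def sample(self, condition):
                # A real model would run iterative denoising conditioned on `condition`;
                # this placeholder just returns a zero output per frame.
                return np.zeros((condition.shape[0], self.out_dim))

        def project_to_contact(hand_pos, object_surface_points):
            """Snap each predicted hand position to the nearest object surface point,
            enforcing the hand-object contact constraint explicitly (simplified)."""
            dists = np.linalg.norm(
                hand_pos[:, None, :] - object_surface_points[None, :, :], axis=-1)
            return object_surface_points[np.argmin(dists, axis=1)]

        def synthesize(object_motion, object_surface_points, hand_model, body_model):
            # Stage 1: denoise hand positions conditioned on the object motion.
            hands = hand_model.sample(object_motion)          # (T, 6): left + right wrist
            left = project_to_contact(hands[:, :3], object_surface_points)
            right = project_to_contact(hands[:, 3:], object_surface_points)
            hands = np.concatenate([left, right], axis=1)
            # Stage 2: denoise full-body poses conditioned on the constrained hands.
            return body_model.sample(hands)                   # (T, pose_dim)

        # Toy usage with random stand-in data.
        T = 120
        object_motion = np.random.randn(T, 9)                 # per-frame object pose features
        surface = np.random.randn(500, 3)                     # sampled object surface points
        poses = synthesize(object_motion, surface,
                           DenoisingModel(out_dim=6), DenoisingModel(out_dim=72))
        print(poses.shape)                                    # (120, 72)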

    Design and Simulation of a Mechanical Hand

    Get PDF
    A variety of mechanical hand designs have been developed in the past few decades. The majority of these designs were made with the sole purpose of imitating the human hand and its capabilities; however, none of them have been equipped with all the motions and sensory capabilities of the human hand. The primary goal of this thesis project was to design a robotic hand with the required number of degrees of freedom and the necessary constraints to achieve all the motions of the human hand. Demonstration of the American Sign Language (ASL) alphabet, using a virtual design and controls platform, served as a means of proving the dexterity of the designed hand. The objectives of the thesis were accomplished using a combination of computerized 3-D modeling, kinematic modeling, and LabVIEW programming. A mechanical hand model was designed using SolidWorks. Actuation methods were incorporated into the design based on the structure of the connecting tendons in the human hand. To analyze the motions of the mechanical hand model, finger assemblies were manufactured at two different scales (full and ¼ size) using rapid prototyping. These finger assemblies were used to study the forces developed within the joints prone to failure when subjected to actuation and spring forces. A free-body diagram and an ANSYS model were created to quantify the force and stress concentrations at the contact point of the pin joint in the distal interphalangeal joint, a location of failure in the rapid-prototyped assembly. A complete kinematic model was then developed for the mechanical hand using the Denavit-Hartenberg convention to map all the joints of the hand and the fingertips into a universal frame of reference. A program was developed using LabVIEW and MATLAB software tools to incorporate the kinematic model of the designed hand and plot the 3-D locations of all joints in the universal frame of reference for each letter of the ASL alphabet. The program was then interfaced with the SolidWorks hand assembly to virtually control the motions of the designed assembly and to optimize the hand motions. In summary, a mechanical human hand model and an interactive software platform were developed to simulate the dexterity of the designed hand and to implement virtual controls, based on kinematic modeling, to achieve the optimum motion patterns needed to demonstrate the ASL alphabet. The designed hand was capable of performing all the static gestures of the ASL alphabet.
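
    The thesis maps every joint and fingertip into a universal frame using Denavit-Hartenberg parameters; the sketch below shows the standard D-H homogeneous transform and a chained forward-kinematics evaluation in Python. The three-joint table and link lengths are placeholders for illustration, not the dimensions or parameters used in the thesis (which implemented the model in LabVIEW and MATLAB).

        # Minimal Denavit-Hartenberg forward-kinematics sketch (standard convention).
        # The joint table below uses placeholder values, not the thesis's hand dimensions.
        import numpy as np

        def dh_transform(theta, d, a, alpha):
            """Homogeneous transform from frame i-1 to frame i (standard D-H)."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([
                [ct, -st * ca,  st * sa, a * ct],
                [st,  ct * ca, -ct * sa, a * st],
                [0.0,      sa,       ca,      d],
                [0.0,     0.0,      0.0,    1.0],
            ])

        def fingertip_position(joint_angles, dh_table):
            """Chain the per-joint transforms and return the fingertip position
            in the universal (base) frame."""
            T = np.eye(4)
            for theta, (d, a, alpha) in zip(joint_angles, dh_table):
                T = T @ dh_transform(theta, d, a, alpha)
            return T[:3, 3]

        # Example: a three-joint planar finger (MCP, PIP, DIP) with placeholder link lengths (m).
        dh_table = [(0.0, 0.045, 0.0), (0.0, 0.025, 0.0), (0.0, 0.018, 0.0)]  # (d, a, alpha)
        print(fingertip_position(np.radians([30.0, 20.0, 10.0]), dh_table))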

    On least-cost path for realistic simulation of human motion

    Get PDF
    We are interested in "human-like" automatic motion simulation with applications in ergonomics. The apparent redundancy of the humanoid with respect to its explicit tasks leads to the problem of choosing a plausible movement within the framework of redundant kinematics. Some results have been obtained in the human-motion literature for reaching motions that involve the position of the hands. We discuss these results and an associated motion generation scheme. When orientation is also explicitly required, very few works are available, and even the methods for analysis are not well defined. We discuss the choice of metrics adapted to orientation, as well as the problems encountered in defining a proper metric over both position and orientation. Motion capture data and simulations are provided in both cases. The main goals of this paper are: to provide a survey of human motion features at the task level for both position and orientation, to propose a kinematic control scheme based on these features, and to properly define the error between motion capture and automatic motion simulation.
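
    A common way to realize such a kinematic control scheme for a redundant humanoid is a damped pseudoinverse step that tracks the task-space (position and/or orientation) error while pushing the gradient of a secondary "least-cost" criterion into the Jacobian null space. The sketch below illustrates that generic scheme on a toy 3-DoF planar arm; it is an assumption-laden stand-in, not the specific controller or cost proposed in the paper.

        # Generic redundancy-resolution step: track a task-space target with a damped
        # pseudoinverse while projecting a secondary cost gradient into the null space.
        # This is a standard scheme, not necessarily the paper's exact controller.
        import numpy as np

        def redundant_ik_step(q, jacobian_fn, task_error_fn, cost_grad_fn,
                              damping=1e-2, null_gain=0.1):
            J = jacobian_fn(q)                       # (m, n), m < n for a redundant arm
            e = task_error_fn(q)                     # (m,) task-space error
            JJt = J @ J.T
            J_pinv = J.T @ np.linalg.inv(JJt + damping**2 * np.eye(JJt.shape[0]))
            N = np.eye(len(q)) - J_pinv @ J          # null-space projector
            dq = J_pinv @ e - null_gain * (N @ cost_grad_fn(q))
            return q + dq

        # Toy usage: a 3-DoF planar arm reaching a 2D point (placeholder kinematics).
        L = np.array([0.3, 0.3, 0.2])                # link lengths (m)

        def fk(q):
            s = np.cumsum(q)
            return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

        def jac(q):
            s = np.cumsum(q)
            J = np.zeros((2, 3))
            for i in range(3):
                J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
                J[1, i] =  np.sum(L[i:] * np.cos(s[i:]))
            return J

        target = np.array([0.4, 0.3])
        q = np.zeros(3)
        for _ in range(100):
            q = redundant_ik_step(q, jac,
                                  lambda q_: target - fk(q_),
                                  lambda q_: q_)     # secondary cost: stay near zero posture
        print(fk(q))                                 # should be close to the target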

    Robust Execution of Contact-Rich Motion Plans by Hybrid Force-Velocity Control

    Full text link
    In hybrid force-velocity control, the robot can use velocity control in some directions to follow a trajectory, while performing force control in other directions to maintain contact with the environment regardless of positional errors. We call this way of executing a trajectory hybrid servoing. We propose an algorithm to compute hybrid force-velocity control actions for hybrid servoing. We quantify the robustness of a control action and make trade-offs between different requirements by formulating the control synthesis as optimization problems. Our method can efficiently compute the dimensions, directions, and magnitudes of the force and velocity controls. We demonstrate the effectiveness of our method through experiments on several contact-rich manipulation tasks. Link to the video: https://youtu.be/KtSNmvwOenM (Comment: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2019)
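
    The core idea of hybrid servoing, splitting the task space into velocity-controlled and force-controlled directions, can be illustrated with textbook-style orthogonal projections. The paper's contribution is choosing the dimensions, directions, and magnitudes of that split by solving optimization problems, which the sketch below does not reproduce; the `hybrid_command` helper and the example axes are hypothetical.

        # Textbook-style hybrid force-velocity command: split the task space with
        # orthogonal projections into velocity-controlled and force-controlled
        # directions. Choosing the subspaces and magnitudes (the paper's optimization
        # step) is not reproduced here; the split below is given directly.
        import numpy as np

        def hybrid_command(v_des, f_des, force_axes):
            """force_axes: (k, m) matrix whose rows span the force-controlled subspace.
            Returns the velocity command (force directions zeroed out) and the
            force command (restricted to the force-controlled directions)."""
            F = np.atleast_2d(force_axes)
            Q, _ = np.linalg.qr(F.T)                # orthonormalize the force directions
            P_force = Q @ Q.T                       # projector onto force-controlled subspace
            P_vel = np.eye(len(v_des)) - P_force    # complementary, velocity-controlled subspace
            return P_vel @ v_des, P_force @ f_des

        # Example: slide along x while pressing down with 10 N along z.
        v_cmd, f_cmd = hybrid_command(
            v_des=np.array([0.05, 0.0, 0.0]),        # desired sliding velocity (m/s)
            f_des=np.array([0.0, 0.0, -10.0]),       # desired contact force (N)
            force_axes=np.array([[0.0, 0.0, 1.0]]))  # contact normal is force-controlled
        print(v_cmd, f_cmd)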