
    Design and Simulation of a Mechanical Hand

    A variety of mechanical hand designs have been developed in the past few decades. The majority of these designs were made with the sole purpose of imitating the human hand and its capabilities; however, none of them has been equipped with all the motions and sensory capabilities of the human hand. The primary goal of this thesis project was to design a robotic hand with the required number of degrees of freedom and the necessary constraints to achieve all the motions of the human hand. Demonstration of the American Sign Language (ASL) alphabet, using a virtual design and controls platform, served as a means of proving the dexterity of the designed hand. The objectives of the thesis were accomplished using a combination of computerized 3-D modeling, kinematic modeling, and LabVIEW programming. A mechanical hand model was designed using SolidWorks. Actuation methods were incorporated into the design based on the structure of the connecting tendons in the human hand. To analyze the motions of the mechanical hand model, finger assemblies were manufactured at two different scales (full and ¼ size) using rapid prototyping. These finger assemblies were used to study the forces developed within the joints prone to failure when subjected to actuation and spring forces. A free-body diagram and an ANSYS model were created to quantify the force and stress concentrations at the contact point of the pin joint in the distal interphalangeal joint, a location of failure in the rapid-prototype assembly. A complete kinematic model was then developed for the mechanical hand using the Denavit-Hartenberg convention to map all the joints of the hand and the fingertips into a universal frame of reference. A program was developed using the LabVIEW and MATLAB software tools to incorporate the kinematic model of the designed hand and plot the 3-D locations of all joints in the universal frame of reference for each letter of the ASL alphabet.
    The program was then interfaced with the SolidWorks hand assembly to virtually control the motions of the designed assembly and to optimize the hand motions. In summary, a mechanical human hand model and an interacting software platform were developed to simulate the dexterity of the designed hand and to implement virtual controls, based on kinematic modeling, that achieve the optimum motion patterns needed to demonstrate the ASL alphabet. The designed hand was capable of performing all the static gestures of the ASL alphabet.
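As a rough illustration of the kinematic-mapping step described in this abstract, the sketch below chains Denavit-Hartenberg transforms to place a fingertip in a universal base frame. The joint angles and link lengths are hypothetical, not taken from the thesis.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint from Denavit-Hartenberg
    parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def fingertip_position(dh_params):
    """Chain the per-joint transforms and return the fingertip
    position in the base (universal) frame."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Hypothetical 3-joint planar finger (MCP, PIP, DIP); link lengths in mm:
finger = [(np.deg2rad(30), 0, 45, 0),
          (np.deg2rad(20), 0, 25, 0),
          (np.deg2rad(10), 0, 18, 0)]
print(fingertip_position(finger))
```

With all joint angles at zero and `alpha = 0`, the fingertip simply lies at the sum of the link lengths along the base x-axis, which makes the chaining easy to sanity-check.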

    On least-cost path for realistic simulation of human motion

    We are interested in "human-like" automatic motion simulation with applications in ergonomics. The apparent redundancy of the humanoid w.r.t. its explicit tasks leads to the problem of choosing a plausible movement in the framework of redundant kinematics. Some results have been obtained in the human-motion literature for reach motions that involve the position of the hands. We discuss these results and an associated motion-generation scheme. When orientation is also explicitly required, very few works are available, and even the methods for analysis are not defined. We discuss the choice of metrics adapted to orientation, as well as the problems encountered in defining a proper metric over both position and orientation. Motion captures and simulations are provided in both cases. The main goals of this paper are: to provide a survey of human-motion features at the task level for both position and orientation, to propose a kinematic control scheme based on these features, and to properly define the error between motion capture and automatic motion simulation.
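One standard baseline for resolving the redundancy this abstract refers to is picking the least-norm joint velocity that realizes a given task velocity via the Jacobian pseudoinverse. The sketch below shows that baseline only; the Jacobian values are made up, and the paper's least-cost criterion may differ.

```python
import numpy as np

def min_norm_joint_velocity(J, x_dot):
    """Among all joint velocities achieving the task velocity x_dot,
    select the least-norm one via the Moore-Penrose pseudoinverse.
    For a redundant chain (more joints than task dimensions) this is
    one common way to pick a single 'plausible' motion."""
    return np.linalg.pinv(J) @ x_dot

# Hypothetical Jacobian: 3-joint arm, 2-D positional task (redundant).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 0.8, 0.4]])
q_dot = min_norm_joint_velocity(J, np.array([0.1, 0.0]))
print(q_dot)
```

Because `J` has full row rank, `J @ q_dot` reproduces the requested task velocity exactly, while the null-space component of the motion is zero by construction.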

    Robust Execution of Contact-Rich Motion Plans by Hybrid Force-Velocity Control

    In hybrid force-velocity control, the robot can use velocity control in some directions to follow a trajectory, while performing force control in other directions to maintain contacts with the environment regardless of positional errors. We call this way of executing a trajectory hybrid servoing. We propose an algorithm to compute hybrid force-velocity control actions for hybrid servoing. We quantify the robustness of a control action and make trade-offs between different requirements by formulating the control synthesis as optimization problems. Our method can efficiently compute the dimensions, directions, and magnitudes of the force and velocity controls. We demonstrate the effectiveness of our method experimentally on several contact-rich manipulation tasks. Link to the video: https://youtu.be/KtSNmvwOenM. Comment: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2019).
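A minimal way to picture the force/velocity split is to project the commanded force onto the contact normal and the commanded velocity onto the tangent plane. The sketch below assumes a single contact with a known normal `n`; it is a textbook-style illustration, not the paper's optimization-based synthesis.

```python
import numpy as np

def hybrid_command(n, v_des, f_des):
    """Split a task-space command into force control along the contact
    normal n and velocity control in the tangent plane -- a minimal
    sketch of the hybrid force-velocity idea."""
    n = n / np.linalg.norm(n)
    P_f = np.outer(n, n)       # projector onto the force-controlled direction
    P_v = np.eye(3) - P_f      # projector onto the velocity-controlled plane
    return P_v @ v_des, P_f @ f_des

# Hypothetical contact normal along +z; press down while sliding in x-y.
v_cmd, f_cmd = hybrid_command(np.array([0.0, 0.0, 1.0]),
                              v_des=np.array([0.2, 0.1, 0.5]),
                              f_des=np.array([0.0, 0.0, -5.0]))
print(v_cmd, f_cmd)
```

The two projectors are orthogonal complements, so the velocity command never fights the force command along the constrained direction.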

    Action Recognition in Videos: from Motion Capture Labs to the Web

    This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework which highlights the evolution of the area, with techniques moving from heavily constrained motion-capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and, thus, the constraints imposed on the type of video that each technique is able to address. Making the hypotheses and constraints explicit makes the framework particularly useful for selecting a method given an application. Another advantage of the proposed organization is that it allows the newest approaches to be categorized seamlessly alongside traditional ones, while providing an insightful perspective on the evolution of the action recognition task up to now. That perspective is the basis for the discussion at the end of the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU; survey paper, 46 pages, 2 figures, 4 tables.

    Speech-driven Animation with Meaningful Behaviors

    Conversational agents (CAs) play an important role in human-computer interaction. Creating believable movements for CAs is challenging, since the movements have to be meaningful and natural, reflecting the coupling between gestures and speech. Studies in the past have mainly relied on rule-based or data-driven approaches. Rule-based methods focus on creating meaningful behaviors that convey the underlying message, but the gestures cannot be easily synchronized with speech. Data-driven approaches, especially speech-driven models, can capture the relationship between speech and gestures; however, they create behaviors disregarding the meaning of the message. This study proposes to bridge the gap between these two approaches, overcoming their limitations. The approach builds a dynamic Bayesian network (DBN) in which a discrete variable is added to condition the behaviors on an underlying constraint. The study implements and evaluates the approach with two constraints: discourse functions and prototypical behaviors. By conditioning on the discourse functions (e.g., questions), the model learns the characteristic behaviors associated with a given discourse class, learning the rules from the data. By conditioning on prototypical behaviors (e.g., head nods), the approach can be embedded in a rule-based system as a behavior realizer, creating trajectories that are timely synchronized with speech. The study proposes a DBN structure and a training approach that (1) model the cause-effect relationship between the constraint and the gestures, (2) initialize the state-configuration models, increasing the range of the generated behaviors, and (3) capture the differences in behaviors across constraints by enforcing sparse transitions between shared and exclusive states per constraint. Objective and subjective evaluations demonstrate the benefits of the proposed approach over an unconstrained model. Comment: 13 pages, 12 figures, 5 tables.
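The "sparse transitions between shared and exclusive states per constraint" idea can be pictured with a toy transition matrix: shared states may reach any state, while each constraint's exclusive states stay within their own block or return to the shared pool. All sizes and values below are hypothetical, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_transition_matrix(n_shared, n_excl, n_constraints):
    """Build a row-stochastic transition matrix where shared states can
    reach every state, but each constraint's exclusive states only reach
    themselves and the shared pool -- a toy version of sparse transitions
    between shared and exclusive states per constraint."""
    n = n_shared + n_excl * n_constraints
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_shared, :] = True                  # shared -> anywhere
    for c in range(n_constraints):
        lo = n_shared + c * n_excl
        hi = lo + n_excl
        mask[lo:hi, :n_shared] = True          # exclusive -> shared pool
        mask[lo:hi, lo:hi] = True              # exclusive -> same constraint
    A = rng.random((n, n)) * mask              # random weights, then mask
    return A / A.sum(axis=1, keepdims=True)    # normalize rows to sum to 1

A = constrained_transition_matrix(n_shared=2, n_excl=3, n_constraints=2)
print(np.round(A, 2))
```

The zero blocks between the two exclusive groups are what keeps behaviors of different constraints from blending into one another.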

    On singular values decomposition and patterns for human motion analysis and simulation

    We are interested in human motion characterization and automatic motion simulation. The apparent redundancy of the humanoid w.r.t. its explicit tasks leads to the problem of choosing a plausible movement in the framework of redundant kinematics. This work explores the intrinsic relationships between singular value decomposition at the kinematic level and optimization principles at the task and joint levels. Two task-based schemes devoted to the simulation of human motion are then proposed and analyzed. These results are illustrated by motion captures, analyses, and task-based simulations. Patterns of singular values serve as a basis for a discussion of the similarity between simulated and real motions.
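The singular-value patterns this abstract mentions come from decomposing the kinematic Jacobian: the left singular vectors span task directions, the right singular vectors span joint directions, and the singular values measure how joint motion is amplified into task motion. A minimal sketch with a made-up Jacobian:

```python
import numpy as np

def jacobian_svd(J):
    """Singular value decomposition of a kinematic Jacobian.
    The singular values (sorted in decreasing order) give the gain
    from joint-space motion to task-space motion along each direction."""
    U, s, Vt = np.linalg.svd(J)
    return U, s, Vt

# Hypothetical 2x3 Jacobian of a redundant planar arm:
J = np.array([[1.0, 0.7, 0.3],
              [0.0, 0.6, 0.5]])
U, s, Vt = jacobian_svd(J)
print(s)  # the singular-value pattern discussed in the abstract
```

Comparing such patterns between captured and simulated trajectories is one way to quantify how "human-like" a simulated motion is.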