6 research outputs found

    Analysis of manipulator structures under joint-failure with respect to efficient control in task-specific contexts

    Full text link
    Abstract — Robots are now able to perform a wide range of tasks. But what happens if one or more of a robot’s joints fail? Is the robot still able to perform the required tasks? Which of its capabilities are limited, and which are lost? We propose an analysis of manipulator structures for comparing a robot’s capabilities with respect to efficient control. The comparison is performed (1) within a single robot in the case of joint failures and (2) between robots with or without joint failures. It is important that the analysis can be performed independently of the structure of the manipulator, so that the results are comparable between different manipulator structures. An abstract representation of the robot’s dynamic capabilities is therefore necessary. We introduce the Maneuverability Volume and the Spinning Pencil for this purpose. The Maneuverability Volume shows how efficiently the end-effector can be moved to any other position. The Spinning Pencil reflects the robot’s capability to change its end-effector orientation efficiently. Our experiments show not only the different capabilities of two manipulator structures, but also how these capabilities change when one or more joints fail.

    Understanding Everyday Hands in Action from RGB-D Images

    Get PDF
    We analyze functional manipulations of handheld objects, formalizing the problem as one of fine-grained grasp classification. To do so, we make use of a recently developed fine-grained taxonomy of human-object grasps. We introduce a large dataset of 12,000 RGB-D images covering 71 everyday grasps in natural interactions. Our dataset differs from past work (typically addressed from a robotics perspective) in its scale, diversity, and combination of RGB and depth data. From a computer-vision perspective, our dataset allows for exploration of contact and force prediction (crucial concepts in functional grasp analysis) from perceptual cues. We present extensive experimental results with state-of-the-art baselines, illustrating the role of segmentation, object context, and 3D understanding in functional grasp analysis. We demonstrate a nearly 2X improvement over prior work and a naive deep baseline, while pointing out important directions for improvement.

    Posture similarity index: a method to compare hand postures in synergy space

    Get PDF
    Background: The human hand can perform a range of manipulation tasks, from holding a pen to holding a hammer. The central nervous system (CNS) uses different strategies in different manipulation tasks based on task requirements. Attempts to compare hand postures have been made for use in the robotics and animation industries. In this study, we developed an index, called the posture similarity index, to quantify the similarity between two human hand postures. Methods: Twelve right-handed volunteers performed 70 postures, and lifted and held 30 objects (a total of 100 different postures, each performed five times). A 16-sensor electromagnetic tracking system captured the kinematics of individual finger phalanges (segments). We modeled the hand as a 21-DoF system and computed the corresponding joint angles. We used principal component analysis to extract kinematic synergies from this 21-DoF data. We developed a posture similarity index (PSI) that represents the similarity between postures in the synergy (principal component) space. First, we tested the performance of this index using a synthetic dataset. After confirming that it performs well on synthetic data, we used it to analyze the experimental data. Further, we used PSI to identify postures that are “representative” in the sense that they have a greater overlap (in synergy space) with a large number of other postures. Results: Our results confirmed that PSI is a relatively accurate index of similarity in synergy space for both synthetic and real experimental data. Also, more special postures than common postures were found among the “representative” postures. Conclusion: We developed an index for comparing posture similarity in synergy space and demonstrated its utility using both synthetic and experimental datasets. In addition, we found that “special” postures are indeed “special” in the sense that they are overrepresented among the “representative” postures identified by our posture similarity index.
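    The abstract does not give the exact formula for the PSI, but the pipeline it describes (joint angles → PCA synergies → similarity in principal-component space) can be sketched minimally. The sketch below assumes cosine similarity between synergy-space projections and a synthetic 100×21 posture matrix; the function names and the choice of 5 synergies are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def extract_synergies(postures, n_synergies=5):
        """PCA via SVD: rows are postures, columns are the 21 joint angles.

        Returns the top principal components (kinematic synergies) and the
        mean posture used for centering.
        """
        mean = postures.mean(axis=0)
        centered = postures - mean
        # Right singular vectors of the centered data are the principal components.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:n_synergies], mean

    def posture_similarity(p1, p2, components, mean):
        """Assumed PSI variant: cosine similarity of two postures projected
        into synergy space (1.0 = identical direction in PC space)."""
        s1 = components @ (p1 - mean)
        s2 = components @ (p2 - mean)
        return s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))

    # Synthetic check, as in the paper's first validation step.
    rng = np.random.default_rng(0)
    joint_angles = rng.standard_normal((100, 21))   # 100 postures, 21 DoF
    synergies, mu = extract_synergies(joint_angles)
    same = posture_similarity(joint_angles[0], joint_angles[0], synergies, mu)
    diff = posture_similarity(joint_angles[0], joint_angles[1], synergies, mu)
    ```

    A posture compared with itself projects to the same synergy vector, so `same` is 1.0 up to floating-point error, while `diff` falls anywhere in [-1, 1].
    
    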

    A Study on Hand Manipulation Analysis from First-Person-View Video

    Get PDF
    Degree type: doctorate by coursework. Dissertation committee: Prof. Shin'ichi Satoh (National Institute of Informatics, chair), Prof. Yoichi Sato (University of Tokyo), Prof. Kiyoharu Aizawa (University of Tokyo), Assoc. Prof. Toshihiko Yamasaki (University of Tokyo), Assoc. Prof. Takeshi Oishi (University of Tokyo). University of Tokyo.

    A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots

    Get PDF
    Models and methods are presented that enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles of human grasp behavior are studied, and the findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed that enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.