186 research outputs found

    Design method for an anthropomorphic hand able to gesture and grasp

    This paper presents a numerical method to conceive and design the kinematic model of an anthropomorphic robotic hand used for gesturing and grasping. In the literature, there are few numerical methods for the finger placement of human-inspired robotic hands. In particular, there are no numerical methods for thumb placement that aim to improve hand dexterity and grasping capabilities while keeping the hand design close to the human one. Whereas existing models are usually the result of successive parameter adjustments, the proposed method determines the finger placements by means of empirical tests. Moreover, a surgery test and a workspace analysis of the whole hand are used to find the best thumb position and orientation according to the hand kinematics and structure. The result is validated through simulation, where it is checked that the hand looks well balanced and that it meets our constraints and needs. The presented method provides a numerical tool that allows the easy computation of finger and thumb geometries and base placements for a human-like dexterous robotic hand.
    Comment: IEEE International Conference on Robotics and Automation, May 2015, Seattle, United States. IEEE, 2015, Proceedings of the IEEE International Conference on Robotics and Automation.

    Development of a three-axis multimodal skin sensor module with adjustable sensitivity

    Waseda University degree record number: Shin 8538. Waseda University.

    Large Scale Capacitive Skin for Robots

    Communications engineering / telecommunication

    Biomimetic Active Touch with Fingertips and Whiskers


    Edge and plane classification with a biomimetic iCub fingertip sensor

    The exploration of and interaction with the environment through tactile sensing is an important task for humanoid robots on the way to truly autonomous agents. Recently, much research has focused on the development of new technologies for tactile sensors and new methods for tactile exploration. Edge detection is one of the tasks required for robots and humanoids to explore and recognise objects. In this work we propose a method for edge and plane classification with a biomimetic iCub fingertip using a probabilistic approach. The iCub fingertip, mounted on an xy-table robot, taps and collects data from the surface and edge of a plastic wall. Using a maximum likelihood classifier, the system detects when the iCub fingertip has reached the edge of the object. The study presented here is also biologically inspired by the tactile exploration performed by animals.
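The maximum-likelihood classification described in this abstract can be sketched in a few lines. The sketch below is an assumption-laden toy, not the authors' implementation: it models each class (edge vs. plane) as a diagonal Gaussian fitted to training tap features, then labels a new tap by the class with the highest likelihood. Feature extraction from the raw fingertip taxels is not shown, and the synthetic 2D features stand in for real tap data.

```python
import numpy as np

def fit_class(samples):
    """Fit a diagonal Gaussian to an (n, d) array of feature vectors."""
    return samples.mean(axis=0), samples.var(axis=0) + 1e-9

def log_likelihood(x, mean, var):
    """Log-density of x under an independent (diagonal) Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify(x, models):
    """Return the label whose Gaussian model gives x the highest likelihood."""
    return max(models, key=lambda label: log_likelihood(x, *models[label]))

# Toy usage with synthetic 2D tap features (hypothetical data)
rng = np.random.default_rng(0)
models = {
    "plane": fit_class(rng.normal(0.0, 0.5, size=(100, 2))),
    "edge":  fit_class(rng.normal(3.0, 0.5, size=(100, 2))),
}
print(classify(np.array([2.9, 3.1]), models))
```

With well-separated training features, a tap near the "edge" cluster is labelled accordingly; in practice the features would come from the fingertip's pressure profile during each tap.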

    Active contour following to explore object shape with robot touch

    In this work, we present an active tactile perception approach for contour following based on a probabilistic framework. Tactile data were collected using a biomimetic fingertip sensor. We propose a control architecture that implements a perception-action cycle for the exploratory procedure, which allows the fingertip to react to tactile contact whilst regulating the applied contact force. In addition, the fingertip is actively repositioned to an optimal position to ensure accurate perception. The method is trained off-line and then tested on-line by following the contours of several different test shapes. We then implement object recognition based on the extracted shapes. Our active approach is compared with a passive approach, demonstrating that active perception is necessary for successful contour following and hence shape recognition.
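The perception-action cycle for contour following can be illustrated with a minimal simulation. This is a hypothetical sketch, not the paper's controller: the "sensor" reports the signed radial error from a circular test contour (standing in for the perceived contact state), and each action step moves tangentially along the contour while correcting along the normal to regulate "contact".

```python
import numpy as np

R = 1.0  # radius of the simulated test shape (assumed)

def sense(pos):
    """Perception step: signed distance of the fingertip from the contour."""
    return np.linalg.norm(pos) - R

def act(pos, step=0.05, gain=0.5):
    """Action step: advance along the tangent, correct along the normal."""
    normal = pos / np.linalg.norm(pos)
    tangent = np.array([-normal[1], normal[0]])
    return pos + step * tangent - gain * sense(pos) * normal

pos = np.array([1.2, 0.0])  # start off the contour
for _ in range(200):
    pos = act(pos)

# The radial error shrinks as the point circulates around the shape.
print(abs(sense(pos)))
```

The real system closes this loop through tactile perception and force regulation rather than a known geometric model; the sketch only shows the sense-then-act structure of the cycle.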

    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Interacting with the environment using the hands is one of the distinctive abilities of humans with respect to other species. This aptitude is reflected in the crucial role played by object manipulation in the world we have shaped for ourselves. With a view to bringing robots outside industry to support people in everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterised by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, proprioception for awareness of the relative position of their own body in space, and the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, one that has been drawing increasing research attention in recent years. In this Thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modelling and grasping pipeline we designed, based on superquadric functions, meets this need, since it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object.
    Retrieving object information with touch sensors alone is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named the Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key aspect of autonomous manipulation we report on in this Thesis is bi-manual coordination. The execution of more advanced manipulation tasks might in fact require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, thus requiring the use of one hand for lifting the object and the other for placing it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans actually do, in that humans develop their manipulation skills by learning through experience and trial-and-error strategies. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications. For this reason, in this Thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.
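The tactile localization idea can be illustrated with a generic particle filter, far simpler than the Memory Unscented Particle Filter named in the abstract. In this hypothetical 2D sketch, the object is a circle of known radius, particles hypothesise its centre, and each simulated contact point reweights them by how well it lies on the hypothesised surface; all geometry, noise levels, and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
RADIUS = 0.5                        # known object geometry (assumed)
true_centre = np.array([0.3, -0.2])

# Simulated contact points on the true object surface, with sensor noise
angles = rng.uniform(0, 2 * np.pi, size=20)
contacts = true_centre + RADIUS * np.stack([np.cos(angles), np.sin(angles)], axis=1)
contacts += rng.normal(0, 0.01, size=contacts.shape)

# Initialise particles (candidate object centres) broadly over the workspace
particles = rng.uniform(-1, 1, size=(2000, 2))
weights = np.full(len(particles), 1.0 / len(particles))

sigma = 0.02  # assumed measurement noise
for c in contacts:
    # Likelihood: how far is the contact from each hypothesised surface?
    surf_err = np.abs(np.linalg.norm(particles - c, axis=1) - RADIUS)
    weights *= np.exp(-0.5 * (surf_err / sigma) ** 2)
    weights /= weights.sum()
    # Resample (with jitter) when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(0, 0.005, size=particles.shape)
        weights[:] = 1.0 / len(particles)

estimate = np.average(particles, axis=0, weights=weights)
print(estimate)
```

The actual algorithm additionally exploits the memory of past measurements and an unscented proposal; this sketch only conveys the core weight-and-resample loop over contact points.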