3,092 research outputs found

    Virtual and rapid prototyping of an underactuated space end effector

    A fast and reliable verification of an initial concept is an important need in the field of mechatronics. Usually, a successful design requires multiple iterations involving a sequence of design phases (the initial one and several improvements) and tests of the resulting prototypes, in a trial-and-error scheme. Nowadays, software and hardware tools allow for a faster approach, in which the iterations between design and prototyping are greatly reduced, even to just one in favorable situations. This work presents the design, manufacturing and testing of a robotic end effector for space applications, realized through virtual prototyping followed by rapid prototyping. The first process produces a mathematical model of the robotic system that, once all the simulations confirm the effectiveness of the design, can be used directly for rapid prototyping by means of 3D printing. The workflow and the results of the process are described in detail in this paper, showing the qualitative and quantitative evaluation of the performance of both the virtual end effector and the actual physical robotic hand.
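    The virtual-prototyping step described above amounts to building a simulable mathematical model of the underactuated mechanism and exercising it before anything is printed. Below is a minimal sketch of that idea, assuming a two-joint finger driven by a single tendon with return springs; the model and every parameter value are illustrative assumptions, not the paper's actual design.

        # Sketch of a "virtual prototype": an underactuated two-joint finger,
        # one tendon actuating both joints, return springs closing the loop.
        # All parameters are assumed for illustration.
        import numpy as np

        I = np.array([2e-4, 1e-4])    # joint inertias [kg m^2] (assumed)
        r = np.array([0.010, 0.006])  # tendon moment arms [m] (assumed)
        k = np.array([0.05, 0.03])    # return-spring stiffness [N m/rad] (assumed)
        b = np.array([0.01, 0.01])    # viscous damping [N m s/rad] (assumed)

        def simulate(tendon_force, t_end=1.0, dt=1e-3):
            """Integrate the finger dynamics under a constant tendon force."""
            q, dq = np.zeros(2), np.zeros(2)   # joint angles and velocities
            for _ in range(int(t_end / dt)):
                tau = tendon_force * r - k * q - b * dq  # underactuated coupling
                dq += (tau / I) * dt
                q += dq * dt
            return q

        # Design check before 3D printing: does a 2 N tendon pull close both joints?
        print(np.degrees(simulate(2.0)))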

    Mechanical Design, Modelling and Control of a Novel Aerial Manipulator

    In this paper, a novel aerial manipulation system is proposed. The mechanical structure of the system, the number of thrusters, and their geometry are derived from technical optimization problems. These problems are defined by taking into consideration the desired actuation forces and torques applied to the end-effector of the system. The framework of the proposed system is designed in a CAD package in order to evaluate the system parameter values. Following this, the kinematic and dynamic models are developed and an adaptive backstepping controller is designed to control the exact position and orientation of the end-effector in Cartesian space. Finally, the performance of the system is demonstrated through a simulation study in which a manipulation task scenario is investigated.
    Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
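    The adaptive backstepping design mentioned above can be illustrated on a much smaller system. The following is a minimal sketch on a 1-DOF double integrator with an unknown constant disturbance; the paper's controller acts on the full end-effector pose, so the scalar plant, the gains, and the adaptation rate here are assumptions chosen only to show the structure (error variables, virtual control, control law, adaptation law).

        # Adaptive backstepping on x'' = u + theta, theta unknown to the controller.
        # Gains k1, k2 and adaptation rate gamma are illustrative assumptions.
        import numpy as np

        k1, k2, gamma, dt = 2.0, 2.0, 5.0, 1e-3
        theta = 0.7                        # true disturbance (hidden from controller)
        x, dx, theta_hat = 0.0, 0.0, 0.0   # state and parameter estimate

        for step in range(int(5.0 / dt)):
            t = step * dt
            xd, dxd, ddxd = np.sin(t), np.cos(t), -np.sin(t)  # reference trajectory
            z1 = x - xd                    # position tracking error
            alpha = dxd - k1 * z1          # virtual control for the velocity state
            z2 = dx - alpha                # velocity error w.r.t. virtual control
            dalpha = ddxd - k1 * (dx - dxd)
            u = dalpha - z1 - k2 * z2 - theta_hat  # backstepping control law
            theta_hat += gamma * z2 * dt   # adaptation law (from a Lyapunov argument)
            dx += (u + theta) * dt         # plant integration
            x += dx * dt

        print(f"tracking error at t=5 s: {x - np.sin(5.0):+.4f}")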

    Perception and manipulation for robot-assisted dressing

    Assistive robots have the potential to provide tremendous support for disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during robot-assisted dressing for (simulated) impaired users with limited upper-body movement capability, and finally a pipeline for robot-assisted dressing for (simulated) paralyzed users who have lost the ability to move their limbs.

    First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points. To reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data.

    Second, unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and clothes. A probabilistic real-time tracking method is proposed that uses Bayesian networks in latent spaces to fuse multi-modal sensor information. The latent spaces are created before dressing by modeling the user's movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables the Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments.

    Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented. The robot grasps a hospital gown naturally hung on a rail and moves around the bed to finish the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, the thesis proposes updating the simulated garment with more realistic physical property values. This is achieved by measuring physical similarity in the latent space using contrastive loss, which maps physically similar examples to nearby points.
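    Of the components above, the grasp-point localization step is the most compact to sketch: a supervised network regresses a grasping location from an image and is trained largely on simulator-generated labels, since real annotations are costly. The architecture and sizes below are illustrative assumptions, not the thesis's actual model.

        # Minimal sketch: a small CNN regressing a 2-D grasping point (u, v)
        # from a garment image; trained on cheap simulator-labeled data.
        import torch
        import torch.nn as nn

        class GraspPointNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4),
                )
                self.head = nn.Linear(32 * 4 * 4, 2)  # pixel coordinates (u, v)

            def forward(self, img):
                return self.head(self.features(img).flatten(1))

        # One training step on stand-in data shaped like rendered garments.
        model, loss_fn = GraspPointNet(), nn.MSELoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        imgs, labels = torch.rand(8, 3, 128, 128), torch.rand(8, 2) * 128
        opt.zero_grad()
        loss_fn(model(imgs), labels).backward()
        opt.step()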

    Pick and Place Without Geometric Object Models

    We propose a novel formulation of robotic pick and place as a deep reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic manipulation frame the problem in terms of low-level states and actions, we propose a more abstract formulation. In this formulation, actions are target reach poses for the hand and states are a history of such reaches. We show that this approach can solve a challenging class of pick-place and regrasping problems where the exact geometry of the objects to be handled is unknown. The only information our method requires is: 1) the sensor perception available to the robot at test time; 2) prior knowledge of the general class of objects for which the system was trained. We evaluate our method using objects belonging to two different categories, mugs and bottles, both in simulation and on real hardware. Results show a major improvement relative to a shape-primitives baseline.
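    The abstraction the paper argues for (reach poses as actions, a history of reaches as state) can be written down directly. The sketch below is an assumed interface for illustration; robot.reach and sensor.observe are hypothetical stand-ins for whatever low-level motion and perception stack sits underneath the RL agent.

        # The abstract MDP: actions are target reach poses, the state is the
        # current perception plus the history of executed reaches.
        # All names here are hypothetical; the paper trains a deep RL agent on top.
        from dataclasses import dataclass, field

        @dataclass
        class ReachPose:
            position: tuple       # (x, y, z) in the robot base frame
            orientation: tuple    # quaternion (x, y, z, w)

        @dataclass
        class State:
            observation: object   # current sensor perception
            reach_history: list = field(default_factory=list)

        def step(state, action, robot, sensor):
            """Execute one abstract action and build the successor state."""
            robot.reach(action)   # low-level motion planning is delegated
            return State(sensor.observe(), state.reach_history + [action])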

    Learning garment manipulation policies toward robot-assisted dressing

    Assistive robots have the potential to support people with disabilities in a variety of activities of daily living, such as dressing. People who have completely lost their upper limb movement functionality may benefit from robot-assisted dressing, which involves complex deformable garment manipulation. Here, we report a dressing pipeline intended for these people and experimentally validate it on a medical training manikin. The pipeline is composed of the robot grasping a hospital gown hung on a rail, fully unfolding the gown, navigating around a bed, and lifting up the user's arms in sequence to finally dress the user. To automate this pipeline, we address two fundamental challenges: first, learning manipulation policies that bring the garment from an uncertain state into a configuration that facilitates robust dressing; second, transferring the deformable object manipulation policies learned in simulation to the real world to leverage cost-effective data generation. We tackle the first challenge by proposing an active pre-grasp manipulation approach that learns to isolate the garment grasping area before grasping. The approach combines prehensile and nonprehensile actions and thus alleviates grasping-only behavioral uncertainties. For the second challenge, we bridge the sim-to-real gap of deformable object policy transfer by approximating real-world garment physics in the simulator. A contrastive neural network is introduced to compare pairs of real and simulated garment observations, measure their physical similarity, and account for simulator parameter inaccuracies. The proposed method enables a dual-arm robot to put back-opening hospital gowns onto a medical manikin with a success rate of more than 90%.
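    The contrastive similarity idea in the last step admits a compact sketch: embed real and simulated observations in a shared latent space, pull physically similar pairs together, and push dissimilar pairs at least a margin apart; the latent distance can then score candidate simulator parameters. The encoder shape and margin below are assumptions, and the loss shown is the standard margin-based contrastive loss rather than necessarily the paper's exact variant.

        # Contrastive similarity between real and simulated garment observations.
        # Encoder size and margin are illustrative assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
                                nn.Linear(128, 32))   # observation -> latent point

        def contrastive_loss(real_obs, sim_obs, similar, margin=1.0):
            """similar: 1.0 if the pair shares garment physics, else 0.0."""
            d = F.pairwise_distance(encoder(real_obs), encoder(sim_obs))
            return (similar * d.pow(2) +
                    (1 - similar) * F.relu(margin - d).pow(2)).mean()

        # Stand-in batch: 4 real/sim observation pairs with similarity labels.
        real, sim = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
        print(contrastive_loss(real, sim, torch.tensor([1., 0., 1., 0.])).item())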