
    Folding Assembly by Means of Dual-Arm Robotic Manipulation

    In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly that can be integrated into a higher-level assembly strategy. The system composed of two pieces in contact is modelled as an articulated object connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force-torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts in order to showcase folding assembly as a viable assembly primitive.
    Comment: 7 pages, accepted for ICRA 201
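
    A minimal sketch of the contact-point estimation step the abstract alludes to, assuming a single rigid point contact so that the measured torque satisfies torque = r x force; the function name and the numpy formulation below are illustrative, not the authors' code. Note that a wrench reading determines the contact point only up to a translation along the force's line of action:

        import numpy as np

        def estimate_contact_point(force, torque, eps=1e-9):
            """Estimate a contact point from a force-torque (wrench) reading.

            Assumes a single rigid point contact, so torque = r x force.
            Returns the point on the contact force's line of action closest
            to the sensor origin; the true contact point differs from it
            only by a translation along the force direction.
            """
            f = np.asarray(force, dtype=float)
            tau = np.asarray(torque, dtype=float)
            f_sq = f.dot(f)
            if f_sq < eps:          # no measurable contact force
                return None
            return np.cross(f, tau) / f_sq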

    Minimax Iterative Dynamic Game: Application to Nonlinear Robot Control Tasks

    Multistage decision policies provide useful control strategies in high-dimensional state spaces, particularly in complex control tasks. However, they exhibit weak performance guarantees in the presence of disturbances, model mismatch, or model uncertainties, and this brittleness limits their use in high-risk scenarios. We show how to quantify the sensitivity of such policies in order to characterize their robustness. We also propose a minimax iterative dynamic game (iDG) framework for designing policies that remain robust in the presence of disturbances and uncertainties. We test the quantification hypothesis on a carefully designed deep neural network policy, and then apply the iDG framework to improve policy robustness against adversarial disturbances. We evaluate our iDG framework on a mecanum-wheeled robot, whose goal is to find a locally robust optimal multistage policy that achieves a given goal-reaching task. The algorithm is simple and adaptable for designing meta-learning/deep policies that are robust against disturbances, model mismatch, or model uncertainties, up to a disturbance bound. Videos of the results are on the author's website, http://ecs.utdallas.edu/~opo140030/iros18/iros2018.html; the code for reproducing our experiments is on GitHub, https://github.com/lakehanne/youbot/tree/rilqg; and a self-contained environment for reproducing our results is on Docker, https://hub.docker.com/r/lakehanne/youbotbuntu14/
    Comment: 2018 International Conference on Intelligent Robots and Systems
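
    As an illustration of the minimax stage problem underlying an iterative dynamic game, below is a sketch of a single linear-quadratic saddle-point solve; the matrices, gamma, and function name are assumptions for exposition, not the paper's iDG implementation, which iterates such a backward pass along nonlinear robot trajectories:

        import numpy as np

        def minimax_stage_gains(A, B, D, Q, R, gamma):
            """Saddle-point gains for one stage of a linear-quadratic minimax game.

            Dynamics: x_next = A x + B u + D w, with stage cost
            x_next^T Q x_next + u^T R u - gamma^2 w^T w. Solving the joint
            first-order conditions in (u, w) yields gains Ku, Kw such that
            u = -Ku x (controller) and w = -Kw x (worst-case disturbance).
            gamma must be large enough that gamma^2 I - D^T Q D is positive
            definite, otherwise the disturbance maximization is unbounded.
            """
            m, p = B.shape[1], D.shape[1]
            H = np.block([
                [R + B.T @ Q @ B, B.T @ Q @ D],
                [D.T @ Q @ B,     D.T @ Q @ D - gamma**2 * np.eye(p)],
            ])
            g = np.vstack([B.T @ Q @ A, D.T @ Q @ A])
            K = np.linalg.solve(H, g)   # stacked gains [Ku; Kw]
            return K[:m], K[m:]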

    Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting

    This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features for this task: BSP (B-Spline Patch) and TSD (Topology Spatial Distances). The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrated the category recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of our proposed method, we built a high-resolution RGBD clothing dataset of 50 clothing items from 5 categories, sampled in random configurations (a total of 2,100 clothing samples). Experimental results show that our approach reaches 83.2% accuracy when classifying clothing items which were previously unseen during training, advancing beyond the previous state-of-the-art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. Our proposed sorting system achieves reasonable sorting success rates with single-shot perception.
    Comment: 9 pages, accepted by IROS201
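
    The abstract names LLC as the encoder for the local BSP features; below is a minimal sketch of the standard approximated LLC encoder (k nearest atoms, sum-to-one least squares), with k, beta, and the codebook shape as assumed illustrative parameters rather than the paper's settings:

        import numpy as np

        def llc_encode(x, codebook, k=5, beta=1e-4):
            """Approximated Locality-constrained Linear Coding (LLC).

            Encodes a local descriptor x of shape (d,) against a codebook of
            shape (n_atoms, d) by reconstructing it from its k nearest atoms
            under a sum-to-one constraint, as in standard LLC.
            """
            dists = np.linalg.norm(codebook - x, axis=1)
            idx = np.argsort(dists)[:k]            # k nearest codebook atoms
            z = codebook[idx] - x                  # shift atoms to the origin
            C = z @ z.T                            # local covariance (k, k)
            C += beta * np.trace(C) * np.eye(k)    # regularise for stability
            w = np.linalg.solve(C, np.ones(k))
            w /= w.sum()                           # enforce sum-to-one
            code = np.zeros(codebook.shape[0])
            code[idx] = w
            return code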

    Robust Visual Servoing in 3D Reaching Tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.
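
    As a generic point of reference for the kind of error-correcting servo loop described above, here is a sketch of the textbook image-based visual servoing law; the interaction matrix, gain, and function name are all assumed for illustration and are not the paper's flow-based controller:

        import numpy as np

        def visual_servo_step(feature_error, interaction_matrix, gain=0.5):
            """One step of the textbook image-based visual servoing law.

            Maps the image-space error e between observed end-effector
            features and target features to a velocity command
            v = -gain * pinv(L) @ e. The paper's controller instead measures
            the end-effector motion field from binocular optical flow, which
            is what lets it avoid explicit calibration.
            """
            L_pinv = np.linalg.pinv(interaction_matrix)
            return -gain * L_pinv @ feature_error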

    The fast contribution of visual-proprioceptive discrepancy to reach aftereffects and proprioceptive recalibration

    Get PDF
    Adapting reaches to altered visual feedback leads not only to motor changes but also to shifts in perceived hand location: “proprioceptive recalibration”. These changes are robust to many task variations and can occur quite rapidly. For instance, our previous study found that both motor and sensory shifts arise in as few as 6 rotated-cursor training trials. The aim of this study is to investigate one of the training signals that contribute to these rapid sensory and motor changes. We do this by removing the visuomotor error signals associated with classic visuomotor rotation training and providing only experience with a visual-proprioceptive discrepancy for training: while a force channel constrains reach direction 30° away from the target, the cursor representing the hand unerringly moves straight to the target. The resulting visual-proprioceptive discrepancy drives significant and rapid changes in no-cursor reaches and felt hand position, again within only 6 training trials. The extent of the sensory change is unexpectedly larger following the visual-proprioceptive discrepancy training. Not surprisingly, the size of the reach aftereffects is substantially smaller than following classic visuomotor rotation training. However, the time course by which both changes emerge is similar in the two training types. These results suggest that even mere exposure to a discrepancy between felt and seen hand location is a sufficient training signal to drive robust motor and sensory plasticity.