8,858 research outputs found

    Feedback-based Fabric Strip Folding

    Accurate manipulation of a deformable body such as a piece of fabric is difficult because of its many degrees of freedom and unobservable properties affecting its dynamics. To alleviate these challenges, we propose the application of feedback-based control to robotic fabric strip folding. The feedback is computed from a low-dimensional state extracted from a camera image. We trained the controller using reinforcement learning in a simulation calibrated to cover real fabric strip behaviors. The proposed feedback-based folding was experimentally compared to two state-of-the-art folding methods and outperformed both in terms of accuracy. Comment: Submitted to IEEE/RSJ IROS201
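The abstract describes a closed-loop structure: extract a low-dimensional state from a camera image, query a trained policy, apply the action, repeat. A minimal generic sketch of that loop follows; the function names and the toy proportional policy are illustrative assumptions, not the authors' RL-trained controller:

```python
def feedback_loop(read_state, apply_action, policy, n_steps=50):
    """Generic closed-loop control: observe a low-dimensional state
    (e.g. features extracted from a camera image), query the policy, act."""
    for _ in range(n_steps):
        s = read_state()
        a = policy(s)
        apply_action(a)

# Toy usage: a proportional policy driving a scalar "fold position"
# toward a target of 1.0 (a stand-in for the learned controller).
state = {"x": 0.0}
feedback_loop(lambda: state["x"],
              lambda a: state.update(x=state["x"] + a),
              lambda s: 0.5 * (1.0 - s))
```

The point of the feedback formulation is visible even in the toy: errors left by one action are observed and corrected by the next, rather than relying on an open-loop plan.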

    Unified computer codes: Properties data for low cost nozzle materials

    The development of the analytic capability to predict the thermal ablation response of promising low cost materials for rocket nozzles is presented.

    Connecting Look and Feel: Associating the visual and tactile properties of physical materials

    For machines to interact with the physical world, they must understand the physical properties of objects and materials they encounter. We use fabrics as an example of a deformable material with a rich set of mechanical properties. A thin flexible fabric, when draped, tends to look different from a heavy stiff fabric. It also feels different when touched. Using a collection of 118 fabric samples, we captured color and depth images of draped fabrics along with tactile data from a high-resolution touch sensor. We then sought to associate the information from vision and touch by jointly training CNNs across the three modalities. Through the CNN, each input, regardless of the modality, generates an embedding vector that records the fabric's physical property. By comparing the embeddings, our system is able to look at a fabric image and predict how it will feel, and vice versa. We also show that a system jointly trained on vision and touch data can outperform a similar system trained only on visual data when tested purely with visual inputs.
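Once each modality's CNN maps its input into a shared embedding space, cross-modal prediction reduces to nearest-neighbor retrieval by embedding similarity. A minimal sketch of that retrieval step, using cosine similarity and toy vectors in place of actual CNN outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cross_modal_match(query_emb, candidate_embs):
    """Index of the candidate embedding closest to the query, e.g.
    matching a fabric image's embedding against touch embeddings."""
    return max(range(len(candidate_embs)),
               key=lambda i: cosine(query_emb, candidate_embs[i]))

# Toy example: a "visual" embedding matched against two "touch" embeddings.
visual = [1.0, 0.1]
touch = [[0.0, 1.0], [0.9, 0.2]]
best = cross_modal_match(visual, touch)  # → 1
```

The joint training objective is what makes this retrieval meaningful: it pulls embeddings of the same fabric together across modalities so that similarity in the shared space reflects shared physical properties.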

    Historical Costume Simulation

    The aim of this study is to produce accurate digital reproductions of clothing from historical sources and to investigate the implications of developing them for online museum exhibits. To achieve this, the study proceeds in several stages. First, the theoretical background of the main issues is established through a review of published work on 3D apparel CAD, drape, and digital curation. Next, using a 3D apparel CAD system, the study attempts realistic visualization of the costumes based on the establishment of a valid simulation reference. This paper reports the pilot exercise carried out to scope the requirements for going forward.

    Modelling the forming mechanics of engineering fabrics using a mutually constrained pantographic beam and membrane mesh

    A method of combining 1D and 2D structural finite elements to capture the fundamental mechanical properties of engineering fabrics subject to finite strains is introduced. A mutually constrained pantographic beam and membrane mesh is presented, and simple homogenisation theory is developed to relate the macro-scale properties of the mesh to the properties of the elements within it. The theory shows that each of the macro-scale properties of the mesh can be independently controlled. An investigation into the performance of the technique is conducted using tensile, cantilever bending and uniaxial bias extension shear simulations. The simulations are first used to verify the accuracy of the homogenisation theory and then to demonstrate the ability of the modelling approach to accurately predict the shear force, shear kinematics and out-of-plane wrinkling behaviour of engineering fabrics.

    Improved accuracy in the determination of flexural rigidity of textile fabrics by the Peirce cantilever test (ASTM D1388)

    Within the field of composite manufacturing simulations, it is well known that the bending behavior of fabrics and prepregs has a significant influence on the drapeability and final geometry of a composite part. Due to sliding between reinforcements within a fabric, the bending properties cannot be determined from in-plane properties and a separate test is required. The Peirce cantilever test represents a popular way of determining the flexural rigidity for these materials, and is the preferred method in the ASTM D1388 standard. This work illustrates the severe inaccuracies (up to 72% error) in the current ASTM D1388 standard as well as the original formulation by Peirce, caused by ignoring higher-order effects. A modified approach accounting for higher-order effects and yielding significantly improved accuracy is presented. The method is validated using finite element simulations and experimental testing. Since no independent tests other than the ASTM D1388 standard are available to determine the bending stiffness of fabric materials, experimental validation is performed on an isotropic, homogeneous Upilex-50S foil for which the flexural rigidity and tensile stiffness are related. The flexural rigidity and elastic modulus are determined through both the cantilever test (ASTM D1388) and tensile testing. The results show that the proposed method measures an elastic modulus close to that determined through tensile testing (within 1%), while both the Peirce formulation (+18%) and the ASTM standard (+72%) overestimate the elastic modulus. The proposed methodology allows for a more accurate determination of flexural rigidity, and enables more accurate simulation of composite forming processes.
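For reference, the classical first-order Peirce relations behind ASTM D1388 (the formulation this paper corrects for higher-order effects) compute a bending length c from the overhang length l at the standard 41.5° tip angle, then flexural rigidity as G = w·c³ with w the areal weight. The sketch below implements only that classical formula, not the paper's modified approach:

```python
import math

def bending_length(overhang, theta_deg=41.5):
    """Peirce bending length: c = l * (cos(theta/2) / (8 tan theta))**(1/3).
    At the standard 41.5 deg tip angle, c is close to half the overhang."""
    th = math.radians(theta_deg)
    return overhang * (math.cos(th / 2) / (8 * math.tan(th))) ** (1 / 3)

def flexural_rigidity(overhang, areal_weight):
    """Classical Peirce flexural rigidity G = w * c**3 (per unit width);
    areal_weight in N/m^2 gives G in N*m."""
    return areal_weight * bending_length(overhang) ** 3

# Example: 0.10 m overhang, 200 g/m^2 fabric (areal weight ~1.962 N/m^2)
G = flexural_rigidity(0.10, 0.200 * 9.81)  # on the order of 2.6e-4 N*m
```

The paper's point is that this closed-form result ignores higher-order effects (large deflections, self-weight distribution), which is exactly where the reported +18% and +72% modulus errors come from.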

    Deep Haptic Model Predictive Control for Robot-Assisted Dressing

    Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces. Comment: 8 pages, 12 figures, 1 table, 2018 IEEE International Conference on Robotics and Automation (ICRA)
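The scheme pairs a learned force predictor with model predictive control: propose candidate actions, evaluate each through the predictor, and execute the one whose cost (dominated by a penalty on predicted force) is lowest. Below is a toy sampling-based sketch of that idea; the one-line force "model" and the cost weights are invented stand-ins for the paper's recurrent network and objective:

```python
import random

def predict_force(state, action):
    """Toy stand-in for the learned haptic force model: here, predicted
    force simply grows with distance from a force-cancelling action."""
    return abs(state + action)

def mpc_step(state, progress_weight=0.1, n_samples=64, rng=random.Random(0)):
    """Sampling-based MPC: pick the sampled candidate action whose cost
    (predicted force minus a small progress reward) is lowest."""
    best_action, best_cost = 0.0, float("inf")
    for _ in range(n_samples):
        a = rng.uniform(-1.0, 1.0)
        cost = predict_force(state, a) - progress_weight * a
        if cost < best_cost:
            best_action, best_cost = a, cost
    return best_action

action = mpc_step(state=0.5)  # favors actions that reduce predicted force
```

Because the objective primarily penalizes predicted force, force-mitigating behavior (like easing a sleeve around a catch) falls out of the optimization rather than being hand-coded, which matches the paper's observation that catch mitigation emerged from the model and cost function.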