    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres placed along the manipulator for guidance. Based on the haptic forces, each sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another trained on large forces between the sphere and cloth but without early termination.
    Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
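The per-sphere target update described above can be caricatured in a few lines: each sphere advances toward its goal but is displaced away from the sensed contact force before constrained inverse kinematics reconciles the targets. The function name and gains below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def update_sphere_targets(positions, goal_dirs, forces, step=0.01, gain=0.05):
    """Illustrative haptic target update: each haptic-sensing sphere
    (one row per sphere) steps toward its goal direction while being
    pushed away from the sensed contact force. A constrained IK solver
    would then resolve conflicts among the returned targets."""
    return positions + step * goal_dirs - gain * forces
```

With zero sensed force a sphere simply advances along its goal direction; a force opposing the motion shrinks or reverses the step, which is the behaviour the haptic guidance relies on.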

    Autonomous Sweet Pepper Harvesting for Protected Cropping Systems

    In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crops and 58% for modified crops. Furthermore, for the more favourable cultivar, we were also able to detach 90% of sweet peppers, indicating that improvements in the grasping success rate would greatly improve harvesting performance.

    Model-Based Control of Soft Actuators Using Learned Non-linear Discrete-Time Models

    Soft robots have the potential to significantly change the way that robots interact with the environment and with humans. However, accurately modeling soft robot and soft actuator dynamics in order to perform model-based control can be extremely difficult. Deep neural networks are a powerful tool for modeling systems with complex dynamics, such as the pneumatic, continuum-joint, six degree-of-freedom robot shown in this paper. Unfortunately, it is also difficult to apply standard model-based control techniques using a neural network. In this work, we show that the gradients used within a neural network to relate system states and inputs to outputs can be used to formulate a linearized discrete state-space representation of the system. Using this state-space representation, model predictive control (MPC) was developed for a six degree-of-freedom pneumatic robot with compliant plastic joints and rigid links. Using this neural network model, we were able to achieve an average steady-state error across all joints of approximately 1° and 2°, with and without integral control, respectively. We also implemented a first-principles model for MPC, and the learned model performed better in terms of steady-state error, rise time, and overshoot. Overall, our results show the potential of combining empirical modeling approaches with model-based control for soft robots and soft actuators.
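The core trick described above, extracting a local linear state-space model from a learned dynamics function, can be sketched with finite differences in place of the network's analytic gradients. Everything here (function names, the finite-difference approach, the step size) is an illustrative assumption, not the paper's code:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-5):
    """Linearize a learned discrete dynamics model x_{k+1} = f(x, u)
    around an operating point (x0, u0). Returns the nominal next state
    f(x0, u0) plus A = df/dx and B = df/du, i.e. the matrices of a
    local linear state-space model that a standard MPC solver can use.
    A neural-net model would supply these Jacobians via autodiff; here
    they are approximated by forward finite differences."""
    n, m = len(x0), len(u0)
    fx0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx0) / eps
    return fx0, A, B
```

As a sanity check, linearizing a model that is already linear should recover its matrices exactly (up to finite-difference error); for a trained network the same call yields a fresh (A, B) at each time step for the MPC problem.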

    Belief Representations for Planning with Contact Uncertainty

    While reaching for your morning coffee you may accidentally bump into the table, yet you reroute your motion with ease and grab your cup. An effective autonomous robot will need a similarly seamless recovery from unexpected contact. As simple as this may seem, even sensing such contact is a challenge for many robots, and when detected, contact is often treated as an error that an operator is expected to resolve. Robots operating in our daily environments will need to reason about the information they have gained from contact and replan autonomously. This thesis examines planning under uncertainty with contact-sensitive robot arms. Robots do not have skin and cannot precisely sense the location of contact. This leads to the proposed Collision Hypothesis Set model for representing a belief over the possible occupancy of the world sensed through contact. To capture the specifics of planning in an unknown world with this measurement model, this thesis develops a POMDP approach called the Blindfolded Traveler's Problem. A good prior over the possible obstacles the robot might encounter is key to effective planning. This thesis develops a neural network approach for sampling potential obstacles that are consistent with both what a robot sees from its camera and what it feels through contact.
    PhD thesis, Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169845/1/bsaund_1.pd
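The belief idea above can be caricatured as maintaining a set of world hypotheses and discarding those inconsistent with observations: a hypothesis is ruled out if it occupies space the robot has swept through freely, or if it fails to explain a detected contact. The grid-cell representation and function below are illustrative assumptions, not the thesis's actual Collision Hypothesis Set implementation:

```python
def prune_hypotheses(hypotheses, swept_free_cells, contact_cells):
    """Toy occupancy-belief update. Each hypothesis is a frozenset of
    occupied grid cells. Keep only hypotheses that (a) do not occupy
    any cell the robot has already moved through without contact, and
    (b) if a contact was sensed, occupy at least one cell that could
    explain it."""
    kept = []
    for h in hypotheses:
        if h & swept_free_cells:          # contradicts observed free space
            continue
        if contact_cells and not (h & contact_cells):
            continue                       # cannot explain the sensed contact
        kept.append(h)
    return kept
```

In the thesis's setting the surviving hypotheses would feed a POMDP planner; here the point is only that both free-space motion and contact events are informative measurements.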