
    Deep Visual Foresight for Planning Robot Motion

    Full text link
    A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach requires neither a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training. Comment: ICRA 2017. Supplementary video: https://sites.google.com/site/robotforesight
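
    The planning loop this abstract describes pairs an action-conditioned video predictor with sampling-based model-predictive control. The sketch below is a minimal illustration of that pattern, not the authors' code: predict_frames is a stub standing in for the learned model, and a simple pixel-difference cost to a goal image replaces the paper's goal formulation.

```python
import numpy as np

def predict_frames(frame, actions):
    """Stand-in for a learned action-conditioned video predictor; here it
    just repeats the current frame so the sketch runs end to end."""
    return np.repeat(frame[None], len(actions), axis=0)

def plan_push(frame, goal_image, horizon=5, n_samples=100, action_dim=2):
    """Random-shooting MPC: sample candidate action sequences, roll each
    through the video predictor, score the final predicted frame against
    a goal image, and return the best first action (receding horizon)."""
    candidates = np.random.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    best_cost, best_action = np.inf, None
    for actions in candidates:
        predicted = predict_frames(frame, actions)         # (horizon, H, W, C)
        cost = np.mean((predicted[-1] - goal_image) ** 2)  # pixel distance to goal
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action  # execute this action, observe, and replan
```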

    Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review

    Get PDF
    Visual feedback is critical for robotic technologies: systems that rely on pre-programmed trajectories and paths tend to be unresponsive when the environment changes or an expected object is absent. This review paper provides a comprehensive survey of recent applications of visual servoing and deep neural networks (DNNs). Position-based visual servoing (PBVS) and MobileNet-SSD were the algorithms chosen for alignment control of the film handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework for feature extraction and description, visual servoing, and MobileNet-SSD, and summarizes the latest applications of visual servoing and DNNs, including a comparison of MobileNet-SSD with other sophisticated models. Previous studies show that visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing and the deep neural network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description method, and the DNN used to construct a robust state estimator. Visual servoing and MobileNet-SSD are therefore parameterized approaches that require further optimization to achieve a specific purpose with distinct tools.
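
    A detector and a PBVS law typically fit together as sketched below. This is an illustrative assumption of the pattern the review surveys, not its implementation: detect_target is a stub standing in for a MobileNet-SSD model, and the gain, pinhole back-projection, and translation-only control law are simplifications.

```python
import numpy as np

def detect_target(image):
    """Stand-in for a MobileNet-SSD detector: would return the target's
    bounding-box centre (u, v) in pixels and an estimated depth in metres."""
    return 320.0, 240.0, 0.5  # placeholder detection

def pbvs_step(image, fx, fy, cx, cy, desired=np.zeros(3), gain=0.5):
    """One position-based visual servoing step: back-project the detected
    centre to a 3-D point using the pinhole model, then command a velocity
    proportional to the translation error (v = -gain * e; rotation omitted)."""
    u, v, z = detect_target(image)
    point = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    error = point - desired
    return -gain * error  # camera-frame translational velocity command
```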

    Pose-Based Tactile Servoing: Controlled Soft Touch using Deep Learning

    Full text link
    This article describes a new way of controlling robots using soft tactile sensors: pose-based tactile servo (PBTS) control. The basic idea is to embed a tactile perception model for estimating the sensor pose within a servo control loop that is applied to local object features such as edges and surfaces. PBTS control is implemented with a soft curved optical tactile sensor (the BRL TacTip) using a convolutional neural network trained to be insensitive to shear. In consequence, robust and accurate controlled motion over various complex 3D objects is attained. First, we review tactile servoing and its relation to visual servoing, before formalising PBTS control. Then, we assess tactile servoing over a range of regular and irregular objects. Finally, we reflect on the relation to visual servo control and discuss how controlled soft touch gives a route towards human-like dexterity in robots. Comment: A summary video is available here: https://youtu.be/12-DJeRcfn0 *NL and JL contributed equally to this work
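
    The control structure the abstract formalises, a pose estimator embedded in a servo loop, can be pictured with the minimal sketch below. It is an assumption-laden illustration rather than the paper's code: estimate_sensor_pose stands in for the shear-insensitive CNN, and the proportional correction is a simplification of the full PBTS controller.

```python
import numpy as np

def estimate_sensor_pose(tactile_image):
    """Stand-in for the shear-insensitive CNN: would regress the sensor's
    pose relative to a local feature (e.g. [depth, roll, yaw] over an edge)."""
    return np.zeros(3)  # placeholder pose estimate

def pbts_step(tactile_image, reference_pose, gain=0.3):
    """One pose-based tactile servoing iteration: estimate the contact pose
    from the tactile image, then move to cancel the error between the
    estimated and reference poses, keeping the sensor on the feature."""
    pose = estimate_sensor_pose(tactile_image)
    error = reference_pose - pose
    return gain * error  # incremental end-effector pose correction
```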

    Robotic Crop Interaction in Agriculture for Soft Fruit Harvesting

    Get PDF
    Autonomous tree crop harvesting has been a seemingly attainable, but elusive, robotics goal for the past several decades. Limiting grower reliance on uncertain seasonal labour is an economic driver, but the ability of robotic systems to treat each plant individually also has environmental benefits, such as reduced emissions and fertiliser use. Over the same period, effective grasping and manipulation (G&M) solutions for warehouse product handling, and for more general robotic interaction, have been demonstrated. Despite research progress in general robotic interaction and in harvesting some specific crop types, a commercially successful robotic harvester has yet to be demonstrated. Most crop varieties, including soft-skinned fruit, have not yet been addressed. Soft fruit, such as plums, present problems for many of the techniques employed on their more robust relatives and require special focus when developing autonomous harvesters. Adapting existing robotics tools and techniques to new fruit types, including soft-skinned varieties, is not well explored. This thesis aims to bridge that gap by examining the challenges of autonomous crop interaction for the harvesting of soft fruit.

    Aspects known to be challenging include mixed obstacle planning with both hard and soft obstacles present, poor outdoor sensing conditions, and the lack of proven picking motion strategies. Positioning an actuator for harvesting requires solving these problems and others specific to soft-skinned fruit, which means addressing the sensing, planning, and actuation areas of a robotic system. These areas are highly interdependent for grasping and manipulation tasks, so solutions need to be developed at the system level. In this thesis, soft robotic actuators, with simplifying assumptions about hard obstacle planes, are used to solve mixed obstacle planning. Persistent target tracking and filtering is used to overcome challenging object detection conditions, while multiple stages of object detection are applied to refine initial position estimates. Several picking motions are developed and tested for plums, with varying degrees of effectiveness. These techniques are integrated into a prototype system which is validated in lab testing and extensive field trials on a commercial plum crop.

    Key contributions of this thesis include:
    I. Examination of grasping and manipulation tools, algorithms, techniques, and challenges for harvesting soft-skinned fruit.
    II. Design, development, and field-trial evaluation of a harvester prototype to validate these concepts in practice, with specific design studies of the gripper type, object detector architecture, and picking motion.
    III. Investigation of specific G&M module improvements, including:
    o Application of the autocovariance least squares (ALS) method to noise covariance matrix estimation for visual servoing tasks, where both simulated and real experiments demonstrated a 30% improvement in state estimation error.
    o Theory and experimentation showing that a single range measurement is sufficient for disambiguating scene scale in monocular depth estimation for some datasets (sketched after this abstract).
    o Preliminary investigations of stochastic object completion and sampling for grasping, active perception for visual-servoing-based harvesting, and multi-stage fruit localisation from RGB-depth data.
    Several field trials were carried out with the plum harvesting prototype. Testing on an unmodified commercial plum crop, in all weather conditions, showed promising results with a harvest success rate of 42%. While a significant gap between prototype performance and commercial viability remains, the use of soft robotics with carefully chosen sensing and planning approaches allows for robust grasping and manipulation under challenging conditions, with both hard and soft obstacles.
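
    The single-range-measurement result in contribution III can be pictured with a small sketch: under the assumption that a monocular depth network is accurate up to one global multiplicative factor, a single known distance pins that factor down. Function and variable names here are illustrative, not the thesis code.

```python
import numpy as np

def rescale_depth(depth_pred, range_m, u, v):
    """Fix the global scale of a relative monocular depth map using one
    absolute range measurement taken at pixel (u, v). Assumes the
    predicted map is correct up to a single multiplicative factor."""
    scale = range_m / depth_pred[v, u]
    return depth_pred * scale

# Example: a 2 m reading at the image centre rescales the whole map.
depth = np.full((480, 640), 0.5)   # toy relative depth map
metric = rescale_depth(depth, 2.0, 320, 240)
assert np.allclose(metric, 2.0)
```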

    EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation

    Full text link
    In this paper, we explore the dynamic grasping of moving objects through active pose tracking and reinforcement learning for hand-eye coordination systems. Most existing vision-based robotic grasping methods implicitly assume target objects are stationary or moving predictably. Grasping unpredictably moving objects presents a unique set of challenges. For example, a pre-computed robust grasp can become unreachable or unstable as the target object moves, and motion planning must also be adaptive. In this work, we present a new approach, Eye-on-hAnd Reinforcement Learner (EARL), for enabling coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time active pose tracking and dynamic grasping of novel objects without explicit motion prediction. EARL readily addresses many thorny issues in automated hand-eye coordination, including fast tracking of 6D object pose from vision, learning a control policy for a robotic arm to track a moving object while keeping the object in the camera's field of view, and performing dynamic grasping. We demonstrate the effectiveness of our approach in extensive experiments validated on multiple commercial robotic arms in both simulations and complex real-world tasks. Comment: Presented at IROS 2023. Corresponding author: Siddarth Jai
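
    The abstract names two objectives the learned policy must balance: track the moving object and keep it in the eye-on-hand camera's field of view. The reward sketch below illustrates how such objectives are commonly combined; the terms and weights are assumptions for illustration, not the paper's actual reward.

```python
import numpy as np

def earl_style_reward(obj_pos, ee_pos, obj_pixel,
                      image_size=(640, 480), w_track=1.0, w_fov=0.5):
    """Illustrative shaping reward: penalise arm-to-object distance
    (tracking term) and the object's pixel offset from the image centre
    (field-of-view term), each weighted by an assumed coefficient."""
    track = -w_track * np.linalg.norm(np.asarray(obj_pos) - np.asarray(ee_pos))
    centre = np.asarray(image_size, dtype=float) / 2.0
    fov = -w_fov * np.linalg.norm(np.asarray(obj_pixel) - centre) / np.linalg.norm(centre)
    return track + fov
```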