
    Review of deep learning methods in robotic grasp detection

    For robots to attain more general-purpose utility, grasping is a necessary skill to master. Such general-purpose robots may use their perception abilities to visually identify grasps for a given object. A grasp describes how a robotic end-effector can be arranged to securely grab an object and successfully lift it without slippage. Traditionally, grasp detection requires expert human knowledge to analytically form a task-specific algorithm, but this is an arduous and time-consuming approach. During the last five years, deep learning methods have enabled significant advancements in robotic vision, natural language processing, and automated driving applications. The successful results of these methods have driven robotics researchers to explore the use of deep learning methods in task-generalised robotic applications. This paper reviews the current state of the art in the application of deep learning methods to generalised robotic grasping and discusses how each element of the deep learning approach has improved the overall performance of robotic grasp detection. Several of the most promising approaches are evaluated, and the one-shot detection method is identified as the most suitable for real-time grasp detection. The availability of suitable volumes of appropriate training data is identified as a major obstacle to effective utilisation of deep learning approaches, and the use of transfer learning techniques is proposed as a potential mechanism to address this. Finally, current trends in the field and potential future research directions are discussed.
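    Most of the deep-learning grasp detectors discussed in reviews of this kind regress an oriented grasp rectangle in the image rather than a full 6-DOF pose. Purely as an illustrative reference point, the Python sketch below shows the common five-parameter (x, y, theta, w, h) rectangle encoding; the class name, field layout, and corner-recovery maths are assumptions made for illustration and are not taken from the paper itself.

    # Sketch of the oriented grasp-rectangle representation commonly used in
    # deep-learning grasp detection: centre, angle, gripper opening, jaw height.
    # Names and corner maths are illustrative assumptions, not from the paper.
    from dataclasses import dataclass
    import math

    @dataclass
    class GraspRectangle:
        x: float       # grasp centre in image coordinates (pixels)
        y: float
        theta: float   # gripper orientation relative to the image x-axis (radians)
        width: float   # gripper opening (pixels)
        height: float  # jaw size (pixels)

        def corners(self):
            """Return the four rectangle corners, e.g. for visualisation."""
            c, s = math.cos(self.theta), math.sin(self.theta)
            dx, dy = self.width / 2.0, self.height / 2.0
            return [
                (self.x + c * ax - s * ay, self.y + s * ax + c * ay)
                for ax, ay in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy))
            ]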

    Learning to grasp in unstructured environments with deep convolutional neural networks using a Baxter Research Robot

    Recent advancements in deep learning have accelerated the capabilities of robotic systems in terms of visual perception, object manipulation, automated navigation, and human-robot collaboration. The capability of a robotic system to manipulate objects in unstructured environments is becoming an increasingly necessary skill. Due to the dynamic nature of these environments, traditional methods, which require expert human knowledge, fail to adapt automatically. After reviewing the relevant literature, a method was proposed to utilise deep transfer learning techniques to detect object grasps from coloured depth images. A grasp describes how a robotic end-effector can be arranged to securely grasp an object and successfully lift it without slippage. In this study, a ResNet-50 convolutional neural network (CNN) model is trained on the Cornell grasp dataset. The training was completed within 30 hours on a workstation PC with GPU acceleration via an NVIDIA Titan X. The trained grasp detection model was further evaluated with a Baxter research robot and a Microsoft Kinect-v2, and a grasp detection accuracy of 93.91% was achieved on a diverse set of novel objects. Physical grasping trials were conducted on a set of 8 different objects. The overall system achieves an average grasp success rate of 65.0% while performing grasp detection in under 25 milliseconds. Analysis of the results concluded that objects with reasonably straight edges and moderately pronounced heights above the table are easily detected and grasped by the system.
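    The abstract describes fine-tuning a pre-trained ResNet-50 on the Cornell grasp dataset. The snippet below is a minimal PyTorch/torchvision sketch of that general transfer-learning setup, not the study's actual configuration: the five-value regression head, the MSE loss, and the optimiser settings are assumptions made for illustration.

    # Hypothetical transfer-learning setup: an ImageNet-pretrained ResNet-50
    # backbone with its classifier replaced by a small regression head that
    # predicts one grasp rectangle (x, y, theta, w, h) per image.
    # Requires torchvision >= 0.13 for the `weights=` API.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_grasp_model(num_outputs: int = 5) -> nn.Module:
        model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # reuse ImageNet features
        model.fc = nn.Linear(model.fc.in_features, num_outputs)           # regression head (assumed)
        return model

    model = build_grasp_model()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # illustrative hyperparameters
    criterion = nn.MSELoss()

    def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
        """images: (N, 3, 224, 224) colour/depth-encoded crops; targets: (N, 5) grasp rectangles."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        return loss.item()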

    Perception and manipulation for robot-assisted dressing

    Assistive robots have the potential to provide tremendous support for disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during robot-assisted dressing for (simulated) impaired users with limited upper-body movement capability, and finally a pipeline for robot-assisted dressing for (simulated) paralyzed users who have lost the ability to move their limbs. First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points. To reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data. Unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and clothes. A probabilistic real-time tracking method is proposed using Bayesian networks in latent spaces, which fuses multi-modal sensor information. The latent spaces are created before dressing by modeling the user's movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables the Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments. Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented. The robot grasps a hospital gown naturally hung on a rail and moves around the bed to complete the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, this thesis proposes to update the simulated garment with more realistic physical property values. This is achieved by measuring physical similarity in the latent space using a contrastive loss, which maps physically similar examples to nearby points.
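    The final contribution measures physical similarity between garments in a latent space trained with a contrastive loss. As a rough illustration of that idea only, the snippet below implements the standard pairwise contrastive loss, in which physically similar pairs are pulled together and dissimilar pairs are pushed beyond a margin; the encoder, margin value, and pairing scheme are assumptions rather than the thesis's exact formulation.

    # Standard pairwise contrastive loss: similar pairs (label = 1.0) are pulled
    # together in the latent space, dissimilar pairs (label = 0.0) are pushed
    # apart by at least `margin`. The margin is a placeholder value.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(z_a: torch.Tensor,
                         z_b: torch.Tensor,
                         similar: torch.Tensor,
                         margin: float = 1.0) -> torch.Tensor:
        """z_a, z_b: (N, D) latent embeddings of paired garment examples;
        similar: (N,) float tensor, 1.0 if the pair is physically similar."""
        dist = F.pairwise_distance(z_a, z_b)
        pos = similar * dist.pow(2)
        neg = (1.0 - similar) * F.relu(margin - dist).pow(2)
        return 0.5 * (pos + neg).mean()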

    White paper - Agricultural Robotics: The Future of Robotic Agriculture

    Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry, and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions, and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a wave 2 Industrial Challenge Fund Investment (“Transforming Food Production: from Farm to Fork”). RAS and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, here we review the state of the art of the application of RAS in Agri-Food production and explore research and innovation needs to ensure novel advanced robotic and autonomous systems reach their full potential and deliver the necessary impacts. The opportunities for RAS include: the development of field robots that can assist workers by carrying weights and conducting agricultural operations such as crop and animal sensing, weeding and drilling; the integration of autonomous system technologies into existing farm operational equipment such as tractors; robotic systems to harvest crops and conduct complex dextrous operations; the use of collaborative and “human in the loop” robotic applications to augment worker productivity; and advanced robotic applications, including the use of soft robotics, to drive productivity beyond the farm gate into the factory and retail environment. RAS technology has the potential to transform food production, and the UK has the potential to establish global leadership within the domain. However, there are particular barriers to overcome to secure this vision:
    1. The UK RAS community with an interest in Agri-Food is small and highly dispersed. There is an urgent need to defragment and then expand the community.
    2. The UK RAS community has no specific training paths or Centres for Doctoral Training to provide trained human resource capacity within Agri-Food.
    3. While there has been substantial government investment in translational activities at high Technology Readiness Levels (TRLs), there is insufficient ongoing basic research in Agri-Food RAS at low TRLs to underpin onward innovation delivery for industry.
    4. There is a concern that RAS for Agri-Food is not realising its full potential, as the projects being commissioned currently are too few and too small-scale. RAS challenges often involve the complex integration of multiple discrete technologies (e.g. navigation, safe operation, multimodal sensing, automated perception, grasping and manipulation). There is a need to further develop these discrete technologies but also to deliver large-scale industrial applications that resolve integration and interoperability issues. The UK community needs to undertake a few well-chosen large-scale and collaborative “moon shot” projects.
    5. The successful delivery of RAS projects within Agri-Food requires close collaboration between the RAS community and academic and industry practitioners. For example, the breeding of crops with novel phenotypes, such as fruits which are easy for robots to see and pick, may simplify and accelerate the application of RAS technologies.
    Therefore, there is an urgent need to seek new ways to create RAS and Agri-Food domain networks that can work collaboratively to address key challenges. This is especially important for Agri-Food since success in the sector requires highly complex cross-disciplinary activity. Furthermore, within UKRI most of the Research Councils (EPSRC, BBSRC, NERC, STFC, ESRC and MRC) and Innovate UK directly fund work in Agri-Food, but as yet there is no coordinated and integrated Agri-Food research policy per se. Our vision is a new generation of smart, flexible, robust, compliant, interconnected robotic systems working seamlessly alongside their human co-workers in farms and food factories. Teams of multi-modal, interoperable robotic systems will self-organise and coordinate their activities with the “human in the loop”. Electric farm and factory robots with interchangeable tools, including low-tillage solutions, novel soft robotic grasping technologies and sensors, will support the sustainable intensification of agriculture, drive manufacturing productivity and underpin future food security. To deliver this vision, the research and innovation needs include the development of robust robotic platforms suited to agricultural environments, and improved capabilities for sensing and perception, planning and coordination, manipulation and grasping, learning and adaptation, interoperability between robots and existing machinery, and human-robot collaboration, including the key issues of safety and user acceptance. Technology adoption is likely to occur in measured steps. Most farmers and food producers will need technologies that can be introduced gradually, alongside and within their existing production systems. Thus, for the foreseeable future, humans and robots will frequently operate collaboratively to perform tasks, and that collaboration must be safe. There will be a transition period in which humans and robots work together as first simple and then more complex parts of the work are conducted by robots, driving productivity and enabling human jobs to move up the value chain.

    Agricultural Robotics: The Future of Robotic Agriculture
