    Grasping Robot Integration and Prototyping: The GRIP Software Framework

    Real-Time Pressure Estimation and Localisation with Optical Tomography-inspired Soft Skin Sensors

    Virtual Reality based Telerobotics Framework with Depth Cameras

    This work describes a virtual reality (VR)-based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on a slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improve task performance. We experimentally compared the operator's ability to understand the remote environment in different visualization modes: a single external static camera, an in-hand camera, in-hand and external static cameras combined, and an in-hand camera with OctoMap occupancy mapping. The last option provided the operator with the best understanding of the remote environment whilst requiring relatively little communication bandwidth. Consequently, we propose grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E
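    The bandwidth advantage of occupancy mapping comes from transmitting a compact set of occupied cells instead of raw depth frames or point clouds. As a minimal sketch of that idea (not the framework's actual implementation, which uses OctoMap's octree with free-space ray casting), the Python snippet below back-projects a depth image with assumed pinhole intrinsics and quantises the points into a sparse set of occupied voxels; all parameter values are illustrative.

        import numpy as np

        def depth_to_points(depth, fx, fy, cx, cy):
            """Back-project a depth image (metres) into camera-frame 3D points (pinhole model)."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) * depth / fx
            y = (v - cy) * depth / fy
            pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
            return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

        def occupied_voxels(points, voxel=0.05):
            """Quantise points into a sparse set of occupied voxel indices.
            Unlike OctoMap, this sketch skips the octree and free-space updates."""
            return {tuple(idx) for idx in np.floor(points / voxel).astype(int)}

        # Illustrative use with a synthetic 1 m flat depth image; intrinsics are assumptions.
        depth = np.full((480, 640), 1.0)
        pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
        occ = occupied_voxels(pts, voxel=0.05)
        print(len(occ), "occupied voxels")  # far fewer values than 480*640 depth pixels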

    Robotic Grasping Of Unknown Objects in Real-world Conditions

    Grasping arbitrary objects without prior knowledge of their properties (e.g. location, shape, mass distribution) is a crucial requirement for robots to operate autonomously in most environments. One core component of autonomous grasping is estimating the pose at which the robot’s end-effector can grasp the object robustly. For this reason, a growing number of works, referred to as grasp planning methods, have been proposed in the literature to infer appropriate grasp poses from sensor data, especially from commodity RGB-D sensors, using analytical models or Machine Learning. While recent works typically aim to improve the quality and reliability of grasp planning algorithms, how to deploy, evaluate and generalise them to different robotic setups remains an open challenge in the community. The main contributions of this thesis are therefore tools and methods that improve the generalisation and re-usability of grasp planning methods across a broader range of setups. First, this thesis presents the Grasping Robot Integration & Prototyping (GRIP) framework, a system for rapid and intuitive prototyping of grasping and manipulation tasks involving multiple integrated components, such as grasp planning algorithms. A user study demonstrates that this graphical software framework decreases the effort required to deploy robotic components, even for naive users. Second, we propose generic protocols to evaluate different aspects of grasp planning algorithms. One is dedicated to quantifying their capability to cope with changes in camera pose and workspace size (two variables that are likely to differ between robotic setups). In addition, we propose a novel, generic stratified evaluation of the performance of grasp planning algorithms. Coupled with relevant statistical methods, the presented approach is shown to yield robust analyses of benchmark results, such as ranking a set of grasp planning algorithms or studying their affinity to specific objects. Finally, this thesis presents a new training process, based on parametric 3D geometric shapes (superquadrics), that, unlike state-of-the-art methods, allows a Deep Learning architecture to learn features yielding more reliable grasp configurations when the input data consists of sparse and noisy point clouds. We show that our approach makes the model more generic with respect to the resolution of the input point clouds.
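    As a rough illustration of training on parametric superquadrics (a sketch under assumed parameters, not the thesis's actual data-generation pipeline), the snippet below samples noisy point clouds from the standard superquadric parametric form; the size parameters, shape exponents, and noise level are all illustrative.

        import numpy as np

        def spow(x, eps):
            """Signed power sign(x)*|x|**eps used in the superquadric parametric equations."""
            return np.sign(x) * np.abs(x) ** eps

        def superquadric_cloud(a1, a2, a3, e1, e2, n=2048, noise=0.0, rng=None):
            """Sample n surface points of a superquadric, optionally adding Gaussian
            noise to mimic a sparse, noisy depth-sensor point cloud."""
            rng = rng or np.random.default_rng()
            eta = rng.uniform(-np.pi / 2, np.pi / 2, n)   # latitude-like angle
            omega = rng.uniform(-np.pi, np.pi, n)         # longitude-like angle
            x = a1 * spow(np.cos(eta), e1) * spow(np.cos(omega), e2)
            y = a2 * spow(np.cos(eta), e1) * spow(np.sin(omega), e2)
            z = a3 * spow(np.sin(eta), e1)
            pts = np.stack([x, y, z], axis=1)
            if noise > 0.0:
                pts += rng.normal(scale=noise, size=pts.shape)
            return pts

        # e1 = e2 = 1 gives an ellipsoid; exponents near 0 approach a box.
        cloud = superquadric_cloud(0.04, 0.04, 0.10, e1=0.3, e2=0.3, n=1024, noise=0.002)
        print(cloud.shape)  # (1024, 3): a synthetic noisy "scan" of a box-like object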

    Statistical Stratification and Benchmarking of Robotic Grasping Performance

    Sliding Mode Control of the PUMA 560 Robot

    This article presents the application of sliding mode control and investigates its effectiveness on a three-dimensional robotic manipulator model. The analysis applies a sliding mode control law to a three-degree-of-freedom PUMA 560 model through the development of a dynamic simulation model. The simulation results show the effectiveness of the proposed method for the automation of industrial applications such as assembly, machining (deburring, trimming), and surface tracking (polishing). This technique provides useful insight into the advantages of using sliding mode control laws in robotics applications.
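    For concreteness, below is a self-contained Python sketch of a textbook sliding mode controller for a single rigid link (a simplified stand-in for one PUMA 560 joint, not the authors' three-degree-of-freedom simulation); the plant parameters, gains, and boundary-layer saturation are all assumptions.

        import numpy as np

        # Single-link dynamics: I*qdd + m*g*l*sin(q) = u, a stand-in for one joint.
        m, l, g = 1.0, 0.5, 9.81      # illustrative mass, length, gravity
        I = m * l ** 2
        lam, K, phi = 8.0, 5.0, 0.05  # surface slope, switching gain, boundary layer
        dt, T = 1e-3, 3.0

        def ref(t):
            """Desired trajectory q_d = sin(t) with its first two derivatives."""
            return np.sin(t), np.cos(t), -np.sin(t)

        q, qd = 0.5, 0.0              # initial state, away from the reference
        for k in range(int(T / dt)):
            qr, qdr, qddr = ref(k * dt)
            e, ed = q - qr, qd - qdr
            s = ed + lam * e          # sliding surface s = de/dt + lam*e
            # Equivalent control plus a saturated switching term (sat limits chattering)
            u = I * (qddr - lam * ed) + m * g * l * np.sin(q) \
                - K * np.clip(s / phi, -1.0, 1.0)
            qdd = (u - m * g * l * np.sin(q)) / I   # plant response
            qd += qdd * dt
            q += qd * dt
        print(f"final tracking error: {q - np.sin(T):+.4f} rad")

    Inside the boundary layer the controller behaves like a high-gain PD law, trading a small steady-state band for reduced chattering; this is the usual practical compromise when implementing sliding mode control on real actuators.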

    A Suite of Robotic Solutions for Nuclear Waste Decommissioning

    Dealing safely with nuclear waste is an imperative for the nuclear industry. Increasingly, robots are being developed to carry out complex tasks such as perceiving, grasping, cutting, and manipulating waste. Radioactive material can be sorted, and either stored safely or disposed of appropriately, entirely through the actions of remotely controlled robots. Radiological characterisation is also critical during the decommissioning of nuclear facilities. It involves the detection and labelling of radiation levels, waste materials, and contaminants, as well as determining other related parameters (e.g., thermal and chemical), with the data visualised as 3D scene models. This paper provides an overview of work by researchers at the QMUL Centre for Advanced Robotics (ARQ), a partner in the UK EPSRC National Centre for Nuclear Robotics (NCNR), a consortium developing radiation-hardened robots fit to handle nuclear waste. Three areas of nuclear-related research are covered here: human–robot interfaces for remote operations, sensor delivery, and intelligent robotic manipulation.