
    Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets

    Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal Human-Robot Interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, by using, for example, object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on solving lack of visibility and partial occlusions. The solution has been validated during real interventions at the European Organization for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in supervised mode. The system increases the safety and efficiency of robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the workload and accumulated stress of the operator, enhancing the performance and safety of the mission.
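    The abstract does not detail how depth is recovered from a single camera. As a minimal illustration of how a monocular on-hand camera can estimate depth to a target of known physical size, here is a pinhole-model sketch; the function name and all numeric values are illustrative assumptions, not taken from the paper.

```python
# Minimal pinhole-camera depth sketch: if the physical width of a tracked
# target is known, its depth follows from the width it projects to in pixels.
# All names and values are illustrative assumptions, not from the paper.

def depth_from_known_width(focal_px: float, real_width_m: float,
                           pixel_width: float) -> float:
    """Estimate depth Z (metres) of a target of known real width.

    Pinhole model: pixel_width = focal_px * real_width_m / Z
    =>             Z = focal_px * real_width_m / pixel_width
    """
    if pixel_width <= 0:
        raise ValueError("target not visible (zero pixel width)")
    return focal_px * real_width_m / pixel_width

# Example: a 0.10 m wide metallic target seen 80 px wide by a camera
# with a 600 px focal length sits roughly 0.75 m away.
z = depth_from_known_width(focal_px=600.0, real_width_m=0.10, pixel_width=80.0)
print(f"estimated depth: {z:.2f} m")  # -> 0.75 m
```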

    Affordance-based control of a variable-autonomy telerobot

    Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version. Includes bibliographical references (pages 37-38).

    Most robot platforms operate in one of two modes: full autonomy, usually in the lab; or low-level teleoperation, usually in the field. Full autonomy is currently realizable only in narrow domains of robotics, like mapping an environment. Tedious teleoperation/joystick control is typical in military applications, like complex manipulation and navigation with bomb-disposal robots. This thesis describes a robot "surrogate" with an intermediate and variable level of autonomy. The robot surrogate accomplishes manipulation tasks by taking guidance and planning suggestions from a human "supervisor." The surrogate does not engage in high-level reasoning, but only in intermediate-level planning and low-level control. The human supervisor supplies the high-level reasoning and some intermediate control, leaving execution details to the surrogate. The supervisor supplies world knowledge and planning suggestions by "drawing" on a 3D view of the world constructed from sensor data. The surrogate conveys its own model of the world to the supervisor, enabling mental-model sharing between supervisor and surrogate. The contributions of this thesis include: (1) a novel partitioning of the manipulation task load between supervisor and surrogate, which sidesteps problems in autonomous robotics by replacing them with problems in interfaces, perception, planning, control, and human-robot trust; and (2) the algorithms and software designed and built for mental-model sharing and supervisor-assisted manipulation. Using this system, we are able to command the PR2 to manipulate simple objects incorporating either a single revolute or prismatic joint. By Michael Fleder, M.Eng.
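    The abstract describes the supervisor/surrogate interface only at a high level. As a purely hypothetical sketch of what one "drawn" planning suggestion might look like as a data structure, assuming fields the thesis does not publish:

```python
# Hypothetical sketch of a supervisor-to-surrogate planning suggestion, to
# make the task partitioning above concrete. The thesis does not publish
# this interface; every field here is an illustrative assumption.
from dataclasses import dataclass
from typing import Literal

@dataclass
class PlanningSuggestion:
    """One 'drawn' annotation on the shared 3D world view."""
    object_id: str                                 # object in the shared model
    joint_type: Literal["revolute", "prismatic"]   # simple 1-DOF articulation
    grasp_point: tuple[float, float, float]        # suggested grasp, world frame (m)
    motion_direction: tuple[float, float, float]   # pull/turn direction hint

# The surrogate would fill in execution details (inverse kinematics,
# collision-free paths) for a hint like this; the supervisor keeps the
# high-level reasoning, matching the partitioning described above.
hint = PlanningSuggestion("drawer_1", "prismatic",
                          (0.6, 0.1, 0.8), (1.0, 0.0, 0.0))
```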

    Real-Time Pedestrian Detection Based on Faster HOG/DPM and a Deep Learning Approach

    The work presented aims to show the feasibility of scientific and technological concepts in embedded vision dedicated to the extraction of image characteristics, allowing the detection, recognition, and localization of objects. Object and pedestrian detection are carried out by two methods: 1. A classical image processing approach, improved with Histogram of Oriented Gradients (HOG) and Deformable Part Model (DPM) based detection and pattern recognition; we present how we have improved the HOG/DPM approach to make pedestrian detection a real-time task by reducing calculation time, and the developed approach not only detects pedestrians but also calculates the distance between pedestrians and the vehicle. 2. Pedestrian detection based on Artificial Intelligence (AI) approaches such as Deep Learning (DL). This work was first validated on a closed circuit and subsequently under real traffic conditions on mobile platforms (mobile robot, drone, and vehicles). Several tests were carried out in the city centre of Rouen in order to validate the developed platform.
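    For readers unfamiliar with the classical baseline, here is a minimal OpenCV sketch of HOG-based pedestrian detection combined with a pinhole-model distance estimate. It is a generic illustration, not the authors' accelerated HOG/DPM pipeline; the focal length, pedestrian height, and file names are assumptions.

```python
# Baseline HOG pedestrian detector with a pinhole-model distance estimate.
# Generic OpenCV sketch of the classical approach the abstract builds on;
# the focal length and pedestrian height below are assumed values.
import cv2

FOCAL_PX = 700.0           # assumed camera focal length in pixels
PEDESTRIAN_HEIGHT_M = 1.7  # assumed average pedestrian height

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street.jpg")  # one frame from the vehicle camera
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    # Pinhole model: distance ~ focal_px * real_height / pixel_height.
    distance_m = FOCAL_PX * PEDESTRIAN_HEIGHT_M / h
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"{distance_m:.1f} m", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("detections.jpg", frame)
```

    The distance line mirrors, for a single calibrated camera, the pedestrian-to-vehicle distance computation the abstract mentions; the authors' speed-ups to HOG/DPM itself are not reproduced here.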

    Object recognition applied to mobile robotics

    Investigation of the capabilities of current object detection and recognition methods. Adaptation of the best selected method (MOPED) to solve the tasks of the "Robocup @Home" domestic robot competition with the "REEM" robot from the company "PAL Robotics".

    Direct Visual Servoing for Grasping Using Depth Maps

    Visual servoing is extremely helpful for many applications such as tracking objects, controlling the position of end-effectors, and grasping, and it has proven useful in industrial sites, academic projects, and research. It remains a challenging task in robotics, and research has been done to address and improve the methods used for servoing, and for the grasping application in particular. Our goal is to use visual servoing to control the end-effector of a robotic arm, bringing it to a grasping position for the object of interest. Gaining knowledge about depth has always been a major, yet necessary, challenge for visual servoing. Depth was either assumed to be available from a 3D model or estimated using stereo vision or other methods; this process is computationally expensive, and the results can be inaccurate because of their sensitivity to environmental conditions. Depth maps have recently become more widely used by researchers, as they are an easy, fast, and cheap way to capture depth information. This solved the problems faced in estimating the needed 3D information, but the developed algorithms were only successful starting from small initial errors. An effective position controller capable of reaching the target location starting from large initial errors is needed.

    The thesis presented here uses Kinect depth maps to directly control a robotic arm to reach a grasping location specified by a target image. The algorithm consists of a two-phase controller: the first phase is a feature-based approach that provides a coarse alignment with the target image, resulting in relatively small errors; the second phase is a control scheme that minimizes the difference between the depth maps of the current and target images. This controller allows the system to achieve minimal steady-state errors in translation and rotation starting from a relatively small initial error.

    To test the system's effectiveness, several experiments were conducted. The experimental setup consists of the Barrett WAM robotic arm with a Microsoft Kinect camera mounted on it in an eye-in-hand configuration. A defined goal scene taken from the grasping position is given to the system, whose controller drives it to the target position starting from any initial condition. Our system outperforms previous work on this subject: it functions successfully even with large initial errors, which is achieved by preceding the main control algorithm with a coarse image alignment via feature-based control. Automating the system further, by automatically detecting the best grasping position and making that location the robot's target, would be a logical extension to improve and complete this work.
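    The second-phase controller is described as minimizing the difference between current and target depth maps. A minimal sketch of that idea, using the classical visual-servoing law v = -λ J⁺ e, follows; the thesis' actual interaction matrix and gains are not reproduced, and the `jacobian` argument is assumed to be supplied analytically or estimated numerically around the current pose.

```python
# Conceptual sketch of the second-phase controller: servo the camera by
# driving the per-pixel depth-map residual to zero with the classical
# visual-servoing law v = -gain * pinv(J) @ e. Illustrative only; the
# Jacobian construction and the gain value are assumptions.
import numpy as np

def depth_error(current: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Stacked per-pixel depth residual, ignoring invalid (zero) pixels."""
    valid = (current > 0) & (target > 0)
    e = np.zeros_like(current, dtype=np.float64)
    e[valid] = current[valid] - target[valid]
    return e.ravel()

def velocity_command(current: np.ndarray, target: np.ndarray,
                     jacobian: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """6-DOF camera velocity (vx, vy, vz, wx, wy, wz) reducing the residual.

    `jacobian` maps camera velocity to the change in the stacked depth
    residual (shape: n_pixels x 6).
    """
    e = depth_error(current, target)
    return -gain * (np.linalg.pinv(jacobian) @ e)
```

    Phase one's feature-based coarse alignment matters precisely because a linearized law like this converges reliably only when the initial depth-map error is already small, which matches the abstract's account.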