252 research outputs found

    Vision-based Global Path Planning and Trajectory Generation for Robotic Applications in Hazardous Environments

    The aim of this study is to find an efficient global path planning algorithm and trajectory generation method using a vision system. Path planning is part of the more general navigation function of mobile robots and consists of establishing an obstacle-free path from the initial pose to the target pose in the robot workspace. In this thesis, special emphasis is placed on robotic applications in industrial and scientific infrastructure environments that are hazardous and inaccessible to humans, such as nuclear power plants, ITER, and the CERN LHC tunnel. Nuclear radiation causes severe damage to the human body, yet nuclear energy remains necessary to meet the demand for energy resources; the research and development of automated transfer robots and manipulators for nuclear environments is therefore regarded as a key technology by many countries. Robotic applications in radiation environments minimize the danger of radiation exposure to humans, but the robots themselves are also vulnerable to radiation, and mobility and maneuverability in such environments are essential to task success. An efficient obstacle-free path and trajectory generation method is therefore needed to find a safe path with maximum bounded velocities in radiation environments. High-degree-of-freedom manipulators and maneuverable mobile robots with steerable wheels, such as non-holonomic omni-directional mobile robots, are well suited to inspection and maintenance tasks where the camera is the only source of visual feedback. In this thesis, a novel vision-based path planning method is presented that combines the artificial potential field, visual servoing concepts, and CAD-based recognition to address path and trajectory planning. Unlike most conventional trajectory planning methods, which treat the robot as a single point, the entire shape of the mobile robot is considered by taking all of its relevant points into account when avoiding obstacles. The vision-based algorithm generates synchronized trajectories for all wheels of an omni-directional mobile robot and uses the robot's kinematic variables to plan maximum allowable velocities so that at least one of the actuators is always operating at its maximum velocity. The synchronized trajectories avoid slippage and misalignment in both translation and rotation. The proposed method is further extended to a vision-based path coordination method for multiple mobile robots with independently steerable wheels, avoiding mutual collisions as well as stationary obstacles. The results of this research have been published as a new solution for path and trajectory generation in hazardous environments inaccessible to humans, where a single camera is the only source of visual feedback.
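    A minimal sketch of the velocity-synchronization idea described above, assuming a generic omni-directional platform: wheel speeds are obtained from a desired body twist through a wheel Jacobian and rescaled by a common factor so that the fastest wheel runs exactly at the actuator limit. All names and the Jacobian shape are illustrative assumptions, not the thesis's implementation:

    ```python
    import numpy as np

    def synchronized_wheel_speeds(v_body, wheel_jacobian, wheel_vel_max):
        """Scale wheel speeds so at least one actuator runs at its maximum velocity.

        v_body         : desired body twist [vx, vy, omega] (hypothetical input)
        wheel_jacobian : (n_wheels x 3) matrix mapping the body twist to wheel speeds
        wheel_vel_max  : scalar actuator speed limit, same units as the wheel speeds
        """
        wheel_vels = wheel_jacobian @ np.asarray(v_body, dtype=float)
        fastest = np.max(np.abs(wheel_vels))
        if fastest < 1e-9:          # standstill: nothing to scale
            return wheel_vels
        # A common scale factor preserves the wheel-speed ratios (and hence the
        # path shape) while saturating the fastest wheel exactly at the limit.
        return wheel_vels * (wheel_vel_max / fastest)
    ```

    Because every wheel command is scaled by the same factor, the wheels stay synchronized, which is the stated mechanism for avoiding slippage and misalignment.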

    Vision based leader-follower formation control for mobile robots

    Creating systems with multiple autonomous vehicles places severe demands on the design of control schemes. Robot formation control plays a vital role in coordinating such robots. As the number of members in a system rises, the complexity of each member increases, with a proportional increase in the quantity and complexity of onboard sensing, control, and computation. This thesis investigates the control of a group of mobile robots, consisting of a leader and several followers, to maintain a desired geometric formation --Abstract, page iii
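    To make the leader-follower objective concrete, the sketch below shows a generic distance-bearing controller for a unicycle-type follower; the state layout, gains, and control law are textbook-style assumptions for illustration, not the scheme developed in the thesis:

    ```python
    import numpy as np

    def wrap(a):
        """Wrap an angle to [-pi, pi)."""
        return (a + np.pi) % (2.0 * np.pi) - np.pi

    def follower_command(leader_pose, follower_pose, d_des, phi_des, k_d=1.0, k_phi=1.0):
        """Proportional distance-bearing controller for a unicycle-type follower.

        leader_pose, follower_pose : (x, y, theta) in a common world frame
        d_des, phi_des             : desired separation and bearing that define the
                                     follower's slot in the formation
        Returns (v, omega) for the follower.
        """
        xl, yl, _ = leader_pose
        xf, yf, thf = follower_pose
        dx, dy = xl - xf, yl - yf
        d = np.hypot(dx, dy)                      # current separation
        phi = wrap(np.arctan2(dy, dx) - thf)      # bearing to the leader, follower frame
        v = k_d * (d - d_des) * np.cos(phi)       # close the separation error
        omega = k_phi * wrap(phi - phi_des)       # rotate toward the desired bearing
        return v, omega
    ```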

    Teleoperating a mobile manipulator and a free-flying camera from a single haptic device

    The paper presents a novel teleoperation system that allows the simultaneous and continuous command of a ground mobile manipulator and a free-flying camera, implemented with a UAV, from which the operator can monitor the task execution in real time. The proposed decoupled position and orientation workspace mapping allows a complex robot with an unbounded workspace to be teleoperated from a single haptic device with a bounded workspace. When the operator reaches the position and orientation boundaries of the haptic workspace, linear and angular velocity components are added to the inputs of the mobile manipulator and the flying camera, respectively. A user study in a virtual environment was conducted to evaluate the performance and the workload on the user before and after proper training. Analysis of the data shows that the system's complexity is not an obstacle to efficient performance. This is a first step towards the implementation of a teleoperation system with a real mobile manipulator and a low-cost quadrotor as the free-flying camera.
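    The boundary behaviour described above can be sketched as a hybrid position/rate mapping: inside the haptic workspace the device position is mapped directly, and once the device reaches the boundary a velocity proportional to the excursion is integrated into the remote command. The radius, gain, and state handling below are assumptions for illustration, not the paper's implementation:

    ```python
    import numpy as np

    def hybrid_position_rate(p_haptic, drift, r_boundary, k_rate, dt):
        """One update of a bounded-to-unbounded workspace mapping (illustrative only).

        p_haptic   : current haptic device position (3-vector), bounded workspace
        drift      : accumulated rate-control offset of the remote robot (3-vector)
        r_boundary : radius of the pure position-control region
        k_rate     : gain turning boundary excursion into remote velocity
        dt         : control period in seconds
        Returns (remote_target, updated_drift).
        """
        p = np.asarray(p_haptic, dtype=float)
        dist = np.linalg.norm(p)
        if dist > r_boundary:
            # Beyond the boundary: add a velocity term so the remote robot keeps
            # moving even though the haptic device cannot travel any further.
            direction = p / dist
            drift = drift + k_rate * (dist - r_boundary) * dt * direction
        return drift + p, drift
    ```

    An analogous angular mapping would handle the orientation boundary for the flying camera.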

    Visual control through narrow passages for an omnidirectional wheeled robot

    Robotic systems are gradually replacing human intervention in dangerous facilities to improve human safety and prevent risky situations. In this domain, our work addresses the problem of autonomously crossing narrow passages in a semi-structured (i.e., partially known) environment. In particular, we focus on CERN's Super Proton Synchrotron particle accelerator, where a mobile robot platform is equipped with a lightweight arm to perform measurement, inspection, and maintenance operations. The proposed approach leverages an image-based visual servoing strategy that exploits computer vision to detect and track known geometries defining narrow passage gates. The effectiveness of the proposed approach has been demonstrated in a realistic mock-up.
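    The image-based visual servoing loop mentioned above can be illustrated with the classic point-feature control law v = -lambda * L^+ (s - s*). The feature choice, depth estimates, and gain below are generic textbook assumptions rather than the implementation used on the SPS robot:

    ```python
    import numpy as np

    def ibvs_camera_twist(points, points_des, depths, lam=0.5):
        """One image-based visual servoing step on normalized point features.

        points     : list of current (x, y) normalized image coordinates
        points_des : list of desired (x, y) coordinates (the reference gate view)
        depths     : estimated depth Z of each point
        Returns a 6-DoF camera velocity [vx, vy, vz, wx, wy, wz].
        """
        L_rows, error = [], []
        for (x, y), (xd, yd), Z in zip(points, points_des, depths):
            # Standard interaction-matrix rows for a normalized image point.
            L_rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
            L_rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
            error.extend([x - xd, y - yd])
        L = np.array(L_rows)
        e = np.array(error)
        return -lam * np.linalg.pinv(L) @ e   # drives features toward the reference
    ```

    Tracking the known gate geometry supplies the current feature positions, while the reference view of the gate supplies the desired ones.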