
    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor lighting conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization and tracking capabilities. In this paper, we describe the integration of a vision system in the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and return a reliable pose estimate even in case of partial pipe visibility. Experiments in an outdoor water pool under different lighting conditions show that the adopted algorithmic approach allows detection of target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
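
    The tracking step can be illustrated with a minimal alpha-beta filter sketch in Python. The class name, gains, and the scalar state below are illustrative assumptions, not the MARIS implementation, which tracks pipe edge parameters.

```python
# Minimal alpha-beta filter sketch for tracking a scalar edge parameter
# (e.g. one coefficient of a detected pipe edge line). Gains and state
# layout are illustrative assumptions, not the MARIS implementation.

class AlphaBetaFilter:
    def __init__(self, alpha=0.85, beta=0.005, dt=1.0 / 30.0):
        self.alpha = alpha    # position correction gain
        self.beta = beta      # velocity correction gain
        self.dt = dt          # time between camera frames
        self.x = None         # estimated edge parameter
        self.v = 0.0          # estimated rate of change

    def update(self, measurement):
        if self.x is None:            # initialize on first observation
            self.x = measurement
            return self.x
        # Predict forward one frame, then correct with the residual.
        x_pred = self.x + self.v * self.dt
        r = measurement - x_pred
        self.x = x_pred + self.alpha * r
        self.v += (self.beta / self.dt) * r
        return self.x

    def predict(self):
        # With no fresh edge measurement (partial pipe visibility),
        # coast on the constant-velocity model to keep a pose estimate.
        if self.x is not None:
            self.x += self.v * self.dt
        return self.x
```

    When an edge is temporarily occluded, calling predict() instead of update() keeps an estimate available, which is the robustness property the abstract describes.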

    Nuclear Environments Inspection with Micro Aerial Vehicles: Algorithms and Experiments

    In this work, we address the estimation, planning, control and mapping problems to allow a small quadrotor to autonomously inspect the interior of hazardous damaged nuclear sites. These algorithms run onboard on a computationally constrained CPU. We investigate the effect of varying illumination on system performance. To the best of our knowledge, this is the first fully autonomous system of this size and scale applied to inspect the interior of a full-scale mock-up of a Primary Containment Vessel (PCV). The proposed solution opens up new ways to inspect nuclear reactors and to support nuclear decommissioning, which is well known to be a dangerous, long and tedious process. Experimental results under varying illumination conditions show the ability to navigate a full-scale mock-up PCV pedestal and create a map of the environment while concurrently avoiding obstacles.

    Autonomous Underwater Intervention: Experimental Results of the MARIS Project

    Simetti, E.; Wanderlingh, F.; Torelli, S.; Bibuli, M.; Odetti, A.; Bruzzone, G.; Lodi Rizzini, D.; Aleotti, J.; Palli, G.; Moriello, L.; Scarcia, U.

    Toward Future Automatic Warehouses: An Autonomous Depalletizing System Based on Mobile Manipulation and 3D Perception

    This paper presents a mobile manipulation platform designed for autonomous depalletizing tasks. The proposed solution integrates machine vision, control and mechanical components to increase flexibility and ease of deployment in industrial environments such as warehouses. A collaborative robot mounted on a mobile base is proposed, equipped with a simple manipulation tool and a 3D in-hand vision system that detects parcel boxes on a pallet; the boxes are pulled one by one onto the mobile base for transportation. The robot setup avoids the cumbersome implementation of pick-and-place operations, since it does not require lifting the boxes. The 3D vision system is used to provide an initial estimate of the pose of the boxes on the top layer of the pallet, and to accurately detect the separation between the boxes for manipulation. Force measurements provided by the robot, together with admittance control, are exploited to verify the correct execution of the manipulation task. The proposed system was implemented and tested in a simplified laboratory scenario, and the results of experimental trials are reported.
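
    The admittance control idea can be illustrated with a single-axis sketch: the measured contact force drives a virtual mass-damper-spring whose output offsets the position command. All names and parameter values below are assumptions for illustration, not the paper's implementation or tuning.

```python
# Single-axis admittance control sketch: M*a + D*v + K*x = f_ext, where
# x is a compliant offset added to the position reference. Parameters
# are illustrative assumptions, not the paper's tuning.

class Admittance1D:
    def __init__(self, M=2.0, D=40.0, K=100.0, dt=0.002):
        self.M, self.D, self.K, self.dt = M, D, K, dt
        self.x = 0.0   # compliant position offset [m]
        self.v = 0.0   # offset velocity [m/s]

    def step(self, f_ext, x_ref):
        """One control cycle: integrate the virtual dynamics and
        return the compliant position command."""
        a = (f_ext - self.D * self.v - self.K * self.x) / self.M
        self.v += a * self.dt
        self.x += self.v * self.dt
        return x_ref + self.x
```

    Monitoring the measured force while pulling a box can then flag a failed manipulation: if the contact force stays near zero during the pull, the tool is likely not engaged with the parcel.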

    Underwater intervention robotics: An outline of the Italian national project Maris

    The Italian national project MARIS (Marine Robotics for Interventions) pursues the strategic objective of studying, developing, and integrating technologies and methodologies enabling the development of autonomous underwater robotic systems employable for intervention activities. Such activities are becoming progressively more typical for the underwater offshore industry, for search-and-rescue operations, and for underwater scientific missions. Within this ambitious objective, the project consortium also intends to demonstrate the achievable operational capabilities at a proof-of-concept level by integrating the results into prototype experimental systems.

    Visualization of AGV in Virtual Reality and Collision Detection with Large Scale Point Clouds

    Virtual reality (VR) will play an important role in the factory of the future. In this paper, an immersive and interactive VR system is presented for 3D visualization of automated guided vehicles (AGVs) moving in a warehouse. The environment model consists of a large-scale point cloud obtained through a Terrestrial Laser Scanning (TLS) survey. Realistic AGV animation is achieved thanks to the extraction of an accurate model of the ground. Visualization of AGV safety zones is also supported. Moreover, the system enables real-time collision detection between the 3D vehicle model and the point cloud model of the environment, which is useful for checking the feasibility of a specified vehicle path. Efficient techniques for dynamic loading of massive point cloud data have been developed to speed up rendering and collision detection. The VR system can be used to assist the design of automated warehouses and to show customers what their future industrial plant would look like.
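
    A minimal illustration of vehicle-versus-point-cloud collision checking follows, assuming a k-d tree over the environment cloud and sample points on the vehicle surface. The function names and clearance threshold are hypothetical, and the paper's dynamic out-of-core loading and rendering optimizations are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch: sample points on the AGV surface at a candidate pose and test
# their clearance against a k-d tree built over the TLS point cloud.

def build_environment_index(cloud_xyz):
    """cloud_xyz: (N, 3) array of environment points in meters."""
    return cKDTree(cloud_xyz)

def in_collision(tree, vehicle_pts, pose_R, pose_t, clearance=0.05):
    """Transform (M, 3) vehicle sample points by rotation pose_R and
    translation pose_t, then check nearest-neighbor clearance [m]."""
    world_pts = vehicle_pts @ pose_R.T + pose_t
    dists, _ = tree.query(world_pts, k=1)
    return bool(np.any(dists < clearance))
```

    Checking the feasibility of a vehicle path then amounts to evaluating in_collision() at poses sampled along the trajectory.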

    A 3D Robot Self Filter for Next Best View Planning

    This paper investigates the use of a real-time self filter for a robot manipulator in next best view planning tasks. The robot is equipped with a depth sensor in eye-in-hand configuration. The goal of the next best view algorithm is to select, at each iteration, an optimal view pose for the sensor that maximizes information gain for the 3D reconstruction of a region of interest. An OpenGL-based filter is adopted that determines which pixels of the depth image are due to robot self observations. The filter was adapted to work with KinectFusion volumetric 3D reconstruction. Experiments have been performed in a real scenario. Results indicate that removing robot self observations prevents artifacts in the final 3D representation of the environment. Moreover, view poses from which the robot would occlude the target regions can be successfully avoided. Finally, it is shown that a convex-hull robot model is preferable to a tight 3D CAD model, and that the filter can be integrated with a surfel-based next best view planner with negligible overhead.
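
    The comparison at the core of such a self filter can be sketched as follows: the robot model is rendered into a depth buffer from the sensor viewpoint (e.g. with OpenGL, as in the paper), and sensor pixels at or behind the rendered robot surface are discarded. The rendering step is abstracted away below, and the function name and tolerance are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Sketch of the depth-comparison step of a robot self filter.
# sensor_depth: (H, W) measured depth in meters (NaN = no return).
# robot_depth:  (H, W) depth of the rendered robot model from the
#               sensor viewpoint; np.inf where the model is not visible.

def self_filter(sensor_depth, robot_depth, tol=0.02):
    """Invalidate pixels where the rendered robot lies at, or in front
    of, the measured surface; tol absorbs calibration and model error."""
    filtered = sensor_depth.copy()
    is_robot = robot_depth <= (sensor_depth + tol)
    filtered[is_robot] = np.nan   # drop self observations
    return filtered
```

    Using a convex hull instead of the exact CAD mesh enlarges the masked region slightly, trading a few lost pixels for robustness to calibration error, consistent with the paper's finding that the convex-hull model is preferable.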