
    Automatic Annotation of Subsea Pipelines using Deep Learning

    Regulatory requirements for sub-sea oil and gas operators mandate the frequent inspection of pipeline assets to ensure that degradation and damage remain at acceptable levels. The inspection process is usually sub-contracted to surveyors who utilize sub-sea Remotely Operated Vehicles (ROVs), launched from a surface vessel and piloted over the pipeline. ROVs capture data from various sensors/instruments which are subsequently reviewed and interpreted by human operators, creating a log of event annotations: a slow, labor-intensive and costly process. The paper presents an automatic image annotation framework that identifies/classifies key events of interest in the video footage, viz. exposure, burial, field joints, anodes, and free spans. The reported methodology utilizes transfer learning with a Deep Convolutional Neural Network (ResNet-50), fine-tuned on real-life, representative data from challenging sub-sea environments with low lighting conditions, sand agitation, sea-life and vegetation. The network outputs are configured to perform multi-label image classification for critical events. The annotation performance varies between 95.1% and 99.7% in terms of accuracy and between 90.4% and 99.4% in terms of F1-Score, depending on event type. The performance results are on a per-frame basis and corroborate the potential of the algorithm to be the foundation of an intelligent decision support framework that automates the annotation process. The solution can execute annotations in real time and is significantly more cost-effective than human-only approaches.
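
    The abstract describes transfer learning with ResNet-50 and multi-label outputs for the five event types. A minimal sketch of such a setup is given below, assuming PyTorch/torchvision; the class ordering, sigmoid threshold and optimizer settings are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Event types named in the abstract (ordering is an assumption).
    EVENT_CLASSES = ["exposure", "burial", "field_joint", "anode", "free_span"]

    # Start from an ImageNet-pretrained ResNet-50 and replace the classification head
    # with one logit per event, so several events can be flagged in the same frame.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, len(EVENT_CLASSES))

    # Multi-label training: an independent sigmoid per event rather than a softmax.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def predict_events(frame_batch: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
        """Return a boolean event mask per frame for a batch of preprocessed frames."""
        model.eval()
        with torch.no_grad():
            probs = torch.sigmoid(model(frame_batch))
        return probs > threshold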

    AI is enabling a transformation toward autonomous hydrographic operations


    A comparison of the performance of 2D and 3D convolutional neural networks for subsea survey video classification

    Utilising deep learning image classification to automatically annotate subsea pipeline video surveys can ease this tedious and labour-intensive process, resulting in significant time and cost savings. However, the classification of events in subsea survey videos (frame sequences) by models trained on individual frames has been shown to vary, leading to inaccuracies. The paper extends previous work on the automatic annotation of individual subsea survey frames by comparing the performance of 2D and 3D Convolutional Neural Networks (CNNs) in classifying frame sequences. The study explores the classification of burial, exposure, free span, field joint, and anode events. Sampling and regularization techniques are designed to address the challenges that the underwater environment poses for an inspection video dataset. Results show that a 2D CNN with a rolling average can outperform a 3D CNN, achieving an Exact Match Ratio of 85% and an F1-Score of 90%, whilst being more computationally efficient.
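
    The comparison hinges on smoothing per-frame predictions from the 2D CNN over the frame sequence. The sketch below illustrates one plausible reading of that rolling average, assuming per-frame class probabilities stored as a NumPy array; the window size and the 0.5 threshold are assumptions.

    import numpy as np

    def rolling_average(frame_probs: np.ndarray, window: int = 5) -> np.ndarray:
        """Causal moving average over per-frame probabilities of shape (num_frames, num_classes)."""
        smoothed = np.empty_like(frame_probs)
        for t in range(len(frame_probs)):
            start = max(0, t - window + 1)
            smoothed[t] = frame_probs[start:t + 1].mean(axis=0)
        return smoothed

    def exact_match_ratio(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Fraction of frames whose full multi-label prediction matches the ground truth."""
        return float(np.mean(np.all(y_true == y_pred, axis=1)))

    # Example use: threshold the smoothed probabilities to obtain per-frame event labels.
    # labels = rolling_average(frame_probs) > 0.5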

    Autonomous subsea intervention (SEAVENTION)

    This paper presents the main results and latest developments of a 4-year project called autonomous subsea intervention (SEAVENTION). In the project we have developed new methods for autonomous inspection, maintenance and repair (IMR) in subsea oil and gas operations with Unmanned Underwater Vehicles (UUVs). The results are also relevant for offshore wind, aquaculture and other industries. We discuss the trends and status of UUV-based IMR in the oil and gas industry and provide an overview of the state of the art in intervention with UUVs. We also present a 3-level taxonomy for UUV autonomy: mission-level, task-level and vehicle-level. To achieve robust 6D underwater pose estimation of objects for UUV intervention, we have developed marker-less approaches with input from 2D and 3D cameras, as well as marker-based approaches with associated uncertainty. We have carried out experiments with varying turbidity to evaluate full 6D pose estimates in challenging conditions. We have also devised a sensor autocalibration method for UUV localization. For intervention, we have developed methods for autonomous underwater grasping and a novel vision-based distance estimator. For high-level task planning, we have evaluated two frameworks for automated planning and acting (AI planning). We have implemented AI planning for subsea inspection scenarios which have been analyzed and formulated in collaboration with the industry partners. One of the frameworks, called T-REX, demonstrates reactive behavior in the face of the dynamic and potentially uncertain nature of subsea operations. We have also presented an architecture for comparing and choosing between mission plans when new mission goals are introduced.

    Autonomous Grasping Using Novel Distance Estimator

    This paper introduces a novel distance estimator using monocular vision for autonomous underwater grasping. The presented method is also applicable to topside grasping operations. The estimator is developed for robot manipulators with a monocular camera placed near the gripper. Because the camera is attached near the gripper, images can be captured from different positions while the relative position change is measured. The presented system can estimate the relative distance to an object of unknown size with good precision. The manipulator used in this work is the SeaArm-2, a fully electric, small, modular underwater manipulator. The manipulator is unique in its monocular camera integrated into the end-effector module, and its design facilitates the use of different end-effector tools. The camera is used for supervision, object detection, and tracking. The distance estimator was validated in a laboratory setting through autonomous grasping experiments. The manipulator was able to search for, find, estimate the relative distance to, grasp, and retrieve the relevant object in 12 out of 12 trials.
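
    The abstract does not spell out the estimator, but the key enabler it names is that the wrist-mounted camera can be moved by a known amount while the object's apparent size changes. The snippet below is a hypothetical illustration of that principle only, not the authors' published method: apparent size scales inversely with distance, so a measured displacement plus the observed size change yields range even for an object of unknown size.

    def estimate_distance(apparent_size_before: float,
                          apparent_size_after: float,
                          camera_displacement: float) -> float:
        """Range to the object from the first camera position.

        Apparent size (e.g. bounding-box width in pixels) scales as 1/distance,
        so s1 * d = s2 * (d - delta)  =>  d = s2 * delta / (s2 - s1).
        """
        s1, s2, delta = apparent_size_before, apparent_size_after, camera_displacement
        if s2 <= s1:
            raise ValueError("object should appear larger after moving toward it")
        return s2 * delta / (s2 - s1)

    # Example: a box growing from 80 px to 100 px after a 0.10 m approach places the
    # object at 100 * 0.10 / (100 - 80) = 0.5 m from the first camera position.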

    New Interactive Machine Learning Tool for Marine Image Analysis

    We would like to thank the Lofoten Vesterålen Ocean Observatory, and specifically Geir Pedersen, for supplying much of the data used in this study. We would also like to express gratitude for the insightful comments made during the review of this manuscript and for the efforts of the editorial team during its publication.

    Autonomous marine environmental monitoring: Application in decommissioned oil fields

    Hundreds of Oil & Gas Industry structures in the marine environment are approaching decommissioning. In most areas decommissioning operations will need to be supported by environmental assessment and monitoring, potentially over the life of any structures left in place. This requirement will have a considerable cost for industry and the public. Here we review approaches for the assessment of the primary operating environments associated with decommissioning — namely structures, pipelines, cuttings piles, the general seabed environment and the water column — and show that already available marine autonomous systems (MAS) offer a wide range of solutions for this major monitoring challenge. Data of direct relevance to decommissioning can be collected using acoustic, visual, and oceanographic sensors deployed on MAS. We suggest that there is considerable potential for both cost savings and a substantial improvement in the temporal and spatial resolution of environmental monitoring. We summarise the trade-offs between MAS and current conventional approaches to marine environmental monitoring. MAS have the potential to carry out much of the monitoring associated with decommissioning successfully and to offer viable alternatives where a direct match for the conventional approach is not possible.

    Detection-driven exposure-correction network for nighttime drone-view object detection.

    Drone-view object detection (DroneDet) models typically suffer a significant performance drop when applied to nighttime scenes. Existing solutions attempt to employ an exposure-adjustment module to reveal objects hidden in dark regions before detection. However, most exposure-adjustment models are optimized only for human perception, so the exposure-adjusted images do not necessarily improve recognition. To tackle this issue, we propose a novel Detection-driven Exposure-Correction network for nighttime DroneDet, called DEDet. DEDet conducts adaptive, non-linear adjustment of pixel values in a spatially fine-grained manner to generate DroneDet-friendly images. Specifically, we develop a Fine-grained Parameter Predictor (FPP) to estimate pixel-wise parameter maps for the image filters. These filters, along with the estimated parameters, are used to adjust the pixel values of the low-light image according to the non-uniform illumination of drone-captured images. To learn the non-linear transformation from the original nighttime images to their DroneDet-friendly counterparts, we propose a Progressive Filtering module that applies recursive filters to iteratively refine the exposed image. Furthermore, to evaluate the performance of the proposed DEDet, we have built a dataset, NightDrone, to address the scarcity of datasets specifically tailored for this purpose. Extensive experiments conducted on four nighttime datasets show that DEDet achieves superior accuracy compared with state-of-the-art methods. Ablation studies and visualizations further demonstrate the validity and interpretability of our approach. Our NightDrone dataset can be downloaded from https://github.com/yuexiemail/NightDrone-Dataset.
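
    To make the pipeline concrete, the sketch below shows a minimal, assumed form of spatially varying exposure correction: a pixel-wise parameter map (standing in for the FPP output) drives a simple quadratic brightening curve that is applied recursively (standing in for the Progressive Filtering module). The specific curve, number of steps and value ranges are assumptions, not DEDet's actual filters.

    import numpy as np

    def apply_curve(image: np.ndarray, alpha: np.ndarray) -> np.ndarray:
        """One non-linear, spatially fine-grained brightening step.
        image and alpha are float arrays in [0, 1] with matching spatial shape."""
        return image + alpha * image * (1.0 - image)

    def progressive_filtering(image: np.ndarray, alpha: np.ndarray, steps: int = 4) -> np.ndarray:
        """Recursively refine the exposure; dark regions with large alpha are lifted the most."""
        out = image
        for _ in range(steps):
            out = apply_curve(out, alpha)
        return np.clip(out, 0.0, 1.0)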

    Object Detection in Omnidirectional Images

    Computer vision (CV) is now widely used to solve real-world problems that pose increasingly demanding challenges. In this context, the use of omnidirectional video in a growing number of applications, along with the fast development of Deep Learning (DL) algorithms for object detection, drives the need for further research to improve existing methods originally developed for conventional 2D planar images. However, the geometric distortion produced by common sphere-to-plane projections, most visible in objects near the poles, together with the lack of open-source labeled omnidirectional image datasets, has made accurate spherical image-based object detection a hard goal to achieve. This work contributes datasets and machine learning models particularly suited to omnidirectional images represented in planar format through the well-known Equirectangular Projection (ERP). To this aim, DL methods are explored to improve the detection of visual objects in omnidirectional images by considering the inherent distortions of ERP. An experimental study was first carried out to determine whether the error rate and the type of detection errors were related to the characteristics of ERP images. This study revealed that the error rate of existing DL object detection models on ERP images depends on the object's spherical location in the image. Based on these findings, a new object detection framework is proposed to obtain a uniform error rate across all spherical image regions. The results show that the pre- and post-processing stages of the implemented framework effectively reduce the dependency of detection performance on the image region.
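
    The dependence of detection error on spherical location follows from how the ERP stretches content away from the equator. The helper below, a small sketch under the usual equirectangular assumptions, maps an image row to latitude and gives the horizontal stretch factor, which grows without bound toward the poles and explains why planar-trained detectors degrade there.

    import math

    def pixel_row_to_latitude(row: int, height: int) -> float:
        """Latitude in radians: +pi/2 at the top row, -pi/2 at the bottom row of an ERP image."""
        return math.pi * (0.5 - (row + 0.5) / height)

    def erp_horizontal_stretch(row: int, height: int) -> float:
        """Factor by which content at this row is horizontally stretched relative to the equator."""
        return 1.0 / max(math.cos(pixel_row_to_latitude(row, height)), 1e-6)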