
    Generation and processing of simulated underwater images for infrastructure visual inspection with UUVs

    The development of computer vision algorithms for navigation or object detection is one of the key issues of underwater robotics. However, extracting features from underwater images is challenging due to the presence of lighting defects, which need to be counteracted. This requires good environmental knowledge, either as a dataset or as a physical model. The lack of available data and the high variability of conditions make it difficult to develop robust enhancement algorithms. A framework for the development of underwater computer vision algorithms is presented, consisting of a method for underwater imaging simulation and an image enhancement algorithm, both integrated in the open-source robotics simulator UUV Simulator. The imaging simulation is based on a novel combination of the scattering model and style transfer techniques. The use of style transfer allows a realistic simulation of different environments without any prior knowledge of them. Moreover, an enhancement algorithm has been developed that successfully corrects the imaging defects in any given scenario for both real and synthetic images. The proposed approach thus constitutes a novel framework for the development of underwater computer vision algorithms for SLAM, navigation, or object detection in UUVs.
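    A minimal sketch of the scattering component of such an imaging simulation is given below. The per-channel attenuation coefficients and ambient light values are assumptions for illustration (they are not given in the abstract), and the style-transfer stage is omitted entirely.

    import numpy as np

    def simulate_underwater(image, depth_map, beta=(0.40, 0.12, 0.08), ambient=(0.10, 0.45, 0.55)):
        """Apply a simple per-channel scattering/attenuation model.

        image:     float array in [0, 1], shape (H, W, 3), the in-air rendering.
        depth_map: scene distances in metres, shape (H, W).
        beta:      assumed attenuation coefficients per channel (R, G, B).
        ambient:   assumed veiling (background) light per channel.
        """
        beta = np.asarray(beta, dtype=np.float32)
        ambient = np.asarray(ambient, dtype=np.float32)
        # Transmission t(x) = exp(-beta * d(x)) per channel.
        t = np.exp(-depth_map[..., None] * beta)
        # Image formation: attenuated direct signal plus backscattered veiling light.
        degraded = image * t + ambient * (1.0 - t)
        return np.clip(degraded, 0.0, 1.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        clean = rng.random((120, 160, 3), dtype=np.float32)  # stand-in for a rendered frame
        depth = np.linspace(1.0, 8.0, 160, dtype=np.float32)[None, :].repeat(120, axis=0)
        out = simulate_underwater(clean, depth)
        print(out.shape, float(out.min()), float(out.max()))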

    Delineation of Surface Water Features Using RADARSAT-2 Imagery and a TOPAZ Masking Approach over the Prairie Pothole Region in Canada

    The Prairie Pothole Region (PPR) is one of the most rapidly changing environments in the world. In the PPR of North America, topographic depressions are common, and they are an essential water storage element in the regional hydrological system. The accurate delineation of surface water bodies is important for a variety of reasons, including conservation, environmental management, and better understanding of hydrological and climate modeling. There are numerous surface water bodies across the northern Prairie Region, making it challenging to provide near-real-time monitoring and in situ measurements of the spatial and temporal variation in surface water area. Satellite remote sensing is the only practical approach to delineating the surface water area of Prairie potholes on an ongoing and cost-effective basis. Optical satellite imagery is able to detect surface water, but only under cloud-free conditions, a substantial limitation for operational monitoring of surface water variability. As an active sensor, however, RADARSAT-2 (RS-2) can provide data for surface water detection that overcome the limitations of optical sensors. In this research, a threshold-based procedure was developed using Fine Wide (F0W3), Wide (W2) and Standard (S3) modes to delineate the extent of surface water areas in the St. Denis and Smith Creek study basins, Saskatchewan, Canada. RS-2 thresholding yielded a higher number of apparent water surfaces than were visible in high-resolution optical imagery (SPOT) of comparable resolution acquired at nearly the same time. TOPAZ software was used to determine the maximum possible extent of water ponding on the surface by analyzing a high-resolution LiDAR-based DEM. Removing water bodies outside the depressions mapped by TOPAZ improved the resulting maps, which corresponded more closely to the SPOT surface water images. The results demonstrate the potential of TOPAZ masking for operational RS-2 surface water mapping.
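    The thresholding-plus-masking idea can be sketched as follows. The backscatter threshold and the depression mask are illustrative placeholders, since the abstract gives neither the actual threshold values nor the TOPAZ outputs.

    import numpy as np

    def delineate_water(sigma0_db, depression_mask, threshold_db=-18.0):
        """Threshold SAR backscatter and keep only pixels inside mapped depressions.

        sigma0_db:       calibrated backscatter in dB, shape (H, W).
        depression_mask: boolean array, True where TOPAZ-style terrain analysis
                         marks a depression that could pond water.
        threshold_db:    assumed open-water threshold; smooth water returns low
                         backscatter, so pixels below the threshold count as water.
        """
        raw_water = sigma0_db < threshold_db
        # Masking step: discard detections outside potential ponding areas.
        return raw_water & depression_mask

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        sigma0 = rng.normal(-12.0, 4.0, size=(100, 100))
        depressions = np.zeros((100, 100), dtype=bool)
        depressions[40:70, 20:60] = True
        water = delineate_water(sigma0, depressions)
        print("water pixels:", int(water.sum()))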

    Image enhancement for underwater mining applications

    The exploration of water bodies, from the sea to flooded inland spaces, has seen continuous growth with new technologies such as robotics. Underwater imagery is one of the main sensor resources used, but it suffers from additional problems caused by the environment. Multiple methods and techniques have been developed to correct color, recover poor-quality images, and enhance features. In this thesis we present an image cleaning and enhancement technique that performs color correction combined with the Dark Channel Prior (DCP) and then converts the corrected images into the Long, Medium and Short (LMS) color space, the space in which the human eye perceives colour. This work is being developed at LSA (Laboratório de Sistema Autónomos), a robotics and autonomous systems laboratory. Our objective is to improve the quality of images taken by and for robots, with particular emphasis on underwater flooded mines. This thesis describes the architecture and the developed solution. A comparative analysis of our proposed solution against state-of-the-art methods is presented. Results from missions performed by the robot in operational mine scenarios are presented and discussed, allowing for characterization and validation of the solution.
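    A rough sketch of the pipeline's building blocks appears below. It assumes a gray-world corrector, a basic dark-channel computation, and the Reinhard et al. RGB-to-LMS matrix; the thesis may use different variants of all three, so treat this as illustrative only.

    import numpy as np

    # RGB -> LMS matrix from Reinhard et al.'s colour-transfer work; the thesis
    # may use a different conversion, so these coefficients are an assumption.
    RGB_TO_LMS = np.array([[0.3811, 0.5783, 0.0402],
                           [0.1967, 0.7244, 0.0782],
                           [0.0241, 0.1288, 0.8444]], dtype=np.float32)

    def gray_world_correction(image):
        """Simple gray-world colour correction (one plausible choice of corrector)."""
        means = image.reshape(-1, 3).mean(axis=0)
        return np.clip(image * (means.mean() / (means + 1e-6)), 0.0, 1.0)

    def dark_channel(image, patch=15):
        """Minimum over colour channels followed by a local minimum filter."""
        h, w, _ = image.shape
        min_rgb = image.min(axis=2)
        pad = patch // 2
        padded = np.pad(min_rgb, pad, mode="edge")
        dark = np.empty_like(min_rgb)
        for y in range(h):
            for x in range(w):
                dark[y, x] = padded[y:y + patch, x:x + patch].min()
        return dark

    def to_lms(image):
        """Convert an (H, W, 3) RGB image to the LMS colour space."""
        return image @ RGB_TO_LMS.T

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        img = rng.random((64, 64, 3), dtype=np.float32)
        corrected = gray_world_correction(img)
        print(dark_channel(corrected).shape, to_lms(corrected).shape)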

    Crowdsourced quality assessment of enhanced underwater images: a pilot study.

    Underwater image enhancement (UIE) is essential for a high-quality underwater optical imaging system. While a number of UIE algorithms have been proposed in recent years, there has been little study of image quality assessment (IQA) for enhanced underwater images. In this paper, we conduct the first crowdsourced subjective IQA study of enhanced underwater images. We chose ten state-of-the-art UIE algorithms and applied them to yield enhanced images from an underwater image benchmark. Their latent quality scales were reconstructed from pair comparisons. We demonstrate that existing IQA metrics are not suitable for assessing the perceived quality of enhanced underwater images. In addition, the overall performance of the ten UIE algorithms on the benchmark is ranked by the newly proposed simulated pair comparison of the methods.
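    One standard way to reconstruct latent quality scales from pairwise preferences is the Bradley-Terry model; the sketch below uses its minorisation-maximisation update and the win matrix is made up, since the abstract does not state the exact psychometric model or data.

    import numpy as np

    def bradley_terry_scores(wins, iters=200, eps=1e-9):
        """Estimate latent quality scores from a pairwise win-count matrix.

        wins[i, j] = number of times method i was preferred over method j;
        the diagonal is assumed to be zero.
        """
        n = wins.shape[0]
        scores = np.ones(n)
        total = wins + wins.T                # comparisons per pair
        for _ in range(iters):
            new = np.empty(n)
            for i in range(n):
                denom = sum(total[i, j] / (scores[i] + scores[j])
                            for j in range(n) if j != i)
                new[i] = wins[i].sum() / (denom + eps)
            scores = new / new.sum()         # normalise for identifiability
        return scores

    if __name__ == "__main__":
        # toy preferences among four hypothetical UIE methods
        wins = np.array([[0, 8, 9, 7],
                         [2, 0, 6, 5],
                         [1, 4, 0, 6],
                         [3, 5, 4, 0]], dtype=float)
        print(np.round(bradley_terry_scores(wins), 3))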

    Single underwater image enhancement based on adaptive correction of channel differential and fusion

    Clear underwater images are necessary in many underwater applications, but absorption, scattering, and varying water conditions lead to blurring and different color deviations. To overcome the limitations of existing color correction and deblurring algorithms, this paper proposes a fusion-based image enhancement method for various water areas. We propose two novel image processing methods: an adaptive channel deblurring method and a color correction method that limits the histogram mapping interval. Using these two methods, we derive two images from a single underwater image as inputs to the fusion framework, from which we obtain the final enhanced underwater image. To validate the effectiveness of the approach, we tested our method on public datasets. The results show that the proposed method can adaptively correct color casts and significantly enhance the details and quality of attenuated underwater images.
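    A single-scale sketch of the fusion stage is given below, assuming contrast- and saturation-based weight maps; the paper's actual weight definitions and any multi-scale blending are not given in the abstract.

    import numpy as np

    def fuse_inputs(deblurred, color_corrected):
        """Blend the two derived inputs with per-pixel weights.

        The weights are simple saliency proxies (local contrast and saturation),
        used here only to illustrate the fusion idea.
        """
        def weight(img):
            gray = img.mean(axis=2)
            # local contrast: deviation from a 4-neighbour average
            blur = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
                    np.roll(gray, 1, 1) + np.roll(gray, -1, 1)) / 4.0
            contrast = np.abs(gray - blur)
            saturation = img.std(axis=2)
            return contrast + saturation + 1e-6

        w1, w2 = weight(deblurred), weight(color_corrected)
        total = w1 + w2
        fused = (deblurred * (w1 / total)[..., None] +
                 color_corrected * (w2 / total)[..., None])
        return np.clip(fused, 0.0, 1.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        a, b = rng.random((80, 80, 3)), rng.random((80, 80, 3))
        print(fuse_inputs(a, b).shape)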

    Sparse Coral Classification Using Deep Convolutional Neural Networks

    Autonomous repair of deep-sea coral reefs is a recently proposed idea to support the ocean ecosystem, which is vital for commercial fishing, tourism, and other species. The idea can be realized using many small autonomous underwater vehicles (AUVs) and swarm intelligence techniques to locate and replace chunks of coral that have broken off, thus enabling regrowth and maintaining the habitat. The aim of this project is to develop machine vision algorithms that enable an underwater robot to locate a coral reef and a chunk of coral on the seabed and prompt the robot to pick it up. Although there is no literature on this particular problem, related work on fish counting may give some insight into it. The technical challenges are principally due to the potential lack of clarity of the water, platform stabilization, and spurious artifacts (rocks, fish, and crabs). We present an efficient sparse classification of coral species using a supervised deep learning method, Convolutional Neural Networks (CNNs). We compute the Weber Local Descriptor (WLD), Phase Congruency (PC), and Zero Component Analysis (ZCA) whitening to extract shape and texture feature descriptors, which are employed as supplementary channels (feature-based maps) alongside the basic spatial color channels (spatial-based maps) of the input coral image. We also experiment with state-of-the-art underwater preprocessing algorithms for image enhancement, color normalization, and color conversion adjustment. Our proposed coral classification method is developed on the MATLAB platform and evaluated on two different coral datasets (the University of California San Diego's Moorea Labeled Corals and Heriot-Watt University's Atlantic Deep Sea). Comment: Thesis submitted for the degree of MSc Erasmus Mundus in Vision and Robotics (VIBOT 2014).
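    The thesis works in MATLAB; purely as an illustration, the Python sketch below shows how supplementary feature maps can be stacked with the RGB channels and fed to a small CNN. The gradient-magnitude and intensity maps are stand-ins for WLD, PC, and ZCA whitening, and the class count is a placeholder, not the datasets' actual label set.

    import numpy as np
    import torch
    import torch.nn as nn

    def supplementary_channels(rgb):
        """Stack stand-in texture/shape maps with the RGB channels -> (H, W, 6)."""
        gray = rgb.mean(axis=2)
        gy, gx = np.gradient(gray)
        grad = np.sqrt(gx ** 2 + gy ** 2)          # stand-in for a texture descriptor
        norm = (gray - gray.mean()) / (gray.std() + 1e-6)  # stand-in for a whitened map
        maps = np.stack([grad, norm, gray], axis=2)
        return np.concatenate([rgb, maps], axis=2)

    class SmallCoralCNN(nn.Module):
        def __init__(self, in_channels=6, n_classes=9):  # n_classes is a placeholder
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    if __name__ == "__main__":
        patch = np.random.rand(64, 64, 3).astype(np.float32)
        stacked = supplementary_channels(patch)                    # (64, 64, 6)
        x = torch.from_numpy(stacked.astype(np.float32)).permute(2, 0, 1).unsqueeze(0)
        print(SmallCoralCNN()(x).shape)                            # torch.Size([1, 9])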

    Investigating best practices for Structure-from-Motion photogrammetry of turbid benthic environments

    Turbid water environments make up 8-12% of the global continental shelf, encompassing a variety of benthic habitats with high ecosystem value. The aim of this thesis is to optimise Structure-from-Motion photogrammetry in turbid benthic environments. It was found that these environments require a camera with a large sensor size and high resolution, custom settings to suit the conditions, photos taken at close range, and, in certain cases, image enhancement to improve the accuracy of the 3D models.
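    The enhancement step is left unspecified in the abstract; one common pre-processing choice for low-contrast turbid imagery is CLAHE on the lightness channel, sketched below purely as an example and not necessarily the method used in the thesis.

    import cv2
    import numpy as np

    def enhance_for_sfm(bgr):
        """Contrast-limited adaptive histogram equalisation on the L channel."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        lab = cv2.merge((clahe.apply(l), a, b))
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    if __name__ == "__main__":
        frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
        print(enhance_for_sfm(frame).shape)  # enhanced frame ready for SfM input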

    Veiling glare removal: synthetic dataset generation, metrics and neural network architecture

    In photography, the presence of a bright light source often reduces the quality and readability of the resulting image. Light rays reflect and bounce off camera elements, the sensor or the diaphragm, causing unwanted artifacts. These artifacts are generally known as "lens flare" and may affect the photo in different ways: reducing image contrast (veiling glare), adding circular or circular-like effects (ghosting flare), appearing as bright rays spreading from the light source (starburst pattern), or causing aberrations. All these effects are generally undesirable, as they reduce the legibility and aesthetics of the image. In this paper we address the problem of removing or reducing the effect of veiling glare on the image. There are no available large-scale datasets for this problem and no established metrics, so we start by (i) proposing a simple and fast algorithm for generating the synthetic veiling-glare images needed for training and (ii) studying metrics used in related image enhancement tasks (dehazing and underwater image enhancement). We select three such no-reference metrics (UCIQE, UIQM and CCF) and show that their improvement indicates better veil removal. Finally, we experiment with neural network architectures and propose a two-branched architecture and a training procedure utilizing a structural similarity measure.
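    A toy sketch of synthetic veiling-glare generation follows, assuming a single additive Gaussian veil around a light source; the paper's actual generator and its parameters are not specified in the abstract.

    import numpy as np

    def add_veiling_glare(image, center, strength=0.5, sigma=0.35):
        """Overlay a smooth additive veil around a bright light source.

        This mimics only the low-frequency contrast loss of veiling glare;
        strength and sigma are assumed parameters.
        """
        h, w, _ = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = center
        d2 = ((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2
        veil = strength * np.exp(-d2 / (2.0 * sigma ** 2))
        return np.clip(image + veil[..., None], 0.0, 1.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        clean = rng.random((180, 240, 3))
        glared = add_veiling_glare(clean, center=(30, 200))
        # the pair (glared, clean) can serve as one training example for veil removal
        print(glared.mean() > clean.mean())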

    Physics-Aware Semi-Supervised Underwater Image Enhancement

    Underwater images normally suffer from degradation due to the transmission medium of water bodies. Both traditional prior-based approaches and deep learning-based methods have been used to address this problem. However, the inflexible assumptions of the former often impair their effectiveness in handling diverse underwater scenes, while the generalization of the latter to unseen images is usually weakened by insufficient data. In this study, we leverage both the physics-based underwater Image Formation Model (IFM) and deep learning techniques for Underwater Image Enhancement (UIE). To this end, we propose a novel Physics-Aware Dual-Stream Underwater Image Enhancement Network, i.e., PA-UIENet, which comprises a Transmission Estimation Stream (T-Stream) and an Ambient Light Estimation Stream (A-Stream). This network fulfills the UIE task by explicitly estimating the degradation parameters of the IFM. We also adopt an IFM-inspired semi-supervised learning framework, which exploits both labeled and unlabeled images, to address the issue of insufficient data. Our method performs better than, or at least comparably to, eight baselines across five testing sets in the degradation estimation and UIE tasks. This is likely because it not only models the degradation but also learns the characteristics of diverse underwater scenes. Comment: 12 pages, 5 figures.
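    Given predicted transmission and ambient light, restoring the scene radiance amounts to inverting the IFM, I = J*t + A*(1 - t). A minimal sketch follows; the arrays here are placeholders standing in for the T-Stream and A-Stream outputs, and the clipping threshold is an assumed numerical safeguard.

    import numpy as np

    def restore_with_ifm(observed, transmission, ambient, t_min=0.1):
        """Recover J from I = J*t + A*(1 - t) given estimates of t and A."""
        t = np.clip(transmission, t_min, 1.0)      # avoid division by near-zero transmission
        restored = (observed - ambient * (1.0 - t)) / t
        return np.clip(restored, 0.0, 1.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        I = rng.random((90, 120, 3))               # degraded underwater image
        t = np.full((90, 120, 3), 0.6)             # stand-in for T-Stream output
        A = np.array([0.1, 0.4, 0.5])              # stand-in for A-Stream output
        print(restore_with_ifm(I, t, A).shape)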