
    Research on image quality improvement for underwater imaging systems

    Underwater survey systems have numerous scientific and industrial applications in fields such as geology, biology, mining, and archaeology, supporting tasks such as ecological studies, environmental damage assessment, and archaeological prospection. Over the past two decades, underwater imaging systems have mainly been carried by Underwater Vehicles (UVs) for surveying in water or ocean environments, yet obtaining good visibility of objects has remained difficult because of the physical properties of the medium. Sonar has commonly been used for the detection and recognition of targets in the ocean; however, because of the low quality of sonar imagery, optical vision sensors are preferred for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor, but owing to the physical properties of light transmission in water, underwater optical images usually suffer from poor visibility. Light is strongly attenuated as it travels through the ocean, so imaged scenes appear poorly contrasted and hazy. Underwater image processing techniques are therefore important for improving the quality of underwater images. In contrast to common photographs, underwater optical images suffer from poor visibility owing to the medium, which causes scattering, color distortion, and absorption. Large suspended particles cause scattering similar to that of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of ambient underwater environments are dominated by a bluish tone, since longer wavelengths are attenuated more quickly.
Absorption of light in water substantially reduces its intensity, and the random attenuation of light causes a hazy appearance, as light backscattered by the water along the line of sight considerably degrades image contrast. In particular, objects more than about 10 meters from the observation point become almost indistinguishable, because colors fade as their characteristic wavelengths are filtered out according to the distance traveled by light in water. Traditional image processing methods are therefore not well suited to such images. This thesis proposes strategies and solutions to tackle these problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content of the thesis is as follows. First, Chapter 1 provides a comprehensive review of the current and most prominent underwater imaging systems, together with a classification criterion for existing systems based on their main features and performance. After analyzing the challenges of underwater imaging systems, hardware-based and non-hardware-based approaches are introduced; this thesis focuses on image-processing technologies, one of the non-hardware approaches, and applies recent methods to low-quality underwater images. Different sonar imaging systems, such as side-scan sonar and multi-beam sonar, acquire images with different characteristics: side-scan sonar delivers high-quality imagery of the seafloor with very high spatial resolution but poor locational accuracy, whereas multi-beam sonar obtains high-precision position and depth for points on the seafloor.
To fully utilize the information from these two types of sonar, Chapter 2 fuses the two kinds of sonar data. Considering the sonar image formation principle, for the low-frequency curvelet coefficients we use the maximum local energy method to calculate the energy of the two sonar images, and for the high-frequency curvelet coefficients we take the absolute-maximum method as the measurement. The main attributes are as follows: first, the multi-resolution analysis method is well adapted to curved singularities and point singularities, which is useful for sonar intensity image enhancement; second, maximum local energy performs well on intensity sonar images and achieves excellent fusion results [42]. In Chapter 3, after analyzing the underwater laser imaging system, a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF) based denoising algorithm is proposed. We use the BCE-BKF probability density function (PDF) to model neighborhoods of contourlet coefficients, and based on the proposed PDF model we design a maximum a posteriori (MAP) estimator that relies on a Bayesian statistical representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three clear virtues: first, contourlet transform decomposition is preferable to the curvelet and wavelet transforms through its use of an elliptical sampling grid; second, the BCE-BKF model is more effective at representing the contourlet coefficients of noisy images; third, the BCE-BKF model takes full account of the correlation between coefficients [107]. Chapter 4 describes a novel method to enhance underwater images by dehazing. Absorption, scattering, and color distortion are three major issues in underwater optical imaging, and light rays traveling through water are scattered and absorbed according to their wavelength.
Scattering is caused by large suspended particles that degrade optical images captured underwater, and color distortion occurs because different wavelengths are attenuated to different degrees in water, so that images of ambient underwater environments are dominated by a bluish tone. Our key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes into account the possible presence of an artificial lighting source [108]. Chapter 5 describes a novel method for enhancing underwater optical images and videos using a guided multilayer filter and wavelength compensation. In certain circumstances, the underwater environment must be monitored immediately, for example by disaster-recovery support robots or other underwater survey systems; however, owing to the inherent optical properties of the complex underwater environment, the captured images or videos are seriously distorted. Our key contributions include a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, with the finest details and edges significantly enhanced [109]. The performance and benefits of the proposed approaches are summarized in Chapter 6, where comprehensive experiments and extensive comparisons with existing related techniques demonstrate the accuracy and effectiveness of the proposed methods. (Doctoral dissertation, Kyushu Institute of Technology; degree number 工博甲第367号; degree conferred March 25, 2014. Contents: Chapter 1 Introduction | Chapter 2 Multi-Source Images Fusion | Chapter 3 Laser Images Denoising | Chapter 4 Optical Image Dehazing | Chapter 5 Shallow Water De-scattering | Chapter 6 Conclusions.)
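The coefficient-selection rules described for the Chapter 2 sonar fusion (maximum local energy for the low-frequency subbands, absolute maximum for the high-frequency subbands) can be sketched on generic subband arrays. The curvelet decomposition itself is assumed to come from an external toolbox and is not shown, and the window size is an illustrative choice:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_energy(coeff, win=3):
    # Sum of squared coefficients over a sliding win x win neighbourhood.
    pad = win // 2
    padded = np.pad(coeff, pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))
    return (windows ** 2).sum(axis=(-1, -2))

def fuse_low(a, b, win=3):
    # Low-frequency rule: keep the coefficient whose neighbourhood
    # carries the larger local energy.
    ea, eb = local_energy(a, win), local_energy(b, win)
    return np.where(ea >= eb, a, b)

def fuse_high(a, b):
    # High-frequency rule: absolute-maximum selection.
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

The fused subbands would then be passed back through the inverse transform to obtain the fused sonar image.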

    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing, through benchmarking, the impact of underwater image degradation on commonly used vision algorithms. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.

    Hyperspectral benthic mapping from underwater robotic platforms

    We live on a planet of vast oceans: 70% of the Earth's surface is covered in water. The oceans are integral to supporting life, providing 99% of the habitable space on Earth, yet they and the habitats within them are under threat from a variety of factors. To understand the impacts and possible solutions, monitoring of marine habitats is critically important. Optical imaging as a monitoring method can provide a vast array of information; however, imaging through water is complex. To compensate for the selective attenuation of light in water, this thesis presents a novel light propagation model and illustrates how it can improve optical imaging performance. An in-situ hyperspectral system is designed, comprising two upward-looking spectrometers at different positions in the water column. The downwelling light in the water column is continuously sampled by the system, which allows the generation of a dynamic water model. In addition to the two upward-looking spectrometers, the in-situ system contains an imaging module for imaging the seafloor, consisting of a hyperspectral sensor and a trichromatic stereo camera. New calibration methods are presented for the spatial and spectral co-registration of the two optical sensors. The water model is used to create image data that are invariant to the changing optical properties of the water and changing environmental conditions. In this thesis the in-situ optical system is mounted onboard an Autonomous Underwater Vehicle. Data from the imaging module are also used to classify seafloor materials, and the classified seafloor patches are integrated into a high-resolution 3D benthic map of the surveyed site. Given the limited imaging resolution of the hyperspectral sensor used in this work, a new method is also presented that uses information from the co-registered colour images to inform a new spectral unmixing method for resolving subpixel materials.
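The two-spectrometer arrangement lends itself to a simple estimate of the water column's spectral diffuse attenuation coefficient, which is the kind of quantity a dynamic water model needs; a minimal sketch under a Beer-Lambert decay assumption (the function name and interface are illustrative, not taken from the thesis):

```python
import numpy as np

def diffuse_attenuation(E_upper, E_lower, dz):
    """Per-wavelength diffuse attenuation coefficient Kd (1/m), from
    downwelling irradiance spectra measured by two spectrometers a
    vertical distance dz (m) apart, assuming Beer-Lambert decay:
    E_lower = E_upper * exp(-Kd * dz)."""
    E_upper = np.asarray(E_upper, dtype=float)
    E_lower = np.asarray(E_lower, dtype=float)
    return np.log(E_upper / E_lower) / dz
```

With Kd in hand, seafloor imagery can be corrected for the light lost along the water path, giving image data that are less sensitive to the changing optical properties of the water.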

    A Low-Complexity Mosaicing Algorithm for Stock Assessment of Seabed-Burrowing Species

    Peer-reviewed. Manuscript received January 27, 2017; revised August 17, 2017 and December 27, 2017; accepted February 16, 2018. Published in: IEEE Journal of Oceanic Engineering (Early Access), DOI: 10.1109/JOE.2018.2808973. This paper proposes an algorithm for mosaicing videos generated during stock assessment of seabed-burrowing species. In these surveys, video transects of the seabed are captured and the population is estimated by counting the number of burrows in the video. The mosaicing algorithm is designed to process a large amount of video data and summarize the features relevant to the survey in a single image; hence it is designed to be computationally inexpensive while maintaining a high degree of robustness. We adopt a registration algorithm that employs a simple translational motion model and generates a mapping to the mosaic coordinate system using a concatenation of frame-by-frame homographies. A temporal smoothness prior is used in a maximum a posteriori homography estimation algorithm to reduce noise in the motion parameters in images with small amounts of texture detail. A multiband blending scheme renders the mosaic and is optimized for the application requirements. Tests on a large data set show that the algorithm is robust enough to allow the use of mosaics as a medium for burrow counting. This will increase the verifiability of the stock assessments as well as generate a ground-truth data set for the learning of an automated burrow counting algorithm. This work was supported by the Science Foundation Ireland under Award SFI-PI 08/IN.1/I2112.
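The registration step described above (a translational motion model, with frame-to-mosaic mappings built by concatenating frame-by-frame homographies) can be sketched as follows; the MAP smoothing of the motion parameters is omitted:

```python
import numpy as np

def translation_h(dx, dy):
    # 3x3 homography for a pure translation (the translational motion model).
    H = np.eye(3)
    H[0, 2], H[1, 2] = dx, dy
    return H

def to_mosaic_frame(frame_motions):
    """Concatenate per-frame motions (dx, dy) into homographies mapping
    each frame into the coordinate system of the first (mosaic) frame."""
    H = np.eye(3)
    mappings = [H.copy()]
    for dx, dy in frame_motions:
        H = H @ translation_h(dx, dy)
        mappings.append(H.copy())
    return mappings
```

Each returned matrix maps pixel coordinates of the corresponding frame into the mosaic coordinate system, ready for blending.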

    Perceptual underwater image enhancement with deep learning and physical priors

    Underwater image enhancement, as a pre-processing step to support the subsequent object detection task, has drawn considerable attention in the field of underwater navigation and ocean exploration. However, most existing underwater image enhancement strategies treat enhancement and detection as two fully independent modules with no interaction, and such separate optimisation does not always help the subsequent object detection task. In this paper, we propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor. The detection perceptor provides feedback in the form of gradients to guide the enhancement model to generate patch-level visually pleasing or detection-favourable images. In addition, owing to the lack of training data, a hybrid underwater image synthesis model that fuses physical priors and data-driven cues is proposed to synthesise training data and generalise our enhancement model to real-world underwater images. Experimental results show the superiority of the proposed method over several state-of-the-art methods on both real-world and synthetic underwater datasets.

    Automatic Analysis of Lens Distortions in Image Registration

    Geometric image registration by estimating homographies is an important processing step in a wide variety of computer vision applications. The 2D registration of two images does not require an explicit reconstruction of intrinsic or extrinsic camera parameters; however, correcting images for non-linear lens distortions is highly recommended. Unfortunately, standard calibration techniques are sometimes difficult to apply, and reliable estimates of lens distortions can only rarely be obtained. In this paper we present a new technique for automatically detecting and categorising lens distortions in pairs of images by analysing registration results. The approach is based on a new metric for registration quality assessment and employs a PCA-based statistical model for classifying distortion effects. In this way, the overall need for lens calibration and image correction can be checked, and a measure of the efficiency of the corresponding correction steps is given.
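For concreteness, the non-linear lens distortions at issue are commonly modelled with a polynomial radial term; below is a minimal sketch of the standard two-coefficient (Brown) radial model, offered as an illustrative stand-in rather than the paper's own formulation:

```python
import numpy as np

def radial_distort(xy, k1, k2):
    """Apply a two-coefficient radial (Brown) distortion model to
    normalised image coordinates xy (N x 2), centred on the principal
    point: x' = x * (1 + k1*r^2 + k2*r^4)."""
    xy = np.asarray(xy, dtype=float)
    r2 = (xy ** 2).sum(axis=1, keepdims=True)     # squared radius per point
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return xy * factor
```

Undistortion inverts this mapping (typically by iteration), which is the correction step whose efficiency the paper's metric assesses.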

    Joint Perceptual Learning for Enhancement and Object Detection in Underwater Scenarios

    Degraded underwater images greatly challenge existing algorithms that detect objects of interest. Recently, researchers have attempted to adopt attention mechanisms or composite connections to improve the feature representation of detectors. However, this solution does not eliminate the impact of degradation on image content such as color and texture, and achieves only minimal improvements. Another feasible solution for underwater object detection is to develop sophisticated deep architectures that enhance image quality or features. Nevertheless, the visually appealing output of these enhancement modules does not necessarily yield high accuracy for deep detectors. More recently, some multi-task learning methods jointly learn underwater detection and image enhancement, achieving promising improvements; typically, however, these methods involve huge architectures and expensive computations, rendering inference inefficient. Underwater object detection and image enhancement are clearly two interrelated tasks, and leveraging information from each can benefit the other. Based on these observations, we propose a bilevel optimization formulation for jointly learning underwater object detection and image enhancement, which we then unroll into a dual perception network (DPNet) for the two tasks. DPNet, with one shared module and two task subnets, learns from the two different tasks, seeking a shared representation. The shared representation provides more structural details for image enhancement and rich content information for object detection. Finally, we derive a cooperative training strategy to optimize the parameters of DPNet. Extensive experiments on real-world and synthetic underwater datasets demonstrate that our method outputs visually favorable images and achieves higher detection accuracy.

    An Experimental-Based Review of Image Enhancement and Image Restoration Methods for Underwater Imaging

    Underwater images play a key role in ocean exploration but often suffer from severe quality degradation due to light absorption and scattering in the water medium. Although major breakthroughs have been made recently in the general area of image enhancement and restoration, the applicability of new methods to improving the quality of underwater images has not been specifically assessed. In this paper, we review image enhancement and restoration methods that tackle typical underwater image impairments, including some extreme degradations and distortions. First, we introduce the key causes of quality reduction in underwater images in terms of the underwater image formation model (IFM). Then, we review underwater restoration methods, considering both IFM-free and IFM-based approaches. Next, we present an experimental comparative evaluation of state-of-the-art IFM-free and IFM-based methods, including the prior-based parameter estimation algorithms of the IFM-based methods, using both subjective and objective analysis (the code used is freely available at https://github.com/wangyanckxx/Single-Underwater-Image-Enhancement-and-Color-Restoration). Building on this study, we pinpoint the key shortcomings of existing methods and draw recommendations for future research in this area. Our review of underwater image enhancement and restoration provides researchers with the necessary background to appreciate the challenges and opportunities in this important field.
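The IFM around which such reviews are organised is commonly written as I(x) = J(x)·t(x) + B·(1 − t(x)), with transmission t and background (veiling) light B. A minimal sketch of its inversion, assuming t and B have already been estimated (e.g. by one of the prior-based algorithms compared in the paper):

```python
import numpy as np

def restore_ifm(I, t, B, t_min=0.1):
    """Recover scene radiance J from the simplified image formation
    model I = J*t + B*(1 - t), per colour channel.
    I: HxWx3 image in [0, 1]; t: HxW transmission; B: (3,) background
    light.  t is clipped below to avoid amplifying noise."""
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - B * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```

IFM-free methods, by contrast, manipulate pixel statistics directly (e.g. histogram equalisation) without estimating t or B.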

    Underwater Celestial Navigation Using the Polarization of Light Fields

    Global-scale underwater navigation presents challenges that modern technology has not solved. Current technologies drift and accumulate errors over time (inertial measurement), are accurate but short-distance (acoustic), or do not sufficiently penetrate the air-water interface (radio and GPS). To address these issues, I have developed a new mode of underwater navigation based on passive observation of patterns in the polarization of in-water light. These patterns can be used to infer the sun's relative position, which enables the use of celestial navigation in the underwater environment. I have developed an underwater polarization video camera based on a bio-inspired polarization image sensor, together with the image processing and inference algorithms for estimating the sun's position. My system estimates heading with an RMS error of 6.02° and global position with an RMS error of 442 km. Averaging experimental results from a single site yielded a 0.38° heading error and a 61 km error in global position. The instrument can detect changes in polarization due to a 0.31° movement of the sun, corresponding to 35.2 km of ground movement, with 99% confidence. This technique could be used by underwater vehicles for long-distance navigation and suggests additional ways that marine animals with polarization-sensitive vision could perform both local and long-distance navigation.
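Under a single-scattering Rayleigh model, the e-vector observed when looking toward the zenith is perpendicular to the solar meridian, so the sun's azimuth can be read off (modulo 180°) from angle-of-polarization samples. The sketch below is an illustrative reduction of this idea, not the dissertation's full inference algorithm:

```python
import numpy as np

def sun_azimuth_from_aop(aop_deg):
    """Estimate the sun's azimuth (mod 180 deg) from noisy
    angle-of-polarization samples of a zenith-looking view, assuming
    single Rayleigh scattering (e-vector perpendicular to the solar
    meridian).  Angles are doubled before the circular mean because
    AoP is 180-degree periodic."""
    a = np.deg2rad(np.asarray(aop_deg) * 2.0)
    mean_aop = np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean())) / 2.0
    return (mean_aop + 90.0) % 180.0
```

Combining the recovered azimuth with the sun's elevation and the time of day is what turns this into a celestial position fix.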

    Domain-inspired image processing and computer vision to support deep-sea benthic ecology

    Optical imagery is a necessary methodological tool for ecological research in marine environments, particularly in deeper waters. For benthic (seafloor) surveys, interpretation of image data is crucial to creating high-resolution maps of seabed habitats. This is fundamental to marine spatial planning and to mitigating the long-term damage of anthropogenic stressors such as growing resource demand, climate change and pollution. However, there are numerous significant issues in extracting reliable ground truth from imagery to support this process. Analysis of benthic images is difficult, due in part to the extreme variation and inconsistency in image quality caused by complex interactions between light and water; it is also time-consuming. This thesis is dedicated to providing solutions to manage these challenges, from a strong end-user perspective. Specifically, we aim to improve the annotation of benthic habitats from imagery in terms of quality, consistency and efficiency. Throughout, we consider the purpose the imagery serves and work closely with end-users to best optimise our solutions. First, and for the majority of this thesis, we investigate image processing techniques to improve the appearance of image features important for habitat classification. We find that tone mapping is an effective and simple (and thus accessible) method through which to improve image quality for interpretation. We describe beneficial (expert-informed) properties for brightness distributions in underwater images and introduce a novel tone-mapping algorithm, Weibull Tone Mapping (WTM), to enhance benthic images. WTM operates within general constraints that model the image requirements (properties) specified by image analysts, yet possesses a suitable degree of flexibility and customisation. As a tool, WTM provides analysts with a fast and ‘user-friendly’ method to improve benthic habitat classification.
Second, we consider computer vision methods that could automatically identify benthic habitats in imagery, relieving the analysis bottleneck. We find that baseline transfer learning of machine learning models, with limited optimisation, best facilitates adoption by novice users while still providing a powerful means to swiftly extract and assess benthic data.
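The WTM idea of steering an image's brightness distribution toward a target Weibull form can be sketched as histogram matching against the Weibull quantile function; the shape and scale values below are illustrative placeholders, not the expert-informed parameters the thesis derives:

```python
import numpy as np

def weibull_tone_map(lum, shape=1.5, scale=0.5):
    """Remap luminance values so their distribution follows a target
    Weibull(shape, scale): empirical CDF -> inverse Weibull CDF."""
    flat = lum.ravel()
    ranks = flat.argsort().argsort()               # empirical CDF ranks
    u = (ranks + 0.5) / flat.size                  # uniform grades in (0, 1)
    mapped = scale * (-np.log(1.0 - u)) ** (1.0 / shape)  # Weibull quantile
    return np.clip(mapped, 0.0, 1.0).reshape(lum.shape)
```

Tie handling and colour processing (e.g. mapping luminance only and rescaling chroma) are left out for brevity.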