
    Contrast Enhancement for Images in Turbid Water

    Absorption, scattering, and color distortion are three major degradation factors in underwater optical imaging. Light rays are absorbed while passing through water, and absorption rates depend on the wavelength of the light. Scattering is caused by large suspended particles, which are commonly present in underwater environments. Color distortion occurs because the attenuation rate per unit path length in water depends on the wavelength of the light. Consequently, underwater images are dark, low in contrast, and dominated by a bluish tone. In this paper, we propose a novel underwater imaging model that compensates for the attenuation discrepancy along the propagation path. In addition, we develop a robust color-lines-based ambient light estimator and a locally adaptive filtering algorithm for enhancing underwater images in shallow oceans. Furthermore, we propose a spectral-characteristic-based color correction algorithm to recover the distorted color. The enhanced images have a reasonable noise level after illumination compensation in the dark regions, and demonstrate improved global contrast in which the finest details and edges are enhanced significantly.
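The wavelength-dependent attenuation described here follows the Beer-Lambert law: each color channel decays exponentially with path length, red fastest and blue slowest, which is why scenes turn bluish and why compensation along the propagation path is possible. A minimal sketch; the coefficient values are illustrative assumptions, not measured constants, and this is not the paper's full model:

```python
import numpy as np

# Illustrative attenuation coefficients (1/m) for R, G, B in clear ocean
# water; red is absorbed most strongly, blue least (assumed demo values).
BETA = np.array([0.60, 0.10, 0.04])

def attenuate(color, distance_m):
    """Beer-Lambert attenuation of an RGB color over a water path."""
    return color * np.exp(-BETA * distance_m)

def compensate(observed, distance_m):
    """Invert the attenuation model to approximate the original color."""
    return observed * np.exp(BETA * distance_m)
```

After a few meters the red channel is already much weaker than the blue one, so a white object appears cyan-blue; inverting the model recovers the original color when the distance is known.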

    Research on Image Quality Improvement for Underwater Imaging Systems

    Underwater survey systems have numerous scientific and industrial applications in fields such as geology, biology, mining, and archeology, involving tasks such as ecological studies, environmental damage assessment, and archaeological prospection. Over the past two decades, underwater imaging systems have mainly been carried by Underwater Vehicles (UVs) for surveying in oceans and other waters. Obtaining good visibility of objects has remained difficult because of the physical properties of the medium. Sonar has usually been used for the detection and recognition of targets in the ocean, but because of the low quality of sonar images, optical vision sensors are used instead for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor. However, owing to the physical properties of light transmission in water, underwater optical images usually suffer from poor visibility. Light is highly attenuated as it travels through the ocean; consequently, the imaged scenes appear poorly contrasted and hazy. Underwater image processing techniques are therefore important for improving the quality of underwater images. In contrast to common photographs, underwater optical images suffer from poor visibility owing to the medium, which causes scattering, color distortion, and absorption. Large suspended particles cause scattering similar to the scattering of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of ambient underwater environments are dominated by a bluish tone, because longer wavelengths are attenuated more quickly.
Absorption of light in water substantially reduces its intensity, and the light backscattered by water along the line of sight gives images a hazy appearance and considerably degrades contrast. In particular, objects more than about 10 meters from the observation point become almost indiscernible because their characteristic colors are filtered out according to the distance the light travels in water. Traditional image processing methods are therefore not well suited to such images. This thesis proposes strategies and solutions to tackle the above-mentioned problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content of this thesis is as follows. First, Chapter 1 provides a comprehensive review of the current and most prominent underwater imaging systems, presenting a classification criterion based on their main features and performance. After analyzing the challenges of underwater imaging systems, hardware-based and non-hardware-based approaches are introduced. This thesis is concerned with image-processing technologies, one of the non-hardware approaches, and applies recent methods to low-quality underwater images. Different sonar imaging systems, such as side-scan sonar and multi-beam sonar, acquire images with different characteristics: side-scan sonar acquires high-quality imagery of the seafloor with very high spatial resolution but poor locational accuracy, whereas multi-beam sonar obtains high-precision position and depth at seafloor points.
To fully utilize the information from both types of sonar, Chapter 2 fuses the two kinds of sonar data. Considering the sonar image formation principle, we use the maximum-local-energy method to compare the low-frequency curvelet coefficients of the two sonar images, and the absolute-maximum rule for the high-frequency curvelet coefficients. The main attributes are: first, the multi-resolution analysis method is well adapted to curved singularities and point singularities, which is useful for sonar intensity image enhancement; second, the maximum-local-energy rule performs well on sonar intensity images and achieves good fusion results [42]. In Chapter 3, after analyzing the underwater laser imaging system, a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF) based denoising algorithm is proposed. We use the BCE-BKF probability density function (PDF) to model neighborhoods of contourlet coefficients. According to the proposed PDF model, we then design a maximum a posteriori (MAP) estimator, which relies on a Bayesian statistical representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three clear virtues: first, the contourlet decomposition, with its elliptical sampling grid, is better suited to the task than the curvelet and wavelet transforms; second, the BCE-BKF model is more effective at representing the contourlet coefficients of noisy images; third, the BCE-BKF model takes full account of the correlation between coefficients [107]. In Chapter 4, we describe a novel method to enhance underwater images by dehazing. Absorption, scattering, and color distortion are three major issues in underwater optical imaging, and light rays traveling through water are scattered and absorbed according to their wavelength.
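The two fusion rules of Chapter 2 can be sketched independently of the curvelet transform itself: given corresponding coefficient subbands from the two sonar images, the low-frequency rule keeps the coefficient with the larger local energy, and the high-frequency rule keeps the larger absolute value. A minimal sketch on plain NumPy arrays (the actual curvelet decomposition and the thesis's exact window are omitted; the 3x3 window is an assumption):

```python
import numpy as np

def local_energy(c):
    """Local energy: sum of squared coefficients over a 3x3 window
    (edge-padded so the output matches the input shape)."""
    h, w = c.shape
    p = np.pad(c ** 2, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def fuse_low(a, b):
    """Low-frequency rule: keep the coefficient whose neighborhood
    has the larger local energy."""
    return np.where(local_energy(a) >= local_energy(b), a, b)

def fuse_high(a, b):
    """High-frequency rule: absolute-maximum selection."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

In the full method these rules would be applied per subband after a curvelet decomposition of both sonar images, followed by the inverse transform of the fused coefficients.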
Scattering is caused by large suspended particles that degrade optical images captured underwater. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of ambient underwater environments are dominated by a bluish tone. Our key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes into account the possible presence of an artificial lighting source [108]. In Chapter 5, we describe a novel method for enhancing underwater optical images or videos using a guided multilayer filter and wavelength compensation. In certain circumstances, such as disaster recovery, the underwater environment must be monitored immediately by support robots or other underwater survey systems; however, owing to the inherent optical properties of the complex underwater environment, the captured images or videos are seriously distorted. Our key contributions include a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of the dark regions, and improved global contrast in which the finest details and edges are enhanced significantly [109]. The performance and benefits of the proposed approaches are concluded in Chapter 6. Comprehensive experiments and extensive comparison with existing related techniques demonstrate the accuracy and effectiveness of our proposed methods. (Doctoral dissertation, Kyushu Institute of Technology. Degree number: Doctor of Engineering No. 367; degree conferred March 25, 2014.) CHAPTER 1 INTRODUCTION | CHAPTER 2 MULTI-SOURCE IMAGES FUSION | CHAPTER 3 LASER IMAGES DENOISING | CHAPTER 4 OPTICAL IMAGE DEHAZING | CHAPTER 5 SHALLOW WATER DE-SCATTERING | CHAPTER 6 CONCLUSIONS
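The guided multilayer filtering of Chapter 5 builds on guided filtering, which smooths an image while preserving edges by fitting a local linear model against a guidance image. A minimal single-layer sketch for grayscale float images (a simplified stand-in for the dissertation's multilayer algorithm; window radius and regularization are assumed defaults):

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window via an edge-padded integral image."""
    h, w = img.shape
    n = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    s = c[n:n + h, n:n + w] - c[:h, n:n + w] - c[n:n + h, :w] + c[:h, :w]
    return s / (n * n)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p with guidance I: fit p ~ a*I + b
    in each window, then average the per-window coefficients."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

Using the (wavelength-compensated) image as its own guidance smooths backscatter haze while keeping edges; repeating the filter at several radii would give a multilayer decomposition.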

    Roadmap on 3D integral imaging: Sensing, processing, and display

    This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. It discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections in which experts present various aspects of the field: sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section presents its author's view of the progress, potential, and challenging issues in this field.

    Wavelet-Based Enhancement Technique for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of color digital images based on the wavelet transform domain are investigated in this dissertation research. A novel, fast, and robust wavelet-based dynamic range compression and local contrast enhancement (WDRC) algorithm has been developed to improve the visibility of digital images captured under non-uniform lighting conditions. A wavelet transform is used mainly for dimensionality reduction, so that the dynamic range compression with local contrast enhancement is applied only to the approximation coefficients, which are obtained by low-pass filtering and down-sampling the original intensity image. The normalized approximation coefficients are transformed using a hyperbolic sine curve, and contrast enhancement is realized by tuning the magnitude of each coefficient with respect to its surrounding coefficients. The transformed coefficients are then de-normalized to their original range. The detail coefficients are also modified to prevent edge deformation. The inverse wavelet transform then yields an intensity image with lower dynamic range and enhanced contrast. A color restoration process based on the relationship between the spectral bands and the luminance of the original image converts the enhanced intensity image back to a color image. Although the colors of the enhanced images produced by the proposed algorithm are consistent with those of the original image, the algorithm fails to produce color-constant results for some pathological scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback, so a different approach is required to tackle the color constancy problem. The illuminant is modeled as a linear shift in the image histogram, and the histogram is adjusted to discount the illuminant.
The WDRC algorithm is then applied with a slight modification: instead of a linear color restoration, a non-linear color restoration process employing the spectral context relationships of the original image is used. The proposed technique solves the color constancy issue, and the overall enhancement algorithm provides attractive results, improving visibility even for scenes with near-zero visibility conditions. This research also presents a new wavelet-based image interpolation technique for improving the visibility of tiny features in an image. In wavelet-domain interpolation techniques, the input image is usually treated as the low-pass filtered subband of an unknown wavelet-transformed high-resolution (HR) image, and the unknown HR image is produced by estimating the wavelet coefficients of the high-pass filtered subbands. The same approach is used here to obtain an initial estimate of the HR image by zero-filling the high-pass filtered subbands. Detail coefficients are then estimated by feeding this initial estimate to an undecimated wavelet transform (UWT). Taking an inverse transform after replacing the approximation coefficients of the UWT with the initially estimated HR image yields the final interpolated image. Experimental results demonstrated the superiority of the proposed algorithms over state-of-the-art enhancement and interpolation techniques.
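The zero-filling step that produces the initial HR estimate can be sketched with a single-level orthonormal Haar transform: the input image is treated as the LL subband, the three detail subbands are set to zero, and the inverse transform doubles the resolution. A minimal sketch (Haar is chosen for simplicity; the dissertation's wavelet basis and UWT refinement step are omitted):

```python
import numpy as np

def ihaar2(ll, lh, hl, hh):
    """Inverse one-level orthonormal 2D Haar transform."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def wavelet_upsample(img):
    """Initial HR estimate: treat img as the LL subband, zero the
    detail subbands, and invert the transform."""
    z = np.zeros_like(img)
    return ihaar2(img * 2, z, z, z)  # *2 preserves mean brightness
```

With zeroed details the Haar inverse reduces to pixel replication; the point of the full method is that the subsequent UWT pass estimates non-zero detail coefficients from this crude initial guess.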

    3D radiative transfer modeling for simulating images and data of satellite and airborne spectroradiometers and LiDARs over vegetated and urban landscapes

    Remote Sensing (RS) data depend on radiation interaction with Earth landscapes and the atmosphere, and also on instrumental (spectral band, spatial resolution, field of view (FOV),...) and experimental (landscape/atmosphere architecture and optical properties,...) conditions. Fast developments in RS techniques require appropriate tools for validating their working principles and improving RS operational use. Radiative Transfer Models (RTMs) simulate quantities (bidirectional reflectance distribution function: BRDF; directional brightness temperature: BTDF; LiDAR waveform; ...) that aim to approximate actual RS data. Hence, they are the reference tools for simulating RS data for many applications: preparation and validation of RS systems, inversion of RS data, etc. The Discrete Anisotropic Radiative Transfer (DART) model is recognized as the most complete and efficient RTM. During my PhD work, I further improved its accuracy and functionality through the modeling work described below.
1. Discretizing the space of radiation propagation directions. DART simulates radiation propagation along a finite number of directions in Earth/atmosphere scenes. Classical methods do not accurately define the solid-angle centroids and geometric shapes of these directions, which results in non-conservation of energy or imprecise modeling when few directions are used. I solved this problem by developing a novel method that creates discrete directions with well-defined shapes.
2. Simulating images of spectroradiometers with finite FOV. Existing RTMs are pixel- or image-level models. Pixel-level models use an abstract landscape (scene) description (leaf area index, overall fraction of shadows,...)
to calculate quantities (BRDF, BTDF,...) for the whole scene. Image-level models generate scene radiance, BRDF, or BTDF images by orthographic projection of rays exiting the scene onto an image plane. All such models neglect the multi-directional acquisition within a sensor's finite FOV, which is unrealistic. Hence, I implemented a sensor-level model, called converging tracking and perspective projection (CTPP), to simulate camera and cross-track sensor images by coupling DART with classical perspective and parallel-perspective projection.
3. Simulating LiDAR data. Many RTMs simulate LiDAR waveforms, but their results are either inaccurate (abstract scene description, first-order scattering only,...) or require tremendous computation time to become accurate (e.g., Monte-Carlo (MC) models). With a novel quasi-MC method, DART provides accurate results with fast processing speed for any instrumental configuration (platform altitude, LiDAR orientation, footprint size,...). It simulates satellite, airborne, and terrestrial multi-pulse laser data for realistic configurations (LiDAR position, platform trajectory, scan angle range,...). These data can be converted into an industrial LiDAR format for processing by dedicated LiDAR software. A post-processing method converts LiDAR waveforms into photon-counting LiDAR data by modeling single-photon detector acquisition.
4. In-flight fusion of LiDAR and imaging spectroscopy. DART can combine multi-pulse LiDAR and cross-track imaging spectroscopy (hyperspectral sensors,...). It is a two-source (sun, LiDAR laser), one-sensor (LiDAR telescope) system. First, a LiDAR multi-pulse acquisition and a sun-induced spectro-radiometer radiance image are simulated. Then, the LiDAR FOV regions projected onto the ground image plane are segmented in the spectro-radiometer image, which is also projected onto the ground image plane.
I applied this to simulate solar noise in the LiDAR signal and to fuse LiDAR data with spectro-radiometer images. To further improve accuracy when simulating actual LiDAR and spectro-radiometer acquisitions, DART can also import actual acquisition configurations (platform trajectory, view angle per spectro-radiometer pixel / LiDAR pulse). Moreover, I introduced multi-thread parallelization, which greatly accelerates DART simulations.
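The energy-conservation requirement of point 1 can be illustrated with a simple direction discretization: partition the sphere into latitude bands that are uniform in cos(zenith), split each band into equal-azimuth cells, and take each cell's centroid as the discrete direction. This is not DART's actual scheme, only a sketch of the property it guarantees: every cell has a well-defined centroid and an exact solid angle, and the solid angles sum to exactly 4*pi:

```python
import numpy as np

def discrete_directions(n_bands, cells_per_band):
    """Partition the sphere into n_bands * cells_per_band cells.

    Returns (solid_angles, centroids): exact per-cell solid angles (sr)
    and unit centroid direction vectors."""
    # Band edges uniform in mu = cos(zenith) give equal-area bands.
    mu = np.linspace(1.0, -1.0, n_bands + 1)
    d_phi = 2 * np.pi / cells_per_band
    omegas, centroids = [], []
    for k in range(n_bands):
        omega = (mu[k] - mu[k + 1]) * d_phi      # exact cell solid angle
        mu_c = 0.5 * (mu[k] + mu[k + 1])          # centroid cos(zenith)
        s = np.sqrt(1.0 - mu_c ** 2)
        for j in range(cells_per_band):
            phi_c = (j + 0.5) * d_phi             # centroid azimuth
            centroids.append((s * np.cos(phi_c), s * np.sin(phi_c), mu_c))
            omegas.append(omega)
    return np.array(omegas), np.array(centroids)
```

Because the cell solid angles are exact, energy redistributed over these directions is conserved regardless of how few directions are used.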

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects are covered to provide a broad understanding of the surveyed literature: the datasets used, the challenges researchers have faced, motivations, and recommendations for diminishing the reported obstacles. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search covers three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021; these indices are selected because they provide sufficient coverage. After defining the inclusion and exclusion criteria, we include 152 articles in the final set. Of these, 55 articles focus on various studies that conducted image dehazing, and 13 are review papers based on scenarios and general overviews. Most of the included articles (84/152) center on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We conducted an objective image-quality-assessment comparison of various image dehazing algorithms. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas.
    We believe that the results of this study can serve as a useful guideline for practitioners seeking a comprehensive view of image dehazing.
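Most algorithms surveyed in this field build on the atmospheric scattering model I = J*t + A*(1 - t), where J is the haze-free scene radiance, A the global atmospheric light, and t the transmission map. A minimal sketch of the model inversion; estimating A and t (e.g., via a dark-channel-style prior) is the hard part and is assumed done elsewhere:

```python
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    A (global atmospheric light, scalar or per-channel) and t
    (transmission map) are assumed to be estimated beforehand."""
    t = np.clip(t, t_min, 1.0)   # floor t to avoid amplifying noise
    return (I - A) / t + A
```

The clipping floor on t is the standard safeguard: where transmission approaches zero, the division would otherwise blow up sensor noise in the recovered radiance.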