
    Fast underwater color correction using integral images

    Underwater image processing has to face the loss of color and contrast that occurs when images are acquired at a certain depth and range. The longer wavelengths of sunlight, such as red or orange, are rapidly absorbed by the water body, while the shorter ones scatter more strongly. As a result, at larger distances the scene colors appear bluish-green as well as blurry. The loss of color increases not only vertically through the water column but also horizontally, so that subjects further from the camera appear colorless and indistinguishable, lacking visible detail. This paper presents a fast enhancement method for color correction of underwater images. The method is based on the gray-world assumption applied in the Ruderman opponent color space and is able to cope with non-uniformly illuminated scenes. Integral images are exploited to perform fast color correction that takes into account locally changing luminance and chrominance. Owing to its low computational cost, the method is suitable for real-time applications, ensuring realistic object colors, more visible details, and enhanced visual quality.
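The integral-image trick this abstract relies on can be sketched in a few lines. This is a generic NumPy illustration working directly in RGB rather than the paper's Ruderman opponent space; the function names and the window radius are ours, not the authors'.

```python
import numpy as np

def integral_image(channel):
    """Summed-area table: cumulative sums along both axes."""
    return channel.cumsum(axis=0).cumsum(axis=1)

def local_mean(channel, radius):
    """Mean over each pixel's (2*radius+1)^2 neighborhood, computed in O(1)
    per pixel from four corner lookups into the integral image."""
    h, w = channel.shape
    k = 2 * radius + 1
    padded = np.pad(channel.astype(np.float64), radius + 1, mode='edge')
    ii = integral_image(padded)
    total = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
             - ii[k:k + h, :w] + ii[:h, :w])
    return total / (k * k)

def gray_world_local(img, radius=16):
    """Locally adaptive gray-world: scale each channel by the ratio of the
    local luminance mean to that channel's local mean."""
    img = img.astype(np.float64)
    lum_mean = local_mean(img.mean(axis=2), radius)
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = img[..., c] * lum_mean / np.maximum(
            local_mean(img[..., c], radius), 1e-6)
    return np.clip(out, 0.0, 255.0)
```

Because each window sum costs four lookups regardless of `radius`, the per-pixel cost is constant, which is what makes the approach viable in real time.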

    Exploring Dehazing Methods For Remote Sensing Imagery: A Review

    Remote sensing imagery plays a pivotal role in numerous applications, from environmental monitoring to disaster management. However, atmospheric haze often reduces the quality and interpretability of these images: it lowers the visibility of remotely sensed scenes by reducing contrast and causing colour distortion. Dehazing techniques are employed to improve the perceptibility and clarity of images affected by haze. In this review, we delve into dehazing methods specifically tailored for remote sensing imagery, aiming to shed light on their efficacy and applicability. We focus on a comprehensive comparison of four prominent dehazing techniques: Histogram Equalization (HE), Light Channel Prior (LCP), Contrast Enhancement Filters (CEF), and Dark Channel Prior (DCP). These methods, representing a spectrum of approaches, are evaluated on key image quality metrics, including PSNR, MSE, and SSIM.
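The quality metrics named at the end of this abstract are standard and easy to state concretely. A minimal sketch follows; `ssim_global` is the single-window simplification computed from global image statistics, whereas standard SSIM implementations average the same formula over local windows.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(a, b)
    return np.inf if err == 0 else 10 * np.log10(peak ** 2 / err)

def ssim_global(a, b, peak=255.0):
    """SSIM from global statistics (one window covering the whole image)."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Note that PSNR and MSE are monotonically related, so a review reporting both is really varying two independent axes: pixel error (MSE/PSNR) and structural similarity (SSIM).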

    A Single Image Defogging Method Using Dark Channel Prior and

    ์ด ๋…ผ๋ฌธ์€ ๋‹จ์ผ ์˜์ƒ์—์„œ ์•ˆ๊ฐœ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ๊ธฐ๋ฒ•(defogging method)์„ ์—ฐ๊ตฌํ•œ ๋…ผ๋ฌธ์ด๋‹ค. ๊ธฐ์กด์˜ DCP (dark channel prior) ๊ธฐ๋ฒ•์— ํžˆ์Šคํ† ๊ทธ๋žจ ๋ถ„์„์„ ๋„์ž…ํ•˜์—ฌ ๋น„์šฉํ•จ์ˆ˜๋ฅผ ์ œ์•ˆํ•˜์˜€๋‹ค. ๋˜ํ•œ ์•ˆ๊ฐœ ์ œ๊ฑฐ ๊ณผ์ • ์—์„œ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์ƒ‰์ •๋ณด์˜ ์™œ๊ณก์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•ด HSI ์ƒ‰๊ณต๊ฐ„์—์„œ ์ƒ‰์ƒ(hue)์™€ ์ฑ„๋„(saturation)๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ๋น„์šฉํ•จ์ˆ˜์— ์ ์šฉํ•˜์˜€๋‹ค. ์ œ์•ˆํ•œ ์•ˆ๊ฐœ ์ œ๊ฑฐ ๊ธฐ๋ฒ•์€ DCP ๋น„์šฉ๊ณผ ์ง€์—ญ์  ํžˆ์Šคํ† ๊ทธ๋žจ ๊ท ๋“ฑํ™” ๋น„์šฉ์„ ์ด์šฉํ•˜์—ฌ ์ „๋‹ฌ๋Ÿ‰(transmiss -ion) ์ถ”์ • ํ•จ์ˆ˜๋ฅผ ๋ชจ๋ธ๋งํ•˜์˜€๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ๊ธฐ์กด์˜ DCP ๊ธฐ๋ฒ•์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์•ˆ๊ฐœ ์ œ๊ฑฐ ๊ฒฐ๊ณผ์˜ ์ €๋Œ€ ๋น„ ํ˜„์ƒ ๋ฐ ํ›„๊ด‘ ํ˜„์ƒ(halo effect)์„ ๊ฐœ์„ ํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๋˜ํ•œ ์•ˆ๊ฐœ ์ œ๊ฑฐ ์ „ํ›„ ์ƒ‰๊ณต๊ฐ„ ์ฑ„๋„๋ณ„ ๋ณ€ํ™” ํŠน์„ฑ ์„ HSI ์ƒ‰๊ณต๊ฐ„์—์„œ ๋ถ„์„ํ•˜์—ฌ ์–ป์€ ์ƒ‰์ƒ ๋ฐ ์ฑ„๋„ ๊ฐ€์ค‘์น˜๋ฅผ ์ ์šฉํ•จ์œผ๋กœ์จ ์•ˆ๊ฐœ ์ œ๊ฑฐ ์˜์ƒ์—์„œ ๋ฐœ์ƒํ•˜๋Š” ์ƒ‰์ƒ ๋ฐ ์ฑ„๋„์˜ ์™œ๊ณก์„ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. ์ด ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์€ ๊ธฐ์กด์˜ ๋ฐฉ๋ฒ•๋“ค์— ๋น„ํ•ด ์ „๋‹ฌ๋Ÿ‰ ์ถ”์ • ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•˜๋ฉฐ, ํŠนํžˆ ์•ˆ๊ฐœ ์ œ๊ฑฐ ๊ณผ ์ •์—์„œ ์•ˆ๊ฐœ ์•„๋‹Œ ๋ฌด์ฑ„์ƒ‰ ๋ฌผ์ฒด์˜ ์˜์—ญ์—์„œ ์ „๋‹ฌ๋Ÿ‰์ด ํ•„์š” ์ด์ƒ์œผ๋กœ ํฌ๊ฒŒ ์ถ”์ •๋จ์œผ๋กœ์จ ๋ฐœ์ƒํ•˜๋Š” ์ƒ‰์ • ๋ณด ์™œ๊ณก์„ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค€๋‹ค.์ œ 1 ์žฅ ์„œ ๋ก  ............................................................. 1 ์ œ 2 ์žฅ ์•ˆ๊ฐœ ๋ชจ๋ธ๋ง์„ ์ด์šฉํ•œ ์•ˆ๊ฐœ ์ œ๊ฑฐ ๋ฐฉ๋ฒ• .................. 4 2.1 ์•ˆ๊ฐœ ๋ชจ๋ธ๋ง ....................................................... 4 2.2์ง€์—ญ์  ๋Œ€๋น„ ํ–ฅ์ƒ ๋ฐฉ๋ฒ• .......................................... 5 2.3DCP๋ฅผ ์ด์šฉํ•œ ๋ฐฉ๋ฒ• ............................................. 11 ์ œ 3 ์žฅ ์ œ์•ˆํ•œ ์•ˆ๊ฐœ ์ œ๊ฑฐ ๋ฐฉ๋ฒ• ..................................... 17 3.1 ์•ˆ๊ฐœ๊ฐ’ ์ถ”์ • ...................................................... 17 3.2 ์ฑ„๋„ ๋ณด์ • ๊ฐ€์ค‘์น˜ ............................................... 
20 3.3 ์•ˆ๊ฐœ ์ œ๊ฑฐ๋ฅผ ์œ„ํ•œ ๋น„์šฉ ํ•จ์ˆ˜ ................................. 24 3.3.1 DCP ๋น„์šฉํ•จ์ˆ˜ ํ•ญ ....................................... 24 3.3.2 ํžˆ์Šคํ† ๊ทธ๋žจ ๊ท ๋“ฑํ™” ๋น„์šฉํ•จ์ˆ˜ ํ•ญ .................... 24 3.3.3 ์ƒ‰์ƒ ๊ฐ€์ค‘์น˜ .............................................. 24 3.4 ์ „๋‹ฌ๋Ÿ‰ ์ •์ œ ..................................................... 26 ์ œ 4 ์žฅ ์‹คํ—˜ ๋ฐ ๊ณ ์ฐฐ ................................................. 29 ์ œ 5 ์žฅ ๊ฒฐ ๋ก  .......................................................... 35 ์ฐธ๊ณ  ๋ฌธํ—Œ ................................................................ 3
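The DCP baseline that this thesis extends is well documented and can be sketched compactly. This shows only the standard dark-channel transmission estimate, not the thesis's histogram-equalization cost or HSI weights; patch size and `omega` follow common practice and are assumptions here.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over the color channels, followed by a
    min-filter over a patch x patch neighborhood."""
    r = patch // 2
    mins = np.pad(img.min(axis=2), r, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(mins, (patch, patch))
    return windows.min(axis=(2, 3))

def estimate_transmission(img, airlight, omega=0.95, patch=3):
    """Standard DCP transmission estimate: t = 1 - omega * dark(I / A),
    where A is the atmospheric light per channel."""
    normalized = img.astype(np.float64) / airlight.reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalized, patch)
```

The failure mode the abstract targets is visible in this formula: a bright achromatic object has a large dark channel even without fog, so its transmission is underestimated (fog is overestimated), which is exactly where the proposed hue/saturation weights intervene.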

    Haze Effects On Satellite Remote Sensing Imagery And Their Corrections

    Imagery recorded using satellite sensors operating at visible wavelengths can be contaminated by atmospheric haze originating from large-scale biomass burning. Such contamination can reduce the reliability of the imagery, so an effective method for removing it is crucial. The principal aim of this study is to investigate the effects of haze on remote sensing imagery and to develop a method for removing them. To better understand the behaviour of haze, its effects on satellite imagery were studied first. A methodology for removing haze based on haze subtraction and filtering was then developed and evaluated by means of signal-to-noise ratio (SNR) and classification accuracy. The results show that the haze removal method improves haze-affected imagery both qualitatively and quantitatively.
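The subtraction step described here is in the spirit of classic dark-object subtraction, sketched below. The percentile threshold and the mean/standard-deviation SNR definition are our assumptions for illustration; the paper's exact procedure and SNR formula may differ.

```python
import numpy as np

def dark_object_subtraction(band, percentile=1.0):
    """Estimate path radiance as the band's low-percentile 'dark object'
    value and subtract it from every pixel, clipping at zero."""
    haze = np.percentile(band, percentile)
    return np.clip(band - haze, 0.0, None)

def snr_db(band):
    """Per-band signal-to-noise ratio: mean over standard deviation, in dB."""
    return 20 * np.log10(band.mean() / band.std())
```

Because additive haze raises the mean without adding scene signal, subtracting the dark-object offset typically restores contrast, which is what the SNR and classification-accuracy evaluations then quantify.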

    Contrast Enhancement for Images in Turbid Water

    Absorption, scattering, and color distortion are three major degradation factors in underwater optical imaging. Light rays are absorbed while passing through water, and absorption rates depend on the wavelength of the light. Scattering is caused by large suspended particles, which are always present in an underwater environment. Color distortion occurs because the attenuation ratio is inversely proportional to the wavelength of light passing through a unit length of water. Consequently, underwater images are dark, low in contrast, and dominated by a bluish tone. In this paper, we propose a novel underwater imaging model that compensates for the attenuation discrepancy along the propagation path. In addition, we develop a robust color-lines-based ambient light estimator and a locally adaptive filtering algorithm for enhancing underwater images in shallow oceans. Furthermore, we propose a spectral-characteristic-based color correction algorithm to recover the distorted color. The enhanced images have a reasonable noise level after illumination compensation in the dark regions, and demonstrate improved global contrast in which the finest details and edges are enhanced significantly.
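The wavelength-dependent attenuation that motivates this paper follows the Beer-Lambert law, and inverting it is the core of attenuation compensation. In this sketch the per-channel coefficients are illustrative assumptions (red attenuates fastest in water), not values from the paper, and the propagation distance is assumed known.

```python
import numpy as np

# Illustrative attenuation coefficients (1/m) for R, G, B — assumptions,
# chosen only to reflect that red light dies out fastest in water.
BETA = np.array([0.6, 0.1, 0.05])

def attenuate(radiance, distance):
    """Beer-Lambert attenuation over `distance` metres of water."""
    return radiance * np.exp(-BETA * distance)

def compensate(observed, distance):
    """Invert the per-channel attenuation for a known propagation distance."""
    return observed * np.exp(BETA * distance)
```

Amplifying the heavily attenuated red channel also amplifies its noise, which is why the abstract pairs compensation with locally adaptive filtering to keep the noise level reasonable.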

    Hierarchical rank-based veiling light estimation for underwater dehazing


    Switching GAN-based Image Filters to Improve Perception for Autonomous Driving

    Autonomous driving holds the potential to increase human productivity, reduce accidents caused by human error, allow better utilization of roads, reduce congestion, free up parking space, and provide many other advantages. Perception in Autonomous Vehicles (AV) refers to the use of sensors to perceive the world, e.g. using cameras to detect and classify objects. Traffic scene understanding is a key research problem in perception for autonomous driving, and semantic segmentation is a useful method to address it. Adverse weather conditions are a reality that AVs must contend with: rain, snow, haze, and similar conditions can drastically reduce visibility and thus degrade computer vision models. Perception models for AVs are currently designed for, and tested on, predominantly ideal weather conditions under good illumination. The most complete solution may be to train the segmentation networks on all possible adverse conditions, but a dataset for making a segmentation network robust to rain would need to cover these conditions well. Moreover, labeling is expensive, and particularly so for semantic segmentation, as each object in a scene must be identified and each pixel annotated with the right class. Adverse weather is thus a challenging problem for perception models in AVs. This thesis explores the use of Generative Adversarial Networks (GAN) to improve semantic segmentation. We design a framework and a methodology to evaluate the proposed approach. The framework consists of an Adversity Detector and a series of denoising filters. The Adversity Detector is an image classifier that takes as input clear-weather or adverse-weather scenes and attempts to predict whether the given image contains rain, puddles, or other conditions that can adversely affect semantic segmentation.
The filters are denoising generative adversarial networks trained to remove the adverse conditions from images, translating each image to the domain the segmentation network has been trained on, i.e. clear-weather images. We use the prediction from the Adversity Detector to choose which GAN filter to apply. The methodology we devise for evaluating our approach uses the trained filters to output sets of images on which we can then run segmentation tasks. This, we argue, is a better metric for evaluating the GANs than similarity measures such as SSIM. We also use synthetic data so we can perform a systematic evaluation of our technique. We train two kinds of GANs: one that requires paired data (Pix2Pix) and one that does not (CycleGAN). We have concluded that GAN architectures that use unpaired data are not sufficiently good models for denoising, so we train the denoising filters using the paired architecture; we found them easy to train, and they show good results. While these filters do not outperform a segmentation network trained directly on adverse-weather data, training such a network requires labelled data, which is expensive to collect and annotate, particularly for adverse weather and lighting conditions. We implement our proposed framework and report a 17% increase in segmentation performance over the baseline results obtained without our framework.
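The switching logic of the framework — classify, then dispatch to the matching filter — reduces to a small routing function. This is our schematic rendering, with the detector and filters stood in by plain callables; the thesis's actual networks and label set are not reproduced here.

```python
def route(image, adversity_detector, filters):
    """Run the Adversity Detector on the image, then apply the denoising
    filter registered for the predicted label. Labels with no registered
    filter (e.g. 'clear') pass the image through unchanged."""
    label = adversity_detector(image)
    return filters.get(label, lambda x: x)(image)
```

Keeping the clear-weather path as an identity function matters: applying a denoising GAN to an already-clean image can only introduce artifacts, so the detector acts as a gate as much as a selector.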