1,623 research outputs found

    Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior

    Full text link
    Dark Channel Prior (DCP) is a widely recognized traditional dehazing algorithm. However, it may fail in bright regions, and the restored image tends to be darker than the hazy input. In this paper, we propose an effective method to optimize DCP. We build a multiple linear regression haze-removal model based on the DCP atmospheric scattering model and train it on the RESIDE dataset, aiming to reduce the errors caused by the rough estimation of the transmission map t(x) and the atmospheric light A. The RESIDE dataset provides enough synthetic hazy images and their corresponding ground-truth images for training and testing. We compare the performance of different dehazing algorithms in terms of two important full-reference metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The experimental results show that our model achieves the highest SSIM value, and its PSNR value is also higher than that of most state-of-the-art dehazing algorithms. Our results also overcome the weakness of DCP on real-world hazy images.
    Comment: IEEE CPS (CSCI 2018 Int'l Conference)
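
    For orientation, the sketch below shows the classic DCP pipeline that this paper builds on: the dark channel, a simple estimate of the atmospheric light A, the coarse transmission t(x), and the inversion of the scattering model I = J·t + A·(1 − t). Function names, the patch size, omega and t0 are common defaults used for illustration, not values taken from the paper, and the paper's trained regression step that refines t(x) and A is not reproduced here.

```python
# Illustrative sketch of the classic DCP pipeline the paper starts from.
# Patch size, omega and t0 are common defaults, not values from the paper.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark, top_frac=0.001):
    """Average the hazy image over the pixels with the largest dark-channel values."""
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)          # A, one value per channel

def transmission(img, A, omega=0.95, patch=15):
    """Coarse t(x) from the scattering model I = J*t + A*(1 - t)."""
    return 1.0 - omega * dark_channel(img / A, patch)

def recover(img, A, t, t0=0.1):
    """Invert the scattering model: J = (I - A) / max(t, t0) + A."""
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

    As the abstract describes, the proposed model replaces the hand-tuned estimates of t(x) and A produced by such a pipeline with a multiple linear regression trained on RESIDE image pairs.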

    Sky detection and log illumination refinement for PDE-based hazy image contrast enhancement

    Full text link
    This report presents the results of a sky detection technique used to improve the performance of a previously developed partial differential equation (PDE)-based hazy image enhancement algorithm. Additionally, an alternative method is proposed that uses a log illumination refinement function to improve de-hazing results while avoiding over-enhancement of sky or homogeneous regions. The algorithms were tested on several benchmark and calibration images and compared with several standard algorithms from the literature. Results indicate that the algorithms yield mostly consistent output and surpass several of the other algorithms in terms of colour and contrast enhancement, in addition to improved edge visibility.
    Comment: 22 pages, 13 figures, 5 tables
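
    As a rough illustration of the sky-detection step (the report's own detector and its log illumination refinement are not reproduced here), a minimal sketch that flags bright, low-gradient pixels as likely sky might look as follows; the threshold values are placeholders, not the report's.

```python
# Rough illustration of sky detection as "bright and locally smooth" pixels,
# used to mask regions that should not be aggressively enhanced. Thresholds
# are placeholders, not the values used in the report.
import numpy as np
from scipy.ndimage import sobel

def sky_mask(gray, bright_thr=0.7, grad_thr=0.1):
    """gray: single-channel image scaled to [0, 1]; returns a boolean mask."""
    grad = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))  # gradient magnitude
    return (gray > bright_thr) & (grad < grad_thr)
```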

    Fractional Multiscale Fusion-based De-hazing

    Full text link
    This report presents the results of a proposed multi-scale fusion-based single image de-hazing algorithm, which can also be used for underwater image enhancement. Furthermore, the algorithm was designed for very fast operation and minimal run-time. The proposed scheme is faster than existing algorithms for both de-hazing and underwater image enhancement and is amenable to digital hardware implementation. Experiments indicate mostly consistent and good results for both categories of images when compared with other algorithms from the literature.
    Comment: 23 pages, 13 figures, 2 tables

    Image Dehazing using Bilinear Composition Loss Function

    Full text link
    In this paper, we introduce a bilinear composition loss function to address the problem of image dehazing. Previous methods in image dehazing use a two-stage approach which first estimates the transmission map and then estimates the clear image. The drawback of a two-stage method is that it tends to boost local image artifacts such as noise, aliasing and blocking. This is especially the case for heavily hazy images captured with a low-quality device. Our method is based on convolutional neural networks. Unique to our method is the bilinear composition loss function, which directly models the correlations between the transmission map, the clear image, and the atmospheric light. This allows errors to be back-propagated to each sub-network concurrently, while maintaining the composition constraint to avoid overfitting of each sub-network. We evaluate the effectiveness of our proposed method using both synthetic and real-world examples. Extensive experiments show that our method outperforms state-of-the-art methods, especially for hazy images with severe noise and compression artifacts.
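
    A minimal sketch of a composition-style loss built on the scattering model I = J·t + A·(1 − t) is shown below; the tensor shapes and the plain MSE penalty are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a composition loss on I = J*t + A*(1 - t): the recomposed
# image is compared with the observed hazy input, so gradients reach the
# transmission, clear-image and airlight branches jointly.
import torch
import torch.nn.functional as F

def composition_loss(I_hazy, J_pred, t_pred, A_pred):
    # I_hazy, J_pred: (N, 3, H, W); t_pred: (N, 1, H, W); A_pred: (N, 3, 1, 1)
    I_recomposed = J_pred * t_pred + A_pred * (1.0 - t_pred)
    return F.mse_loss(I_recomposed, I_hazy)
```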

    Unsupervised Single Image Dehazing Using Dark Channel Prior Loss

    Full text link
    Single image dehazing is a critical stage in many modern-day autonomous vision applications. Early prior-based methods often involved a time-consuming minimization of a hand-crafted energy function. Recent learning-based approaches utilize the representational power of deep neural networks (DNNs) to learn the underlying transformation between hazy and clear images. Due to inherent limitations in collecting matching clear and hazy images, these methods resort to training on synthetic data constructed from indoor images and corresponding depth information, which may result in a domain shift when treating outdoor scenes. We propose a completely unsupervised method of training via minimization of the well-known Dark Channel Prior (DCP) energy function. Instead of feeding the network with synthetic data, we solely use real-world outdoor images and tune the network's parameters by directly minimizing the DCP. Although our "Deep DCP" technique can be regarded as a fast approximator of DCP, it actually improves its results significantly. This suggests an additional regularization obtained via the network and learning process. Experiments show that our method performs on par with large-scale supervised methods.
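
    The core idea of training on the prior itself, rather than on paired data, can be sketched as below: the dark channel of the network's prediction on real outdoor images is driven towards zero. The patch size and the bare mean-absolute penalty are simplifications, since the full DCP energy function the paper minimizes may include further terms.

```python
# Minimal sketch: penalise the dark channel of the dehazed prediction so it is
# pushed towards zero, as the Dark Channel Prior suggests for haze-free outdoor
# images. A min-pool is implemented as a max-pool of the negated tensor.
import torch
import torch.nn.functional as F

def dark_channel_loss(J_pred, patch=15):
    min_rgb = J_pred.min(dim=1, keepdim=True).values               # (N, 1, H, W)
    dark = -F.max_pool2d(-min_rgb, patch, stride=1, padding=patch // 2)
    return dark.abs().mean()
```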

    Measuring Visibility using Atmospheric Transmission and Digital Surface Model

    Full text link
    Reliable and accurate assessment of visibility is essential for safe air traffic. In order to overcome the drawbacks of the currently subjective reports from human observers, we present an approach to automatically derive visibility measures by means of image processing. It first exploits image-based estimation of the atmospheric transmission, describing the portion of the light that is not scattered by atmospheric phenomena (e.g., haze, fog, smoke) and reaches the camera. Once the atmospheric transmission is estimated, a 3D representation of the vicinity (digital surface model, DSM) is used to compute depth measurements for the haze-free pixels and then derive a global visibility estimate for the airport. Results on foggy images demonstrate the validity of the proposed method.
    Comment: Presented at OAGM Workshop, 2015 (arXiv:1505.01065)
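
    The link between estimated transmission, DSM-derived depth and a visibility figure can be illustrated with the standard Koschmieder relation t(x) = exp(−β·d(x)) and the 2% contrast threshold V ≈ 3.912/β; the median aggregation in the sketch below is an assumption, not necessarily the paper's rule.

```python
# Back-of-the-envelope visibility from per-pixel transmission and metric depth
# (Koschmieder model, 2% contrast threshold). Median aggregation is a stand-in
# for whatever robust estimate the paper actually uses.
import numpy as np

def visibility_metres(t, depth, eps=1e-6):
    """t: estimated transmission in (0, 1]; depth: metric depth from the DSM."""
    beta = -np.log(np.clip(t, eps, 1.0)) / np.maximum(depth, eps)  # extinction coeff.
    return 3.912 / max(float(np.median(beta)), eps)
```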

    Analysis of Probabilistic multi-scale fractional order fusion-based de-hazing algorithm

    Full text link
    In this report, a de-hazing algorithm based on probability and multi-scale fractional order-based fusion is proposed. The proposed scheme improves on a previously implemented multi-scale fractional order-based fusion by augmenting its local contrast and edge sharpening features. It also brightens de-hazed images while avoiding over-enhancement of sky regions. The results of the proposed algorithm are analyzed and compared with existing methods from the literature and indicate better performance in most cases.
    Comment: 22 pages, 8 figures, journal preprint

    Gated Fusion Network for Single Image Dehazing

    Full text link
    In this paper, we propose an efficient algorithm to directly restore a clear image from a hazy input. The proposed algorithm hinges on an end-to-end trainable neural network that consists of an encoder and a decoder. The encoder is exploited to capture the context of the derived input images, while the decoder is employed to estimate the contribution of each input to the final dehazed result using the representations learned by the encoder. The constructed network adopts a novel fusion-based strategy which derives three inputs from the original hazy image by applying White Balance (WB), Contrast Enhancement (CE), and Gamma Correction (GC). We compute pixel-wise confidence maps based on the appearance differences between these derived inputs in order to blend their information and preserve the regions with pleasant visibility. The final dehazed image is yielded by gating the important features of the derived inputs. To train the network, we introduce a multi-scale approach so that halo artifacts can be avoided. Extensive experimental results on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against the state-of-the-art algorithms.
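
    The gating step can be pictured as in the sketch below: the three derived inputs (WB, CE, GC versions of the hazy image) are blended with per-pixel confidence maps predicted by the network. The softmax normalization is an assumption for illustration, and the derivation of the three inputs and the encoder-decoder itself are omitted.

```python
# Sketch of gated fusion: blend three derived inputs with learned per-pixel
# confidence maps. The network that predicts the maps and the exact
# normalization are not reproduced from the paper.
import torch

def gated_fusion(derived_inputs, confidence_logits):
    """derived_inputs: list of three (N, 3, H, W) images (WB, CE, GC versions);
    confidence_logits: (N, 3, H, W), one channel of raw scores per input."""
    weights = torch.softmax(confidence_logits, dim=1)       # per-pixel, sums to 1
    stacked = torch.stack(derived_inputs, dim=1)            # (N, 3, 3, H, W)
    return (weights.unsqueeze(2) * stacked).sum(dim=1)      # fused (N, 3, H, W)
```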

    Challenges in video based object detection in maritime scenario using computer vision

    Get PDF
    This paper discusses the technical challenges in maritime image processing and machine vision problems for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video are very challenging in maritime scenarios, and more advanced problems such as background subtraction and object detection in video streams are harder still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the difficulties discussed here.

    Optical Observations of Gamma-Ray Bursts, the Discovery of Supernovae 2005bv, 2005ee, and 2006ak, and Searches for Transients Using the "MASTER" Robotic Telescope

    Full text link
    We present the results of observations obtained using the MASTER robotic telescope in 2005-2006, including the earliest observations of the optical emission of the gamma-ray bursts GRB 050824 and GRB 060926. Together with later observations, these data yield the brightness-variation law t^(-0.55 ± 0.05) for GRB 050824. An optical flare was detected in GRB 060926: a brightness enhancement that repeated the behavior observed in the X-ray variations. The spectrum of GRB 060926 is found to be F_E ~ E^(-β), where β = 1.0 ± 0.2. Limits on the optical brightnesses of 26 gamma-ray bursts have been derived, 9 of these for the first time. Data for more than 90% of the accessible sky down to 19^m were taken and reduced in real time during the survey, and a database has been compiled from these data. Limits have been placed on the rate of optical flares that are not associated with detected gamma-ray bursts, and on the opening angle of gamma-ray burst beams. Three new supernovae have been discovered: SN 2005bv (type Ia), the first to be discovered on Russian territory; SN 2005ee, one of the most powerful type II supernovae known; and SN 2006ak (type Ia). We have obtained an image of SN 2006X during the growth stage and a light curve that fully describes the brightness maximum and exponential decay. A new method for searching for optical transients of gamma-ray bursts detected using triangulation from various spacecraft is proposed and tested.
    Comment: 30 pages, 18 figures, 9 tables