
    UDP-YOLO: High Efficiency and Real-Time Performance of Autonomous Driving Technology

    In recent years, autonomous driving technology has gradually entered our field of vision. It senses the surrounding environment using radar, lidar, ultrasound, GPS, computer vision, and other technologies, identifies obstacles and various signboards, and plans a suitable path to control the vehicle. However, problems arise when this technology is applied in foggy environments: the probability of recognizing objects is low, and some objects cannot be recognized at all, because the blur introduced by fog leads to incorrectly planned paths. In view of this defect, and considering that autonomous driving must respond quickly to objects while the vehicle is in motion, this paper extends the dark channel prior defogging algorithm and proposes the UDP-YOLO network for application to autonomous driving. The work is divided into two parts: 1. Image processing: the dataset is first classified as foggy or fog-free, the foggy images are then defogged with the defogging algorithm, and the defogged images finally undergo adaptive brightness enhancement; 2. Target detection: the UDP-YOLO network proposed in this paper is used to detect objects in the defogged dataset. The results show that the performance of the proposed model is greatly improved while maintaining a balance with speed.
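    The dark channel prior that the preprocessing stage above builds on can be sketched in a few lines. This is a minimal NumPy sketch of the classic formulation (a fog-free patch almost always has one very dark colour channel), not the paper's extended algorithm; the patch size, `omega`, and `t_min` values are illustrative assumptions:

    ```python
    import numpy as np

    def min_filter(x, k=15):
        """Local minimum over a k x k window (naive but dependency-free)."""
        pad = k // 2
        xp = np.pad(x, pad, mode="edge")
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = xp[i:i + k, j:j + k].min()
        return out

    def dark_channel(img, k=15):
        # Per-pixel minimum over RGB, then a local minimum filter.
        return min_filter(img.min(axis=2), k)

    def defog(img, omega=0.95, t_min=0.1, k=15):
        """Recover a fog-free estimate J from hazy I via I = J*t + A*(1 - t)."""
        dc = dark_channel(img, k)
        # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
        n = max(1, dc.size // 1000)
        idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
        A = img[idx].mean(axis=0)
        # Transmission from the dark channel of the A-normalised image,
        # clipped below to avoid amplifying noise in dense fog.
        t = np.clip(1.0 - omega * dark_channel(img / A, k), t_min, 1.0)
        return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
    ```

    Keeping `omega` slightly below 1 retains a trace of fog for depth perception, and the `t_min` floor prevents division blow-ups in the densest regions.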

    Fast single image defogging with robust sky detection

    Haze is a source of unreliability for computer vision applications in outdoor scenarios, and it is usually caused by atmospheric conditions. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, with three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Therefore, current work has focused on improving processing time without losing restoration quality and on avoiding image artifacts during defogging. Hence, in this research, a novel methodology based on depth approximations through DCP, local Shannon entropy, and the Fast Guided Filter is proposed for reducing artifacts and improving image recovery in sky regions with low computation time. The proposed method's performance is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE), and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, which is validated qualitatively and quantitatively through Peak Signal-to-Noise Ratio (PSNR), Naturalness Image Quality Evaluator (NIQE), and the Structural SIMilarity (SSIM) index on retrieved images, considering different visual ranges under distinct illumination and contrast conditions. Analyzing images of various resolutions, the proposed method shows the lowest processing time under similar software and hardware conditions. This work was supported in part by the Centro en Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center. Peer reviewed. Postprint (published version).
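    The sky-detection step above exploits local Shannon entropy: sky regions are smooth and bright, so they combine low entropy with high intensity, and can be masked off before transmission estimation. A minimal sketch of that idea, where the thresholds, patch size, and bin count are illustrative assumptions rather than the paper's tuned values:

    ```python
    import numpy as np

    def local_entropy(gray, patch=9, bins=16):
        """Shannon entropy of the intensity histogram in each patch.

        Smooth regions such as sky concentrate their pixels in few bins,
        which yields low entropy; textured ground yields high entropy.
        """
        pad = patch // 2
        g = np.pad(gray, pad, mode="edge")
        h, w = gray.shape
        ent = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                hist, _ = np.histogram(g[i:i + patch, j:j + patch],
                                       bins=bins, range=(0, 1))
                p = hist / hist.sum()
                p = p[p > 0]
                ent[i, j] = -(p * np.log2(p)).sum()
        return ent

    def sky_mask(img, ent_thresh=1.5, bright_thresh=0.6):
        # Sky = low local entropy AND high brightness.
        gray = img.mean(axis=2)
        return (local_entropy(gray) < ent_thresh) & (gray > bright_thresh)
    ```

    In a full pipeline, transmission inside the mask would be held near 1 so the sky is left mostly untouched, which is what avoids the over-saturation artifact the abstract mentions.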

    Removing Atmospheric Noise Using Channel Selective Processing For Visual Correction

    In the presented paper, we propose an effective image fog removal technique with a color stabilization technique, a two-level process for image restoration with an HSI (Hue Saturation Intensity) based evaluation process. The approach extracts suppressed pixels from an RGB image affected by smoke, steam, or fog, which are forms of white and Gaussian noise. From our observation, most images in a foggy environment contain some pixels that have low luminance values in every color channel (considering an RGB image). Using this model, we can directly estimate the effective density of fog and recover the most affected parts of the image. The effective luminance, a form of intensity, also gives scattering estimates of the light; the combined Laplacian of the luminance and the suppressed pixel values gives a basic map of the light spread, which is further used in the restoration of intensity. The transmission of intensity between the calculated fog values in the image gives the estimate for the local transition between the intensity values and color values. This factor helps in the color restoration of the affected image and in properly restoring the image after removal of dense fog particles. After the removal of fog particles, we restore the color balance in the image using an auto-color-contrast stabilization technique. This constitutes the two-level fog restoration method. Visibility depends strongly on well-saturated, but not over-saturated, color values, which accounts for the image quality improvements. To evaluate the effectiveness in depth, we have also introduced HSI mapping of the images, as this shows the true restoration of intensity and saturation in the foggy image. Results on various images demonstrate the power of the proposed algorithm. To measure the efficiency of the algorithm, the visual index parameter is also estimated, which further evaluates the robustness of the proposed algorithm with respect to the HVS (Human Visual System) for the de-fogged images.
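    The HSI evaluation described above maps RGB values into hue, saturation, and intensity. A self-contained conversion sketch using the standard geometric RGB-to-HSI formulas (assumed here as the conventional definition; the paper does not spell out its exact variant):

    ```python
    import numpy as np

    def rgb_to_hsi(img, eps=1e-8):
        """Convert an RGB image (floats in [0, 1]) to (hue, saturation, intensity)."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        # Intensity: mean of the three channels.
        intensity = (r + g + b) / 3.0
        # Saturation: how far the darkest channel falls below the intensity.
        saturation = 1.0 - img.min(axis=2) / (intensity + eps)
        # Hue: angle on the colour circle, via the geometric derivation.
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        theta = np.arccos(np.clip(num / den, -1.0, 1.0))
        hue = np.where(b <= g, theta, 2 * np.pi - theta)
        return hue, saturation, intensity
    ```

    Comparing the saturation and intensity planes of the foggy and restored images is what makes the over-saturation the abstract warns about directly visible.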

    Towards Artifacts-free Image Defogging

    In this paper we present a novel defogging technique, named CurL-Defog, aimed at minimizing the creation of unwanted artifacts during the defogging process. The majority of learning-based defogging approaches rely on paired data (i.e., the same images with and without fog), where fog is artificially added to clear images: this often provides good results on mildly fogged images but does not generalize well to real difficult cases. On the other hand, models trained with real unpaired data (e.g., CycleGAN) can provide visually impressive results, but they often produce unwanted artifacts. In this paper we propose a curriculum learning strategy coupled with an enhanced CycleGAN model in order to reduce the number of produced artifacts while maintaining state-of-the-art performance in terms of contrast enhancement and image reconstruction. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the amount of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
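    The curriculum idea of feeding the model progressively harder examples can be sketched as a simple staged sampler. This is a hypothetical helper illustrating the general strategy only; the paper's actual schedule, the HArD metric, and the enhanced CycleGAN model are far more involved, and the `difficulty` scores here are assumed to come from some fog-density estimate:

    ```python
    def curriculum_stages(samples, difficulty, n_stages=3):
        """Yield n_stages growing training subsets, easiest samples first.

        Stage k contains the easiest k/n_stages fraction of the data, so the
        model sees mildly fogged images before heavily fogged ones.
        """
        order = sorted(range(len(samples)), key=lambda i: difficulty[i])
        for k in range(1, n_stages + 1):
            cut = max(1, round(len(order) * k / n_stages))
            yield [samples[i] for i in order[:cut]]
    ```

    Each stage is a superset of the previous one, so earlier (easy) examples keep being revisited while harder ones are gradually mixed in.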