23 research outputs found

    Fast single image defogging with robust sky detection

    Get PDF
    Haze, usually caused by atmospheric conditions, is a source of unreliability for computer vision applications in outdoor scenarios. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, but it has three main limitations: 1) high computation time, 2) artifact generation, and 3) sky-region over-saturation. Therefore, current work has focused on improving processing time without losing restoration quality and without introducing image artifacts during defogging. Hence, in this research, a novel methodology based on depth approximations through the DCP, local Shannon entropy, and the Fast Guided Filter is proposed to reduce artifacts and improve image recovery in sky regions with low computation time. The performance of the proposed method is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE), and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, validated qualitatively and quantitatively through the Peak Signal-to-Noise Ratio (PSNR), the Naturalness Image Quality Evaluator (NIQE), and the Structural SIMilarity (SSIM) index on the recovered images, considering different visual ranges under distinct illumination and contrast conditions. When analyzing images of various resolutions, the proposed method shows the lowest processing time under similar software and hardware conditions. This work was supported in part by the Centro de Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center.
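
    As a reference point for the abstract above, the sketch below walks through the standard DCP defogging pipeline this line of work builds on: dark channel, atmospheric light, coarse transmission, guided-filter refinement, and scene recovery. It is a minimal NumPy/OpenCV illustration, not the authors' method; the entropy-based sky handling and the Fast Guided Filter acceleration from the paper are not reproduced, the guided filter comes from the opencv-contrib `ximgproc` module, and all parameter values are conventional defaults rather than the paper's settings.

```python
# Minimal sketch of the standard Dark Channel Prior pipeline (He et al.),
# assuming a BGR uint8 input image; requires opencv-contrib-python for
# cv2.ximgproc. The paper's entropy-based sky handling is NOT included.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a min-filter over a patch.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)

def atmospheric_light(img, dark, top_frac=0.001):
    # Average the image colors at the haziest dark-channel pixels
    # (a common simplification of the brightest-pixel selection).
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(bgr, omega=0.95, t0=0.1, patch=15):
    img = bgr.astype(np.float64) / 255.0
    A = atmospheric_light(img, dark_channel(img, patch))
    # Coarse transmission estimate: t = 1 - omega * dark_channel(I / A)
    t = 1.0 - omega * dark_channel(img / A, patch)
    # Refine the transmission with a guided filter (the paper uses a fast variant).
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    t = cv2.ximgproc.guidedFilter(gray, t.astype(np.float32), 40, 1e-3)
    # Invert the atmospheric scattering model: J = (I - A) / max(t, t0) + A
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```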

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    Full text link
    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly by leveraging unlabeled real foggy data. The datasets and code are publicly available. Comment: final version, ECCV 2018
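
    The fog simulation in this pipeline rests on the standard optical model of fog. Below is a minimal sketch of that model, assuming a clear-weather RGB image and a metric depth map are available; the paper's semantic-aware refinement of the simulation and the CMAda adaptation schedule are not reproduced, and the attenuation coefficient values are illustrative only.

```python
# Minimal sketch of the standard optical (atmospheric scattering) model used to
# synthesize fog over a clear-weather image, given a per-pixel depth map in meters.
# The paper's semantic-aware fog simulation and the CMAda curriculum itself are
# not reproduced; beta controls the fog density (larger = denser fog).
import numpy as np

def add_synthetic_fog(clear_rgb, depth_m, beta=0.06, airlight=(0.9, 0.9, 0.9)):
    # I(x) = J(x) * t(x) + A * (1 - t(x)),  with  t(x) = exp(-beta * depth(x))
    J = clear_rgb.astype(np.float64) / 255.0
    t = np.exp(-beta * depth_m)[..., None]          # transmission in (0, 1]
    A = np.asarray(airlight, dtype=np.float64)
    foggy = J * t + A * (1.0 - t)
    return np.clip(foggy * 255.0, 0, 255).astype(np.uint8)

# A light-to-dense curriculum could then be generated by sweeping beta,
# e.g. for beta in (0.005, 0.01, 0.02, 0.06), and adapting the model step by step.
```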

    Visibility Restoration for Single Hazy Image Using Dual Prior Knowledge

    Get PDF
    Single image haze removal has been a challenging task due to its severely ill-posed nature. In this paper, we propose a novel single-image algorithm that improves the detail and color of such degraded images. More concretely, we redefine a more reliable atmospheric scattering model (ASM) based on our previous work and the atmospheric point spread function (APSF). Further, by taking the spatial distribution of haze density into consideration, we design a scene-wise APSF kernel prediction mechanism to eliminate the multiple-scattering effect. With the redefined ASM and the designed APSF, combined with existing prior knowledge, the complex dehazing problem can be subtly converted into a one-dimensional search problem, which allows us to directly obtain the scene transmission and thereby recover visually realistic results via the proposed ASM. Experimental results verify that our algorithm outperforms several state-of-the-art dehazing techniques in terms of robustness, effectiveness, and efficiency.
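
    As a loose illustration of the one-dimensional search idea mentioned above, the sketch below scans a scalar transmission for an image patch under the classical atmospheric scattering model and scores each candidate with a simple contrast-versus-clipping criterion. The scoring rule, candidate grid, and function names are assumptions made for illustration; they are not the redefined ASM/APSF formulation proposed in the paper.

```python
# Illustrative only: reduce dehazing of a patch to a one-dimensional search over
# a scalar transmission t. The scoring criterion below (contrast minus a clipping
# penalty) is a stand-in assumption, not the paper's redefined ASM/APSF scheme.
import numpy as np

def restore(patch, A, t, t0=0.05):
    # Classical atmospheric scattering model inversion: J = (I - A) / max(t, t0) + A
    return (patch - A) / max(t, t0) + A

def search_transmission(patch, A, candidates=np.linspace(0.05, 1.0, 96)):
    best_t, best_score = 1.0, -np.inf
    for t in candidates:
        J = restore(patch, A, t)
        clipped = np.mean((J < 0.0) | (J > 1.0))   # fraction of out-of-range pixels
        score = np.std(J) - 5.0 * clipped          # contrast proxy minus clipping penalty
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```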

    A Study on Moving Object Detection and Dust Image Restoration

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Natural Sciences, Department of Mathematical Sciences, February 2021. Advisor: Myungjoo Kang.
    Robust principal component analysis (RPCA), a method used to decompose a matrix into the sum of a low-rank matrix and a sparse matrix, has proven effective in modeling the static background of videos. However, because a dynamic background cannot be represented by a low-rank matrix, measures additional to RPCA are required. In this thesis, we propose masked RPCA to process backgrounds containing moving textures. A first-order Markov random field (MRF) is used to generate a mask that roughly labels moving objects and backgrounds. To estimate the background, the rank minimization process is then applied with the mask multiplied in. During the iterations, the background rank increases as the object mask expands, and the weight of the rank constraint term decreases, which increases the accuracy of the background. We compare the proposed method with state-of-the-art, end-to-end methods to demonstrate its advantages. Subsequently, we suggest a novel dedusting method based on a dust-optimized transmission map and the deep image prior (DIP). This method consists of estimating the atmospheric light and the transmission, in that order, similarly to dark-channel-prior-based dehazing methods. However, existing atmospheric light estimation methods widely used in dehazing schemes give an overly bright estimate, which results in unrealistically dark dedusting results. To address this problem, we propose a segmentation-based method that gives a new estimate of the atmospheric light. A dark-channel-prior-based transmission map with the new atmospheric light gives an unnatural intensity ordering and zero values in low-transmission regions. Therefore, the transmission map is refined by a scattering-model-based transformation and dark-channel-adaptive non-local total variation (NLTV) regularization. A final parameter-optimization step with the deep image prior produces the dedusting result.
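
    For context, the sketch below implements plain RPCA (principal component pursuit) with a basic inexact-ALM loop, which is the decomposition the thesis starts from: video frames are stacked as columns of D and split into a low-rank background L and a sparse foreground S. The masked RPCA, the MRF-based mask, and the adaptive parameter control described above are not reproduced, and the parameter choices are conventional defaults rather than the thesis's.

```python
# Background sketch: plain RPCA (principal component pursuit) via a simple
# inexact-ALM loop. The thesis's masked RPCA adds an MRF object mask and
# adaptive weights on top of this decomposition; those parts are not shown.
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def rpca(D, max_iter=200, tol=1e-7):
    # D: frames stacked as columns; returns low-rank L (background) + sparse S (foreground).
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = soft_threshold(D - L + Y / mu, lam / mu)
        R = D - L - S
        Y += mu * R
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return L, S
```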

    Impact of dehazing on underwater marker detection for augmented reality

    Get PDF
    Underwater augmented reality is a very challenging task and, amongst several issues, one of the most crucial aspects involves real-time tracking. Particles present in water, combined with the uneven absorption of light, decrease visibility in the underwater environment. Dehazing methods are used in many areas to improve the quality of digital image data degraded by the influence of the environment. This paper describes the visibility conditions affecting underwater scenes and reviews existing dehazing techniques that successfully improve the quality of underwater images. Four underwater dehazing methods are selected and evaluated for their ability to improve the success of square-marker detection in underwater videos. Two of the reviewed methods represent image restoration approaches: Multi-Scale Fusion and Bright Channel Prior. The other two, Automatic Color Enhancement and the Screened Poisson Equation, are image enhancement methods. The evaluation uses a diverse test data set covering different environmental conditions. The results show an increased number of successful marker detections in videos pre-processed by the dehazing algorithms and quantify the performance of each compared method. The Screened Poisson method performs slightly better than the other methods across the tested environments, while Bright Channel Prior and Automatic Color Enhancement show similarly positive results.
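
    As an example of the enhancement side of this comparison, the sketch below applies a screened-Poisson-style contrast enhancement per color channel in the Fourier domain: low frequencies are attenuated and the result is re-stretched to the display range. The screening weight and the percentile stretch are illustrative assumptions rather than the parameters used in the paper; a square-marker detector could then be run on the enhanced frames to reproduce the detection-rate comparison.

```python
# Hedged sketch of a screened-Poisson-style contrast enhancement applied per
# channel in the Fourier domain; parameter values and the percentile stretch
# are assumptions, not the settings used in the paper.
import numpy as np

def screened_poisson_enhance(img_rgb, lam=1e-4, clip_percent=1.0):
    img = img_rgb.astype(np.float64)                 # H x W x 3
    h, w = img.shape[:2]
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    freq2 = (2.0 * np.pi * fx) ** 2 + (2.0 * np.pi * fy) ** 2   # |omega|^2
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        F = np.fft.fft2(img[..., c])
        u = np.real(np.fft.ifft2(F * freq2 / (lam + freq2)))    # attenuate low frequencies
        lo, hi = np.percentile(u, [clip_percent, 100.0 - clip_percent])
        out[..., c] = np.clip((u - lo) / max(hi - lo, 1e-6) * 255.0, 0.0, 255.0)
    return out.astype(np.uint8)
```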

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    Get PDF
    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal, and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition of three-dimensional structures on a bidimensional image produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
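
    As a generic illustration of the variational machinery described above, the sketch below minimizes a simple energy with a quadratic fidelity term and a smoothed total-variation term by gradient descent. It is not one of the dissertation's image-dependent energies; the functional, the parameters, and the discretization are stand-in assumptions.

```python
# Generic illustration of energy minimization by gradient descent:
# E(u) = ||u - f||^2 + mu * sum sqrt(|grad u|^2 + eps^2)   (smoothed TV term)
# Not one of the dissertation's energies; parameters are stand-in choices.
import numpy as np

def minimize_energy(f, mu=0.1, eps=1e-3, step=0.1, n_iter=200):
    u = f.astype(np.float64).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        # Divergence of the normalized gradient field (derivative of the TV term)
        div = np.gradient(gy / mag, axis=0) + np.gradient(gx / mag, axis=1)
        u -= step * (2.0 * (u - f) - mu * div)       # gradient step on E(u)
    return u
```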

    Computational Media Aesthetics for Media Synthesis

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Road Detection and Recognition from Monocular Images Using Neural Networks

    Get PDF
    Road recognition is one of the important aspects of autonomous navigation systems, which help autonomous vehicles and robots navigate on the ground. Road detection is also useful in related sub-tasks, such as finding valid road paths the robot or vehicle can follow, supporting driverless vehicles, preventing collisions with obstacles, and detecting objects on the road. The goal of this thesis is to examine existing road detection and recognition techniques and propose an alternative solution for the road classification and detection task. Our contribution consists of several parts. Firstly, we release a road image dataset with approximately 5,300 unlabeled road images. Secondly, we summarize the information about existing road image datasets. Thirdly, we propose a convolutional LeNet-5-based neural network for road image classification in various environments. Finally, we present our FCN-8-based model for pixel-wise image recognition.
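
    A minimal LeNet-5-style road/non-road classifier in PyTorch is sketched below. The input resolution (32x32 RGB crops), layer widths, and two-class output are assumptions for illustration; the thesis's exact architecture and its FCN-8-based segmentation model are not reproduced here.

```python
# Minimal LeNet-5-style classifier sketch in PyTorch for road / non-road image
# classification. Input size, channel counts, and number of classes are
# assumptions for illustration; the thesis's exact architecture may differ.
import torch
import torch.nn as nn

class LeNet5Road(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),   # assumes 32x32 RGB input
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example forward pass on a batch of 32x32 RGB crops
logits = LeNet5Road()(torch.randn(4, 3, 32, 32))     # -> shape (4, 2)
```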