Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding
This work addresses the problem of semantic scene understanding under dense
fog. Although considerable progress has been made in semantic scene
understanding, it is mainly related to clear-weather scenes. Extending
recognition methods to adverse weather conditions such as fog is crucial for
outdoor applications. In this paper, we propose a novel method, named
Curriculum Model Adaptation (CMAda), which gradually adapts a semantic
segmentation model from light synthetic fog to dense real fog in multiple
steps, using both synthetic and real foggy data. In addition, we present three
other main stand-alone contributions: 1) a novel method to add synthetic fog to
real, clear-weather scenes using semantic input; 2) a new fog density
estimator; 3) the Foggy Zurich dataset comprising real foggy images,
with pixel-level semantic annotations for images with dense fog. Our
experiments show that 1) our fog simulation slightly outperforms a
state-of-the-art competing simulation with respect to the task of semantic
foggy scene understanding (SFSU); 2) CMAda improves the performance of
state-of-the-art models for SFSU significantly by leveraging unlabeled real
foggy data. The datasets and code are publicly available.
Comment: final version, ECCV 201
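As a minimal illustration of the ideas in this abstract, the sketch below combines the standard atmospheric scattering model for synthesizing fog on clear-weather images with a staged, light-to-dense adaptation loop. This is a simplified outline under stated assumptions, not the CMAda implementation: the `train_step` callback, the `betas` density schedule, and the per-pixel depth maps are all hypothetical placeholders.

```python
import numpy as np

def fog_attenuation(depth, beta):
    """Transmittance t(x) = exp(-beta * d(x)); beta controls fog density."""
    return np.exp(-beta * depth)

def add_synthetic_fog(image, depth, beta, airlight=1.0):
    """Standard atmospheric scattering model: I = J * t + A * (1 - t)."""
    t = fog_attenuation(depth, beta)[..., None]  # broadcast over RGB channels
    return image * t + airlight * (1.0 - t)

def curriculum_adaptation(train_step, clear_images, depths, real_foggy, betas):
    """Adapt a model in stages of increasing fog density (light fog first):
    each stage trains on freshly synthesized foggy images plus real foggy data."""
    for beta in sorted(betas):
        foggy_synth = [add_synthetic_fog(im, d, beta)
                       for im, d in zip(clear_images, depths)]
        train_step(foggy_synth, real_foggy)
```

With `beta = 0` the synthetic image equals the clear input, and as `beta` grows the scene fades toward the airlight, which is what lets the schedule move from light to dense fog.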
Uniform Distorted Scene Reduction on Distribution of Colour Cast Correction
A scene photographed through a uniform distribution of particles can suffer degraded image quality. State-of-the-art pre-processing methods enhance visibility by applying local and global filters to the image. Even when the air light and transmission map are estimated correctly, those methods produce artefacts and halo effects because the windows of the global and local filters are uncorrelated. Moreover, previous approaches may abruptly eliminate the primary scene structure of an image, such as texture and colour. This study therefore aims not only to improve scene image quality via a recovery method, but also to overcome image-content issues such as artefacts and halo effects, and to reduce light disturbance in the scene. We introduce a visibility enhancement method based on a joint ambience distribution that corrects the colour cast in the image and balances the atmospheric light in correspondence with the depth map. Our method preserves the structural texture information of the image by estimating the lighting while simultaneously maintaining the range of colours. The method is evaluated on images from the Benchmarking Single Image Dehazing study using the clear edge ratio, gradient, proportion of saturated pixels, and structural similarity index. The scene restoration results show that our proposed method outperforms the Tan, Tarel, and He methods, achieving the highest scores in the structural similarity index and colourfulness measurement, together with an acceptable gradient ratio and percentage of saturated pixels. The proposed approach enhances visibility in the images without affecting them structurally.
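The recovery step common to dehazing methods of this kind inverts the atmospheric scattering model I = J·t + A·(1 − t) to estimate the scene radiance J. The sketch below shows that inversion only; it is not the paper's joint ambience distribution method, and the `recover_scene` name and the `t_min` clamp (a common safeguard against noise amplification in dense haze) are illustrative assumptions.

```python
import numpy as np

def recover_scene(hazy, transmission, airlight, t_min=0.1):
    """Invert I = J * t + A * (1 - t) to estimate scene radiance J.
    Clamping t from below by t_min limits noise amplification where
    the haze is dense (an assumption, not the paper's actual step)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over channels
    return (hazy - airlight) / t + airlight
```

Given a correct transmission map and airlight, this inversion recovers the haze-free radiance exactly; the methods compared above differ mainly in how they estimate those two quantities and in how they suppress the resulting artefacts.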