Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding
This work addresses the problem of semantic scene understanding under dense
fog. Although considerable progress has been made in semantic scene
understanding, it is mainly related to clear-weather scenes. Extending
recognition methods to adverse weather conditions such as fog is crucial for
outdoor applications. In this paper, we propose a novel method, named
Curriculum Model Adaptation (CMAda), which gradually adapts a semantic
segmentation model from light synthetic fog to dense real fog in multiple
steps, using both synthetic and real foggy data. In addition, we present three
other main stand-alone contributions: 1) a novel method to add synthetic fog to
real, clear-weather scenes using semantic input; 2) a new fog density
estimator; 3) the Foggy Zurich dataset comprising real foggy images,
with pixel-level semantic annotations for images with dense fog. Our
experiments show that 1) our fog simulation slightly outperforms a
state-of-the-art competing simulation with respect to the task of semantic
foggy scene understanding (SFSU); 2) CMAda improves the performance of
state-of-the-art models for SFSU significantly by leveraging unlabeled real
foggy data. The datasets and code are publicly available.
Comment: final version, ECCV 2018
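Both the fog simulation and the CMAda curriculum build on the standard optical model of fog. The sketch below is a minimal rendering of the homogeneous version of that model, not the paper's semantics-aware pipeline; the function name, the default beta, and the atmospheric light value are illustrative assumptions.

```python
# Minimal sketch of the standard optical fog model that simulations of this
# kind build on: I(x) = R(x) * t(x) + L * (1 - t(x)), with transmittance
# t(x) = exp(-beta * d(x)). The paper's simulation additionally refines the
# transmittance map using semantic input; that refinement is omitted here.
# All names and defaults below are illustrative assumptions.
import numpy as np

def add_synthetic_fog(clear_rgb: np.ndarray,
                      depth_m: np.ndarray,
                      beta: float = 0.06,
                      atmospheric_light: float = 0.92) -> np.ndarray:
    """Render homogeneous fog onto a clear image given per-pixel depth.

    clear_rgb: HxWx3 float array in [0, 1].
    depth_m:   HxW   float array of scene depth in meters.
    beta:      attenuation coefficient; larger values mean denser fog
               (meteorological visibility is roughly 2.996 / beta).
    """
    t = np.exp(-beta * depth_m)[..., None]          # transmittance map, HxWx1
    return clear_rgb * t + atmospheric_light * (1.0 - t)

# Example: fog thickens with distance on a synthetic depth ramp.
clear = np.random.rand(4, 4, 3)
depth = np.linspace(5.0, 300.0, 16).reshape(4, 4)   # 5 m (near) to 300 m (far)
foggy = add_synthetic_fog(clear, depth)
```

Varying beta over the curriculum steps is what makes "light" versus "dense" synthetic fog precise in this formulation: visibility falls as beta grows.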
Holistic Attention-Fusion Adversarial Network for Single Image Defogging
Adversarial learning-based image defogging methods have been extensively
studied in computer vision due to their remarkable performance. However, most
existing methods have limited defogging capabilities for real cases because
they are trained on the paired clear and synthesized foggy images of the same
scenes. In addition, they struggle to preserve vivid color and rich textural
details when defogging. To address these issues, we develop a novel
generative adversarial network, called holistic attention-fusion adversarial
network (HAAN), for single image defogging. HAAN consists of a Fog2Fogfree
block and a Fogfree2Fog block. In each block, there are three learning-based
modules, namely fog removal, color-texture recovery, and fog synthesis, which
constrain each other to generate high-quality images. HAAN is designed to
exploit the self-similarity of texture and structure information by learning
the holistic channel-spatial feature correlations between the foggy image and
its several derived images. Moreover, in the fog synthesis module, we utilize
the atmospheric scattering model to guide generation, improving output quality
by optimizing the atmospheric light with a novel sky segmentation
network. Extensive experiments on both synthetic and real-world datasets show
that HAAN outperforms state-of-the-art defogging methods in terms of
quantitative accuracy and subjective visual quality.
Comment: 13 pages, 10 figures
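For context, the atmospheric scattering model referenced above is I = J·t + A·(1 − t). Below is a minimal sketch of its inversion for defogging, assuming the transmittance map t and the airlight A are already estimated; HAAN instead learns these quantities, including A via its sky segmentation network. All names and values are illustrative.

```python
# Minimal sketch of the atmospheric scattering model I = J*t + A*(1 - t),
# shown here inverted for defogging: J = (I - A) / max(t, t_min) + A.
# The transmittance map and airlight are assumed given; HAAN estimates them
# with learned modules. All names below are illustrative assumptions.
import numpy as np

def defog(foggy_rgb: np.ndarray,
          transmittance: np.ndarray,
          atmospheric_light: np.ndarray,
          t_min: float = 0.1) -> np.ndarray:
    """Invert the scattering model; t_min avoids division blow-up in dense fog."""
    t = np.clip(transmittance, t_min, 1.0)[..., None]      # HxWx1
    restored = (foggy_rgb - atmospheric_light) / t + atmospheric_light
    return np.clip(restored, 0.0, 1.0)

foggy = np.random.rand(4, 4, 3)
t_map = np.full((4, 4), 0.5)        # assumed transmittance estimate
A = np.array([0.9, 0.9, 0.95])      # assumed, slightly bluish airlight
restored = defog(foggy, t_map, A)
```

The t_min floor reflects a standard design choice: where fog is dense, t approaches zero and the inversion amplifies noise, which is one reason learned recovery modules are layered on top of the physical model.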
GridFormer: Residual Dense Transformer with Grid Structure for Image Restoration in Adverse Weather Conditions
Image restoration in adverse weather conditions is a difficult task in
computer vision. In this paper, we propose a novel transformer-based framework
called GridFormer which serves as a backbone for image restoration under
adverse weather conditions. GridFormer is designed in a grid structure using a
residual dense transformer block, and it introduces two core designs. First, it
uses an enhanced attention mechanism in the transformer layer. The mechanism
includes sampling and compact self-attention stages to improve efficiency, and
a local enhancement stage to strengthen local information.
Second, we introduce a residual dense transformer block (RDTB) as the final
GridFormer layer. This design further improves the network's ability to learn
effective features from both preceding and current local features. The
GridFormer framework achieves state-of-the-art results on five diverse image
restoration tasks in adverse weather conditions, including image deraining,
dehazing, deraining & dehazing, desnowing, and multi-weather restoration. The
source code and pre-trained models will be released.
Comment: 17 pages, 12 figures
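The residual dense connectivity that the RDTB builds on can be illustrated compactly. The PyTorch sketch below is a generic residual dense block, with plain convolutions standing in for the paper's transformer layers; the channel widths and layer count are assumptions, not GridFormer's actual configuration.

```python
# Sketch of residual *dense* connectivity, the pattern the paper's residual
# dense transformer block (RDTB) is built around: each layer consumes the
# concatenation of all preceding features, a 1x1 convolution fuses them, and
# a residual connection adds the block input back. Plain conv layers stand in
# here for the paper's transformer layers; all sizes are assumptions.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth                       # input width grows densely
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # reuse all earlier features
        return x + self.fuse(torch.cat(feats, dim=1))     # local residual learning

block = ResidualDenseBlock()
out = block(torch.randn(1, 64, 32, 32))           # shape preserved: (1, 64, 32, 32)
```

The point of the pattern is that later layers see both preceding and current local features directly, which is the property the abstract credits for improved feature learning.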
Restoring Images Captured in Arbitrary Hybrid Adverse Weather Conditions in One Go
Real-world adverse scenes typically suffer from stochastic hybrid weather
degradations (e.g., a rainy and hazy night), while existing image restoration
algorithms assume that weather degradations occur independently and thus may
fail in complicated real-world scenarios. Besides, supervised training
is not feasible due to the lack of a comprehensive paired dataset to
characterize hybrid conditions. To this end, we address the aforementioned
limitations on two fronts: framework and data. First, we
present a novel unified framework, dubbed RAHC, to Restore Arbitrary Hybrid
adverse weather Conditions in one go. Specifically, our RAHC leverages a
multi-head aggregation architecture to learn multiple degradation
representation subspaces and then constrains the network to flexibly handle
multiple hybrid adverse weather in a unified paradigm through a discrimination
mechanism in the output space. Furthermore, we devise a
reconstruction-vector-aided scheme that provides auxiliary visual content cues
for reconstruction and thus copes with hybrid scenarios in which few clean
image constituents remain. Second, we construct a new dataset, termed HAC, for learning and
benchmarking arbitrary Hybrid Adverse Conditions restoration. HAC contains 31
scenarios composed of arbitrary combinations of five common weather types,
with a total of ~316K adverse-weather/clean pairs. Extensive experiments yield
superior results and establish a new state of the art on both HAC and
conventional datasets.
Comment: In submission
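The multi-head aggregation idea described above can be sketched generically: parallel heads model separate degradation-representation subspaces, and a learned gate mixes them per input. The PyTorch sketch below illustrates that pattern only; it is not RAHC's actual architecture, and every module name and size is an assumption.

```python
# Hedged sketch of multi-head degradation aggregation: one lightweight head
# per assumed weather type, plus a global softmax gate that predicts a mixing
# weight per head from pooled features. Illustrative pattern only, not RAHC.
import torch
import torch.nn as nn

class MultiHeadDegradationAggregator(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 5):
        super().__init__()
        # One head per degradation subspace (e.g., rain, haze, snow, ...).
        self.heads = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_heads)
        )
        # Gate: pooled features -> per-head mixing weights.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, num_heads),
            nn.Softmax(dim=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)                                    # (B, num_heads)
        outs = torch.stack([h(x) for h in self.heads], 1)   # (B, heads, C, H, W)
        return (w[:, :, None, None, None] * outs).sum(1)    # weighted mixture

agg = MultiHeadDegradationAggregator()
y = agg(torch.randn(2, 64, 32, 32))                         # (2, 64, 32, 32)
```

A soft mixture of this kind lets a single network serve arbitrary combinations of degradations, which matches the unified one-go paradigm the abstract describes; the five heads mirror the five weather types in HAC, whose 31 scenarios correspond to the non-empty combinations of five conditions (2^5 − 1 = 31).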