Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data
This paper presents a new method for shadow removal using unpaired data,
enabling us to avoid tedious annotations and obtain more diverse training
samples. However, directly employing adversarial learning and cycle-consistency
constraints is insufficient to learn the underlying relationship between the
shadow and shadow-free domains, since the mapping between shadow and
shadow-free images is not simply one-to-one. To address the problem, we
formulate Mask-ShadowGAN, a new deep framework that automatically learns to
produce a shadow mask from the input shadow image and then takes the mask to
guide the shadow generation via re-formulated cycle-consistency constraints.
Particularly, the framework simultaneously learns to produce shadow masks and
learns to remove shadows, to maximize the overall performance. Also, we
prepared an unpaired dataset for shadow removal and demonstrated the
effectiveness of Mask-ShadowGAN in various experiments, even though it was
trained on unpaired data.
Comment: Accepted to ICCV 2019
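The mask-guided formulation hinges on deriving the mask from the generator's own output: binarizing the difference between the input shadow image and the generated shadow-free image. A minimal numpy sketch (the function name and the fixed threshold are ours; the paper reportedly binarizes with Otsu's method):

```python
import numpy as np

def shadow_mask(shadow_img, shadow_free_img, thresh=0.05):
    """Derive a binary shadow mask from the per-pixel brightness gap
    between a shadow image and its (generated) shadow-free counterpart.
    A fixed threshold stands in for the adaptive binarization."""
    diff = shadow_free_img.mean(axis=-1) - shadow_img.mean(axis=-1)
    return (diff > thresh).astype(np.float32)

# toy example: a 4x4 gray image with a darkened 2x2 "shadow" patch
free = np.full((4, 4, 3), 0.8)
shadowed = free.copy()
shadowed[1:3, 1:3] *= 0.5            # cast a shadow
mask = shadow_mask(shadowed, free)   # 1.0 inside the patch, 0.0 elsewhere
```

The derived mask then conditions the shadow generator, which is what lets the cycle-consistency constraint cope with the one-to-many shadow/shadow-free mapping.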
Shadow Removal by High-Quality Shadow Synthesis
Most shadow removal methods rely on large numbers of training images with
laborious and costly shadow-region annotations, which has made shadow image
synthesis increasingly popular. However, poor performance also stems from
these synthesized images, since they often have inauthentic shadows and
impaired details. In this paper, we present a novel
generation framework, referred to as HQSS, for high-quality pseudo shadow image
synthesis. The given image is first decoupled into a shadow region identity and
a non-shadow region identity. HQSS employs a shadow feature encoder and a
generator to synthesize pseudo images. Specifically, the encoder extracts the
shadow feature of one region identity, which is then paired with another
region identity as the generator's input to synthesize a pseudo image. The
pseudo image is expected to carry the same shadow characteristics as its input
shadow feature, as well as image details as realistic as its input region
identity. To
fulfill this goal, we design three learning objectives. When the shadow feature
and input region identity are from the same region identity, we propose a
self-reconstruction loss that guides the generator to reconstruct an identical
pseudo image as its input. When the shadow feature and input region identity
are from different identities, we introduce an inter-reconstruction loss and a
cycle-reconstruction loss to make sure that shadow characteristics and detail
information can be well retained in the synthesized images. HQSS is observed
to outperform state-of-the-art methods on the ISTD, Video Shadow Removal, and
SRD datasets. The code is available at
https://github.com/zysxmu/HQSS
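The three objectives described above can be sketched as follows. Everything here is illustrative, not the authors' code: `encode`, `generate`, and the loss names are placeholders for the shadow feature encoder, the generator, and the self-, inter-, and cycle-reconstruction losses.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, a common reconstruction loss."""
    return np.abs(a - b).mean()

def hqss_losses(encode, generate, region_a, region_b):
    """Sketch of the three HQSS-style objectives.
    encode(x)      -> shadow feature of region identity x
    generate(f, x) -> pseudo image from shadow feature f and identity x
    """
    # self-reconstruction: same identity supplies both inputs,
    # so the generator should reproduce it exactly
    loss_self = l1(generate(encode(region_a), region_a), region_a)

    # inter-reconstruction: the pseudo image built from (feature of a,
    # identity b) should keep a's shadow characteristics
    pseudo = generate(encode(region_a), region_b)
    loss_inter = l1(encode(pseudo), encode(region_a))

    # cycle-reconstruction: re-pairing the pseudo image's feature with
    # identity a should recover a, preserving its detail information
    loss_cycle = l1(generate(encode(pseudo), region_a), region_a)
    return loss_self, loss_inter, loss_cycle

# toy check with linear stand-ins: encode -> mean, generate -> shift details
encode = lambda x: np.array([x.mean()])
generate = lambda f, x: x - x.mean() + f[0]
a, b = np.array([0.2, 0.4, 0.9]), np.array([0.1, 0.5, 0.6])
losses = hqss_losses(encode, generate, a, b)   # all ~0 for these stand-ins
```

With the linear stand-ins all three losses vanish, which only shows the objectives are jointly satisfiable; in the real framework both networks are learned and the losses are minimized together.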
Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN
Shadow removal is an essential task for scene understanding. Many studies
focus only on matching image contents, which often causes two types of
ghosts: color inconsistencies in the shadow region and artifacts along the
shadow boundary. In this paper, we tackle these issues in two ways. First, to
carefully learn a border-artifact-free image, we propose a novel network
structure named the dual hierarchical aggregation network (DHAN). Its backbone
is a series of dilated convolutions with growing dilation rates and no
down-sampling, and we hierarchically aggregate multi-context features for
attention and prediction, respectively. Second, we argue that training on a
limited dataset restricts the network's textural understanding, which leads
to color inconsistencies in the shadow region. Currently, the largest
dataset contains 2k+ shadow/shadow-free image pairs. However, it has only 0.1k+
unique scenes since many samples share exactly the same background with
different shadow positions. Thus, we design a shadow matting generative
adversarial network (SMGAN) to synthesize realistic shadow mattings from a
given shadow mask and shadow-free image. With the help of novel masks or
scenes, we enhance the current datasets using synthesized shadow images.
Experiments show that our DHAN can erase the shadows and produce high-quality
ghost-free images. After training on the synthesized and real datasets, our
network outperforms other state-of-the-art methods by a large margin. The code
is available at http://github.com/vinthony/ghost-free-shadow-removal/
Comment: Accepted by AAAI 2020
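The backbone idea, growing the receptive field with dilated convolutions instead of down-sampling, can be illustrated in 1-D. This is a toy numpy sketch with a fixed averaging kernel; the real network is 2-D, learned, and adds the hierarchical aggregation for attention and prediction.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution: spatial size is preserved,
    so the receptive field grows without any down-sampling."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])

def backbone(x, depth=4):
    """Stack with dilation rates 1, 2, 4, 8: the receptive field grows
    exponentially while the resolution stays fixed. Intermediate features
    are kept, mimicking what a hierarchical aggregation would consume."""
    kernel = np.array([1 / 3, 1 / 3, 1 / 3])  # fixed averaging kernel
    feats = []
    for d in range(depth):
        x = dilated_conv1d(x, kernel, dilation=2 ** d)
        feats.append(x)
    return x, feats

signal = np.zeros(16)
signal[8] = 1.0                     # unit impulse probes the receptive field
out, feats = backbone(signal)       # out has the same length as the input
```

Feeding an impulse through the stack shows the effect: after the first layer (dilation 1) only three samples are touched, while after four layers the influence spans the whole 16-sample signal, all without ever reducing resolution.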