Single-Channel Signal Separation and Deconvolution with Generative Adversarial Networks
Single-channel signal separation and deconvolution aims to separate and
deconvolve individual sources from a single-channel mixture and is a
challenging problem in which no prior knowledge of the mixing filters is
available. Both individual sources and mixing filters need to be estimated. In
addition, a mixture may contain non-stationary noise which is unseen in the
training set. We propose a synthesizing-decomposition (S-D) approach to solve
the single-channel separation and deconvolution problem. In synthesizing, a
generative model for sources is built using a generative adversarial network
(GAN). In decomposition, both mixing filters and sources are optimized to
minimize the reconstruction error of the mixture. The proposed S-D approach
achieves a peak signal-to-noise ratio (PSNR) of 18.9 dB and 15.4 dB in image
inpainting and completion, respectively, outperforming a baseline convolutional
neural network at 15.3 dB and 12.2 dB, and achieves a PSNR of 13.2 dB in source
separation together with deconvolution, outperforming a convolutive
non-negative matrix factorization (NMF) baseline at 10.1 dB.
Comment: 7 pages. Accepted by IJCAI 201
Style Separation and Synthesis via Generative Adversarial Networks
Style synthesis has attracted great interest recently, while few works focus on
its dual problem, style separation. In this paper, we propose the Style
Separation and Synthesis Generative Adversarial Network (S3-GAN) to
simultaneously implement style separation and style synthesis on object
photographs of specific categories. Based on the assumption that the object
photographs lie on a manifold, and the contents and styles are independent, we
employ S3-GAN to build mappings between the manifold and a latent vector space
for separating and synthesizing the contents and styles. The S3-GAN consists of
an encoder network, a generator network, and an adversarial network. The
encoder network performs style separation by mapping an object photograph to a
latent vector. Two halves of the latent vector represent the content and style,
respectively. The generator network performs style synthesis by taking a
concatenated vector as input. The concatenated vector contains the style half
vector of the style target image and the content half vector of the content
target image. An adversarial network is then imposed on the generator outputs
to encourage more photo-realistic images.
Experiments on CelebA and UT Zappos 50K datasets demonstrate that the S3-GAN
performs style separation and synthesis simultaneously and can capture various
styles in a single model.
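The style-swap path described in the abstract reduces to splitting each encoded latent vector in half and recombining halves from two inputs before decoding. A minimal sketch follows, assuming pretrained `encoder` and `generator` networks; the even split into content and style halves follows the abstract, while the function name and tensor layout are illustrative assumptions.

```python
# Hypothetical sketch of an S3-GAN-style latent half swap at inference (PyTorch).
# ASSUMPTIONS: `encoder` maps an image batch to latents of shape (N, D) with the
# first half encoding content and the second half encoding style, as in the
# abstract; `generator` maps such a latent back to an image.
import torch

@torch.no_grad()
def transfer_style(content_image, style_image, encoder, generator):
    """Combine the content half of one photograph's latent vector with the
    style half of another's, then synthesize the result."""
    z_content = encoder(content_image)      # latent of the content target
    z_style = encoder(style_image)          # latent of the style target

    half = z_content.shape[1] // 2
    content_half = z_content[:, :half]      # content half of the content target
    style_half = z_style[:, half:]          # style half of the style target

    # The generator consumes the concatenated vector and synthesizes an image
    # whose content comes from one input and whose style comes from the other.
    return generator(torch.cat([content_half, style_half], dim=1))
```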