MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Generating music has a few notable differences from generating images and
videos. First, music is an art of time, necessitating a temporal model. Second,
music is usually composed of multiple instruments/tracks with their own
temporal dynamics, but collectively they unfold over time interdependently.
Lastly, musical notes are often grouped into chords, arpeggios or melodies in
polyphonic music, so imposing a chronological ordering on the notes is not
naturally suitable. In this paper, we propose three models for symbolic
multi-track music generation under the framework of generative adversarial
networks (GANs). The three models, which differ in the underlying assumptions
and accordingly the network architectures, are referred to as the jamming
model, the composer model and the hybrid model. We trained the proposed models
on a dataset of over one hundred thousand bars of rock music and applied them
to generate piano-rolls of five tracks: bass, drums, guitar, piano and strings.
A few intra-track and inter-track objective metrics are also proposed to
evaluate the generative results, in addition to a subjective user study. We
show that our models can generate coherent music of four bars right from
scratch (i.e. without human inputs). We also extend our models to human-AI
cooperative music generation: given a specific track composed by a human, we can
generate four additional tracks to accompany it. All code, the dataset and the
rendered audio samples are available at https://salu133445.github.io/musegan/
Comment: to appear at AAAI 2018
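To make the difference between the models concrete, here is a minimal sketch contrasting per-track generators with independent noise (jamming) against a single generator driven by one shared noise vector (composer). The generator architecture, latent size, and piano-roll resolution are placeholder assumptions, not the paper's exact design.

```python
# Minimal sketch of the jamming vs. composer generator layouts described
# above; the Generator architecture, latent size, and piano-roll
# resolution are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

N_TRACKS, Z_DIM = 5, 128          # bass, drums, guitar, piano, strings
BARS, STEPS, PITCHES = 4, 96, 84  # assumed piano-roll resolution

def make_generator(out_tracks: int) -> nn.Module:
    # Stand-in for the paper's convolutional generator.
    return nn.Sequential(
        nn.Linear(Z_DIM, 256), nn.ReLU(),
        nn.Linear(256, out_tracks * BARS * STEPS * PITCHES), nn.Tanh(),
    )

# Jamming model: one independent generator and noise vector per track.
jam_gens = nn.ModuleList([make_generator(1) for _ in range(N_TRACKS)])
jam = torch.cat(
    [g(torch.randn(1, Z_DIM)).view(1, 1, BARS, STEPS, PITCHES)
     for g in jam_gens], dim=1)            # (1, 5, 4, 96, 84)

# Composer model: one generator and a single shared noise vector emit all
# five tracks jointly, so inter-track dependencies are modeled directly.
# (The hybrid model combines both: a shared plus a per-track noise vector.)
composer = make_generator(N_TRACKS)
comp = composer(torch.randn(1, Z_DIM)).view(1, N_TRACKS, BARS, STEPS, PITCHES)
```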
Emergence of Object Segmentation in Perturbed Generative Models
We introduce a novel framework to build a model that can learn how to segment
objects from a collection of images without any human annotation. Our method
builds on the observation that the location of object segments can be perturbed
locally relative to a given background without affecting the realism of a
scene. Our approach is to first train a generative model of a layered scene.
The layered representation consists of a background image, a foreground image
and the mask of the foreground. A composite image is then obtained by
overlaying the masked foreground image onto the background. The generative
model is trained in an adversarial fashion against a discriminator, which
forces the generative model to produce realistic composite images. To force the
generator to learn a representation where the foreground layer corresponds to
an object, we perturb the output of the generative model by introducing a
random shift of both the foreground image and mask relative to the background.
Because the generator is unaware of the shift before computing its output, it
must produce layered representations that are realistic for any such random
perturbation. Finally, we learn to segment an image by defining an autoencoder
consisting of an encoder, which we train, and the pre-trained generator as the
decoder, which we freeze. The encoder maps an image to a feature vector, which
is fed as input to the generator to give a composite image matching the
original input image. Because the generator outputs an explicit layered
representation of the scene, the encoder learns to detect and segment objects.
We demonstrate this framework on real images of several object categories.
Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Spotlight presentation
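A minimal sketch of the random-shift perturbation and compositing step described above is given below; tensor shapes, the shift range, and the wrap-around behaviour of torch.roll are simplifying assumptions.

```python
# Minimal sketch of the random-shift perturbation and compositing step
# described above; tensor shapes, the shift range, and the wrap-around
# behaviour of torch.roll are simplifying assumptions.
import torch

def composite_with_shift(background: torch.Tensor,
                         foreground: torch.Tensor,
                         mask: torch.Tensor,
                         max_shift: int = 8) -> torch.Tensor:
    """background, foreground: (B, 3, H, W); mask: (B, 1, H, W) in [0, 1]."""
    dy, dx = torch.randint(-max_shift, max_shift + 1, (2,))
    # Shift the foreground image and its mask together, relative to the
    # unshifted background (torch.roll wraps at borders, unlike a true shift).
    fg = torch.roll(foreground, shifts=(int(dy), int(dx)), dims=(2, 3))
    m = torch.roll(mask, shifts=(int(dy), int(dx)), dims=(2, 3))
    # Alpha-composite; the discriminator only ever sees this composite, so
    # the generator must stay realistic under any random shift.
    return m * fg + (1 - m) * background
```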
Three-stage binarization of color document images based on discrete wavelet transform and generative adversarial networks
The efficient segmentation of foreground text from the background in degraded
color document images is an active research topic. Because ancient documents
are often imperfectly preserved over long periods, various types of
degradation, including staining, yellowing, and ink seepage, seriously
affect the results of image binarization. In this paper, a three-stage method
is proposed for image enhancement and binarization of degraded color document
images by using discrete wavelet transform (DWT) and generative adversarial
network (GAN). In Stage-1, we apply the DWT and retain the LL subband images to
enhance the image. In Stage-2, the original input image is split into four
single-channel images (Red, Green, Blue and Gray), each of which is used to
train an independent adversarial network. The trained adversarial network
models are used to extract the color foreground information from the images. In
Stage-3, in order to combine global and local features, the output image from
Stage-2 and the original input image are used to train independent adversarial
networks for document binarization. The experimental results
demonstrate that our proposed method outperforms many classical and
state-of-the-art (SOTA) methods on the Document Image Binarization Contest
(DIBCO) dataset. We release our implementation code at
https://github.com/abcpp12383/ThreeStageBinarization
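The Stage-1 enhancement can be sketched with PyWavelets: a single-level 2-D DWT whose LL (low-frequency) subband is kept as the enhanced image. The 'haar' wavelet and single-channel input are assumptions for illustration.

```python
# Minimal sketch of the Stage-1 enhancement: a single-level 2-D DWT whose
# LL (low-frequency) subband is kept as the enhanced image, using
# PyWavelets. The 'haar' wavelet and single-channel input are assumptions.
import numpy as np
import pywt

def ll_subband_enhance(image: np.ndarray) -> np.ndarray:
    """image: 2-D array (one channel); returns the LL approximation."""
    ll, _details = pywt.dwt2(image, 'haar')
    # Discarding the LH/HL/HH detail subbands suppresses high-frequency
    # noise such as ink-seepage texture; LL is a half-resolution image.
    return ll
```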
SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis
Synthesizing realistic images from human-drawn sketches is a challenging
problem in computer graphics and vision. Existing approaches either need exact
edge maps, or rely on retrieval of existing photographs. In this work, we
propose a novel Generative Adversarial Network (GAN) approach that synthesizes
plausible images from 50 categories including motorcycles, horses and couches.
We demonstrate a data augmentation technique for sketches which is fully
automatic, and we show that the augmented data is helpful to our task. We
introduce a new network building block suitable for both the generator and
discriminator which improves the information flow by injecting the input image
at multiple scales. Compared to state-of-the-art image translation methods, our
approach generates more realistic images and achieves significantly higher
Inception Scores.
Comment: Accepted to CVPR 2018
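The multi-scale input-injection idea can be sketched as a block that resizes the raw sketch to each feature resolution and concatenates it before convolution; the channel counts here are assumptions, and SketchyGAN's internal gating is simplified to a plain concatenation plus convolution.

```python
# Minimal sketch of a block that re-injects the input image at a given
# feature scale, in the spirit of the building block described above;
# channel counts are assumptions and SketchyGAN's internal gating is
# simplified to a plain concatenation + convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputInjectionBlock(nn.Module):
    def __init__(self, feat_ch: int, img_ch: int = 1, out_ch: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch + img_ch, out_ch, 3, padding=1)

    def forward(self, feats: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # Resize the raw input to the current feature resolution and
        # concatenate it, so every scale of the generator (or
        # discriminator) sees the sketch directly.
        img = F.interpolate(image, size=feats.shape[2:], mode='bilinear',
                            align_corners=False)
        return F.relu(self.conv(torch.cat([feats, img], dim=1)))
```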