4,313 research outputs found
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Generating music has a few notable differences from generating images and
videos. First, music is an art of time, necessitating a temporal model. Second,
music is usually composed of multiple instruments/tracks with their own
temporal dynamics, but collectively they unfold over time interdependently.
Lastly, musical notes are often grouped into chords, arpeggios or melodies in
polyphonic music, so imposing a strict chronological ordering on notes is not
naturally suitable. In this paper, we propose three models for symbolic
multi-track music generation under the framework of generative adversarial
networks (GANs). The three models, which differ in the underlying assumptions
and accordingly the network architectures, are referred to as the jamming
model, the composer model and the hybrid model. We trained the proposed models
on a dataset of over one hundred thousand bars of rock music and applied them
to generate piano-rolls of five tracks: bass, drums, guitar, piano and strings.
A few intra-track and inter-track objective metrics are also proposed to
evaluate the generative results, in addition to a subjective user study. We
show that our models can generate coherent music of four bars right from
scratch (i.e. without human inputs). We also extend our models to human-AI
cooperative music generation: given a specific track composed by a human, we can
generate four additional tracks to accompany it. All code, the dataset and the
rendered audio samples are available at https://salu133445.github.io/musegan/ .
Comment: to appear at AAAI 201
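The abstract describes generating piano-rolls of five tracks over four bars. A minimal sketch of that multi-track piano-roll representation is below; the time-step and pitch resolutions are assumptions for illustration, not values taken from the paper, and the random "generator" is only a stand-in for a trained GAN generator.

```python
import numpy as np

# Hypothetical dimensions for a multi-track piano-roll, following the
# abstract: 5 tracks (bass, drums, guitar, piano, strings), 4 bars.
# STEPS_PER_BAR and N_PITCHES are assumed, not taken from the paper.
N_TRACKS, N_BARS = 5, 4
STEPS_PER_BAR, N_PITCHES = 96, 84

def random_pianoroll(rng):
    """Stand-in for a generator's output: one binary note activation
    per (track, bar, time step, pitch)."""
    logits = rng.standard_normal((N_TRACKS, N_BARS, STEPS_PER_BAR, N_PITCHES))
    return (logits > 2.0).astype(np.uint8)  # threshold -> sparse activations

rng = np.random.default_rng(0)
roll = random_pianoroll(rng)
print(roll.shape)  # (5, 4, 96, 84)
```

In the jamming model each track would get its own generator producing one `(N_BARS, STEPS_PER_BAR, N_PITCHES)` slice of this tensor, while the composer model would emit the whole tensor from a single shared latent vector.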
Personalization of Saliency Estimation
Most existing saliency models use low-level features or task descriptions
when generating attention predictions. However, the link between observer
characteristics and gaze patterns is rarely investigated. We present a novel
saliency prediction technique which takes viewers' identities and personal
traits into consideration when modeling human attention. Instead of only
computing image salience for average observers, we consider the interpersonal
variation in the viewing behaviors of observers with different personal traits
and backgrounds. We present an enriched derivative of the GAN network, which is
able to generate personalized saliency predictions when fed with image stimuli
and specific information about the observer. Our model contains a generator
which generates grayscale saliency heat maps based on the image and an observer
label. The generator is paired with an adversarial discriminator which learns
to distinguish generated salience from ground truth salience. The discriminator
also has the observer label as an input, which contributes to the
personalization ability of our approach. We evaluate the performance of our
personalized salience model by comparison with a benchmark model along with
other un-personalized predictions, and illustrate improvements in prediction
accuracy for all tested observer groups.
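Both the generator and the discriminator described above take the observer label as an extra input. One common way to condition a convolutional network on such a label is to tile it into constant channel planes and stack it behind the image; the sketch below shows that conditioning step only, with hypothetical sizes (`H`, `W`, `N_OBSERVERS`) that are not from the paper.

```python
import numpy as np

# Hypothetical sizes for illustration; the paper does not specify these.
H, W, N_OBSERVERS = 64, 64, 10

def with_observer_channels(saliency_map, observer_id):
    """Tile a one-hot observer label into H x W planes and stack it behind
    the grayscale saliency map, so a conv net can condition on the observer."""
    one_hot = np.zeros(N_OBSERVERS)
    one_hot[observer_id] = 1.0
    label_planes = np.broadcast_to(one_hot, (H, W, N_OBSERVERS))
    return np.concatenate([saliency_map[..., None], label_planes], axis=-1)

x = with_observer_channels(np.zeros((H, W)), observer_id=3)
print(x.shape)  # (64, 64, 11): 1 saliency channel + 10 label channels
```

The discriminator would receive the same label planes alongside either a generated or a ground-truth heat map, which is what lets it penalize saliency predictions that are plausible in general but wrong for that observer.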
Learning Temporal Transformations From Time-Lapse Videos
Based on life-long observations of physical, chemical, and biological phenomena
in the natural world, humans can often easily picture in their minds what an
object will look like in the future. But, what about computers? In this paper,
we learn computational models of object transformations from time-lapse videos.
In particular, we explore the use of generative models to create depictions of
objects at future times. These models explore several different prediction
tasks: generating a future state given a single depiction of an object,
generating a future state given two depictions of an object at different times,
and generating future states recursively in a recurrent framework. We provide
both qualitative and quantitative evaluations of the generated results, and
also conduct a human evaluation to compare variations of our models.
Comment: ECCV201
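The third prediction task above generates future states recursively: each generated frame is fed back in as input for the next step. A minimal sketch of that rollout loop follows; `step` is a hypothetical placeholder for the learned transition model, here replaced by a fixed linear map purely so the example runs.

```python
import numpy as np

def step(frame):
    """Placeholder transition model (NOT a trained network): a fixed decay
    standing in for one learned time-lapse transformation step."""
    return 0.9 * frame

def rollout(first_frame, n_future):
    """Generate n_future frames recursively from a single depiction,
    feeding each prediction back in as the next input."""
    frames, frame = [], first_frame
    for _ in range(n_future):
        frame = step(frame)
        frames.append(frame)
    return np.stack(frames)

future = rollout(np.ones((8, 8)), n_future=3)
print(future.shape)  # (3, 8, 8)
```

The two-depiction variant in the abstract would instead condition `step` on a pair of frames at different times, letting the model infer the rate of change before extrapolating.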
- …