Characteristics of optical multi-peak solitons induced by higher-order effects in an erbium-doped fiber system
We study multi-peak solitons \textit{on a plane-wave background} in an
erbium-doped fiber system with some higher-order effects, which is governed by
a coupled Hirota and Maxwell-Bloch (H-MB) model. The important characteristics
of multi-peak solitons induced by the higher-order effects, such as the
velocity changes, localization or periodicity attenuation, and state
transitions, are revealed in detail. In particular, our results demonstrate
explicitly that a multi-peak soliton can be converted to an anti-dark soliton
when the periodicity vanishes; on the other hand, a multi-peak soliton is
transformed to a periodic wave when the localization vanishes. Numerical
simulations are performed to confirm the propagation stability of multi-peak
solitons riding on a plane-wave background. Finally, we compare and discuss the
similarities and differences of multi-peak solitons in special degenerate cases of
the H-MB system with general existence conditions.
Comment: 7 pages, 4 figures
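For reference, one commonly quoted normalization of the coupled Hirota-Maxwell-Bloch system reads as follows; the exact coefficients, symbols, and scalings here are an assumption and may differ from those used in the paper:
\begin{align}
E_z &= i\left(\tfrac{1}{2}E_{tt} + |E|^2E\right) + \epsilon\left(E_{ttt} + 6|E|^2E_t\right) + 2p,\\
p_t &= 2i\omega p + 2E\eta,\\
\eta_t &= -\left(Ep^* + E^*p\right),
\end{align}
where $E$ is the field envelope, $p$ the polarization and $\eta$ the population inversion of the resonant erbium atoms, $\epsilon$ the strength of the higher-order (Hirota) terms, and $\omega$ a frequency detuning.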
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Generating music has a few notable differences from generating images and
videos. First, music is an art of time, necessitating a temporal model. Second,
music is usually composed of multiple instruments/tracks with their own
temporal dynamics, but collectively they unfold over time interdependently.
Lastly, musical notes are often grouped into chords, arpeggios or melodies in
polyphonic music, so imposing a chronological ordering on notes is not a
natural fit. In this paper, we propose three models for symbolic
multi-track music generation under the framework of generative adversarial
networks (GANs). The three models, which differ in the underlying assumptions
and accordingly the network architectures, are referred to as the jamming
model, the composer model and the hybrid model. We trained the proposed models
on a dataset of over one hundred thousand bars of rock music and applied them
to generate piano-rolls of five tracks: bass, drums, guitar, piano and strings.
A few intra-track and inter-track objective metrics are also proposed to
evaluate the generative results, in addition to a subjective user study. We
show that our models can generate coherent music of four bars right from
scratch (i.e. without human inputs). We also extend our models to human-AI
cooperative music generation: given a specific track composed by a human, we can
generate four additional tracks to accompany it. All code, the dataset and the
rendered audio samples are available at https://salu133445.github.io/musegan/.
Comment: to appear at AAAI 2018
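To make the architectural distinction concrete, below is a minimal PyTorch-style sketch (not the authors' implementation) contrasting how the three generator variants consume latent noise: the jamming model gives each track its own generator and its own noise, the composer model drives all tracks from one shared noise vector, and the hybrid model concatenates a shared and a per-track noise vector. All module names, layer sizes, and the piano-roll bar shape are illustrative assumptions.

# Minimal sketch (assumed PyTorch interface; not the MuseGAN reference code).
import torch
import torch.nn as nn

N_TRACKS = 5           # bass, drums, guitar, piano, strings
Z_DIM = 64             # latent size (illustrative)
BAR_SHAPE = (96, 84)   # time steps x pitches per bar (illustrative)

def bar_generator(in_dim=Z_DIM):
    # Placeholder per-track network mapping a latent vector to one bar.
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, BAR_SHAPE[0] * BAR_SHAPE[1]), nn.Tanh(),
    )

class JammingGenerator(nn.Module):
    # Jamming model: each track has its own generator and its own noise.
    def __init__(self):
        super().__init__()
        self.tracks = nn.ModuleList([bar_generator() for _ in range(N_TRACKS)])

    def forward(self, z_tracks):                      # (batch, N_TRACKS, Z_DIM)
        bars = [g(z_tracks[:, i]) for i, g in enumerate(self.tracks)]
        return torch.stack(bars, 1).view(-1, N_TRACKS, *BAR_SHAPE)

class ComposerGenerator(nn.Module):
    # Composer model: one shared noise vector drives all tracks jointly.
    def __init__(self):
        super().__init__()
        self.tracks = nn.ModuleList([bar_generator() for _ in range(N_TRACKS)])

    def forward(self, z_shared):                      # (batch, Z_DIM)
        bars = [g(z_shared) for g in self.tracks]
        return torch.stack(bars, 1).view(-1, N_TRACKS, *BAR_SHAPE)

class HybridGenerator(nn.Module):
    # Hybrid model: each track sees the shared noise concatenated with its
    # own private noise, combining inter-track and intra-track randomness.
    def __init__(self):
        super().__init__()
        self.tracks = nn.ModuleList(
            [bar_generator(in_dim=2 * Z_DIM) for _ in range(N_TRACKS)])

    def forward(self, z_shared, z_tracks):
        bars = [g(torch.cat([z_shared, z_tracks[:, i]], dim=-1))
                for i, g in enumerate(self.tracks)]
        return torch.stack(bars, 1).view(-1, N_TRACKS, *BAR_SHAPE)

# Example: generate a batch of 8 five-track bars with the hybrid variant.
fake = HybridGenerator()(torch.randn(8, Z_DIM), torch.randn(8, N_TRACKS, Z_DIM))
print(fake.shape)  # torch.Size([8, 5, 96, 84])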