SongRewriter: A Chinese Song Rewriting System with Controllable Content and Rhyme Scheme
Although lyrics generation has achieved significant progress in recent years,
it has limited practical applications because the generated lyrics cannot be
performed without composing compatible melodies. In this work, we bridge this
practical gap by proposing a song rewriting system which rewrites the lyrics of
an existing song such that the generated lyrics are compatible with the rhythm
of the existing melody and thus singable. In particular, we propose
SongRewriter, a controllable Chinese lyric generation and editing system which
assists users without prior knowledge of melody composition. The system is
trained by a randomized multi-level masking strategy which produces a unified
model for generating entirely new lyrics or editing a few fragments. To improve
the controllability of the generation process, we further incorporate a keyword
prompt to control the lexical choices of the content and propose novel decoding
constraints and a vowel modeling task to enable flexible end and internal rhyme
schemes. While prior rhyming metrics are mainly for rap lyrics, we propose
three novel rhyming evaluation metrics for song lyrics. Both automatic and
human evaluations show that the proposed model outperforms state-of-the-art
models in both content and rhyming quality. Our code and models, implemented
with the MindSpore Lite tool, will be made available.
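The abstract above describes a randomized multi-level masking strategy that yields one model for both full generation and fragment editing. A minimal sketch of such a strategy, with hypothetical function and parameter names (the paper's actual masking levels and probabilities are not given here), might look like:

```python
import random

MASK = "[MASK]"

def multi_level_mask(lines, line_prob=0.3, char_prob=0.15, seed=0):
    """Hypothetical sketch of randomized multi-level masking: a whole
    line is masked with probability `line_prob`; otherwise individual
    characters are masked with probability `char_prob`. A model trained
    to reconstruct such inputs can generate entirely new lyrics (all
    lines masked) or edit a few fragments (few characters masked)."""
    rng = random.Random(seed)
    masked = []
    for line in lines:
        if rng.random() < line_prob:
            # Line-level mask: the model regenerates the whole line.
            masked.append(MASK)
        else:
            # Character-level mask: the model edits small fragments.
            masked.append("".join(
                MASK if rng.random() < char_prob else ch
                for ch in line))
    return masked
```

Setting `line_prob=1.0` recovers the full-generation regime, while small probabilities produce light editing targets, which is how one randomized strategy can unify the two use cases.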
ReLyMe: Improving Lyric-to-Melody Generation by Incorporating Lyric-Melody Relationships
Lyric-to-melody generation, which generates melody according to given lyrics,
is one of the most important automatic music composition tasks. With the rapid
development of deep learning, previous works address this task with end-to-end
neural network models. However, deep learning models cannot well capture the
strict but subtle relationships between lyrics and melodies, which compromises
the harmony between lyrics and generated melodies. In this paper, we propose
ReLyMe, a method that incorporates Relationships between Lyrics and Melodies
from music theory to ensure the harmony between lyrics and melodies.
Specifically, we first introduce several principles that lyrics and melodies
should follow in terms of tone, rhythm, and structure relationships. These
principles are then integrated into neural network lyric-to-melody models by
adding corresponding constraints during the decoding process to improve the
harmony between lyrics and melodies. We use a series of objective and
subjective metrics to evaluate the generated melodies. Experiments on both
English and Chinese song datasets show the effectiveness of ReLyMe,
demonstrating the superiority of incorporating lyric-melody relationships from
the music domain into neural lyric-to-melody generation.
Comment: Accepted by ACMMM 2022, ora
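The abstract above describes integrating music-theory principles as constraints during decoding. One way such constraint-aware decoding can work, sketched here with hypothetical candidate and constraint representations (ReLyMe's actual scoring scheme is not reproduced), is to subtract a penalty from each candidate note's model score before selection:

```python
def rerank_with_constraints(candidates, constraints, weight=1.0):
    """Hypothetical sketch of constraint-aware decoding: each candidate
    note is a dict with a model log-probability; music-theory constraints
    (tone, rhythm, structure) return non-negative penalties that are
    subtracted, weighted, from the score before picking the best note."""
    def score(cand):
        penalty = sum(c(cand) for c in constraints)
        return cand["logprob"] - weight * penalty
    return max(candidates, key=score)

def tone_constraint(prev_pitch, tone_rising):
    """Example (assumed) tone principle: if the lyric tone rises, a
    melody that falls below the previous pitch is penalized."""
    def check(cand):
        if tone_rising and cand["pitch"] < prev_pitch:
            return 1.0
        return 0.0
    return check
```

With a strong enough weight, the constraint can override the raw model preference, steering decoding toward melodies that stay harmonious with the lyrics.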
Unsupervised Melody-to-Lyric Generation
Automatic melody-to-lyric generation is a task in which song lyrics are
generated to go with a given melody. It is of significant practical interest
and more challenging than unconstrained lyric generation as the music imposes
additional constraints onto the lyrics. The training data is limited as most
songs are copyrighted, resulting in models that underfit the complicated
cross-modal relationship between melody and lyrics. In this work, we propose a
method for generating high-quality lyrics without training on any aligned
melody-lyric data. Specifically, we design a hierarchical lyric generation
framework that first generates a song outline and then the complete lyrics.
The framework enables disentanglement of training (based purely on text) from
inference (melody-guided text generation) to circumvent the shortage of
parallel data.
We leverage the segmentation and rhythm alignment between melody and lyrics
to compile the given melody into decoding constraints as guidance during
inference. The two-step hierarchical design also enables content control via
the lyric outline, a much-desired feature for democratizing collaborative song
creation. Experimental results show that our model can generate high-quality
lyrics that are more on-topic, singable, intelligible, and coherent than strong
baselines, for example SongMASS, a SOTA model trained on a parallel dataset,
with a 24% relative overall quality improvement based on human ratings.
Comment: Accepted to ACL 23. arXiv admin note: substantial text overlap with arXiv:2305.0776
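The abstract above describes compiling a melody into decoding constraints via segmentation and rhythm alignment. A minimal sketch of one plausible compilation step, with hypothetical names and a simple one-syllable-per-note assumption (the paper's actual alignment rules are not given here), could be:

```python
def melody_to_constraints(melody_phrases):
    """Hypothetical sketch of compiling a melody into decoding
    constraints: each melodic phrase (a list of note durations in
    beats) yields a target syllable count -- assuming one syllable per
    note -- plus the positions of long notes, where stressed syllables
    would preferably fall during melody-guided inference."""
    constraints = []
    for phrase in melody_phrases:
        constraints.append({
            "syllables": len(phrase),  # one syllable per note (assumption)
            "stress_positions": [i for i, d in enumerate(phrase) if d >= 1.0],
        })
    return constraints
```

During inference, a decoder would then only accept lyric lines whose syllable counts and stress patterns satisfy these per-phrase constraints, which is what makes the generated lyrics singable without any parallel melody-lyric training data.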
Key Components of Musical Discourse Analysis
Musical discourse analysis is an interdisciplinary study which is incomplete without consideration of relevant social, linguistic, psychological, visual, gestural, ritual, technical, historical and musicological aspects. In the framework of Critical Discourse Analysis, musical discourse can be interpreted as social practice: it refers to specific means of representing specific aspects of the social (musical) sphere. The article introduces a general view of contemporary musical discourse, and analyses its genres from the perspectives of ‘semiosis’, ‘social agents’, ‘social relations’, ‘social context’, and ‘text’. These components of musical discourse analysis, in their various aspects and combinations, should help thoroughly examine the context of contemporary musical art, and determine linguistic features specific to different genres of musical discourse.