Harmony in Time: Memory, Consciousness, and Expectation in Beethoven's Waldstein Sonata, Op. 53
Harmonic expectations in Western tonal music are formed over an individual's lifetime through repeated encounters with commonly recurring patterns of chord relationships. The recognition and identification of these patterns, particularly when anticipated patterns are denied, are expressed on a conscious level. Although identified and articulated from conscious experience, a listener's attention may not be actively engaged in harmonic processing; moreover, the identification of deviations may arise from nonconscious processing of harmonic events. This paper identifies the processes involved in forming and expressing harmonic expectation and its subsequent denial, as well as the nonconscious processing that influences this recognition. Additionally, it theorizes that expectations on a larger scale, beyond the chordal level, may be generated and fulfilled nonconsciously. The paper concludes with an analysis of Beethoven's Waldstein Sonata, identifying moments of conflict between small-scale denials of expectation within the fulfillment of large-scale processes.
Emotion-Conditioned Melody Harmonization with Hierarchical Variational Autoencoder
Existing melody harmonization models have made great progress in improving
the quality of generated harmonies, but most of them ignore the emotions
beneath the music. Moreover, the harmonies generated by previous methods
lack variability. To solve these problems, we propose a novel
LSTM-based Hierarchical Variational Auto-Encoder (LHVAE) to investigate the
influence of emotional conditions on melody harmonization, while improving the
quality of generated harmonies and capturing the abundant variability of chord
progressions. Specifically, LHVAE incorporates latent variables and emotional
conditions at different levels (piece- and bar-level) to model the global and
local music properties. Additionally, we introduce an attention-based melody
context vector at each step to better learn the correspondence between melodies
and harmonies. Experimental results of the objective evaluation show that our
proposed model outperforms other LSTM-based models. From the subjective
evaluation, we conclude that altering the chords alone hardly changes the
overall emotion of the music. The qualitative analysis demonstrates the ability
of our model to generate variable harmonies.
Comment: Accepted by IEEE SMC 202
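The attention-based melody context vector the abstract mentions can be sketched in a few lines: score each melody frame against the current decoder state, normalize with a softmax, and take the weighted sum. This is a minimal illustration, not the LHVAE implementation; the function name, the dot-product scoring, and the use of plain lists of floats are all our own assumptions.

```python
import math

def attention_context(decoder_state, melody_states):
    """Attention-based melody context vector (illustrative sketch).

    Scores each melody frame against the current decoder state with a
    dot product, normalizes with softmax, and returns the weighted sum.
    `decoder_state` and each entry of `melody_states` are plain lists of
    floats; names and dimensions are assumptions, not LHVAE's own.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    scores = [dot(decoder_state, m) for m in melody_states]
    peak = max(scores)                       # subtract max for numerical stability
    weights = [math.exp(s - peak) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted sum of melody states -> context vector
    dim = len(melody_states[0])
    return [sum(w * m[i] for w, m in zip(weights, melody_states))
            for i in range(dim)]
```

In a real model the dot product would typically be replaced by a learned bilinear or additive score, but the normalize-and-sum structure is the same.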
NEUROPLASTIK
An artistic and scientific exploration in the field of artificial intelligence, and how computers might “choose” to create music.
Emotional Processing in Music: Study in Affective Responses to Tonal Modulation in Controlled Harmonic Progressions and Real Music
Tonal modulation is one of the main structural and expressive aspects of music in the European musical tradition. Experiment 1 investigated affective responses to modulations to all eleven major and minor keys (relative to the starting tonality) in brief, specially constructed harmonic progressions, using six bipolar scales related to valence, potency, and synaesthesia. The results indicated the dependence of affective response on the degree of modulation, in terms of key proximity and of mode. Experiment 2 examined affective responses to the most common modulations in nineteenth-century piano music: to the subdominant, dominant, and minor sixth in the major mode. The stimuli were a balanced set of both harmonic progressions (as in Experiment 1) and real music excerpts. The results agreed with theoretical models of violations of expectancy and of proximity based on the circle of fifths, and demonstrated the influence of melodic direction and musical style on emotional response to tonal modulation.
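The key-proximity measure the abstract refers to can be sketched as the minimal number of steps between two keys around the circle of fifths. This is a simple stand-in for illustration only; the restriction to major keys and the sharp-based spellings are our own assumptions, not the study's.

```python
def fifths_distance(key_a, key_b):
    """Minimal number of steps around the circle of fifths between two
    major keys (an illustrative proxy for key proximity; the key names
    and major-only scope are assumptions, not the study's model)."""
    circle = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]
    i, j = circle.index(key_a), circle.index(key_b)
    steps = abs(i - j)
    # The circle wraps around, so count the shorter way
    return min(steps, len(circle) - steps)
```

Under this measure the dominant and subdominant are equally close to the tonic (one step each), matching the intuition that these are the "nearest" modulations.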
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
Numerous studies in the field of music generation have demonstrated
impressive performance, yet virtually no models are able to directly generate
music to match accompanying videos. In this work, we develop a generative music
AI framework, Video2Music, that can match a provided video. We first curated a
unique collection of music videos. Then, we analysed the music videos to obtain
semantic, scene offset, motion, and emotion features. These distinct features
are then employed as guiding input to our music generation model. We transcribe
the audio files into MIDI and chords, and extract features such as note density
and loudness. This results in a rich multimodal dataset, called MuVi-Sync, on
which we train a novel Affective Multimodal Transformer (AMT) model to generate
music given a video. This model includes a novel mechanism to enforce affective
similarity between video and music. Finally, post-processing is performed based
on a biGRU-based regression model to estimate note density and loudness based
on the video features. This ensures a dynamic rendering of the generated chords
with varying rhythm and volume. In a thorough experiment, we show that our
proposed framework can generate music that matches the video content in terms
of emotion. The musical quality, along with the quality of music-video
matching, is confirmed in a user study. The proposed AMT model, together with
the new MuVi-Sync dataset, presents a promising step for the new task of music
generation for videos.
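The post-processing stage the abstract describes, turning predicted note density and loudness into a dynamic rendering of the generated chords, can be sketched as a simple mapping from regression outputs to rhythmic subdivision and MIDI velocity. This sketch does not include the biGRU regression model itself, and the value ranges and thresholds below are our own assumptions, not the paper's.

```python
def render_params(note_density, loudness):
    """Map regression outputs to rendering parameters (hypothetical
    post-processing in the spirit of the abstract; the value ranges and
    thresholds are assumptions, not the paper's).

    note_density: predicted notes per beat, >= 0
    loudness: predicted loudness, normalized to [0, 1]
    Returns (subdivision_per_beat, midi_velocity).
    """
    # Quantize density to a musically usable subdivision per beat.
    if note_density < 1.5:
        subdivision = 1          # quarter notes
    elif note_density < 3.0:
        subdivision = 2          # eighth notes
    else:
        subdivision = 4          # sixteenth notes
    # Scale loudness into the MIDI velocity range 1..127.
    velocity = max(1, min(127, round(loudness * 127)))
    return subdivision, velocity
```

A renderer would then strike each chord `subdivision` times per beat at the given velocity, so that busier, louder video segments produce denser, stronger accompaniment.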
Using musical structures to communicate emotion
This study investigates the hypothesis that music has the ability to strongly influence emotions in listeners. It begins by challenging the accuracy of this presumption, provides a general psychological and philosophical overview of human emotions and their relation to music, and hypothesises a theory that accounts for the numerous different findings by authors around this topic. The study then attempts to investigate in what manner specific musical structures are linked to the expression of certain emotions; firstly through a literature review and secondly through the execution of empirical tests. These findings are summarised in the Conclusion. An Annexure to this study provides graphic representations of specific musical structures on valence x arousal diagrams that are of value to composers of music.
Music Information Retrieval: An Inspirational Guide to Transfer from Related Disciplines
The emerging field of Music Information Retrieval (MIR) has been influenced by neighboring domains in signal processing and machine learning, including automatic speech recognition, image processing, and text information retrieval. In this contribution, we start with concrete examples of methodology transfer between speech and music processing, oriented on the building blocks of pattern recognition: preprocessing, feature extraction, and classification/decoding. We then assume a higher-level viewpoint when describing sources of mutual inspiration derived from text and image information retrieval. We conclude that dealing with the peculiarities of music in MIR research has contributed to advancing the state of the art in other fields, and that many future challenges in MIR are strikingly similar to those that other research areas have been facing.
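The pattern-recognition building blocks the abstract names, preprocessing, feature extraction, and classification, can be sketched end to end on toy signals. This is a minimal illustration with features of our own choosing (frame energy and zero-crossing rate) and a nearest-centroid classifier; real MIR systems use far richer features and models.

```python
import math

def zero_crossing_rate(signal):
    """Fraction of adjacent samples whose sign differs (a classic
    cheap feature shared by speech and music processing)."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (len(signal) - 1)

def extract_features(signal):
    """Feature extraction step: mean energy plus zero-crossing rate."""
    energy = sum(x * x for x in signal) / len(signal)
    return (energy, zero_crossing_rate(signal))

def nearest_centroid(features, centroids):
    """Classification step: return the label of the closest centroid."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))
```

For example, a low-frequency and a high-frequency sine tone share similar energy but differ sharply in zero-crossing rate, so the nearest-centroid step separates them cleanly.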
Approaches to The Potential of Orchestration and Arranging Techniques in Music Education: What Can Be Done with A School Song?
This qualitative study systematically examines the transfer of chord progression patterns used in
different music genres to string instruments. The study emphasizes the art of orchestration, arranging
approaches, the visual representation of chord progressions on the piano staff, and the fundamental
principles of counterpoint. The chord progression patterns are supported by sample compositions,
while orchestration and arranging approaches are presented in a structured manner. The transfer of
chord degrees and root-position fifth intervals to piano and string instruments is demonstrated.
Alongside the presented topics, an arrangement study has been conducted on a school song. It is
believed that the organized musical ideas emerging from this subject can enhance the field of music
education. In the example orchestrations, notating the root-position fifths on the violoncello two
octaves below the first violin, lowering the triadic voices of the chords by one octave, and the
choices made in distributing the chord voices have resulted in clear and rich tones. On the basis of
this study, further research is recommended to examine the effects of orchestration and arranging
approaches on students.
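The voicing procedure the abstract describes can be sketched as simple MIDI-number arithmetic: the root-position fifth placed on the violoncello two octaves below the first violin, and the triadic voices lowered by one octave. The octave shifts follow the abstract, but the assignment of the lowered triad to the viola, the major-triad spelling, and the function name are our own illustrative assumptions.

```python
def arrange_chord(root_midi):
    """Distribute a major triad across string parts in the manner the
    study describes (illustrative sketch; the octave choices follow the
    abstract, but part assignments are assumptions).

    root_midi: MIDI number of the chord root as played by violin I.
    Returns a dict of part name -> list of MIDI note numbers.
    """
    triad = [root_midi, root_midi + 4, root_midi + 7]   # root, third, fifth
    return {
        # Root-position fifth, two octaves (24 semitones) below violin I.
        "violoncello": [root_midi - 24, root_midi - 24 + 7],
        # Triadic voices lowered by one octave (12 semitones).
        "viola": [n - 12 for n in triad],
        "violin_1": triad,
    }
```

For a C major chord rooted on C5 (MIDI 72), this places C3 and G3 on the cello, the triad an octave lower on the viola, and the original triad on violin I, a spacing consistent with the clear, rich textures the study reports.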