TUTTI! - Music Composition as Dialogue
As an engineer, when I could not comprehend a physical phenomenon, I turned to mathematics. As a mathematician, when I could not link sciences to humanity, I turned to music. As a music composer, I no longer see things, I see others.
The novel method of music composition presented herein is a first comprehensive framework, system and architectonic template, drawing on Mikhail Bakhtin's dialogism as well as on research in auditory perception and cognition to create musical dialogue as a means of including and engaging participants in musical communication. Beyond immediate artistic intent, I strive to compose music that fosters inclusiveness and collaboration as a relational social gesture, in the hope that it might encourage people and society to embrace their differences and collaborate with the 'others' around them.
After probing aesthetics, communication studies and sociology, I argue that dialogism is well suited to the aims of the current research. With dialogism as a guiding philosophy, the chapters then examine the relationship between music and language, perception as authorship, intertextuality, the interplay of imagination and understanding, means of arousal in music, mimesis, motion in music and rhythmic entrainment. Employing findings from Gestalt psychology, psychoacoustics, auditory scene analysis, cognition and the psychology of expectation, the remaining chapters propose a cognitively informed polyphonic music composition method capable of reproducing the different constituents of dialogic communication by creating and organizing melodic, harmonic, rhythmic and structural elements. Music theory and principles of orchestration then lead into music composition, as examples demonstrate how dialogue scored between voice-parts provides opportunities for performers to interact with each other and, consequently, engage listeners experiencing the collaboration.
As dialogue can be identified in various works, I postulate that the presented Dialogical Music Composition Method can also serve as a method of music analysis. This personal method of composition also supplies tools that other musicians can opt to employ when endeavouring to build balanced dialogue in music.
If visibility is key to identity, then composing music that potentially enters into dialogue with each and every voice promotes 'humanity' through inclusivity, yielding a united Tutti.
Data-driven, memory-based computational models of human segmentation of musical melody
When listening to a piece of music, listeners often identify distinct sections or segments within the piece. Music segmentation is recognised as an important process in the abstraction of musical contents, and researchers have attempted to explain how listeners perceive and identify the boundaries of these segments.
The present study seeks to develop a system capable of performing melodic segmentation in an unsupervised way, by learning from non-annotated musical data. Probabilistic learning methods have been widely used to acquire regularities in large sets of data, with many successful applications in language and speech processing. Some of these applications have found their counterparts in music research and have been used for music prediction and generation, music retrieval or music analysis, but seldom to model perceptual and cognitive aspects of music listening.
We present some preliminary experiments on melodic segmentation, which highlight the importance of memory and the role of learning in music listening. These experiments have motivated the development of a computational model for melodic segmentation based on a probabilistic learning paradigm.
The model uses a Mixed-memory Markov Model to estimate sequence probabilities from pitch- and time-based parametric descriptions of melodic data. We follow the assumption that listeners' perception of feature salience in melodies is strongly related to expectation. Moreover, we conjecture that marked entropy variations in certain melodic features coincide with segmentation boundaries as indicated by listeners.
Model segmentation predictions are compared with the results of a listening study on melodic segmentation carried out with real listeners. Overall, the results show that changes in prediction entropy along the pieces correspond significantly with the listeners' segmentation boundaries.
Although the model relies only on information-theoretic principles to predict the location of segmentation boundaries, most predicted segments can be matched with boundaries of groupings usually attributed to Gestalt rules. These results question previous research supporting a separation between learning-based and innate bottom-up processes of melodic grouping, and suggest that some of the latter processes can emerge from acquired regularities in melodic data.
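The entropy-based boundary criterion described above can be illustrated in miniature. The following Python sketch is not the thesis's Mixed-memory Markov Model: it substitutes a smoothed first-order Markov model over pitch intervals and places a boundary wherever the next-symbol prediction entropy rises well above its average. All function names, the Laplace smoothing and the z-score threshold are illustrative assumptions.

```python
from collections import defaultdict
import math

def train_markov(sequences):
    """Count first-order transitions over symbols (e.g. pitch intervals)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def entropy(counts, context, alphabet, alpha=1.0):
    """Shannon entropy (bits) of the Laplace-smoothed next-symbol distribution."""
    ctx = counts[context]
    total = sum(ctx.values()) + alpha * len(alphabet)
    h = 0.0
    for sym in alphabet:
        p = (ctx.get(sym, 0) + alpha) / total
        h -= p * math.log2(p)
    return h

def segment(melody, counts, alphabet, z=1.0):
    """Mark a boundary after each note whose prediction entropy is an outlier.

    A note's entropy is computed from the model's uncertainty about what
    follows it; boundaries go where that uncertainty spikes (z-score > z).
    """
    hs = [entropy(counts, sym, alphabet) for sym in melody[:-1]]
    mean = sum(hs) / len(hs)
    sd = (sum((h - mean) ** 2 for h in hs) / len(hs)) ** 0.5 or 1.0
    return [i + 1 for i, h in enumerate(hs) if (h - mean) / sd > z]
```

For example, after training on interval sequences from a small corpus, an interval never seen in training yields a near-uniform (high-entropy) prediction, so the model tends to place a boundary right after it; this is the simplest form of the "outstanding entropy variation" heuristic the abstract describes.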