525 research outputs found

    A latent rhythm complexity model for attribute-controlled drum pattern generation

    Abstract: Most music listeners have an intuitive understanding of the notion of rhythm complexity. Musicologists and scientists, however, have long sought objective ways to measure and model such a distinctively perceptual attribute of music. Whereas previous research has mainly focused on monophonic patterns, this article presents a novel perceptually informed rhythm complexity measure specifically designed for polyphonic rhythms, i.e., patterns in which multiple simultaneous voices cooperate toward creating a coherent musical phrase. We focus on drum rhythms in the Western musical tradition and validate the proposed measure through a perceptual test in which users were asked to rate the complexity of real-life drumming performances. We then propose a latent vector model for rhythm complexity based on a recurrent variational autoencoder tasked with learning the complexity of input samples and embedding it along one latent dimension. Aided by an auxiliary adversarial loss term promoting disentanglement, this effectively regularizes the latent space, thus enabling explicit control over the complexity of newly generated patterns. Trained on a large corpus of MIDI files of polyphonic drum recordings, the proposed method proved capable of generating coherent and realistic samples at the desired complexity value. In our experiments, output and target complexities show a high correlation, and the latent space appears interpretable and continuously navigable. On the one hand, this model can readily contribute to a wide range of creative applications, including, for instance, assisted music composition and automatic music generation. On the other hand, it brings us one step closer to the ambitious goal of equipping machines with a human-like understanding of perceptual features of music.
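As a rough illustration of the attribute-regularized latent space described above — a minimal sketch, not the paper's implementation: the adversarial disentanglement term is omitted, and the weights `beta` and `gamma`, the L2 tie between latent dimension 0 and the complexity attribute, and the function names are all assumptions for this sketch:

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) per sample, summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def attribute_regularized_loss(recon_err, mu, logvar, z, target_complexity,
                               beta=1.0, gamma=1.0):
    # Usual VAE objective (reconstruction + KL), plus an L2 penalty tying
    # latent dimension 0 to the target complexity value.
    kl = kl_standard_normal(mu, logvar)
    attr = (z[..., 0] - target_complexity) ** 2
    return recon_err + beta * kl + gamma * attr

def generate_latent(desired_complexity, latent_dim=8, rng=None):
    # At generation time, fix dimension 0 to the desired complexity and
    # sample the remaining dimensions from the prior.
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(latent_dim)
    z[0] = desired_complexity
    return z
```

Fixing one latent dimension while sampling the rest is what makes the complexity of newly generated patterns explicitly controllable once the regularization has taken hold.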

    Interactive real-time musical systems

    This PhD thesis focuses on the development of automatic accompaniment systems. We investigate previous systems and look at a range of approaches that have been attempted for the problem of beat tracking. Most beat trackers are intended for music information retrieval, where a 'black box' approach is tested on a wide variety of music genres. We highlight some of the difficulties facing offline beat trackers and design a new approach for the problem of real-time drum tracking, developing a system, B-Keeper, which makes reasonable assumptions about the nature of the signal and is provided with useful prior knowledge. Having developed the system with offline studio recordings, we test the system with human players. Existing offline evaluation methods seem less suitable for a performance system, since we also wish to evaluate the interaction between musician and machine. Although statistical data may reveal quantifiable measurements of the system's predictions and behaviour, we also want to test how well it functions within the context of a live performance. To do so, we devise an evaluation strategy that contrasts a machine-controlled accompaniment with one controlled by a human. We also present recent work on real-time multiple pitch tracking, which is then extended to provide automatic accompaniment for harmonic instruments such as guitar. By aligning salient notes in the output from a dual pitch-tracking process, we make changes to the tempo of the accompaniment in order to align it with a live stream. By demonstrating the system's ability to align offline tracks, we show that under restricted initial conditions the algorithm works well as an alignment tool.
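The tempo-alignment step — rescaling the accompaniment so its salient notes land on the corresponding notes in the live stream — could be sketched as follows. This is a simplified illustration under a strong assumption (the salient onsets are already paired one-to-one), not the thesis's actual algorithm; the function names are invented for the sketch:

```python
import numpy as np

def tempo_ratio(live_onsets, accomp_onsets):
    # Least-squares estimate of the scale factor mapping accompaniment
    # onset times onto the live stream, assuming matched salient notes.
    live = np.asarray(live_onsets, dtype=float)
    acc = np.asarray(accomp_onsets, dtype=float)
    return float(np.dot(acc, live) / np.dot(acc, acc))

def align(accomp_onsets, ratio):
    # Warp accompaniment onset times by the estimated tempo ratio.
    return [t * ratio for t in accomp_onsets]
```

In a live setting such an estimate would be recomputed incrementally as new salient notes arrive, rather than over the whole track at once.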

    Efficacy of Rhythmic Acquisition on Gait Performance Among Individuals with Parkinson’s Disease

    The purpose of this study was to identify the ability of individuals with Parkinson’s disease (PD) to acquire different rhythmic complexity levels through individual home-based Improvised Active Music Therapy (IAMT) sessions. The study aimed to identify whether higher acquisition of rhythmic complexity levels improved gait performance, as well as beat perception and production abilities. In this single-subject multiple baseline design, the study measured the ability of four right-handed participants with PD to acquire greater density of syncopation, as a measure of rhythmic complexity levels, while playing uninterrupted improvised music on a simplified electronic drum-set. An accredited music therapist led each session with an acoustic guitar. The study described how higher density-of-syncopation levels presented in participants’ playing related not only to gait performance and beat perception and production abilities, but also to other music measurements. The participants’ music content was transformed into digital music data in real time using the Musical Instrument Digital Interface (MIDI). MIDI data was analyzed to determine density of syncopation, note count, velocity, and asynchrony during baseline and the treatment IAMT intervention. Results from visual analyses and Pearson correlations indicated partial evidence for the ability of individuals with PD to acquire different rhythmic complexity levels through IAMT. Partial evidence was also found to support the overall effectiveness of IAMT sessions in increasing participants’ mean gait velocity and stride length, and reducing step time and stride length variability. The findings of the current study indicate that IAMT sessions could be an effective strategy to increase physical mobility among individuals with PD. Using MIDI in the IAMT approach can yield data to evaluate treatment effectiveness and assess patient progress, providing daily measures and analysis of data using statistical analyses alongside visual analysis. This method has the potential to lead to new evidence-based interventions modeled in music therapy.
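One simple, illustrative way to compute a density-of-syncopation score from quantized MIDI onsets is a simplified Longuet-Higgins/Lee-style rule on a 16-step grid: an onset on a weak step followed by silence on a stronger step counts as a syncopation. The study's actual measure may differ; the grid, salience weights, and normalization here are assumptions for the sketch:

```python
# Metric salience for a 16-step bar in 4/4 (higher = stronger beat).
SALIENCE = [4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0]

def syncopation_density(onsets):
    # onsets: list of 16 zeros/ones, one per grid step in the bar.
    # Count note-rest pairs where a weaker step sounds and the next
    # stronger step is silent, then normalise by the number of onsets.
    n = len(onsets)
    sync = 0
    for i in range(n):
        j = (i + 1) % n  # wrap around the bar boundary
        if onsets[i] and not onsets[j] and SALIENCE[j] > SALIENCE[i]:
            sync += 1
    total = sum(onsets)
    return sync / total if total else 0.0
```

Straight quarter notes score 0.0 under this rule, while a lone onset just before a strong beat scores 1.0, which matches the intuition that density of syncopation tracks rhythmic complexity.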

    Music adapting to the brain: From diffusion chains to neurophysiology

    During the last decade, the use of experimental approaches in cultural evolution research has provided novel insights into, and supported theoretical predictions about, the principles driving the evolution of human cultural systems. Laboratory simulations of language evolution showed how domain-general constraints on learning, in addition to pressures for language to be expressive, may be responsible for the emergence of linguistic structure. Languages change when culturally transmitted, adapting to fit, among other things, the cognitive abilities of their users. As a result, they become regular and compressed, easier to acquire and reproduce. Although a similar theory has recently been extended to the musical domain, empirical investigation in this field is still scarce. In addition, no study to our knowledge has directly addressed the role of cognitive constraints in cultural transmission with neurophysiological investigation. In my thesis I addressed both these issues with a combination of behavioral and neurophysiological methods, in three experimental studies. In study 1 (Chapter 2), I examined the evolution of structural regularities in artificial melodic systems while they were being transmitted across individuals via coordination and alignment. To this purpose I used a new laboratory model of music transmission: the multi-generational signaling games (MGSGs), a variant of the signaling games. This model combines classical aspects of lab-based semiotic models of communication, coordination, and interaction (horizontal transmission) with the transmission across generations of the iterated learning model (vertical transmission). Here, two-person signaling games are organized in diffusion chains of several individuals (generations). In each game, the two players (a sender and a receiver) must agree on a common code - here a miniature system where melodic riffs refer to emotions. The receiver in one game becomes the sender in the next game, possibly retransmitting the code previously learned to another generation of participants, and so on to complete the diffusion chain. I observed the gradual evolution of several structural features of musical phrases over generations: proximity, continuity, symmetry, and melodic compression. Crucially, these features are found in most musical cultures of the world. I argue that we tapped into universal mechanisms of structured sequence processing, possibly at work in the evolution of real music. In study 2 (Chapter 3), I explored the link between cultural adaptation and neural information processing. To this purpose, I combined a behavioral and an EEG study on two successive days. I show that the latency of the mismatch negativity (MMN), recorded in a pre-attentive auditory sequence processing task on day 1, predicts how well participants learn and transmit an artificial tone system with affective semantics in two signaling games on day 2. Notably, MMN latencies also predict which structural changes are introduced by participants into the artificial tone system. In study 3 (Chapter 4), I replicated and extended the behavioral and neurophysiological findings in the temporal domain of music, with two independent experiments. In the first experiment, I used MGSGs as a laboratory model of the cultural evolution of rhythmic equitone patterns referring to distinct emotions. As a result of transmission, rhythms developed a universal property of musical structure, namely temporal regularity (or isochronicity). In the second experiment, I anchored this result with neural predictors: I showed that the neural information processing capabilities of individuals, as measured with the MMN on day 1, can predict learning, transmission, and regularization of rhythmic patterns in signaling games on day 2. In agreement with study 2, I observe that MMN brain timing may reflect the efficiency of sensory systems in processing auditory patterns. Functional differences in those systems across individuals may produce a different sensitivity to pressures for regularity in the cultural system. Finally, I argue that neural variability can be an important source of variability of cultural traits in a population. My work is the first to systematically describe the emergence of structural properties of melodic and rhythmic systems in the laboratory, using an explicit game-theoretic model of cultural transmission in which agents freely interact and exchange information. Critically, it provides the first demonstration that social learning, transmission, and cultural adaptation are constrained and driven by individual differences in the functional organization of sensory systems.

    Extended emotions

    Until recently, philosophers and psychologists conceived of emotions as brain- and body-bound affairs. But researchers have started to challenge this internalist and individualist orthodoxy. A rapidly growing body of work suggests that some emotions incorporate external resources and thus extend beyond the neurophysiological confines of organisms; some even argue that emotions can be socially extended and shared by multiple agents. Call this the extended emotions thesis (ExE). In this article, we consider different ways of understanding ExE in philosophy, psychology, and the cognitive sciences. First, we outline the background of the debate and discuss different argumentative strategies for ExE. In particular, we distinguish ExE from cognate but more moderate claims about the embodied and situated nature of cognition and emotion. We then dwell upon two dimensions of ExE: emotions extended by material culture and by social factors. We conclude by defending ExE against some objections and point to desiderata for future research.

    Musical Cities

    Musical Cities represents an innovative approach to scholarly research and dissemination. A digital and interactive 'book', it explores the rhythms of our cities, and the role they play in our everyday urban lives, through the use of sound and music. Sara Adhitya first discusses why we should listen to urban rhythms in order to design more liveable and sustainable cities, before demonstrating how we can do so through various acoustic communication techniques. Using audio-visual examples, Musical Cities takes the 'listener' on an interactive journey, revealing how sound and music can be used to represent, compose, perform and interact with the city. Through case studies of urban projects developed in Paris, Perth, Venice and London, Adhitya demonstrates how the power of music, and the practice of listening, can help us to compose more accessible, inclusive, engaging, enjoyable, and ultimately more sustainable cities.

    Meaning-making and creativity in musical entrainment

    In this paper we suggest that basic forms of musical entrainment may be considered intrinsically creative, enabling further creative behaviors which may flourish at different levels and timescales. Rooted in an agent's capacity to form meaningful couplings with their sonic, social, and cultural environment, musical entrainment favors processes of adaptation and exploration, where innovative and functional aspects are cultivated via active, bodily experience. We explore these insights through a theoretical lens that integrates findings from enactive cognitive science and creative cognition research. We center our examination on the realms of groove experience and the communicative and emotional dimensions of music, aiming to present a novel preliminary perspective on musical entrainment rooted in the fundamental concepts of meaning-making and creativity. To do so, we draw from a suite of approaches that place particular emphasis on the role of situated experience and review a range of recent empirical work on entrainment (in musical and non-musical settings), emphasizing the latter's biological and cognitive foundations. We conclude that musical entrainment may be regarded as a building block for different musical creativities that shape one's musical development, offering a concrete example of how this theory could be empirically tested in the future.
