
    An Application of the Actor Model of Concurrency in Python: A Euclidean Rhythm Music Sequencer

    We present a real-time sequencer, implementing the Euclidean rhythm algorithm, for creative generation of drum sequences by musicians or producers. We use the Actor model of concurrency to simplify the communication required for interactivity and musical timing, and generator comprehensions and higher-order functions to simplify the implementation of the Euclidean rhythm algorithm. The resulting application sends Musical Instrument Digital Interface (MIDI) data interactively to another application for sound generation.
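
    The abstract does not reproduce the authors' source, but the core computation it names is well known. Below is a minimal sketch of the Euclidean rhythm algorithm using a list comprehension, in the spirit of the paper's functional style; the Actor-based messaging and MIDI output are omitted, and the function name is ours:

        def euclidean_rhythm(onsets: int, steps: int) -> list[int]:
            # Distribute `onsets` hits as evenly as possible over `steps`
            # slots (the Euclidean/Bjorklund necklace, up to rotation).
            return [1 if (i * onsets) % steps < onsets else 0
                    for i in range(steps)]

        # E(3, 8) yields the Cuban tresillo: [1, 0, 0, 1, 0, 0, 1, 0]
        print(euclidean_rhythm(3, 8))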

    Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
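
    The error-minimization loop at the heart of this proposal can be made concrete with a toy update rule (our illustrative formulation, not the authors' model; all names and parameter values are hypothetical):

        def update_expectation(expectation, sensory_input, precision, lr=0.1):
            # Prediction error is the mismatch between the input and the
            # prior expectation; the expectation is nudged toward the
            # input in proportion to the precision-weighted error.
            error = sensory_input - expectation
            return expectation + lr * precision * error

        # A listener expecting an onset every 500 ms repeatedly hears
        # onsets at 530 ms intervals; the expectation drifts toward 530.
        expected_ioi = 500.0
        for _ in range(3):
            expected_ioi = update_expectation(expected_ioi, 530.0, precision=1.0)
        print(round(expected_ioi, 1))  # 508.1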

    The Perception Of Rhythm And Tempo Modulation In Music

    Research is presented on the perception of rhythm, specifically, the detection of changing or modulating inter-onset intervals in simple musical stimuli. The types of changes of rhythmic patterns examined are rhythm modulation and tempo modulation. These two terms are akin to the musician's concepts of agogics, rubato, accelerando and ritardando, all being common expressive devices in musical performance. Rhythm modulation occurs when an initially even or isochronous rhythm becomes increasingly uneven. Tempo modulation occurs when the beat rate of a rhythm accelerates or decelerates. Two approaches are adopted to elucidate how such modulating patterns might be perceived, a theoretical one and an experimental one. Following a review of the pertinent literature, a theoretical model of time-interval perception in music is proposed that attempts to synthesize the findings of previous experimentation. The main thrust of the model is that rhythm perception is mediated by two complementary processes: (1) a so-called OSCILLATOR BANK that entrains to stimulus time-intervals on a note-to-note basis, and (2) a SHORT AUDITORY STORE that is responsible for integrating temporally separated events. The model generates the hypothesis that rhythm modulation will be detected in the OSCILLATOR BANK, whereas tempo modulation will be detected in the SHORT AUDITORY STORE. This hypothesis is tested in three perceptual experiments. To compare the difficulty of detecting rhythm and tempo modulation under various conditions, certain variables are manipulated: the direction of modulation (whether a change onset occurs earlier or later than expected), the initial beat rate, the metrical location of modulation, and the presence or absence of beat subdivision. To measure perceptual difficulty, a reaction-time dependent variable and a modulation-type-identification dependent variable are used. The following results are observed: (a) the direction of modulation is significant only for tempo modulation, (b) rhythm and tempo modulation exhibit contrasting trends across the musical initial-beat-rate range, (c) metrical location does not affect detection, and (d) detection is easier with beat subdivision. These results are generally consistent with the hypothesis that rhythm and tempo modulation detection are mediated by contrasting perceptual processes.
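
    The two modulation types, as defined above, are easy to state as stimulus generators (a sketch under our own naming and parameterization, not the dissertation's stimulus code):

        def tempo_modulated_onsets(n, base_ioi=500.0, rate=0.98):
            # Every inter-onset interval is scaled by `rate`: rate < 1
            # accelerates the beat, rate > 1 decelerates it; the rhythm
            # stays locally isochronous throughout.
            onsets, t, ioi = [0.0], 0.0, base_ioi
            for _ in range(n - 1):
                t += ioi
                onsets.append(t)
                ioi *= rate
            return onsets

        def rhythm_modulated_onsets(n, base_ioi=500.0, growth=10.0):
            # Alternate intervals are lengthened/shortened by a growing
            # amount, so an initially even rhythm becomes increasingly
            # uneven while the mean tempo is preserved.
            onsets, t = [0.0], 0.0
            for i in range(n - 1):
                delta = growth * (i // 2 + 1)
                t += base_ioi + (delta if i % 2 == 0 else -delta)
                onsets.append(t)
            return onsets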

    Comparing timbre estimation using auditory models with and without hearing loss

    We propose a concept for evaluating signal transformations for music signals with respect to an individual hearing deficit by using an auditory model. This deficit is simulated in the model by changing specific model parameters. Our idea is to extract the musical attributes rhythm, pitch, loudness, and timbre, and to compare the modified model output to the original one. While rhythm, pitch, and loudness estimation were studied in previous work, this paper focuses on timbre estimation. Results are shown for the original auditory model and for three models, each simulating a specific hearing loss.
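
    As a concrete stand-in for the comparison the paper describes, one can compute a single timbre correlate (the spectral centroid) before and after a crude simulated high-frequency loss; the cutoff, attenuation, and test signal below are our own assumptions, not the paper's auditory model:

        import numpy as np

        def spectral_centroid(signal, sr):
            # Amplitude-weighted mean frequency, a common timbre correlate.
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
            return (freqs * spectrum).sum() / (spectrum.sum() + 1e-12)

        def simulate_hf_loss(signal, sr, cutoff=2000.0, attenuation_db=40.0):
            # Attenuate everything above `cutoff`, a crude stand-in for a
            # sloping high-frequency hearing loss.
            spectrum = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
            gain = np.where(freqs > cutoff, 10 ** (-attenuation_db / 20.0), 1.0)
            return np.fft.irfft(spectrum * gain, n=len(signal))

        sr = 16000
        t = np.arange(sr) / sr
        tone = sum(np.sin(2 * np.pi * f * t) / k
                   for k, f in enumerate([220, 440, 880, 1760, 3520], start=1))
        print(spectral_centroid(tone, sr))                        # original
        print(spectral_centroid(simulate_hf_loss(tone, sr), sr))  # "impaired"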

    Moving in time: simulating how neural circuits enable rhythmic enactment of planned sequences

    Many complex actions are mentally pre-composed as plans that specify orderings of simpler actions. To be executed accurately, planned orderings must become active in working memory, and then enacted one-by-one until the sequence is complete. Examples include writing, typing, and speaking. In cases where the planned complex action is musical in nature (e.g. a choreographed dance or a piano melody), it appears to be possible to deploy two learned sequences at the same time, one composed from actions and a second composed from the time intervals between actions. Despite this added complexity, humans readily learn and perform rhythm-based action sequences. Notably, people can learn action sequences and rhythmic sequences separately, and then combine them with little trouble (Ullén & Bengtsson 2003). Related functional MRI data suggest that there are distinct neural regions responsible for the two different sequence types (Bengtsson et al. 2004). Although research on musical rhythm is extensive, few computational models exist to extend and inform our understanding of its neural bases. To that end, this article introduces the TAMSIN (Timing And Motor System Integration Network) model, a systems-level neural network model capable of performing arbitrary item sequences in accord with any rhythmic pattern that can be represented as a sequence of integer multiples of a base interval. In TAMSIN, two Competitive Queuing (CQ) modules operate in parallel. One represents and controls item order (the ORD module) and the second represents and controls the sequence of inter-onset-intervals (IOIs) that define a rhythmic pattern (RHY module). Further circuitry helps these modules coordinate their signal processing to enable performative output consistent with a desired beat and tempo.
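
    The Competitive Queuing mechanism that TAMSIN's two modules share can be sketched in a few lines (a schematic illustration of CQ readout, not the published network equations; all names and example values are ours):

        def cq_readout(plan):
            # A CQ module holds a primacy gradient: earlier elements start
            # more active. Readout repeatedly selects the most active
            # element, emits it, and suppresses it ("select-execute-delete").
            acts = [len(plan) - i for i in range(len(plan))]
            out = []
            for _ in plan:
                winner = max(range(len(plan)), key=lambda j: acts[j])
                out.append(plan[winner])
                acts[winner] = float('-inf')
            return out

        items = ['C', 'E', 'G', 'E']     # ORD module: item order
        ioi_multiples = [2, 1, 1, 4]     # RHY module: IOIs as integer
        base_interval = 0.25             # multiples of a base interval (s)
        onset, schedule = 0.0, []
        for note, mult in zip(cq_readout(items), cq_readout(ioi_multiples)):
            schedule.append((round(onset, 2), note))
            onset += mult * base_interval
        print(schedule)  # [(0.0, 'C'), (0.5, 'E'), (0.75, 'G'), (1.0, 'E')]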

    Microtiming patterns and interactions with musical properties in Samba music

    In this study, we focus on the interaction between microtiming patterns and several musical properties: intensity, meter and spectral characteristics. The dataset of 106 musical audio excerpts is processed by means of an auditory model and then divided into several spectral regions and metric levels. The resulting segments are described in terms of their musical properties, over which patterns of peak positions and their intensities are sought. A clustering algorithm is used to systematize the process of pattern detection. The results confirm previously reported anticipations of the third and fourth semiquavers in a beat. We also argue that these patterns of microtiming deviations interact with different profiles of intensities that change according to the metrical structure and spectral characteristics. In particular, we suggest two new findings: (i) a small delay of microtiming positions at the lower end of the spectrum on the first semiquaver of each beat and (ii) systematic forms of accelerando and ritardando at a microtiming level covering two-beat and four-beat phrases. The results demonstrate the importance of multidimensional interactions with timing aspects of music. However, more research is needed in order to find proper representations for rhythm and microtiming aspects in such contexts.
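
    A minimal version of the measurement behind these findings is the deviation of each onset from its nearest semiquaver grid slot (the function name, grid choice, and example onsets below are ours, not the paper's pipeline):

        def microtiming_deviations(onsets, beat_period):
            # For each onset time (s), return (semiquaver position 1-4
            # within the beat, deviation in ms); negative values are
            # anticipations, positive values are delays.
            semiquaver = beat_period / 4.0
            out = []
            for t in onsets:
                k = round(t / semiquaver)  # nearest grid slot
                out.append((k % 4 + 1, (t - k * semiquaver) * 1000.0))
            return out

        # Third and fourth semiquavers arriving ~10 ms early, as reported:
        print(microtiming_deviations([0.0, 0.125, 0.24, 0.365],
                                     beat_period=0.5))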

    Chorusing, synchrony, and the evolutionary functions of rhythm

    A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language.
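
    To make the individual/group interplay concrete, here is a toy chorus of phase oscillators with Kuramoto-style coupling (our own minimal formulation for illustration, not the paper's agent model):

        import math, random

        def simulate_chorus(n=6, steps=400, dt=0.01, call_rate=2.0, coupling=1.5):
            # Each agent calls once per cycle of its phase; hearing the
            # group pulls its phase toward the group's mean phase.
            phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
            for _ in range(steps):
                s = sum(math.sin(p) for p in phases) / n
                c = sum(math.cos(p) for p in phases) / n
                group_phase = math.atan2(s, c)
                phases = [p + dt * (2 * math.pi * call_rate
                                    + coupling * math.sin(group_phase - p))
                          for p in phases]
            # Synchrony index r: 1 = unison chorus, 0 = no group timing.
            s = sum(math.sin(p) for p in phases) / n
            c = sum(math.cos(p) for p in phases) / n
            return math.hypot(c, s)

        print(simulate_chorus())  # approaches 1.0 as the agents entrain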

    Music SketchNet: Controllable Music Generation via Factorized Representations of Pitch and Rhythm

    Drawing an analogy with automatic image completion systems, we propose Music SketchNet, a neural network framework that allows users to specify partial musical ideas guiding automatic music generation. We focus on generating the missing measures in incomplete monophonic musical pieces, conditioned on surrounding context, and optionally guided by user-specified pitch and rhythm snippets. First, we introduce SketchVAE, a novel variational autoencoder that explicitly factorizes rhythm and pitch contour to form the basis of our proposed model. Then we introduce two discriminative architectures, SketchInpainter and SketchConnector, that in conjunction perform the guided music completion, filling in representations for the missing measures conditioned on surrounding context and user-specified snippets. We evaluate SketchNet on a standard dataset of Irish folk music and compare with models from recent works. When used for music completion, our approach outperforms the state of the art both in terms of objective metrics and subjective listening tests. Finally, we demonstrate that our model can successfully incorporate user-specified snippets during the generation process. Comment: 8 pages, 8 figures, Proceedings of the 21st International Society for Music Information Retrieval Conference, ISMIR 2020.
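
    The factorization SketchVAE builds on can be illustrated with a simplified token scheme; this split into a pitch stream and an onset/hold/rest rhythm stream is our schematic reading, and neither the paper's exact encoding nor the VAE itself is reproduced here:

        def factorize(measure):
            # Split (midi_pitch, duration_in_16ths) notes, with None as a
            # rest, into a pitch sequence and a rhythm token sequence.
            pitch_seq = [p for p, _ in measure if p is not None]
            rhythm_seq = []
            for p, dur in measure:
                rhythm_seq.append('O' if p is not None else 'R')  # onset/rest
                rhythm_seq.extend('_' * (dur - 1))                # hold tokens
            return pitch_seq, rhythm_seq

        measure = [(67, 4), (69, 2), (None, 2), (72, 8)]  # G4, A4, rest, C5
        pitches, rhythm = factorize(measure)
        print(pitches)  # [67, 69, 72]: the pitch-contour stream
        print(rhythm)   # 16 tokens: the rhythm stream for one 4/4 measure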

    Cortical Plasticity Induced by Short-Term Multimodal Musical Rhythm Training

    Performing music is a multimodal experience involving the visual, auditory, and somatosensory modalities as well as the motor system. Therefore, musical training is an excellent model to study multimodal brain plasticity. Indeed, we have previously shown that short-term piano practice increases the magnetoencephalographic (MEG) response to melodic material in novice players. Here we investigate the impact of piano training using a rhythm-focused exercise on responses to rhythmic musical material. Musical training with non-musicians was conducted over a period of two weeks. One group (sensorimotor-auditory, SA) learned to play a piano sequence with a distinct musical rhythm; another group (auditory, A) listened to, and evaluated the rhythmic accuracy of, the performances of the SA-group. Training-induced cortical plasticity was evaluated using MEG, comparing the mismatch negativity (MMN) in response to occasional rhythmic deviants in a repeating rhythm pattern before and after training. The SA-group showed a significantly greater enlargement of the MMN and P2 to deviants after training compared to the A-group. The training-induced increase of the rhythm MMN was bilaterally expressed, in contrast to our previous finding where the MMN for deviants in the pitch domain showed a larger right than left increase. The results indicate that when auditory experience is strictly controlled during training, involvement of the sensorimotor system, and perhaps the increased attentional resources needed in producing rhythms, leads to more robust plastic changes in the auditory cortex than when rhythms are simply attended to in the auditory domain in the absence of motor production.