
    Towards a general computational theory of musical structure

    The General Computational Theory of Musical Structure (GCTMS) is a theory that may be employed to obtain a structural description (or set of descriptions) of a musical surface. This theory is based on general cognitive and logical principles, is independent of any specific musical style or idiom, and can be applied to any musical surface. The musical work is presented to GCTMS as a sequence of discrete, symbolically represented events (e.g. notes) without higher-level structural elements (e.g. articulation marks, time signature, etc.), although such information may be used to guide the analytic process. The aim of applying the theory is to reach a structural description of the musical work that may be considered 'plausible' or 'permissible' by a human music analyst. As style-dependent knowledge is not embodied in the general theory, highly sophisticated analyses (similar to those an expert analyst may provide) are not expected. The theory does, however, assign higher ratings to descriptions that may be considered more reasonable or acceptable by human analysts, and lower ratings to those that are less plausible.
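    A minimal sketch of what such an input might look like: a flat list of symbolic note events with no higher-level structural markup. The field names and values below are illustrative assumptions, not the representation defined by GCTMS.

```python
# Minimal sketch of a symbolic "musical surface": a flat sequence of note
# events with onset, duration, and pitch, but no higher-level structure.
# Field names are illustrative assumptions, not the GCTMS representation.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    onset: float      # onset time in beats
    duration: float   # duration in beats
    pitch: int        # MIDI pitch number

# A short surface: the opening of a hypothetical melody.
surface = [
    NoteEvent(0.0, 1.0, 60),
    NoteEvent(1.0, 1.0, 62),
    NoteEvent(2.0, 2.0, 64),
]

# An analysis would take only this event list as input and return a
# structural description (e.g. a grouping of events into segments).
```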

    Basic gestures as spatiotemporal reference frames for repetitive dance/music patterns in samba and charleston

    The goal of the present study is to gain better insight into how dancers establish, through dancing, a spatiotemporal reference frame in synchrony with musical cues. To this end, repetitive dance patterns of samba and Charleston were recorded using a three-dimensional motion capture system. Geometric patterns were then extracted from each joint of the dancer's body. The method uses a body-centered reference frame and decomposes the movement into non-orthogonal periodicities that match periods of the musical meter. Musical cues (such as meter and loudness) as well as action-based cues (such as velocity) can be projected onto the patterns, thus providing spatiotemporal reference frames, or 'basic gestures,' for action-perception couplings. Conceptually speaking, the spatiotemporal reference frames control minimum-effort points in action-perception couplings. They reside as memory patterns in the mental and/or motor domains, ready to be dynamically transformed in dance movements. The present study raises a number of hypotheses related to spatial cognition that may serve as guiding principles for future dance/music studies.
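    As a rough illustration of decomposing movement into periodicities tied to the meter, the sketch below projects a single joint coordinate onto sinusoids at beat-related periods by least squares. The sampling rate, beat period, and synthetic trajectory are assumptions for the example, not the study's data or method.

```python
# Illustrative sketch (not the authors' implementation): estimate how much
# of a joint trajectory recurs at periods tied to the musical meter by
# least-squares projection onto sinusoids at those periods.
import numpy as np

fs = 100.0                       # motion-capture sampling rate (Hz), assumed
t = np.arange(0, 8.0, 1.0 / fs)  # 8 s of a single joint coordinate
beat_period = 0.5                # assumed beat period in seconds

# Synthetic trajectory: movement at the beat plus a slower two-beat sway.
traj = (0.8 * np.sin(2 * np.pi * t / beat_period)
        + 0.3 * np.sin(2 * np.pi * t / (2 * beat_period))
        + 0.05 * np.random.randn(t.size))

# Non-orthogonal basis: sine/cosine pairs at metric periods (1, 2, and 4 beats).
periods = [beat_period, 2 * beat_period, 4 * beat_period]
basis = np.column_stack(
    [f(2 * np.pi * t / p) for p in periods for f in (np.sin, np.cos)]
)
coeffs, *_ = np.linalg.lstsq(basis, traj - traj.mean(), rcond=None)

# Amplitude per metric period: how strongly the movement locks to that level.
for p, (a, b) in zip(periods, coeffs.reshape(-1, 2)):
    print(f"period {p:.2f} s: amplitude {np.hypot(a, b):.2f}")
```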

    Tackling the Toolkit. Plotting Poetry through Computational Literary Studies

    In Tackling the Toolkit, we focus on the methodological innovations, challenges, obstacles and even shortcomings associated with applying quantitative methods to poetry specifically and to poetics more broadly. Using tools including natural language processing, web ontologies, similarity detection devices and machine learning, our contributors explore not only metres, stanzas, stresses and rhythms but also genres, subgenres, lexical material and cognitive processes. Whether they are testing old theories and laws, making complex concepts machine-readable or developing new lines of textual analysis, their work challenges standard descriptions of norms and variations.

    The Edit Distance as a Measure of Perceived Rhythmic Similarity

    The ‘edit distance’ (or ‘Levenshtein distance’) between two data sets is defined as the minimum number of editing operations – insertions, deletions, and substitutions – required to transform one data set into the other (Orpen and Huron, 1992). This measure of distance has been applied frequently and successfully in music information retrieval, but rarely in predicting human perception of distance. In this study, we investigate the effectiveness of the edit distance as a predictor of perceived rhythmic dissimilarity under simple rhythmic alterations. Representing rhythms as sequences of pulses that are either onsets or silences, we study two types of alterations. The first experiment is designed to test the model’s accuracy for rhythms that are relatively similar: whether rhythmic variations with the same edit distance to a source rhythm are also perceived as similarly close by human subjects. In addition, we observe whether the salience of an edit operation is affected by its metric placement in the rhythm. Instead of using a rhythm that regularly subdivides a 4/4 meter, our source rhythm is a syncopated 16-pulse rhythm, the son. Results show a high correlation between the predictions of the edit distance model and human similarity judgments (r = 0.87), higher than for the well-known generative theory of tonal music (r = 0.64). In the second experiment, we assess the accuracy of the edit distance model in predicting relatively dissimilar rhythms. The stimuli used are random permutations of the son’s inter-onset intervals: 3-3-4-2-4. The results again indicate that the edit distance correlates well with the perceived rhythmic dissimilarity judgments of the subjects (r = 0.76). To gain insight into the relationships between the individual rhythms, the results are also presented by means of graphic phylogenetic trees.
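    A minimal sketch of the edit distance applied to rhythms encoded as pulse strings (onset vs. silence), in the spirit of the study's representation; the specific strings and the comparison below are illustrative, not the experimental stimuli.

```python
# Minimal sketch of the edit (Levenshtein) distance applied to rhythms
# encoded as pulse strings ('x' = onset, '.' = silence). The son pattern
# below follows the 3-3-4-2-4 inter-onset intervals given above.
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

son     = "x..x..x...x.x..."   # 16-pulse son rhythm (IOIs 3-3-4-2-4)
variant = "x..x..x..x..x..."   # one onset shifted by a single pulse
print(edit_distance(son, variant))  # -> 2 (the shift costs two operations)
```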

    Hip-hop Rhymes Reiterate Phonological Typology


    The role of metrical structure in tonal knowledge acquisition

    Experienced listeners possess a working knowledge of pitch structure in Western music, such as scale, key, harmony, and tonality, which develops gradually throughout childhood. It is commonly assumed that tonal representations are acquired through exposure to the statistics of music, but few studies have investigated potential learning mechanisms directly. In Western tonal music, tonally stable pitches not only have a higher overall frequency of occurrence, but they may also occur more frequently at strong than at weak metrical positions, providing two potential avenues for tonal learning. Two experiments employed an artificial grammar learning paradigm to examine tonal learning mechanisms. During a familiarization phase, we exposed nonmusician adult listeners to a long (whole-tone scale) sequence with certain distributional properties. In a subsequent test phase we examined listeners' learning using grammaticality or probe tone judgments. In the grammaticality task, participants indicated which of two short test sequences conformed to the familiarization sequence. In the probe tone task, participants provided fit ratings for individual probe tones following short reminder sequences. Experiment 1 examined learning from overall frequency of occurrence. Grammaticality judgments were significantly above chance (Exp. 1a), and probe tone ratings were predicted by frequency of occurrence (Exp. 1b). In Experiment 2 we presented a familiarization sequence containing one subset of pitches that occurred more frequently on strong than on weak metrical positions and another subset that did the opposite. Overall frequency of occurrence was balanced between the two subsets. Grammaticality judgments were again above chance (Exp. 2a), and probe tone ratings were higher for pitches occurring on strong metrical positions (Exp. 2b). These findings implicate metrical structure in tonal knowledge acquisition.
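    The metrical manipulation of Experiment 2 can be illustrated with a small counting sketch: tally how often each pitch of a whole-tone sequence falls on strong versus weak positions. The strong/weak alternation and the random sequence below are assumptions for the example, not the actual familiarization material.

```python
# Illustrative sketch (not the study's stimuli): count how often each pitch
# class of a whole-tone sequence falls on strong vs. weak metrical positions.
# Even-numbered event positions are taken as "strong" here, which is an
# assumption for the example, not the experimental design.
from collections import Counter
import random

whole_tone = [0, 2, 4, 6, 8, 10]          # pitch classes of a whole-tone scale
random.seed(1)
sequence = [random.choice(whole_tone) for _ in range(400)]

strong = Counter(p for i, p in enumerate(sequence) if i % 2 == 0)
weak   = Counter(p for i, p in enumerate(sequence) if i % 2 == 1)

for pc in whole_tone:
    # A pitch occurring more often on strong positions would be a cue for
    # metrically mediated tonal learning (the manipulation in Experiment 2).
    print(pc, strong[pc], weak[pc])
```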

    Interaction features for prediction of perceptual segmentation: Effects of musicianship and experimental task

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found that spectral and tonal changes model boundaries between structural sections quite successfully. However, the effects of musical expertise and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners was investigated for six musical stimuli via a real-time task and an annotation (non-real-time) task. The proposed approach involved computation of novelty-curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians', musicians' segmentation yielded lower prediction rates and involved more features for prediction, particularly more interaction features; non-musicians also required a larger time shift for optimal segmentation modelling. Prediction of the annotation task exhibited higher rates and involved more musical features than the real-time task; in addition, the real-time task required time shifting of the segmentation data for its optimal modelling. We also found that annotation-task models weighted according to boundary strength ratings exhibited improved segmentation prediction rates and involved more interaction features. In sum, musical training and experimental task seem to have an impact on prediction rates and on the musical features involved in novelty-based segmentation models. Musical training is associated with a higher presence of schematic knowledge, attention to more dimensions of musical change and more levels of the structural hierarchy, and higher speed of musical structure processing. Real-time segmentation is linked with longer response delays, fewer levels of structural hierarchy attended, and noisier data than annotation segmentation. In addition, boundary strength weighting of density was associated with more emphasis given to stark musical changes and with a clearer representation of a hierarchy involving high-dimensional musical changes.
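    One widely used way to obtain a novelty curve is Foote's checkerboard-kernel correlation along the diagonal of a self-similarity matrix; the sketch below shows that general idea and is not a reconstruction of the interaction features or prediction model used in this study.

```python
# A common way to compute a novelty curve (checkerboard-kernel approach);
# shown as an illustration of the kind of feature the study builds on,
# not the exact features or interaction terms used there.
import numpy as np

def novelty_curve(features, kernel_size=16):
    """features: (n_frames, n_dims) array, e.g. chroma or MFCC frames."""
    # Cosine self-similarity matrix.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    ssm = f @ f.T

    # Checkerboard kernel: +1 within-segment blocks, -1 across-segment blocks.
    half = kernel_size // 2
    sign = np.kron(np.array([[1, -1], [-1, 1]]), np.ones((half, half)))

    n = ssm.shape[0]
    novelty = np.zeros(n)
    padded = np.pad(ssm, half, mode="constant")
    for i in range(n):
        window = padded[i:i + kernel_size, i:i + kernel_size]
        novelty[i] = np.sum(window * sign)
    # Peaks in the curve suggest section boundaries.
    return novelty

# Toy input: two blocks of contrasting feature content.
frames = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
print(np.argmax(novelty_curve(frames)))  # boundary expected at frame 50
```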

    A Unified Theory of Musical Meaning

    In this thesis, I present a novel theory of musical meaning. This theory posits a complementary relationship between the theories described in Lerdahl and Jackendoff’s Generative Theory of Tonal Music and Arnie Cox’s Music and Embodied Cognition. Each of these theories explains particular aspects of musical meaning (grammatical and semantic, respectively), though I argue that by unifying them into a broader framework, they can explain more about musical meaning than they could individually. This unification is performed via the novel theory I present: the Analogical Argument. This argument suggests that Lerdahl and Jackendoff’s Generative Theory of Tonal Music is theoretically analogous to Noam Chomsky’s theory of Generative Linguistic Grammar (Chomsky, 1966). Given the success of Chomsky’s theory, as well as the cognitive approach he employs more generally, in explaining how we construe linguistic meaning, we should expect similar, if not analogous, processes to be responsible for the construal of musical meaning. Thus, the Generative Theory of Tonal Music is sufficient for explaining how the musical meanings explained by Arnie Cox’s mimetic hypothesis are cognized so as to give rise to the emergent musical meaning that is characteristic of musical experiences.

    Computational methods for percussion music analysis: the Afro-Uruguayan candombe drumming as a case study

    Most of the research conducted on information technologies applied to music has been largely limited to a few mainstream styles of so-called 'Western' music. The resulting tools often do not generalize properly or cannot be easily extended to other music traditions. Culture-specific approaches have therefore recently been proposed as a way to build richer and more general computational models for music. This thesis aims at contributing to the computer-aided study of rhythm, with a focus on percussion music, searching for appropriate solutions from a culture-specific perspective by considering Afro-Uruguayan candombe drumming as a case study. This choice is mainly motivated by its challenging rhythmic characteristics, which are troublesome for most existing analysis methods; in this way, the thesis attempts to push ahead the boundaries of current music technologies. The thesis offers an overview of the historical, social and cultural context in which candombe drumming is embedded, along with a description of the rhythm. One of the specific contributions of the thesis is the creation of annotated datasets of candombe drumming suitable for computational rhythm analysis. Performances were purposely recorded and annotated with metrical information, onset locations, and sections. A dataset of annotated recordings for beat and downbeat tracking was publicly released, and an audio-visual dataset of performances was produced, which serves both documentary and research purposes. Part of the dissertation focused on the discovery and analysis of rhythmic patterns from audio recordings. A representation in the form of a map of rhythmic patterns based on spectral features was devised. The type of analyses that can be conducted with the proposed methods is illustrated with some experiments. The dissertation also systematically approached (to the best of our knowledge, for the first time) the study and characterization of the micro-rhythmic properties of candombe drumming. The findings suggest that micro-timing is a structural component of the rhythm, producing a sort of characteristic "swing". The rest of the dissertation was devoted to the automatic inference and tracking of the metric structure from audio recordings. A supervised Bayesian scheme for rhythmic pattern tracking was proposed, and a software implementation was publicly released. The results give additional evidence of the generalizability of the Bayesian approach to complex rhythms from different music traditions. Finally, the downbeat detection task was formulated as a data compression problem. This resulted in a novel method that proved to be effective for a large part of the dataset and opens up some interesting threads for future research.
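    As a loose illustration of one ingredient of the pattern-map idea, the sketch below folds an onset-strength envelope into fixed-length per-bar vectors using a downbeat grid; the envelope, frame rate, and bar grid are synthetic placeholders rather than the thesis' actual feature extraction.

```python
# Simplified sketch of one step described above: folding an onset-strength
# envelope into fixed-length per-bar rhythmic patterns, which could then be
# embedded into a 2-D "map of rhythmic patterns". The envelope and downbeat
# grid here are synthetic placeholders, not the thesis' feature extraction.
import numpy as np

fs_env = 100.0                          # envelope frame rate (Hz), assumed
bins_per_bar = 16                       # resolution of each pattern vector

# Placeholder onset-strength envelope and annotated downbeat times (seconds).
env = np.abs(np.random.randn(3000))
downbeats = np.arange(0.0, 28.0, 2.0)   # one bar every 2 s, assumed tempo

patterns = []
for start, end in zip(downbeats[:-1], downbeats[1:]):
    # Sample the envelope at evenly spaced metrical positions within the bar.
    times = np.linspace(start, end, bins_per_bar, endpoint=False)
    idx = (times * fs_env).astype(int)
    patterns.append(env[idx])

pattern_matrix = np.vstack(patterns)    # one row per bar
# A 2-D projection of these rows (e.g. PCA or a self-organizing map) would
# give the kind of rhythmic pattern map described above.
print(pattern_matrix.shape)
```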