
    Perception based approach on pattern discovery and organisation of point-set data

    The general topic of the thesis is computer aided music analysis of point-set data utilising theories outlined in Timo Laiho’s Analytic-Generative Methodology (AGM) [19]. The topic falls within the field of music information retrieval and is related to previous work on both pattern discovery and computational models of music. The thesis aims to provide analysis results that can be compared to existing studies. AGM introduces two concepts based on perception, sensation and cognitive processing: the interval–time complex (IntiC) and musical vectors (muV). These provide a mathematical framework for the analysis of music. IntiC is a value associated with the velocity, or rate of change, between musical notes; musical vectors are the vector representations of these rates of change. Laiho presents these attributes as meaningful both for music analysis and as tools for music generation. Both attributes can be computed from a point-set representation of music data. The concepts in AGM can be viewed as related to the geometric pattern discovery algorithms of Meredith, Lemström et al. [24], who introduce a family of ‘Structure Induction Algorithms’ used to find repeating patterns in multidimensional point-set data. Algorithmic implementations of IntiC and muV were made for this thesis and evaluated for rating and selecting patterns output by the pattern discovery algorithms. In addition, software tools for applying these AGM concepts were created, and the concepts of AGM and pattern discovery were further related to existing work in computer aided musicology.
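    As a rough illustration of the two quantities described above, the following is a minimal sketch assuming notes are given as (onset_time, pitch) pairs in a point-set representation. The function names and the simple interval-over-time formula are assumptions for illustration, not Laiho's actual definitions.

```python
# Hypothetical sketch: musical vectors (muV) and interval-time complex
# (IntiC) values from point-set data, where each note is an
# (onset_time, pitch) pair.

def musical_vectors(points):
    """Vector (delta_time, delta_pitch) between consecutive notes."""
    pts = sorted(points)
    return [(t2 - t1, p2 - p1) for (t1, p1), (t2, p2) in zip(pts, pts[1:])]

def intic(points):
    """IntiC read as a rate of change: pitch interval per unit time."""
    return [dp / dt for dt, dp in musical_vectors(points) if dt > 0]

# Example: three notes of a rising line, one second apart.
notes = [(0.0, 60), (1.0, 62), (2.0, 65)]
print(musical_vectors(notes))  # [(1.0, 2), (1.0, 3)]
print(intic(notes))            # [2.0, 3.0]
```

    Such per-note-pair values could then serve as the ratings used to select among patterns returned by a pattern discovery algorithm.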

    Algorithmic categorisation in formal music analysis


    Data-driven, memory-based computational models of human segmentation of musical melody

    When listening to a piece of music, listeners often identify distinct sections or segments within the piece. Music segmentation is recognised as an important process in the abstraction of musical contents, and researchers have attempted to explain how listeners perceive and identify the boundaries of these segments. The present study seeks to develop a system capable of performing melodic segmentation in an unsupervised way, by learning from non-annotated musical data. Probabilistic learning methods have been widely used to acquire regularities in large sets of data, with many successful applications in language and speech processing. Some of these applications have found their counterparts in music research and have been used for music prediction and generation, music retrieval or music analysis, but seldom to model perceptual and cognitive aspects of music listening. We present some preliminary experiments on melodic segmentation, which highlight the importance of memory and the role of learning in music listening. These experiments motivated the development of a computational model for melodic segmentation based on a probabilistic learning paradigm. The model uses a mixed-memory Markov model to estimate sequence probabilities from pitch- and time-based parametric descriptions of melodic data. We follow the assumption that listeners' perception of feature salience in melodies is strongly related to expectation. Moreover, we conjecture that outstanding entropy variations of certain melodic features coincide with segmentation boundaries as indicated by listeners. Model segmentation predictions are compared with the results of a listening study on melodic segmentation carried out with real listeners. Overall results show that changes in prediction entropy along the pieces exhibit significant correspondence with the listeners' segmentation boundaries. Although the model relies only on information-theoretic principles to predict the location of segmentation boundaries, it was found that most predicted segments can be matched with boundaries of groupings usually attributed to Gestalt rules. These results question previous research supporting a separation between learning-based and innate bottom-up processes of melodic grouping, and suggest that some of the latter processes can emerge from acquired regularities in melodic data.
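    The core mechanism can be sketched much more simply than the mixed-memory model the study uses: a first-order Markov model over pitch symbols, where positions at which the predictive entropy peaks are proposed as segment boundaries. Everything below (symbol alphabet, peak-picking rule) is an illustrative simplification, not the study's model.

```python
import math
from collections import Counter, defaultdict

def train(seq):
    """Count first-order transitions symbol -> next symbol."""
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return counts

def entropy(counts, sym):
    """Shannon entropy (bits) of the predictive distribution after sym."""
    dist = counts.get(sym)
    if not dist:
        return 0.0
    total = sum(dist.values())
    return -sum((c / total) * math.log2(c / total) for c in dist.values())

def boundaries(seq, counts):
    """Flag local maxima of prediction entropy as candidate boundaries."""
    ent = [entropy(counts, s) for s in seq[:-1]]
    return [i + 1 for i in range(1, len(ent) - 1)
            if ent[i] > ent[i - 1] and ent[i] >= ent[i + 1]]

melody = list("CDCDECDCDE")
print(boundaries(melody, train(melody)))  # [2, 4, 7]
```

    Here 'D' is the uncertain symbol (it continues to either 'C' or 'E'), so boundaries are proposed right after it, mirroring the conjecture that entropy variations coincide with perceived segment edges.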

    Entropy, Probabilistic Harmonic Space, and the Harmony of Antonio Carlos Jobim

    This paper introduces a theoretical framework derived from a deep and detailed harmonic analysis of songs composed by Antonio Carlos Jobim, focusing on two components, namely, “semantic” (related to the idea of chord type) and “syntactic” (involving binary relations between contiguous chords). The research focuses on investigating the correlations between compositional style (here related to harmonic construction) and the concepts of probability, expectation and, especially, entropy, the latter defined as a measure of uncertainty or “surprise” of events over time. After a bibliographical review of these topics and their applications to music, a section presents Markov chains, a mathematical tool used to formalize the “semantic-syntactic” harmonic relations statistically inferred from the analyzed corpus of Jobim’s works. This is followed by the formalization of a probabilistic harmonic space and the concept of a probabilistic index, directly associated with the entropy of the observed binary relations. This approach opens a new analytical perspective and allows the presented theoretical and methodological apparatus to be generalized to the examination of other repertoires and their subsequent comparison, offering a new means of investigating the nature of style.
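    A hedged sketch of the underlying idea: infer the binary (bigram) relations between contiguous chords from a corpus and measure the entropy of each chord's outgoing transitions. The chord labels and the progression below are invented for illustration, not drawn from Jobim's songs, and the entropy-per-chord measure is only a stand-in for the paper's probabilistic index.

```python
import math
from collections import Counter, defaultdict

def transition_entropy(chords):
    """Entropy (bits) of each chord's outgoing-transition distribution."""
    trans = defaultdict(Counter)
    for a, b in zip(chords, chords[1:]):
        trans[a][b] += 1
    result = {}
    for chord, nxt in trans.items():
        total = sum(nxt.values())
        result[chord] = -sum((c / total) * math.log2(c / total)
                             for c in nxt.values())
    return result

progression = ['Cmaj7', 'Am7', 'Dm7', 'G7', 'Cmaj7', 'Am7', 'D7', 'G7']
h = transition_entropy(progression)
# 'Am7' continues to either Dm7 or D7 -> 1 bit of surprise;
# 'Cmaj7' always moves to Am7 -> 0 bits.
```

    Low-entropy chords mark highly conventional moves; high-entropy chords mark points where the style admits genuine choice.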

    Artificial Intelligence in Music Education: A Critical Review

    This paper reviews the principal approaches to using Artificial Intelligence in Music Education. Music is a challenging domain for Artificial Intelligence in Education (AI-ED) because music is, in general, an open-ended domain demanding creativity and problem-seeking on the part of learners and teachers. In addition, Artificial Intelligence theories of music are far from complete, and music education typically emphasises factors other than the communication of ‘knowledge’ to students. This paper critically reviews some of the principal problems and possibilities in a variety of AI-ED approaches to music education. Approaches considered include: Intelligent Tutoring Systems for Music; Music Logo Systems; Cognitive Support Frameworks that employ models of creativity; highly interactive interfaces that employ AI theories; AI-based music tools; and systems to support negotiation and reflection. A wide variety of existing music AI-ED systems are used to illustrate the key issues, techniques and methods associated with these approaches to AI-ED in Music.

    A systematic review of artificial intelligence-based music generation: Scope, applications, and future trends

    Currently available reviews in the area of artificial intelligence-based music generation do not cover a wide range of publications and are usually centered on comparing very specific topics across a very limited range of solutions. The best surveys available in the field are the bibliography sections of some papers and books, which lack a systematic approach and limit their scope to handpicked examples. In this work, we analyze the scope and trends of research on artificial intelligence-based music generation by performing a systematic review of the available publications in the field using the PRISMA methodology. Furthermore, we discuss the possible implementations and accessibility of a set of currently available AI solutions as aids to musical composition. Our research shows how publications are distributed globally according to many characteristics, which provides a clear picture of the state of this technology. Through our research it becomes clear that the interest of both musicians and computer scientists in AI-based automatic music generation has increased significantly in the last few years, with increasing participation of major companies in the field, whose works we analyze. We discuss several generation architectures, from both a technical and a musical point of view, and we highlight various areas where further research is needed.

    Unsupervised Incremental Online Learning and Prediction of Musical Audio Signals

    Guided by the idea that musical human-computer interaction may become more effective, intuitive, and creative when basing its computer part on cognitively more plausible learning principles, we employ unsupervised incremental online learning (i.e. clustering) to build a system that predicts the next event in a musical sequence, given as audio input. The flow of the system is as follows: 1) segmentation by onset detection; 2) timbre representation of each segment by Mel-frequency cepstrum coefficients; 3) discretization by incremental clustering, yielding a tree of different sound classes (e.g. timbre categories/instruments) that can grow or shrink on the fly, driven by the instantaneous sound events, and resulting in a discrete symbol sequence; 4) extraction of statistical regularities of the symbol sequence, using hierarchical N-grams and the newly introduced conceptual Boltzmann machine, which adapt to the dynamically changing clustering tree of step 3); and 5) prediction of the next sound event in the sequence, given the last n previous events. The system's robustness is assessed with respect to the complexity and noisiness of the signal. Clustering in isolation yields an adjusted Rand index (ARI) of 82.7%/85.7% for data sets of singing voice and drums. Onset detection jointly with clustering achieves an ARI of 81.3%/76.3%, and the prediction of the entire system yields an ARI of 27.2%/39.2%.
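    Stages 4) and 5) of the pipeline above can be sketched with a plain fixed-order N-gram counter rather than the hierarchical N-grams or conceptual Boltzmann machine of the paper (an assumed simplification): count N-grams over the discrete symbol sequence and predict the most frequent continuation of the last n events.

```python
from collections import Counter, defaultdict

class NGramPredictor:
    """Toy fixed-order N-gram model over a discrete symbol sequence."""

    def __init__(self, n=2):
        self.n = n
        self.counts = defaultdict(Counter)

    def observe(self, seq):
        # Count each length-n context and the symbol that follows it.
        for i in range(len(seq) - self.n):
            ctx = tuple(seq[i:i + self.n])
            self.counts[ctx][seq[i + self.n]] += 1

    def predict(self, context):
        # Most frequent continuation of the last n events, or None.
        nxt = self.counts.get(tuple(context[-self.n:]))
        return nxt.most_common(1)[0][0] if nxt else None

# The symbols stand in for timbre classes produced by the clustering stage.
model = NGramPredictor(n=2)
model.observe(['kick', 'snare', 'hat', 'kick', 'snare', 'hat', 'kick'])
print(model.predict(['kick', 'snare']))  # hat
```

    In the real system the symbol alphabet itself changes as the clustering tree grows or shrinks, which is what motivates the adaptive models used in the paper.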