
    Discovering simple rules in complex data: A meta-learning algorithm and some surprising musical discoveries

    This article presents a new rule discovery algorithm named PLCG that can find simple, robust partial rule models (sets of classification rules) in complex data where it is difficult or impossible to find models that completely account for all the phenomena of interest. Technically speaking, PLCG is an ensemble learning method that learns multiple models via some standard rule learning algorithm, and then combines these into one final rule set via clustering, generalization, and heuristic rule selection. The algorithm was developed in the context of an interdisciplinary research project that aims at discovering fundamental principles of expressive music performance from large amounts of complex real-world data (specifically, measurements of actual performances by concert pianists). It will be shown that PLCG succeeds in finding some surprisingly simple and robust performance principles, some of which represent truly novel and musically meaningful discoveries. A set of more systematic experiments shows that PLCG usually discovers significantly simpler theories than more direct approaches to rule learning (including the state-of-the-art learning algorithm Ripper), while striking a compromise between coverage and precision. The experiments also show how easy it is to use PLCG as a meta-learning strategy to explore different parts of the space of rule models.
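    The abstract describes the learn-then-combine pipeline only at a high level. Below is a minimal, hypothetical Python sketch of such a scheme; the rule representation (frozensets of attribute tests), the toy base learner standing in for a Ripper-like algorithm, and the naive clustering, generalization and selection heuristics are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a PLCG-style meta-learning loop (not the published algorithm).
# Rules are frozensets of (attribute, value) tests; examples are (dict, label) pairs.
import random
from itertools import combinations

def learn_rules(examples):
    """Toy base learner: one maximally specific rule per positive example."""
    return [frozenset(x.items()) for x, label in examples if label == 1]

def cluster_and_generalize(rules, min_overlap=2):
    """Greedily pair rules that share conditions; the intersection of a pair
    serves as the generalization of that 'cluster' (a crude stand-in for
    PLCG's clustering and generalization step)."""
    generalized = set()
    for r1, r2 in combinations(rules, 2):
        common = r1 & r2
        if len(common) >= min_overlap:
            generalized.add(common)
    return generalized or set(rules)

def select(rules, data, max_rules=5):
    """Heuristic selection: prefer simpler rules that still cover many positives."""
    def covers(rule, x):
        return rule <= frozenset(x.items())
    return sorted(
        rules,
        key=lambda r: (len(r), -sum(covers(r, x) for x, y in data if y == 1)),
    )[:max_rules]

def plcg_like(data, n_partitions=5):
    # 1. Learn several rule models on disjoint partitions of the data.
    random.shuffle(data)
    parts = [data[i::n_partitions] for i in range(n_partitions)]
    all_rules = [r for part in parts for r in learn_rules(part)]
    # 2./3. Cluster + generalize the pooled rules, then select a small final set.
    return select(cluster_and_generalize(all_rules), data)

if __name__ == "__main__":
    toy = [({"articulation": "legato", "phrase_end": i % 2}, i % 2) for i in range(40)]
    print(plcg_like(toy))
```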

    Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music

    We investigate the properties of a new Multi-Agent System (MAS) for computer-aided composition called IPCS (pronounced “ipp-siss”), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures by a novel multi-agent process. IPCS consists of a small to medium-sized collection of agents (2 to 16) in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an “artificial emotional state”) which affects how it performs the music to other agents; e.g. a “happy” agent will perform “happier” music. The agent performance not only involves compositional changes to the music, but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and longer tunes are built up through agent interaction. Agents will only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society thus learns its own growing tune during the interaction process. Agents develop “opinions” of other agents that perform to them, depending on how much the performing agent can help their tunes grow; these opinions affect which agents they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features, but actually utilizes music for the agents to communicate emotions. In spite of the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melody pitch sequences as a result of emotional communication between agents. The melodies also have a hierarchical structure that reflects the emergent social interaction structure of the multi-agent system. The interactive humanizations produce micro-timing and loudness deviations in the melody that are shown to express its hierarchical generative structure without the need for the structural analysis software frequently used in computer music humanization.
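    A minimal sketch of an IPCS-like interaction cycle is given below, under the assumptions that affective state is a single scalar in [0, 1], that "performing" a tune shifts its pitches slightly toward the performer's affect, and that partners are chosen uniformly at random (the published system is described as weighting interaction by opinions). All class and function names are illustrative; this is not the published system.

```python
# Hypothetical sketch of an IPCS-like agent society (illustrative only).
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    affect: float                                       # artificial emotional state, 0 = sad, 1 = happy
    tune: list = field(default_factory=lambda: [60])    # every agent starts with the same single note
    opinions: dict = field(default_factory=dict)        # performer id -> accumulated score

    def perform(self):
        # Expressive/compositional transformation: happier agents shift pitches upward.
        shift = round((self.affect - 0.5) * 4)
        return [p + shift for p in self.tune], self.affect

    def listen(self, performer_id, notes, perceived_affect, threshold=0.2):
        # Learn the tune only if its affective content is close to our own state,
        # concatenating it to the end of our current tune and updating our opinion.
        if abs(perceived_affect - self.affect) <= threshold:
            self.tune.extend(notes)
            self.opinions[performer_id] = self.opinions.get(performer_id, 0) + len(notes)

def run(n_agents=8, cycles=200, seed=1):
    random.seed(seed)
    agents = [Agent(affect=random.random()) for _ in range(n_agents)]
    for _ in range(cycles):
        # Partner choice is uniform here; the full system biases it by opinions.
        i, j = random.sample(range(n_agents), 2)
        notes, affect = agents[i].perform()
        agents[j].listen(i, notes, affect)
    return agents

if __name__ == "__main__":
    society = run()
    print([len(a.tune) for a in society])   # tune lengths emerge from the interactions
```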

    Logic-based Modelling of Musical Harmony for Automatic Characterisation and Classification

    The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author. Music, like other online media, is undergoing an information explosion. Massive online music stores such as the iTunes Store or Amazon MP3, and their counterparts, the streaming platforms such as Spotify, Rdio and Deezer, offer more than 30 million pieces of music to their customers, that is to say anybody with a smartphone. Indeed, these ubiquitous devices offer vast storage capacities and cloud-based apps that can cater to any music request. As Paul Lamere puts it: “we can now have a virtually endless supply of music in our pocket. The ‘bottomless iPod’ will have as big an effect on how we listen to music as the original iPod had back in 2001. But with millions of songs to choose from, we will need help finding music that we want to hear [...]. We will need new tools that help us manage our listening experience.” Retrieval, organisation, recommendation, annotation and characterisation of musical data is precisely what the Music Information Retrieval (MIR) community has been working on for at least 15 years (Byrd and Crawford, 2002). It is clear from its historical roots in practical fields such as Information Retrieval, Information Systems, Digital Resources and Digital Libraries, but also from the publications presented at the first International Symposium on Music Information Retrieval in 2000, that MIR has been aiming to build tools to help people navigate, explore and make sense of music collections (Downie et al., 2009). That also includes analytical tools to support

    Biomechanical Modelling of Musical Performance: A Case Study of the Guitar

    Computer-generated musical performances are often criticised for being unable to match the expressivity found in performances by humans. Much research has been conducted in the past two decades in order to create computer technology able to perform a given piece of music as expressively as humans, largely without success. Two approaches have often been adopted to research into modelling expressive music performance on computers. The first focuses on sound; that is, on modelling patterns of deviations between a recorded human performance and the music score. The second focuses on modelling the cognitive processes involved in a musical performance. Both approaches are valid and can complement each other. In this thesis we propose a third, complementary approach, focusing on the guitar, which concerns the physical manipulation of the instrument by the performer: a biomechanical approach. The essence of this thesis is a study on capturing, analysing and modelling information about the motor and biomechanical processes of guitar performance. The focus is on the speed, precision and force of a guitarist's left hand. The overarching questions behind our study are: 1) Do unintentional actions originating from motor and biomechanical functions during musical performance contribute a material "human feel" to the performance? 2) Would it be possible to determine and quantify such unintentional actions? 3) Would it be possible to model and embed such information in a computer system? The contributions to knowledge pursued in this thesis include: a) an unprecedented study of guitar mechanics, ergonomics and playability; b) a detailed study of how the human body performs actions when playing the guitar; c) a methodology to formally record quantifiable data about such actions in performance; d) an approach to model such information; and e) a demonstration of how the above knowledge can be embedded in a system for music performance.
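    As a rough illustration of this third approach (not the author's system), the sketch below derives small, unintentional-style timing and loudness deviations from a crude stand-in for left-hand fretting effort and embeds them in a rendered melody. The cost model, names and numeric gains are all assumptions made for the example.

```python
# Hypothetical illustration of biomechanically informed humanization (not the thesis system).
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int        # MIDI pitch
    onset: float      # nominal onset time in seconds
    velocity: int     # nominal MIDI velocity

def fretting_cost(prev: NoteEvent, curr: NoteEvent) -> float:
    """Crude stand-in for a left-hand biomechanical model: larger position
    shifts (approximated here by pitch distance) take more time and effort."""
    return abs(curr.pitch - prev.pitch) / 12.0   # one octave ~ one "position" shift

def humanize(score: list, timing_gain: float = 0.02, dyn_gain: int = 6) -> list:
    out = [score[0]]
    for prev, curr in zip(score, score[1:]):
        cost = fretting_cost(prev, curr)
        out.append(NoteEvent(
            pitch=curr.pitch,
            onset=curr.onset + timing_gain * cost,                   # harder shifts land slightly late
            velocity=max(1, curr.velocity - int(dyn_gain * cost)),   # and slightly softer
        ))
    return out

if __name__ == "__main__":
    melody = [NoteEvent(64, 0.0, 80), NoteEvent(71, 0.5, 80), NoteEvent(59, 1.0, 80)]
    for note in humanize(melody):
        print(note)
```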