
    Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

    This article surveys long-term research on the problem of rendering expressive music by means of AI techniques, with an emphasis on case-based reasoning (CBR). Following a brief overview discussing why people prefer listening to expressive music instead of nonexpressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the article we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on Tempo-Express, a case-based reasoning system developed at our institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements audio information with information about the musician's gestures. Music is played through our bodies; capturing the performer's gestures is therefore a fundamental aspect that must be taken into account in future expressive music renderings. This article is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011. This research is partially supported by the Ministry of Science and Innovation of Spain under the project NEXT-CBR (TIN2009-13692-C03-01) and the Generalitat de Catalunya AGAUR Grant 2009-SGR-1434. Peer reviewed.
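
    To make the core mechanism concrete, the following is a minimal sketch of the retrieve-and-reuse cycle that a case-based tempo transformer in the spirit of Tempo-Express performs. The case representation, similarity measure, and all names are illustrative assumptions, not the published system.

        # Sketch of CBR tempo transformation: retrieve the most similar stored
        # case and reuse its per-note stretches, rather than scaling every
        # note uniformly by the tempo ratio. All names are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Case:
            melody: list[float]   # normalized note durations of the stored phrase
            tempo_ratio: float    # target bpm / source bpm of the stored change
            stretch: list[float]  # per-note duration multipliers that were applied

        def phrase_distance(a: list[float], b: list[float]) -> float:
            """Naive phrase similarity: mean absolute duration difference."""
            n = min(len(a), len(b))
            return sum(abs(x - y) for x, y in zip(a, b)) / n

        def transform(case_base: list[Case], melody: list[float],
                      src_bpm: float, dst_bpm: float) -> list[float]:
            # Retrieve: closest phrase whose stored tempo change matches ours.
            ratio = dst_bpm / src_bpm
            best = min(case_base,
                       key=lambda c: phrase_distance(c.melody, melody)
                                     + abs(c.tempo_ratio - ratio))
            # Reuse: apply the retrieved per-note stretches to the new melody.
            return [d * s for d, s in zip(melody, best.stretch)]

        cases = [Case([1.0, 0.5, 0.5, 2.0], 1.2, [0.9, 0.8, 0.8, 1.1])]
        print(transform(cases, [1.0, 0.5, 0.5, 2.0], 100, 120))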

    Changing musical emotion: A computational rule system for modifying score and performance

    The CMERS system architecture has been implemented in the Scheme programming language within the Impromptu music programming environment, with the objective of providing researchers with a tool for testing the relationships between musical features and emotion. A musical work represented in CMERS uses a music-object hierarchy that is based on GTTM's grouping structure and is generated automatically from phrase-boundary markup and a MIDI file. The Mode rule type of CMERS converts notes into those of the parallel mode; no change in pitch height occurs in this conversion. The odds of a correct response with CMERS are reported to be approximately five times greater than with DM. The repeated-measures analysis of variance for valence shows a significant difference between systems, F(1, 17) = 45.49, p < .0005, and a significant interaction between system and quadrant, F(3, 51) = 4.23, p = .01, indicating that CMERS is significantly more effective than DM at correctly influencing valence. © 2010 Massachusetts Institute of Technology
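
    As an illustration of what a Mode-type rule can look like, here is a hedged sketch that maps a major-key melody onto the parallel minor by flattening scale degrees 3, 6, and 7, which leaves the overall pitch height essentially unchanged. This is an assumption-laden reconstruction, not the actual CMERS rule set.

        # Map notes of a major key onto the parallel minor by lowering the
        # 3rd, 6th, and 7th scale degrees by a semitone; other degrees are
        # left untouched, so register and pitch height are preserved.
        MAJOR_TO_PARALLEL_MINOR = {4: 3, 9: 8, 11: 10}  # pitch classes vs. tonic

        def to_parallel_minor(midi_pitch: int, tonic_pc: int) -> int:
            """Return the pitch re-spelled in the parallel minor mode."""
            degree = (midi_pitch - tonic_pc) % 12
            shift = MAJOR_TO_PARALLEL_MINOR.get(degree, degree) - degree
            return midi_pitch + shift

        # Example: a C major scale fragment rendered in C minor (tonic_pc=0).
        print([to_parallel_minor(p, 0) for p in [60, 62, 64, 65, 67, 69, 71, 72]])
        # -> [60, 62, 63, 65, 67, 68, 70, 72]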

    Automatic execution of expressive music performance

    The definition of computer models representing the expressiveness of a musical performance is useful for understanding how, and in what way, expressive intentions can be conveyed in a music performance. CaRo 2.0 is a software system that supports the automatic, interactive rendering of expressive musical scores. Initially, the software ran exclusively in the Microsoft environment, which limited the interest of the product. This thesis concerns the porting and integration…

    CaRo 2.0: an interactive system for expressive music rendering

    In several application contexts in the multimedia field (educational, extreme gaming), interaction with the user requires that the system be able to render music in an expressive way. Expressiveness is the added value of a performance and is part of the reason that music is interesting to listen to. Understanding and modeling expressive content communication is important for many engineering applications in information technology (e.g., Music Information Retrieval, as well as several applications in the affective computing field). In this paper, we present an original approach to modifying the expressive content of a performance in a gradual way, applying a smooth morphing among performances with different expressive content in order to adapt the audio's expressive character to the user's desires. The system won the final stage of Rencon 2011 (Performance RENdering CONtest), a research project that organizes contests for computer systems generating expressive musical performances.
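
    A gradual change of expressive character can be sketched as a per-note interpolation between the expressive deviations of two performances, controlled by a user weight. The parameter names below are illustrative assumptions, not CaRo 2.0's actual representation.

        # Blend the expressive parameters of two performances note by note.
        # alpha = 0.0 reproduces perf_a; alpha = 1.0 reproduces perf_b.
        def morph(perf_a: list[dict], perf_b: list[dict],
                  alpha: float) -> list[dict]:
            keys = ("tempo_dev", "loudness", "articulation")
            return [{k: (1 - alpha) * a[k] + alpha * b[k] for k in keys}
                    for a, b in zip(perf_a, perf_b)]

        # Example: halfway between a "sad" and a "happy" two-note rendering.
        sad   = [{"tempo_dev": -0.10, "loudness": 0.4, "articulation": 1.1}] * 2
        happy = [{"tempo_dev":  0.15, "loudness": 0.8, "articulation": 0.7}] * 2
        print(morph(sad, happy, 0.5))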

    Discovering simple rules in complex data: A meta-learning algorithm and some surprising musical discoveries

    Get PDF
    This article presents a new rule discovery algorithm named PLCG that can find simple, robust partial rule models (sets of classification rules) in complex data where it is difficult or impossible to find models that completely account for all the phenomena of interest. Technically speaking, PLCG is an ensemble learning method that learns multiple models via some standard rule learning algorithm, and then combines these into one final rule set via clustering, generalization, and heuristic rule selection. The algorithm was developed in the context of an interdisciplinary research project that aims at discovering fundamental principles of expressive music performance from large amounts of complex real-world data (specifically, measurements of actual performances by concert pianists). It will be shown that PLCG succeeds in finding some surprisingly simple and robust performance principles, some of which represent truly novel and musically meaningful discoveries. A set of more systematic experiments shows that PLCG usually discovers significantly simpler theories than more direct approaches to rule learning (including the state-of-the-art learning algorithm Ripper), while striking a compromise between coverage and precision. The experiments also show how easy it is to use PLCG as a meta-learning strategy to explore different parts of the space of rule models.
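
    The pipeline described above (ensemble rule learning, clustering, generalization, heuristic selection) can be outlined as follows. Here learn_rules, rule_distance, generalize, and score stand in for the paper's components; this is a rough sketch under those assumptions, not the published PLCG algorithm.

        # Outline of a PLCG-style meta-learning pipeline.
        import random

        def plcg(data, learn_rules, rule_distance, generalize, score,
                 n_models=10, keep=5, eps=0.2):
            # 1. Learn several partial rule models on random subsamples.
            rules = []
            for _ in range(n_models):
                rules.extend(learn_rules(random.sample(data, len(data) // 2)))
            # 2. Greedily cluster rules that lie within eps of an existing cluster.
            clusters = []
            for r in rules:
                home = next((c for c in clusters
                             if any(rule_distance(r, q) < eps for q in c)), None)
                if home is not None:
                    home.append(r)
                else:
                    clusters.append([r])
            # 3. Generalize each cluster into one candidate rule, then
            #    heuristically keep the top-scoring candidates.
            return sorted((generalize(c) for c in clusters),
                          key=score, reverse=True)[:keep]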