
    Neural Translation of Musical Style

    Music is an expressive form of communication often used to convey emotion in scenarios where "words are not enough". Part of this information lies in the musical composition, where a well-defined language exists. However, a significant amount of information is added during a performance as the musician interprets the composition. The performer injects expressiveness into the written score through variations of different musical properties such as dynamics and tempo. In this paper, we describe a model that can learn to perform sheet music. Our research concludes that the generated performances are indistinguishable from a human performance, thereby passing a test in the spirit of a "musical Turing test".
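    As an illustration only (the abstract does not describe the paper's actual architecture), a score-to-performance model of this kind can be sketched as a small sequence network that reads per-note score features and predicts expressive parameters such as note velocity and a local tempo ratio. The feature set, layer sizes and PyTorch implementation below are assumptions, not the authors' model.

```python
# Hypothetical sketch, not the paper's model: map per-note score features
# (e.g. pitch, duration, metrical position) to expressive parameters.
import torch
import torch.nn as nn

class ScoreToPerformance(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        # A bidirectional LSTM gives each note some phrase-level context.
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # per note: velocity + tempo ratio

    def forward(self, score):                  # score: (batch, notes, n_features)
        context, _ = self.rnn(score)
        out = self.head(context)
        velocity = torch.sigmoid(out[..., 0]) * 127.0  # MIDI dynamics, 0..127
        tempo_ratio = torch.exp(out[..., 1])           # > 0; 1.0 = notated tempo
        return velocity, tempo_ratio

# Example: "perform" a random 16-note phrase of made-up score features.
model = ScoreToPerformance()
velocity, tempo_ratio = model(torch.rand(1, 16, 3))
```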

    Lisp, Jazz, Aikido -- Three Expressions of a Single Essence

    The relation between Science (what we can explain) and Art (what we can't) has long been acknowledged and while every science contains an artistic part, every art form also needs a bit of science. Among all scientific disciplines, programming holds a special place for two reasons. First, the artistic part is not only undeniable but also essential. Second, and much like in a purely artistic discipline, the act of programming is driven partly by the notion of aesthetics: the pleasure we have in creating beautiful things. Even though the importance of aesthetics in the act of programming is now unquestioned, more could still be written on the subject. The field called "psychology of programming" focuses on the cognitive aspects of the activity, with the goal of improving the productivity of programmers. While many scientists have emphasized their concern for aesthetics and the impact it has on their activity, few computer scientists have actually written about their thought process while programming. What makes us like or dislike such and such language or paradigm? Why do we shape our programs the way we do? By answering these questions from the angle of aesthetics, we may be able to shed some new light on the art of programming. Starting from the assumption that aesthetics is an inherently transversal dimension, it should be possible for every programmer to find the same aesthetic driving force in every creative activity they undertake, not just programming, and in doing so, gain deeper insight into why and how they do things the way they do. On the other hand, because our aesthetic sensitivities are so personal, all we can really do is relate our own experiences and share them with others, in the hope that it will inspire them to do the same. My personal life has been revolving around three major creative activities, of equal importance: programming in Lisp, playing Jazz music, and practicing Aikido. But why so many of them, why so different ones, and why these specifically? By introspecting my personal aesthetic sensitivities, I eventually realized that my tastes in the scientific, artistic, and physical domains are all motivated by the same driving forces, hence unifying Lisp, Jazz, and Aikido as three expressions of a single essence, not so different after all. Lisp, Jazz, and Aikido are governed by a limited set of rules which remain simple and unobtrusive. Conforming to them is a pleasure. Because Lisp, Jazz, and Aikido are inherently introspective disciplines, they also invite you to transgress the rules in order to find your own. Breaking the rules is fun. Finally, if Lisp, Jazz, and Aikido unify so many paradigms, styles, or techniques, it is not by mere accumulation but because they live at the meta-level and let you reinvent them. Working at the meta-level is an enlightening experience. Understand your aesthetic sensitivities and you may gain considerable insight into your own psychology of programming. Mine is perhaps common to most lispers. Perhaps also common to other programming communities, but that is for the reader to decide.

    Comparative Study of Musical Performance by Machine Learning

    This paper deals with several special domains of computer science, viz. machine learning, genetic algorithms, rule-based systems, music, and various intelligent systems. Most musicians' performances are modelled with a machine learning approach to improve the accuracy of the musical notes. Intelligent systems use databases to store monophonic audio recordings of jazz standards performed by the musician. These approaches are used to obtain a model that explains and generates expressive music performances. The rule-based approach gives note-level information containing time, dynamics and melody alterations. In this paper, we investigate how all these machine learning techniques work. We also compare their features and performance with an evolutionary approach, which helps the user obtain a rule-based incremental model. Finally, the output is given in a summarized format which provides a reference solution. Comparative analysis shows that the methods used by the incremental rule-based approach provide full functionality and effectiveness compared with previous machine learning techniques.
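    For concreteness, the note-level information mentioned above (time, dynamics and melody alteration) can be manipulated by very simple performance rules. The sketch below is an invented, minimal rule set and is not taken from any of the surveyed systems; it only illustrates what a rule-based, note-level rendering step might look like.

```python
# Invented example of note-level performance rules (not from the paper):
# each rule alters the timing or dynamics of a note based on its context.
from dataclasses import dataclass, replace

@dataclass
class Note:
    pitch: int        # MIDI pitch number
    onset: float      # onset time in beats
    duration: float   # duration in beats
    velocity: int     # MIDI velocity, 1..127

def apply_rules(melody):
    top_pitch = max(n.pitch for n in melody)
    rendered = []
    for i, note in enumerate(melody):
        v, d = note.velocity, note.duration
        if note.pitch == top_pitch:              # accent the melodic peak
            v = min(127, v + 12)
        if i > 0 and melody[i - 1].pitch == note.pitch:
            d *= 0.9                              # detach repeated notes slightly
        if i == len(melody) - 1:                  # relax the phrase ending
            v = max(1, v - 10)
            d *= 1.2
        rendered.append(replace(note, duration=d, velocity=v))
    return rendered

melody = [Note(60, 0, 1, 80), Note(64, 1, 1, 80), Note(67, 2, 1, 80), Note(67, 3, 1, 80)]
performance = apply_rules(melody)
```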

    Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

    This article surveys long-term research on the problem of rendering expressive music by means of AI techniques, with an emphasis on case-based reasoning (CBR). Following a brief overview discussing why people prefer listening to expressive music instead of non-expressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the article we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on Tempo-Express, a case-based reasoning system developed at our Institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work consisting of complementing audio information with information about the gestures of the musician. Music is played through our bodies; therefore, capturing the gestures of the performer is a fundamental aspect that has to be taken into account in future expressive music renderings. This article is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011. This research is partially supported by the Ministry of Science and Innovation of Spain under the project NEXT-CBR (TIN2009-13692-C03-01) and by Generalitat de Catalunya AGAUR grant 2009-SGR-1434.
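    The CBR cycle behind a system like Tempo-Express can be illustrated with a toy retrieve-and-reuse step: given a new phrase and a requested tempo change, retrieve the most similar stored performance case and rescale its tempo curve. The case representation, distance function and scaling below are simplified assumptions made for this illustration, not the actual Tempo-Express design.

```python
# Toy CBR retrieve-and-reuse step for tempo transformation (not Tempo-Express):
# cases pair a phrase descriptor and a tempo change with a stored per-note
# tempo curve; a new problem reuses the curve of its nearest case.
import math

# case = (phrase descriptor (contour, length), source BPM, target BPM, tempo curve)
case_base = [
    ((0.8, 4), 120, 90,  [1.00, 0.97, 0.92, 0.85]),   # slowing a 4-note phrase
    ((0.3, 4), 120, 150, [1.00, 1.04, 1.09, 1.15]),   # speeding up a 4-note phrase
]

def distance(problem, case):
    (pc, pl), (cc, cl) = problem[0], case[0]
    tempo_term = abs(problem[2] / problem[1] - case[2] / case[1])
    return math.hypot(pc - cc, (pl - cl) / 10) + tempo_term

def retrieve_and_reuse(problem):
    best = min(case_base, key=lambda case: distance(problem, case))
    # Reuse: shrink or stretch the retrieved curve's deviation from the nominal
    # tempo in proportion to the requested tempo change vs. the stored one.
    stored_change = best[2] / best[1] - 1.0
    wanted_change = problem[2] / problem[1] - 1.0
    factor = wanted_change / stored_change
    return [round(1.0 + (r - 1.0) * factor, 3) for r in best[3]]

# New problem: a similar 4-note phrase to be slowed from 120 to 100 BPM.
curve = retrieve_and_reuse(((0.7, 4), 120, 100))
```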

    Logic-based Modelling of Musical Harmony for Automatic Characterisation and Classification

    The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.
    Music, like other online media, is undergoing an information explosion. Massive online music stores such as the iTunes Store or Amazon MP3, and their counterparts, the streaming platforms such as Spotify, Rdio and Deezer, offer more than 30 million pieces of music to their customers, that is to say, anybody with a smart phone. Indeed these ubiquitous devices offer vast storage capacities and cloud-based apps that can cater to any music request. As Paul Lamere puts it: “we can now have a virtually endless supply of music in our pocket. The ‘bottomless iPod’ will have as big an effect on how we listen to music as the original iPod had back in 2001. But with millions of songs to choose from, we will need help finding music that we want to hear [...]. We will need new tools that help us manage our listening experience.” Retrieval, organisation, recommendation, annotation and characterisation of musical data is precisely what the Music Information Retrieval (MIR) community has been working on for at least 15 years (Byrd and Crawford, 2002). It is clear from its historical roots in practical fields such as Information Retrieval, Information Systems, Digital Resources and Digital Libraries, but also from the publications presented at the first International Symposium on Music Information Retrieval in 2000, that MIR has been aiming to build tools to help people navigate, explore and make sense of music collections (Downie et al., 2009). That also includes analytical tools to support…
