3,506 research outputs found

    Algorithms for an Automatic Transcription of Live Music Performances into Symbolic Format

    This paper addresses the problem of the real-time automatic transcription of a live music performance into a symbolic format. The source data are given by any music instrument or other device able to communicate through a performance protocol. During a performance, music events are parsed and their parameters are evaluated thanks to rhythm and pitch detection algorithms. The final step is the creation of a well-formed XML document, validated against the new international standard known as IEEE 1599. This work will briefly describe both the software environment and the XML format, but the main analysis will involve the real-time recognition of music events. Finally, a case study will be presented: PureMX, a set of Pure Data externals able to perform the automatic transcription of MIDI events.

    PureMX: Automatic transcription of MIDI live music performances into XML format

    This paper addresses the problem of the real-time automatic transcription of a live music performance into a symbolic format based on XML. The source data are given by any music instrument or other device able to communicate with Pure Data by MIDI. Pure Data is a free, multi-platform, real-time programming environment for graphical, audio, and video processing. During a performance, music events are parsed and their parameters are evaluated thanks to rhythm and pitch detection algorithms. The final step is the creation of a well-formed XML document, validated against the new international standard known as IEEE 1599. This work will briefly describe both the software environment and the XML format, but the main analysis will involve the real-time recognition of music events. Finally, a case study will be presented: PureMX, an application able to perform such an automatic transcription.
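As a rough illustration of the final serialisation step described above, the sketch below turns a list of already-parsed note events into a well-formed XML document. The event tuples, element names, and attributes are invented for the example and do not follow the actual IEEE 1599 schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical parsed events as (onset, MIDI pitch, duration) in beats;
# the element and attribute names below are illustrative placeholders,
# not the IEEE 1599 vocabulary.
events = [(0.0, 60, 1.0), (1.0, 64, 0.5), (1.5, 67, 0.5)]

def events_to_xml(events):
    """Serialise note events into a minimal well-formed XML document."""
    root = ET.Element("score")
    for onset, pitch, duration in events:
        ET.SubElement(root, "note", {
            "onset": str(onset),
            "pitch": str(pitch),
            "duration": str(duration),
        })
    return ET.tostring(root, encoding="unicode")

xml_doc = events_to_xml(events)
```

In the real system, validation against the IEEE 1599 schema would follow this serialisation step.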

    PiJAMA: Piano Jazz with Automatic MIDI Annotations

    Recent advances in automatic piano transcription have enabled large-scale analysis of piano music in the symbolic domain. However, the research has largely focused on classical piano music. We present PiJAMA (Piano Jazz with Automatic MIDI Annotations): a dataset of over 200 hours of solo jazz piano performances with automatically transcribed MIDI. In total there are 2,777 unique performances by 120 different pianists across 244 recorded albums. The dataset contains a mixture of studio recordings and live performances. We use automatic audio tagging to identify applause, spoken introductions, and other non-piano audio to facilitate downstream music information retrieval tasks. We explore descriptive statistics of the MIDI data, including pitch histograms and chromaticism. We then demonstrate two experimental benchmarks on the data: performer identification and generative modeling. The dataset, including a link to the associated source code, is available at https://almostimplemented.github.io/PiJAMA/
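A pitch histogram of the kind used in such descriptive statistics can be sketched as follows; the pitch list is invented for illustration, and folding to 12 pitch classes is one common variant of the idea.

```python
from collections import Counter

# Hypothetical MIDI pitch numbers from a transcribed performance.
pitches = [60, 62, 64, 60, 67, 60, 64]

def pitch_class_histogram(pitches):
    """Fold MIDI pitches into the 12 pitch classes (C=0 ... B=11)
    and count occurrences of each."""
    counts = Counter(p % 12 for p in pitches)
    return [counts.get(pc, 0) for pc in range(12)]

hist = pitch_class_histogram(pitches)
# hist[0] counts every C (MIDI 60, 72, ...) regardless of octave.
```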

    Extracting expressive performance information from recorded music

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (leaves 55-56). By Eric David Scheirer.

    Music Information Retrieval for Irish Traditional Music: Automatic Analysis of Harmonic, Rhythmic, and Melodic Features for Efficient Key-Invariant Tune Recognition

    Music making and listening practices increasingly rely on technology, and, as a consequence, techniques developed in music information retrieval (MIR) research are more readily available to end users, in particular via online tools and smartphone apps. However, the majority of MIR research focuses on Western pop and classical music, and thus does not address specificities of other musical idioms. Irish traditional music (ITM) is popular across the globe, with regular sessions organised on all continents. ITM is a distinctive musical idiom, particularly in terms of heterophony and modality, and these characteristics can constitute challenges for existing MIR algorithms. The benefits of developing MIR methods specifically tailored to ITM are evidenced by Tunepal, a query-by-playing tool that has become popular among ITM practitioners since its release in 2009. As of today, Tunepal is the state of the art for tune recognition in ITM. The research in this thesis addresses existing limitations of Tunepal. The main goal is to find solutions to add key-invariance to the tune recognition system, an important feature that is currently missing in Tunepal. Techniques from digital signal processing and machine learning are used and adapted to the specificities of ITM to extract harmonic and temporal features, respectively, with improvements on existing key detection methods and a novel method for rhythm classification. These features are then used to develop a key-invariant tune recognition system that is computationally efficient while maintaining retrieval accuracy at a level comparable to that of the existing system.
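One common way to picture the key-invariance goal (offered here as a generic illustration, not the thesis's actual method) is to encode a tune as the sequence of intervals between consecutive notes, so that transposed renditions of the same tune produce identical representations.

```python
def interval_sequence(pitches):
    """Melodic intervals in semitones between consecutive MIDI pitches."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

tune = [62, 64, 66, 67, 69]          # hypothetical fragment in D
transposed = [p + 3 for p in tune]   # same tune played a minor third higher

# Interval encoding is identical for both, so a lookup keyed on
# intervals matches the tune regardless of the key it is played in.
same = interval_sequence(tune) == interval_sequence(transposed)
```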

    ^muzicode$: composing and performing musical codes

    We present muzicodes, an approach to incorporating machine-readable ‘codes’ into music that allows the performer and/or composer to flexibly define what constitutes a code, and to perform around it. These codes can then act as triggers, for example to control an accompaniment or visuals during a performance. The codes can form an integral part of the music (composition and/or performance), and may be more or less obviously present. This creates a rich space of playful interaction with a system that recognises and responds to the codes. Our proof-of-concept implementation works with audio or MIDI as input. Muzicodes are represented textually, and regular expressions are used to flexibly define them. We present two contrasting demonstration applications and summarise the findings from two workshops with potential users, which highlight opportunities and challenges, especially in relation to specifying and matching codes and playing and performing with the system.
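The regular-expression idea can be sketched as follows; the comma-separated pitch-name encoding and the example pattern are hypothetical and differ in detail from the actual muzicodes notation.

```python
import re

# Hypothetical textual encoding of a performed note stream: pitch names
# separated by commas (the real muzicodes notation differs in detail).
performance = "C4,E4,G4,C5,G4,E4,C4"

# A "code" defined as a regular expression: here, an ascending
# C major arpeggio occurring anywhere in the stream.
code = re.compile(r"C4,E4,G4")

def code_triggered(stream, pattern):
    """Return True if the performed stream contains the code,
    i.e. the trigger should fire."""
    return pattern.search(stream) is not None

triggered = code_triggered(performance, code)
```

Because codes are plain regular expressions, a composer could loosen a pattern (e.g. with optional or repeated elements) to leave room for performing around the code, as the abstract describes.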