
    Piper: Audio Feature Extraction in Browser and Mobile Applications

    Piper is a protocol for audio analysis and feature extraction. We propose a data schema and API that can be used to support both remote audio feature extraction services and feature extractors loaded directly into a host application. We provide a means of using existing audio feature extractor implementations with this protocol. In this talk we demonstrate several use cases for Piper, including an “audio notebook” mobile application that uses Piper modules to analyse recordings; a web service for remote feature extraction; and the refactoring of an existing desktop application, Sonic Visualiser, to communicate with a Piper service using a simple IPC mechanism.
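    A protocol of this kind typically exchanges serialised request/response messages between the host and the service. The following is a minimal sketch of what such an exchange might look like; the method name, field names, and message shape are illustrative assumptions, not the actual Piper schema.

```python
import json

# Hypothetical sketch of a Piper-style request, serialised as JSON for
# transport over an IPC channel or HTTP. The "method"/"params"/"id"
# field names below are assumptions for illustration only.
def make_request(method, params, request_id):
    """Build and serialise one request message for the service."""
    return json.dumps({"method": method, "params": params, "id": request_id})

# e.g. ask a (hypothetical) service which feature extractors it offers
request = make_request("list", {}, 1)
```

    Keeping each message self-describing in this way is what lets the same schema serve both a remote web service and an extractor loaded directly into the host application.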

    ECOLM and Lute Tablature

    ECOLM, or “Electronic Corpus of Lute Music” (1999–2002), was a project led by Tim Crawford at King’s College London that developed and populated a database of lute tablature encodings with metadata, for scholarly use, queried through a web interface. Subsequent projects ECOLM II (2002–2006) and ECOLM III (2012) expanded the database and used it for some computational musicological investigations. The resulting database was hosted on a public-facing web server at Goldsmiths, University of London. It is still running today, although nobody is formally responsible for maintaining it. We consider the status of ECOLM and a number of related lute tablature resources, discuss their audience and challenges for sustainability, and identify three alternative directions for sustainable development.

    Linked Data and you: Bringing music research software into the Semantic Web

    The promise of the Semantic Web is to democratize access to data, allowing anyone to make use of and contribute back to the global store of knowledge. Within the scope of the OMRAS2 Music Information Retrieval project, we have made use of and contributed to Semantic Web technologies for purposes ranging from the publication of music recording metadata to the online dissemination of results from audio analysis algorithms. In this paper, we assess the extent to which our tools and frameworks can assist in research and facilitate distributed work among audio and music researchers, and enumerate and motivate further steps to improve collaborative efforts in music informatics using the Semantic Web. To this end, we review some of the tools developed by the OMRAS2 project, examine the extent to which our work reflects the Semantic Web paradigm, and discuss some of the remaining work needed to fulfil the promise of online music informatics research.

    Playing fast and loose with music recognition

    We report lessons from iteratively developing a music recognition system to enable a wide range of musicians to embed musical codes into their typical performance practice. The musician composes fragments of music that can be played back with varying levels of embellishment, disguise and looseness to trigger digital interactions. We collaborated with twenty-three musicians, spanning professionals to amateurs and working with a variety of instruments. We chart the rapid evolution of the system to meet their needs as they strove to integrate music recognition technology into their performance practice, introducing multiple features to enable them to trade off reliability against musical expression. Collectively, these support the idea of deliberately introducing ‘looseness’ into interactive systems by addressing the three key challenges of control, feedback and attunement, and highlight the potential role for written notations in other recognition-based systems.

    SOUND SOFTWARE: TOWARDS SOFTWARE REUSE IN AUDIO AND MUSIC RESEARCH

    Although researchers are increasingly aware of the need to publish and maintain software code alongside their results, practical barriers prevent this from happening in many cases. We examine these barriers, propose an incremental approach to overcoming some of them, and describe the Sound Software project, an effort to support software development practice in the UK audio and music research community. Finally, we make some recommendations for research groups seeking to improve their own researchers’ software practice.

    The Sonic Visualiser: A Visualisation Platform for Semantic Descriptors from Musical Signals

    Sonic Visualiser is an implementation of a system to assist the study and comprehension of the contents of audio data, particularly musical recordings. It is a C++ application with a Qt4 GUI that runs on Windows, Mac, and Linux. It embodies a number of concepts intended to improve interaction with audio data and features, most notably with respect to the representation of time-synchronous information. The architecture of the application allows for easy integration of third-party algorithms for the extraction of low- and mid-level features from musical audio data. This paper describes some basic principles and functionalities of the application.

    MedleyDB - Pitch Tracking Subset

    Subset of MedleyDB: 103 solo monophonic stem audio files with corresponding manually annotated pitch (f0) tracks. Annotation and metadata files are version-controlled and available in the MedleyDB GitHub repository; for detailed information about the dataset, please visit the MedleyDB website. If you make use of MedleyDB for academic purposes, please cite the following publication: R. Bittner, J. Salamon, M. Tierney, M. Mauch, C. Cannam and J. P. Bello, “MedleyDB: A Multitrack Dataset for Annotation-Intensive MIR Research”, in 15th International Society for Music Information Retrieval Conference, Taipei, Taiwan, Oct. 2014.
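    A minimal sketch of reading one such pitch annotation, assuming a two-column CSV of (time in seconds, f0 in Hz) with a frequency of 0 marking unvoiced frames; both the column layout and the unvoiced convention are assumptions here, so check the MedleyDB documentation before relying on them.

```python
import csv
import io

def load_f0(fileobj):
    """Return parallel lists of times and f0 values from a CSV stream."""
    times, freqs = [], []
    for row in csv.reader(fileobj):
        times.append(float(row[0]))
        freqs.append(float(row[1]))
    return times, freqs

# Illustrative made-up data standing in for one annotation file
sample = io.StringIO("0.0000,0.0\n0.0058,220.1\n0.0116,221.3\n")
times, freqs = load_f0(sample)
voiced = [f for f in freqs if f > 0.0]  # drop assumed-unvoiced frames
```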