22 research outputs found

    Grateful Live: Mixing Multiple Recordings of a Dead Performance into an Immersive Experience

    Get PDF

    AUFX-O: Novel Methods for the Representation of Audio Processing Workflows

    Get PDF

    A History of Audio Effects

    Get PDF
    Audio effects are an essential tool that the field of music production relies upon. The ability to intentionally manipulate and modify a piece of sound has opened up considerable opportunities for music making. The evolution of technology has often driven new audio tools and effects, from early architectural acoustics through electromechanical and electronic devices to the digitisation of music production studios. Throughout history, music has constantly borrowed ideas and technological advances from other fields and contributed innovations back in turn. This process, known as transsectorial innovation, fundamentally underpins the technological development of audio effects. The development and evolution of audio effect technology is discussed, highlighting major technical breakthroughs and the impact of available audio effects.

    Exploration of Grateful Dead Concerts and Memorabilia on the Semantic Web

    Get PDF
    © 2018 CEUR-WS. All rights reserved. With the increasing importance attributed to intangible cultural heritage, of which music performance is an important part, public archive collections contain a growing proportion of audio and video material. Currently used models have only limited capabilities for representing it. This demo illustrates our proposal for a unified ontological model of live music recordings and associated tangible artefacts, together with a Web application for exploring live music events of the Grateful Dead.

    Alignment and Timeline Construction for Incomplete Analogue Audience Recordings of Historical Live Music Concerts

    Get PDF
    Analogue recordings pose specific problems during automatic alignment, such as distortion due to physical degradation, or differences in tape speed during recording, copying, and digitisation. Oftentimes, recordings are incomplete, exhibiting gaps of varying length. In this paper we propose a method to align multiple digitised analogue recordings of the same concerts, of varying quality and song segmentation. The process includes the automatic construction of a reference concert timeline. We evaluate alignment methods on a synthetic dataset and apply our algorithm to real-world data.
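    The core alignment step in work like this is typically a dynamic-programming alignment such as dynamic time warping (DTW), which can absorb tape-speed differences between two recordings of the same performance. The sketch below is not the paper's method — it is a minimal, self-contained DTW over toy 1-D feature sequences (a real system would align chroma or spectral features and handle gaps), shown purely to illustrate how a warping path maps a reference timeline onto a slower recording.

    ```python
    import numpy as np

    def dtw_align(x, y):
        """Align two 1-D feature sequences with classic dynamic time warping.

        Returns the optimal warping path as a list of (i, j) index pairs
        mapping frames of x onto frames of y.
        """
        n, m = len(x), len(y)
        # Accumulated-cost matrix, padded with infinities for the boundary.
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        # Backtrack from the end to recover the warping path.
        path = []
        i, j = n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    # A tape running at half speed stretches the same melody over twice the frames.
    ref = [0, 1, 2, 3, 2, 1]
    slow = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1]
    path = dtw_align(ref, slow)
    print(path[0], path[-1])  # endpoints of the warping path
    ```

    The warping path is monotonic in both indices, which is what lets a reference concert timeline be projected onto each individual recording once one pairwise alignment per recording is computed.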

    Linked Data Publication of Live Music Archives and Analyses

    Get PDF
    keywords: Linked Data, Semantic Audio, Semantic Web, live music archive
    doi: https://dx.doi.org/10.1007/978-3-319-68204-4_3
    We describe the publication of a linked data set exposing metadata from the Internet Archive Live Music Archive along with detailed feature analysis data of the audio files contained in the archive. The collection is linked to existing musical and geographical resources, allowing for the extraction of useful or interesting subsets of data using additional metadata. The collection is published using a ‘layered’ approach, aggregating the original information with links and specialised analyses, and forms a valuable resource for those investigating or developing audio analysis tools and workflows.
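    The ‘layered’ publication pattern described above — original archive metadata in one layer, analysis results in a separate layer that links back to it — can be sketched as plain RDF triples. The URIs below are hypothetical placeholders (the actual dataset uses its own namespaces); only the Dublin Core predicates are a real vocabulary.

    ```python
    # Hypothetical URIs illustrating the layered pattern: an archive item
    # plus a separate analysis resource that points back at the item.
    ITEM = "<http://example.org/archive/item1>"        # assumed URI
    ANALYSIS = "<http://example.org/analyses/item1>"   # assumed URI

    triples = [
        (ITEM, "<http://purl.org/dc/terms/title>", '"A live recording"'),
        # The analysis layer references the original item rather than
        # modifying it, so layers can be published independently.
        (ANALYSIS, "<http://purl.org/dc/terms/subject>", ITEM),
        (ANALYSIS, "<http://purl.org/dc/terms/creator>", '"feature-extractor"'),
    ]

    def to_ntriples(triples):
        """Serialise (subject, predicate, object) tuples as N-Triples lines."""
        return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

    print(to_ntriples(triples))
    ```

    Serialising each layer separately and joining them only through shared URIs is what makes it cheap to add new analyses without republishing the original archive metadata.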