    Information Delivery on Mobile Devices Using Contour Icon Sonification

    This paper examines the use of musical patterns to convey information, specifically in the context of mobile devices. Existing mechanisms (such as the popularity of the Morse code SMS alert) suggest that musical patterns on mobile devices can be an efficient and powerful method of data delivery. Unique musical patterns based on templates known as Contour Icons are used to represent specific data variables, with the output rendering of these patterns being referred to as a Sonification of that data. Contour Icon patterns mimic basic shapes and structures, providing listeners with a means of categorising them at a high level. Potential Sonification applications involving mobile devices are already in testing, with the aim of delivering data to mobile users in a fast, efficient and hands-free manner. The goal of this research is to provide greater functionality on mobile devices using Sonification.
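    The paper itself contains no code; the following is a minimal sonification sketch assuming a simple mapping from a data series to a pitch contour rendered as sine tones. Function name, parameter values and the WAV output are illustrative assumptions, not the authors' Contour Icon system.

```python
# A minimal sonification sketch (illustrative only, not the paper's system):
# map a short data series onto a rising/falling pitch contour and render it
# as a sequence of sine tones written to a WAV file.
import numpy as np
from scipy.io import wavfile

def contour_sonify(values, out_path="contour.wav", sr=22050, note_dur=0.25,
                   low_hz=220.0, high_hz=880.0):
    """Map each value to a pitch between low_hz and high_hz and concatenate
    short sine tones, so the melodic contour follows the shape of the data."""
    v = np.asarray(values, dtype=float)
    rng = np.ptp(v)
    v = (v - v.min()) / rng if rng else np.zeros_like(v)   # normalise to [0, 1]
    freqs = low_hz * (high_hz / low_hz) ** v               # log-spaced pitch mapping
    t = np.linspace(0.0, note_dur, int(sr * note_dur), endpoint=False)
    env = np.hanning(t.size)                               # soften note boundaries
    audio = np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])
    wavfile.write(out_path, sr, (audio * 32767).astype(np.int16))

# Example: a "rise then fall" contour shape
contour_sonify([1, 2, 3, 4, 5, 4, 3, 2, 1])
```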

    hpDJ: An automated DJ with floorshow feedback

    Many radio stations and nightclubs employ Disk-Jockeys (DJs) to provide a continuous uninterrupted stream or “mix” of dance music, built from a sequence of individual song-tracks. In the last decade, commercial pre-recorded compilation CDs of DJ mixes have become a growth market. DJs exercise skill in deciding an appropriate sequence of tracks and in mixing 'seamlessly' from one track to the next. Online access to large-scale archives of digitized music via automated music information retrieval systems offers users the possibility of discovering many songs they like, but the majority of consumers are unlikely to want to learn the DJ skills of sequencing and mixing. This paper describes hpDJ, an automatic method by which compilations of dance-music can be sequenced and seamlessly mixed by computer, with minimal user involvement. The user may specify a selection of tracks, and may give a qualitative indication of the type of mix required. The resultant mix can be presented as a continuous single digital audio file, whether for burning to CD, for play-out from a personal playback device such as an iPod, or for play-out to rooms full of dancers in a nightclub. An early version of this system was tested on an audience of patrons in a London nightclub, with very favourable results. Subsequent to that experiment, we designed technologies which allow the hpDJ system to monitor the responses of crowds of dancers/listeners, so that hpDJ can dynamically react to those responses from the crowd. The initial intention was that hpDJ would monitor the crowd’s reaction to the song-track currently being played, and use that response to guide its selection of subsequent song-tracks in the mix. In that version, it is assumed that all the song-tracks exist in some archive or library of pre-recorded files. However, once reliable crowd-monitoring technology is available, it becomes possible to use the crowd-response data to dynamically “remix” existing song-tracks (i.e., alter the track in some way, tailoring it to the response of the crowd) and even to dynamically “compose” new song-tracks suited to that crowd. Thus, the music played by hpDJ to any particular crowd of listeners on any particular night becomes a direct function of that particular crowd’s particular responses on that particular night. On a different night, the same crowd of people might react in a different way, leading hpDJ to create different music. Thus, the music composed and played by hpDJ could be viewed as an “emergent” property of the dynamic interaction between the computer system and the crowd, and the crowd could then be viewed as having collectively collaborated on composing the music that was played on that night. This en masse collective composition raises some interesting legal issues regarding the ownership of the composition (i.e.: who, exactly, is the author of the work?), but revenue-generating businesses can nevertheless plausibly be built from such technologies.
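    The abstract does not disclose hpDJ's algorithms; as a rough sketch of two ingredients of any automatic mixing pipeline, the code below shows greedy tempo-based track sequencing followed by a fixed-length linear crossfade. Track representation, BPM values and crossfade length are assumptions for illustration only.

```python
# Illustrative sketch only: greedy BPM-based sequencing plus a linear crossfade.
# This is NOT hpDJ's published method; data structures and parameters are assumed.
import numpy as np

def sequence_by_bpm(tracks):
    """Order tracks so that consecutive tempos are as close as possible (greedy)."""
    remaining = list(tracks)
    order = [remaining.pop(0)]
    while remaining:
        last_bpm = order[-1]["bpm"]
        nxt = min(remaining, key=lambda t: abs(t["bpm"] - last_bpm))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def crossfade(a, b, sr=44100, fade_sec=5.0):
    """Join two mono signals with a linear crossfade of fade_sec seconds."""
    n = int(sr * fade_sec)
    fade_out = np.linspace(1.0, 0.0, n)
    overlap = a[-n:] * fade_out + b[:n] * (1.0 - fade_out)
    return np.concatenate([a[:-n], overlap, b[n:]])

# Hypothetical track list and two synthetic test tones
tracks = [{"name": "t1", "bpm": 128}, {"name": "t2", "bpm": 140}, {"name": "t3", "bpm": 126}]
print([t["name"] for t in sequence_by_bpm(tracks)])
sr = 44100
a = np.sin(2 * np.pi * 220 * np.arange(sr * 10) / sr)
b = np.sin(2 * np.pi * 330 * np.arange(sr * 10) / sr)
mix = crossfade(a, b, sr=sr)
```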

    Adaptive Scattering Transforms for Playing Technique Recognition

    Playing techniques contain distinctive information about musical expressivity and interpretation. Yet, current research in music signal analysis suffers from a scarcity of computational models for playing techniques, especially in the context of live performance. To address this problem, our paper develops a general framework for playing technique recognition. We propose the adaptive scattering transform, which refers to any scattering transform that includes a stage of data-driven dimensionality reduction over at least one of its wavelet variables, for representing playing techniques. Two adaptive scattering features are presented: frequency-adaptive scattering and direction-adaptive scattering. We analyse seven playing techniques: vibrato, tremolo, trill, flutter-tongue, acciaccatura, portamento, and glissando. To evaluate the proposed methodology, we create a new dataset containing full-length Chinese bamboo flute performances (CBFdataset) with expert playing technique annotations. Once trained on the proposed scattering representations, a support vector classifier achieves state-of-the-art results. We provide explanatory visualisations of scattering coefficients for each technique and verify the system over three additional datasets with various instrumental and vocal techniques: VPset, SOL, and VocalSet.
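    As a rough stand-in for the pipeline described (scattering features, data-driven dimensionality reduction, support vector classifier), the sketch below uses a plain 1-D scattering transform from the kymatio toolkit with PCA as the reduction step. This is not the paper's frequency- or direction-adaptive transform, and the data loading is a placeholder.

```python
# Simplified pipeline sketch: scattering coefficients -> PCA -> SVM classifier.
# Uses kymatio's generic Scattering1D, NOT the paper's adaptive transforms.
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

T = 2 ** 14                                    # excerpt length in samples (assumed)
scattering = Scattering1D(J=8, shape=T, Q=12)  # second-order scattering by default

def scatter_features(excerpts):
    """Compute time-averaged scattering coefficients for each audio excerpt."""
    feats = []
    for x in excerpts:
        Sx = scattering(x.astype(np.float32))  # shape: (n_coeffs, n_frames)
        feats.append(Sx.mean(axis=-1))         # average over time frames
    return np.stack(feats)

# Placeholder data: replace with real excerpts and playing-technique labels
# (e.g. from the CBFdataset annotations).
X_train = [np.random.randn(T) for _ in range(8)]
y_train = np.array([0, 1, 0, 1, 2, 2, 0, 1])

clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))
clf.fit(scatter_features(X_train), y_train)
```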

    AUDIO PROCESSING ANALYZER

    The project emphasizes the simulation of various DSP effects using elementary audio-processing techniques, manipulating audio with various filters in order to enhance its quality. Many commercially available systems provide facilities such as channel equalizers, karaoke systems, and audio processors based on digital signal processing. Software systems are also available that provide a fairly good and cost-effective approach to audio enhancement, yet they are limited by resource constraints and thus trade off performance against quality. The first phase of the project studies and analyses audio-processing phenomena and the various effects involved; in the second phase, algorithms for these effects were developed and simulated in MATLAB.
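    The project's own implementation was in MATLAB and is not reproduced here; as an illustration of the kind of effect such a system simulates, the sketch below applies a feedback echo to a mono signal. The input file name is hypothetical.

```python
# Illustrative feedback echo (delay) effect, following the difference equation
#   y[n] = x[n] + g * y[n - D]
# This mirrors a classic DSP effect the project describes; it is not its code.
import numpy as np
from scipy.io import wavfile

def echo(x, sr, delay_sec=0.3, gain=0.5):
    """Each output sample also hears the output from delay_sec earlier."""
    d = int(sr * delay_sec)
    y = np.copy(x).astype(float)
    for n in range(d, len(y)):
        y[n] += gain * y[n - d]
    peak = np.max(np.abs(y)) or 1.0
    return y / peak                          # normalise to avoid clipping

sr, x = wavfile.read("input.wav")            # hypothetical input file
if x.ndim > 1:
    x = x.mean(axis=1)                       # fold stereo to mono
wavfile.write("echo.wav", sr, (echo(x, sr) * 32767).astype(np.int16))
```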

    Proceedings of the 6th International Workshop on Folk Music Analysis, 15-17 June, 2016

    The Folk Music Analysis Workshop brings together computational music analysis and ethnomusicology. Both symbolic and audio representations of music are considered, with a broad range of scientific approaches being applied (signal processing, graph theory, deep learning). The workshop features talks from international researchers in areas such as Indian classical music, Iranian singing, Ottoman-Turkish Makam music scores, Flamenco singing, Irish traditional music, Georgian traditional music and Dutch folk songs. Invited guest speakers were Anja Volk (Utrecht University) and Peter Browne (Technological University Dublin).

    Automatic characterization and generation of music loops and instrument samples for electronic music production

    Repurposing audio material to create new music - also known as sampling - was a foundation of electronic music and is a fundamental component of this practice. Currently, large-scale databases of audio offer vast collections of audio material for users to work with. The navigation on these databases is heavily focused on hierarchical tree directories. Consequently, sound retrieval is tiresome and often identified as an undesired interruption in the creative process. We address two fundamental methods for navigating sounds: characterization and generation. Characterizing loops and one-shots in terms of instruments or instrumentation allows for organizing unstructured collections and a faster retrieval for music-making. The generation of loops and one-shot sounds enables the creation of new sounds not present in an audio collection through interpolation or modification of the existing material. To achieve this, we employ deep-learning-based data-driven methodologies for classification and generation.
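    The thesis's models are not specified in the abstract; the sketch below is a simplified stand-in for the characterization side: a log-mel spectrogram fed into a small convolutional classifier. The architecture, input shapes, label set and file name are illustrative assumptions.

```python
# Simplified loop/one-shot instrument classifier: log-mel spectrogram -> small CNN.
# Illustrative only; not the author's models or label taxonomy.
import numpy as np
import librosa
import torch
import torch.nn as nn

LABELS = ["kick", "snare", "bass", "synth"]      # hypothetical instrument classes

def logmel(path, sr=22050, n_mels=64, frames=128):
    """Load a sample or loop and return a fixed-size log-mel spectrogram tensor."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    m = librosa.power_to_db(m, ref=np.max)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad/trim the time axis
    return torch.from_numpy(m).float().unsqueeze(0)        # (1, n_mels, frames)

class LoopClassifier(nn.Module):
    def __init__(self, n_classes=len(LABELS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = LoopClassifier()
x = logmel("some_loop.wav").unsqueeze(0)          # hypothetical file -> (1, 1, 64, 128)
print(model(x).softmax(dim=-1))                   # class probabilities (untrained)
```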

    20-20 listening: a sound documentary dedicated to the study of listening experiences in acoustic environments

    The intersection of sound and design is an exciting and complex space for artistic experimentation and research. My approach for this work was to design, write, record and mix seven podcast episodes that narrate my analysis and interpretation of how we listen to sounds and interpret their meaning. Each episode is dedicated to one topic and presents multiple sound samples that illustrate my take on the subject. The episodes cover the basics of listening, how sound conveys information about objects in environments, and how soundscapes are ubiquitous. They also examine how the music, sounds and noises in film convey meaning, represent physical qualities and produce an emotional connection with viewers. One episode addresses dynamic audio in video games, and another the process of design conceptualization and artistic intervention in making simple sound-based prototypes for people to make sounds, play and enjoy. Furthermore, I introduce the story of a whistling language in the Canary Islands to illustrate the concept of an “acoustic community,” and how soundmarks create meaning and a sense of belonging in a social group. I also present a sonic composition that uses speech, sounds, noises and music to create an artistic narrative. This project is the sum of experiments in my sonic journey; it is an audio documentary that uses the listener’s focal attention to create stories about listening.