
    Musical Robots For Children With ASD Using A Client-Server Architecture

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). People with Autistic Spectrum Disorders (ASD) are known to have difficulty recognizing and expressing emotions, which affects their social integration. Leveraging and integrating recent advances in interactive robot therapy and music therapy, we have designed musical robots that can facilitate social and emotional interactions of children with ASD. The robots communicate with children with ASD while detecting their emotional states and physical activities, and then generate real-time sonification based on the interaction data. Because we envision the use of multiple robots with children, we have adopted a client-server architecture: each robot and sensing device acts as a terminal, while the sonification server processes all the data and generates harmonized sonification. After describing our goals for the use of sonification, we detail the system architecture and on-going research scenarios. We believe that the present paper offers a new perspective on the application of sonification in assistive technologies.
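    The client-server split described in the abstract can be sketched in a few lines. This is a minimal in-process sketch, not the authors' implementation: the class names (RobotTerminal, SonificationServer), the emotion-to-note mapping and the activity offset are all illustrative assumptions, and real robot terminals would report over a network.

```python
class SonificationServer:
    """Aggregates interaction data from all terminals and derives pitches."""

    # Hypothetical mapping from a detected emotional state to a MIDI base note.
    EMOTION_TO_NOTE = {"calm": 60, "excited": 67, "distressed": 55}

    def __init__(self):
        self.readings = []  # (terminal_id, emotion, activity_level) tuples

    def receive(self, terminal_id, emotion, activity_level):
        # In the paper's architecture this would arrive over the network.
        self.readings.append((terminal_id, emotion, activity_level))

    def sonify(self):
        """Return one MIDI note per reading: base note shifted by activity."""
        return [self.EMOTION_TO_NOTE.get(emotion, 60) + round(activity * 4)
                for _, emotion, activity in self.readings]


class RobotTerminal:
    """Each robot or sensing device acts as a thin client forwarding its data."""

    def __init__(self, terminal_id, server):
        self.terminal_id = terminal_id
        self.server = server

    def report(self, emotion, activity_level):
        self.server.receive(self.terminal_id, emotion, activity_level)


server = SonificationServer()
robot_a = RobotTerminal("robot-a", server)
robot_b = RobotTerminal("robot-b", server)
robot_a.report("calm", 0.25)      # gentle movement
robot_b.report("excited", 0.75)   # vigorous movement
notes = server.sonify()           # one harmonized note per terminal reading
```

    The point of the pattern is that terminals stay simple while the server alone sees all readings, so it can harmonize the resulting sonification across children and robots.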

    Sound for Fantasy and Freedom

    Sound is an integral part of our everyday lives. Sound tells us about physical events in the environment, and we use our voices to share ideas and emotions through sound. When navigating the world on a day-to-day basis, most of us use a balanced mix of stimuli from our eyes, ears and other senses to get along, and we do this naturally and without effort. In the design of computer game experiences, however, most attention has traditionally been given to vision rather than to this balanced sensory mix. The risk is that this emphasis neglects the kinds of interaction with the game needed to create an immersive experience. This chapter summarizes the relationship between sound properties, GameFlow and immersive experience, and discusses two projects in which the Interactive Institute, Sonic Studio has balanced perceptual stimuli and game mechanics to inspire and create new game concepts that liberate users and their imagination.

    "Set phasors to stun": an algorithm to improve phase coherence on transients in multi-microphone recordings

    Ever since the advent of multi-microphone recording, sound engineers have wrestled with the colouration of sound caused by phasing issues. For some this was anathema; for others this colouration was a crucial ingredient of the finished product. Traditionally, delicate microphone placement was essential, with subtle movements and tilts allowing the producer/engineer to judge when a sound was “in phase” based on perception alone. More recently, DAWs have allowed us to view multiple waveforms and manually nudge them into coherence, with visual feedback now supporting the aural, although this remains a manual process. This paper presents an algorithm that automatically corrects phase via a unique Max/MSP patch operating on multiple audio components simultaneously. With a single button push, the producer can now hear a stereo recording with maximum coherence and thus make an artistic judgment as to whether the “ideal” is indeed ideal, or whether naturally occurring phase colouration is preferable. In addition, the patch allows zooming in on spatially separated sound sources, e.g. tuning drum kit overheads to phase-lock with the snare drum or hi-hat microphone. Audio examples will be played and the patch demonstrated in action. Limiting factors, contexts and applications will also be discussed.
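    The Max/MSP patch itself is not reproduced here, but the core operation it automates, finding the delay that maximises coherence between two microphone signals and compensating for it, can be sketched as a cross-correlation peak search. This is a simplified assumption about the method, not the paper's algorithm; the function name, signal names and the circular shift are all illustrative.

```python
import numpy as np

def align(reference, signal, max_lag=64):
    """Shift `signal` so its transients line up with `reference`.

    Returns (lag_in_samples, aligned_signal). A positive lag means `signal`
    arrived late and is advanced to compensate. Uses a circular shift, which
    is fine for a short demo but would need zero-padding on real audio.
    """
    lags = range(-max_lag, max_lag + 1)
    # Pick the lag that maximises correlation with the reference.
    best = max(lags, key=lambda k: np.dot(reference, np.roll(signal, -k)))
    return best, np.roll(signal, -best)

# A decaying sinusoidal transient captured by two mics, the far one 5 samples late
# (roughly the extra time of flight to a more distant microphone).
t = np.arange(256)
close_mic = np.exp(-0.05 * t) * np.sin(0.3 * t)
far_mic = np.roll(close_mic, 5)

lag, corrected = align(close_mic, far_mic)
# After correction the two waveforms are sample-coherent and sum without
# comb-filter colouration; the engineer can then choose coherence or not.
```

    The paper's one-button workflow corresponds to running this search across every microphone pair (or a selected zone, e.g. overheads against the snare mic) and applying the compensating delays in one step.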

    Free associative composition: practice-led research into composition techniques that help enable free association.

    The original compositions presented in this portfolio are the product of practice-led research into developing and implementing composition techniques that enable free association. This commentary outlines the different approaches I have taken and the reasoning behind them.

    Musica ex machina: a history of video game music

    The history of video game music is a subject area that has received little attention from musicologists, and yet the form presents fascinating case studies both of musical minimalism and of the role of technology in influencing and shaping musical form and aesthetics. This presentation shows how video game music evolved from simple tones, co-opted from sync circuits in early hardware, to a sophisticated form of adaptive expression.

    Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios

    Our project goals consisted of the development of attention-based analysis of human expressive behavior and the implementation of real-time algorithms in EyesWeb XMI, in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled by entropy. Museum scenarios were selected as an ecological test-bed for three experiments focusing on visitor profiling and the regulation of visitor flow.
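    The abstract does not give the entropy formula used. One plausible reading, sketched below purely as an assumption, is Shannon entropy over a histogram of an expressive feature such as movement energy: low entropy marks regular, predictable behavior, high entropy marks the salient variation an attention model would flag. The function and the sample data are illustrative, not the project's code.

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=4):
    """Shannon entropy (bits) of feature samples quantised into equal-width bins."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # guard against a constant signal
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

steady = [0.50, 0.51, 0.50, 0.49, 0.50, 0.51]   # repetitive gesture energy
varied = [0.10, 0.90, 0.40, 0.75, 0.05, 0.60]   # erratic gesture energy
assert shannon_entropy(steady) < shannon_entropy(varied)
```

    Under this reading, an attention threshold on the entropy of a sliding window would let the system single out visitors whose expressive behavior departs from the regular pattern.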

    Development of a soundscape simulator tool

    This paper discusses the development of an interactive soundscape simulator that enables users to manipulate a series of parameters, in order to investigate whether there is group correlation between factors such as source selection, positioning and level. The basis of the simulator stems from fieldwork and recordings carried out in London and Manchester. Through an enhanced version of soundwalking, respondents are led on a walk around an urban space focusing on the soundscape while answering questions in a semi-structured interview. The data collected is then used to inform the ecological validity of the simulator. The laboratory-based tests use simulations of spaces recorded in a series of urban locations, as well as an ‘idealised’ soundscape simulation featuring data from all recorded locations. The sound sources used are based on selections highlighted by users across all locations, drawn from preferences extracted from the soundwalk field data. Preliminary results show the simulator is effective in obtaining numerical data based on subjective choices, as well as qualitative data that provides an insight into the reasoning behind respondents’ choices. This work forms part of the Positive Soundscape Project.