
    The Effects of Design on Performance for Data-based and Task-based Sonification Designs: Evaluation of a Task-based Approach to Sonification Design for Surface Electromyography

    The goal of this work was to evaluate a task-analysis-based approach to sonification design for surface electromyography (sEMG) data. A sonification is a type of auditory display that uses sound to convey information about data to a listener. Sonifications work by mapping changes in a parameter of sound (e.g., pitch) to changes in data values, and they have been shown to be useful in biofeedback and movement analysis applications. However, research that investigates and evaluates sonifications has been difficult due to the highly interdisciplinary nature of the field. Progress has been made, but to date many sonification designs have not been empirically evaluated, and listeners have described them as annoying, confusing, or fatiguing. Sonification design decisions have also often been based on characteristics of the data being sonified rather than on the listener’s data analysis task. The hypothesis of this thesis was that focusing on the listener’s task when designing sonifications could produce sonifications that are more readily understood and less annoying to listen to. Task analysis methods have been developed in fields such as Human Factors and Human-Computer Interaction; their purpose is to break tasks down into their most basic elements so that products and software can be developed to meet user needs. Applying this approach to sonification design, a task analysis based on Goals, Operators, Methods, and Selection rules (GOMS) was used to analyze two sEMG data evaluation tasks and to identify the design criteria a sonification would need to meet for a listener to perform them; two sonification designs were then created to facilitate these tasks. These two Task-based sonification designs were empirically compared to two Data-based sonification designs. The Task-based designs resulted in better listener performance on both sEMG data evaluation tasks, demonstrating the effectiveness of the Task-based approach and suggesting that sonification designers may benefit from adopting a task-based approach to sonification design.
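
    To make the parameter-mapping idea concrete, here is a minimal sketch (not the thesis's actual design; the envelope data, frequency range, and segment duration are illustrative assumptions) that maps a normalized sEMG amplitude envelope to the pitch of a sine tone and writes the result to a WAV file:

```python
# Minimal parameter-mapping sonification sketch (illustrative only):
# map a normalized sEMG amplitude envelope (0..1) to sine-tone pitch.
import numpy as np
import wave

def sonify_to_pitch(envelope, sr=44100, seg_dur=0.05,
                    f_min=220.0, f_max=880.0):
    """Map each envelope sample to a short sine-tone segment at a mapped frequency."""
    phase = 0.0
    segments = []
    n = int(sr * seg_dur)
    for v in envelope:
        f = f_min + float(np.clip(v, 0.0, 1.0)) * (f_max - f_min)
        t = np.arange(n)
        segments.append(0.3 * np.sin(phase + 2 * np.pi * f * t / sr))
        phase += 2 * np.pi * f * n / sr   # keep phase continuous between segments
    return np.concatenate(segments)

# Synthetic envelope standing in for rectified, smoothed sEMG data.
env = np.abs(np.sin(np.linspace(0, 3 * np.pi, 60)))
audio = sonify_to_pitch(env)

with wave.open("semg_sonification.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit PCM
    w.setframerate(44100)
    w.writeframes((audio * 32767).astype(np.int16).tobytes())
```

    A Data-based design would typically fix such a mapping from the data range alone; the thesis's Task-based approach instead derives the mapping choices from the listener's specific evaluation task.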

    Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback

    The voice is both body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely on audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of these multi-sensory experiences rests on tacit knowledge of the body: knowledge that is difficult to articulate, yet awareness and control of the body are innate. As technology that quantifies or interprets physiological processes becomes ever more prevalent, we must also remain conscious of embodiment and of human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and of how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides detail not only on the novel interaction but also on how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. Their understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which measures laryngeal muscular activations through surface electromyography. Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also called existing understanding of the body into question. The thesis also uncovers ways in which vocalists are both in control of and controlled by their bodies, work with and against them, and feel like a single entity at times and totally separate entities at others. I conclude the thesis with a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of, and influence over, embodied understanding. This further highlights the need to understand the human experience in embodied interaction, rather than relying solely on digital interpretation, when introducing technology into these relationships.
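
    As a rough illustration of the kind of signal processing such a biofeedback system might involve (a sketch under assumed parameters, not the interaction method developed in the thesis), a raw sEMG signal can be rectified and smoothed into a moving RMS envelope, which could then drive a sonification parameter such as loudness:

```python
# Hedged sketch of one plausible sEMG preprocessing step for biofeedback.
# Sampling rate, window length, and the synthetic signal are assumptions.
import numpy as np

def rms_envelope(emg, sr=1000, window_s=0.1):
    """Moving RMS of a raw sEMG signal; window_s is the window length in seconds."""
    win = max(1, int(sr * window_s))
    squared = np.square(emg.astype(float))
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

# Synthetic stand-in for a laryngeal sEMG recording (burst-like activation).
rng = np.random.default_rng(0)
raw = rng.normal(0, 1, 5000) * np.hanning(5000)
env = rms_envelope(raw, sr=1000)
loudness = env / (env.max() + 1e-9)   # normalized control signal for sonification
```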

    Extended Abstracts

    Presented at the 21st International Conference on Auditory Display (ICAD 2015), July 6-10, 2015, Graz, Styria, Austria. The compiled collection of extended abstracts included in the ICAD 2015 Proceedings. Extended abstracts include, but are not limited to, late-breaking results, works in early stages of progress, novel methodologies, unique or controversial theoretical positions, and discussions of unsuccessful research or null findings.
    Mark Ballora. “Two examples of sonification for viewer engagement: Hurricanes and squirrel hibernation cycles”
    Stephen Barrass. “Diagnostic Singing Bowls”
    Natasha Barrett, Kristian Nymoen. “Investigations in coarticulated performance gestures using interactive parameter-mapping 3D sonification”
    Lapo Boschi, Arthur Paté, Benjamin Holtzman, Jean-Loïc le Carrou. “Can auditory display help us categorize seismic signals?”
    Cédric Camier, François-Xavier Féron, Julien Boissinot, Catherine Guastavino. “Tracking moving sounds: Perception of spatial figures”
    Coralie Diatkine, Stéphanie Bertet, Miguel Ortiz. “Towards the holistic spatialization of multiple sound sources in 3D, implementation using ambisonics to binaural technique”
    S. Maryam FakhrHosseini, Paul Kirby, Myounghoon Jeon. “Regulating Drivers’ Aggressiveness by Sonifying Emotional Data”
    Wolfgang Hauer, Katharina Vogt. “Sonification of a streaming-server logfile”
    Thomas Hermann, Tobias Hildebrandt, Patrick Langeslag, Stefanie Rinderle-Ma. “Optimizing aesthetics and precision in sonification for peripheral process-monitoring”
    Minna Huotilainen, Matti Gröhn, Iikka Yli-Kyyny, Jussi Virkkala, Tiina Paunio. “Sleep Enhancement by Sound Stimulation”
    Steven Landry, Jayde Croschere, Myounghoon Jeon. “Subjective Assessment of In-Vehicle Auditory Warnings for Rail Grade Crossings”
    Rick McIlraith, Paul Walton, Jude Brereton. “The Spatialised Sonification of Drug-Enzyme Interactions”
    George Mihalas, Minodora Andor, Sorin Paralescu, Anca Tudor, Adrian Neagu, Lucian Popescu, Antoanela Naaji. “Adding Sound to Medical Data Representation”
    Rainer Mittmannsgruber, Katharina Vogt. “Auditory assistance for timing presentations”
    Joseph W. Newbold, Andy Hunt, Jude Brereton. “Chemical Spectral Analysis through Sonification”
    S. Camille Peres, Daniel Verona, Paul Ritchey. “The Effects of Various Parameter Combinations in Parameter-Mapping Sonifications: A Pilot Study”
    Eva Sjuve. “Metopia: Experiencing Complex Environmental Data Through Sound”
    Benjamin Stahl, Katharina Vogt. “The Effect of Audiovisual Congruency on Short-Term Memory of Serial Spatial Stimuli: A Pilot Test”
    David Worrall. “Realtime sonification and visualisation of network metadata (The NetSon Project)”
    Bernhard Zeller, Katharina Vogt. “Auditory graph evolution by the example of spurious correlations”

    Musical expectancy within movement sonification to overcome low self-efficacy

    While engaging in physical activity is important for a healthy lifestyle, low self-efficacy, i.e. a low belief in one's own ability, can prevent engagement. Sound has been used in a variety of ways for physical activity: movement sonification to inform about movement, music to encourage and direct movement, and auditory illusions to adapt people's bodily representation and movement behaviour. However, no single approach provides the whole picture when considering low self-efficacy. For example, sonification does not encourage movement beyond a person's expectation of their ability, music gives no information about one's capabilities, and auditory illusions do not guide changes in movement behaviour in a directed way. This thesis proposes a combined method that leverages the agency felt over sonification, our embodiment of music, and movement-altering feedback to design “musical expectancy sonifications”, which incorporate musical expectancy within sonification to alter movement perception and behaviour. The thesis proposes a Movement Sonification Expectation Model (MoSEM), which explores how expectation within a movement sonification affects people's perception of their abilities and the way they move. The MoSEM is then interrogated and developed in four initial controlled studies that investigate these sonifications for different types of movement and how they interact with one's expectation of a given movement. These findings led to an exploration of how the MoSEM can be applied to design sonifications that support people with low self-efficacy in two case-study populations: chronic pain rehabilitation, including one controlled study and one mixed-methods study, and general well-being, including one interview study and two controlled studies. These studies show the impact of musical expectation on people's movement perception and behaviour. The findings from this thesis demonstrate not only how sonifications can be designed to use musical expectancy, but also a number of considerations needed when designing movement sonifications.
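
    A hypothetical sketch of how musical expectancy might be embedded in a movement sonification (the scale, target threshold, and resolution rule here are assumptions for illustration, not the MoSEM design): a normalized reach value drives a rising scale fragment that only resolves to the upper tonic when the movement exceeds a target, so movements that stop short leave the phrase musically unresolved:

```python
# Illustrative sketch only: one way musical expectancy could shape a
# movement sonification. Scale, threshold, and resolution rule are assumptions.
import numpy as np

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes, C4..C5

def expectancy_sonification(reach, target=0.8):
    """Map a normalized reach (0..1) to a rising scale fragment.

    The phrase only ends on the upper tonic (a point of musical closure)
    when the reach exceeds the target, leaving shorter movements with an
    unresolved, 'incomplete' phrase.
    """
    steps = int(np.clip(reach, 0.0, 1.0) * (len(C_MAJOR) - 1))
    notes = C_MAJOR[:steps + 1]
    if reach >= target:
        notes = C_MAJOR[:]   # full scale ending on the tonic: resolution
    return notes

print(expectancy_sonification(0.5))   # stops mid-scale: unresolved
print(expectancy_sonification(0.9))   # full scale: resolved cadence
```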