72 research outputs found

    Internal representations of auditory frequency: behavioral studies of format and malleability by instructions

    Get PDF
    Research has suggested that representational and perceptual systems draw upon some of the same processing structures, and evidence also has accumulated to suggest that representational formats are malleable by instructions. Very little research, however, has considered how nonspeech sounds are internally represented, and the use of audio in systems will often proceed under the assumption that separation of information by modality is sufficient for eliminating information processing conflicts. Three studies examined the representation of nonspeech sounds in working memory. In Experiment 1, a mental scanning paradigm suggested that nonspeech sounds can be flexibly represented in working memory, but also that a universal per-item scanning cost persisted across encoding strategies. Experiment 2 modified the sentence-picture verification task to include nonspeech sounds (i.e., a sound-sentence-picture verification task) and found evidence generally supporting three distinct formats of representation as well as a lingering effect of auditory stimuli for verification times across representational formats. Experiment 3 manipulated three formats of internal representation (verbal, visuospatial imagery, and auditory imagery) for a point estimation sonification task in the presence of three types of interference tasks (verbal, visuospatial, and auditory) in an effort to induce selective processing code (i.e., domain-specific working memory) interference. Results showed no selective interference but instead suggested a general performance decline (i.e., a general representational resource) for the sonification task in the presence of an interference task, regardless of the sonification encoding strategy or the qualitative interference task demands. Results suggested a distinct role of internal representations for nonspeech sounds with respect to cognitive theory. The predictions of the processing codes dimension of the multiple resources construct were not confirmed; possible explanations are explored. The practical implications for the use of nonspeech sounds in applications include a possible response time advantage when an external stimulus and the format of internal representation match.
    Ph.D. Committee Chair: Walker, Bruce; Committee Member: Bonebright, Terri; Committee Member: Catrambone, Richard; Committee Member: Corso, Gregory; Committee Member: Rogers, Wend

    An Interactive System for Generating Music from Moving Images

    Get PDF
    Moving images contain a wealth of information pertaining to motion. Motivated by the interconnectedness of music and movement, we present a framework for transforming the kinetic qualities of moving images into music. We developed an interactive software system that takes video as input and maps its motion attributes onto musical dimensions based on perceptually grounded principles. The system combines existing sonification frameworks with theories and techniques of generative music. To evaluate the system, we conducted a two-part experiment. First, we asked participants to make judgements on video-audio correspondence from clips generated by the system. Second, we asked participants to give ratings for audiovisual works created using the system. These experiments revealed that 1) the system is able to generate music with a significant level of perceptual correspondence to the source video’s motion, and 2) the system can effectively be used as an artistic tool for generative composition.
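    The abstract does not specify the mapping itself, but the general pattern it describes (extract a motion attribute per frame, then scale it onto a musical parameter) can be illustrated with a short sketch. The snippet below is a hypothetical example rather than the system described above: it assumes OpenCV for dense optical flow and maps mean flow magnitude onto a MIDI-style note number.

    # Illustrative sketch: map per-frame motion magnitude to a pitch value.
    # Assumes OpenCV (cv2) and NumPy; the mapping is a placeholder, not the
    # paper's actual mapping.
    import cv2
    import numpy as np

    def motion_to_pitch(video_path, low_note=48, high_note=84):
        """Yield one MIDI-style note per frame, scaled by mean optical-flow magnitude."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Normalise mean motion into [0, 1]; 10 px/frame is an arbitrary ceiling.
            energy = min(float(np.mean(mag)) / 10.0, 1.0)
            yield int(low_note + energy * (high_note - low_note))
            prev_gray = gray
        cap.release()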

    Springboard: exploring embodied metaphor in the design of sound feedback for physical responsive environments

    Get PDF
    Presented at the 15th International Conference on Auditory Display (ICAD2009), Copenhagen, Denmark, May 18-22, 2009.
    In this paper we propose a role for using embodied metaphor in the design of sound feedback for interactive physical environments. We describe the application of a balance metaphor in the design of the interaction model for a prototype interactive environment called Springboard. We focus specifically on the auditory feedback, and conclude with a discussion of design choices and future research directions based on our prototype.

    Ways of Guided Listening: Embodied approaches to the design of interactive sonifications

    Get PDF
    This thesis presents three use cases for interactive feedback. In each case users interact with a system and receive feedback: the primary source of feedback is visual, while a second source of feedback is offered as sonification. The first use case comprised an interactive sonification system for use by pathologists in the triage stage of cancer diagnostics. Image features derived from computational homology are mapped to a soundscape, with an integrated auditory glance indicating potential regions of interest. The resulting prototype did not meet the requirements of a domain expert. For the second case, this thesis presents an interactive sonification plug-in developed for a software package for interactive visualisation of macromolecular complexes. A framework for building different sonification methods in Python, together with an OSC-controlled sound-producing application, was established, along with several sonification methods and a general sonification plug-in. The plug-in received generally positive feedback, but the mapping was deemed not very transparent. From these cases and ideas in the sonification design literature, the Subject-Position-Based Sonification Design Framework (SPBDF) was developed. It explores an alternative conception of design: that working from a frame of reference encompassing a non-expert audience will lead towards sonifications that are more easily understood. A method for the analysis of sonifications according to its criteria is outlined and put into practice to evaluate a range of sonifications. The framework was evaluated in the third use case, a system for sonified feedback for an exercise device designed for back pain rehabilitation. Two different sonifications, one using SPBDF as the basis of its design, were evaluated, indicating that interactive sonification can provide valuable feedback and improve task performance (a decrease in mean speed) when the soundscape employed invokes an appropriate emotional response in the user.
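    The abstract describes a Python framework that sends control data to an OSC-controlled sound-producing application. A minimal sketch of that general pattern is given below; it assumes the python-osc package and a synthesiser listening on port 57120 (e.g. SuperCollider), and the OSC address and parameter mapping are placeholders rather than the plug-in's actual interface.

    # Minimal sketch of a Python sonification method that maps a data value
    # onto pitch and sends it to an OSC-controlled synthesiser.
    # The address "/sonification/freq" and the mapping are hypothetical.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 57120)  # assumed synthesiser port

    def sonify(value, vmin, vmax, low_hz=220.0, high_hz=880.0):
        """Linearly map a data value onto a frequency and send it as an OSC message."""
        t = (value - vmin) / (vmax - vmin)
        freq = low_hz + t * (high_hz - low_hz)
        client.send_message("/sonification/freq", freq)

    # Example: sonify a stream of (hypothetical) per-residue distance values.
    for distance in [1.2, 3.5, 0.4, 2.8]:
        sonify(distance, vmin=0.0, vmax=5.0)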

    Using Sound to Represent Uncertainty in Spatial Data

    Get PDF
    There is a limit to the amount of spatial data that can be shown visually in an effective manner, particularly when the data sets are extensive or complex. Using sound to represent some of these data (sonification) is a way of avoiding visual overload. This thesis creates a conceptual model showing how sonification can be used to represent spatial data and evaluates a number of elements within the conceptual model. These are examined in three different case studies to assess the effectiveness of the sonifications. Current methods of using sonification to represent spatial data have been restricted by the technology available and have had very limited user testing. While existing research shows that sonification can be done, it does not show whether it is an effective and useful method of representing spatial data to the end user. A number of prototypes show how spatial data can be sonified, but only a small handful of these have performed any user testing beyond the authors’ immediate colleagues (where n > 4). This thesis creates and evaluates sonification prototypes that represent uncertainty in three different case studies of spatial data. Each case study is evaluated by a significant user group (between 45 and 71 individuals) who completed a task-based evaluation with the sonification tool, as well as reporting qualitatively their views on the effectiveness and usefulness of the sonification method. For all three case studies, using sound to reinforce information shown visually resulted in more effective performance for the majority of participants than traditional visual methods alone. Participants who were familiar with the dataset were much more effective at using the sonification than those who were not, and an interactive sonification requiring significant involvement from the user was much more effective than a static sonification that did not provide significant user engagement. Using sounds with a clear and easily understood scale (such as piano notes) was important to achieve an effective sonification. These findings are used to improve the conceptual model developed earlier in this thesis and highlight areas for future research.
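    One finding above is that sounds with a clear, easily understood scale (such as piano notes) made the sonification more effective. A minimal sketch of that idea, quantising an uncertainty value onto a small set of piano notes, is shown below; the specific scale, range, and direction of the mapping are assumptions for illustration, not the thesis's design.

    # Illustrative sketch: quantise an uncertainty value in [0, 1] onto a
    # small set of piano notes (MIDI numbers). The C major scale and the
    # "higher uncertainty -> higher pitch" direction are assumptions.
    C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5

    def uncertainty_to_note(u):
        """Map uncertainty in [0, 1] to a MIDI note number."""
        u = min(max(u, 0.0), 1.0)
        index = min(int(u * len(C_MAJOR)), len(C_MAJOR) - 1)
        return C_MAJOR[index]

    # Example: low uncertainty plays C4, high uncertainty plays C5.
    assert uncertainty_to_note(0.0) == 60
    assert uncertainty_to_note(0.95) == 72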

    Digital Sound Studies

    Get PDF
    The digital turn has created new opportunities for scholars across disciplines to use sound in their scholarship. This volume’s contributors provide a blueprint for making sound central to research, teaching, and dissemination. They show how digital sound studies has the potential to transform silent, text-centric cultures of communication in the humanities into rich, multisensory experiences that are more inclusive of diverse knowledges and abilities. Drawing on multiple disciplines—including rhetoric and composition, performance studies, anthropology, history, and information science—the contributors to Digital Sound Studies bring digital humanities and sound studies into productive conversation while probing the assumptions behind the use of digital tools and technologies in academic life. In so doing, they explore how sonic experience might transform our scholarly networks, writing processes, research methodologies, pedagogies, and knowledges of the archive.

    Silent Light, Luminous Noise: Photophonics, Machines and the Senses

    Full text link
    This research takes the basic physical premise that sound can be synthesized using light, explores how this has historically been, and still is, achieved, and argues that it remains a fertile area for creative, theoretical and critical exploration in sound and the arts. Through the author's own artistic practice, different techniques of generating sound through the sonification of light are explored, and these techniques are then contextualised by their historical and theoretical setting in the time-based arts. Specifically, this text draws together diverse strands of scholarship on experimental sound and film practices, cultural histories, the senses, media theory and engineering to address effects and outcomes specific to photophonic sound and its relation to the moving image, and the sculptural and media works devised to produce it. The sonifier, or device engendering the transformations discussed, is specifically addressed in its many forms, and a model is proposed whereby these devices and systems are an integral, readably inscribed component, both materially and culturally, of the works they produce and, via our reflexive understanding of the processes involved, of the images or light signals used to produce them. Other practitioners' works are critically engaged to demonstrate how a sense of touch, or the haptic, can be thought of as an emergent property of moving-image works (including the author's) that readably and structurally make use of photophonic sound, and sound's essential role in this is examined. In developing, through an integration of theory and practice, a new approach in this under-researched field of sound studies, the author hopes to show how photophonic sound can act as both a metaphorical and material interface between experimental sound and image, and to point the way towards a more comprehensive study of both.