    An introduction to interactive sonification

    The research field of sonification, a subset of the topic of auditory display, has developed rapidly in recent decades. It brings together interests from the areas of data mining, exploratory data analysis, human–computer interfaces, and computer music. Sonification presents information using sound (particularly non-speech sound), so that the user of an auditory display obtains a deeper understanding of the data or processes under investigation by listening.
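    As a concrete illustration of the basic idea, the sketch below performs a simple parameter-mapping sonification: each value in a data series is mapped to the pitch of a short tone and the result is written as audio. The data series, pitch range, and durations are arbitrary choices for the example, not drawn from the article.

```python
# Minimal parameter-mapping sonification sketch: data values -> tone pitch.
# The data series and frequency range are arbitrary illustrative choices.
import wave
import numpy as np

def sonify(data, f_lo=220.0, f_hi=880.0, dur=0.25, sr=44100):
    """Map each value linearly onto [f_lo, f_hi] Hz and render a short tone."""
    lo, hi = min(data), max(data)
    tones = []
    for x in data:
        f = f_lo + (x - lo) / (hi - lo) * (f_hi - f_lo)  # value -> frequency
        t = np.arange(int(dur * sr)) / sr
        env = np.hanning(t.size)                          # fade in/out, no clicks
        tones.append(env * np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)

signal = sonify([3, 1, 4, 1, 5, 9, 2, 6])
with wave.open("sonification.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())
```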

    Non-visual overviews of complex data sets

    This paper describes the design and preliminary testing of an interface for obtaining overview information from complex numerical data tables non-visually, something that cannot be done with currently available accessibility tools for blind and visually impaired users. A sonification technique that hides detail in the data and highlights its main features, without performing any computations on the data, is combined with a graphics tablet for focus+context interactive navigation in an interface called TableVis. Results from its evaluation suggest that this technique outperforms speech in time to answer overview questions, correctness of the answers, and subjective workload.
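    TableVis itself is not reproduced here, but the core overview idea can be sketched: raw cell values map directly onto pitches, so sweeping a row quickly makes its overall contour audible without computing summary statistics. The pitch range and sample table below are assumptions for illustration.

```python
# Sketch of the overview idea behind TableVis (hypothetical reimplementation):
# each cell maps to a pitch; a row swept quickly reveals its contour.
import numpy as np

def row_to_midi(row, lo_note=48, hi_note=84):
    """Map raw cell values onto a MIDI pitch range without summarising them."""
    row = np.asarray(row, dtype=float)
    span = row.max() - row.min() or 1.0
    return (lo_note + (row - row.min()) / span * (hi_note - lo_note)).round().astype(int)

table = [[12, 15, 40, 38], [11, 14, 42, 37], [60, 58, 9, 10]]
for i, row in enumerate(table):
    print(f"row {i}: MIDI pitches {row_to_midi(row).tolist()}")
```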

    Model-based target sonification on mobile devices

    We investigate the use of audio and haptic feedback to augment the display of a mobile device controlled by tilt input. We provide an example based on Doppler effects, which highlight the user's approach to a target, or a target's movement from the current state, in the same way we hear the pitch of a siren change as it passes us. Twelve participants practiced navigating/browsing a state-space that was displayed via audio and vibrotactile modalities. We implemented the experiment on a Pocket PC, with an accelerometer attached to the serial port and a headset attached to the audio port. Users navigated through the environment by tilting the device. Audio feedback was presented through the headset, and vibrotactile feedback through a vibrotactile unit in the Pocket PC. Users selected targets placed randomly in the state-space, supported by combinations of audio, visual, and vibrotactile cues. The speed of target acquisition and the error rate were measured, and summary statistics on the acquisition trajectories were calculated. These data were used to compare different display combinations and configurations. The results quantify the changes brought by predictive or 'quickened' sonified displays in mobile, gestural interaction.
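    A minimal sketch of a Doppler-style cue of the kind described, using the standard Doppler relation f' = f0·c/(c − v): pitch rises while the tilt-controlled cursor closes on a target and falls as it recedes. The base pitch and the tunable 'speed of sound' constant are assumed values, not the paper's actual parameters.

```python
# Doppler-style target cue sketch (assumed mapping, not the paper's exact one).
import numpy as np

def doppler_pitch(cursor, velocity, target, f0=440.0, c=50.0):
    """Shift base pitch f0 by the cursor's closing speed toward the target.

    c is a tunable 'speed of sound' in display units/s; smaller values
    exaggerate the effect.
    """
    to_target = np.asarray(target, float) - np.asarray(cursor, float)
    dist = np.linalg.norm(to_target) or 1e-9
    closing_speed = np.dot(np.asarray(velocity, float), to_target / dist)
    return f0 * c / (c - np.clip(closing_speed, -0.9 * c, 0.9 * c))

print(doppler_pitch(cursor=(0, 0), velocity=(5, 0), target=(10, 0)))   # > 440 Hz
print(doppler_pitch(cursor=(0, 0), velocity=(-5, 0), target=(10, 0)))  # < 440 Hz
```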

    Using Sound to Represent Uncertainty in Spatial Data

    There is a limit to the amount of spatial data that can be shown visually in an effective manner, particularly when the data sets are extensive or complex. Using sound to represent some of these data (sonification) is a way of avoiding visual overload. This thesis creates a conceptual model showing how sonification can be used to represent spatial data and evaluates a number of elements within the conceptual model. These are examined in three different case studies to assess the effectiveness of the sonifications. Current methods of using sonification to represent spatial data have been restricted by the technology available and have had very limited user testing. While existing research shows that sonification can be done, it does not show whether it is an effective and useful method of representing spatial data to the end user. A number of prototypes show how spatial data can be sonified, but only a small handful of these have performed any user testing beyond the authors' immediate colleagues (where n > 4). This thesis creates and evaluates sonification prototypes that represent uncertainty, using three different case studies of spatial data. Each case study was evaluated by a significant user group (between 45 and 71 individuals) who completed a task-based evaluation with the sonification tool and reported qualitatively their views on the effectiveness and usefulness of the sonification method. For all three case studies, using sound to reinforce information shown visually resulted in more effective performance from the majority of the participants than traditional visual methods. Participants who were familiar with the dataset were much more effective at using the sonification than those who were not, and an interactive sonification requiring significant involvement from the user was much more effective than a static sonification that did not provide significant user engagement. Using sounds with a clear and easily understood scale (such as piano notes) was important for achieving an effective sonification. These findings are used to improve the conceptual model developed earlier in the thesis and to highlight areas for future research.
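    The 'clear, easily understood scale' finding can be illustrated with a small sketch: rather than a continuous pitch slide, an uncertainty value is quantised onto discrete piano notes. The C major scale and the linear mapping are assumed choices for illustration, not the thesis's actual design.

```python
# Quantise an uncertainty value in [0, 1] onto discrete piano notes
# (C major scale, an assumed choice) instead of a continuous pitch slide.
C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5

def uncertainty_to_note(u):
    """Higher uncertainty -> higher note; u is clamped to [0, 1]."""
    u = min(max(u, 0.0), 1.0)
    return C_MAJOR_MIDI[round(u * (len(C_MAJOR_MIDI) - 1))]

for u in (0.0, 0.3, 0.8, 1.0):
    print(f"uncertainty {u:.1f} -> MIDI note {uncertainty_to_note(u)}")
```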

    Musical Robots For Children With ASD Using A Client-Server Architecture

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). People with Autistic Spectrum Disorders (ASD) are known to have difficulty recognizing and expressing emotions, which affects their social integration. Leveraging recent advances in interactive robot and music therapy approaches, and integrating both, we have designed musical robots that can facilitate social and emotional interactions of children with ASD. The robots communicate with children with ASD while detecting their emotional states and physical activities, and then generate real-time sonification based on the interaction data. Given that we envision the use of multiple robots with children, we have adopted a client-server architecture. Each robot and sensing device acts as a terminal, while the sonification server processes all the data and generates harmonized sonification. After describing our goals for the use of sonification, we detail the system architecture and ongoing research scenarios. We believe that the present paper offers a new perspective on sonification applications for assistive technologies.
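    A minimal sketch of the terminal/server pattern described, with a hypothetical JSON-lines message format (the actual protocol is not specified in the abstract): each robot terminal posts its detected emotion and activity data to a central sonification server over TCP.

```python
# Client-server sketch (hypothetical message format, assumed port).
import json
import socket

def send_reading(host, port, robot_id, emotion, activity):
    """A robot terminal reporting one reading to the sonification server."""
    msg = {"robot": robot_id, "emotion": emotion, "activity": activity}
    with socket.create_connection((host, port)) as s:
        s.sendall((json.dumps(msg) + "\n").encode())

def serve_once(port=9999):
    """Toy server: accept one connection and read a single reading.

    A real server would merge streams from all robots and map the combined
    state to harmonized sonification parameters.
    """
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen()
        conn, _ = srv.accept()
        with conn, conn.makefile() as f:
            print(json.loads(f.readline()))
```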

    Sonification of Network Traffic Flow for Monitoring and Situational Awareness

    Maintaining situational awareness of what is happening within a network is challenging, not least because the behaviour of interest happens inside computers and communications networks, and because data traffic speeds and volumes are beyond the human ability to process. Visualisation is widely used to present information about the dynamics of network traffic. Although it provides operators with an overall view and specific information about particular traffic or attacks on the network, it often fails to represent events in an understandable way. Visualisations require visual attention and so are not well suited to continuous monitoring scenarios in which network administrators must carry out other tasks. Situational awareness is critical for decision-making in computer network monitoring, where it is vital to be able to identify and recognise network environment behaviours.

    Here we present SoNSTAR (Sonification of Networks for SiTuational AwaReness), a real-time sonification system for monitoring computer networks that supports the situational awareness of network administrators. SoNSTAR provides an auditory representation of all the TCP/IP protocol traffic within a network based on the different traffic flows between network hosts. It raises situational awareness levels for computer network defence by allowing operators to achieve better understanding and performance while imposing less workload than visual techniques. SoNSTAR identifies the features of network traffic flows by inspecting the status flags of TCP/IP packet headers and maps traffic events to recorded sounds, generating a soundscape that represents the real-time status of the network traffic environment. Listening to the soundscape allows the administrator to recognise anomalous behaviour quickly and without having to continuously watch a computer screen.
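    SoNSTAR's own capture pipeline is not reproduced here, but the flag-inspection idea can be sketched with scapy (an assumed substitute tool); the flag-to-sound mapping below is purely illustrative, not SoNSTAR's actual soundscape design.

```python
# Sketch: inspect TCP status flags and map flag patterns to recorded sounds
# that a playback layer would trigger (mapping here is illustrative only).
from scapy.all import IP, TCP, sniff  # pip install scapy; needs capture rights

FLAG_SOUNDS = {
    0x12: "syn_ack.wav",  # SYN+ACK: connection accepted
    0x02: "syn.wav",      # SYN: connection attempt
    0x04: "rst.wav",      # RST: connection refused/reset
    0x01: "fin.wav",      # FIN: connection closing
}

def on_packet(pkt):
    if IP in pkt and TCP in pkt:
        flags = int(pkt[TCP].flags)
        for mask, sound in FLAG_SOUNDS.items():  # most specific pattern first
            if flags & mask == mask:
                print(f"{pkt[IP].src} -> {pkt[IP].dst}: trigger {sound}")  # stub
                break

sniff(filter="tcp", prn=on_packet, count=20)  # inspect 20 packets, then stop
```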

    Sonification of probabilistic feedback through granular synthesis

    We describe a method to improve user feedback, specifically the display of time-varying probabilistic information, through asynchronous granular synthesis. We have applied these techniques to challenging control problems as well as to the sonification of online probabilistic gesture recognition. We use these displays in mobile, gestural interfaces, where visual display is often impractical.
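    As a rough illustration of the underlying mapping (with assumed grain parameters, not those of the paper), a recognition probability can control the density of an asynchronous grain cloud, so a confident hypothesis is heard as a dense, continuous texture and a weak one as sparse clicks.

```python
# Asynchronous granular synthesis sketch: grain density ~ probability.
import numpy as np

def grain_cloud(prob, dur=1.0, sr=44100, grain_ms=30, max_grains_per_s=200):
    """Scatter short Hann-windowed sine grains at random (asynchronous) onsets."""
    out = np.zeros(int(dur * sr))
    n_grains = int(prob * max_grains_per_s * dur)
    glen = int(grain_ms / 1000 * sr)
    grain = np.hanning(glen) * np.sin(2 * np.pi * 440 * np.arange(glen) / sr)
    for onset in np.random.randint(0, out.size - glen, size=n_grains):
        out[onset:onset + glen] += grain
    return out

dense = grain_cloud(0.9)   # confident hypothesis -> dense texture
sparse = grain_cloud(0.1)  # weak hypothesis -> sparse clicks
```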

    Granular synthesis for display of time-varying probability densities

    We present a method for displaying time-varying probabilistic information to users using an asynchronous granular synthesis technique. We extend the basic synthesis technique to include distributions over waveform source, spatial position, pitch, and time inside waveforms. To enhance the synthesis in interactive contexts, we "quicken" the display by integrating predictions of user behaviour into the sonification. This includes summing the derivatives of the distribution during exploration of static densities, and using Monte Carlo sampling to predict future user states in nonlinear dynamic systems. These techniques can be used to improve user performance in continuous control systems and in the interactive exploration of high-dimensional spaces. The technique provides feedback on users' potential goals and their progress toward achieving them; modulating the feedback with quickening can help shape users' actions toward those goals. We have applied these techniques to a simple nonlinear control problem as well as to the sonification of online probabilistic gesture recognition. We are applying these displays to mobile, gestural interfaces, where visual display is often impractical. The granular synthesis approach is theoretically elegant and easily applied in contexts where dynamic probabilistic displays are required.
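    The granular mapping itself is sketched after the previous abstract; the sketch below illustrates only the quickening step, under assumed gains and a toy Monte Carlo forecast (the paper's actual gains and system dynamics are not given here): the sonified value leads the true state by adding scaled derivative terms, so the display anticipates where the user is heading.

```python
# Quickening sketch with assumed gains k1, k2 and a crude Monte Carlo forecast.
import numpy as np

def quicken(x, dx, ddx, k1=0.3, k2=0.05):
    """Quickened display value: state plus scaled first/second derivatives."""
    return x + k1 * dx + k2 * ddx

def monte_carlo_predict(x, dx, dynamics, n=100, horizon=0.5, noise=0.1):
    """Forecast a future state of a nonlinear system by sampling trajectories."""
    samples = x + horizon * (dx + noise * np.random.randn(n))
    return float(np.mean([dynamics(s) for s in samples]))

print(quicken(0.5, 0.2, -0.1))                # display leads the raw state 0.5
print(monte_carlo_predict(0.5, 0.2, np.sin))  # forecast through sin() dynamics
```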