4 research outputs found

    A Model for Data-Driven Sonification Using Soundscapes

    A sonification is a rendering of audio in response to data, and is used in instances where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge in multiple areas as well as an understanding of how the end users will use the system. This makes it an ideal candidate for end-user development, where the user plays a role in the creation of the design. We present a model for sonification that utilizes user-specified examples and data to generate cross-domain mappings from data to sound. As a novel contribution, we utilize soundscapes (acoustic scenes) for these user-selected examples to define a structure for the sonification. We demonstrate a proof of concept of our model using sound examples and discuss how we plan to build on this work in the future.
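The core idea of the abstract above, mapping a data stream onto parameters of a soundscape, can be sketched minimally. This is an illustrative assumption, not the paper's actual model: the feature name, the event-density control, and the numeric ranges are all invented for the demo.

```python
# Hypothetical sketch of a cross-domain mapping from data values to a
# soundscape parameter (here, event density of one sound group).
# All names and ranges are illustrative, not taken from the paper.

def normalize(values):
    """Scale a list of numeric data values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def map_to_density(values, min_events=1, max_events=20):
    """Map normalized data values to event densities (sounds per
    minute) for one group of sounds in the soundscape."""
    return [round(min_events + n * (max_events - min_events))
            for n in normalize(values)]

activity = [3, 12, 48, 7]            # example data stream
print(map_to_density(activity))      # higher activity -> denser sounds
```

A real system would drive many such controls (density, pitch, volume) across several sound groups at once; this shows only the shape of one data-to-sound mapping.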

    Supplementary materials for "A Model for Data-Driven Sonification Using Soundscapes," a poster presented at the ACM Intelligent User Interfaces Conference in 2015.

    We provide:
    1. An example "input" soundscape, exemplifying an existing soundscape that can be provided by a user.
    2. A clip of the "output" soundscape that was rendered for a dataset, using the above input soundscape and the mapping policy.
    3. A supplementary text describing the features of the input soundscape, sound groups, sound samples, and Twitter data, as well as the details of the mapping policy used to generate this output soundscape. This text also provides attribution for the sound files used in the sonification.

    Sonification of Network Traffic Flow for Monitoring and Situational Awareness

    Maintaining situational awareness of what is happening within a network is challenging, not least because the behaviour happens within computers and communications networks, but also because data traffic speeds and volumes are beyond human ability to process. Visualisation is widely used to present information about the dynamics of network traffic. Although it provides operators with an overall view and specific information about particular traffic or attacks on the network, it often fails to represent the events in an understandable way. Visualisations require visual attention and so are not well suited to continuous monitoring scenarios in which network administrators must carry out other tasks. Situational awareness is critical for decision-making in the domain of computer network monitoring, where it is vital to be able to identify and recognise network environment behaviours. Here we present SoNSTAR (Sonification of Networks for SiTuational AwaReness), a real-time sonification system to be used in the monitoring of computer networks to support the situational awareness of network administrators. SoNSTAR provides an auditory representation of all the TCP/IP protocol traffic within a network based on the different traffic flows between network hosts. SoNSTAR raises situational awareness levels for computer network defence by allowing operators to achieve better understanding and performance while imposing less workload compared with visual techniques. SoNSTAR identifies the features of network traffic flows by inspecting the status flags of TCP/IP packet headers and mapping traffic events to recorded sounds, generating a soundscape that represents the real-time status of the network traffic environment. Listening to the soundscape allows the administrator to recognise anomalous behaviour quickly and without having to continuously watch a computer screen.

    Comment: 17 pages, 7 figures, plus supplemental material in a GitHub repository.
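The flag-inspection step described above can be sketched as a lookup from TCP flag combinations to flow events and then to sounds. This is not SoNSTAR's actual code: the event names and the event-to-sound table are assumptions for illustration; only the TCP flag bit values are standard.

```python
# Illustrative sketch: classify a traffic event from the TCP header
# flag byte, then look up a recorded sound for it. The event names and
# sound-file table are invented; the flag bits follow the TCP spec.

SYN, ACK, FIN, RST = 0x02, 0x10, 0x01, 0x04

EVENT_SOUNDS = {
    "connection_request": "birds.wav",    # SYN only
    "connection_open":    "water.wav",    # SYN+ACK
    "connection_close":   "wind.wav",     # FIN
    "connection_reset":   "thunder.wav",  # RST
    "data_transfer":      "rain.wav",     # plain ACK
}

def classify(flags):
    """Map a TCP header flag byte to a flow-event name."""
    if flags & RST:
        return "connection_reset"
    if flags & FIN:
        return "connection_close"
    if flags & SYN and flags & ACK:
        return "connection_open"
    if flags & SYN:
        return "connection_request"
    return "data_transfer"

def sound_for(flags):
    """Return the recorded sound to trigger for this packet."""
    return EVENT_SOUNDS[classify(flags)]

print(sound_for(SYN))        # sound for a connection request
print(sound_for(SYN | ACK))  # sound for a handshake reply
```

In a running monitor, each classified event would trigger playback of its sample, so the density and mix of sounds tracks the live state of the network.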

    End-user Development of Sonifications using Soundscapes

    Designing sonifications requires knowledge in many domains, including sound design, sonification design, and programming. Thus end users typically do not create sonifications on their own, but instead work with sonification experts to iteratively co-design their systems. However, once a sonification system is deployed there is little a user can do to make adjustments. In this work, we present an approach for sonification system design that puts end users in control of the design process by allowing them to interactively generate, explore, and refine sonification designs. Our approach allows a user to start creating sonifications simply by providing an example soundscape (i.e., an example of what they might want their sonification to sound like) and an example dataset illustrating properties of the data they would like to sonify. The user is then provided with the ability to employ automated or semi-automated design of mappings from features of the data to soundscape controls. To make this possible, we describe formal models for soundscape, data, and sonification, and an optimization-based method for creating sonifications that is informed by design principles outlined in past auditory display research.
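The optimization-based mapping design mentioned above can be sketched as a search over one-to-one assignments of data features to soundscape controls, scored against a design objective. Everything here is a hedged assumption for illustration: the feature and control names, the scoring weights, and the exhaustive search are invented, whereas the paper's actual objective encodes principles from auditory display research.

```python
# Hypothetical sketch: pick the feature->control assignment that
# maximizes a design score. Names and weights are invented.

from itertools import permutations

features = ["tweet_rate", "sentiment", "retweets"]
controls = ["density", "pitch", "volume"]

# score[f][c]: how well feature f suits control c (higher is better)
score = {
    "tweet_rate": {"density": 0.9, "pitch": 0.2, "volume": 0.5},
    "sentiment":  {"density": 0.1, "pitch": 0.8, "volume": 0.3},
    "retweets":   {"density": 0.4, "pitch": 0.3, "volume": 0.7},
}

def best_mapping():
    """Exhaustively search one-to-one feature->control assignments
    and return the highest-scoring one."""
    best, best_total = None, float("-inf")
    for perm in permutations(controls):
        total = sum(score[f][c] for f, c in zip(features, perm))
        if total > best_total:
            best, best_total = dict(zip(features, perm)), total
    return best

print(best_mapping())
# -> {'tweet_rate': 'density', 'sentiment': 'pitch', 'retweets': 'volume'}
```

Exhaustive search is only viable for a handful of features; with larger design spaces one would swap in an assignment solver or heuristic search, which is presumably closer to what the paper's method does.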