9 research outputs found

    An environment for studying the impact of spatialising sonified graphs on data comprehension

    We describe AudioCave, an environment for exploring the impact of spatialising sonified graphs on a set of numerical data comprehension tasks. Its design builds on findings regarding the effectiveness of sonified graphs for numerical data overview and discovery by visually impaired and blind students. We demonstrate its use as a test bed for comparing the approach of accessing a single sonified numerical datum at a time to one where multiple sonified numerical data can be accessed concurrently. Results from this experiment show that concurrent access facilitates the tackling of our set of multivariate data comprehension tasks. AudioCave also demonstrates how the spatialisation of the sonified graphs provides opportunities for sharing the representation. We present two experiments investigating users collaboratively solving a set of data comprehension tasks by sharing the data representation.
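
    The contrast described above, between accessing one sonified datum at a time and hearing several spatialised series concurrently, can be illustrated with a small sketch. The following is not the AudioCave implementation; the sample rate, note duration, data series and constant-power panning are all illustrative assumptions. It renders two hypothetical data series as pitch-mapped tone sequences and places them at different stereo positions so both can be heard at once.

```python
# Minimal sketch (not the AudioCave implementation): two data series are
# sonified as pitch sweeps and spatialised to different stereo positions,
# so they can be listened to concurrently rather than one datum at a time.
import numpy as np
import wave

SR = 44100
NOTE_DUR = 0.25  # seconds per data point (illustrative choice)

def series_to_tone(series, f_lo=220.0, f_hi=880.0):
    """Map each value in a series to a short sine tone (value -> pitch)."""
    lo, hi = min(series), max(series)
    out = []
    for v in series:
        frac = (v - lo) / (hi - lo) if hi > lo else 0.5
        freq = f_lo * (f_hi / f_lo) ** frac          # log-spaced pitch mapping
        t = np.linspace(0, NOTE_DUR, int(SR * NOTE_DUR), endpoint=False)
        tone = 0.4 * np.sin(2 * np.pi * freq * t)
        tone *= np.hanning(tone.size)                # soft attack/decay
        out.append(tone)
    return np.concatenate(out)

def pan(mono, position):
    """Constant-power pan: position -1 (left) .. +1 (right) -> stereo pair."""
    theta = (position + 1) * np.pi / 4
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

# Two hypothetical data series, placed left and right of the listener.
series_a = [3, 5, 9, 12, 8, 4]
series_b = [10, 9, 7, 6, 8, 11]
mix = pan(series_to_tone(series_a), -0.8) + pan(series_to_tone(series_b), +0.8)

pcm = (np.clip(mix, -1, 1) * 32767).astype(np.int16)
with wave.open("spatialised_graphs.wav", "wb") as f:
    f.setnchannels(2); f.setsampwidth(2); f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```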

    Design guidelines for audio presentation of graphs and tables

    Audio can be used to make visualisations accessible to blind and visually impaired people. The MultiVis Project has carried out research into suitable methods for presenting graphs and tables to blind people through the use of both speech and non-speech audio. This paper presents guidelines extracted from this research. These guidelines will enable designers to implement visualisation systems for blind and visually impaired users, and will provide a framework for researchers wishing to investigate the audio presentation of more complex visualisations.

    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
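
    The point-and-click style of interaction described above, in which an absolute pointer position yields an audio rendering of the data and tactile cues support navigation, can be sketched as follows. This is not the prototype evaluated in the paper; the data values, gridline positions and the stubbed audio and tactile calls are assumptions for illustration only.

```python
# Illustrative sketch only (not the prototype from the paper): an absolute
# pointer position on a 0..1 axis is mapped to the nearest data point, the
# value is sonified as a pitch, and a tactile pulse is requested whenever
# the pointer crosses a gridline. Hardware calls are stubbed with prints.
import bisect

DATA = [3.0, 5.0, 9.0, 12.0, 8.0, 4.0]          # hypothetical graph values
GRIDLINES = [i / 5 for i in range(6)]           # x positions of gridlines

def value_to_pitch(v, lo=min(DATA), hi=max(DATA), f_lo=220.0, f_hi=880.0):
    return f_lo * (f_hi / f_lo) ** ((v - lo) / (hi - lo))

def play_tone(freq):            # stub: replace with an audio backend
    print(f"tone at {freq:.0f} Hz")

def tactile_pulse():            # stub: replace with a tactile display driver
    print("tactile pulse (gridline crossed)")

def on_pointer_move(x, _last=[None]):
    """Handle an absolute pointer position x in [0, 1]."""
    idx = min(int(x * len(DATA)), len(DATA) - 1)
    play_tone(value_to_pitch(DATA[idx]))
    # Fire a tactile cue when a gridline lies between the last and current x.
    if _last[0] is not None:
        lo, hi = sorted((_last[0], x))
        if bisect.bisect_right(GRIDLINES, hi) > bisect.bisect_right(GRIDLINES, lo):
            tactile_pulse()
    _last[0] = x

for x in (0.05, 0.15, 0.22, 0.41, 0.60):
    on_pointer_move(x)
```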

    Design guidelines for audio representation of graphs and tables

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. Audio can be used to make visualisations accessible to blind and visually impaired people. The MultiVis Project has carried out research into suitable methods for presenting graphs and tables to blind people through the use of both speech and non-speech audio. This paper presents guidelines extracted from this research. These guidelines will enable designers to implement visualisation systems for blind and visually impaired users, and will provide a framework for researchers wishing to investigate the audio presentation of more complex visualisations.

    Developing an interactive overview for non-visual exploration of tabular numerical information

    This thesis investigates the problem of obtaining overview information from complex tabular numerical data sets non-visually. Blind and visually impaired people need to access and analyse numerical data, both in education and in professional occupations. Obtaining an overview is a necessary first step in data analysis, for which current non-visual data accessibility methods offer little support. This thesis describes a new interactive parametric sonification technique called High-Density Sonification (HDS), which facilitates the process of extracting overview information from the data easily and efficiently by rendering multiple data points as single auditory events. Beyond obtaining an overview of the data, experimental studies showed that the capabilities of human auditory perception and cognition to extract meaning from HDS representations could be used to reliably estimate relative arithmetic mean values within large tabular data sets. Following a user-centred design methodology, HDS was implemented as the primary form of overview information display in a multimodal interface called TableVis. This interface supports the active process of interactive data exploration non-visually, making use of proprioception to maintain contextual information during exploration (non-visual focus+context), vibrotactile data annotations (EMA-Tactons) that can be used as external memory aids to prevent high mental workload levels, and speech synthesis to access detailed information on demand. A series of empirical studies was conducted to quantify the performance attained in the exploration of tabular data sets for overview information using TableVis. This was done by comparing HDS with the main current non-visual accessibility technique (speech synthesis), and by quantifying the effect of different sizes of data sets on user performance, which showed that HDS resulted in better performance than speech, and that this performance was not heavily dependent on the size of the data set. In addition, levels of subjective workload during exploration tasks using TableVis were investigated, resulting in the proposal of EMA-Tactons, vibrotactile annotations that the user can add to the data in order to prevent working memory saturation in the most demanding data exploration scenarios. An experimental evaluation found that EMA-Tactons significantly reduced mental workload in data exploration tasks. Thus, the work described in this thesis provides a basis for the interactive non-visual exploration of a broad range of sizes of numerical data tables by offering techniques to extract overview information quickly, performing perceptual estimations of data descriptors (relative arithmetic mean) and managing demands on mental workload through vibrotactile data annotations, while seamlessly linking with explorations at different levels of detail and preserving spatial data representation metaphors to support collaboration with sighted users.
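
    The central idea of High-Density Sonification, rendering multiple data points as a single auditory event so that, for example, the relative means of whole rows can be compared by ear, can be sketched roughly as below. This is not the TableVis implementation; the pitch mapping, table values and output format are illustrative assumptions.

```python
# A minimal sketch of the idea behind High-Density Sonification (HDS), not the
# TableVis implementation: every value in a table row is mapped to a pitch and
# all pitches are played together as a single short auditory event, so whole
# rows can be compared by ear (e.g. which row "sounds higher" on average).
import numpy as np
import wave

SR, DUR = 44100, 0.6

def row_to_event(row, lo, hi, f_lo=200.0, f_hi=2000.0):
    """Render one row as a single chord-like event: one sine per cell."""
    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    event = np.zeros_like(t)
    for v in row:
        frac = (v - lo) / (hi - lo) if hi > lo else 0.5
        event += np.sin(2 * np.pi * f_lo * (f_hi / f_lo) ** frac * t)
    event *= np.hanning(event.size) / max(len(row), 1)   # envelope + normalise
    return event

# Hypothetical 3x4 table; row 1 has the highest mean and should sound highest.
table = [[2, 3, 4, 3], [7, 9, 8, 9], [4, 5, 5, 6]]
lo, hi = min(map(min, table)), max(map(max, table))
gap = np.zeros(int(SR * 0.2))
audio = np.concatenate([np.concatenate([row_to_event(r, lo, hi), gap]) for r in table])

pcm = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
with wave.open("hds_rows.wav", "wb") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```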

    Collaborating through sounds: audio-only interaction with diagrams

    PhD thesis. The widening spectrum of interaction contexts and users’ needs continues to expose the limitations of the Graphical User Interface. But despite the benefits of sound in everyday activities and considerable progress in Auditory Display research, audio remains under-explored in Human-Computer Interaction (HCI). This thesis seeks to contribute to unveiling the potential of using audio in HCI by building on and extending current research on how we interact with and through the auditory modality. Its central premise is that audio, by itself, can effectively support collaborative interaction with diagrammatically represented information. Before exploring audio-only collaborative interaction, two preliminary questions are raised: first, how to translate a given diagram to an alternative form that can be accessed in audio; and second, how to support audio-only interaction with diagrams through the resulting form. An analysis of diagrams that emphasises their properties as external representations is used to address the first question. This analysis informs the design of a multiple-perspective hierarchy-based model that captures modality-independent features of a diagram when translating it into an audio-accessible form. Two user studies then address the second question by examining the feasibility of the developed model to support the activities of inspecting, constructing and editing diagrams in audio. The developed model is then deployed in a collaborative lab-based context. A third study explores audio-only collaboration by examining pairs of participants who use audio as the sole means to communicate, access and edit shared diagrams. The channels through which audio is delivered to the workspace are controlled, and the effect on the dynamics of the collaborations is investigated. Results show that pairs of participants are able to collaboratively construct diagrams through sounds. Additionally, the presence or absence of audio in the workspace, and the way in which collaborators chose to work with audio, were found to impact patterns of collaborative organisation, awareness of contribution to shared tasks and exchange of workspace awareness information. This work contributes to the areas of Auditory Display and HCI by providing empirically grounded evidence of how the auditory modality can be used to support individual and collaborative interaction with diagrams. Algerian Ministry of Higher Education and Scientific Research (MERS).
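
    The notion of a multiple-perspective, hierarchy-based model of a diagram can be sketched as follows. This is only an illustration under assumed structures, not the model developed in the thesis: a tiny diagram is stored as modality-independent nodes and connections, and two hierarchical perspectives over the same data are generated for an audio interface to read out or navigate.

```python
# Illustrative sketch (not the model developed in the thesis): a small diagram
# is stored as modality-independent nodes and connections, and two hierarchical
# "perspectives" are generated over the same data -- one grouping by node, one
# by connection -- which an audio interface could then read out or navigate.
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    label: str
    source: str
    target: str

# Hypothetical diagram: a tiny entity-relationship-style sketch.
nodes = ["Customer", "Order", "Product"]
connections = [
    Connection("places", "Customer", "Order"),
    Connection("contains", "Order", "Product"),
]

def node_perspective():
    """Hierarchy rooted at nodes: each node lists its outgoing/incoming links."""
    return {
        n: [f"{c.label} -> {c.target}" for c in connections if c.source == n]
           + [f"{c.label} <- {c.source}" for c in connections if c.target == n]
        for n in nodes
    }

def connection_perspective():
    """Hierarchy rooted at connections: each link lists its two endpoints."""
    return {f"{c.source} {c.label} {c.target}": [c.source, c.target]
            for c in connections}

def speak(hierarchy):
    """Stand-in for speech output: print a traversal an auditory UI could read."""
    for parent, children in hierarchy.items():
        print(parent)
        for child in children:
            print("  ", child)

speak(node_perspective())
speak(connection_perspective())
```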

    The phenomenal rise of periphonic record production: a practice-based, musicological investigation of periphonic recomposition and mix aesthetics for binaural reproduction

    ‘The Phenomenal Rise of Periphonic Record Production’ is a practice-based, musicological research project investigating the musicality of a non-front orientated approach to spatial music sound staging, posing the question ‘How can non-front orientated sound stages for music be approached and structured?’ The thesis argues that with the integration of periphony (height and surround) there will be a requisite change in the way we actively listen to recorded music, facilitating new approaches to sound staging and record production. Further, in taking an ecological, embodied approach to production, a periphonic sound stage provides more creative agency than that offered through stereophonic or surround sound productions, and without a visual informing the auditory perception the additional sonic dimensions may be enhanced beyond what current approaches to production can afford. The topics of study are explored through creative research practice and applied development of contemporary music production technique, drawing upon phenomenological method, and adopting practice as research and critical theory as research paradigms. The study constructs, collates and assesses spatial sound staging and production approaches for binaural encoded 3D audio arrangements and provides a framework for conceptualising and interpreting musical structure and lyrical narrative to spatial sonic schema using a non-front orientated approach to production. The techniques constructed within the scope of this project address key issues pertaining to periphonic sound staging and production, offering solutions through a non-traditional, unique and democratic approach to spatial music production and creative research practice. The study collates primary research through practice and corroborates this data through focus group sessions that explore the perceived efficacy of the staging constructs and a non-front orientated approach to production. The work herein has been circulated through oral presentation at a variety of conferences, seminars and workshops over the last 6 years. Most recently, elements of Chapter 6 have been published and can be found in Chapter 13 of ‘Perspectives on Music Production – 3D Audio’ (Lord, 2021). The research presented in this thesis has also received citation in undergraduate, post-graduate and PhD level studies pertaining to spatial music production.
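
    As a rough technical aside, placing a source on a periphonic (height-inclusive) stage is commonly done by encoding it to ambisonics before binaural rendering. The sketch below is not drawn from the thesis or its productions; it simply encodes a mono stand-in signal to first-order ambisonics (ACN ordering, SN3D normalisation) at a chosen azimuth and elevation, with the HRTF-based binaural decode omitted.

```python
# A minimal sketch, not drawn from the thesis itself: a mono source is placed
# on a periphonic (with-height) sound stage by encoding it to first-order
# ambisonics (ACN channel order, SN3D normalisation) at a chosen azimuth and
# elevation. A binaural render would follow by decoding W/Y/Z/X through HRTFs,
# which is omitted here.
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal at (azimuth, elevation) to ACN/SN3D first order."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono                                  # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)        # left-right
    z = mono * np.sin(el)                     # height
    x = mono * np.cos(az) * np.cos(el)        # front-back
    return np.stack([w, y, z, x], axis=0)

sr = 48000
t = np.linspace(0, 1.0, sr, endpoint=False)
vocal = 0.5 * np.sin(2 * np.pi * 440 * t)     # stand-in for a mono stem

# Place the stem behind and above the listener -- a non-front-orientated choice.
bformat = encode_foa(vocal, azimuth_deg=150, elevation_deg=45)
print(bformat.shape)  # (4, 48000): W, Y, Z, X channels
```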

    Development and exploration of a timbre space representation of audio

    Sound is an important part of the human experience and provides valuable information about the world around us. Auditory human-computer interfaces do not have the same richness of expression and variety as audio in the world, and it has been said that this is primarily due to a lack of reasonable design tools for audio interfaces. There are a number of good guidelines for audio design and a strong psychoacoustic understanding of how sounds are interpreted. There are also a number of sound manipulation techniques developed for computer music. This research takes these ideas as the basis for an audio interface design system. A proof-of-concept of this system has been developed in order to explore the design possibilities allowed by the new system. The core of this novel audio design system is the timbre space. This provides a multi-dimensional representation of a sound. Each sound is represented as a path in the timbre space and this path can be manipulated geometrically. Several timbre spaces are compared to determine which amongst them is the best one for audio interface design. The various transformations available in the timbre space are discussed and the perceptual relevance of two novel transformations is explored by encoding "urgency" as a design parameter. This research demonstrates that the timbre space is a viable option for audio interface design and provides novel features that are not found in current audio design systems. A number of problems with the approach and some suggested solutions are discussed. The timbre space opens up new possibilities for audio designers to explore combinations of sounds and sound design based on perceptual cues rather than synthesiser parameters.
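
    The idea that a sound becomes a path of points in a multi-dimensional timbre space, and that this path can be manipulated geometrically, can be sketched with an off-the-shelf feature space. The MFCC-based space and the use of librosa below are assumptions for illustration, not the timbre spaces compared in the thesis.

```python
# Illustrative sketch only: the thesis builds and compares its own timbre
# spaces, but the core idea -- a sound becomes a path of points in a
# multi-dimensional space, and that path can be manipulated geometrically --
# can be shown with an MFCC-based space (librosa is an assumption, not the
# tooling used in the research).
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
# A stand-in sound: a tone whose brightness increases over time.
y = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 1760 * t) * t

# Each analysis frame becomes one point; the frame sequence is the path.
path = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # shape: (frames, 13)

# A simple geometric manipulation of the path: scale it about its centroid,
# exaggerating (factor > 1) or flattening (factor < 1) the timbral trajectory.
def scale_path(path, factor):
    centroid = path.mean(axis=0)
    return centroid + factor * (path - centroid)

exaggerated = scale_path(path, 1.5)
print(path.shape, exaggerated.shape)
```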

    Proceedings of the 3rd international conference on disability, virtual reality and associated technologies (ICDVRAT 2000)

    The proceedings of the conference.