2 research outputs found

    Manipulating synthetic voice parameters for navigation in hierarchical structures

    Presented at the 11th International Conference on Auditory Display (ICAD2005)

    Auditory interfaces commonly use synthetic speech to convey information. In many instances the information being conveyed is hierarchically structured, such as menus. In this paper, we describe the results of an experiment designed to investigate the use of multiple synthetic voices for representing hierarchical information. A hierarchy of 27 nodes was created, in which 2 of the nodes were not shown to the participants during the training session. A between-subjects study (N=16) was conducted to evaluate the effect of multiple synthetic voices on recall rates, and two different forms of training were provided. Participants' task was to identify the position of nodes in the hierarchy by listening to the synthetic voice. The results show that 84.38% of the participants recalled the position of the nodes accurately, and indicate that multiple synthetic voices can be used to facilitate navigation of hierarchies. Overall, this study suggests that it is possible to use synthetic voices to represent hierarchies.
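    As a loose illustration of the technique this abstract describes, the sketch below maps a node's position in a hierarchy to synthetic-voice parameters. The paper does not publish its mapping; the depth-to-pitch and sibling-to-rate rules, the constants, and all names here are illustrative assumptions only.

```python
# Hypothetical sketch: encoding a node's position in a hierarchy as
# synthetic-voice parameters. The specific mapping below is an assumption,
# not the scheme used in the ICAD2005 study.

from dataclasses import dataclass

@dataclass
class VoiceParams:
    pitch_hz: float   # base pitch of the synthetic voice
    rate_wpm: int     # speaking rate in words per minute

def params_for_node(depth: int, sibling_index: int) -> VoiceParams:
    """Map a node's (depth, position among siblings) to distinguishable
    voice settings: deeper levels lower the pitch, while the sibling
    position shifts the speaking rate."""
    base_pitch = 220.0                    # assumed root-level pitch (Hz)
    pitch = base_pitch * (0.8 ** depth)   # each level down drops pitch by 20%
    rate = 160 + 20 * sibling_index       # siblings differ by speaking rate
    return VoiceParams(pitch_hz=pitch, rate_wpm=rate)

if __name__ == "__main__":
    # Example: the second child of a node at depth 2 in the hierarchy.
    print(params_for_node(depth=2, sibling_index=1))
    # VoiceParams(pitch_hz=140.8, rate_wpm=180)
```

    A listener who learns such a scheme can, in principle, infer a node's position from the voice alone, which is the recall task the study evaluates.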

    Collaborating through sounds: audio-only interaction with diagrams

    PhD thesis

    The widening spectrum of interaction contexts and users' needs continues to expose the limitations of the Graphical User Interface. But despite the benefits of sound in everyday activities and considerable progress in Auditory Display research, audio remains under-explored in Human-Computer Interaction (HCI). This thesis seeks to contribute to unveiling the potential of audio in HCI by building on and extending current research on how we interact with and through the auditory modality. Its central premise is that audio, by itself, can effectively support collaborative interaction with diagrammatically represented information. Before exploring audio-only collaborative interaction, two preliminary questions are raised: first, how to translate a given diagram into an alternative form that can be accessed in audio; and second, how to support audio-only interaction with diagrams through the resulting form. An analysis of diagrams that emphasises their properties as external representations addresses the first question. This analysis informs the design of a multiple-perspective, hierarchy-based model that captures modality-independent features of a diagram when translating it into an audio-accessible form. Two user studies then address the second question by examining the feasibility of the developed model for supporting the activities of inspecting, constructing and editing diagrams in audio. The developed model is then deployed in a collaborative lab-based context. A third study explores audio-only collaboration by examining pairs of participants who use audio as the sole means to communicate, access and edit shared diagrams. The channels through which audio is delivered to the workspace are controlled, and the effect on the dynamics of the collaborations is investigated. Results show that pairs of participants are able to collaboratively construct diagrams through sounds. Additionally, the presence or absence of audio in the workspace, and the way in which collaborators chose to work with audio, were found to affect patterns of collaborative organisation, awareness of contribution to shared tasks, and exchange of workspace awareness information. This work contributes to the areas of Auditory Display and HCI by providing empirically grounded evidence of how the auditory modality can support individual and collaborative interaction with diagrams.

    Funding: Algerian Ministry of Higher Education and Scientific Research (MERS).
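    As a loose illustration of the hierarchy-based model this abstract describes, the sketch below renders one small diagram as two browsable hierarchical perspectives (grouped by node and by link type). The thesis's actual model is not reproduced here; the choice of perspectives, the grouping rules, and all names are illustrative assumptions.

```python
# Hypothetical sketch: translating a diagram (a set of labelled edges)
# into hierarchies that an audio cursor could traverse and speak.
# The two perspectives below are assumptions for illustration.

from typing import Dict, List, Tuple

Edge = Tuple[str, str, str]  # (source, label, target)

def nodes_perspective(edges: List[Edge]) -> Dict[str, List[str]]:
    """Group the diagram by node: each node lists its outgoing connections."""
    view: Dict[str, List[str]] = {}
    for src, label, dst in edges:
        view.setdefault(src, []).append(f"{label} -> {dst}")
    return view

def links_perspective(edges: List[Edge]) -> Dict[str, List[str]]:
    """Group the same diagram by link type: each label lists its node pairs."""
    view: Dict[str, List[str]] = {}
    for src, label, dst in edges:
        view.setdefault(label, []).append(f"{src} -> {dst}")
    return view

if __name__ == "__main__":
    diagram = [("Customer", "places", "Order"), ("Order", "contains", "Item")]
    # The same diagram, browsable from two hierarchical perspectives;
    # a speech cursor could walk either tree and read each entry aloud.
    print(nodes_perspective(diagram))
    print(links_perspective(diagram))
```

    The point of keeping multiple perspectives over one underlying diagram is that a listener can pick whichever hierarchy best matches the current task, rather than being locked into a single serialisation of the graphics.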