
    Constructing sonified haptic line graphs for the blind student: first steps

    Line graphs are an established information visualisation and analysis technique taught at various levels of difficulty in standard Mathematics curricula. It has been argued that blind individuals cannot use line graphs as a visualisation and analysis tool because such graphs currently exist primarily in the visual medium. The research described in this paper aims to make line graphs accessible to blind students through the auditory and haptic media. We describe (1) our design space for representing line graphs, (2) the technology we use to develop our prototypes and (3) the insights from our preliminary work.
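The core idea of sonifying a line graph can be illustrated with a minimal sketch (my own, not the authors' system): each y-value is mapped linearly onto an audible pitch, so a rising line is heard as a rising tone sequence. Function names and the frequency range are illustrative assumptions.

```python
# Minimal line-graph sonification sketch (illustrative, not the paper's
# implementation): map data values linearly into an audible pitch range.
def y_to_frequency(y, y_min, y_max, f_low=220.0, f_high=880.0):
    """Linearly map a data value into a frequency range (Hz)."""
    if y_max == y_min:
        return (f_low + f_high) / 2  # flat line: a single mid tone
    t = (y - y_min) / (y_max - y_min)
    return f_low + t * (f_high - f_low)

def sonify_series(ys, f_low=220.0, f_high=880.0):
    """Return one frequency per data point, for sequential playback."""
    lo, hi = min(ys), max(ys)
    return [y_to_frequency(y, lo, hi, f_low, f_high) for y in ys]
```

For example, `sonify_series([0, 5, 10])` yields tones at 220 Hz, 550 Hz and 880 Hz, so the listener hears the upward slope directly.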

    Web-based haptic applications for blind people to create virtual graphs

    Haptic technology has great potential in many applications. This paper introduces our work on delivering haptic information via the Web. A multimodal tool has been developed to allow blind people to create virtual graphs independently. Multimodal interaction during graph creation and exploration is provided by a low-cost haptic device, the Logitech WingMan Force Feedback Mouse, together with Web audio. The Web-based tool also offers blind people the convenience of receiving information at home. In this paper, we present the development of the tool and the evaluation results, and discuss issues related to the design of similar Web-based haptic applications.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature suggesting that physiological measurements are needed: we show that a software-only method can estimate user emotion.
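The general shape of a FLAME-style fuzzy appraisal can be sketched as follows. This is an illustrative toy, not the paper's implementation: an in-game event's desirability is fuzzified with triangular membership functions, and the membership degrees are read as emotion intensities.

```python
# Hedged sketch of fuzzy emotion appraisal (FLAME-inspired, illustrative):
# fuzzify an event's desirability, derive graded emotion intensities.
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaks at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def appraise(desirability):
    """Map an event's desirability in [-1, 1] to fuzzy emotion degrees.
    Category names and breakpoints are illustrative assumptions."""
    return {
        "distress": triangular(desirability, -1.5, -1.0, 0.0),
        "neutral":  triangular(desirability, -1.0, 0.0, 1.0),
        "joy":      triangular(desirability, 0.0, 1.0, 1.5),
    }
```

A mildly positive event such as `appraise(0.5)` lands partly in "neutral" and partly in "joy", which is the graded, overlapping behaviour that distinguishes fuzzy appraisal from hard thresholds.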

    Computer Entertainment Technologies for the Visually Impaired: An Overview

    In recent years, work on accessible technologies has increased in both number and quality. This work presents a series of articles exploring different trends in the field of accessible video games for the blind or visually impaired. The reviewed articles are grouped into four categories: (1) video game design and architecture, (2) video game adaptations, (3) accessible games as learning tools or treatments and (4) navigation and interaction in virtual environments. Current trends in accessible game design are also analysed, and data is presented on keyword use and thematic evolution over time. We conclude that the field of human-computer interaction for the blind has relatively stagnated; however, as the video game industry becomes increasingly interested in accessibility, new research opportunities are starting to appear.

    Collaborating through sounds: audio-only interaction with diagrams

    The widening spectrum of interaction contexts and users' needs continues to expose the limitations of the Graphical User Interface. But despite the benefits of sound in everyday activities and considerable progress in Auditory Display research, audio remains under-explored in Human-Computer Interaction (HCI). This thesis seeks to contribute to unveiling the potential of audio in HCI by building on and extending current research on how we interact with and through the auditory modality. Its central premise is that audio, by itself, can effectively support collaborative interaction with diagrammatically represented information. Before exploring audio-only collaborative interaction, two preliminary questions are raised: first, how to translate a given diagram into an alternative form that can be accessed in audio; and second, how to support audio-only interaction with diagrams through the resulting form. An analysis of diagrams that emphasises their properties as external representations is used to address the first question. This analysis informs the design of a multiple-perspective, hierarchy-based model that captures modality-independent features of a diagram when translating it into an audio-accessible form. Two user studies then address the second question by examining the feasibility of the developed model for supporting the activities of inspecting, constructing and editing diagrams in audio. The developed model is then deployed in a collaborative lab-based context. A third study explores audio-only collaboration by examining pairs of participants who use audio as the sole means to communicate, access and edit shared diagrams. The channels through which audio is delivered to the workspace are controlled, and the effect on the dynamics of the collaborations is investigated. Results show that pairs of participants are able to collaboratively construct diagrams through sounds.
    Additionally, the presence or absence of audio in the workspace, and the way in which collaborators chose to work with audio, were found to affect patterns of collaborative organisation, awareness of contribution to shared tasks and exchange of workspace-awareness information. This work contributes to the areas of Auditory Display and HCI by providing empirically grounded evidence of how the auditory modality can support individual and collaborative interaction with diagrams. Funded by the Algerian Ministry of Higher Education and Scientific Research (MERS).
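The idea of a hierarchy-based audio-accessible form of a diagram can be sketched roughly as follows. All names here are my own illustrative assumptions, not the thesis's model: the diagram becomes a tree whose depth-first traversal gives the order in which an audio interface could announce its parts.

```python
# Illustrative sketch (not the thesis's model): a diagram represented as
# a hierarchy that an audio-only interface can traverse node by node.
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def audio_path(root):
    """Depth-first listing of labels: the order in which a sequential
    audio rendering would announce the diagram's parts."""
    order = [root.label]
    for child in root.children:
        order.extend(audio_path(child))
    return order

# One perspective over a toy ER-style diagram, grouped by entity.
by_entity = Node("diagram", [
    Node("entity: Student", [Node("attribute: name")]),
    Node("entity: Course",  [Node("attribute: title")]),
])
```

A second tree grouping the same elements by relation rather than by entity would give the "multiple perspectives" the abstract refers to: the underlying diagram is unchanged, only the traversal hierarchy differs.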

    Virtual Audio - Three-Dimensional Audio in Virtual Environments

    Three-dimensional interactive audio has a variety of potential uses in human-machine interfaces. After lagging seriously behind the visual components, the importance of sound is now becoming increasingly accepted. This paper mainly discusses the background and techniques needed to implement three-dimensional audio in computer interfaces. A case study of a system for three-dimensional audio, implemented by the author, is described in detail. The audio system was moreover integrated with a virtual reality system, and conclusions from user tests and use of the audio system are presented, along with proposals for future work, at the end of the paper. The thesis begins with a definition of three-dimensional audio and a survey of the human auditory system, to give the reader the knowledge needed to understand what three-dimensional audio is and how human auditory perception works.
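The simplest building block of spatial audio can be shown in a short sketch (illustrative only; full 3-D audio as discussed in the thesis would instead filter the signal with head-related transfer functions): constant-power panning approximates a source's horizontal position by weighting the left and right channel gains.

```python
import math

# Minimal spatialization sketch (illustrative): constant-power stereo
# panning. A real 3-D audio system would convolve with HRTFs instead.
def pan_gains(azimuth_deg):
    """Left/right gains for a source at azimuth in [-90, 90] degrees
    (negative = left). Gains satisfy L^2 + R^2 = 1 (constant power),
    so perceived loudness stays even as the source moves."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)  # 0 .. pi/2
    return math.cos(theta), math.sin(theta)
```

A source dead ahead (`pan_gains(0)`) gets equal gains of about 0.707 per channel; at -90 degrees the left channel carries the full signal.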

    Sound at the user interface
