3 research outputs found

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method allows people with visual impairments to hear exactly what is on the screen, but it suffers significant usability problems in a multitasking environment: screen reader users must infer the state of ongoing tasks spanning multiple graphical windows from a single, serial stream of speech.

    In this dissertation, I explore a new approach to enabling auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort.

    To develop this method, I studied the literature on existing auditory displays, on user behavior while working, and on theories of human auditory perception and processing. I then conducted a user study to observe the problems encountered and techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations of Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6):

    1. Faster, more accurate access to speech utterances through concurrent speech streams.
    2. Better awareness of peripheral information via concurrent speech and sound streams.
    3. Increased information bandwidth through concurrent streams.
    4. More efficient information seeking enabled by ubiquitous tools for browsing and searching.
    5. Greater accuracy in describing unfamiliar applications learned using a consistent, task-based user interface.
    6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
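    The abstract's adaptation mechanism (scripts that group an application's GUI components into task definitions, each then presented on its own speech or sound stream) can be illustrated with a minimal sketch. This is not Clique's actual API; the `Task` and `Script` classes and all component identifiers below are hypothetical, invented solely to show the shape of such a mapping.

```python
# Hypothetical sketch of a Clique-style adaptation script: GUI widgets are
# grouped into named tasks, and the auditory display would render each task
# on its own concurrent stream. Names here are illustrative assumptions,
# not the dissertation's real interfaces.
from dataclasses import dataclass, field

@dataclass
class Task:
    """A user-facing task backed by one or more GUI components."""
    name: str
    components: list = field(default_factory=list)  # widget identifiers

@dataclass
class Script:
    """Adapts one GUI application by mapping its widgets to tasks."""
    app: str
    tasks: dict = field(default_factory=dict)

    def map_component(self, task_name: str, component_id: str) -> None:
        # Route a widget into a named task; unmapped widgets stay silent.
        self.tasks.setdefault(task_name, Task(task_name)).components.append(component_id)

    def streams(self):
        # One concurrent speech/sound stream per defined task.
        return [t.name for t in self.tasks.values()]

# Example: adapting a (hypothetical) mail client.
mail = Script("MailClient")
mail.map_component("read message", "message_body_pane")
mail.map_component("read message", "header_list")
mail.map_component("compose", "compose_window")
```

    The point of the sketch is the indirection: the user hears and addresses tasks ("read message", "compose"), never the raw widget tree, which is what lets the display stay consistent across unfamiliar applications.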

    Perception of unattended speech

    Presented at the 10th International Conference on Auditory Display (ICAD2004). This study addresses the question of speech processing under unattended conditions. Dupoux et al. (2003) recently claimed that unattended words are not lexically processed. We test their conclusion with a different paradigm: participants had to detect a target word belonging to a specific category, presented in a rapid list of words in the attended ear. In the unattended ear, concatenated sentences were presented, some containing a repetition prime presented just before the target word. We found a significant priming effect of 22 ms (Experiment 1) for category detection in the presence of a prime compared with no prime. This priming effect was not affected by whether the right or the left ear received the prime (Experiments 2a and 2b). We also found that the priming effect disappeared when there was no pitch range difference between the attended and unattended messages (Experiments 3 and 4). Finally, we replicated the priming effect while compelling participants to focus on the attended message by asking them to perform a second task (Experiment 5).