
    "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking

    In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased for mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking for navigating a one-dimensional menu. Moreover, I examined whether advanced auditory cues improve user experience not only in visuals-off situations but also in visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred and twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study used a split-plot design that manipulated auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking). Target search time and subjective workload for TTS plus spindex were lower than for TTS alone across all input gesture types, regardless of visual mode. On subjective rating scales, participants also rated the TTS plus spindex condition higher than plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types) and its effects on navigation behaviors were also analyzed based on the two-stage navigation strategy model used in auditory menus. Results are discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
    M.S. Committee Chair: Bruce N. Walker; Committee Member: Frank Durso; Committee Member: Gregory M. Cors
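
    The spindex mechanism described above lends itself to a compact illustration: a brief cue derived from each item's initial letter is played while the user scrolls quickly, and the full text-to-speech readout is added only once navigation slows. The following Python sketch is a hedged illustration of that idea, not the study's implementation; the play_initial_cue and speak_full_title functions are hypothetical placeholders for a device's audio back end.

```python
# Minimal sketch of a spindex-enhanced auditory menu (illustrative only).
# The audio calls are hypothetical placeholders, not a real device API.

def play_initial_cue(letter: str) -> None:
    """Stand-in for playing a very brief 'speech index' cue, e.g. 'tee' for T."""
    print(f"[cue] {letter}")

def speak_full_title(title: str) -> None:
    """Stand-in for full text-to-speech of the menu item."""
    print(f"[TTS] {title}")

def on_menu_focus(title: str, scrolling_fast: bool) -> None:
    # During rapid navigation (wheeling/flicking), only the spindex cue is
    # played, so users can coarsely locate the alphabetical region they want.
    play_initial_cue(title[0].upper())
    # Once the user slows down (e.g. tapping item by item), the full TTS
    # readout follows the cue for fine-grained selection.
    if not scrolling_fast:
        speak_full_title(title)

if __name__ == "__main__":
    songs = ["Alabama Song", "Blue Monday", "Creep", "Dancing Queen"]
    for song in songs:
        on_menu_focus(song, scrolling_fast=True)        # coarse browsing
    on_menu_focus("Dancing Queen", scrolling_fast=False)  # fine selection
```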

    An exploration of semiotics of new auditory displays: A comparative analysis with visual displays

    Communicability is an important factor in user interfaces. To address communicability, extensive research has been done on visual displays, whereas relatively little has been done on auditory displays. The present paper attempts to analyze the semiotics of novel auditory displays (spearcon, spindex, and lyricon) using Peirce's classification of signs: icon, symbol, and index. After the aesthetic developmental patterns of the visual counterparts are presented, the semiotics of auditory cues is discussed along with future design directions.

    Cultural differences in preference of auditory emoticons: USA and South Korea

    For the last two decades, research on auditory displays and sonification has continuously increased. However, most research has focused on cognitive and functional mapping rather than emotional mapping. Moreover, there has not been much research on cultural differences in auditory displays. The present study compared user preferences for auditory emoticons in two countries: the USA and South Korea. Seventy students evaluated 112 auditory icons and 115 earcons with respect to 30 emotional adjectives. Results indicated that participants showed similar preferences within the same category (auditory icons or earcons), but different patterns when asked to select the better sound between the two categories. Implications for cultural differences in preference and directions for the future design and research of auditory emoticons are discussed.

    A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired

    The market penetration of user-centric assistive devices has rapidly increased in the past decades. Growth in computational power, accessibility, and cognitive device capabilities has been accompanied by significant reductions in weight, size, and price, as a result of which mobile and wearable equipment is becoming part of our everyday life. In this context, a key focus of development has been on rehabilitation engineering and on developing assistive technologies targeting people with various disabilities, including hearing loss, visual impairments, and others. Applications range from simple health monitoring such as sport activity trackers, through medical applications including sensory (e.g. hearing) aids and real-time monitoring of life functions, to task-oriented tools such as navigational devices for the blind. This paper provides an overview of recent trends in software- and hardware-based signal processing relevant to the development of wearable assistive solutions.

    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntactic highlighting or other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, thereby creating barriers for programmers who are blind but nevertheless use IDEs. This dissertation is focused on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their ability to convey visually based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially where information is visually structured. However, little is known about the usability of such techniques in software development. Therefore, we investigated whether audio-based techniques are capable of providing useful information about code structure to assist non-visual programmers. The major contributions in this dissertation are split into two parts. The first part explains our prior work investigating the major challenges in software development faced by non-visual programmers, specifically code navigation difficulties. It also discusses areas of improvement where additional features could be developed to make the programming environment more accessible to non-visual programmers. The second part focuses on studies aimed at evaluating the usability and efficacy of audio-based techniques for conveying the structure of a programming codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether these techniques could provide adequate support to assist non-visual programmers when navigating through lengthy codebases. In Part II, we discuss the methodological aspects of evaluating the above-mentioned techniques with the stakeholders and examine these techniques using an audio-based prototype designed to control audio timing, locations, and methods of interaction. A set of design guidelines is provided, based on the preceding evaluation, suggesting the inclusion of auditory feedback in the programming environment to improve code structure readability and understandability for non-visual programmers.
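
    As a hedged illustration of the kind of audio-parameter mapping the dissertation evaluates, the sketch below maps each line's nesting depth to a pitch, so that more deeply nested code sounds higher. The depth-to-frequency mapping, the use of indentation as a structural proxy, and the constants are assumptions made here for illustration, not the design studied in the dissertation.

```python
# Hedged sketch: sonifying code structure by mapping nesting depth to pitch.
# The specific mapping (depth -> frequency) is an assumption for illustration.

BASE_FREQ_HZ = 220.0   # pitch for top-level code (assumed)
STEP_RATIO = 1.25      # each nesting level raises pitch by 25% (assumed)

def nesting_depth(line: str, indent_width: int = 4) -> int:
    """Approximate structural depth from leading whitespace."""
    stripped = line.lstrip(" ")
    return (len(line) - len(stripped)) // indent_width

def tone_for_depth(depth: int) -> float:
    """Frequency (Hz) a synthesizer could play while this line is in focus."""
    return BASE_FREQ_HZ * (STEP_RATIO ** depth)

def sonify(source: str) -> list[tuple[str, float]]:
    """Return (line, frequency) pairs a screen reader could announce/play."""
    pairs = []
    for line in source.splitlines():
        if line.strip():  # skip blank lines
            pairs.append((line.strip(), tone_for_depth(nesting_depth(line))))
    return pairs

if __name__ == "__main__":
    sample = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
    for text, freq in sonify(sample):
        print(f"{freq:7.1f} Hz  {text}")
```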

    The effect of experience on the use of multimodal displays in a multitasking interaction

    Theories and previous work suggest that performance while multitasking can benefit from the use of displays that employ multiple modalities. Studies often show benefits of these multimodal displays, but not to the extent that theories of multimodal task-sharing might suggest. However, such studies often give users at least one type of display they are not accustomed to, often an auditory display, and compare their performance on these novel displays to a visual display with which most people are familiar. This leaves open a question regarding the effects of longer-term experience with multimodal displays. The current study investigated the effect of practice with multimodal displays, comparing two multimodal displays to a standard visuals-only display. Over the course of four sessions, participants practiced a list-searching secondary task on one of three display types (two auditory-plus-visual displays and one visual-only display) while performing a visual-manual task. Measures of search-task and primary-task performance, along with workload, visual behaviors, and perceived performance, were collected. Results of the study support previous work with regard to more visual time on the primary task for those using multimodal displays, and show that perceived helpfulness increased over time for those using the multimodal displays. However, the results also point to practice effects taking place almost equally across conditions, which suggests that the initial task-sharing behaviors seen with well-designed multimodal displays may not benefit as much from practice as hypothesized, or may require additional time to take hold. The results are discussed with regard to their use in research and in applying multimodal displays in the real world, as well as how they fit with theories of multimodal task-sharing.
    Ph.D.

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices, as they most optimally use the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be addressed in order to bring full inclusion to this population. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; its design is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
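
    One simplified way to picture the third system's use of audio for spatial targeting is to map the offset between the user's finger and an on-screen target to a stereo pan and a gain, so the target can be homed in on by ear. The sketch below is only an assumed approximation (a true binaural rendering would use HRTFs); the screen dimensions and the pan/gain formulas are illustrative choices, not the system described above.

```python
# Hedged sketch: guiding a touch toward an on-screen target with audio.
# A real binaural renderer (HRTF-based) is more involved; this only computes
# a stereo pan and a gain from the finger-to-target offset, as an illustration.

import math

SCREEN_WIDTH = 1080    # assumed screen size in pixels
SCREEN_HEIGHT = 1920

def pan_and_gain(finger: tuple[float, float],
                 target: tuple[float, float]) -> tuple[float, float]:
    """Return (pan, gain): pan in [-1, 1] (left..right), gain in [0, 1]."""
    dx = target[0] - finger[0]
    dy = target[1] - finger[1]
    # Pan follows the horizontal offset, clamped to the stereo field.
    pan = max(-1.0, min(1.0, dx / (SCREEN_WIDTH / 2)))
    # Gain rises as the finger gets closer to the target.
    distance = math.hypot(dx, dy)
    max_distance = math.hypot(SCREEN_WIDTH, SCREEN_HEIGHT)
    gain = 1.0 - (distance / max_distance)
    return pan, gain

if __name__ == "__main__":
    target = (900.0, 400.0)
    for finger in [(100.0, 1800.0), (500.0, 1000.0), (880.0, 420.0)]:
        pan, gain = pan_and_gain(finger, target)
        print(f"finger={finger} -> pan={pan:+.2f}, gain={gain:.2f}")
```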

    Concurrency in auditory displays for connected television

    Many television experiences depend on users being both willing and able to visually attend to screen-based information. Auditory displays offer an alternative method for presenting this information and could benefit all users. This thesis explores how this may be achieved through the design and evaluation of auditory displays involving varying degrees of concurrency for two television use cases: menu navigation and presenting related content alongside a television show. The first study, on the navigation of auditory menus, looked at onset asynchrony and word length in the presentation of spoken menus. The effects of these on task duration, accuracy, and workload were considered. Onset asynchrony and word length both had significant effects on task duration and accuracy, while workload was only affected by onset asynchrony. An optimum asynchrony was identified, which was the same for both long and short words, but better performance was obtained with the shorter words, which no longer overlapped. The second experiment investigated how disruption, workload, and preference are affected when presenting additional content accompanying a television programme. The content took the form of sound from different spatial locations or text on a smartphone, and the programme's soundtrack was either modified or left unaltered. Leaving the soundtrack unaltered or muting it negatively impacted user experience. Removing the speech from the television programme and presenting the secondary content as sound from a smartphone was the best auditory approach. This compared well with the textual presentation, resulting in less visual disruption and imposing a similar workload. Additionally, the thesis reviews the state of the art in television experiences and auditory displays. The human auditory system is introduced and important factors in the concurrent presentation of speech are highlighted. Conclusions about the utility of concurrency within auditory displays for television are made and areas for further work are identified.
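
    Onset asynchrony, the variable manipulated in the first study above, fixes how long after one spoken item begins the next one starts; with sufficiently short words, successive items no longer overlap. The sketch below computes start and end times and flags overlaps for a list of spoken-word durations; the durations and the 0.5 s asynchrony are made-up values for illustration, not the levels tested in the thesis.

```python
# Hedged sketch: scheduling concurrent spoken menu items with onset asynchrony.
# Durations and asynchrony values are illustrative, not taken from the thesis.

def schedule(durations_s: list[float], onset_asynchrony_s: float):
    """Return (start, end, overlaps_previous) for each spoken item."""
    items = []
    for i, dur in enumerate(durations_s):
        start = i * onset_asynchrony_s
        end = start + dur
        overlaps = i > 0 and start < items[-1][1]  # starts before previous ends
        items.append((start, end, overlaps))
    return items

if __name__ == "__main__":
    short_words = [0.4, 0.4, 0.4]   # assumed spoken durations in seconds
    long_words = [0.9, 0.9, 0.9]
    for label, durations in [("short", short_words), ("long", long_words)]:
        print(label, "words, 0.5 s onset asynchrony:")
        for start, end, overlaps in schedule(durations, 0.5):
            print(f"  {start:.1f}-{end:.1f} s  overlap={overlaps}")
```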

    Sound awareness: discussing the limits and possibilities of interaction and positive interdependence of people with visual impairments in synchronous Web systems

    We live in a network society in which information is the main economic and social asset. Interaction and positive interdependence, structuring elements of computer-mediated cooperative activities, are fundamental to the development of cognitive, interpersonal, and intrapersonal skills. Cooperative practices mediated by technological devices must take different user profiles into account. An analysis of the synchronous Web systems Google Docs and Word Online revealed weaknesses in accessibility and in the interaction possibilities for people with visual impairments, exposing gaps in the implementation of sound awareness features. These results led to the objective of this study: to analyze the limits and possibilities of using elements that support sound awareness in synchronous Web systems as a means of including people with visual impairments in cooperative actions. To validate the technological devices and analyze the interaction of visually impaired subjects, a Web chat system and a cooperative writing tool were developed, both with sound awareness support. Research protocols, complemented by semi-structured interviews, mapped the interaction data of five visually impaired subjects in the implemented systems, which were analyzed using two previously established categories: workspace awareness and cooperation. The analysis and discussion of the results revealed that combining sound awareness elements with navigability strategies made the systems accessible and strengthened the cooperative action of participants with visual impairments. The conceptual basis used, the systems developed, and the discussion of the results provide a set of information relevant to quality assurance and equity in synchronous Web systems, establishing the conditions for further studies aimed at deepening this field of knowledge.
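
    The sound awareness support described above can be pictured as a mapping from workspace events in the shared chat or editor to short, distinguishable audio cues that a visually impaired collaborator can monitor in the background. The sketch below is a hedged illustration of such a mapping; the event names and cue descriptions are assumptions, not the cues implemented in the study's systems.

```python
# Hedged sketch: mapping workspace events in a cooperative Web tool to audio
# cues for sound awareness. Event names and cues are illustrative assumptions.

AWARENESS_CUES = {
    "user_joined": "rising two-note chime",
    "user_left": "falling two-note chime",
    "paragraph_edited_by_other": "soft click near the edited region",
    "chat_message_received": "short marimba note",
}

def announce(event: str, details: str = "") -> str:
    """Return the cue a client could play (stand-in for real audio output)."""
    cue = AWARENESS_CUES.get(event, "neutral tick")
    return f"play: {cue}" + (f" ({details})" if details else "")

if __name__ == "__main__":
    print(announce("user_joined", "Ana entered the document"))
    print(announce("paragraph_edited_by_other", "paragraph 3 changed"))
    print(announce("chat_message_received"))
```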