12 research outputs found

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
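    The contact-area sensing described above reduces, in practice, to classifying each touch event's contact ellipse. The sketch below shows one plausible way to bin a contact into two shapes and three orientations, matching the granularity the study found users could produce reliably; the function name, axis parameters, and thresholds are illustrative assumptions, not values from the paper.

```python
def classify_touch(major_axis_mm, minor_axis_mm, angle_deg):
    """Classify a finger contact into a shape and a coarse orientation.

    Inputs mirror what contact-sensing touchscreens typically report:
    the major/minor axes of the contact ellipse and its angle.
    Thresholds here are illustrative, not taken from the study.
    """
    # Shape: a near-circular contact reads as a fingertip touch,
    # an elongated one as a flattened-finger touch.
    aspect = major_axis_mm / max(minor_axis_mm, 1e-6)
    shape = "round" if aspect < 1.4 else "oblong"

    # Orientation: fold the angle into [0, 180) degrees and bin it
    # into three coarse directions.
    angle = angle_deg % 180.0
    if angle < 30 or angle >= 150:
        orientation = "vertical"
    elif angle < 90:
        orientation = "right-leaning"
    else:
        orientation = "left-leaning"
    return shape, orientation
```

    A screen-lock application like the one in the follow-up study could then treat each (shape, orientation) pair as one symbol of a passcode alphabet.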

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or prevalence of errors in a given modality impact a user's choice. Theories in human memory and attention are used to explain the users' speech and touch input coordination. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken. Users prefer touch input for navigation operations, but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality, instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual differences in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage. Teaching a modality first versus second increases the use of this modality in users' task performance.
In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting a design of an eyes-free multimodal information browser, (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is that of one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks.

    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes, we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.

    Math in the Dark: Tools for Expressing Mathematical Content by Visually Impaired Students

    Blind and visually impaired students are under-represented in the science, technology, engineering, and mathematics disciplines of higher education and the workforce. This is due primarily to the difficulties they encounter in trying to succeed in mathematics courses. While there are sufficient tools available to create Braille content, including the special Nemeth Braille used in the U.S. for mathematics constructs, there are very few tools to allow a blind or visually impaired student to create his/her own mathematical content in a manner that sighted individuals can use. The software tools that are available are isolated, do not interface well with other common software, and may be priced for institutional use instead of individual use. Instructors are unprepared or unable to interact with these students in a real-time manner. All of these factors combine to isolate the blind or visually impaired student in the study of mathematics. Nemeth Braille is a complete mathematical markup system in Braille, containing everything that is needed to produce quality math content at all levels of complexity. Blind and visually impaired students should not have to learn any additional markup languages in order to produce math content. This work addressed the needs of the individual blind or visually impaired student who must be able to produce mathematical content for course assignments, and who wishes to interact with peers and instructors on a real-time basis to share mathematical content. Two tools were created to facilitate mathematical interaction: a Nemeth Braille editor, and a real-time instant messenger chat capability that supports Nemeth Braille and MathML constructs. In the Visually Impaired view, the editor accepts Nemeth Braille input and displays the math expressions in a tree structure that allows sub-expressions to be expanded or collapsed. The Braille constructs can be translated to MathML for display within MathType.
Similarly, in the Sighted view, math constructs entered in MathType can be translated into Nemeth Braille. Mathematical content can then be shared between sighted and visually impaired users via the instant messenger chat capability. Using Math in the Dark software, blind and visually impaired students can work math problems fully in Nemeth Braille and can seamlessly convert their work into MathML for viewing by sighted instructors. The converted output has the quality of professionally produced math content. Blind and VI students can also communicate and share math constructs with a sighted partner via a real-time chat feature, with automatic translation in both directions, allowing VI students to obtain help in real-time from a sighted instructor or tutor. By eliminating the burden of translation, this software will help to remove the barriers faced by blind and VI students who wish to excel in the STEM fields of study.
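    The core of the Visually Impaired view is a translation from a parsed math expression to MathML. As a minimal sketch of that direction, the toy function below serialises an already-tokenised expression into MathML token elements; the function name and the simplistic token classification are assumptions for illustration, and real Nemeth translation (context-dependent digit indicators, fraction markers, and so on) is far more involved.

```python
def tokens_to_mathml(tokens):
    """Serialise a flat token list into Presentation MathML.

    Each token becomes one MathML token element: <mn> for numbers,
    <mi> for identifiers, <mo> for operators. A real Nemeth-to-MathML
    translator must first resolve Braille context rules to produce
    such a token stream.
    """
    out = ["<math>"]
    for tok in tokens:
        if tok.isdigit():
            out.append(f"<mn>{tok}</mn>")  # number
        elif tok.isalpha():
            out.append(f"<mi>{tok}</mi>")  # identifier (variable)
        else:
            out.append(f"<mo>{tok}</mo>")  # operator
    out.append("</math>")
    return "".join(out)
```

    For example, the token stream `["x", "+", "2"]` yields `<math><mi>x</mi><mo>+</mo><mn>2</mn></math>`, which MathType and MathML-aware chat clients can render for the sighted partner.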

    Using an essentiality and proficiency approach to improve the web browsing experience of visually impaired users

    Increased volumes of content exacerbate the Web accessibility issues faced by people with visual impairments. Essentiality & Proficiency is presented as one method of easing access to information in Websites by addressing the volume of content coupled with how it is presented. This research develops the concept of Essentiality for Web authors. A preliminary survey was conducted to understand the accessibility issues faced by people with visual impairments. Structured interviews were conducted with twelve participants, and a further 26 participants responded to online questionnaires. In total there were 38 participants (both sexes), aged 18 to 54 years. 68% had visual impairments, three had motor issues, one had a hearing impairment and two had cognitive impairments. The findings show that the overload of information on a page was the most prominent difficulty experienced when using the Web. The findings from the preliminary survey fed into an empirical study. Four participants aged 21 to 54 years (both sexes) from the preliminary survey were presented with a technology demonstrator to check the feasibility of Essentiality & Proficiency in a real environment. It was found that participants were able to identify and appreciate the reduced volume of information. This initiated the iterative development of the prototype tool. Microformatting is used in the development of the Essentiality & Proficiency prototype tool to allow the reformulated Web pages to remain standards compliant. There is a formative evaluation of the prototype tool using an experimental design methodology. A convenience sample of nine participants (both sexes) with a range of visual impairments, aged 18 to 52, performed tasks on a computer under three essentiality conditions. At an alpha level of .05, the evaluation of the Essentiality & Proficiency tool has been shown to offer some improvement in accessing information.
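    The essentiality idea can be pictured as authors tagging each content block with a level (most essential down to peripheral) and the page being reformulated to show only blocks at or below the reader's chosen threshold - roughly what the three essentiality conditions in the evaluation varied. The levels, data shape, and function below are illustrative assumptions; the actual tool annotates standard HTML via microformats rather than Python tuples.

```python
def filter_by_essentiality(blocks, max_level):
    """Keep only the content blocks essential enough for this reader.

    blocks: list of (essentiality_level, html_fragment) pairs, where
    level 1 is core content and higher numbers are more peripheral.
    Returns the fragments at or below the chosen threshold, in order.
    """
    return [html for level, html in blocks if level <= max_level]

# A hypothetical annotated page: levels assigned by the author.
page = [
    (1, "<h1>Weather warning</h1>"),
    (1, "<p>Severe storms are expected tonight.</p>"),
    (2, "<p>Related articles and archives.</p>"),
    (3, "<aside>Advertising</aside>"),
]
```

    Under the strictest condition (`max_level=1`) a screen-reader user hears only the two core fragments, which is the reduction in volume participants reported noticing and appreciating.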

    Research on a Smartphone-Based Mobility Support System Architecture for Visually Impaired People (スマートフォンを用いた視覚障碍者向け移動支援システムアーキテクチャに関する研究)

    Degree type: Doctorate by coursework. Examination committee: (Chair) Prof. Ken Sakamura, Prof. Noboru Koshizuka, Prof. Jun Rekimoto, Prof. Akihiro Nakao, Prof. Toru Ishikawa (all The University of Tokyo). The University of Tokyo (東京大学)

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method allows people with visual impairments to hear exactly what is on the screen, but with significant usability problems in a multitasking environment. Screen reader users must infer the state of on-going tasks spanning multiple graphical windows from a single, serial stream of speech. In this dissertation, I explore a new approach to enabling auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort. To develop this method, I studied the literature on existing auditory displays, working user behavior, and theories of human auditory perception and processing. I then conducted a user study to observe problems encountered and techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations on Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6): 1. Faster, accurate access to speech utterances through concurrent speech streams. 2. Better awareness of peripheral information via concurrent speech and sound streams. 3. Increased information bandwidth through concurrent streams. 4. More efficient information seeking enabled by ubiquitous tools for browsing and searching. 5. Greater accuracy in describing unfamiliar applications learned using a consistent, task-based user interface. 6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
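    To make the "scripts map GUI components to task definitions" idea concrete, here is one plausible shape such a script could take: each task names the widgets it covers and the concurrent stream that announces it. The widget names, task names, fields, and helper function are hypothetical illustrations, not Clique's actual script format.

```python
# Illustrative Clique-style script: tasks bind named GUI widgets to the
# concurrent speech stream that narrates them. All names are invented.
EMAIL_SCRIPT = {
    "application": "MailClient",
    "tasks": [
        {
            "name": "read message",
            "widgets": ["message_list", "preview_pane"],
            "speech_stream": "primary",     # foreground narration
        },
        {
            "name": "monitor new mail",
            "widgets": ["inbox_counter"],
            "speech_stream": "peripheral",  # concurrent ambient stream
        },
    ],
}

def streams_for(script):
    """Group task names by the concurrent stream that announces them."""
    streams = {}
    for task in script["tasks"]:
        streams.setdefault(task["speech_stream"], []).append(task["name"])
    return streams
```

    The point of the mapping is that the auditory display drives the underlying GUI through the widget bindings while the user hears only task-level streams - the "primary" stream carries the active task and "peripheral" streams carry background awareness, which is how concurrent tasks stay distinguishable in audio.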

    Designing Nonvisual Bookmarks for Mobile PDA Users
