
    Computer interfaces for the visually impaired

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology to persons with a vision-related handicap are detailed. The first is research into the most effective means of integrating existing adaptive technologies into information systems, conducted to combine off-the-shelf products with adaptive equipment into cohesive, integrated information-processing systems. Details describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile access to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public-domain architecture of the X Window System to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.

    An Ambiguous Technique for Nonvisual Text Entry

    Text entry is a common daily task for many people, but it can be a challenge for people with visual impairments when using virtual touchscreen keyboards that lack physical key boundaries. In this thesis, we investigate using a small number of gestures to select from groups of characters to remove most or all dependence on touch locations. We leverage a predictive language model to select the most likely characters from the selected groups once a user completes each word. Using a preliminary interface with six groups of characters based on a Qwerty keyboard, we find that users are able to enter text with no visual feedback at 19.1 words per minute (WPM) with a 2.1% character error rate (CER) after five hours of practice. We explore ways to optimize the ambiguous groups to reduce the number of disambiguation errors. We develop a novel interface named FlexType with four character groups instead of six in order to remove all remaining location dependence and enable one-handed input. In a user study, we compare optimized groups with and without constraining the group assignments to alphabetical order. We find that users enter text with no visual feedback at 12.0 WPM with a 2.0% CER using the constrained groups after four hours of practice; there was no significant difference from the unconstrained groups. We improve FlexType based on user feedback and tune the recognition algorithm parameters based on the study data. We conduct an interview study with 12 blind users to assess the challenges they encounter while entering text and to solicit feedback on FlexType, and we further incorporate this feedback into the interface. We evaluate the improved interface in a longitudinal study with 12 blind participants. On average, participants entered text at 8.2 WPM using FlexType, 7.5 WPM using a Qwerty keyboard with VoiceOver, and 26.9 WPM using Braille Screen Input.
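    The core mechanism described above — typing a sequence of ambiguous character groups and letting a language model pick the intended word — can be sketched as follows. This is a minimal illustration, not FlexType's actual implementation: the six groups and word frequencies below are hypothetical, and a real system would use a much larger lexicon and a stronger language model.

```python
# Hypothetical six-group ambiguous keyboard (groups are illustrative, not FlexType's).
GROUPS = {
    0: set("qwert"),
    1: set("yuiop"),
    2: set("asdf"),
    3: set("ghjkl"),
    4: set("zxcv"),
    5: set("bnm"),
}
CHAR_TO_GROUP = {c: g for g, chars in GROUPS.items() for c in chars}

# Toy unigram "language model": word -> relative frequency (illustrative values).
FREQ = {"the": 0.9, "toe": 0.1, "tie": 0.2, "hello": 0.5, "cat": 0.4}

def code(word):
    """Group-index sequence produced by typing `word` ambiguously."""
    return tuple(CHAR_TO_GROUP[c] for c in word)

def disambiguate(group_seq, lexicon=FREQ):
    """Return the most probable lexicon word matching the selected group sequence."""
    candidates = [w for w in lexicon if code(w) == tuple(group_seq)]
    return max(candidates, key=lexicon.get) if candidates else None
```

    For example, "toe" and "tie" collide under these groups, and the model resolves the collision in favor of the more frequent word; reducing such collisions is exactly the group-optimization problem the thesis explores.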

    Talking to computers

    A popular belief among UX designers is that the more voice user interfaces (e.g., Alexa, Siri, Google Assistant) speak and behave like people, the more functional they will be. But conversational mimicry is not the only way a screenless computer can communicate information. The scope of sounds humans can interpret, manipulate, and make is broad. This project seeks to identify ways designers can mine this domain for interaction cues that promote a deeper understanding of digital content and the systems that deliver it.

    Investigating the relationships between preferences, gender, and high school students' geometry performance

    In this quantitative study, the relationships between high school students' preference for solution methods, geometry performance, task difficulty, and gender were investigated. Data were collected from 161 high school students from six schools in a county in central Florida in the United States. The study was conducted during the 2013-2014 school year. The participants represented a wide range of socioeconomic status, came from several grades (10-12), and were enrolled in different mathematics courses (Algebra 2, Geometry, Financial Algebra, and Pre-calculus). Data were collected primarily with a geometry test and a geometry questionnaire. Using a think-aloud protocol, a short interview was also conducted with some students. For the purpose of statistical analysis, students' preferences for solution methods were quantified into numeric values, and a visuality score was obtained for each student. Students' visuality scores ranged from -12 to +12 and were used to assess their preference for solution methods. A standardized test score was used to measure students' geometry performance. The data analysis indicated that the majority of students were visualizers. The statistical analysis revealed no association between preference for solution methods and students' geometry performance. The preference for solving geometry problems using either visual or nonvisual methods was not influenced by task difficulty: students were equally likely to employ visual and nonvisual solution methods regardless of the task difficulty. Gender was significant in geometry performance but not in preference for solution methods; female students' geometry performance was significantly higher than male students'.
    The findings of this study suggest that instruction should incorporate both visual and nonvisual teaching strategies in mathematics lesson activities in order to develop preference for both visual and nonvisual solution methods.
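    A visuality score ranging from -12 to +12 is consistent with coding each of 12 task solutions on a three-point scale and summing. The sketch below illustrates one such scheme; the exact coding (+1 visual, -1 nonvisual, 0 mixed) and the classification thresholds are assumptions for illustration, not necessarily the instrument used in the study.

```python
# Hypothetical visuality-score computation: each of 12 tasks is coded
# +1 (visual solution), -1 (nonvisual solution), or 0 (no clear preference),
# and the codes are summed, giving a score in [-12, +12].
def visuality_score(task_codes):
    if len(task_codes) != 12:
        raise ValueError("expected codes for exactly 12 tasks")
    if any(c not in (-1, 0, 1) for c in task_codes):
        raise ValueError("each code must be -1, 0, or +1")
    return sum(task_codes)

def classify(score):
    """Label a student by score sign (threshold at zero is illustrative)."""
    if score > 0:
        return "visualizer"
    if score < 0:
        return "nonvisualizer"
    return "mixed"
```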

    Web-based multimodal graphs for visually impaired people

    This paper describes the development and evaluation of Web-based multimodal graphs designed for visually impaired and blind people. The information in the graphs is conveyed through haptic and audio channels. The motivation for this work is to address problems faced by visually impaired people in accessing graphical information on the Internet, particularly the common types of graphs used for data visualization. In our work, line graphs, bar charts, and pie charts are accessible through a force feedback device, the Logitech WingMan Force Feedback Mouse. Pre-recorded sound files are used to present graph contents to users. To test the usability of the developed Web graphs, an evaluation was conducted with bar charts as the experimental platform. The results showed that the participants could successfully use the haptic and audio features to extract information from the Web graphs.
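    The audio channel in such a system ultimately amounts to a mapping from data values to sounds. As a minimal sketch of the idea, the code below maps bar heights to tone frequencies, with higher bars producing higher pitches; the linear mapping and the 220-880 Hz range are assumptions for illustration (the paper itself used pre-recorded sound files rather than synthesized tones).

```python
# Hypothetical pitch mapping for an auditory bar chart: higher value -> higher pitch.
def value_to_pitch(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Linearly map a data value onto a tone frequency in Hz."""
    if vmax == vmin:
        return f_low  # degenerate chart: all bars equal
    t = (value - vmin) / (vmax - vmin)
    return f_low + t * (f_high - f_low)

def sonify_bars(values):
    """Return one (bar_index, frequency) pair per bar, to be played left to right."""
    vmin, vmax = min(values), max(values)
    return [(i, value_to_pitch(v, vmin, vmax)) for i, v in enumerate(values)]
```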

    Designing Accessible Nonvisual Maps

    Nonvisual maps have long required special equipment and training to use; Google Maps, ESRI products, and other commonly used digital maps are entirely visual and thus inaccessible to people with visual impairments. This project presents the design and evaluation of an easy-to-use digital auditory map and an interactive 3D-model map. A co-design study was also undertaken to discover tools for an ideal nonvisual navigational experience. Baseline results of both studies are presented so future work can improve on the designs. The user evaluation revealed that both prototypes were moderately easy to use. An ideal nonvisual navigational experience, according to these participants, consists of both an accurate turn-by-turn navigation system and an interactive map. Future work needs to focus on developing appropriate tools to enable this ideal experience.

    Allocentric spatial performance higher in early-blind and sighted adults than in retinopathy-of-prematurity adults

    The question of whether people totally blind since infancy process allocentric or ‘external’ spatial information like the sighted has caused considerable debate within the literature. Due to the extreme rarity of the population, researchers have often included individuals with Retinopathy of Prematurity (RoP – over-oxygenation at birth) within the sample. However, RoP is inextricably confounded with prematurity per se, and prematurity without visual disability has been associated with spatial processing difficulties. In this experiment, blindfolded sighted participants and two groups of functionally totally blind participants heard text descriptions from a survey (allocentric) or route (egocentric) perspective. One blind group had lost their sight due to RoP and the other before 24 months of age. The accuracy of participants’ mental representations derived from the text descriptions was assessed via questions and maps. The RoP participants had lower scores than the sighted and early-blind participants, who performed similarly. In other words, it was not visual impairment alone that resulted in impaired allocentric spatial performance in this task, but visual impairment together with RoP. This finding may help explain the contradictions within the existing literature on the role of vision in allocentric spatial processing.

    Development of Tangible Code Blocks for the Blind and Visually Impaired

    The fields of Science, Technology, Engineering, and Mathematics (STEM) have been growing at an accelerating rate, and knowing how to program has become a key skill for entering these fields. However, many students find programming difficult. The block-based programming language Scratch was specifically designed to lower hurdles to learning how to program for sighted students. Unfortunately, although very effective and widely used in K-12 classrooms, Scratch, like other block-based languages, is inaccessible to students who are blind and visually impaired (BVI). This thesis is part of a larger project to make the Scratch environment accessible to BVI students. The focus of this thesis is on creating a tangible code block design that: 1) is accessible to BVI students, 2) retains Scratch's reduced need to struggle with syntax, 3) allows code construction through action, 4) supports co-construction with other BVI and sighted students, and 5) can create moderately sized programs at low cost. The first several parts of this thesis consider the design and assessment process for the code blocks, which went through two iterations. The four major components of the first design iteration were: 1) passive blocks, 2) local edge-shape connectivity between blocks defining the program syntax, 3) telescoping tubing to allow nested expressions where valid, and 4) haptically legible commands for both Braille and non-Braille users. The first iteration of the block design was compared to a text-based method in building and correcting operator expressions that included both simple and nested expressions of the arithmetic, relational, and logical operators. BVI participants produced correct code significantly more often with the code blocks than with the text method.
    Although the text method was faster, it did not account for the additional time that would be needed to identify and change incorrect code before a program could be run. One weakness of the first iteration was that it was difficult for BVI participants to easily determine connectivity between validly connecting code blocks. The second design iteration considered the effect of embedding different degrees of magnetic attraction within the local shape connection to improve identification of the connectivity. It also considered how to represent commands that had restrictions beyond those found with most other code block types. In particular, we considered different “stopper” designs to prevent numeric literals from being placed in the left slot of a “set” command, which can only accept a variable. A set of studies evaluating the ability of BVI participants to identify the connectivity between blocks found that magnetic attraction within the connection significantly improved accuracy and ease of use, with the stronger magnetic connections preferred. They also found that a stopper design could be used for “exceptions”, with the longer stopper aligned with the local connection preferred. The final part of the thesis examines the use of the code blocks by the targeted population (BVI students in middle school) in a classroom setting within the context of the entire nonvisual interface. To do this, two-day code camps were conducted with BVI middle school students and recorded on video and audio. Qualitative content analysis was used to verify that the students interacted with the system as intended by the code block design. Results suggest that the students did interact with the code blocks as intended, though minor improvements should be made to increase ease of use. Participants appeared to have a positive experience with the code blocks and the system overall.

    Human-powered smartphone assistance for blind people

    Mobile devices are fundamental tools for inclusion and independence. Yet there are still many open research issues in smartphone accessibility for blind people (Grussenmeyer and Folmer 2017). Currently, learning how to use a smartphone is non-trivial, especially when we consider that the need to learn new apps and accommodate updates never ceases. When first transitioning from a basic feature phone, people have to adapt to new paradigms of interaction. Where feature phones had a finite set of applications and functions, users can extend the possible functions and uses of a smartphone by installing new third-party applications. Moreover, the interconnectivity of these applications means that users can explore a seemingly endless set of workflows across applications. The fragmented nature of development on these devices also results in users needing to create different mental models for each application. These characteristics make smartphone adoption a demanding task, as we found in our eight-week longitudinal study of smartphone adoption by blind people. We conducted multiple studies to characterize the smartphone challenges that blind people face, and found that people often require synchronous, co-located assistance from family, peers, friends, and even strangers to overcome the different barriers they face. However, help is not always available, especially given the variation in barriers, individual support networks, and current location. In this dissertation, we investigated if and how in-context human-powered solutions can be leveraged to improve current smartphone accessibility and ease of use. Building on a comprehensive knowledge of the smartphone challenges faced and coping mechanisms employed by blind people, we explored how human-powered assistive technologies can facilitate use. The thesis of this dissertation is: human-powered smartphone assistance by non-experts is effective and impacts perceptions of self-efficacy.