379 research outputs found

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become the de facto input standard for mobile devices because they make optimal use of the limited input and output space imposed by the form factor. In recent years, people who are blind and visually impaired have increasingly adopted smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be addressed before this population is fully included. One important challenge lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration motors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy tools, and video games.
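    The core idea behind spatial audio on a touchscreen can be sketched roughly as follows: map the horizontal offset between the finger and an on-screen target to a left/right level difference, so the target "sounds" to one side of the finger. This is a hypothetical illustration, not the dissertation's implementation; the constant-power pan law, function name, and coordinate conventions are assumptions, and true binaural rendering would use HRTFs rather than a simple level-difference cue.

    ```python
    import math

    def pan_gains(touch_x: float, target_x: float, width: float) -> tuple:
        """Constant-power stereo pan: returns (left_gain, right_gain) so a
        target left of the finger sounds louder in the left channel, and
        vice versa. Coordinates are in pixels; width is the screen width."""
        # Signed offset clamped to [-1, 1]: negative = target left of finger.
        offset = max(-1.0, min(1.0, (target_x - touch_x) / width))
        # Map the offset to a pan angle in [0, pi/2]:
        # 0 = hard left, pi/4 = center, pi/2 = hard right.
        angle = (offset + 1.0) * math.pi / 4.0
        # cos^2 + sin^2 = 1, so total power stays constant while panning.
        return math.cos(angle), math.sin(angle)

    # Target directly under the finger: equal gain in both channels.
    left, right = pan_gains(160.0, 160.0, 320.0)
    ```

    As the finger scans toward the target the two gains converge, giving continuous non-visual feedback about the target's direction.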

    Using Games to Practice Screen Reader Gestures

    Master's thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2021. Nowadays, a smartphone is fundamental to many aspects of our lives. Smartphones have evolved from a basic communication tool into a multi-purpose one that lets us talk with colleagues and friends and obtain almost any information or entertainment. Android and iOS, the most popular mobile operating systems, include built-in screen readers that make smartphones generally accessible to blind people through gestures and help them get more out of their devices. However, users experience difficulties due to unfamiliarity with the gestures and a lack of interaction with their touchscreens. One possible way to improve the accessibility of these technologies is through games that teach how to perform a gesture correctly and explain how it can be used, as there has lately been growing interest in video games as an innovative educational tool. We developed Games for Gestures, a set of accessible games for discovering and learning the gestures Google TalkBack offers; our goal is to explore whether mobile accessible games can serve as a gesture discovery and practice method. Corda focuses on teaching navigation with Explore by Touch. Foguete focuses on directional left and right swipes and on teaching Swipe to Explore. Guarda Redes focuses on the more advanced gestures. To evaluate our games, we ran a study in which participants played them over a period of 5 days. Afterwards, we conducted audio-recorded remote interviews with questions about the games and participants' overall perception of the gestures. Our results suggest that accessible games can play an important role in learning gestures, as they offer a playful method of learning, particularly for less experienced users. This, in turn, would increase users' autonomy and inclusion, as the learning process becomes easier and more enjoyable.
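    Recognizing the kind of directional swipe these games drill comes down to comparing the dominant axis of finger movement against a minimum-distance threshold. The sketch below is a minimal illustration under assumed conventions (the threshold value and function name are hypothetical, not TalkBack's actual recognizer):

    ```python
    def classify_swipe(x0, y0, x1, y1, min_dist=48.0):
        """Classify a one-finger stroke as 'left', 'right', 'up', 'down',
        or 'tap' from its start (x0, y0) and end (x1, y1) in pixels."""
        dx, dy = x1 - x0, y1 - y0
        # Too little movement on either axis: treat it as a tap.
        if max(abs(dx), abs(dy)) < min_dist:
            return "tap"
        if abs(dx) >= abs(dy):            # horizontal movement dominates
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"  # screen y grows downward
    ```

    A game like Foguete could call such a classifier on each stroke and reward the player when the detected direction matches the prompted one.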

    Human-powered smartphone assistance for blind people

    Mobile devices are fundamental tools for inclusion and independence. Yet there are still many open research issues in smartphone accessibility for blind people (Grussenmeyer and Folmer 2017). Currently, learning how to use a smartphone is non-trivial, especially since the need to learn new apps and accommodate updates never ceases. When first transitioning from a basic feature phone, people have to adapt to new paradigms of interaction. Where feature phones had a finite set of applications and functions, users can extend the possible functions and uses of a smartphone by installing new third-party applications. Moreover, the interconnectivity of these applications means that users can explore a seemingly endless set of workflows across applications. At the same time, the fragmented nature of development on these devices means users must build a different mental model for each application. These characteristics make smartphone adoption a demanding task, as we found in our eight-week longitudinal study of smartphone adoption by blind people. We conducted multiple studies to characterize the smartphone challenges blind people face, and found that people often require synchronous, co-located assistance from family, peers, friends, and even strangers to overcome the different barriers they encounter. However, help is not always available, especially given the variability in barriers, individual support networks, and current location. In this dissertation we investigated if and how in-context human-powered solutions can be leveraged to improve current smartphone accessibility and ease of use. Building on a comprehensive understanding of the smartphone challenges faced and the coping mechanisms employed by blind people, we explored how human-powered assistive technologies can facilitate use. The thesis of this dissertation is: human-powered smartphone assistance by non-experts is effective and impacts perceptions of self-efficacy.

    Evaluation of the Accessibility of Touchscreens for Individuals who are Blind or have Low Vision: Where to go from here

    Touchscreen devices are well integrated into daily life and can be found in both personal and public spaces, but the inclusion of accessible features and interfaces continues to lag behind technology's exponential advancement. This thesis explores the experiences of individuals who are blind or have low vision (BLV) when interacting with non-tactile touchscreens such as smartphones, tablets, smartwatches, coffee machines, smart home devices, kiosks, ATMs, and more. The goal of this research is to create a set of recommended guidelines for designing and developing personal devices and shared public technologies with accessible touchscreens. The study consists of three phases: first, an exploration of existing research on the accessibility of non-tactile touchscreens; second, semi-structured interviews with 20 BLV individuals to address accessibility gaps in previous work; and finally, a survey to better understand the experiences, thoughts, and barriers of BLV individuals interacting with touchscreen devices. Common themes include loss of independence, the lack of (or uncertainty about) accessibility features, and the need and desire for improvements. Common approaches to interaction were the use of raised tactile markings, asking for sighted assistance, and avoiding touchscreen devices altogether. These findings informed a set of recommended guidelines, including a universal feature setup, the setup of accessibility settings, a universal headphone jack position, tactile feedback, an ask-for-help button, situational lighting, and the consideration of time.

    Challenges Faced by Persons with Disabilities Using Self-Service Technologies

    Foreseeable game-changing solutions for self-service technologies (SSTs) will allow better universal access by building features that are easy and intuitive to use into designs from the start. Additional robotic advances will allow better and easier delivery of goods to consumers. Improvements in artificial intelligence will enable better communication through natural language and alternative forms of communication; artificial intelligence will also aid consumers at SSTs by remembering their preferences and needs. Across all of these foreseeable solutions, people with disabilities should be consulted when new and improved SSTs are being developed, allowing each SST to reach its full potential.

    Impact of universal design ballot interfaces on voting performance and satisfaction of people with and without vision loss

    Since the Help America Vote Act (HAVA) of 2002 addressed improvements to voting systems and voter access through the use of electronic technologies, electronic voting systems have improved in U.S. elections. However, voters with disabilities have been disappointed and frustrated because they have not been able to vote privately and independently (Runyan, 2007). Voting accessibility for individuals with disabilities has generally been accomplished through specialized designs that add alternative inputs (e.g., headphones with a tactile keypad for audio output, sip-and-puff devices) and outputs (e.g., audio output) to existing hardware and/or software. However, while these add-on features may technically be accessible, they are often complex and difficult for poll workers to set up, and they require more time from the targeted voters with disabilities than the direct touch that enables voters without disabilities to select any candidate in a particular contest at any time. To address the complexities and inequities of the accessible alternatives, a universal design (UD) approach was used to design two experimental ballot interfaces, EZ Ballot and QUICK Ballot, that seamlessly integrate accessible features (e.g., audio output) with the goal of designing one voting system for all. EZ Ballot presents information linearly (i.e., one candidate's name at a time), and voters answer with Yes or No inputs, which requires no searching (i.e., finding a particular name). QUICK Ballot presents multiple names and allows users to choose one using direct-touch or gesture-touch interactions (e.g., a drag-and-lift gesture). Despite the shared goal of providing one type of voting system for all voters, each ballot has a unique selection and navigation process designed to facilitate access and participation in voting.
Thus, my research plan was to examine the effectiveness of the two UD ballots, primarily with respect to their different ballot structures, in facilitating voting performance and satisfaction for people with a range of visual abilities, including those with blindness or vision loss. The findings show that voters across this range of visual abilities were able to use both ballots independently. However, as expected, voter performance and preferences differed across the range of visual abilities: non-sighted voters made fewer errors on the linear ballot (EZ Ballot), while partially sighted and sighted voters completed the random-access ballot (QUICK Ballot) in less time. In addition, a higher percentage of non-sighted participants preferred the linear ballot, and a higher percentage of sighted participants preferred the random-access ballot. The main contributions of this work are: 1) utilizing UD principles to design ballot interfaces that can be differentially usable by voters with a range of abilities; 2) demonstrating the feasibility of two UD ballot interfaces with voters with a range of visual abilities; and 3) informing the design of other applications for people with a range of visual abilities. The study suggests that the two ballots, both designed according to UD principles but with different weightings of those principles, can be differentially usable by individuals with a range of visual abilities. This clearly distinguishes the work from previous efforts, which focused on developing one UD solution for everyone: UD does not dictate a single, one-size-fits-all solution, but rather supports flexibility in use, offering a new perspective on human-computer interaction (Stephanidis, 2001).