
    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become the de facto input standard for mobile devices, as they make optimal use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have increasingly adopted smartphones and touchscreens. Although basic access is available, many accessibility issues remain before this population is fully included. One important challenge lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, each using a different modality, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through text-to-speech and gestural input; this study is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes map directions accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.

    StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible

    Blind people frequently encounter inaccessible dynamic touchscreens in their everyday lives that are difficult, frustrating, and often impossible to use independently. Touchscreens are often the only way to control everything from coffee machines and payment terminals to subway ticket machines and in-flight entertainment systems. Interacting with dynamic touchscreens is difficult non-visually because the visual user interfaces change, interactions often span multiple different screens, and it is easy to trigger interface actions accidentally while exploring the screen. To solve these problems, we introduce StateLens, a three-part reverse engineering solution that makes existing dynamic touchscreens accessible. First, StateLens reverse engineers the underlying state diagrams of existing interfaces from point-of-view videos, found online or taken by users, using a hybrid crowd-computer vision pipeline. Second, using the state diagrams, StateLens automatically generates conversational agents that guide blind users through specifying the tasks the interface can perform, allowing the StateLens iOS application to provide interactive guidance and feedback so that blind users can access the interface. Finally, a set of 3D-printed accessories enables blind people to explore capacitive touchscreens without the risk of triggering accidental touches on the interface. Our technical evaluation shows that StateLens can accurately reconstruct interfaces from stationary, hand-held, and web videos; and a user study of the complete system demonstrates that StateLens successfully enables blind users to access otherwise inaccessible dynamic touchscreens. Comment: ACM UIST 201
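    The state-diagram guidance described above can be sketched as a graph search: given an inferred diagram of screens and buttons, a shortest button sequence to the user's goal screen can be found with breadth-first search. The diagram below is a made-up coffee-machine example for illustration, not one reconstructed by StateLens.

```python
from collections import deque

# Hypothetical state diagram for a coffee-machine touchscreen
# (illustrative only; StateLens infers such diagrams from videos):
# screen -> {button label: next screen}
STATE_DIAGRAM = {
    "home":    {"Coffee": "size", "Tea": "size"},
    "size":    {"Small": "confirm", "Large": "confirm"},
    "confirm": {"Start": "brewing"},
    "brewing": {},
}

def guidance_steps(diagram, start, goal):
    """Breadth-first search over the state diagram, returning the
    shortest sequence of button presses that reaches the goal screen."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for label, nxt in diagram[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [label]))
    return None  # goal unreachable from start
```

    A conversational agent could then read out each step of the returned path ("press Coffee, then Small, then Start") as the user progresses through the screens.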

    A comparative evaluation of touch and pen gestures for adult and child users

    In this paper, we present the results of two user studies that compared the performance of touch-based and pen-based gesture input on capacitive touchscreens for both adult users and child users aged 8-11. Results showed that inputting gestures with a pen was significantly faster and more accurate than with touch for adult users; however, no significant effect of input method on performance was observed for child users. Similarly, the user experience evaluation showed that many adult users favoured one technique over the other and/or found one technique more comfortable to use than the other, while child users were mostly neutral. This trend, however, was not statistically significant.

    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and combines it with the detected touch location to predict the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax-correction performance and multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by producing significantly larger corrections at larger gap distances, and (3) maintains a significantly large distance between two users' fingers when they interact with the same object. These findings are promising for the development of future parallax-free interactive displays.
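    Assuming a simple geometric model (the abstract does not give the paper's exact correction model), the perceived touch location can be estimated by extending the ray from the eye through the detected touch point on the glass down to the display plane:

```python
def correct_parallax(eye, touch, gap):
    """Project the detected touch along the eye-to-touch ray onto the
    display plane (display at z = 0, touch surface at z = gap; all
    coordinates in millimetres). Returns the predicted perceived location.

    This is a sketch of the underlying geometry, not EyePACT's
    published algorithm."""
    ex, ey, ez = eye
    tx, ty = touch
    if ez <= gap:
        raise ValueError("the eye must be farther from the display than the touch surface")
    t = ez / (ez - gap)  # ray parameter at which z reaches the display plane
    return (ex + t * (tx - ex), ey + t * (ty - ey))
```

    When the eye is directly above the touch point the correction vanishes, and the correction grows with the gap, matching finding (2) above.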

    Interaction techniques for older adults using touchscreen devices : a literature review

    Several studies have investigated different interaction techniques and input devices for older adults using touchscreens. This literature review analyses the populations involved, the kinds of tasks executed, the apparatus, the input techniques, the feedback provided, the data collected, the authors' findings, and their recommendations. In conclusion, this review shows that age-related changes, previous experience with technologies, characteristics of handheld devices, and use situations need to be studied.

    Target size guidelines for interactive displays on the flight deck

    The avionics industry is seeking to understand the challenges and benefits of touchscreens on flight decks. This paper presents an investigation of interactive displays on the flight deck, focusing on the impact of target size, placement, and vibration on performance. A study was undertaken with search and rescue (SAR) crew members in an operational helicopter setting. The results are essential for understanding how to design effective touchscreen interfaces for the flight deck. They show that device placement, vibration, and target size have significant effects on targeting accuracy; however, increasing target size eliminates the negative effects of placement and vibration in most cases. The findings suggest that 15 mm targets are sufficiently large for non-safety-critical Electronic Flight Bag (EFB) applications. For interaction with fixed displays where pilots have to extend their arms, and for safety-critical tasks, interactive elements of about 20 mm are recommended.
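    As a minimal sketch of applying these guidelines in practice, the helpers below convert the suggested physical sizes to pixels for a given display density; the function names and the binary safety-critical/fixed-display split are simplifications introduced here, not part of the paper.

```python
def target_px(size_mm, ppi):
    """Convert a physical target size in millimetres to pixels
    for a display with the given pixel density (pixels per inch)."""
    return round(size_mm / 25.4 * ppi)

def recommended_min_mm(safety_critical, fixed_display):
    """Minimum target size suggested by the study: 15 mm for
    non-safety-critical EFB use, 20 mm when the task is safety
    critical or the display is fixed and requires arm extension."""
    return 20 if (safety_critical or fixed_display) else 15
```

    For example, on a 254 ppi panel a 15 mm target is 150 px wide, so a layout engine can enforce the guideline directly in pixels.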

    Suitability of virtual physics and touch gestures in touchscreen user interfaces for critical tasks

    The goal of this research was to examine whether modern touchscreen interaction concepts established on consumer electronic devices such as smartphones can be used in time-critical and safety-critical use cases such as machine control or healthcare appliances. Several prevalent interaction concepts, with and without touch gestures and virtual physics, were tested experimentally in common use cases to assess their efficiency, error rate, and user satisfaction during task completion. Based on the results, design recommendations for list scrolling and horizontal navigation in multi-page software dialogs are given.
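    A minimal sketch of the "virtual physics" list-scrolling concept evaluated here: after a flick gesture, the scroll velocity decays geometrically each frame until it falls below a stop threshold. The friction constant and thresholds are illustrative defaults, not values from the study.

```python
def simulate_flick(position, velocity, friction=0.95, dt=1.0 / 60, min_speed=5.0):
    """Inertial ("virtual physics") list scrolling after a flick:
    the scroll velocity (px/s) decays geometrically every simulated
    frame until it drops below a stop threshold. Returns the resting
    scroll position in pixels."""
    while abs(velocity) >= min_speed:
        position += velocity * dt
        velocity *= friction
    return position
```

    Whether such inertia helps or hurts in safety-critical control tasks is exactly the kind of question the experiments above address: the momentum feels natural on a smartphone but can overshoot a target list entry.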

    GTmoPass: Two-factor Authentication on Public Displays Using Gaze-touch Passwords and Personal Mobile Devices

    As public displays continue to deliver increasingly private and personalized content, there is a need to ensure that only legitimate users can access private information in sensitive contexts. While public displays can adopt authentication concepts similar to those used on public terminals (e.g., ATMs), authentication in public is subject to a number of risks: adversaries can uncover a user's password through (1) shoulder surfing, (2) thermal attacks, or (3) smudge attacks. To address this problem, we propose GTmoPass, an authentication architecture that enables multi-factor user authentication on public displays. The first factor is a knowledge factor: we employ a shoulder-surfing-resilient multimodal scheme that combines gaze and touch input for password entry. The second factor is a possession factor: users utilize their personal mobile devices, on which they enter the password. Credentials are securely transmitted to a server via Bluetooth beacons. We describe the implementation of GTmoPass and report on an evaluation of its usability and security, which shows that although authentication with GTmoPass is slightly slower than traditional methods, it protects against the three aforementioned threats.
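    The two-factor check can be sketched as follows; the token encoding, storage scheme, and function names are assumptions for illustration, not GTmoPass's actual protocol (which transmits credentials via Bluetooth beacons):

```python
import hashlib
import hmac

# Hypothetical server-side records created at pairing time: each
# registered device id (possession factor) maps to a hash of that
# user's gaze-touch password (knowledge factor).
REGISTERED = {
    "alice-phone": hashlib.sha256(b"gaze:L,touch:2,gaze:R,touch:7").hexdigest(),
}

def authenticate(device_id, entered_tokens):
    """Two-factor check: the device must be registered (possession
    factor) and the entered gaze/touch token sequence must hash to
    the stored credential (knowledge factor)."""
    expected = REGISTERED.get(device_id)
    if expected is None:
        return False  # unknown device: possession factor fails
    digest = hashlib.sha256(",".join(entered_tokens).encode()).hexdigest()
    # constant-time comparison to avoid leaking prefix matches
    return hmac.compare_digest(digest, expected)
```

    An attacker who shoulder-surfs the gaze-touch sequence still lacks the paired device, and one who steals the device still lacks the sequence, which is the point of combining the two factors.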