
    Investigation of dynamic three-dimensional tangible touchscreens: Usability and feasibility

    The ability for touchscreen controls to move from two physical dimensions to three may soon be realized. Though solutions exist for enhanced tactile touchscreen interaction using vibrotactile devices, no definitive commercial solution yet exists for giving real, physical shape to the virtual buttons on a touchscreen display. Of the many next steps in interface technology, this paper concentrates on the path leading to tangible, dynamic touchscreen surfaces. An experiment was performed that explored the usage differences between a flat touchscreen and one augmented with raised surface controls. The results were mixed. The combination of tactile and visual modalities had a negative effect on task completion time when visual attention was focused on a single task (single-target task time increased by 8% and serial-target task time increased by 6%). On the other hand, the dual modality had a positive effect on error rate when visual attention was divided between two tasks (the serial-target error rate decreased by 50%). In addition to the experiment, this study also investigated the feasibility of creating a dynamic, three-dimensional, tangible touchscreen. A new interface solution may be possible by inverting the traditional touchscreen architecture and integrating emerging technologies such as organic light-emitting diode (OLED) displays and electrorheological-fluid-based tactile pins.
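    To make the shape-changing idea concrete, the sketch below (Python, entirely hypothetical; the paper discusses hardware feasibility, not software) rasterizes virtual button rectangles onto a coarse grid of binary tactile pins so that the pins beneath each on-screen button are raised. The Button and PinGrid names, the grid size, and the pin pitch are all assumptions for illustration.

        # Illustrative sketch only: raise the tactile pins that sit under
        # each virtual button's screen rectangle. Grid size and pin pitch
        # are hypothetical, not from the paper.
        from dataclasses import dataclass

        @dataclass
        class Button:
            x: int          # top-left corner, in pixels
            y: int
            width: int
            height: int

        class PinGrid:
            """A coarse grid of binary tactile pins behind the display."""

            def __init__(self, cols: int, rows: int, pin_pitch_px: int):
                self.cols, self.rows = cols, rows
                self.pitch = pin_pitch_px          # pixels covered per pin
                self.raised = [[False] * cols for _ in range(rows)]

            def render(self, buttons: list[Button]) -> None:
                # Lower every pin, then raise those whose centers fall
                # inside some button's rectangle.
                self.raised = [[False] * self.cols for _ in range(self.rows)]
                for b in buttons:
                    for r in range(self.rows):
                        for c in range(self.cols):
                            cx = c * self.pitch + self.pitch // 2
                            cy = r * self.pitch + self.pitch // 2
                            if (b.x <= cx < b.x + b.width
                                    and b.y <= cy < b.y + b.height):
                                self.raised[r][c] = True

        # Usage: a 32x18 pin grid behind a 1920x1080 screen, one button.
        grid = PinGrid(cols=32, rows=18, pin_pitch_px=60)
        grid.render([Button(x=120, y=600, width=240, height=120)])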

    Supporting Eyes-Free Human–Computer Interaction with Vibrotactile Haptification

    The sense of touch is crucial when we use our hands in complex tasks; some tasks we learn to do even without sight, relying only on the sense of touch in our fingers and hands. Modern touchscreen devices, however, have lost much of that tactile feeling by removing physical controls from the interaction. Touch is also underutilized in interactions with technology and could provide new ways of supporting users. In many situations, users of information technology cannot devote their full visual and mental attention to the interaction. Humans can use their sense of touch more comprehensively and can learn to understand tactile information while interacting with information technology. This thesis introduces a set of experiments that evaluate human capabilities to notice and understand tactile information provided by current actuator technology, and further introduces examples of haptic user interfaces (HUIs) for eyes-free use scenarios. The thesis evaluates the benefits of such interfaces for users and concludes with guidelines and methods for creating this kind of user interface. The experiments can be divided into three groups. The first group, comprising the first two experiments, evaluated the detection of vibrotactile stimuli and the interpretation of the abstract meaning of vibrotactile feedback. The second group evaluated how to design rhythmic vibrotactile tactons to serve as basic vibrotactile primitives for HUIs. The last group of two experiments evaluated how these HUIs benefit users in distracted, eyes-free interaction scenarios. The primary aim of this series of experiments was to evaluate whether current actuation technology could be used more comprehensively than in present-day solutions limited to simple haptic alerts and notifications, and thus whether comprehensive use of vibrotactile feedback in interactions would provide additional benefits for users compared with current haptic and non-haptic interaction methods. The main finding is that with more comprehensive HUIs in eyes-free, distracted-use scenarios, such as driving a car, the user's main task (driving) is performed better. Furthermore, users liked the comprehensively haptified user interfaces.
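    To make the notion of a rhythmic tacton concrete, here is a minimal sketch, assuming an actuator driven by (duration, amplitude) segments where amplitude 0 is a pause. The Tacton representation and the two example patterns are illustrative assumptions, not the thesis's actual designs.

        # A tacton as a sequence of (duration in ms, amplitude 0.0-1.0)
        # segments. Rhythm, i.e. the temporal pattern, carries the meaning,
        # so tactons stay distinguishable even on simple vibration motors.
        Tacton = list[tuple[int, float]]

        SHORT_SHORT: Tacton = [(80, 1.0), (80, 0.0), (80, 1.0)]
        LONG_PAUSE_SHORT: Tacton = [(300, 1.0), (150, 0.0), (80, 1.0)]

        def play(tacton: Tacton, drive_actuator) -> None:
            """Step through a tacton, handing each segment to a driver."""
            for duration_ms, amplitude in tacton:
                drive_actuator(duration_ms, amplitude)

        # Example driver that logs segments instead of touching hardware.
        play(SHORT_SHORT, lambda ms, amp: print(f"{ms} ms at amplitude {amp}"))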

    Evaluation of the Accessibility of Touchscreens for Individuals who are Blind or have Low Vision: Where to go from here

    Touchscreen devices are well integrated into daily life and can be found in both personal and public spaces, but the inclusion of accessible features and interfaces continues to lag behind technology’s rapid advancement. This thesis explores the experiences of individuals who are blind or have low vision (BLV) while interacting with non-tactile touchscreens, such as smartphones, tablets, smartwatches, coffee machines, smart home devices, kiosks, ATMs, and more. The goal of this research is to create a set of recommended guidelines for designing and developing accessible touchscreens, whether on personal devices or shared public technologies. The study consists of three phases: first, an exploration of existing research on the accessibility of non-tactile touchscreens; then, semi-structured interviews with 20 BLV individuals to address accessibility gaps in previous work; and finally, a survey to better understand the experiences, thoughts, and barriers of BLV individuals interacting with touchscreen devices. Common themes included loss of independence, a lack of (or uncertainty about) accessibility features, and the need and desire for improvements. Common approaches to interaction were the use of high markings, asking for sighted assistance, and avoiding touchscreen devices. These findings were used to create a set of recommended guidelines, which include a universal feature setup, the setup of accessibility settings, a universal headphone jack position, tactile feedback, an "ask for help" button, situational lighting, and the consideration of time.

    The Social Network: How People with Visual Impairment use Mobile Phones in Kibera, Kenya

    Living in an informal settlement with a visual impairment can be very challenging, often resulting in social exclusion. Mobile phones have been shown to be hugely beneficial to people with sight loss in formal and high-income settings. However, little is known about whether these results hold true for people with visual impairment (VIPs) in informal settlements. We present the findings of a case study of mobile technology use by VIPs in Kibera, an informal settlement in Nairobi. We used contextual interviews, ethnographic observations, and a co-design workshop to explore how VIPs use mobile phones in their daily lives, and how this use influences their social infrastructure. Our findings suggest that mobile technology supports and shapes the creation of social infrastructure. However, this is only made possible through the VIPs' existing support networks, which are mediated through four types of interaction: direct, supported, dependent, and restricted.

    Concurrent speech feedback for blind people on touchscreens

    Master's thesis, Informatics Engineering, 2023, Universidade de Lisboa, Faculdade de Ciências. Smartphone interactions are demanding. Most smartphones come with few physical buttons, so users cannot rely on touch to guide them. Smartphones include built-in accessibility mechanisms, such as screen readers, that make interaction accessible to blind users. However, some tasks remain inefficient or cumbersome: when scanning through a document, users are limited by the single sequential audio channel provided by screen readers, and tasks may be interrupted by other actions. In this work, we explored alternatives for optimizing smartphone interaction by blind people by leveraging simultaneous audio feedback in different configurations, such as different voices and spatialization. We researched five scenarios: task interruption, where concurrent speech delivers a notification without interrupting the current task; faster information consumption, where concurrent speech announces up to four different contents simultaneously; text properties, where textual formatting is announced; a map scenario, where spatialization provides feedback on how close or far the user is from a particular location; and a smartphone interactions scenario, where each gesture has a corresponding sound and, instead of screen elements (e.g., a button) being read aloud, a corresponding sound is played. We conducted a study with 10 blind participants whose smartphone experience ranged from novice to expert. During the study, we asked for participants' perceptions of and preferences for each scenario, what could be improved, and in which situations these extra capabilities would be valuable to them. Our results suggest that these capabilities are helpful to users, especially if they can be turned on and off according to the user's needs and situation. Moreover, we found that concurrent speech works best for announcing short messages while the user listens to longer content, rather than for announcing lengthy content simultaneously.
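    As a rough illustration of the spatialization idea, the sketch below assumes two already synthesized mono speech signals (here stood in for by plain NumPy arrays) and uses constant-power panning to place one voice to the left and the other to the right so both can be followed at once. The signal names and the pan law are assumptions for illustration, not the thesis's implementation.

        # Minimal sketch: pan two mono "speech" signals to opposite sides
        # and overlay them into one stereo buffer.
        import numpy as np

        def pan_stereo(mono: np.ndarray, pan: float) -> np.ndarray:
            """Constant-power pan: -1.0 = hard left, 1.0 = hard right."""
            angle = (pan + 1.0) * np.pi / 4      # map [-1, 1] to [0, pi/2]
            left, right = np.cos(angle), np.sin(angle)
            return np.stack([mono * left, mono * right], axis=1)

        def mix_concurrent(a: np.ndarray, b: np.ndarray) -> np.ndarray:
            """Overlay two stereo streams, zero-padding the shorter one."""
            n = max(len(a), len(b))
            out = np.zeros((n, 2), dtype=np.float32)
            out[: len(a)] += a
            out[: len(b)] += b
            return np.clip(out, -1.0, 1.0)       # avoid clipping after summing

        # Placeholder "speech": two tones standing in for synthesized voices.
        t = np.linspace(0, 1.0, 22050, dtype=np.float32)
        notification = 0.4 * np.sin(2 * np.pi * 440 * t)   # short alert voice
        main_content = 0.4 * np.sin(2 * np.pi * 220 * t)   # ongoing reading

        stereo = mix_concurrent(pan_stereo(notification, -0.8),
                                pan_stereo(main_content, 0.8))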