
    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click style of interaction for the visually impaired. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute-position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
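    As an illustration of the interaction this abstract describes, the sketch below is a minimal, hypothetical Python rendering of how an absolute touch position might be mapped to the nearest point of a line graph and turned into audio and tactile cues; the data structures, feedback ranges, and function names are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): map an absolute touch
# position to the nearest point of a line graph and derive the audio/tactile
# feedback it could trigger. Device APIs are stand-ins.

from dataclasses import dataclass

@dataclass
class Feedback:
    pitch_hz: float       # audio cue encodes the data value
    pulse_strength: float  # tactile cue encodes distance from the line

def feedback_for_touch(x_touch: float, y_touch: float,
                       data: list[tuple[float, float]],
                       y_min: float, y_max: float) -> Feedback:
    """Return feedback for a finger position on an absolute-position input device."""
    # Find the data point whose x is closest to the touch position.
    x, y = min(data, key=lambda p: abs(p[0] - x_touch))
    # Map the data value to an audible pitch range (here 200-1000 Hz).
    t = (y - y_min) / (y_max - y_min) if y_max > y_min else 0.0
    pitch = 200.0 + t * 800.0
    # Tactile pulse grows stronger as the finger approaches the line itself,
    # a "found it" cue analogous to hovering over a clickable target.
    distance = abs(y_touch - y)
    strength = max(0.0, 1.0 - distance / (y_max - y_min))
    return Feedback(pitch_hz=pitch, pulse_strength=strength)

if __name__ == "__main__":
    series = [(0, 2.0), (1, 3.5), (2, 1.0), (3, 4.2)]
    print(feedback_for_touch(1.1, 3.0, series, y_min=0.0, y_max=5.0))
```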

    A Novel Design of Audio CAPTCHA for Visually Impaired Users

    CAPTCHAs are widely used by web applications for security and privacy. However, traditional text-based CAPTCHAs are difficult even for sighted users, let alone for users with visual impairments. To address this issue, this paper proposes a new CAPTCHA mechanism called HearAct, a real-time audio-based CAPTCHA that enables easy access for users with visual impairments. The user listens to the sound of something (the “sound-maker”) and must identify what the sound-maker is. HearAct then states a letter, and the user must analyze the sound-maker’s word and determine whether it contains that letter: if it does, the user taps; if not, they swipe. This paper presents our HearAct pilot study conducted with thirteen blind users. The preliminary results suggest that this new form of CAPTCHA has considerable potential for both blind and sighted users. They also show that the HearAct CAPTCHA can be solved in a shorter time than text-based CAPTCHAs, because HearAct allows users to answer with gestures instead of typing, and participants preferred HearAct over audio-based CAPTCHAs. The success rate for solving the HearAct CAPTCHA was 82.05%, compared with 43.58% for the audio CAPTCHA, and the usability difference was significant: the System Usability Scale score was 88.07 for HearAct versus 52.11 for the audio CAPTCHA. Using gestures to solve the CAPTCHA challenge was the most preferred feature of the HearAct solution. To increase the security of HearAct, the number of sounds in the CAPTCHA needs to be increased. The solution could also be extended to cover a wider range of users by adding a corresponding image with each sound to meet deaf users’ needs; these users would then identify the spelling of the sound-maker’s word.
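    The tap/swipe challenge-response logic described above can be captured in a short sketch. The Python below is an assumed, simplified rendering of that flow; the SOUND_LIBRARY contents and the make_challenge and verify names are illustrative and not taken from the HearAct paper.

```python
# Minimal sketch (assumptions, not the published HearAct implementation):
# the server knows which "sound-maker" was played and which letter was asked
# about; the user responds with a tap (letter present) or a swipe (letter
# absent), and the challenge passes only if the gesture matches.

import random

SOUND_LIBRARY = {          # sound clip id -> word naming the sound-maker
    "clip_dog.wav": "dog",
    "clip_train.wav": "train",
    "clip_piano.wav": "piano",
}

def make_challenge() -> dict:
    clip, word = random.choice(list(SOUND_LIBRARY.items()))
    letter = random.choice("abcdefghijklmnopqrstuvwxyz")
    return {"clip": clip, "word": word, "letter": letter}

def verify(challenge: dict, gesture: str) -> bool:
    """gesture is 'tap' if the user claims the letter is in the word, else 'swipe'."""
    letter_present = challenge["letter"] in challenge["word"]
    return (gesture == "tap") == letter_present

if __name__ == "__main__":
    c = make_challenge()
    # A client would play c["clip"] and speak the letter via TTS here.
    print(c, verify(c, "tap"), verify(c, "swipe"))
```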

    Concurrent speech feedback for blind people on touchscreens

    Master’s thesis, Informatics Engineering, 2023, Universidade de Lisboa, Faculdade de Ciências. Smartphone interactions are demanding. Most smartphones come with limited physical buttons, so users cannot rely on touch to guide them. Smartphones come with built-in accessibility mechanisms, such as screen readers, that make the interaction accessible for blind users. However, some tasks are still inefficient or cumbersome: when scanning through a document, users are limited by the single sequential audio channel provided by screen readers, and ongoing tasks are interrupted whenever other actions occur. In this work, we explored alternatives to optimize smartphone interaction by blind people by leveraging simultaneous audio feedback with different configurations, such as different voices and spatialization. We researched five scenarios: task interruption, where we use concurrent speech to reproduce a notification without interrupting the current task; faster information consumption, where we leverage concurrent speech to announce up to four different contents simultaneously; text properties, where the textual formatting is announced; a map scenario, where spatialization provides feedback on how close or distant a user is from a particular location; and a smartphone-interactions scenario, where each gesture has a corresponding sound and, instead of screen elements (e.g., a button) being read aloud, a corresponding sound is played. We conducted a study with 10 blind participants whose smartphone experience ranges from novice to expert. During the study, we asked about participants’ perceptions and preferences for each scenario, what could be improved, and in what situations these extra capabilities are valuable to them. Our results suggest that these extra capabilities are helpful for users, especially if they can be turned on and off according to the user’s needs and situation. Moreover, we find that concurrent speech works best when announcing short messages to the user while they listen to longer content, and less well when lengthy content is announced simultaneously.
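    To make the task-interruption scenario concrete, the sketch below shows one assumed way to layer a short, spatially separated notification over ongoing speech. The speak() function is a stand-in for a real TTS and audio-panning back end, not the system built in the thesis.

```python
# Minimal sketch of the "task interruption" scenario under stated assumptions:
# a second, spatially separated voice announces a short notification while the
# primary voice keeps reading. speak() stands in for a TTS engine that accepts
# a voice and a stereo pan per utterance.

import threading
import time

def speak(text: str, voice: str, pan: float) -> None:
    """Stand-in for a TTS call; pan ranges from -1.0 (left) to 1.0 (right)."""
    # A real implementation would hand text/voice/pan to the platform TTS and
    # audio mixer; here we only simulate the time spent speaking.
    print(f"[{voice} @ pan {pan:+.1f}] {text}")
    time.sleep(0.05 * len(text))

def read_document(paragraphs: list[str]) -> None:
    for p in paragraphs:
        speak(p, voice="primary", pan=-0.5)      # main content on the left

def announce_notification(message: str) -> None:
    speak(message, voice="secondary", pan=+0.5)  # short message on the right

if __name__ == "__main__":
    doc = ["First paragraph of the article being read aloud.",
           "Second paragraph, still part of the main task."]
    reader = threading.Thread(target=read_document, args=(doc,))
    reader.start()
    time.sleep(0.3)
    announce_notification("New message from Ana")  # does not stop the reading
    reader.join()
```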

    Auditory interfaces: Using sound to improve the HSL metro ticketing interface for the visually impaired

    Around 252 million trips by public transport are taken in Helsinki every year, and about 122 million passengers travel by Helsinki City Transport (tram, metro and ferry) in and around Finland's capital. Given these numbers, it is important that the system be as efficient, inclusive, and easy to use as possible. In my master's thesis, I examine Helsinki Region Transport's ticketing and information system. I pay special attention to their new touch-screen card readers, framing them in the context of increasing usability and accessibility through the use of sound design. I look at what design decisions have been made and compare these with a variety of technology available today, as well as the solutions being used in other cities. Throughout my research, I have placed an emphasis on sonic cues and sound design, as this is my area of study. Everything is assessed against the requirements and perspective of Helsinki's public transportation end users who are blind or visually impaired. My methodology combines desk research, field research, user testing and stakeholder interviews. Drawing on the learnings from my research, I put forth suggestions on how to improve the current system. I have looked at key points around people with disabilities and how sound can be used to improve accessibility and general functionality for all. I also hope to share this thesis with HSL and HKL, who may use it to inform future optimization of their systems.

    SeeChart: Enabling Accessible Visualizations Through Interactive Natural Language Interface For People with Visual Impairments

    Web-based data visualizations have become very popular for exploring data and communicating insights. Newspapers, journals, and reports regularly publish visualizations to tell compelling stories with data. Unfortunately, most visualizations are inaccessible to readers with visual impairments. For many charts on the web, there are no accompanying alternative (alt) texts, and even when such texts exist they do not adequately describe the important insights in the charts. To address the problem, we first interviewed 15 blind users to understand their challenges and requirements for reading data visualizations. Based on the insights from these interviews, we developed SeeChart, an interactive tool that automatically deconstructs charts from web pages and converts them into accessible visualizations for blind people, enabling them to hear a chart summary as well as to interact with data points using the keyboard. Our evaluation with 14 blind participants suggests the efficacy of SeeChart in helping them understand key insights from charts and fulfill their information needs while reducing the required time and cognitive burden.
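    As a rough sketch of the keyboard interaction described above (an assumption about the interaction style, not SeeChart's actual code), the Python below summarizes a deconstructed chart and then steps through its data points, announcing each one; announce() stands in for screen-reader output.

```python
# Minimal sketch: a chart deconstructed into (label, value) pairs is first
# summarized, then navigated point by point with the keyboard, each point
# being spoken via a stand-in announce() function.

def announce(text: str) -> None:
    """Stand-in for screen-reader/TTS output."""
    print(text)

def summarize(title: str, points: list[tuple[str, float]]) -> str:
    hi = max(points, key=lambda p: p[1])
    lo = min(points, key=lambda p: p[1])
    return (f"{title}: {len(points)} data points. "
            f"Highest is {hi[0]} at {hi[1]}, lowest is {lo[0]} at {lo[1]}.")

def navigate(title: str, points: list[tuple[str, float]], keys: list[str]) -> None:
    announce(summarize(title, points))
    i = 0
    for key in keys:                  # in a real UI these come from key events
        if key == "right":
            i = min(i + 1, len(points) - 1)
        elif key == "left":
            i = max(i - 1, 0)
        label, value = points[i]
        announce(f"{label}: {value}")

if __name__ == "__main__":
    bars = [("Jan", 12.0), ("Feb", 18.5), ("Mar", 9.0)]
    navigate("Monthly sales", bars, keys=["right", "right", "left"])
```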

    Multimodal Accessibility of Documents


    Instructional eLearning technologies for the vision impaired

    The principal sensory modality employed in learning is vision, which makes it difficult for vision-impaired students to access not only existing educational media but also the new, largely visiocentric learning materials being offered through on-line delivery mechanisms. Using the Cisco Certified Network Associate (CCNA) and IT Essentials courses as a reference, a study has been made of tools that can access such on-line systems and transcribe the materials into a form suitable for vision-impaired learning. The modalities employed included haptic, tactile, audio and descriptive text. The study demonstrates how such a multi-modal approach can achieve equivalent success for the vision impaired. However, it also shows the limits of the current understanding of human perception, especially with respect to comprehending two- and three-dimensional objects and spaces when there is no recourse to vision.

    Designing user experiences: a game engine for the blind

    Video games have experienced ever-increasing interest from society since their inception in the 1970s. This form of computer entertainment may let the player have a great time with family and friends, or it may provide immersion into a story full of detail and emotional content. Before the end user plays a video game, a huge effort is made across many disciplines: screenwriting, scenery design, graphical design, programming, optimization and marketing are but a few examples. This work is done by game studios, where teams of professionals from different backgrounds join forces in the inception of the video game. From the perspective of Human-Computer Interaction (HCI), which studies how people interact with computers to complete tasks, a game developer can be regarded as a user whose task is to create the logic of a video game using a computer. One of the main foundations of HCI is that an in-depth understanding of the user’s needs and preferences is vital for creating a usable piece of technology. This point is important because a single piece of technology (in this case, the set of tools used by a game developer) may be, and should be designed to be, used on the same team by users with different knowledge, abilities and capabilities. Embracing this diversity of users’ functional capabilities is the core foundation of accessibility, which is closely related to and studied within the discipline of HCI. The driving force behind this research is a question that arose when considering game developers: could someone who is fully or partially blind develop a video game? Would it be possible for these users to be part of a game development team? What should be taken into account to cover their particular needs and preferences so that they could perform this task comfortably and productively? The goal of this work is to propose a solution that ensures the inclusion of fully or partially blind users in the context of computer game development. To do this, a User-Centered Design methodology has been followed. This approach is ideal here because it starts by including the people being designed for and ends with new solutions tailor-made to suit their needs. First, previously designed solutions for this problem and related work have been analyzed. Second, an exploratory study has been performed to learn how the target users should be able to interact with a computer when developing games, and design insights have been drawn from both the state-of-the-art analysis and the study results. Next, a solution has been proposed based on the design insights, and a prototype has been implemented. The solution has been evaluated against accessibility guidelines. It has finally been concluded that the proposed solution is accessible for visually impaired users.