
    Towards Location-Independent Eyes-Free Text Entry

    We propose an interface for eyes-free text entry using an ambiguous technique and conduct a preliminary user study. We find that users are able to enter text at 19.09 words per minute (WPM) with a 2.08% character error rate (CER) after eight hours of practice. We explore ways to optimize the ambiguous groupings to reduce the number of disambiguation errors, both with and without familiarity constraints. We find that it is feasible to reduce the number of ambiguous groups from six to four. Finally, we explore a technique for presenting word suggestions to users using simultaneous audio feedback. We find that accuracy is quite poor when the words are played fully simultaneously, but improves when a slight delay is added before each voice.
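
    The abstract reports entry and error rates but not how they are computed; the sketch below shows common text-entry conventions (one word equals five characters, character error rate from edit distance), which may differ in detail from the formulas used in the paper.

```python
def wpm(transcribed: str, seconds: float) -> float:
    """Entry rate in words per minute, using the convention that
    one word equals five characters (including spaces)."""
    return (len(transcribed) / 5) / (seconds / 60)

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(target: str, transcribed: str) -> float:
    """Character error rate (%) relative to the longer of the two strings."""
    return 100 * levenshtein(target, transcribed) / max(len(target), len(transcribed), 1)

# Example: 110 characters entered in 69 seconds, and a one-character error
print(round(wpm("x" * 110, 69), 2))   # ~19.13 WPM
print(round(cer("abc", "abd"), 2))    # 33.33
```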

    Survey of Eye-Free Text Entry Techniques of Touch Screen Mobile Devices Designed for Visually Impaired Users

    Nowadays, touch screen mobile devices are becoming more popular among sighted as well as visually impaired people due to their simple interfaces and efficient interaction techniques. Most touch screen devices designed for visually impaired users are based on screen readers, haptics, and different user interfaces (UI). In this paper we present a critical review of different keypad layouts designed for visually impaired users and their effect on text entry speed, and we try to list key issues for improving the accessibility and text entry rate of touch screen devices. Keywords: text entry rate, touch screen mobile devices, visually impaired users

    Braille text entry on smartwatches: an evaluation of methods for composing the Braille cell

    Smartwatches are gaining popularity on the market, offering a set of features comparable to smartphones in a wearable device. This new technology brings new interaction paradigms and challenges for blind users, who have difficulties dealing with touchscreens. Among the variety of tasks that must be studied, text entry is analyzed here, considering that current solutions may be unsatisfactory (such as voice input) or even unfeasible (such as working with tiny QWERTY keyboards) for a blind user. More specifically, this paper presents a study of possible solutions for composing a Braille cell on smartwatches. Five prototypes were developed and different feedback features were proposed. These were evaluated with seven specialists in a study that resulted in a qualitative analysis of which strategies can be most useful for blind users in Braille text entry.
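
    The prototypes themselves are not described here, but every method for composing a cell ends in the same step: turning a set of raised dots into a character. A minimal, illustrative sketch of that lookup, using the standard Grade 1 letter assignments and the Unicode braille block rather than anything from the paper, is shown below.

```python
# Grade 1 (uncontracted) braille: which dots are raised for each letter.
LETTER_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5}, "k": {1, 3}, "l": {1, 2, 3}, "m": {1, 3, 4},
    "n": {1, 3, 4, 5}, "o": {1, 3, 5}, "p": {1, 2, 3, 4},
    "q": {1, 2, 3, 4, 5}, "r": {1, 2, 3, 5}, "s": {2, 3, 4},
    "t": {2, 3, 4, 5}, "u": {1, 3, 6}, "v": {1, 2, 3, 6},
    "w": {2, 4, 5, 6}, "x": {1, 3, 4, 6}, "y": {1, 3, 4, 5, 6},
    "z": {1, 3, 5, 6},
}
DOTS_TO_LETTER = {frozenset(d): c for c, d in LETTER_DOTS.items()}

def cell_to_char(dots: set[int]) -> str:
    """Look up the letter for a composed cell, or '?' if unknown."""
    return DOTS_TO_LETTER.get(frozenset(dots), "?")

def cell_to_unicode(dots: set[int]) -> str:
    """Unicode braille pattern for a cell: dots 1-8 map to bits 0-7
    above the block base U+2800."""
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

print(cell_to_char({1, 2, 4}), cell_to_unicode({1, 2, 4}))  # f ⠋
```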

    FlexType: Flexible Text Input with a Small Set of Input Gestures

    In many situations, it may be impractical or impossible to enter text by selecting precise locations on a physical or touchscreen keyboard. We present an ambiguous keyboard with four character groups that has potential applications for eyes-free text entry, as well as text entry using a single switch or a brain-computer interface. We develop a procedure for optimizing these character groupings based on a disambiguation algorithm that leverages a long-span language model. We produce both alphabetically-constrained and unconstrained character groups in an offline optimization experiment and compare them in a longitudinal user study. Our results did not show a significant difference between the constrained and unconstrained character groups after four hours of practice. As expected, participants had significantly more errors with the unconstrained groups in the first session, suggesting a higher barrier to learning the technique. We therefore recommend the alphabetically-constrained character groups, where participants were able to achieve an average entry rate of 12.0 words per minute with a 2.03% character error rate using a single hand and with no visual feedback.
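
    As an illustration of how an ambiguous keyboard of this kind is disambiguated, the sketch below uses a hypothetical alphabetically-contiguous four-group layout and a tiny unigram lexicon in place of the paper's optimized groups and long-span language model.

```python
# Hypothetical alphabetically-contiguous groups, for illustration only;
# the paper's optimized groupings are not reproduced here.
GROUPS = ["abcdef", "ghijklm", "nopqrs", "tuvwxyz"]
LETTER_TO_GROUP = {c: i for i, g in enumerate(GROUPS) for c in g}

# Tiny stand-in unigram lexicon; the paper ranks candidates with a
# long-span language model instead.
LEXICON = {"the": 0.9, "tie": 0.3, "toe": 0.2, "vie": 0.1}

def code(word: str) -> tuple[int, ...]:
    """Group-index sequence produced by typing the word ambiguously."""
    return tuple(LETTER_TO_GROUP[c] for c in word)

def disambiguate(keys: tuple[int, ...]) -> list[str]:
    """All lexicon words matching the key sequence, most probable first."""
    matches = [w for w in LEXICON if code(w) == keys]
    return sorted(matches, key=LEXICON.get, reverse=True)

print(disambiguate(code("the")))  # ['the', 'tie', 'vie'] share a code; 'toe' does not
```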

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices as they most optimally use the limited input and output space that is imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, there are still many accessibility issues left to deal with in order to bring full inclusion to this population. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input. This first study is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.

    Reconfigurable PDA for the Visually Impaired Using FPGAs

    Curtin University Brailler (CUB) is a Personal Digital Assistant (PDA) for visually impaired people. Its objective is to make information in different formats accessible to people with limited visual ability. This paper presents the design and implementation of two modules: a print-to-Braille translation system and a Braille keyboard controller. The translator implements Blenkhorn's algorithm in hardware, liberating the microprocessor to perform other functions. The Braille keyboard controller, along with a low-cost keyboard, provides users with a note-taking function. These modules are used as intellectual property (IP) cores coupled to a 32-bit MicroBlaze processor in an embedded system-on-a-chip (SoC). In its current implementation, the microprocessor uses a hierarchical interrupt scheme to coordinate the IP cores. A prototype of the complete embedded system is under development using Xilinx FPGAs. The system is a potential platform for the development of embedded systems to assist the visually impaired.
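
    Blenkhorn's algorithm is table driven: translation rules are scanned in order and a matching focus string is replaced by its braille equivalent. The software-only sketch below shows just the longest-match idea with a tiny illustrative rule table; the published algorithm additionally checks wildcarded left and right context for each rule, and the paper implements it in hardware.

```python
# Heavily simplified, illustrative print-to-braille translation in the spirit
# of a table-driven translator. The rule table is a tiny sample, not a
# complete Grade 2 table.
RULES = [
    ("and", "⠯"),   # whole-word contraction
    ("the", "⠮"),
    ("th",  "⠹"),
    ("a",   "⠁"),
    ("h",   "⠓"),
    ("n",   "⠝"),
    ("d",   "⠙"),
    ("e",   "⠑"),
    (" ",   " "),
]

def translate(text: str) -> str:
    """Repeatedly consume the longest rule that matches at the current position."""
    out, i = [], 0
    while i < len(text):
        for focus, braille in sorted(RULES, key=lambda r: -len(r[0])):
            if text.startswith(focus, i):
                out.append(braille)
                i += len(focus)
                break
        else:
            i += 1  # no rule matches: skip the character
    return "".join(out)

print(translate("and the"))  # ⠯ ⠮
```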

    mBrailler: Multimodal Braille Keyboard for Android

    Touchscreen devices have paved their way into the mobile scene, presenting a wide set of possibilities but a comparable number of new challenges, particularly for people who are blind. While these devices offer few tactile cues, such as buttons, they provide the opportunity to create novel interaction techniques. In this paper, we present mBrailler, a mobile Braille keyboard that combines the benefits of physical keyboards (speed and accuracy) and gestural interfaces (flexibility and personalization). We built an 8-button Braille keyboard that can be attached to the back of mainstream smartphones, allowing fast and familiar chorded input. On the other hand, the touchscreen enables thumb-entered gestures for more complex text editing operations, such as caret movement, text selection, copy, and paste. This project combines the tactile benefits of Braille typewriters with the customization of smartphone applications. We aim to provide a more efficient and effective typing experience for blind users, thus increasing their productivity with current mobile devices.
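
    Chorded entry means that the keys held down together form one Braille cell, typically committed when the last key is released. The sketch below illustrates only that accumulation step over a hypothetical stream of key events; the actual mBrailler firmware and its Android integration are not shown, and treating dot 7 as a control key is an assumption for the example.

```python
# Minimal chord detection over a stream of ("down", dot) / ("up", dot) events
# from a hypothetical 8-button Braille keyboard.
def chords(events):
    """Yield one completed cell (set of dots) each time all keys are released."""
    held, cell = set(), set()
    for action, dot in events:
        if action == "down":
            held.add(dot)
            cell.add(dot)
        elif action == "up":
            held.discard(dot)
            if not held and cell:   # chord is complete when the last key lifts
                yield frozenset(cell)
                cell = set()

stream = [("down", 1), ("down", 2), ("up", 1), ("up", 2),  # dots 1+2 -> 'b'
          ("down", 7), ("up", 7)]                          # dot 7 alone (e.g. a control key)
print([sorted(c) for c in chords(stream)])  # [[1, 2], [7]]
```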

    Human Computer Interface for Victims using FPGA

    Visually impaired people face many challenges in society; in particular, students with visual impairments face unique challenges in the educational environment. They struggle to access information, so to resolve this obstacle to reading and to allow visually impaired students to fully access and participate in the curriculum with the greatest possible level of independence, a Braille transliteration system using VLSI is designed. Here, Braille input is given to an FPGA Virtex-4 kit via a Braille keyboard. The Braille input is converted into English by decoding logic written in VHDL/Verilog, and the corresponding letter is then converted into a speech signal with the help of the algorithm. A speaker is used for the voice output. This project allows visually impaired people to become literate; the user also receives a confirmation of what is being typed each time a character is pressed, which prevents mistakes.