    An Empirical Evaluation On Vibrotactile Feedback For Wristband System

    With the rapid development of mobile computing, wearable wrist-worn devices are becoming increasingly popular. However, the vibrotactile feedback patterns of most current wrist-worn devices are too simple to enable effective interaction in non-visual scenarios. In this paper, we propose a wristband system with four vibrating motors placed at different positions in the band, providing multiple vibration patterns that transmit multi-semantic information to users in eyes-free scenarios. After a contrastive analysis of nine patterns in a pilot experiment, we selected five vibrotactile patterns for the main experiments: positional up and down, horizontal diagonal, clockwise circular, and total vibration. Two experiments with the same 12 participants followed the same procedure, one in the lab and one outdoors. The results show that users can reliably distinguish the five patterns in both settings, with approximately 90% accuracy (except for the clockwise circular vibration in the outdoor experiment), demonstrating that these five vibration patterns can be used to output multi-semantic information. The system can be applied to eyes-free interaction scenarios for wrist-worn devices. (Comment: 10 pages)
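    The five patterns described above can be sketched as timed activations of the four motors. The following is a hypothetical illustration only, not the authors' implementation: the motor layout, indices, and durations are all assumptions.

```python
# Hypothetical encoding of the five vibrotactile patterns as timed activations
# of four motors. Motor indices (0=top, 1=bottom, 2=left, 3=right) and all
# durations (in ms) are illustrative assumptions, not the paper's parameters.
PATTERNS = {
    "up":        [({0}, 300)],                                      # positional: top motor
    "down":      [({1}, 300)],                                      # positional: bottom motor
    "diagonal":  [({2}, 150), ({3}, 150)],                          # horizontal diagonal sweep
    "clockwise": [({0}, 100), ({3}, 100), ({1}, 100), ({2}, 100)],  # circular sequence
    "total":     [({0, 1, 2, 3}, 300)],                             # all motors together
}

def render(pattern_name):
    """Return the timeline of (motors-to-vibrate, duration-ms) steps for a pattern."""
    return PATTERNS[pattern_name]
```

    Rendering "clockwise", for example, would pulse each motor in turn for 100 ms, while "total" drives all four motors at once.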

    A Support System for Graphics for Visually Impaired People

    As the Internet plays an important role in today’s society, graphics are widely used to present, convey, and communicate information in many different areas. Complex information is often easier to understand and analyze through graphics. Even though graphics play such an important role, accessibility support for web graphics is very limited. Web graphics accessibility matters not only for people with disabilities, but also for people who want to get and use information in ways different from those originally intended. One of the problems regarding graphics for blind people is that we have few data on how a blind person draws or how he or she receives graphical information. Based on Katz’s research with pupils, one can conclude that blind people can draw in outline and that they have a good sense of three-dimensional shape and space. In this thesis, I propose and develop a system which can serve as a tool for researchers investigating these and related issues. Our support system collects drawings made by visually impaired people through finger movement on Braille devices or touch devices, such as tablets. Once the drawing data are collected, the system automatically generates graphical XML data, which are easily accessed by applications and web services. The graphical XML data are stored locally or remotely. Compared to other support systems, ours is the first automatic system to provide web services to collect and access such data. The system can also integrate with cloud computing, so that people can use it anywhere to collect and access the data.
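    The core data-collection step, turning finger-drawn strokes into XML, can be sketched as below. The element and attribute names here are illustrative guesses; the thesis's actual schema is not specified in this abstract.

```python
import xml.etree.ElementTree as ET

def strokes_to_xml(strokes):
    """Serialize finger-drawn strokes to XML.

    `strokes` is a list of strokes; each stroke is a list of (x, y) points
    sampled from finger movement on a touch or Braille device. The
    <drawing>/<stroke>/<point> element names are assumptions for illustration.
    """
    root = ET.Element("drawing")
    for points in strokes:
        stroke_el = ET.SubElement(root, "stroke")
        for x, y in points:
            # Each sampled position becomes a <point> with x/y attributes.
            ET.SubElement(stroke_el, "point", x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")
```

    Data in this shape is straightforward to store locally or remotely and to expose through a web service, as the abstract describes.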

    A Review of Smart Materials in Tactile Actuators for Information Delivery

    As the largest organ in the human body, the skin provides an important sensory channel through which humans receive external stimulation by touch. From the information perceived through touch, people can feel and infer the properties of objects, such as weight, temperature, texture, and motion. These properties are, in effect, nerve stimuli conveyed to the brain by different kinds of receptors in the skin. Mechanical, electrical, and thermal stimuli can activate these receptors and cause different information to be carried through the nerves. Technologies for actuators that provide mechanical, electrical, or thermal stimuli have been developed; these include static or vibrational actuation, electrostatic stimulation, focused ultrasound, and more. Smart materials, such as piezoelectric materials, carbon nanotubes, and shape memory alloys, play important roles in providing actuation for tactile sensation. This paper reviews the biological background of human tactile sensing, to give an understanding of how we sense and interact with the world through touch, as well as the conventional and state-of-the-art tactile actuator technologies for tactile feedback delivery.

    Instructional eLearning technologies for the vision impaired

    The principal sensory modality employed in learning is vision, which makes it difficult for vision-impaired students to access not only existing educational media but also the new, mostly visiocentric learning materials offered through on-line delivery mechanisms. Using the Cisco Certified Network Associate (CCNA) and IT Essentials courses as a reference, a study has been made of tools that can access such on-line systems and transcribe the materials into a form suitable for vision-impaired learning. The modalities employed included haptic, tactile, audio, and descriptive text. The study demonstrates how such a multi-modal approach can achieve equivalent success for the vision impaired. However, it also shows the limits of the current understanding of human perception, especially with respect to comprehending two- and three-dimensional objects and spaces when there is no recourse to vision.

    Accessibility of E-Commerce Websites for Vision Impaired Persons

    In this thesis, accessibility problems with websites for vision-impaired persons are discussed in detail. Both general accessibility problems and those specific to e-commerce websites, especially on-line shopping websites, are covered. Accessibility problems are analyzed from the perspective of a screen reader user. As a solution to the problems identified, the WCAG 2.0 guidelines are reviewed and changes are proposed to improve the existing guidelines. Enhanced solutions using tactile media, capable of providing a better web browsing experience for vision-impaired persons, are also discussed.

    Learner-centred Accessibility for Interoperable Web-based Educational Systems

    This paper describes the need for an information model and specifications that support a new strategy for delivering accessible computer-based resources to learners based on their specific needs and preferences in the circumstances in which they are operating. The strategy augments the universal-accessibility-of-resources model, enabling systems to focus on individual learners and their particular accessibility needs and preferences. A set of specifications known as the AccessForAll specifications is proposed.

    Making Graphical Information Accessible Without Vision Using Touch-based Devices

    Accessing graphical material such as graphs, figures, maps, and images is a major challenge for blind and visually impaired people. Traditional approaches to this problem have suffered from various shortcomings (such as unintuitive sensory translation rules, prohibitive costs, and limited portability), all hindering progress in reaching blind and visually impaired users. This thesis addresses these shortcomings by designing and experimentally evaluating an intuitive approach, called a vibro-audio interface, for non-visual access to graphical material. The approach is based on commercially available touch-based devices (such as smartphones and tablets): hand and finger movements over the display provide position and orientation cues by synchronously triggering vibration patterns, speech output, and auditory cues whenever an on-screen visual element is touched. Three human behavioral studies (Experiments 1, 2, and 3) assessed the usability of the vibro-audio interface by investigating whether its use leads to the development of an accurate spatial representation of the graphical information being conveyed. Results demonstrated the efficacy of the interface and, importantly, showed that performance was functionally equivalent to that found with traditional hardcopy tactile graphics, the gold standard of non-visual graphical learning. One limitation of this approach is the limited screen real estate of commercial touch-screen devices, which means that large and deep-format graphics (e.g., maps) will not fit within the screen. Panning and zooming are the traditional techniques for dealing with this challenge, but performing these operations without vision (i.e., using touch) presents several challenges relating both to the cognitive constraints of the user and the technological constraints of the interface.
    To address these issues, two further human behavioral experiments assessed the influence of panning (Experiment 4) and zooming (Experiment 5) operations on non-visual learning of graphical material and the related human factors. Results from Experiments 4 and 5 indicated that incorporating panning and zooming operations enhances the non-visual learning process and leads to the development of more accurate spatial representations. Together, this thesis demonstrates that the proposed vibro-audio interface is a viable multimodal solution for presenting dynamic graphical information to blind and visually impaired persons and for supporting the development of accurate spatial representations of otherwise inaccessible graphical materials.
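    The interface's core loop, firing feedback whenever the finger is over an on-screen element, amounts to a hit test. The sketch below is a minimal illustration under assumed names and a simple bounding-box element representation; the thesis's actual implementation is not described at this level in the abstract.

```python
def feedback_for_touch(x, y, elements):
    """Return the feedback to trigger for a touch at (x, y), or None.

    `elements` maps a label to an axis-aligned bounding box (x0, y0, x1, y1)
    in screen coordinates. When the finger lies inside an element's box, the
    interface would fire a vibration pattern and speak the element's label;
    outside all elements, no feedback is produced. The function name and the
    box representation are assumptions for illustration.
    """
    for label, (x0, y0, x1, y1) in elements.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return {"vibrate": True, "speak": label}
    return None
```

    Running this test on every touch-move event is what makes the feedback synchronous with finger position, as the abstract describes.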