HaptiRead: Reading Braille as Mid-Air Haptic Information
Mid-air haptic interfaces have several advantages: the haptic information is delivered directly to the user in a manner that is unobtrusive to the immediate environment; they operate at a distance and are thus easier to discover; they are more hygienic; and they allow interaction in 3D. We validate, for the first time, in a preliminary study with sighted participants and a user study with blind participants, the use of mid-air haptics for conveying Braille. We tested three
haptic stimulation methods, where the haptic feedback was either: a) aligned
temporally, with haptic stimulation points presented simultaneously (Constant);
b) not aligned temporally, presenting each point independently
(Point-by-Point); or c) a combination of the previous methodologies, where
feedback was presented Row-by-Row. The results show that mid-air haptics is a
viable technology for presenting Braille characters, and the highest average
accuracy (94% in the preliminary and 88% in the user study) was achieved with
the Point-by-Point method.Comment: 8 pages, 8 figures, 2 tables, DIS'2
Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired
Touchscreens have become a de facto standard of input for mobile devices, as they make optimal use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, there are still many accessibility issues to deal with in order to bring full inclusion to this population. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games
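The second system's sound-free approach can be illustrated by mapping navigation instructions to vibration motor patterns. The motor layout and pulse patterns below are assumptions for illustration; the study's actual encoding is not reproduced here.

```python
# Illustrative sketch of encoding walking directions with multiple vibration
# motors, with no audio or visual output. Layout and patterns are assumptions.

MOTORS = ["left", "right", "top", "bottom"]   # e.g. one motor per device edge

def pulses_for(direction):
    """Map a navigation instruction to (motor, pulse_count) pairs."""
    table = {
        "turn_left":  [("left", 2)],
        "turn_right": [("right", 2)],
        "straight":   [("top", 1)],
        "arrived":    [(m, 3) for m in MOTORS],   # all motors: distinctive pattern
    }
    return table[direction]

print(pulses_for("turn_left"))   # [('left', 2)]
```

Keeping each instruction on a spatially distinct motor is what lets the user distinguish directions by location alone, without counting pulses.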
The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward
Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphics access challenge, reviews prior approaches to addressing this long-standing information access barrier, and describes some promising new solutions. We specifically focus on touchscreen-based smart devices, a relatively new class of information access technologies, which our group believes represent an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results in this line of research. We close with recommendations on ideological shifts in mindset about how we approach solving this vexing access problem, which will complement both technological and perceptual advancements that are rapidly being uncovered through a growing research community in this domain
BRAILLESHAPES: efficient text input on smartwatches for blind people
Master's thesis, Engenharia Informática, 2023, Universidade de Lisboa, Faculdade de Ciências
Mobile touchscreen devices like smartphones and smartwatches are a predominant part of our lives. They have evolved, and so have their applications. Due to the constant growth and advancement of technology, using such devices to accomplish a vast range of tasks has become common practice.
Nonetheless, mobile devices rely on touch-based interaction, demand good spatial ability and memorization, and lack sufficient tactile cues, which makes them visually demanding and turns interaction into a strenuous task for visually impaired people. This is even more apparent in movement-based contexts or when one-handed use is required.
We believe devices like smartwatches can provide numerous advantages in addressing these issues. However, they lack accessible solutions for several tasks, as most existing solutions for mobile touchscreen devices target smartphones. Since communication is of the utmost importance and intrinsic to humankind, text entry is one task for which it is imperative to address the surrounding accessibility concerns.
Since Braille is a reading standard for blind people and has produced positive results in prior work on accessible text entry, we believe using it as the basis for an accessible text entry solution can help standardize this type of interaction. It also allows users to leverage previous knowledge, reducing possible extra cognitive load. Yet, even though Braille-based chording solutions have achieved good results, a tapping approach is not the most feasible given the reduced space of the smartwatch's touchscreen. Hence, we found the best option to be a gesture-based solution.
Therefore, with this thesis, we explored and validated the concept and feasibility of
Braille-based shapes as the foundation for an accessible gesture-based smartwatch text
entry method for visually impaired people
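The core idea of deriving a gesture shape from a braille cell can be sketched as tracing a stroke through the cell's active dots. The dot-to-stroke mapping below is an illustrative assumption; the thesis's actual BrailleShapes alphabet is not reproduced here.

```python
# Hypothetical sketch: turning a braille cell's dot pattern into a single
# gesture stroke over a 2 x 3 grid. The ordering rule is an assumption.

DOT_POS = {1: (0, 0), 2: (0, 1), 3: (0, 2),   # left column, top to bottom
           4: (1, 0), 5: (1, 1), 6: (1, 2)}   # right column, top to bottom

def shape_path(dots):
    """Order the active dots by dot number to form one stroke path."""
    return [DOT_POS[d] for d in sorted(dots)]

# Braille 'c' is dots {1, 4}: a horizontal stroke across the top of the cell
print(shape_path({1, 4}))   # [(0, 0), (1, 0)]
```

A single continuous stroke per character is what makes the approach viable on a small watch face, where multi-finger chording is impractical.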
Secured and Smart Electronic Voting System
Nowadays, various displays are becoming available for implementing new kinds of human-computer interaction (HCI). Among them, touch-panel displays have been used in a wide variety of applications and have proven to be a useful interface infrastructure. We exemplify our approach through the design and development of a secured and smart electronic voting system. The Supreme Court recently ordered the inclusion of a “Reject” option, so that voters can reject all parties if they are not interested in any of them. This touchscreen-based electronic voting system asks for confirmation after a party is selected from the list, and a beep is generated when the voter presses the confirmation, so that the vote is cast for the intended party. Such electronic voting systems allow votes to be confirmed and cast easily without any assistance. The system also provides security by verifying that the entered voter ID is correct. We also conducted a preliminary evaluation to verify the effectiveness of the system
Multi-Sensory Interaction for Blind and Visually Impaired People
This book conveys the visual elements of artwork to visually impaired people through various sensory channels, opening a new perspective for appreciating visual art. It explores techniques for expressing a color code through patterns, temperatures, scents, music, and vibrations, and presents future research topics. A holistic experience using multi-sensory interaction conveys the meaning and contents of a work through rich multi-sensory appreciation. A method that lets people with visual impairments engage with artwork through a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate it at a deeper level than hearing or touch alone can achieve. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. These new aids also expand opportunities for non-visually impaired people, as well as the visually impaired, to enjoy works of art, breaking down the boundaries between disabled and non-disabled people in culture and the arts through continuous efforts to enhance accessibility. In addition, the developed multi-sensory expression and delivery tools can be used as educational tools to increase the accessibility and usability of products and artwork through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye
Towards Location-Independent Eyes-Free Text Entry
We propose an interface for eyes-free text entry using an ambiguous technique and conduct a preliminary user study. We find that users are able to enter text at 19.09 words per minute (WPM) with a 2.08% character error rate (CER) after eight hours of practice. We explore ways to optimize the ambiguous groupings to reduce the number of disambiguation errors, both with and without familiarity constraints, and find that it is feasible to reduce the number of ambiguous groups from six to four. Finally, we explore a technique for presenting word suggestions to users via simultaneous audio feedback. Accuracy is quite poor when the words are played fully simultaneously, but improves when a slight delay is added before each voice
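Ambiguous text entry of the kind described above can be sketched with letter groups and dictionary disambiguation. The four-group partition and the mini-dictionary below are assumptions for illustration, not the paper's optimized groupings.

```python
# Illustrative sketch of ambiguous text entry with four letter groups.
# The grouping and mini-dictionary are assumptions, not the paper's.

GROUPS = ["abcdef", "ghijkl", "mnopqr", "stuvwxyz"]
KEY_OF = {c: i for i, g in enumerate(GROUPS) for c in g}

DICTIONARY = ["the", "ran", "tin", "rat"]

def encode(word):
    """A word's key sequence: one group index per letter."""
    return tuple(KEY_OF[c] for c in word)

def candidates(keys):
    """All dictionary words matching a key sequence; more than one
    candidate means an explicit disambiguation step is needed."""
    return [w for w in DICTIONARY if encode(w) == tuple(keys)]

print(candidates(encode("rat")))   # ['rat']
```

Fewer groups mean fewer keys to target eyes-free, at the cost of more key-sequence collisions; optimizing the partition trades one against the other.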
Crossmodal audio and tactile interaction with mobile touchscreens
Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive it in the most suitable way, without having to abandon their primary task to look at the device.
This thesis begins with a literature review of related work followed by a definition of crossmodal icons. Two icons may be considered to be crossmodal if and only if they provide a common representation of data, which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons with results showing that rhythm, texture and spatial location are effective.
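The crossmodal-icon definition above can be sketched as one data record with two renderings that share the same parameters. The parameter names and value mappings below are assumptions based on the properties listed (rhythm, texture, spatial location), not the thesis's actual encoding.

```python
# Illustrative sketch of a crossmodal icon: one representation, two renderings.
# Names and value mappings are assumptions, not the thesis's actual encoding.
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossmodalIcon:
    rhythm: tuple        # inter-onset intervals in ms, shared by both modalities
    texture: str         # "smooth" or "rough"
    location: str        # "left", "centre", or "right"

    def to_audio(self):
        timbre = {"smooth": "sine", "rough": "sawtooth"}[self.texture]
        return {"waveform": timbre, "pan": self.location, "onsets_ms": self.rhythm}

    def to_tactile(self):
        wave = {"smooth": "steady", "rough": "amplitude_modulated"}[self.texture]
        return {"vibration": wave, "actuator": self.location, "onsets_ms": self.rhythm}

icon = CrossmodalIcon(rhythm=(0, 120, 120), texture="rough", location="left")
# Both renderings share rhythm and location, so learning in one modality
# can transfer to the other.
```

The shared fields are the "common representation of data"; each `to_*` method is one interchangeable modality-specific view of it.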
A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained in the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained in the audio equivalent.
Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater speeds of text entry compared to standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently with varying levels of background noise or vibration and the exact levels at which these performance decreases occur were established.
The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems
A new dynamic tactile display for reconfigurable braille: implementation and tests
Different tactile interfaces have been proposed to represent either text (braille) or, in a few cases, large-area tactile screens as replacements for visual displays. None of the implementations so far can be customized to match users' preferences, perceptual differences, and skills. Optimal choices in these respects are still debated; we approach a solution by designing a flexible device that lets the user choose key parameters of tactile transduction. We present here a new dynamic tactile display, an 8 × 8 matrix of plastic pins based on well-established and reliable piezoelectric technology, offering high resolution (pin gap 0.7 mm), tunable strength of pin displacement, and a refresh rate of up to 50 s⁻¹. It can reproduce arbitrary patterns, allowing it to serve the dual purpose of providing, depending on user needs, tactile rendering of non-character information and reconfigurable braille rendering. Given the relevance of the latter functionality for the expected average user, we considered braille-encoding tests by volunteers a benchmark of primary importance. Tests were performed to assess acceptance and usability with minimal training, and to check whether the offered flexibility was indeed perceived by the subjects as an added value compared to conventional braille devices. Different mappings between braille dots and actual tactile pins were implemented to match user needs. Performance of eight experienced braille readers was defined as the fraction of correct identifications of rendered content. Different information contents were tested (median performance on random strings, words, and sentences was about 75%, 85%, and 98%, respectively, a significant increase, p < 0.01), with statistically significant improvements in performance during the tests (p < 0.05). The experimental results, together with qualitative ratings provided by the subjects, show good acceptance and the effectiveness of the proposed solution
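A reconfigurable dot-to-pin mapping of the kind described above can be sketched as rendering a braille cell onto the pin frame with adjustable spacing. The 8 × 8 grid matches the display; the spacing scheme itself is an assumption for illustration.

```python
# Illustrative sketch: rendering one braille cell onto an 8 x 8 pin frame with
# a configurable dot-to-pin pitch. The spacing scheme is an assumption.

DOT_TO_CELL = {1: (0, 0), 2: (0, 1), 3: (0, 2),   # (column, row) within the cell
               4: (1, 0), 5: (1, 1), 6: (1, 2)}

def render_cell(dots, origin=(0, 0), pitch=2):
    """Return an 8x8 frame (lists of 0/1) with the cell's dots raised,
    spaced `pitch` pins apart starting from `origin`."""
    frame = [[0] * 8 for _ in range(8)]
    for d in dots:
        c, r = DOT_TO_CELL[d]
        x, y = origin[0] + c * pitch, origin[1] + r * pitch
        frame[y][x] = 1
    return frame

frame = render_cell({1, 2})   # braille 'b' is dots {1, 2}
```

Varying `pitch` (or `origin`) per user is one way a display like this could match individual perceptual differences without changing the braille content.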