On the intelligibility of fast synthesized speech for individuals with early-onset blindness
People with visual disabilities increasingly use text-to-speech synthesis as a primary output modality for interaction with computers. Surprisingly, there have been no systematic comparisons of the performance of different text-to-speech systems for this user population. In this paper, we report the results of a pilot experiment on the intelligibility of fast synthesized speech for individuals with early-onset blindness. Using an open-response recall task, we collected data on four synthesis systems representing the two major approaches to text-to-speech synthesis: formant-based synthesis and concatenative unit-selection synthesis. We found a significant effect of speaking rate on the intelligibility of synthesized speech, and a trend towards significance for synthesizer type. In post-hoc analyses, we found that participant-related factors, including age and familiarity with a synthesizer and voice, also affect the intelligibility of fast synthesized speech.
Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired
Touchscreens have become a de facto standard of input for mobile devices because they make optimal use of the limited input and output space imposed by the devices' form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be addressed before this population is fully included. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes map directions accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make accessible various types of applications, such as physics simulations, astronomy tools, and video games.
Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information
Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments.
Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1mm should be maintained for accurate line detection (Exp-1), (2) a minimum interline gap of 4mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4mm should be used to support tasks that require tracing vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line-tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps according to these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence that learning from vision and from touch leads to the development of functionally equivalent, amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
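The Phase I/II guidelines amount to a small set of rendering parameters that a touchscreen application could check its graphics against. A minimal sketch of that idea, assuming a millimetre-based check at a given screen density (all function and constant names are illustrative, not from the dissertation):

```python
# Minimum dimensions (in mm) drawn from the Phase I/II guidelines above.
MIN_DETECT_WIDTH_MM = 1.0    # accurate line detection (Exp-1)
MIN_INTERLINE_GAP_MM = 4.0   # discriminating parallel lines (Exp-2)
MIN_TRACE_WIDTH_MM = 4.0     # tracing lines / judging orientation (Exp-4, Exp-5)

def mm_to_px(mm, ppi=326):
    """Convert millimetres to pixels for a given screen density (pixels per inch)."""
    return round(mm / 25.4 * ppi)

def check_line(width_mm, gap_to_neighbour_mm, needs_tracing):
    """Return a list of guideline violations for one rendered vibrotactile line."""
    problems = []
    min_width = MIN_TRACE_WIDTH_MM if needs_tracing else MIN_DETECT_WIDTH_MM
    if width_mm < min_width:
        problems.append(f"width {width_mm}mm < {min_width}mm minimum")
    if gap_to_neighbour_mm < MIN_INTERLINE_GAP_MM:
        problems.append(f"gap {gap_to_neighbour_mm}mm < {MIN_INTERLINE_GAP_MM}mm minimum")
    return problems
```

Working in millimetres and converting to pixels only at render time keeps the guidelines device-independent, since the same line must be physically wide enough under the finger regardless of screen resolution.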
Towards a better understanding of sensory substitution: the theory and practice of developing visual-to-auditory sensory substitution devices
Visual impairment is a global and potentially devastating affliction. Sensory substitution devices have the potential to lessen the impact of blindness by presenting vision via another modality. The chief motivation behind each of the chapters that follow is the production of more useful sensory substitution devices. The first empirical chapter (chapter two) demonstrates the use of interactive genetic algorithms to determine an optimal set of parameters for a sensory substitution device based on an auditory encoding of vision ("the vOICe"). In doing so, it introduces the first version of a novel sensory substitution device which is configurable at run-time, and presents data from three interactive-genetic-algorithm experiments that use this new device. Chapter three radically expands on this theme by introducing a general-purpose, modular framework for developing visual-to-auditory sensory substitution devices ("Polyglot"). This framework is the fuller realisation of the Polyglot device introduced in the first chapter and is based on the principle of End-User Development (EUD). In chapter four, a novel method of evaluating sensory substitution devices using eye-tracking is introduced. The data show both that the co-presentation of visual stimuli assists localisation and that gaze predicted an auditory target location more reliably than behavioural responses did. Chapter five explores the relationship between sensory substitution devices and other tools used to acquire real-time sensory information ("sensory tools"); the resulting taxonomy unites a range of technologies, from telescopes and cochlear implants to attempts to create a magnetic sense, and can guide further research. Finally, in chapter six, the possibility of representing colour through sound is explored. A crossmodal correspondence between (equiluminant) hue and pitch is documented that may reflect a relationship between pitch and the geometry of visible colour space.
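The vOICe-style encoding mentioned above scans an image column by column, mapping vertical position to pitch and pixel brightness to loudness. A minimal sketch of that mapping, assuming a grayscale image as nested lists with values in 0..1 (the parameter values and function name are illustrative, not the vOICe's actual settings):

```python
import math

def encode_image_to_audio(image, duration=1.0, sample_rate=8000,
                          f_min=500.0, f_max=3500.0):
    """Scan a grayscale image column by column, left to right.

    Each column becomes a short slice of sound: row position maps to
    sine-wave frequency (top row = highest pitch) and pixel brightness
    maps to that sine wave's amplitude.
    """
    rows, cols = len(image), len(image[0])
    samples_per_col = int(duration * sample_rate / cols)
    # Evenly spaced frequencies, highest for the top row.
    freqs = [f_max - (f_max - f_min) * r / max(rows - 1, 1)
             for r in range(rows)]
    audio = []
    for c in range(cols):
        for n in range(samples_per_col):
            t = n / sample_rate
            # Mix one sine per row, weighted by that pixel's brightness.
            audio.append(sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                             for r in range(rows)))
    return audio

# A single bright pixel in the top-left corner: sound appears only in
# the first quarter of the sweep, at the highest pitch.
img = [[0.0] * 4 for _ in range(4)]
img[0][0] = 1.0
audio = encode_image_to_audio(img)
```

The interactive genetic algorithms described in chapter two can be read as searching over exactly this kind of parameter set (frequency range, sweep duration, brightness-to-amplitude mapping), with listeners' preferences serving as the fitness function.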
Interactive maps for visually impaired people: design, usability and spatial cognition
Knowing the geography of an urban environment is crucial for visually impaired people. Tactile relief maps are generally used, but they have significant limitations (a limited amount of information, reliance on a braille legend, etc.). Recent technological progress allows the development of innovative solutions that overcome these limitations. In this thesis, we present the design of an accessible interactive map through a participatory design process. The map combines a multi-touch screen, a raised-line tactile map overlay, and speech output, and provides auditory information when the user taps on map elements. We demonstrated in an experiment that this prototype was more effective and more satisfying for visually impaired users than a simple raised-line map. We also explored and tested different types of advanced non-visual interaction for exploring the map. This thesis demonstrates the importance of interactive tactile maps for visually impaired people and their spatial cognition.