Design and User Satisfaction of Interactive Maps for Visually Impaired People
Multimodal interactive maps are a solution for presenting spatial information
to visually impaired people. In this paper, we present an interactive
multimodal map prototype that is based on a tactile paper map, a multi-touch
screen and audio output. We first describe the different steps for designing an
interactive map: drawing and printing the tactile paper map, choice of
multi-touch technology, interaction technologies and the software architecture.
Then we describe the method used to assess user satisfaction. We provide data
showing that an interactive map, although based on a single, elementary double-tap interaction, was met with a high level of user satisfaction.
Interestingly, satisfaction is independent of a user's age, previous visual
experience or Braille experience. This prototype will be used as a platform to
design advanced interactions for spatial learning
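As a rough illustration of the interaction described above, the following sketch maps double taps on a touch surface to audio labels of map regions. All names, thresholds and region data here are my own assumptions for illustration, not details from the paper:

```python
# Hypothetical sketch: detecting double taps on a multi-touch overlay
# placed over a tactile paper map, and resolving them to spoken labels.
DOUBLE_TAP_WINDOW = 0.4   # max seconds between taps (assumed value)
DOUBLE_TAP_RADIUS = 15.0  # max distance between taps, in pixels (assumed value)

# Invented region table: (x, y, width, height, spoken label)
REGIONS = [
    (0, 0, 100, 100, "train station"),
    (100, 0, 100, 100, "city park"),
]

def region_label(x, y):
    """Return the spoken label for the map region under (x, y), if any."""
    for rx, ry, rw, rh, label in REGIONS:
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return label
    return None

class DoubleTapDetector:
    """Detects double taps from a stream of (timestamp, x, y) tap events."""

    def __init__(self):
        self._last = None  # previous tap, or None

    def on_tap(self, t, x, y):
        """Feed one tap; return a label to speak on a double tap, else None."""
        prev = self._last
        self._last = (t, x, y)
        if prev is None:
            return None
        pt, px, py = prev
        close_in_time = (t - pt) <= DOUBLE_TAP_WINDOW
        close_in_space = ((x - px) ** 2 + (y - py) ** 2) ** 0.5 <= DOUBLE_TAP_RADIUS
        if close_in_time and close_in_space:
            self._last = None  # consume the pair so a third tap starts fresh
            return region_label(x, y)
        return None
```

In the prototype the returned label would be sent to a speech synthesiser; here it is simply returned to the caller.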
Schematisation in Hard-copy Tactile Orientation Maps
This dissertation investigates schematisation of computer-generated tactile orientation maps that support the mediation of spatial knowledge of unknown urban environments. Computer-generated tactile orientation maps are designed to provide the blind with an overall impression of their surroundings. Their details are displayed by means of elevated features that are created by embossers and can be distinguished by touch. The initial observation of this dissertation is that only very little information is actually conveyed through tactile maps, owing to the coarse resolution of the tactual senses and the cognitive effort involved in the serial exploration of tactile maps. However, the differences between computer-generated, embossed tactile maps and manufactured, deep-drawn tactile maps are significant. Therefore the possibilities and confines of communicating information through tactile maps produced with embossers are a primary area of research. This dissertation has been able to demonstrate that the quality of embossed prints is an almost equal alternative to traditionally manufactured deep-drawn maps. Their great advantages are fast and individual production, (apart from the initial procurement costs for the printer) low price, accessibility and easy understanding without the need of prior time-consuming training.
Simplification of tactile maps is essential, even more so than in other maps. It can be achieved by selecting a limited number from all map elements available. Qualitative simplification through schematisation may present an additional option to simplification through quantitative selection. In this context schematisation is understood as cognitively motivated simplification of geometry with synchronised maintenance of topology. Rather than further reducing the number of displayed objects, the investigation concentrates on how the presentation of different forms of streets (natural vs. straightened) and junctions (natural vs. prototypical) affects the transfer of knowledge. In a second area of research, the thesis establishes that qualitative simplification of tactile orientation maps through schematisation can enhance their usability and make them easier to understand than maps that have not been schematised. The dissertation shows that simplifying street forms and limiting them to prototypical junctions not only accelerates map exploration but also has a beneficial influence on retention performance. The majority of participants in the investigation selected a combination of both as their preferred display option.
Tactile maps that have to be tediously explored through touch, uncovering every detail, complicate attaining a first impression or an overall perception. A third area of research examines which means could help map readers discover certain objects on the map quickly and without possessing a complete overview. Three types of aids are examined: guiding lines leading from the frame of the map to the object, position indicators represented by position markers at the frame of the map, and coordinate specifications found within a grid on the map. The dissertation shows that all three varieties can be realised by embossers. Although a guiding line proves to be fast in size A4 tactile maps containing only one target object and few distracting objects, it also impedes further exploration of the map (similar to the grid). Advantages and drawbacks of the various aids in this and other applications are then discussed. In conclusion the dissertation elaborates on the linking points of all three examinations, and it is argued that cognitively motivated simplification should be a principle of construction for embossed tactile orientation maps in order to support their use and comprehension.
A summary establishes the recommendations that result from this dissertation regarding the construction of tactile orientation maps, considering the limitations imposed by embosser constraints. I then discuss how to adapt schematisation to other maps contingent on the intended function, the previous knowledge of the map reader, and the relation between the time in which knowledge is acquired and the time it is employed. Closing the dissertation, I provide an insight into its confines and conclusions and finish with a prospective view of possible transfers of the findings to other applications, e.g. multimedia or interactive maps on pin-matrix displays and devices
Designing a New Tactile Display Technology and its Disability Interactions
People with visual impairments have a strong desire for a refreshable tactile interface that can provide immediate access to a full page of Braille and tactile graphics. Regrettably, existing devices come at a considerable expense and remain out of reach for many. The exorbitant costs associated with current tactile displays stem from their intricate design and the multitude of components needed for their construction. This underscores the pressing need for technological innovation that can enhance tactile displays, making them more accessible and available to individuals with visual impairments. This research thesis delves into the development of a novel tactile display technology known as Tacilia. This technology's necessity and prerequisites are informed by in-depth qualitative engagements with students who have visual impairments, alongside a systematic analysis of the prevailing architectures underpinning existing tactile display technologies. The evolution of Tacilia unfolds through iterative processes encompassing conceptualisation, prototyping, and evaluation. With Tacilia, three distinct products and interactive experiences are explored, empowering individuals to manually draw tactile graphics, generate digitally designed media through printing, and display these creations on a dynamic pin array display. This innovation underscores Tacilia's capability to streamline the creation of refreshable tactile displays, rendering them more fitting, usable, and economically viable for people with visual impairments
Instructional eLearning technologies for the vision impaired
The principal sensory modality employed in learning is vision, which not only makes it difficult for vision impaired students to access existing educational media but also the new and mostly visiocentric learning materials being offered through on-line delivery mechanisms. Using the Cisco Certified Network Associate (CCNA) and IT Essentials courses as a reference, a study has been made of tools that can access such on-line systems and transcribe the materials into a form suitable for vision impaired learning. Modalities employed included haptic, tactile, audio and descriptive text. The study demonstrates how such a multi-modal approach can achieve equivalent success for the vision impaired. However, it also shows the limits of the current understanding of human perception, especially with respect to comprehending two- and three-dimensional objects and spaces when there is no recourse to vision
Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information
Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments.
Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1mm should be maintained for accurate line-detection (Exp-1), (2) a minimum interline gap of 4mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4mm should be used for supporting tasks that require tracing of vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps based on these design guidelines leads to development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence in support of learning from vision and touch as leading to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions
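The numeric guidelines above lend themselves to a small programmatic check. The sketch below encodes them as minimum dimensions in millimetres; the constant and function names are invented for illustration and do not come from the dissertation:

```python
# Minimum dimensions (mm) from the reported guidelines.
MIN_LINE_WIDTH_MM = 1.0      # Exp-1: accurate line detection
MIN_INTERLINE_GAP_MM = 4.0   # Exp-2: discriminating parallel vibrotactile lines
MIN_TRACING_WIDTH_MM = 4.0   # Exp-4/5: tracing lines and learning path patterns

def check_line(width_mm, gap_mm, traced=False):
    """Return a list of guideline violations for one rendered vibrotactile line.

    traced=True applies the stricter width minimum for tasks that require
    tracing the line rather than merely detecting it.
    """
    issues = []
    min_width = MIN_TRACING_WIDTH_MM if traced else MIN_LINE_WIDTH_MM
    if width_mm < min_width:
        issues.append(f"width {width_mm}mm below minimum {min_width}mm")
    if gap_mm < MIN_INTERLINE_GAP_MM:
        issues.append(f"gap {gap_mm}mm below minimum {MIN_INTERLINE_GAP_MM}mm")
    return issues
```

A map renderer could run such a check over every line in a schematized graphic before sending it to the touchscreen.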
Developing an inclusive curriculum for visually disabled students
[Aims]
The purpose of this guide is to help staff identify and remove the barriers that visually disabled students may encounter when studying one of the GEES disciplines - i.e. geography, earth and environmental sciences - and to
suggest ways in which students can be helped to enjoy a fulfilling learning experience. Some of the advice and guidance offered will be generic, reflecting the importance of a strategic approach within institutions and departments to the planning and delivery of inclusive curricula. However, much of the advice will apply to specific forms of visual disability, and to the demands made by the study of GEES disciplines. Moreover, because each student is unique, most of what is discussed here will need to be made relevant and personal to individual students. It is a key principle of this guide that a blanket approach to the
management of the learning needs of visually disabled students on a GEES programme of study is likely to be ineffective
Tactile Arrays for Virtual Textures
This thesis describes the development of three new tactile stimulators for active
touch, i.e. devices to deliver virtual touch stimuli to the fingertip in response to
exploratory movements by the user. All three stimulators are designed to provide
spatiotemporal patterns of mechanical input to the skin via an array of contactors,
each under individual computer control. Drive mechanisms are based on
piezoelectric bimorphs in a cantilever geometry.
The first of these is a 25-contactor array (5 × 5 contactors at 2 mm spacing). It
is a rugged design with a compact drive system and is capable of producing strong
stimuli when running from low voltage supplies. Combined with a PC mouse,
it can be used for active exploration tasks. Pilot studies were performed which
demonstrated that subjects could successfully use the device for discrimination of
line orientation, simple shape identification and line following tasks.
A 24-contactor stimulator (6 × 4 contactors at 2 mm spacing) with improved
bandwidth was then developed. This features control electronics designed to transmit
arbitrary waveforms to each channel (generated on-the-fly, in real time) and
software for rapid development of experiments. It is built around a graphics tablet,
giving high precision position capability over a large 2D workspace. Experiments
using two-component stimuli (components at 40 Hz and 320 Hz) indicate that
spectral balance within active stimuli is discriminable independent of overall intensity,
and that the spatial variation (texture) within the target is easier to detect
at 320 Hz than at 40 Hz.
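A two-component stimulus of the kind described above can be sketched as a sum of two sinusoids whose relative weight sets the spectral balance. This is an assumed parameterisation for illustration, not the thesis's actual drive code, and the sample rate is invented:

```python
import math

def two_component_stimulus(balance, duration_s=0.1, rate_hz=8000,
                           f_low=40.0, f_high=320.0):
    """Sample a two-component vibrotactile drive waveform.

    balance in [0, 1] shifts energy from the low-frequency component
    (balance=0: pure 40 Hz) to the high-frequency component
    (balance=1: pure 320 Hz) while the component amplitudes sum to 1.
    Returns a list of samples in [-1, 1].
    """
    n = int(duration_s * rate_hz)
    return [
        (1.0 - balance) * math.sin(2 * math.pi * f_low * i / rate_hz)
        + balance * math.sin(2 * math.pi * f_high * i / rate_hz)
        for i in range(n)
    ]
```

In a real stimulator each contactor channel would receive its own such waveform, modulated by the finger position reported by the graphics tablet.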
The third system (again 6 × 4 contactors at 2 mm spacing) was a lightweight modular stimulator developed for fingertip and thumb grasping tasks;
furthermore it was integrated with force-feedback on each digit and a complex
graphical display, forming a multi-modal Virtual Reality device for the display of
virtual textiles. It is capable of broadband stimulation with real-time generated
outputs derived from a physical model of the fabric surface. In an evaluation study,
virtual textiles generated from physical measurements of real textiles were ranked
in categories reflecting key mechanical and textural properties. The results were
compared with a similar study performed on the real fabrics from which the virtual
textiles had been derived. There was good agreement between the ratings of the
virtual textiles and the real textiles, indicating that the virtual textiles are a good
representation of the real textiles and that the system is delivering appropriate
cues to the user
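One conventional way to quantify agreement between rankings of virtual textiles and rankings of the corresponding real fabrics is Spearman's rank correlation. The sketch below uses invented example data, not the study's results, and the thesis does not state that this particular statistic was used:

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman's rank correlation for two equal-length rank lists without ties.

    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the rank
    difference for item i; rho = 1 means identical rankings.
    """
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Invented example: five fabrics ranked by perceived roughness,
# once on the real textiles and once on their virtual counterparts.
real_ranks = [1, 2, 3, 4, 5]
virtual_ranks = [1, 3, 2, 4, 5]
```

A rho close to 1 for such paired rankings would reflect the kind of agreement between virtual and real textiles that the study reports.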
Tabletop tangible maps and diagrams for visually impaired users
Despite their omnipresence and essential role in our everyday lives, online and printed graphical representations are
inaccessible to visually impaired people because they cannot be explored using the sense of touch. The gap between sighted
and visually impaired people's access to graphical representations is constantly growing due to the increasing development
and availability of online and dynamic representations that not only give sighted people the opportunity to access large
amounts of data, but also to interact with them using advanced functionalities such as panning, zooming and filtering. In
contrast, the techniques currently used to make maps and diagrams accessible to visually impaired people require the
intervention of tactile graphics specialists and result in non-interactive tactile representations.
However, based on recent advances in the automatic production of content, we can expect in the coming years a growth in the
availability of adapted content, which must go hand-in-hand with the development of affordable and usable devices. In
particular, these devices should make full use of visually impaired users' perceptual capacities and support the display of
interactive and updatable representations. A number of research prototypes have already been developed. Some rely on digital
representation only, and although they have the great advantage of being instantly updatable, they provide very limited
tactile feedback, which makes their exploration cognitively demanding and imposes heavy restrictions on content. On the other
hand, most prototypes that rely on digital and physical representations allow for a two-handed exploration that is both
natural and efficient at retrieving and encoding spatial information, but they are physically limited by the use of a tactile
overlay, making them impossible to update. Other alternatives are either extremely expensive (e.g. braille tablets) or offer
a slow and limited way to update the representation (e.g. maps that are 3D-printed based on users' inputs).
In this thesis, we propose to bridge the gap between these two approaches by investigating how to develop physical
interactive maps and diagrams that support two-handed exploration, while at the same time being updatable and affordable. To
do so, we build on previous research on Tangible User Interfaces (TUI) and particularly on (actuated) tabletop TUIs, two
fields of research that have surprisingly received very little interest concerning visually impaired users. Based on the
design, implementation and evaluation of three tabletop TUIs (the Tangible Reels, the Tangible Box and BotMap), we propose
innovative non-visual interaction techniques and technical solutions that will hopefully serve as a basis for the design of
future TUIs for visually impaired users, and encourage their development and use. We investigate how tangible maps and
diagrams can support various tasks, ranging from the (re)construction of diagrams to the exploration of maps by panning and
zooming. From a theoretical perspective we contribute to the research on accessible graphical representations by highlighting
how research on maps can feed research on diagrams and vice-versa. We also propose a classification and comparison of
existing prototypes to deliver a structured overview of current research
Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility
Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold accessible and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and decision options. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards and the limited availability of sighted assistants. However, virtual objects and environments have no physicality.
Therefore, this thesis aims to research which VR interaction forms are reasonable (i.e., offering a reasonable dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology, disjoint in both content and technology, there is a lack of empirical evidence. Additionally, this thesis provides a survey of the different interactions.
After a consideration of human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and user-centered development and evaluation are used as classifiers. The following chapters are thus justified and motivated by explorative approaches, i.e., at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion).
The following chapters present empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback, and how different kinds of VR locomotion can be applied to explore virtual environments. From these, device-independent technological possibilities, as well as challenges for further improvements, are derived. On the basis of this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner)
- …