Sonification of guidance data during road crossing for people with visual impairments or blindness
In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution instead addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages.
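As a rough illustration of how a guiding mode based on data sonification can encode the amount of movement the user still has to make, the sketch below maps the angular misalignment with the crosswalk to pitch, pulse rate and stereo panning. The function name, parameter values and mappings are assumptions for illustration only, not the interface evaluated below.

    def alignment_to_tone(misalignment_deg, max_deg=90.0,
                          f_min=220.0, f_max=880.0,
                          rate_min=1.0, rate_max=8.0):
        """Map the user's angular misalignment with the crosswalk to audio
        parameters: larger errors produce a higher pitch and faster pulses,
        so the sound itself encodes how much correction is needed."""
        # Clamp and normalise the error to [0, 1].
        err = min(abs(misalignment_deg), max_deg) / max_deg
        # Exponential pitch mapping, since pitch perception is roughly
        # logarithmic in frequency.
        freq_hz = f_min * (f_max / f_min) ** err
        pulses_per_s = rate_min + (rate_max - rate_min) * err
        # Pan left or right depending on the sign of the error, telling the
        # user which way to turn (negative = turn left, positive = turn right).
        pan = 0.0 if misalignment_deg == 0 else (-1.0 if misalignment_deg < 0 else 1.0)
        return freq_hz, pulses_per_s, pan

    # Example: the user must rotate 30 degrees to the right to face the crossing.
    print(alignment_to_tone(30.0))  # about 349 Hz, about 3.3 pulses/s, pan 1.0 (right)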
Experimental evaluation shows that there is no guiding mode that is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires more effort than decoding the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than two thirds of test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: first, speech messages make it harder to hear the sounds of the environment; second, sonified messages convey information about the "quantity" of the expected movement.
MOBILE ASSISTIVE TECHNOLOGIES FOR PEOPLE WITH VISUAL IMPAIRMENT: SENSING AND CONVEYING INFORMATION TO SUPPORT ORIENTATION, MOBILITY AND ACCESS TO IMAGES
Smartphones are accessible to persons with visual impairment or blindness (VIB): screen reader technologies, integrated with mobile operating systems, enable non-visual interaction with the device. In addition, features like GPS receivers, inertial sensors and cameras enable the development of Mobile Assistive Technologies (MATs) to support people with VIB. A preliminary analysis, conducted with a user-centric approach, highlighted issues experienced by people with VIB in everyday activities in three main fields: orientation, mobility and access to images.
Traditional approaches to address these issues, based on assistive tools and technologies, have some limitations. In the field of mobility, for example, existing navigation support solutions (e.g., the white cane) cannot be used to perceive some environmental features, such as crosswalks or the current state of traffic lights. In the field of orientation, tactile maps adopted to develop cognitive maps of the environment are limited in the amount of information that can be represented on a single surface and by the lack of interactivity, two issues that also arise in other fields where access to graphical information is of paramount importance, such as the didactics of STEM subjects.
This work presents new MATs that deal with these limitations by introducing novel solutions in different fields of Computer Science. Original computer vision techniques, designed to detect the presence of pedestrian crossings and the state of traffic lights, are used to sense information from the environment and support mobility of people with VIB. Novel sonification techniques are introduced to efficiently convey information with three different goals: first, to convey guidance information in urban crossings; second, to enhance the development of cognitive maps by augmenting tactile surfaces; third, to enable quick access to images.
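As a hedged illustration of the kind of cue a crosswalk detector can exploit, the following sketch checks a grayscale image patch for the alternating bright/dark band pattern of zebra stripes. It is a toy heuristic with invented names and thresholds, not the original computer vision technique developed in this work.

    def looks_like_zebra(row_means, bright=180, dark=90, min_stripes=4):
        """Very rough zebra-crossing heuristic on a grayscale image patch:
        average each row's intensity, then count alternations between
        bright (paint) and dark (asphalt) bands. Real detectors use far
        more robust features, but the alternation idea is the core cue."""
        bands = []
        for m in row_means:
            label = "bright" if m >= bright else "dark" if m <= dark else None
            # Merge consecutive rows with the same label into one band.
            if label and (not bands or bands[-1] != label):
                bands.append(label)
        # A crossing should show several bright/dark alternations.
        alternations = sum(1 for a, b in zip(bands, bands[1:]) if a != b)
        return alternations >= min_stripes

    # Toy patch: average intensities of image rows, top to bottom.
    patch = [200, 210, 60, 55, 205, 198, 50, 58, 201, 199, 62]
    print(looks_like_zebra(patch))  # True: several paint/asphalt alternations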
Experience reported in this dissertation shows that the proposed MATs are effective in supporting people with VIB and, in general, that mobile devices are a versatile platform to enable affordable and pervasive access to assistive technologies. Involving target users in the evaluation of MATs emerged as a major challenge in this work. However, it is shown how this challenge can be addressed by adopting large-scale evaluation techniques typical of HCI research.
ASSISTIVE TECHNOLOGIES ON MOBILE DEVICES FOR PEOPLE WITH VISUAL IMPAIRMENTS
Spatial understanding and cognitive mapping are challenging tasks for people with visual impairments. The goal of this work is to leverage computer vision and spatial understanding techniques along with audio-haptic proprioceptive interaction paradigms for assisting people with visual impairments in spatial comprehension and memorization. Abstract space exploration in the field of assistive didactics is tackled through tactile exploration and audio feedback, resulting in two solutions: the first focuses on math learning in primary education, while the second focuses on tactile exploration and sonification of function graphs. In the field of spatial comprehension during way-finding for people with visual impairments, computer vision and spatial reasoning techniques are used for detecting visual cues such as zebra pedestrian crossings and for safely guiding the user with respect to the detected elements. Suitable interaction paradigms based on sonification and haptic feedback are designed to assist the user efficiently and quickly during navigation.
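For the function-graph solution, the basic idea of pairing tactile exploration with audio feedback can be pictured as a simple pitch mapping: the value f(x) under the exploring finger is turned into a tone frequency. All names and ranges below are illustrative assumptions, not the system's actual parameters.

    def graph_to_pitch(f, x, y_range=(-10.0, 10.0), f_min=200.0, f_max=1000.0):
        """Sonify a point on a function graph: as a finger at position x
        explores the tactile surface, the value f(x) is mapped to pitch,
        so rising curves sound as rising tones."""
        y = f(x)
        # Clamp y into the displayed range, then normalise to [0, 1].
        y = max(y_range[0], min(y_range[1], y))
        t = (y - y_range[0]) / (y_range[1] - y_range[0])
        return f_min + t * (f_max - f_min)

    # Sweep x over a parabola: pitch falls toward the vertex, then rises again.
    for x in [-3, -1, 0, 1, 3]:
        print(x, round(graph_to_pitch(lambda v: v * v - 4, x), 1))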
Experimental Analysis of a Spatialised Audio Interface for People with Visual Impairments
Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project that aims to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state of the art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.
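For readers unfamiliar with Fitts's Law, the sketch below shows the standard Shannon formulation of the model; the intercept and slope values are placeholders that would, in practice, be fitted to measured trials from a specific audio interface rather than the figures used here.

    import math

    def fitts_movement_time(distance, width, a=0.4, b=0.9):
        """Fitts's Law (Shannon formulation): predicted time to acquire a
        target of size `width` at `distance`, MT = a + b * log2(D/W + 1).
        The intercept a and slope b are made-up placeholders; in practice
        they are fitted to each interface from measured trials."""
        index_of_difficulty = math.log2(distance / width + 1)  # in bits
        return a + b * index_of_difficulty                      # in seconds

    # A far, narrowly localised audio target should take longer to reach
    # than a near, broadly localised one.
    print(round(fitts_movement_time(distance=4.0, width=0.5), 2))  # harder target
    print(round(fitts_movement_time(distance=1.0, width=0.5), 2))  # easier target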
An Orientation & Mobility Aid for People with Visual Impairments
Orientation & Mobility (O&M) comprises a set of techniques that help people with visual impairments find their way in everyday life. Nevertheless, they require extensive and very laborious one-on-one training with O&M instructors in order to integrate these techniques into their daily routines. While some of these techniques make use of assistive technologies, such as the long white cane, points-of-interest databases or compass-based orientation systems, an inconspicuous communication gap exists between the available aids and navigation systems.
In recent years, mobile computing systems, in particular smartphones, have become ubiquitous. This opens up the possibility for modern computer vision techniques to support human vision in everyday problems that arise from non-accessible design. However, particular care must be taken not to interfere with people's specific personal skills and trained behaviors, or, in the worst case, even to contradict O&M techniques.
In this dissertation, we identify a spatial and systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long white cane, can only help to perceive the environment within a limited range, while navigation information is kept only very coarse. In addition, this gap also arises systemically between these two components: the white cane does not know the route, while a navigation system does not consider nearby obstacles or O&M techniques. We therefore propose several approaches to closing this gap, improving the connection and communication between orientation aids and navigation information, and addressing the problem from both directions. To provide useful and relevant information, we first identify the most important requirements for assistive systems and establish several key concepts that we observe in our algorithms and prototypes.
Existing assistive systems for orientation are mainly based on global navigation satellite systems. We attempt to improve these by creating a routing algorithm based on guidance lines, which can be adapted to and takes into account individual needs. The generated routes are imperceptibly longer, but also much safer, according to objective criteria developed in cooperation with O&M instructors. Furthermore, we improve the availability of relevant georeferenced databases that are required for such needs-based routing. To this end, we develop a machine learning approach to detect zebra crosswalks in aerial imagery, which also works across country borders and improves on the state of the art.
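The needs-based routing idea can be pictured as a shortest-path search whose edge costs are scaled by user-specific penalties, so that safer, guidance-line-following detours win over shorter but riskier legs. The sketch below uses an invented toy graph and penalty values; it is not the dissertation's actual algorithm.

    import heapq

    def safest_route(graph, start, goal, prefs):
        """Dijkstra over a pedestrian graph where each edge carries a length
        and tags (e.g. 'guideline', 'uncontrolled_crossing'); the cost is the
        length multiplied by user-specific penalties, so routes bend toward
        guidance lines and safe crossings."""
        queue, seen = [(0.0, start, [start])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, length, tags in graph.get(node, []):
                penalty = 1.0
                for tag in tags:
                    penalty *= prefs.get(tag, 1.0)
                heapq.heappush(queue, (cost + length * penalty, nxt, path + [nxt]))
        return None

    # Two ways from A to D: a short leg over an uncontrolled crossing, or a
    # slightly longer detour that follows a tactile guidance line.
    graph = {
        "A": [("B", 50, ["uncontrolled_crossing"]), ("C", 60, ["guideline"])],
        "B": [("D", 40, [])],
        "C": [("D", 45, ["guideline"])],
    }
    prefs = {"uncontrolled_crossing": 3.0, "guideline": 0.8}  # avoid / prefer
    print(safest_route(graph, "A", "D", prefs))  # the detour via C wins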
To maximize the benefit of computer-vision-based mobility assistance, we design approaches modeled on O&M techniques that increase spatial awareness of the immediate surroundings. First, we consider the available free space and also report possible obstacles. Furthermore, we develop a novel approach to detect and precisely localize available guidance lines, and we generate virtual guidance lines that bridge interruptions and provide early information about the next guidance line. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crosswalks and pedestrian traffic lights, with a deep learning approach.
To analyze whether our approaches and algorithms provide actual added value for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our needs-based routing, with a very encouraging result. Furthermore, we carry out a more extensive study with several components, focusing on pedestrian crossings. Although our statistical evaluations show only a minor improvement, affected by technical problems with the first prototype and too short a familiarization period for the participants, we receive very promising comments from almost all study participants. This shows that we have already taken an important first step toward closing the identified gap and have thereby improved orientation and mobility for people with visual impairments.
A comparative study in real-time scene sonification for visually impaired people
In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification and path sonification, which convey raw depth images, obstacle information and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of a pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants in the experiment, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as sound feedback in virtual reality applications.
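As an illustrative guess at how low-level depth-image sonification can work (not the exact scheme proposed in the paper), the sketch below sweeps across the columns of a depth scan line, panning each column from left to right and letting nearer obstacles sound louder and lower-pitched than distant ones.

    def sonify_depth_row(depth_columns, d_max=5.0, f_low=300.0, f_high=900.0):
        """One possible depth-image sonification: sweep over image columns
        from left to right, pan each column's tone accordingly, and let
        nearer obstacles sound louder and lower-pitched than distant ones."""
        events = []
        n = len(depth_columns)
        for i, depth_m in enumerate(depth_columns):
            pan = -1.0 + 2.0 * i / (n - 1)                   # left .. right
            closeness = 1.0 - min(depth_m, d_max) / d_max    # 0 = far, 1 = near
            volume = closeness                               # near -> loud
            freq_hz = f_high - closeness * (f_high - f_low)  # near -> low tone
            events.append((round(pan, 2), round(volume, 2), round(freq_hz)))
        return events

    # Toy scan line: an obstacle about 1 m away, slightly left of centre.
    print(sonify_depth_row([4.5, 3.8, 1.0, 2.5, 4.9]))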
Public Participation in the Development Process of a Mobility Assistance System for Visually Impaired Pedestrians
Blind and visually impaired people must cope with moving safely through public space and with their (often lacking) knowledge of spatial conditions and walkable routes. These challenges often lead to a fear of accidents and collisions, and frequently also of disorientation. This, in turn, can result in a reduced radius of action, restricted mobility and, later on, social isolation. Against this background, the project TERRAIN aims at developing a technical guidance system for orientation and navigation in urban space. For the development of this assistance system, the project pursues an approach in which reflexive, responsive and deliberative dimensions have been integrated to address the ethical, legal and social implications (ELSI) in a co-design process. This paper focuses on the participation of citizens, regardless of visual impairment, in the project, which provided a variety of relevant indications of impacts and potential technical adaptations from an ‘outer’ point of view. In addition, conclusions can be drawn about the desirability and acceptance of the technical solution among potential users as well as within their social environment. It also turned out that the citizen participation process raised different expectations among the project partners. Therefore, this article evaluates the participation results from the perspective of the technology developers and the technology assessors.
A Sound Approach Toward a Mobility Aid for Blind and Low-Vision Individuals
Reduced independent mobility of blind and low-vision individuals (BLVIs) causes considerable societal cost, burden on relatives, and reduced quality of life for the individuals, including increased anxiety, depression symptoms, need of assistance, risk of falls, and mortality. Despite the numerous electronic travel aids proposed since at least the 1940s, along with ever-advancing technology, the mobility issues persist. A substantial reason for this is likely a number of severe shortcomings of the field, regarding both aid design and evaluation.
In this work, these shortcomings are addressed with a generic design model called Desire of Use (DoU), which describes the desire of a given user to use an aid for a given activity. It is then applied to the mobility of BLVIs (DoU-MoB), to systematically illuminate and structure possibly all related aspects that such an aid needs to aptly deal with, in order for it to become an adequate aid for the objective. These aspects can then guide both user-centered design and the choice of test methods and measures.
One such measure is then demonstrated in the Desire of Use Questionnaire for Mobility of Blind and Low-Vision Individuals (DoUQ-MoB), an aid-agnostic and comprehensive patient-reported outcome measure. The question construction originates from the DoU-MoB to ensure an encompassing focus on the mobility of BLVIs, something that has been missing in the field. Since it is aid-agnostic, it facilitates aid comparison, which it also actively promotes. To support the reliability of the DoUQ-MoB, it utilizes the best known practices of questionnaire design and has been validated once with eight orientation and mobility professionals and six BLVIs. Based on this, the questionnaire has also been revised once.
To allow for relevant and reproducible methodology, another tool presented herein is a portable virtual reality (VR) system called the Parrot-VR. It uses a hybrid control scheme: absolute rotation, obtained by tracking the user's head in reality, affords intuitive turning, while relative movement, in which simple button presses on a controller move the virtual avatar forward and backward, allows large-scale traversal without physical walking. VR provides excellent reproducibility, making various aggregate movement analyses feasible, and it is also inherently safe. Meanwhile, the portability of the system facilitates testing near the participants, substantially increasing the number of potential blind and low-vision recruits for user tests.
The thesis also gives a short account of the state of long-term testing in the field; it is short mainly because there is not much to report. It then provides an initial investigation into possible outcome measures for such tests, taking instruments used by Swedish orientation and mobility professionals as a starting point. Two of these are also piloted in an initial single-session trial with 19 BLVIs, and could plausibly be used for long-term tests after further evaluation.
Finally, a discussion is presented regarding the Audomni project: the development of a primary mobility aid for BLVIs. Audomni is a visuo-auditory sensory supplementation device, which aims to take visual information and translate it to sound. A wide field-of-view 3D depth camera records the environment, which is then transformed to audio through the sonification algorithms of Audomni, and finally presented through a pair of open-ear headphones that do not block out environmental sounds.
The design of Audomni leverages the DoU-MoB to ensure user-centric development and evaluation, with the aim of reaching an aid with such form and function that it grants the users better mobility, while the users still want to use it. Audomni has been evaluated with user tests twice: once in pilot tests with two BLVIs, and once in VR with a heterogeneous set of 19 BLVIs, utilizing the Parrot-VR and the DoUQ-MoB. 76% of responders (13/17) answered that it was very or extremely likely that they would want to use Audomni along with their current aid. This might be the first result in the field demonstrating a majority of blind and low-vision participants reporting that they actually want to use a new electronic travel aid. This shows promise that eventual long-term tests will demonstrate increased mobility of blind and low-vision users, the overarching project aim. Such results would ultimately mean that Audomni can become an aid that alleviates societal cost, reduces burden on relatives, and improves users' quality of life and independence.
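The hybrid control scheme of the Parrot-VR described above can be sketched as follows; the class, method names and step size are invented for illustration, and only the split between absolute head-tracked rotation and relative button-driven translation is taken from the abstract.

    import math

    class ParrotStyleLocomotion:
        """Minimal sketch of a hybrid locomotion scheme: the avatar's heading
        is taken absolutely from the tracked head yaw, while forward/backward
        button presses translate the avatar relatively along that heading, so
        large virtual spaces can be covered without walking physically."""

        def __init__(self, step_m=0.5):
            self.x, self.y = 0.0, 0.0
            self.heading_rad = 0.0
            self.step_m = step_m

        def update_head_yaw(self, yaw_deg):
            # Absolute rotation: the avatar always faces where the head faces.
            self.heading_rad = math.radians(yaw_deg)

        def press(self, button):
            # Relative movement: one press = one step along the current heading.
            sign = {"forward": 1.0, "backward": -1.0}[button]
            self.x += sign * self.step_m * math.cos(self.heading_rad)
            self.y += sign * self.step_m * math.sin(self.heading_rad)

    avatar = ParrotStyleLocomotion()
    avatar.update_head_yaw(90)   # user physically turns to face "north"
    avatar.press("forward")
    avatar.press("forward")
    print(round(avatar.x, 2), round(avatar.y, 2))  # approximately (0.0, 1.0)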