An Orientation & Mobility Aid for People with Visual Impairments
Orientation & Mobility (O&M) comprises a set of techniques that help people with visual impairments find their way in everyday life. Even so, integrating these techniques into daily routines requires extensive and very labor-intensive one-on-one training with O&M instructors. While some of these techniques rely on assistive technologies, such as the long white cane, points-of-interest databases, or compass-based orientation systems, an inconspicuous communication gap remains between the available aids and navigation systems.
In recent years, mobile computing systems, smartphones in particular, have become ubiquitous. This opens up the possibility for modern computer vision techniques to support human vision with everyday problems caused by non-accessible design. Nevertheless, special care must be taken not to interfere with people's specific personal competencies and trained behaviors, or, in the worst case, even contradict O&M techniques.
In this dissertation we identify a spatial and a systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long white cane, can only help perceive the environment within a limited range, while navigation information is kept very coarse. In addition, the gap is also systemic, arising between these two components: the long cane does not know the route, while a navigation system does not consider nearby obstacles or O&M techniques. We therefore propose several approaches to close this gap, improving the connection and communication between orientation aids and navigation information, and approach the problem from both directions. To provide useful, relevant information, we first identify the most important requirements for assistive systems and derive several key concepts that we observe in our algorithms and prototypes.
Existing assistive orientation systems are mainly based on global navigation satellite systems. We attempt to improve on them by creating a guideline-based routing algorithm that can be adapted to, and takes into account, individual needs. The generated routes are imperceptibly longer but much safer, according to objective criteria developed in collaboration with O&M instructors. We also improve the availability of the relevant georeferenced databases required for such needs-based routing. To this end, we create a machine learning approach that detects zebra crossings in aerial imagery, which also works across country borders, improving on the state of the art.
To maximize the benefit of computer-vision-based mobility assistance, we create approaches modeled on O&M techniques to increase spatial awareness of the immediate surroundings. We first consider the available open space and also report possible obstacles. Furthermore, we create a novel approach to detect and precisely localize available guidelines, and generate virtual guidelines that bridge interruptions and provide information about the next guideline early on. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crossings and pedestrian traffic lights, with a deep learning approach.
To analyze whether our approaches and algorithms provide actual added value for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our needs-based routing, with a very encouraging result. We also carry out a more extensive study with several components, focused on pedestrian crossings. Although our statistical evaluations show only a minor improvement, affected by technical problems with the first prototype and too short a familiarization period for the participants, we receive very promising comments from almost all study participants. This shows that we have already taken an important first step toward closing the identified gap and have thereby improved orientation and mobility for people with visual impairments.
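The dissertation's zebra-crossing detector is a learned model; the underlying visual cue, however, is simply that a crossing appears in aerial imagery as a regular alternation of bright and dark stripes. The following sketch (a hypothetical `stripe_score` heuristic on synthetic data, not the dissertation's actual approach) flags such an alternation in a 1-D intensity profile:

```python
import numpy as np

def stripe_score(profile: np.ndarray, min_stripes: int = 4) -> bool:
    """Return True if the 1-D intensity profile shows a regular
    bright/dark alternation, as a zebra crossing would produce.

    Deliberately simplified heuristic, not the learned detector
    described in the dissertation.
    """
    # Center the profile so bright stripes are positive, dark negative.
    centered = profile - profile.mean()
    signs = np.sign(centered)
    # Count transitions between bright and dark segments.
    transitions = np.count_nonzero(np.diff(signs) != 0)
    # Require enough alternations and sufficient contrast.
    contrast = profile.max() - profile.min()
    return bool(transitions >= 2 * min_stripes - 1 and contrast > 0.5)

# Synthetic example: five bright stripes on dark asphalt.
x = np.linspace(0, 10 * np.pi, 400)
crossing = (np.sin(x) > 0).astype(float)   # alternating 0/1 stripes
plain_road = np.full(400, 0.2)             # uniform asphalt
print(stripe_score(crossing))    # alternating pattern -> True
print(stripe_score(plain_road))  # uniform surface     -> False
```

A real detector must additionally cope with rotation, perspective, occlusion, and lighting, which is why the dissertation resorts to machine learning.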
How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers
Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Science in 1970, these centers were intended to take a “total approach to rehabilitation”, combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that both people with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through learning, exercise, and plasticity (rather than being static), which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.
Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects
The development of many tools and technologies for people with visual impairment has become a major priority in the field of assistive technology research. However, many of these technology advancements have limitations in terms of the human aspects of the user experience (e.g., usability, learnability, and time to user adaptation) as well as difficulties in translating research prototypes into production. There is also no clear distinction between assistive aids for adults and for children, or between “partial impairment” and “total blindness”. As a result of these limitations, the produced aids have not gained much popularity and the intended users are still hesitant to utilise them. This paper presents a comprehensive review of substitutive interventions that aid in adapting to vision loss, centred on laboratory research studies to assess user-system interaction and system validation. Depending on the primary cueing feedback signal offered to the user, these technology aids are categorized as visual, haptic, or auditory-based aids. The context of use, cueing feedback signals, and participation of visually impaired people in the evaluation are all considered while discussing these aids. Based on the findings, a set of recommendations is suggested to assist the scientific community in addressing persisting challenges and restrictions faced by both totally blind and partially sighted people.
Sensory Augmentation for Navigation in Difficult Urban Environments by People With Visual Impairment
Independent mobility in completing such tasks as walking through a town centre is taken for granted by able-bodied individuals. However, for those with a disability such as impairment of vision, mobility and navigation can become challenging tasks not easily undertaken. The barriers to access for blind and partially sighted individuals are increased when familiar navigational cues are removed in difficult urban environments such as Shared Space. The research consisted of investigating methods of navigation employed by people with visual impairment and designing a device to restore confidence to this group, so as to lower the barriers of access to such environments.
Investigation was carried out through the deployment of a questionnaire; discussions with groups representing blind and partially sighted people; and a site visit to Shared Space environments. Statistical analysis was carried out on the results of the questionnaire to ascertain the navigational habits of blind and partially sighted individuals in different environments. From the analysis and the results of the discussions and site visit it was established that it would be socially acceptable to design a secondary aid to navigation that would complement the primary aids of long cane or guide dog. A concept experiment was carried out to test the idea that knowledge about changes in surface colour could help with navigation.
A prototype device that could be used by individuals with visual impairment to increase their confidence when navigating a difficult environment was designed, built and tested. Different programming methods were researched and trialled to use machine vision effectively, analysing the video feed from a passive camera and returning useful information to a blind or partially sighted user.
The device was tested indoors and outdoors and found to be effective at detecting changes in surface colour. Further work is needed to run the software on a more compact platform such as a mobile phone, but initial results show that the concept is viable and that the barriers faced by blind and partially sighted people navigating difficult urban environments can be much reduced through the use of this technology.
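The abstract does not detail the vision pipeline, but the core idea of detecting a change in surface colour can be sketched as a frame-to-frame comparison of the mean colour of a ground patch. The following Python sketch uses a hypothetical `surface_changed` helper and synthetic frames; the prototype's actual algorithm may differ substantially:

```python
import numpy as np

def surface_changed(prev_frame: np.ndarray, frame: np.ndarray,
                    threshold: float = 30.0) -> bool:
    """Flag a change in ground surface colour between two frames.

    Illustrative only. Frames are H x W x 3 RGB arrays; we compare
    the mean colour of the lower third of the image, where the
    ground usually appears in a forward-facing camera view.
    """
    h = frame.shape[0]
    patch_prev = prev_frame[2 * h // 3:].reshape(-1, 3).mean(axis=0)
    patch_now = frame[2 * h // 3:].reshape(-1, 3).mean(axis=0)
    # Euclidean distance in RGB space as a crude colour-difference measure.
    return float(np.linalg.norm(patch_now - patch_prev)) > threshold

# Grey pavement followed by a red tactile-paving strip entering the frame.
grey = np.full((90, 120, 3), 128.0)
red = grey.copy()
red[60:] = [180.0, 60.0, 60.0]
print(surface_changed(grey, grey))  # same surface   -> False
print(surface_changed(grey, red))   # colour change  -> True
```

A deployed system would also need to compensate for lighting changes, e.g. by comparing in a perceptual colour space rather than raw RGB.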
On supporting university communities in indoor wayfinding: An inclusive design approach
Mobility can be defined as the ability of people to move, live and interact with the space around them. In this context, indoor mobility, in terms of indoor localization and wayfinding, is a relevant topic due to the challenges it presents in comparison with outdoor mobility, where GPS can be exploited. Knowing how to move in an indoor environment can be crucial for people with disabilities, and in particular for blind users, but it can also provide several advantages to any person who is moving in an unfamiliar place. Following this line of thought, we employed an inclusive-by-design approach to implement and deploy a system that comprises an Internet of Things infrastructure and an accessible mobile application to provide wayfinding functions, targeting the University community. As a real-world case study, we considered the University of Bologna, designing a system able to be deployed in buildings with different configurations and settings, including historical buildings. The final system has been evaluated in three different scenarios, considering three different target audiences (18 users in total): i. students with disabilities (i.e., visual and mobility impairments); ii. campus students; and iii. visitors and tourists. Results reveal that all the participants enjoyed the provided functions and that the indoor localization strategy was accurate enough to provide a good wayfinding experience.
Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind
Recent advancements in the field of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, the preliminary implementations of such systems only use visual interfaces, meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This necessitates the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research has empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth was a non-perceptual mode in which object descriptions were given via spatial language using clock-face angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicate the hand-motion-triggered mode to be better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode, head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance across these modes and found no significant performance differences between them.
These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices, without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning performance by blindfolded participants with an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings. Also, there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications as they support the development of accessible, perceptually driven interfaces for smartphones.
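The non-perceptual spatial-language mode describes object directions with clock-face angles. As a minimal sketch, assuming a simple nearest-hour mapping (the thesis does not specify its exact conversion, so `clockface` is a hypothetical helper), an azimuth can be turned into a clock-face phrase like this:

```python
def clockface(azimuth_deg: float) -> str:
    """Convert an azimuth (degrees clockwise from straight ahead)
    into a clock-face phrase, rounding to the nearest hour mark.

    Hypothetical mapping for illustration; 12 o'clock is straight
    ahead, 3 o'clock is directly to the right.
    """
    # 360 degrees / 12 hours = 30 degrees per hour mark.
    hour = round((azimuth_deg % 360) / 30) % 12
    return f"{12 if hour == 0 else hour} o'clock"

print(clockface(0))    # straight ahead   -> "12 o'clock"
print(clockface(90))   # directly right   -> "3 o'clock"
print(clockface(200))  # behind and left  -> "7 o'clock"
```

The thesis's finding is precisely that such language-based descriptions require cognitive mediation and underperform perceptual (spatialized audio) interfaces.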
MOBILE ASSISTIVE TECHNOLOGIES FOR PEOPLE WITH VISUAL IMPAIRMENT: SENSING AND CONVEYING INFORMATION TO SUPPORT ORIENTATION, MOBILITY AND ACCESS TO IMAGES
Smartphones are accessible to persons with visual impairment or blindness (VIB): screen reader technologies, integrated with mobile operating systems, enable non-visual interaction with the device. Also, features like GPS receivers, inertial sensors and cameras enable the development of Mobile Assistive Technologies (MATs) to support people with VIB. A preliminary analysis, conducted adopting a user-centric approach, highlighted some issues experienced by people with VIB in everyday activities from three main fields: orientation, mobility and access to images.
Traditional approaches to address these issues, based on assistive tools and technologies, have some limitations. In the field of mobility, for example, existing navigation support solutions (e.g. the white cane) cannot be used to perceive some environmental features like crosswalks or the current state of traffic lights. In the field of orientation, tactile maps adopted to develop cognitive maps of the environment are limited both in the amount of information that can be represented on a single surface and by their lack of interactivity, two issues experienced also in other fields where access to graphical information is of paramount importance, such as the didactics of STEM subjects.
This work presents new MATs that deal with these limitations by introducing novel solutions in different fields of Computer Science. Original computer vision techniques, designed to detect the presence of pedestrian crossings and the state of traffic lights, are used to sense information from the environment and support mobility of people with VIB. Novel sonification techniques are introduced to efficiently convey information with three different goals: first, to convey guidance information in urban crossings; second, to enhance the development of cognitive maps by augmenting tactile surfaces; third, to enable quick access to images.
Experience reported in this dissertation shows that the proposed MATs are effective in supporting people with VIB and, in general, that mobile devices are a versatile platform to enable affordable and pervasive access to assistive technologies. Involving target users in the evaluation of MATs emerged as a major challenge in this work. However, it is shown how such a challenge can be addressed by adopting the large-scale evaluation techniques typical of HCI research.
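The abstract leaves the sonification design unspecified. As one plausible illustration of guidance sonification in urban crossings, lateral deviation from the crossing axis could be mapped to stereo panning and remaining distance to pitch; the `sonify_offset` helper and its parameters below are entirely hypothetical:

```python
import math

def sonify_offset(offset: float, distance_m: float,
                  base_hz: float = 440.0):
    """Map a crossing-guidance error to stereo gains and a beep pitch.

    Hypothetical parameterisation: `offset` in [-1, 1] is the lateral
    deviation from the crossing axis (negative = veering left), panned
    left/right; pitch rises as the far curb gets closer.
    """
    offset = max(-1.0, min(1.0, offset))
    # Equal-power panning keeps perceived loudness roughly constant.
    angle = (offset + 1.0) * math.pi / 4.0   # 0 .. pi/2
    left, right = math.cos(angle), math.sin(angle)
    # One octave of pitch range over the last 10 metres.
    freq = base_hz * 2.0 ** (max(0.0, 10.0 - distance_m) / 10.0)
    return left, right, freq

l, r, f = sonify_offset(0.0, 10.0)  # centred, far curb 10 m away
print(round(l, 3), round(r, 3), f)  # equal gains, base pitch
```

Equal-power panning is a standard audio-engineering choice; whether this dissertation uses it is an assumption.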
Applications of MEMS Gyroscope for Human Gait Analysis
After decades of development, quantitative instruments for human gait analysis have become an important tool for revealing underlying pathologies manifested by gait abnormalities. However, the gold-standard instruments (e.g., optical motion capture systems) are commonly expensive and complex, need expert operation and maintenance, and are thereby limited to a small number of specialized gait laboratories. Therefore, in current clinical settings, gait analysis still mainly relies on visual observation and assessment. Due to recent developments in microelectromechanical systems (MEMS) technology, the cost and size of gyroscopes are decreasing while their accuracy improves, which provides an effective way of quantifying gait features. This chapter aims to give a close examination of human gait patterns (normal and abnormal) using gyroscope-based wearable technology. Both healthy subjects and hemiparesis patients participated in the experiment, and the experimental results show that foot-mounted gyroscopes can assess gait abnormalities in both the temporal and spatial domains. Gait analysis systems constructed from wearable gyroscopes can be used more easily in both clinical and home environments than their gold-standard counterparts, as they have few requirements for operation, maintenance, and working environment, suggesting a promising future for gait analysis.
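As a minimal sketch of the idea behind gyroscope-based gait analysis: mid-swing produces a prominent peak in the sagittal-plane angular velocity of a foot-mounted sensor, and the interval between successive peaks approximates the stride time. The peak-picking thresholds below are illustrative, not the chapter's validated method, and the walking signal is synthetic:

```python
import numpy as np

def midswing_peaks(gyro_z: np.ndarray, fs: float,
                   min_height: float = 2.0, min_gap_s: float = 0.5):
    """Locate mid-swing peaks in foot-mounted sagittal angular
    velocity (rad/s) and return (peak indices, stride times in s).

    Clinical systems add calibration, filtering, and validated
    event definitions; this is only the core peak-picking step.
    """
    min_gap = int(min_gap_s * fs)
    peaks = []
    for i in range(1, len(gyro_z) - 1):
        # Local maximum above the height threshold...
        if (gyro_z[i] >= min_height
                and gyro_z[i] > gyro_z[i - 1]
                and gyro_z[i] >= gyro_z[i + 1]):
            # ...and far enough from the previously accepted peak.
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    stride_times = np.diff(peaks) / fs
    return peaks, stride_times

# Synthetic walking signal: one swing peak per second, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 5, 1 / fs)
gyro = 3.0 * np.maximum(np.sin(2 * np.pi * 1.0 * t), 0.0) ** 4
peaks, strides = midswing_peaks(gyro, fs)
print(len(peaks), strides.round(2))  # five peaks, ~1.0 s strides
```

Spatial parameters such as stride length additionally require integrating the angular velocity and applying drift correction, which is where most of the engineering effort in such systems goes.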