
    How much spatial information is lost in the sensory substitution process? Comparing visual, tactile, and auditory approaches

    Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2m using devices that transformed a 16 x 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2cm, 8cm, and 29cm using these visual, auditory, and haptic SSDs respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g. 16 x 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this ‘modality gap’ found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return information to their primary modality (e.g. visuospatial into visual).
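
A minimal sketch of the kind of depth-map-to-sound transformation described above, for illustration only: the 16 x 8 grid size comes from the abstract, but the specific mappings (column to stereo pan, row to pitch, depth to loudness) are assumptions, not the encoding the authors used.

```python
import numpy as np

ROWS, COLS = 8, 16  # depth-map resolution reported in the abstract

def sonify_depth_map(depth_m: np.ndarray) -> list[dict]:
    """Turn each occupied cell of a depth map into one tone descriptor."""
    assert depth_m.shape == (ROWS, COLS)
    tones = []
    for r in range(ROWS):
        for c in range(COLS):
            d = depth_m[r, c]
            if not np.isfinite(d):  # empty cell: no object detected
                continue
            tones.append({
                "pan": -1.0 + 2.0 * c / (COLS - 1),   # column -> left..right
                "pitch_hz": 220.0 * 2 ** (r / ROWS),  # row index scales pitch
                "gain": float(np.clip(1.0 / max(d, 0.1), 0.0, 1.0)),  # nearer -> louder
            })
    return tones

# Example: a single object 1.2 m away, centred in the field of view
depth = np.full((ROWS, COLS), np.inf)
depth[4, 8] = 1.2
print(sonify_depth_map(depth))
```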

    Schematisation in Hard-copy Tactile Orientation Maps

    This dissertation investigates the schematisation of computer-generated tactile orientation maps that support the mediation of spatial knowledge of unknown urban environments. Computer-generated tactile orientation maps are designed to provide the blind with an overall impression of their surroundings. Their details are displayed by means of elevated features that are created by embossers and can be distinguished by touch. The initial observation of this dissertation is that only very little information is actually transported through tactile maps, owing to the coarse resolution of the tactual senses and the cognitive effort involved in the serial exploration of tactile maps. However, the differences between computer-generated, embossed tactile maps and manufactured, deep-drawn tactile maps are significant. The possibilities and confines of communicating information through tactile maps produced with embossers are therefore a primary area of research. This dissertation demonstrates that the quality of embossed prints makes them an almost equal alternative to traditionally manufactured deep-drawn maps. Their great advantages are fast and individual production, low price (apart from the initial procurement costs for the printer), accessibility, and easy understanding without the need for prior time-consuming training. Simplification of tactile maps is essential, even more so than in other maps. It can be achieved by selecting a limited number from all available map elements. Qualitative simplification through schematisation may present an additional option to simplification through quantitative selection. In this context, schematisation is understood as cognitively motivated simplification of geometry with simultaneous maintenance of topology. Rather than further reducing the number of displayed objects, the investigation concentrates on how the presentation of different forms of streets (natural vs. straightened) and junctions (natural vs. prototypical) affects the transfer of knowledge. In a second area of research, the thesis establishes that qualitative simplification of tactile orientation maps through schematisation can enhance their usability and make them easier to understand than maps that have not been schematised. The dissertation shows that simplifying street forms and limiting junctions to prototypical forms not only accelerates map exploration but also has a beneficial influence on retention performance. The majority of participants who took part in the investigation selected a combination of both as their preferred display option. Tactile maps that have to be tediously explored through touch, uncovering every detail, complicate attaining a first impression or an overall perception. A third area of research therefore examines which aids could help map readers discover certain objects on the map quickly, without possessing a complete overview. Three types of aids are examined: guiding lines leading from the frame of the map to the object, position indicators represented by markers at the frame of the map, and coordinate specifications within a grid on the map. The dissertation shows that all three varieties can be realised by embossers. Although a guiding line proves to be fast in A4-size tactile maps containing only one target object and few distracting objects, it also impedes further exploration of the map (similar to the grid).
In the following, the advantages and drawbacks of the various aids in this and other applications are discussed. In conclusion, the dissertation elaborates on the points linking all three examinations and argues that cognitively motivated simplification should be a construction principle for embossed tactile orientation maps in order to support their use and comprehension. A summary establishes the recommendations that result from this dissertation regarding the construction of tactile orientation maps under the limitations imposed by embosser constraints. I then discuss how to adapt the schematisation of other maps according to their intended function, the prior knowledge of the map reader, and the relation between the time at which knowledge is acquired and the time at which it is employed. The dissertation closes with an insight into its limitations and conclusions and a prospective view of possible transfers of the findings to other applications, e.g. multimedia or interactive maps on pin-matrix displays and devices.
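
The two manipulations at the core of the investigation, straightened streets and prototypical junctions, can be made concrete with a short sketch. This is an illustrative reading of the approach, not the author's algorithm: it assumes junction nodes must stay fixed (preserving topology) while edge geometry is free to change, and it uses multiples of 45° as the prototypical junction angles.

```python
import math

def straighten(street: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Replace a street polyline by its straight endpoint connection.
    Topology is preserved because the junction nodes (endpoints) stay fixed."""
    return [street[0], street[-1]]

def prototypical_exit(p: tuple[float, float], q: tuple[float, float],
                      step_deg: float = 45.0) -> tuple[float, float]:
    """Snap the direction of the junction exit p -> q to the nearest
    prototypical angle (here assumed to be a multiple of 45 degrees)."""
    ang = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    snapped = math.radians(round(ang / step_deg) * step_deg)
    length = math.hypot(q[0] - p[0], q[1] - p[1])
    return (p[0] + length * math.cos(snapped), p[1] + length * math.sin(snapped))

# A slightly bent street becomes one straight edge; its ~39 degree exit snaps to 45.
street = [(0.0, 0.0), (1.1, 0.4), (2.0, 1.6)]
print(straighten(street))
print(prototypical_exit(street[0], street[-1]))
```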

    Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information

    Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments. Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1mm should be maintained for accurate line detection (Exp-1), (2) a minimum interline gap of 4mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4mm should be used to support tasks that require tracing vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line-tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps according to these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence that learning from vision and touch leads to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
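
Since the seven experiments yield concrete numeric thresholds, a map renderer could enforce them mechanically. The following sketch uses a line representation and check of my own devising; only the millimetre thresholds themselves are taken from the abstract.

```python
from dataclasses import dataclass

MIN_DETECT_WIDTH_MM = 1.0    # Exp-1: accurate line detection
MIN_INTERLINE_GAP_MM = 4.0   # Exp-2: discriminating parallel lines
MIN_TRACE_WIDTH_MM = 4.0     # Exp-4/5: tracing lines and learning paths

@dataclass
class VibroLine:
    width_mm: float
    gap_to_nearest_line_mm: float
    needs_tracing: bool  # will users follow this line, or merely detect it?

def guideline_violations(line: VibroLine) -> list[str]:
    """Report which of the empirically derived minimums a line violates."""
    required_width = MIN_TRACE_WIDTH_MM if line.needs_tracing else MIN_DETECT_WIDTH_MM
    issues = []
    if line.width_mm < required_width:
        issues.append(f"width {line.width_mm}mm < required {required_width}mm")
    if line.gap_to_nearest_line_mm < MIN_INTERLINE_GAP_MM:
        issues.append(f"gap {line.gap_to_nearest_line_mm}mm < {MIN_INTERLINE_GAP_MM}mm")
    return issues

print(guideline_violations(VibroLine(2.0, 3.0, needs_tracing=True)))
# -> ['width 2.0mm < required 4.0mm', 'gap 3.0mm < 4.0mm']
```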

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception emerges not from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The thesis's main aim is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in case of visual impairments. Chapter 1 summarizes the main research findings on the role of vision and multisensory experience in spatial development. Overall, such findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from our body. It might therefore be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective. Chapter 2 presents original studies carried out by me as a Ph.D. student to investigate the mechanisms underpinning spatial development and to compare the spatial performance of individuals with affected and typical visual experience, respectively visually impaired and sighted. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in the case of congenitally blind or blindfolded individuals respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience in spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation process of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of the new devices provides the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation.
Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment for the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential role of the novel experimental tools, validating their use to assess spatial competencies in response to unisensory and multisensory events and to train residual sensory modalities within a multisensory rehabilitation framework.

    Spatial Auditory Maps for Blind Travellers

    Empirical research shows that blind persons who have the ability and opportunity to access geographic map information tactually benefit in their mobility. Unfortunately, tangible maps are not found in large numbers. Economics is the leading explanation: tangible maps are expensive to build, duplicate and distribute. SAM, short for Spatial Auditory Map, is a prototype created to address the unavailability of tangible maps. SAM presents geographic information to a blind person encoded in sound. A blind person receives maps electronically and accesses them using a small, inexpensive digitizing tablet connected to a PC. The interface provides location-dependent sound as the stylus is manipulated by the user, plus a schematic visual representation for users with residual vision. The assessment of SAM with a group of blind participants suggests that blind users can learn unknown environments as complex as those represented by tactile maps, in the same amount of reading time. This research opens new avenues in visualization techniques, promotes alternative communication methods, and proposes a human-computer interaction framework for conveying map information to a blind person.
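
The interaction model is easy to picture as a lookup from stylus position to sound. The sketch below is an assumed reconstruction of that loop: the cell size, grid encoding, and file names are invented for illustration and are not details of the actual SAM prototype.

```python
CELL_MM = 10.0  # assumed size of one map cell on the digitizing tablet

# Map features keyed by (column, row) cell; contents are illustrative.
auditory_map = {
    (0, 0): "water.wav",
    (1, 0): "road.wav",
    (1, 1): "building.wav",
}

def sound_at(stylus_x_mm: float, stylus_y_mm: float) -> str | None:
    """Return the sound for the map feature under the stylus, if any."""
    cell = (int(stylus_x_mm // CELL_MM), int(stylus_y_mm // CELL_MM))
    return auditory_map.get(cell)

print(sound_at(14.0, 3.0))   # -> 'road.wav'
print(sound_at(55.0, 55.0))  # -> None (empty map region)
```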

    Seeing with ears: how we create an auditory representation of space with echoes and its relation with other senses

    Spatial perception is the capability that allows us to learn about the environment. All our senses are involved in creating a representation of the external world. When we create the representation of space we rely primarily on visual information, but it is the integration with the other senses that allows us a more global and truthful representation of it. While the influence of vision and the integration of the different senses in spatial perception have been widely investigated, many questions remain about the role of the auditory system in space perception and how it can be influenced by the other senses. Answering these questions in healthy people can help us better understand whether the same “rules” apply to, for example, people who have lost vision in the early stages of development. Understanding how spatial perception works in people blind from birth is essential for developing rehabilitative methodologies or technologies that help these people compensate for the lack of vision, since vision is the main source of spatial information. For this reason, one of the main scientific objectives of this thesis is to increase knowledge about auditory spatial perception in sighted and visually impaired people, thanks to the development of new tasks to assess spatial abilities. Moreover, I focus my attention on a recent investigative topic in humans, i.e. echolocation. Echolocation has great potential for improving the space and navigation skills of people with visual disabilities. Several studies demonstrate how the use of this technique can be favorable in the absence of vision, both at the perceptual level and at the social level. Given the importance of echolocation, we developed tasks to test the ability of novice users and put participants through echolocation training to see how long it takes to master this technique (in simple tasks). Instead of using blind individuals, we decided to test the ability of novice sighted people, to see whether the technique is related to blindness or not and whether it is possible to create a representation of space using echolocation.
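
The physics the training tasks rest on can be stated in one line: an emitted click returns after travelling to the reflecting surface and back, so distance is half the product of the speed of sound and the echo delay. A worked example (the delay value is illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def obstacle_distance_m(echo_delay_s: float) -> float:
    """Distance to a reflecting surface, from the click-to-echo delay."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# An echo arriving 11.7 ms after the click places the wall about 2 m away.
print(round(obstacle_distance_m(0.0117), 2))  # -> 2.01
```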

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely upon audition for a variety of purposes, among them the use of sound to identify the position of objects in their surrounding environment. This is not limited to localising sound-emitting objects, but extends to obstacles and environmental boundaries, thanks to their ability to extract information from reverberation and sound reflections, all of which can contribute to effective and safe navigation, as well as serving a function in certain assistive technologies thanks to the advent of binaural auditory virtual reality. It is known that head movements in the presence of sound elicit changes in the acoustical signals which arrive at each ear, and these changes can alleviate common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether the visually impaired naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation, a questionnaire assessing the self-reported use of head movement in auditory perception by visually impaired individuals (each comparing visually impaired and sighted participants), and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. It is found that visually impaired people self-report using head movement for auditory distance perception. This is supported by the head movements observed during the field study, whilst the acoustical analysis showed that interaural correlations for sound sources within 5m of the listener were reduced as head angle or distance to the sound source increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed.
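
The interaural cross-correlation measure behind the acoustical analysis can be sketched as the maximum normalised correlation between the two ear signals over physiologically plausible lags (about +/-1 ms). The noise-based test signal below is illustrative; the thesis analysed measured binaural responses.

```python
import numpy as np

def iacc(left: np.ndarray, right: np.ndarray, fs: int, max_lag_ms: float = 1.0) -> float:
    """Maximum normalised cross-correlation between the two ear signals."""
    max_lag = int(fs * max_lag_ms / 1000)
    l = (left - left.mean()) / (left.std() + 1e-12)
    r = (right - right.mean()) / (right.std() + 1e-12)
    corrs = [np.mean(l[max(0, -k):len(l) - max(0, k)] *
                     r[max(0, k):len(r) - max(0, -k)])
             for k in range(-max_lag, max_lag + 1)]
    return float(max(corrs))

fs = 48_000
noise = np.random.randn(fs)                # 1 s of noise at the left ear
delayed = np.roll(noise, 24)               # ~0.5 ms interaural time difference
print(round(iacc(noise, delayed, fs), 3))  # close to 1.0: highly correlated ears
```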

    Auditory Displays for People with Visual Impairments during Travel

    People who are blind or visually impaired encounter numerous barriers when travelling, which affects their quality of life. Although specialised electronic travel aids have been the focus of research for many years, they are still rarely used by the target group. One reason is that the technology provides the information users need only insufficiently; moreover, the interfaces rarely meet users' needs. In this thesis, we address these deficits and define the requirements for accessible travel in terms of information needs (what must be conveyed?) and non-functional requirements (how must it be conveyed?). We also propose several auditory displays that take into account the needs of people with visual impairments during travel. We design, implement and evaluate our interfaces following a user-centred approach, involving users and domain experts throughout the entire process. As a first step, we survey the information needs of people with disabilities in general, and of people with visual impairments in particular, when moving inside buildings. We also compare the collected information with what can currently be mapped in OpenStreetMap (OSM), a free geographic database, and make proposals for closing the gap. Our goal is to make it possible to map all the required information so that it can be used in solutions supporting independent travel. Having answered the question of which information is needed, we move on to the question of how it can be conveyed to users. We define a collection of non-functional requirements, which we refine and assess in a survey of 22 mobility trainers. We then propose a grammar, in other words a structured way of conveying information, for navigation instructions during outdoor travel that takes into account road edges, the presence of sidewalks, and intersections, all of which is important information for people with visual impairments. In addition, our grammar can also convey landmarks, points of interest, and obstacles, making the journey a more holistic and safer experience. We implement our grammar in an existing prototype and evaluate it with the target group. Indoors, descriptions of the surroundings have been shown to support the construction of mental maps, and thus to foster exploration and spontaneous decision-making better than navigation instructions do. We therefore define a grammar for conveying information about the indoor surroundings to people with visual impairments. We evaluate the grammar in an online study with 8 users from the target group and show that users require structured sentences with a fixed word order. Finally, we implement the grammar as a proof of concept in an existing prototype app. Speech output is the state of the art among output interfaces for people with visual impairments, but it also has drawbacks: it is inaccessible to people with reading difficulties and can be too slow for some users.
We address this problem and investigate the use of sonification, in the form of auditory icons combined with parameter mapping, to convey information about objects and their location in the environment. As an initial evaluation yielded positive results, we created, in a user-centred development approach, a dataset of short auditory icons for 40 everyday objects. We evaluate the dataset with 16 blind people and show that the sounds are intuitive. Finally, in a user study with 5 participants, we compare speech output with non-speech sonification. We show that, in terms of usability, sonification is just as well suited as speech for conveying coarse information about objects in the environment. We conclude by listing some advantages of speech and of sonification, intended to serve as a comparison and as a decision aid. This thesis addresses the needs of people with visual impairments during travel, in terms of both the information required and the interfaces. Following a user-centred approach, we propose several auditory interfaces based on speech and non-speech sonification. Through several user studies involving both users and experts, we design, implement and evaluate our interfaces. We show that electronic travel aids must be able to convey large amounts of information in a structured way, adapted to the context of use and to the preferences and abilities of the users.
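
The fixed-word-order requirement the indoor study surfaced lends itself to a template grammar. The slot set and ordering below are assumptions for illustration; the actual grammars cover road edges, sidewalks, intersections, landmarks, points of interest, and obstacles in a user-validated order.

```python
from dataclasses import dataclass

@dataclass
class Instruction:
    action: str            # e.g. "turn left", "continue straight"
    distance_m: int
    sidewalk: str          # e.g. "on the right-hand sidewalk"
    landmark: str | None = None

def render(ins: Instruction) -> str:
    """Render one navigation instruction with a fixed slot order."""
    parts = [f"In {ins.distance_m} metres, {ins.action}", ins.sidewalk]
    if ins.landmark is not None:
        parts.append(f"after the {ins.landmark}")
    return ", ".join(parts) + "."

print(render(Instruction("turn left", 20, "on the right-hand sidewalk", "bus stop")))
# -> "In 20 metres, turn left, on the right-hand sidewalk, after the bus stop."
```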