
    Testing Two Tools for Multimodal Navigation

    The latest smartphones, with GPS, electronic compasses, directional audio, touch screens, and so forth, hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, non-speech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing them to interact directly with the surrounding environment.
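    The point-and-search interaction described above essentially turns a GPS fix and a compass bearing into a spatial query. The sketch below is a minimal illustration of that idea, assuming an in-memory list of points of interest; the PointOfInterest structure, the bearing tolerance, and the haversine-based filtering are illustrative assumptions, not the tested applications' actual implementation.

```python
import math
from dataclasses import dataclass

# Illustrative sketch of "point in a direction to search": not the paper's
# implementation, just one plausible way to turn a GPS fix plus a compass
# bearing into a query over nearby points of interest.

@dataclass
class PointOfInterest:
    name: str
    lat: float  # degrees
    lon: float  # degrees

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0-360 degrees) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def point_to_search(user_lat, user_lon, heading_deg, pois,
                    max_range_m=500.0, tolerance_deg=15.0):
    """Return POIs that lie roughly in the pointed direction and within range."""
    hits = []
    for poi in pois:
        dist = haversine_m(user_lat, user_lon, poi.lat, poi.lon)
        if dist > max_range_m:
            continue
        # Smallest signed difference between POI bearing and phone heading.
        delta = abs((bearing_deg(user_lat, user_lon, poi.lat, poi.lon)
                     - heading_deg + 180.0) % 360.0 - 180.0)
        if delta <= tolerance_deg:
            hits.append((dist, poi))
    return [poi for _, poi in sorted(hits, key=lambda t: t[0])]

if __name__ == "__main__":
    pois = [PointOfInterest("Cafe", 59.3326, 18.0653),
            PointOfInterest("Museum", 59.3341, 18.0667)]
    # User stands at a fixed GPS position and points the phone north-east.
    print(point_to_search(59.3325, 18.0649, 45.0, pois))
```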

    Interactive audio-tactile maps for visually impaired people

    Visually impaired people face important challenges related to orientation and mobility. Indeed, 56% of visually impaired people in France report having problems with autonomous mobility. These problems often mean that visually impaired people travel less, which affects their personal and professional lives and can lead to exclusion from society. This issue therefore presents a social challenge as well as an important research area. Accessible geographic maps are helpful for acquiring knowledge about a city's or neighborhood's configuration, as well as for selecting a route to reach a destination. Traditionally, raised-line paper maps with braille text have been used. These maps have proved to be efficient for the acquisition of spatial knowledge by visually impaired people. Yet they possess significant limitations. For instance, due to the specificities of the tactile sense, only a limited amount of information can be displayed on a single map, which dramatically increases the number of maps that are needed. For the same reason, it is difficult to represent specific information such as distances. Finally, braille labels are used for textual descriptions, but only a small percentage of the visually impaired population reads braille. In France, 15% of blind people are braille readers, and only 10% can both read and write it. In the United States, fewer than 10% of legally blind people are braille readers, and only 10% of blind children actually learn braille. Recent technological advances have enabled the design of interactive maps that aim to overcome these limitations. Indeed, interactive maps have the potential to provide a broad spectrum of the population with spatial knowledge, irrespective of age, impairment, skill level, or other factors. In this regard, they might be an efficient means of providing visually impaired people with access to geospatial information. In this paper we give an overview of our research on making geographic maps accessible to visually impaired people.

    Map design for visually impaired people: past, present, and future research

    Orientation and mobility are amongst the most important challenges for visually impaired people. Tactile maps can provide them with spatial knowledge of their environment, thereby reducing fear related to travelling. To date, raised-line paper maps have been used to make geographic information accessible, but these paper maps have significant limitations with regard to the content and presentation of information. Recent advances in technology may help to design usable interactive maps that overcome such limitations. In this paper, we first review different accessible map concepts. We then present our design of an interactive map prototype and provide evidence of this interactive map's high user satisfaction and efficiency compared to a regular raised-line paper map. To conclude, we suggest that advances in interactive technologies (e.g., haptic touch surfaces) provide a unique opportunity to design usable maps in the near future.

    Virtual Exploration of Underwater Archaeological Sites : Visualization and Interaction in Mixed Reality Environments

    This paper describes the ongoing developments in Photogrammetry and Mixed Reality for the Venus European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites that are out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The basic idea behind using Mixed Reality techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in the way they present information: the general public activities emphasize the visual and auditory realism of the reconstruction, while the archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which led to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process as well as on issues concerning both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain high-quality terrain reproduction. The second concerns the development of the Virtual and Augmented Reality (VR/AR) demonstrator for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness of both the artifacts that were found and the process by which they were discovered, by recreating the dive from ship to seabed.
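    As a rough illustration of what densifying seabed points into a terrain means in practice, the sketch below interpolates a sparse set of (x, y, depth) samples onto a regular grid using inverse-distance weighting. This is only a toy stand-in under stated assumptions; the project's actual densification works on matched image points from the photogrammetric survey and is considerably more involved.

```python
import numpy as np

# Toy illustration of seabed "densification": interpolate a sparse set of
# surveyed (x, y, depth) samples onto a regular grid with inverse-distance
# weighting, producing a dense heightmap suitable for terrain rendering.

def densify_seabed(points, cell_size=1.0, power=2.0, eps=1e-6):
    """points: (N, 3) array of x, y, z samples -> (grid_x, grid_y, grid_z)."""
    pts = np.asarray(points, dtype=float)
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max() + cell_size, cell_size)
    ys = np.arange(pts[:, 1].min(), pts[:, 1].max() + cell_size, cell_size)
    gx, gy = np.meshgrid(xs, ys)
    # Distances from every grid node to every sample point.
    d = np.hypot(gx[..., None] - pts[:, 0], gy[..., None] - pts[:, 1])
    w = 1.0 / (d ** power + eps)
    gz = (w * pts[:, 2]).sum(axis=-1) / w.sum(axis=-1)
    return gx, gy, gz

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Sparse synthetic survey: 200 random samples of a gently sloping seabed.
    xy = rng.uniform(0, 50, size=(200, 2))
    z = -20.0 - 0.1 * xy[:, 0] + 0.2 * rng.standard_normal(200)
    gx, gy, gz = densify_seabed(np.column_stack([xy, z]), cell_size=2.0)
    print(gz.shape, gz.mean())
```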

    The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward

    Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphics access challenge, reviews prior approaches to addressing this long-standing information access barrier, and describes some promising new solutions. We specifically focus on touchscreen-based smart devices, a relatively new class of information access technologies, which our group believes represent an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results in this line of research. We close with recommendations on ideological shifts in mindset about how we approach solving this vexing access problem, which will complement both technological and perceptual advancements that are rapidly being uncovered through a growing research community in this domain.

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. Adapting virtual reality (VR) technology involves a wide range of development and design decisions. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards and without depending on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. This thesis therefore investigates which VR interaction forms are reasonable (i.e., offer adequate dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technique, empirical evidence is lacking. Additionally, this thesis provides a survey of the different interactions. After considering human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and the user-centered development and evaluation are used as classifiers. The following chapters are motivated by exploratory approaches at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion). They report empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities as well as challenges for further improvement are derived. Building on this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
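    A minimal sketch of the kind of 'small scale' feedback loop discussed above: a tracked fingertip probes virtual objects within hands' reach, and penetration depth is mapped to a vibration intensity on a data glove. The sphere-based collision test and the send_vibration callback are assumptions made for illustration, not the thesis's actual implementation.

```python
import math
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

# Illustrative sketch of haptic exploration within hands' reach: the
# collision model (bounding spheres) and the glove driver callback are
# assumptions for illustration only.

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualObject:
    name: str
    center: Vec3
    radius: float  # metres

def haptic_step(fingertip: Vec3,
                objects: Iterable[VirtualObject],
                send_vibration: Callable[[float], None]) -> None:
    """One update of the feedback loop: the deepest penetration wins."""
    intensity = 0.0
    for obj in objects:
        penetration = obj.radius - math.dist(fingertip, obj.center)
        if penetration > 0.0:
            # Map penetration depth (0..radius) to vibration intensity (0..1).
            intensity = max(intensity, min(1.0, penetration / obj.radius))
    send_vibration(intensity)

if __name__ == "__main__":
    room = [VirtualObject("door handle", (0.0, 1.0, 0.5), 0.05),
            VirtualObject("table edge", (0.4, 0.8, 0.3), 0.10)]
    # Stand-in for the glove driver: here we just print the motor command.
    haptic_step((0.02, 1.01, 0.49), room, lambda i: print(f"vibrate {i:.2f}"))
```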