
    The SmartVision Navigation Prototype for Blind Users

    The goal of the Portuguese project "SmartVision: active vision for the blind" is to develop a small, portable and cheap yet intelligent and reliable system for assisting the blind and visually impaired while navigating autonomously, both indoors and outdoors. In this article we present an overview of the prototype, its design issues, and its different modules, which integrate GPS and Wi-Fi localisation with a GIS, passive RFID tags, and computer vision. The prototype addresses global navigation for reaching a chosen destination, by following known landmarks stored in the GIS in combination with path optimisation, and local navigation with path and obstacle detection just beyond the reach of the white cane. The system does not replace the white cane but complements it, in order to alert the user to looming hazards. In addition, computer vision is used to identify objects on shelves, for example in a pantry or refrigerator. The user-friendly interface consists of a four-button hand-held box, a vibration actuator in the handle of the white cane, and speech synthesis. In the near future, passive RFID tags will be complemented by active tags for marking navigation landmarks, and speech recognition may complement or replace the vibration actuator.
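
    As a rough illustration of the kind of module integration such a prototype implies, the sketch below arbitrates between an obstacle alert and a route instruction and routes them to the vibration or speech channel. The data classes, thresholds and messages are purely hypothetical assumptions and are not taken from the SmartVision implementation.

```python
# Hypothetical sketch: arbitrating feedback between a local (obstacle detection)
# module and a global (GIS route following) module. All names, thresholds and
# messages are illustrative assumptions, not the SmartVision code.

from dataclasses import dataclass
from typing import Optional


@dataclass
class LocalObservation:
    obstacle_distance_m: Optional[float]  # nearest obstacle beyond the cane, if any
    path_offset_m: float                  # lateral deviation from the detected path centre


@dataclass
class GlobalObservation:
    next_landmark: str                    # label of the next GIS landmark on the route
    distance_to_landmark_m: float


def choose_feedback(local: LocalObservation, route: GlobalObservation) -> tuple[str, str]:
    """Return (channel, message): 'vibration' for urgent hazards, 'speech' otherwise."""
    # Urgent: an obstacle looming just beyond the reach of the white cane.
    if local.obstacle_distance_m is not None and local.obstacle_distance_m < 2.0:
        return "vibration", f"obstacle at {local.obstacle_distance_m:.1f} m"
    # Corrective: drifting off the detected path.
    if abs(local.path_offset_m) > 0.5:
        side = "left" if local.path_offset_m > 0 else "right"
        return "speech", f"veer {side} to return to the path"
    # Informative: progress along the GIS route.
    return "speech", f"{route.next_landmark} in {route.distance_to_landmark_m:.0f} m"


if __name__ == "__main__":
    channel, message = choose_feedback(
        LocalObservation(obstacle_distance_m=1.4, path_offset_m=0.1),
        GlobalObservation(next_landmark="main entrance", distance_to_landmark_m=35.0),
    )
    print(channel, "->", message)
```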

    The SmartVision navigation prototype for the blind

    The goal of the project "SmartVision: active vision for the blind" is to develop a small and portable but intelligent and reliable system for assisting the blind and visually impaired while navigating autonomously, both outdoors and indoors. In this paper we present an overview of the prototype, its design issues, and its different modules, which integrate a GIS with GPS, Wi-Fi, RFID tags and computer vision. The prototype addresses global navigation by following known landmarks, local navigation with path tracking and obstacle avoidance, and object recognition. The system does not replace the white cane, but extends it beyond its reach. The user-friendly interface consists of a 4-button hand-held box, a vibration actuator in the handle of the cane, and speech synthesis. A future version may also employ active RFID tags for marking navigation landmarks, and speech recognition may complement speech synthesis.

    Indoor localization and navigation for blind persons using visual landmarks and a GIS

    In an unfamiliar environment we spot and explore all available information which might guide us to a desired location. This largely unconscious processing is done by our trained sensory and cognitive systems. These recognize and memorize sets of landmarks which allow us to create a mental map of the environment, and this map enables us to navigate by exploiting very few but the most important landmarks stored in our memory. We present a system which integrates a geographic information system of a building with visual landmarks for localizing the user in the building and for tracing and validating a route for the user's navigation. Hence, the developed system complements the white cane for improving the user's autonomy during indoor navigation. Although designed for visually impaired persons, the system can be used by any person for wayfinding in a complex building.
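
    The following sketch illustrates one plausible reading of landmark-based localisation and route validation against a GIS layer. The landmark coordinates, function names and tolerance are assumptions made for illustration only, not the system described in the paper.

```python
# Minimal sketch of landmark-based indoor localisation against a GIS layer,
# assuming landmarks are stored with known floor-plan coordinates.
# The data and function names are illustrative, not the paper's.

from math import hypot

# Hypothetical GIS layer: landmark label -> (x, y) position in metres on the floor plan.
GIS_LANDMARKS = {
    "elevator_A": (12.0, 3.5),
    "room_101": (20.0, 3.5),
    "stairs_north": (27.5, 8.0),
}


def localise(recognised_label: str) -> tuple[float, float]:
    """Estimate the user's position as the position of the recognised landmark."""
    return GIS_LANDMARKS[recognised_label]


def validate_route(recognised_label: str, planned_route: list[str], tolerance_m: float = 5.0) -> bool:
    """Check that the recognised landmark lies close to some landmark on the planned route."""
    x, y = localise(recognised_label)
    return any(
        hypot(x - GIS_LANDMARKS[stop][0], y - GIS_LANDMARKS[stop][1]) <= tolerance_m
        for stop in planned_route
    )


if __name__ == "__main__":
    route = ["elevator_A", "room_101"]
    print(localise("room_101"))                   # (20.0, 3.5)
    print(validate_route("stairs_north", route))  # False: landmark is off the planned route
```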

    Investigating context-aware clues to assist navigation for visually impaired people

    It is estimated that 7.4 million people in Europe are visually impaired [1]. Limitations of traditional mobility aids (i.e. white canes and guide dogs), coupled with a proliferation of context-aware technologies (e.g. Electronic Travel Aids, Global Positioning Systems and Geographical Information Systems), have stimulated research and development into navigational systems for the visually impaired. However, current research appears very technology focused, which has led to an insufficient appreciation of Human-Computer Interaction, in particular task/requirements analysis and notions of contextual interactions. The study reported here involved a small-scale investigation into how visually impaired people interact with their environmental context during micro-navigation (through the immediate environment) and/or macro-navigation (through the distant environment) on foot. The purpose was to demonstrate the heterogeneous nature of visually impaired people in interaction with their environmental context. Results from a previous study involving sighted participants were used for comparison. Results revealed that when describing a route, visually impaired people vary in their use of different types of navigation clues - both as a group, when compared with sighted participants, and as individuals. Usability implications and areas for further work are identified and discussed.

    Review of Machine Vision-Based Electronic Travel Aids

    Visually impaired people have navigation and mobility problems on the road. Up to now, many approaches have been pursued to help them navigate using different sensing techniques. This paper reviews several machine vision-based Electronic Travel Aids (ETAs) and compares them with those using other sensing techniques. The functionalities of machine vision-based ETAs are classified from low-level image processing, such as detecting road regions and obstacles, to high-level functionalities, such as recognizing digital tags and text. In addition, the characteristics of ETA systems for blind people are particularly discussed.

    A new direction for applied geography


    Navigation framework using visual landmarks and a GIS

    In an unfamiliar environment we spot and explore all available information which might guide us to a desired location. This largely unconscious processing is done by our trained sensory and cognitive systems. These recognise and memorise sets of landmarks which allow us to create a mental map of the environment, and this map enables us to navigate by exploiting very few but the most important landmarks stored in our memory. In this paper we present a route planning, localisation and navigation system which works in real time. It integrates a geographic information system of a building with visual landmarks for localising the user and for validating the navigation route. Although designed for visually impaired persons, the system can also be employed to assist or transport persons with reduced mobility in wayfinding in a complex building. © 2013 The Authors. Published by Elsevier B.V.
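
    As a concrete illustration of route planning over a building GIS, the sketch below models corridors as a weighted graph and runs a standard Dijkstra search. The graph, node names and distances are invented for the example and are not the paper's implementation.

```python
# Illustrative sketch: route planning over a building GIS modelled as a weighted
# graph of corridor nodes, using a plain Dijkstra search.

import heapq

# Hypothetical corridor graph: node -> {neighbour: edge length in metres}.
GRAPH = {
    "entrance": {"hall": 10.0},
    "hall": {"entrance": 10.0, "elevator": 15.0, "room_101": 30.0},
    "elevator": {"hall": 15.0, "room_101": 12.0},
    "room_101": {"hall": 30.0, "elevator": 12.0},
}


def plan_route(start: str, goal: str) -> list[str]:
    """Return the shortest node sequence from start to goal (Dijkstra)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, length in GRAPH[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + length, neighbour, path + [neighbour]))
    return []


if __name__ == "__main__":
    print(plan_route("entrance", "room_101"))  # ['entrance', 'hall', 'elevator', 'room_101']
```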

    The Application of Geographic Information Systems to Support Wayfinding for People with Visual Impairments or Blindness

    People with visual impairments or legal blindness rely on diverse, comprehensive information for their individual mobility. Increasing the personal mobility of people with disabilities, and thereby achieving a self-determined life, are major steps toward a more inclusive society. Research and applications on mobility issues of people with visual impairments or blindness mainly focus on technical applications or assistive orientation and navigation devices, and less work covers individual needs, e.g., regarding the information required for wayfinding. Moreover, active participation of people with disabilities in research and development is still limited. ways2see offers a new online application to support individual mobility in the context of pre-trip planning for people with visual impairments or blindness, based on a Geographic Information System (GIS). Obstacles, barriers, landmarks, orientation hints, and directions for wayfinding are generated from user profiles. The underlying network for GIS analysis is designed as a pedestrian network. This individually coded network approach integrates sidewalks and different types of crossings and implements various orientation and navigation attributes. ways2see integrates three research realms: firstly, implementing a participative and transdisciplinary research design; secondly, integrating personalized information aligned with individual user needs; and thirdly, presenting the results of GIS analysis through an accessibly designed user interface.
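
    To make the idea of profile-dependent routing concrete, the sketch below shows how edge costs in a pedestrian network could be inflated according to a user profile before route computation. The attribute names and penalty factors are assumptions made for illustration, not the ways2see model.

```python
# Hedged sketch: adapting pedestrian-network edge costs to a user profile before
# routing. Attribute names and penalty values are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Edge:
    length_m: float
    crossing: Optional[str]   # e.g. "signalised", "unsignalised", or None for a sidewalk segment
    tactile_paving: bool


@dataclass
class UserProfile:
    avoid_unsignalised: bool
    prefer_tactile_paving: bool


def edge_cost(edge: Edge, profile: UserProfile) -> float:
    """Length-based cost, inflated according to the user profile."""
    cost = edge.length_m
    if profile.avoid_unsignalised and edge.crossing == "unsignalised":
        cost *= 5.0   # strong penalty: effectively route around such crossings
    if profile.prefer_tactile_paving and not edge.tactile_paving:
        cost *= 1.5   # mild penalty for segments without tactile guidance
    return cost


if __name__ == "__main__":
    profile = UserProfile(avoid_unsignalised=True, prefer_tactile_paving=True)
    sidewalk = Edge(length_m=80.0, crossing=None, tactile_paving=True)
    crossing = Edge(length_m=15.0, crossing="unsignalised", tactile_paving=False)
    print(edge_cost(sidewalk, profile), edge_cost(crossing, profile))  # 80.0 112.5
```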

    IO Vision – an integrated system to support the visually impaired

    Security questions are one of the techniques used to recover passwords. The main limitation of security questions is that users find strong answers difficult to remember. This leads users to trade off security for the convenience of improved memorability. Previous research found that increased fun and enjoyment can lead to enhanced memorability, which provides a better learning experience. Hence, we empirically investigate whether a serious game has the potential of improving the memorability of strong answers to security questions. For our serious game, we adopted the popular "4 Pics 1 word" mobile game because of its use of pictures and cues, which psychology research found to be important in aiding memorability. Our findings indicate that the proposed serious game could potentially improve the memorability of answers to security questions. This potential improvement in memorability could eventually help reduce the trade-off between usability and security in fall-back authentication.