
    A Wearable RFID-Based Navigation System for the Visually Impaired

    Recent studies have focused on developing advanced assistive devices to help blind or visually impaired people. Navigation is challenging for this community, yet a simple and reliable navigation system remains an unmet need. This study targets the navigation problem and proposes a wearable assistive system. We developed a smart glove and shoe set based on radio-frequency identification (RFID) technology to assist visually impaired people with navigation and orientation in indoor environments. The system enables the user to find directions through audio feedback. To evaluate the device's performance, we designed a simple experimental setup. The proposed system has a simple structure and can be personalized according to the user's requirements. The results indicate that the platform is reliable, power efficient, and accurate enough for indoor navigation.
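
    The abstract describes tagged locations read by a wearable RFID reader and translated into audio directions. The sketch below illustrates that loop under stated assumptions: the serial port, baud rate, and tag-to-direction map are invented for illustration and are not from the paper.

        # Minimal sketch of the tag-to-audio feedback loop, assuming a
        # UART-connected RFID reader and an offline text-to-speech engine.
        import serial    # pyserial
        import pyttsx3   # offline text-to-speech

        # Hypothetical mapping from tag IDs to spoken directions.
        TAG_DIRECTIONS = {
            "04A1B2C3": "Turn left toward the elevator.",
            "04D4E5F6": "Continue straight for ten meters.",
        }

        def run_feedback_loop(port="/dev/ttyUSB0", baud=9600):
            """Read tag IDs from the reader and announce the matching direction."""
            tts = pyttsx3.init()
            with serial.Serial(port, baud, timeout=1) as reader:
                while True:
                    tag = reader.readline().strip().decode("ascii", errors="ignore")
                    if tag in TAG_DIRECTIONS:
                        tts.say(TAG_DIRECTIONS[tag])
                        tts.runAndWait()

        if __name__ == "__main__":
            run_feedback_loop()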

    Review of Machine Vision-Based Electronic Travel Aids

    Visually impaired people face navigation and mobility problems on the road. Up to now, many approaches have been developed to help them navigate using different sensing techniques. This paper reviews several machine vision-based Electronic Travel Aids (ETAs) and compares them with those using other sensing techniques. The functionalities of machine vision-based ETAs are classified from low-level image processing, such as detecting road regions and obstacles, to high-level functionalities, such as recognizing digital tags and texts. In addition, the characteristics of ETA systems for blind people are discussed in detail.
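
    As a concrete illustration of the "low-level" functionality class mentioned in the review, the following generic OpenCV sketch segments a plausible road region by color thresholding; it is not an algorithm from any surveyed ETA, and the file name and threshold values are assumptions.

        # Illustrative road-region segmentation by HSV thresholding (OpenCV).
        import cv2
        import numpy as np

        def detect_road_region(frame):
            """Return a binary mask of gray, road-like pixels in a BGR frame."""
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # Low saturation and mid brightness roughly characterize asphalt.
            mask = cv2.inRange(hsv, (0, 0, 60), (180, 50, 200))
            # Close small holes so specks do not fragment the region.
            kernel = np.ones((9, 9), np.uint8)
            return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        frame = cv2.imread("street.jpg")  # hypothetical input image
        if frame is not None:
            cv2.imwrite("road_mask.png", detect_road_region(frame))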

    Image recognition-based architecture to enhance inclusive mobility of visually impaired people in smart and urban environments

    The demographic growth witnessed in recent years, which is expected to continue, raises challenges worldwide regarding urban mobility, both in transport and in pedestrian movement. The sustainable development of cities is also intrinsically linked to urban planning and mobility strategies. Navigation and orientation in cities are tasks we perform frequently today, especially in unfamiliar cities and places. Current navigation solutions identify precision as a major challenge, especially between buildings in city centers. In this paper, we focus on visually impaired people and how they can obtain information about where they are when, for some reason, they have lost their orientation. The challenges are, of course, different and considerably greater for this population segment. GPS, widely used for navigation in outdoor environments, offers neither the precision nor the most useful type of content, because what a visually impaired person needs when lost is not a street name or coordinates but a reference point. Therefore, this paper proposes a conceptual architecture for outdoor positioning of visually impaired people using the Landmark Positioning approach.
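
    A minimal sketch of the landmark-positioning idea: once a landmark is recognized, its known coordinates and a human-oriented reference hint are looked up and announced. The landmark database below is invented for illustration; the paper proposes the architecture only conceptually.

        # Hypothetical landmark database mapping names to positions and hints.
        LANDMARKS = {
            "Arco da Porta Nova": {"lat": 41.5509, "lon": -8.4301,
                                   "hint": "You are facing the old city gate."},
            "Braga Cathedral": {"lat": 41.5503, "lon": -8.4265,
                                "hint": "You are near the cathedral's main entrance."},
        }

        def position_from_landmark(name):
            """Translate a recognized landmark name into a spoken reference point."""
            entry = LANDMARKS.get(name)
            if entry is None:
                return "Landmark not in database; please capture another image."
            return f"{entry['hint']} (approx. {entry['lat']:.4f}, {entry['lon']:.4f})"

        print(position_from_landmark("Braga Cathedral"))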

    Inclusive mobility solution for visually impaired people using Google Cloud Vision

    Mobility in cities is of particular and growing importance nowadays, due to demographic growth and the existence of people with reduced mobility, such as visually impaired people. Among the many situations where mobility is a challenge, regaining a sense of position after becoming disoriented can be extremely useful and contributes to greater autonomy for this group. This paper proposes a visual positioning system using the Google Cloud Vision API. The architecture includes a mobile application that captures an image via the mobile phone and sends it to a backend server, which uses Google Cloud Vision to recognize the image, whose content may consist of text, logos, or landmarks. In a first phase, the solution was evaluated component by component and, in a second phase, on a route chosen in the city of Braga, Portugal. Logo recognition achieved an accuracy of 98% and proved sensitive to image resolution. Frontal text recognition achieved 100% accuracy, while lateral recognition and recognition at a distance of 3 meters yielded lower values, with worse results for images containing more text or of reduced dimensions. Landmark recognition always returned the correct result, although the average accuracy score was 82%. Processing time was around 3 seconds in tests over a Wi-Fi network and about 2 seconds in field tests over a mobile network. The results demonstrate the suitability of this solution for adaptation to a real scenario.
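
    The backend step described above can be sketched with the official google-cloud-vision client. The three detectors mirror the paper's description (text, logos, landmarks), but the exact server code and the file name are assumptions; credentials are presumed to be configured in the environment.

        # Sketch of the recognition backend using Google Cloud Vision.
        from google.cloud import vision

        def recognize(image_bytes):
            """Run the three detectors the positioning system relies on."""
            client = vision.ImageAnnotatorClient()
            image = vision.Image(content=image_bytes)
            return {
                "text": [t.description for t in
                         client.text_detection(image=image).text_annotations],
                "logos": [(l.description, l.score) for l in
                          client.logo_detection(image=image).logo_annotations],
                "landmarks": [(lm.description, lm.score) for lm in
                              client.landmark_detection(image=image).landmark_annotations],
            }

        with open("capture.jpg", "rb") as f:  # image received from the mobile app
            print(recognize(f.read()))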

    A cultural heritage experience for visually impaired people

    In recent years, we have witnessed impressive advances in computer vision algorithms, based on image processing and artificial intelligence. Among the many applications of computer vision, in this paper we investigate its potential impact for enhancing the cultural and physical accessibility of cultural heritage sites. By using a common smartphone as a mediation instrument with the environment, we demonstrate how convolutional networks can be trained to recognize monuments in the surroundings of the user, thus enabling access to content associated with the monument itself, or new forms of engagement for visually impaired people. Moreover, computer vision can also support the autonomous mobility of people with visual disabilities by identifying pre-defined paths in cultural heritage sites, reducing the distance between the digital and real worlds.
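
    The monument-recognition step could plausibly be implemented by fine-tuning a pretrained convolutional network, as sketched below. The dataset path, class count, and backbone choice are placeholders; the paper does not specify this exact pipeline. Requires torchvision 0.13 or later.

        # Fine-tuning a pretrained CNN for monument classification (sketch).
        import torch
        import torch.nn as nn
        from torchvision import datasets, models, transforms

        NUM_MONUMENTS = 10  # hypothetical number of monument classes

        transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        train_set = datasets.ImageFolder("monuments/train", transform=transform)
        loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

        # Start from an ImageNet-pretrained backbone; replace the classifier head.
        model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        model.classifier[1] = nn.Linear(model.last_channel, NUM_MONUMENTS)

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for images, labels in loader:  # one epoch shown for brevity
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()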

    Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks

    To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, thanks to advances in various technologies, wayfinding and navigation systems and services are commonplace and accessible on desktop, laptop, and mobile devices. However, despite their popularity and widespread use, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs). We argue that these shortcomings are primarily due to the ubiquity of the compute-centric approach adopted in these systems and services, which do not benefit from experience-centric methods. We propose that a hybrid approach combining experience-centric and compute-centric methods will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.
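
    One hedged way to picture the proposed hybrid: a computed shortest path whose edge costs are adjusted by experience-centric ratings contributed by previous travelers. The graph, ratings, and weighting formula below are all assumptions made for illustration, not the paper's method.

        # Blending computed distance with crowd-sourced accessibility ratings.
        import networkx as nx

        G = nx.Graph()
        # distance in meters; rating in [0, 1], 1 = fully accessible, as it
        # might be reported by previous travelers with disabilities.
        G.add_edge("entrance", "atrium", distance=40, rating=1.0)
        G.add_edge("atrium", "stairs", distance=15, rating=0.2)  # poorly rated
        G.add_edge("atrium", "ramp", distance=30, rating=0.9)
        G.add_edge("stairs", "office", distance=10, rating=0.2)
        G.add_edge("ramp", "office", distance=20, rating=0.9)

        def cost(u, v, data):
            """Penalize edges that travelers reported as hard to negotiate."""
            return data["distance"] / max(data["rating"], 0.05)

        # Prefers the ramp despite the longer distance.
        print(nx.shortest_path(G, "entrance", "office", weight=cost))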

    A Navigation and Augmented Reality System for Visually Impaired People

    In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.
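
    The guidance step can be pictured as comparing the device pose (as an AR framework such as ARKit reports it) against the next waypoint of a previously recorded virtual path. The waypoints, coordinate convention, and thresholds below are invented; this is an illustration, not ARIANNA+'s actual implementation.

        # Heading-error feedback against a recorded virtual path (sketch).
        import math

        recorded_path = [(0.0, 0.0), (2.0, 0.0), (2.0, 3.0)]  # x, z in meters

        def guidance_cue(pos, heading_rad, waypoint, tolerance_rad=0.26):
            """Return 'left', 'right', or 'straight' toward the next waypoint."""
            bearing = math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0])
            # Wrap the heading error into (-pi, pi].
            error = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
            if error > tolerance_rad:
                return "left"   # counterclockwise correction
            if error < -tolerance_rad:
                return "right"  # clockwise correction
            return "straight"

        print(guidance_cue(pos=(0.5, 0.1), heading_rad=0.0,
                           waypoint=recorded_path[1]))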

    Indoor navigation for the visually impaired: enhancements through utilisation of the Internet of Things and deep learning

    Wayfinding and navigation are essential aspects of independent living that heavily rely on the sense of vision. Walking in a complex building requires knowing one's exact location to find a suitable path to the desired destination, avoiding obstacles and monitoring orientation and movement along the route. People who cannot access sight-dependent information, such as that provided by signage, maps, and environmental cues, can encounter challenges in achieving these tasks independently. They can rely on assistance from others or maintain their independence by using assistive technologies and the resources provided by smart environments. Over the last few years, several solutions have adapted technological innovations to address indoor navigation. However, a complete solution that meets the navigation requirements of visually impaired (VI) people is still lacking, and no single technology can resolve all the navigation difficulties they face. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence of VI people and enhance their journeys in indoor settings with the proposed framework, using a smartphone. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components include Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Ortho-PATH, to generate a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user. In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work carries out a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav is the first pathfinding approach to generate a path that reflects the needs of VI people: it designs a path alongside walls while avoiding obstacles, and this research benchmarks it against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
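
    The abstract describes Ortho-PATH only at a high level: a collision-free, grid-based search that keeps the route alongside walls. A hedged sketch of that idea follows, implemented as a standard Dijkstra search with a wall-proximity discount; this illustrates the stated behavior and is not the thesis's actual algorithm.

        # Grid pathfinding that prefers cells adjacent to walls (sketch).
        import heapq

        def find_vi_friendly_path(grid, start, goal, wall_bonus=0.5):
            """Dijkstra over a 0/1 occupancy grid (1 = obstacle); cells next to
            an obstacle or the boundary cost less, steering routes along walls."""
            rows, cols = len(grid), len(grid[0])
            moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

            def near_wall(r, c):
                return any(not (0 <= r + dr < rows and 0 <= c + dc < cols)
                           or grid[r + dr][c + dc]
                           for dr, dc in moves)

            frontier = [(0.0, start, [start])]
            best = {start: 0.0}
            while frontier:
                cost, cell, path = heapq.heappop(frontier)
                if cell == goal:
                    return path
                if cost > best.get(cell, float("inf")):
                    continue  # stale queue entry
                r, c = cell
                for dr, dc in moves:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                        new_cost = cost + 1.0 - (wall_bonus if near_wall(nr, nc) else 0.0)
                        if new_cost < best.get((nr, nc), float("inf")):
                            best[(nr, nc)] = new_cost
                            heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
            return None  # no reachable route

        grid = [[0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0]]
        print(find_vi_friendly_path(grid, (0, 0), (2, 3)))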