
    A Navigation and Augmented Reality System for Visually Impaired People

    In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to contents associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path; it then provides automatic guidance along the route through haptic, speech, and sound feedback.
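    The "follow a virtual path with haptic/speech guidance" idea in this abstract can be illustrated with a minimal sketch: compare the user's current heading (as estimated by the tracking library) with the bearing to the next waypoint on the recorded path, and pick a cue. This is not the ARIANNA+ implementation; all names and the tolerance value are illustrative assumptions.

    ```python
    import math

    def bearing(frm, to):
        """Bearing (radians) from point `frm` to point `to` in ground-plane coords."""
        return math.atan2(to[1] - frm[1], to[0] - frm[0])

    def guidance_cue(position, heading_rad, next_waypoint,
                     tolerance_rad=math.radians(15)):
        """Return 'straight', 'left', or 'right' to steer toward the next waypoint.

        Hypothetical sketch: a real system would map these cues to haptic,
        speech, or sound feedback.
        """
        err = bearing(position, next_waypoint) - heading_rad
        # Normalize the heading error to (-pi, pi].
        err = (err + math.pi) % (2 * math.pi) - math.pi
        if abs(err) <= tolerance_rad:
            return "straight"
        return "left" if err > 0 else "right"
    ```

    For example, a user at the origin facing along the x-axis with the next waypoint at (0, 10) would receive a "left" cue, since the waypoint lies 90° counter-clockwise of the current heading.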

    Implementation of a Blind navigation method in outdoors/indoors areas

    According to WHO statistics, the number of visually impaired people is increasing annually. One of the most critical necessities for visually impaired people is the ability to navigate safely. This paper proposes a navigation system based on visual SLAM and the YOLO algorithm using monocular cameras. The proposed system consists of three steps: obstacle distance estimation, path deviation detection, and next-step prediction. Using the ORB-SLAM algorithm, the proposed method creates a map from a predefined route and guides the users to stay on the route while notifying them if they deviate from it. Additionally, the system utilizes the YOLO algorithm to detect obstacles along the route and alert the user. The experimental results, obtained using a laptop camera, show that the proposed system can run at 30 frames per second while guiding the user along predefined routes of 11 meters, both indoors and outdoors. The accuracy of the positioning system is 8 cm, and the system notifies the users if they deviate from the predefined route by more than 60 cm.
    Comment: 14 pages, 6 figures, and 6 tables
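    The path-deviation step described above can be sketched as a geometric check: given the pose estimated by the SLAM system, measure the distance to the nearest segment of the pre-recorded route and alert beyond the paper's 60 cm threshold. This is an illustrative sketch, not the authors' code; function names are assumptions, and only the 60 cm threshold comes from the abstract.

    ```python
    import math

    DEVIATION_THRESHOLD_M = 0.60  # alert beyond 60 cm, as reported in the paper

    def point_segment_distance(p, a, b):
        """Distance from point p to segment a-b (2D ground-plane coordinates)."""
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0.0:  # degenerate segment: a == b
            return math.hypot(px - ax, py - ay)
        # Project p onto the segment, clamping to its endpoints.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        cx, cy = ax + t * dx, ay + t * dy
        return math.hypot(px - cx, py - cy)

    def deviation_from_route(pose_xy, route):
        """Smallest distance from the current pose to any segment of the route."""
        return min(point_segment_distance(pose_xy, a, b)
                   for a, b in zip(route, route[1:]))

    def check_pose(pose_xy, route):
        """Return (off_route, distance): off_route is True past the threshold."""
        d = deviation_from_route(pose_xy, route)
        return d > DEVIATION_THRESHOLD_M, d
    ```

    For instance, with a route `[(0, 0), (5, 0), (5, 5)]`, a pose at `(2.5, 0.3)` is 30 cm off and stays within tolerance, while `(2.5, 1.0)` is 1 m off and would trigger a notification.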

    SLAM for Visually Impaired People: A Survey

    In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.
    Comment: 26 pages, 5 tables, 3 figures

    Iterative Design and Prototyping of Computer Vision Mediated Remote Sighted Assistance

    Remote sighted assistance (RSA) is an emerging navigational aid for people with visual impairments (PVI). Using scenario-based design to illustrate our ideas, we developed a prototype showcasing potential applications for computer vision to support RSA interactions. We reviewed the prototype, which demonstrates real-world navigation scenarios, with an RSA expert, and then iteratively refined the prototype based on feedback. We reviewed the refined prototype with 12 RSA professionals to evaluate the desirability and feasibility of the prototyped computer vision concepts. The RSA expert and professionals were engaged by, and reacted insightfully and constructively to, the proposed design ideas. We discuss what we learned about key resources, goals, and challenges of the RSA prosthetic practice through our iterative prototype review, as well as implications for the design of RSA systems and the integration of computer vision technologies into RSA.

    A cultural heritage experience for visually impaired people

    In recent years, we have witnessed impressive advances in computer vision algorithms, based on image processing and artificial intelligence. Among the many applications of computer vision, in this paper we investigate its potential impact for enhancing the cultural and physical accessibility of cultural heritage sites. By using a common smartphone as a mediation instrument with the environment, we demonstrate how convolutional networks can be trained to recognize monuments in the surroundings of the users, thus enabling access to contents associated with the monument itself, or new forms of engagement for visually impaired people. Moreover, computer vision can also support the autonomous mobility of people with visual disabilities, by identifying pre-defined paths in cultural heritage sites and reducing the distance between the digital and real worlds.

    ENHANCING USERS’ EXPERIENCE WITH SMART MOBILE TECHNOLOGY

    The aim of this thesis is to investigate mobile guides for use with smartphones. Mobile guides have been successfully used to provide information, personalisation and navigation for the user. The researcher also wanted to ascertain how and in what ways mobile guides can enhance users' experience. This research involved designing and developing web-based applications to run on smartphones. Four studies were conducted, two of which involved testing of a particular application. The applications tested were a museum mobile guide and a university mobile guide mapping application. Initial testing examined the prototype work for the ‘Chronology of His Majesty Sultan Haji Hassanal Bolkiah’ application. The results were used to assess the potential of using similar mobile guides in Brunei Darussalam’s museums. The second study involved testing of the ‘Kent LiveMap’ application for use at the University of Kent. Students at the university tested this mapping application, which uses crowdsourcing of information to provide live data. The results were promising and indicate that users' experience was enhanced when using the application. Overall, results from testing and using the two applications developed as part of this thesis show that mobile guides have the potential to be implemented in Brunei Darussalam’s museums and on campus at the University of Kent. However, modifications to both applications are required to fulfil their potential and take them beyond the prototype stage in order to be fully functioning and commercially viable.

    Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks

    To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, due to advances in various technologies, wayfinding and navigation systems and services are commonplace and are accessible on desktop, laptop, and mobile devices. However, despite their popularity and widespread use, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs). We argue that these shortcomings arise primarily because these systems and services ubiquitously adopt a compute-centric approach and do not benefit from an experience-centric one. We propose that a hybrid approach combining experience-centric and compute-centric methods will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.