38 research outputs found

    A HoloLens Application to Aid People who are Visually Impaired in Navigation Tasks

    Day-to-day activities such as navigation and reading can be particularly challenging for people with visual impairments. Reading text on signs may be especially difficult for people who are visually impaired because signs have variable color, contrast, and size. Indoors, signage may include office, classroom, restroom, and fire evacuation signs. Outdoors, it may include street signs, bus numbers, and store signs. Depending on the level of visual impairment, just identifying where signs exist can be a challenge. Using Microsoft's HoloLens, an augmented reality device, I designed and implemented the TextSpotting application, which helps those with low vision identify and read indoor signs so that they can navigate text-heavy environments. The application can provide both visual and auditory information. In addition to developing the application, I conducted a user study to test its effectiveness. Participants were asked to find a room in an unfamiliar hallway. Those who used the TextSpotting application completed the task less quickly yet reported higher levels of ease, comfort, and confidence, indicating both the application's limitations and its potential in providing an effective means to navigate unknown environments via signage.
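    The abstract does not give implementation details; purely as an illustration of the sign-reading loop it describes (locate text in a camera frame, then present it visually and audibly), the Python sketch below uses OpenCV, pytesseract, and pyttsx3 as stand-in components for a desktop analogue. These library choices are assumptions, not the HoloLens pipeline used by TextSpotting.

```python
# Minimal sketch of a sign-reading loop: capture one frame, extract any
# text, and speak it aloud. The library choices (OpenCV, pytesseract,
# pyttsx3) are illustrative stand-ins, not the components of TextSpotting.
import cv2          # camera capture and preprocessing
import pytesseract  # wrapper around the Tesseract OCR engine
import pyttsx3      # offline text-to-speech

def read_signs_once(camera_index: int = 0) -> str:
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return ""
    # Grayscale plus Otsu thresholding usually helps OCR on high-contrast signs.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)       # auditory feedback
        engine.runAndWait()
    return text                # the text could also be shown as a visual overlay

if __name__ == "__main__":
    print(read_signs_once())
```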

    A Navigation and Augmented Reality System for Visually Impaired People

    In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to interact more richly with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate in indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.
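    As a rough illustration of the virtual-path guidance described above (ARIANNA+ itself builds and tracks the path with ARKit on a smartphone), the Python sketch below computes a user's lateral deviation from a previously recorded list of waypoints and maps it to a guidance cue. The waypoint representation, thresholds, and cue wording are assumptions, not details from the paper.

```python
# Sketch of virtual-path guidance: given the device position (e.g. from a
# visual-inertial tracker, reduced here to 2D) and a previously recorded
# list of waypoints, compute the lateral deviation from the path and turn
# it into a cue. Thresholds and cue wording are illustrative.
import numpy as np

def path_deviation(position, waypoints):
    """Return the signed lateral offset from the nearest path segment
    (positive = left of the direction of travel) and the segment index."""
    p = np.asarray(position, dtype=float)
    wp = np.asarray(waypoints, dtype=float)
    best_dist, best_side, best_seg = np.inf, 1.0, 0
    for i in range(len(wp) - 1):
        a, b = wp[i], wp[i + 1]
        ab, rel = b - a, p - a
        t = np.clip(np.dot(rel, ab) / np.dot(ab, ab), 0.0, 1.0)
        dist = np.linalg.norm(p - (a + t * ab))
        side = np.sign(ab[0] * rel[1] - ab[1] * rel[0])  # 2D cross product
        if dist < best_dist:
            best_dist, best_side, best_seg = dist, side, i
    return best_dist * (best_side if best_side != 0 else 1.0), best_seg

def guidance_cue(offset, corridor=0.5):
    """Map the offset to a cue; a real system would drive haptic, speech,
    or sound feedback rather than return a string."""
    if abs(offset) <= corridor:
        return "on path"
    return "move right" if offset > 0 else "move left"

waypoints = [(0, 0), (5, 0), (5, 5)]
offset, _ = path_deviation((2.0, 0.8), waypoints)
print(guidance_cue(offset))   # the user is 0.8 m left of the path -> "move right"
```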

    Real-time optimisation-based path planning for visually impaired people in dynamic environments

    Most existing outdoor assistive mobility solutions notify Visually Impaired People (VIP) about potential collisions but fail to provide Optimal Local Collision-Free Path Planning (OLCFPP) that would enable the VIP to get out of the way effectively. In this paper, we propose MinD, the first VIP OLCFPP scheme, which notifies the VIP of the shortest path required to avoid Critical Moving Objects (CMOs) such as cars and motorcycles. It simultaneously accounts for the VIP's mobility constraints, the different CMO types and movement patterns, and predicted collision times, conducting a safety-prediction trajectory analysis of the optimal path for the VIP to move along. We implement a real-world prototype to conduct extensive outdoor experiments that record the aforementioned parameters, and these measurements populate our simulations for evaluation against the state of the art. Experimental results demonstrate that MinD outperforms the Artificial Potential Field (APF) approach in effectively planning a short collision-free route, requiring only 1.69 m of movement on average (90.23% shorter than APF) with a 0% collision rate, adapting to the VIP's mobility limitations, and providing a high safe time separation (>5.35 s on average, compared to APF). MinD also shows near real-time performance, with decisions taking only 0.04 s of processing time on a standard off-the-shelf laptop.
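    The paper's optimisation formulation is not reproduced in the abstract; the sketch below only illustrates the general idea of optimal local collision-free path planning: among candidate sidesteps within the pedestrian's reach, pick the shortest one that keeps a safety margin to every moving object under a constant-velocity prediction. All parameters, the sampling scheme, and the constant-velocity assumption are illustrative choices, not MinD's actual method.

```python
# Sketch of optimisation-based local collision avoidance: among candidate
# sidesteps within the pedestrian's reach, pick the shortest one that keeps
# a safety margin to every moving object, assuming constant-velocity
# prediction. Parameters and the sampling scheme are illustrative only.
import math

def predict(pos, vel, t):
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def min_displacement(vip_pos, objects, horizon=5.0, margin=1.5,
                     max_step=3.0, step=0.25, dt=0.25):
    """objects: list of (position, velocity) pairs for critical moving objects."""
    times = [i * dt for i in range(int(horizon / dt) + 1)]

    def safe(p):
        return all(math.hypot(p[0] - x, p[1] - y) >= margin
                   for opos, ovel in objects
                   for x, y in [predict(opos, ovel, t) for t in times])

    if safe(vip_pos):
        return (0.0, 0.0)                       # staying put is already safe
    for r in [i * step for i in range(1, int(max_step / step) + 1)]:
        for k in range(16):                     # 16 candidate directions
            a = 2 * math.pi * k / 16
            cand = (vip_pos[0] + r * math.cos(a), vip_pos[1] + r * math.sin(a))
            if safe(cand):
                return (cand[0] - vip_pos[0], cand[1] - vip_pos[1])
    return None                                 # no safe sidestep within reach

# Example: a vehicle approaching along the x axis at 4 m/s.
# Prints roughly (0.0, 1.5): a 1.5 m sidestep perpendicular to its path.
print(min_displacement((0.0, 0.0), [((20.0, 0.0), (-4.0, 0.0))]))
```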

    Smart Assistive Technology for People with Visual Field Loss

    Visual field loss results in an inability to clearly see objects in the surrounding environment, which affects the ability to recognise potential hazards. In visual field loss, parts of the visual field are impaired to varying degrees, while other parts may remain healthy. This defect can be debilitating, making daily life activities very stressful. Unlike blind people, people with visual field loss retain some functional vision. It would be beneficial to intelligently augment this vision by adding computer-generated information that increases users' awareness of possible hazards through early notifications. This thesis introduces a smart hazard attention system that helps people with visual field impairment navigate, using smart glasses and a real-time hazard classification system. It takes the form of a novel, customised, machine-learning-based hazard classification system that can be integrated into wearable assistive technology such as smart glasses. The proposed solution provides early notifications based on (1) the visual status of the user and (2) the motion status of the detected object. The presented technology can detect multiple objects at the same time and classify them into different hazard types. The system design in this work consists of four modules: (1) a deep-learning-based object detector to recognise static and moving objects in real time, (2) a Kalman-filter-based multi-object tracker to track the detected objects over time and determine their motion model, (3) a neural-network-based classifier to determine the level of danger of each hazard using motion features extracted while the object is in the user's field of vision, and (4) a feedback generation module to translate the hazard level into a smart notification that increases the user's cognitive perception using the healthy vision within the visual field. For qualitative system testing, normal and personalised impaired-vision models were implemented. The personalised model was created to synthesise the visual function of people with visual field defects. Actual central and full-field test results were used to create a personalised model that is used in the feedback generation stage of the system, where visual notifications are displayed in the user's healthy visual area. The proposed solution will enhance the quality of life of people suffering from visual field loss. This non-intrusive, wearable hazard detection technology can provide an obstacle avoidance solution and prevent falls and collisions early with minimal information.
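    Module (2) above tracks detected objects with a Kalman filter; a constant-velocity filter over 2D coordinates is a common choice for this role. The Python sketch below shows such a filter, with the state layout and noise parameters chosen for illustration rather than taken from the thesis.

```python
# Minimal constant-velocity Kalman filter of the kind commonly used for
# multi-object tracking (module 2 above). State is [x, y, vx, vy]; the
# detector supplies noisy (x, y) measurements. Noise values are illustrative.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=1.0 / 30, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # position is observed
        self.Q = q * np.eye(4)                           # process noise
        self.R = r * np.eye(2)                           # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

# The estimated velocity (x[2:]) is the kind of motion feature a downstream
# hazard classifier could consume.
kf = ConstantVelocityKF()
for i in range(10):
    kf.predict()
    kf.update((0.5 * i, 0.2 * i))       # synthetic detections
print(kf.x)
```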

    Analysis of Navigation Assistants for Blind and Visually Impaired People: A Systematic Review

    Over the last few decades, developing smart and intelligent guidance mechanisms for the indoor and outdoor navigation of blind and visually impaired people (BVIPs) has remained a challenging task for researchers. Existing research needs to be analysed from a historical perspective, from early work on the first electronic travel aids to the use of modern artificial vision models for the navigation of BVIPs. Diverse approaches, such as the e-cane, guide dog, infrared-based cane, and laser-based walker, have been proposed for the navigation of BVIPs. However, most of these techniques have limitations: infrared- and ultrasonic-based assistance has a short detection range, while laser-based assistance can harm other people if it directly hits their eyes or another part of the body. These trade-offs are critical for bringing this technology into practice. To systematically assess, analyse, and identify the primary studies in this specialised field and provide an overview of its trends and empirical evidence, this systematic review was performed by defining a set of relevant keywords, formulating four research questions, defining selection criteria for the articles, and synthesising the empirical evidence in this area. Our pool of studies includes the 191 articles most relevant to the field, reported between 2011 and 2020 (a portion of 2020 is included). This systematic mapping will help researchers, engineers, and practitioners make better-grounded decisions, identify gaps in the available navigation assistants, and accordingly propose new and enhanced smart assistant applications to ensure the safe and accurate guidance of BVIPs. This research work has several implications, in particular the potential to reduce fatalities and major injuries among BVIPs. Qatar University [IRCC-2020-009].

    Wonder Vision: A Hybrid Way-finding System to Assist People with Visual Impairment

    We use multi-sensory information to find our way around environments. Among the senses, vision plays a crucial part in way-finding tasks such as perceiving landmarks and layouts. People with impaired vision may find it difficult to move around in unfamiliar environments because they are unable to use their eyesight to capture critical information. Limited vision can affect how people interact with their environment, especially during navigation. Individuals with varying degrees of vision require different levels of way-finding aids: blind people rely heavily on white canes, whereas people with low vision may choose magnifiers for reading signs or GPS mobile applications to acquire knowledge of a place before arrival. The purpose of this study is to investigate the in-situ challenges of way-finding for persons with visual impairments. Using the methodologies of Research through Design (RTD) and User-centered Design (UCD), I conducted online user research and created a series of iterative prototypes leading to a final one: Wonder Vision, a hybrid way-finding system that combines Augmented Reality (AR) and a Voice User Interface (VUI) to assist people with visual impairments. The descriptive evaluation suggests Wonder Vision as a possible solution for helping people with visual impairments find their way toward their goals.

    Indoor Mapping and Reconstruction with Mobile Augmented Reality Sensor Systems

    Augmented Reality (AR) makes it possible to display virtual, three-dimensional content directly within the real environment. Instead of showing arbitrary virtual objects at arbitrary locations, however, AR technology can also be used to display geodata in situ at the very place the data refer to. AR thus opens up the possibility of enriching the real world with virtual, location-based information. In this thesis, this variant of AR is defined as "Fused Reality" and discussed in depth. The practical value offered by the Fused Reality concept can be demonstrated well by its application to digital building models, where building-specific information, for example the routing of pipes and cables inside the walls, can be displayed in its correct position on the real object. To realize the outlined concept of an indoor Fused Reality application, some basic requirements must be met. A building can only be augmented with location-based information if a digital model of that building is available. While larger construction projects are nowadays often planned and carried out with the help of Building Information Modelling (BIM), so that a digital model is created together with the real building, digital models are usually not available for older existing buildings. Creating a digital model of an existing building manually is possible but involves considerable effort. If a suitable building model is available, an AR device must additionally be able to determine its own position and orientation in the building relative to this model in order to display augmentations in the correct place. This thesis examines and discusses various aspects of this problem. First, different ways of capturing indoor building geometry with sensor systems are discussed. Subsequently, an investigation is presented into how far modern AR devices, which typically also carry a multitude of sensors, are themselves suitable for use as indoor mapping systems. The resulting indoor mapping datasets can then be used to reconstruct building models automatically. For this purpose, an automated, voxel-based indoor reconstruction method is presented and quantitatively evaluated on four datasets captured for this purpose together with corresponding reference data. Furthermore, different ways of localizing mobile AR devices within a building and the corresponding building model are discussed; in this context, the evaluation of a marker-based indoor localization method is also presented. Finally, a new approach for aligning indoor mapping datasets with the axes of the coordinate system is introduced.
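    As a small illustration of the voxel-based reconstruction step mentioned above, the Python sketch below bins a point cloud into an occupancy grid of fixed-size voxels and discards sparsely populated cells. The voxel size, the point-count threshold, and the synthetic data are assumptions; the thesis's actual reconstruction method is more involved.

```python
# Sketch of the first step of a voxel-based indoor reconstruction: convert
# a point cloud (e.g. captured by an AR device's sensors) into an occupancy
# grid by binning points into fixed-size voxels. The voxel size and the
# simple counting rule are illustrative choices.
import numpy as np

def voxelize(points, voxel_size=0.05, min_points=3):
    """points: (N, 3) array in metres. Returns occupied voxel indices and the grid origin."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    occupied = voxels[counts >= min_points]     # suppress isolated noise points
    return {tuple(v) for v in occupied}, origin

# Synthetic example: a flat "floor" patch plus a few stray outlier points.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 1, 500),
                         rng.uniform(0, 1, 500),
                         rng.normal(0.0, 0.005, 500)])
outliers = rng.uniform(0, 1, (5, 3))
occ, origin = voxelize(np.vstack([floor, outliers]))
print(len(occ), "occupied voxels")
```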

    Linking Physical Objects to Their Digital Twins via Fiducial Markers Designed for Invisibility to Humans

    The ability to label and track physical objects that are assets in digital representations of the world is foundational to many complex systems. Simple yet powerful methods such as bar- and QR-codes have been highly successful, e.g. in the retail space, but the lack of security, limited information content, and impossibility of seamless integration with the environment have prevented large-scale linking of physical objects to their digital twins. This paper proposes to link digital assets created through building information modeling (BIM) with their physical counterparts using fiducial markers whose patterns are defined by cholesteric spherical reflectors (CSRs), selective retroreflectors produced using liquid crystal self-assembly. The markers leverage the ability of CSRs to encode information that is easily detected and read with computer vision while remaining practically invisible to the human eye. We analyze the potential of a CSR-based infrastructure from the perspective of BIM, critically reviewing the outstanding challenges in applying this new class of functional materials, and we discuss extended opportunities in assisting autonomous mobile robots to reliably navigate human-populated environments, as well as in augmented reality.
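    The hard part of the proposal is decoding CSR markers with computer vision, which the abstract leaves to the paper itself; the Python sketch below only illustrates the subsequent linking step, in which a decoded marker payload is resolved to a digital-twin record exported from a BIM model. The data model, identifiers, and registry API are hypothetical.

```python
# Illustrative sketch of the linking step only: once a marker's payload has
# been decoded by a computer-vision pipeline, it is used as a key into a
# registry of digital-twin records exported from a BIM model. The data
# model and identifiers below are assumptions, not the paper's design.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class DigitalTwinAsset:
    asset_id: str          # e.g. a globally unique identifier from the BIM model
    name: str
    location: tuple        # (x, y, z) in the building coordinate frame

class TwinRegistry:
    def __init__(self) -> None:
        self._assets: Dict[str, DigitalTwinAsset] = {}

    def register(self, marker_payload: str, asset: DigitalTwinAsset) -> None:
        self._assets[marker_payload] = asset

    def resolve(self, marker_payload: str) -> Optional[DigitalTwinAsset]:
        """Map a decoded marker payload to its digital twin, if known."""
        return self._assets.get(marker_payload)

registry = TwinRegistry()
registry.register("CSR-000042",
                  DigitalTwinAsset("ASSET-0001", "Fire door, level 2",
                                   (12.3, 4.5, 0.0)))
print(registry.resolve("CSR-000042"))
```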