    Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments

    Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage technologies such as computer vision and Bluetooth Low Energy (BLE) to estimate the position of a user in indoor areas. Computer-vision-based systems use techniques including image matching, classification of captured images, and recognition of visual objects or visual markers. BLE-based systems use BLE beacons deployed in the indoor area as radio-frequency signal sources to localize the user. Methods: In this paper, we examine the performance and usability of two computer-vision-based systems and a BLE-based system. The first computer-vision-based system, called CamNav, uses a trained deep learning model to recognize locations; the second, called QRNav, uses visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted with the three navigation systems. Results: The results of the navigation experiment and the feedback from the blindfolded users show that QRNav and CamNav are more efficient than the BLE-based system in terms of accuracy and usability. The error of the BLE-based application is more than 30% higher than that of the computer-vision-based systems CamNav and QRNav. Conclusions: The developed navigation systems provided reliable assistance to the participants during real-time experiments. Some participants required minimal external assistance when moving through junctions in the corridor areas. Computer-vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments. © 2019 The Author(s).
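For readers unfamiliar with how such BLE systems localize a user, the sketch below shows one common approach: converting beacon RSSI readings to distances with a log-distance path-loss model and reporting the zone of the nearest beacon. This is a minimal illustration under assumed parameters, not the paper's implementation; the beacon names, positions, calibration constants (tx_power, path-loss exponent n), and readings are all invented.

```python
# Minimal sketch of a typical BLE-beacon localization step, assuming a
# log-distance path-loss model; not the method used in the paper.
import math

# Hypothetical calibration data per beacon: tx_power = RSSI measured at 1 m,
# n = path-loss exponent (~2 in free space, typically 2.7-4.0 indoors).
BEACONS = {
    "beacon-corridor-A": {"pos": (0.0, 0.0), "tx_power": -59, "n": 2.8},
    "beacon-corridor-B": {"pos": (12.0, 0.0), "tx_power": -61, "n": 2.8},
}

def rssi_to_distance(rssi: float, tx_power: float, n: float) -> float:
    """Estimate distance in metres from a single RSSI reading."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_beacon(readings: dict) -> str:
    """Report the beacon with the smallest estimated distance;
    the user's location is then announced as that beacon's zone."""
    return min(
        readings,
        key=lambda b: rssi_to_distance(
            readings[b], BEACONS[b]["tx_power"], BEACONS[b]["n"]
        ),
    )

# Example: these (invented) RSSI samples place the user nearer beacon A.
print(nearest_beacon({"beacon-corridor-A": -68, "beacon-corridor-B": -80}))
```

Because indoor RSSI fluctuates with multipath and occlusion, distance estimates of this kind are inherently noisy, which is consistent with the higher error the paper reports for the BLE-based system.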

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances the understanding people and ubiquitous technologies have of their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. The approach works effectively without ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft feature sets for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions. (Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems)
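To make the classification step concrete, here is a minimal PyTorch sketch of a convolutional network over single-channel thermal patches with a 32-class material head, matching the task described above. The architecture, layer sizes, and patch size are assumptions for illustration, not the network from the paper.

```python
# Sketch of a thermal-texture material classifier; layer sizes are invented.
import torch
import torch.nn as nn

class ThermalTextureNet(nn.Module):
    def __init__(self, num_classes: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling over the texture patch
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of single-channel thermal patches, e.g. shape (N, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

model = ThermalTextureNet()
logits = model(torch.randn(4, 1, 64, 64))  # 4 dummy thermal patches
print(logits.argmax(dim=1))                # predicted material indices
```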

    Mixed Reality Browsers and Pedestrian Navigation in Augmented Cities

    In this paper, we use a declarative format for positional audio, with synchronization between audio chunks expressed in SMIL. This format has been specifically designed for the type of audio used in AR applications. The audio engine associated with this format runs on mobile platforms (iOS, Android). Our MRB browser, called IXE, uses a format based on volunteered geographic information (OpenStreetMap), and OSM documents for IXE can be fully authored inside OSM editors like JOSM. This is in contrast with other AR browsers such as Layar, Junaio, and Wikitude, which use a Point of Interest (POI) based format that has no notion of ways. This introduces a fundamental difference, and in some sense a duality relation, between IXE and the other AR browsers. In IXE, Augmented Virtuality (AV) navigation along a route (composed of ways) is central, and AR interaction with objects is delegated to associated 3D activities. In AR browsers, navigation along a route is delegated to associated map activities, and AR interaction with objects is central. IXE supports multiple tracking technologies and therefore allows both indoor navigation in buildings and outdoor navigation at the level of sidewalks. A first Android version of the IXE browser will be released at the end of 2013. Being based on volunteered geographic information, it will allow building accessible pedestrian networks in augmented cities.
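To make the way/POI distinction concrete: because IXE's format has a notion of ways, a route can be computed directly over the way graph, something a pure POI-based format cannot express. Below is a minimal Python sketch, with invented node IDs and coordinates, of building a graph from OSM-style ways (ordered node lists) and computing a shortest path whose waypoints could then drive positional-audio cues.

```python
# Sketch of way-based routing in the spirit of IXE; data is made up.
import heapq
import math

NODES = {1: (48.858, 2.294), 2: (48.859, 2.295),
         3: (48.860, 2.295), 4: (48.859, 2.297)}
WAYS = [[1, 2, 3], [2, 4]]  # each way = consecutive node ids, as in OSM

def dist(a, b):
    # Rough planar distance; a real router would use haversine.
    return math.hypot(NODES[a][0] - NODES[b][0], NODES[a][1] - NODES[b][1])

# Build an undirected adjacency list from consecutive node pairs in each way.
graph = {n: [] for n in NODES}
for way in WAYS:
    for a, b in zip(way, way[1:]):
        graph[a].append((b, dist(a, b)))
        graph[b].append((a, dist(a, b)))

def route(start, goal):
    """Dijkstra over the way graph; returns the node sequence that
    could be announced as audio cues along the route."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

print(route(1, 4))  # -> [1, 2, 4]
```

A POI-based browser, holding only isolated points, has no such edge structure to search, which is why route navigation there must be delegated to a separate map activity.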

    Opportunities for Supporting Self-efficacy through Orientation & Mobility Training Technologies for Blind and Partially Sighted People

    Orientation and mobility (O&M) training provides essential skills and techniques for safe and independent mobility for blind and partially sighted (BPS) people. The demand for O&M training is increasing as the number of people living with vision impairment grows. Despite the growing portfolio of HCI research on assistive technologies (AT), few studies have examined the experiences of BPS people during O&M training, including the use of technology to aid it. To address this gap, we conducted semi-structured interviews with 20 BPS people and 8 Mobility and Orientation Trainers (MOTs). The interviews were thematically analysed and organised into four overarching themes discussing factors that influence the self-efficacy beliefs of BPS people: Tools and Strategies for O&M Training, Technology Use in O&M Training, Changing Personal and Social Circumstances, and Social Influences. We further highlight opportunities for combinations of multimodal technologies to increase access to, and the effectiveness of, O&M training.

    On supporting university communities in indoor wayfinding: An inclusive design approach

    Mobility can be defined as the ability of people to move, live, and interact with the space around them. In this context, indoor mobility, in terms of indoor localization and wayfinding, is a relevant topic due to the challenges it presents compared with outdoor mobility, where GPS can be exploited; indoors, GPS is of little use. Knowing how to move in an indoor environment can be crucial for people with disabilities, in particular for blind users, but it also benefits anyone moving through an unfamiliar place. Following this line of thought, we employed an inclusive-by-design approach to implement and deploy a system that comprises an Internet of Things infrastructure and an accessible mobile application providing wayfinding functions, targeting the university community. As a real-world case study, we considered the University of Bologna, designing a system able to be deployed in buildings with different configurations and settings, including historical buildings. The final system was evaluated in three different scenarios with three different target audiences (18 users in total): (i) students with disabilities (i.e., visual and mobility impairments); (ii) campus students; and (iii) visitors and tourists. Results reveal that all the participants enjoyed the provided functions and that the indoor localization strategy was accurate enough to provide a good wayfinding experience.
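One practical consequence of an inclusive-by-design approach is that the wayfinding engine must compute different routes for different user profiles, for example avoiding stairs for users with mobility impairments. The sketch below illustrates one simple way to filter routes by required features; the building graph, feature tags, and function names are hypothetical and are not the deployed system's design.

```python
# Sketch of profile-aware indoor route filtering; layout and tags are invented.
from collections import deque

# (from, to, required_feature): edges tagged "stairs" are barriers for
# wheelchair users; untagged edges (None) are assumed usable by everyone.
EDGES = [
    ("entrance", "atrium", None),
    ("atrium", "floor1", "stairs"),
    ("atrium", "floor1", "elevator"),
    ("floor1", "room_1_04", None),
]

def find_route(start, goal, avoid=frozenset()):
    """Breadth-first search that skips edges whose required feature is in
    the user's avoid set (e.g. {'stairs'} for mobility impairments)."""
    adj = {}
    for a, b, feat in EDGES:
        if feat in avoid:
            continue
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, prev = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# A wheelchair user is routed via the elevator edge rather than the stairs.
print(find_route("entrance", "room_1_04", avoid={"stairs"}))
```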

    D3.1 User expectations and cross-modal interaction

    This document is deliverable D3.1, “User expectations and cross-modal interaction”. It presents user studies conducted to understand expectations of, and reactions to, content presentation methods for mobile AR applications, along with recommendations for realizing an interface and interaction design in accordance with user needs and disabilities.