3 research outputs found

    Vision-based Assistive Indoor Localization

    An indoor localization system is of significant importance to the visually impaired in their daily lives, helping them localize themselves and navigate an indoor environment. In this thesis, a vision-based indoor localization solution is proposed and studied, with algorithms and implementations that maximize the use of the visual information surrounding the user for optimal localization across multiple stages. The contributions of the work include the following: (1) Novel combinations of an everyday smartphone with a low-cost lens (GoPano) are used to provide an economical, portable, and robust indoor localization service for visually impaired people. (2) New omnidirectional features (omni-features) extracted from 360-degree field-of-view images are proposed to represent visual landmarks of indoor positions, and are then used as online query keys when a user asks for localization services. (3) A scalable and lightweight computation and storage solution is implemented by transferring large database storage and the computationally heavy querying procedure to the cloud. (4) Real-time query performance of 14 fps is achieved over a Wi-Fi connection by identifying and implementing both data and task parallelism using many-core NVIDIA GPUs. (5) Refined localization is achieved via 2D-to-3D and 3D-to-3D geometric matching, and automatic path planning for efficient environmental modeling is performed by utilizing architectural AutoCAD floor plans. This dissertation first provides a description of the assistive indoor localization problem, its detailed scope, and the overall methodology. Then, related work in indoor localization and automatic path planning for environmental modeling is surveyed. After that, the framework of omnidirectional-vision-based indoor assistive localization is introduced, followed by multiple refined-localization strategies such as the 2D-to-3D and 3D-to-3D geometric matching approaches. Finally, conclusions and a few promising future research directions are provided.
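
    To make the omni-feature idea concrete, the following is a minimal Python sketch, under stated assumptions, of one way a heading-invariant descriptor can be built from a 360-degree panoramic strip; the function name, band count, and FFT-based design are illustrative assumptions, not the thesis's actual feature definition.

        # A sketch assuming the GoPano donut image has already been unwarped
        # into a cylindrical strip `pano` (rows = elevation, columns = azimuth).
        # Rotating the camera in place only circular-shifts the columns, and the
        # magnitude of a 1D FFT along the azimuth axis is invariant to such shifts.
        import numpy as np

        def omni_descriptor(pano, bands=8):
            gray = pano.mean(axis=2) if pano.ndim == 3 else pano.astype(float)
            feats = []
            for band in np.array_split(gray, bands, axis=0):  # keep some vertical structure
                profile = band.mean(axis=0)               # 1D azimuth intensity profile
                spectrum = np.abs(np.fft.rfft(profile))   # shift-invariant magnitudes
                spectrum /= spectrum[0] + 1e-9            # normalize out global brightness
                feats.append(spectrum[1:17])              # keep low-frequency terms only
            return np.concatenate(feats)

        # Querying then reduces to nearest-neighbor search over stored descriptors,
        # e.g., location = argmin_i || omni_descriptor(query) - database[i] ||.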

    Real-time indoor assistive localization with mobile omnidirectional vision and cloud GPU acceleration

    In this paper, we propose a real-time assistive localization approach to help blind and visually impaired people navigate an indoor environment. The system consists of a mobile vision front end, with a portable panoramic lens mounted on a smartphone, and a remote image feature-based database of the scene on a GPU-enabled server. Compact and effective omnidirectional image features are extracted and represented on the smartphone front end and then transmitted to the server in the cloud. The features of a short video clip are used to search the database of the indoor environment via image-based indexing to find the location of the current view within the database, which is associated with floor plans of the environment. A median-filter-based multi-frame aggregation strategy is used for single-path modeling, and a 2D multi-frame aggregation strategy based on the candidates' distribution densities is used for multi-path environmental modeling to provide a final location estimate. To deal with the high computational cost of searching a large database in a realistic navigation application, data parallelism and task parallelism properties are identified in the database indexing process, and computation is accelerated using multi-core CPUs and GPUs. A user-friendly human-computer interface, designed particularly for the visually impaired, is implemented on an iPhone, which also supports system configuration and scene modeling for new environments. Experiments on a database of an eight-floor building demonstrate the capability of the proposed system, with real-time response (14 fps) and robust localization results.
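
    The median-filter-based aggregation mentioned above lends itself to a compact illustration. The following Python sketch assumes each frame of a short clip has already been matched to a best database index along a single path, and shows how a sliding median can suppress isolated mismatches before a final estimate is reported; the function and variable names are hypothetical.

        import numpy as np
        from scipy.signal import medfilt

        def aggregate_single_path(frame_matches, window=5):
            # frame_matches: per-frame best-match path indices for a short clip.
            smoothed = medfilt(np.asarray(frame_matches, dtype=float), kernel_size=window)
            # Report the median of the smoothed track as the clip's location estimate.
            return int(np.median(smoothed))

        # One grossly wrong frame match (index 250) is rejected by the filter:
        print(aggregate_single_path([41, 42, 42, 250, 43, 43, 44]))  # -> 43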

    A Mobile Cyber-Physical System Framework for Aiding People with Visual Impairment

    It is a challenging problem for researchers and engineers in the assistive technology (AT) community to provide suitable solutions for visually impaired people (VIPs) through AT to meet their orientation, navigation, and mobility (ONM) needs. Given the spectrum of assistive technologies currently available for aiding VIPs with ONM, our literature review and survey have shown that there is a reluctance to adopt these technological solutions in the VIP community. Motivated by these findings, we think it critical to re-examine and rethink the approaches that have been taken. It is our belief that we need to take a different and innovative approach to solving this problem. We propose an integrated mobile cyber-physical system framework (MCPSF) with an 'agent' and a 'smart environment' to address VIPs' ONM needs in urban settings. For example, one of the essential needs for VIPs is to make street navigation easier and safer for them as pedestrians. In a busy city neighborhood, crossing a street is problematic for VIPs: knowing if it is safe, knowing when to cross, and being sure to remain on path and not collide or interfere with objects and people. These remain issues keeping VIPs from a truly independent lifestyle. In this dissertation, we propose a framework based on mobile cyber-physical systems (MCPS) to address VIPs' ONM needs. The concept of mobile cyber-physical systems is intended to bridge the physical space we live in with a cyberspace filled with unique information coming from Internet of Things (IoT) devices that are part of Smart City infrastructure. The devices in the IoT may be embedded in different kinds of physical structures. People with vision loss or other special needs may have difficulties comprehending or perceiving signals directly in the physical space, but they can make such connections in cyberspace. These cyber connections and real-time information exchanges will enable and enhance their interactions in the physical space and help them better navigate through city streets and street crossings. As part of the dissertation work, we designed and implemented a proof-of-concept prototype with essential functions to aid VIPs with their ONM needs. We believe our research and prototype experience open a new approach to further research that might enhance ONM functions beyond our prototype, with potential for commercial product development.
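
    As a hypothetical illustration of the agent/smart-environment interaction described above (the endpoint URL, JSON schema, and decision rule below are our own assumptions, not the dissertation's actual prototype), an agent might poll a Smart City service for a pedestrian-signal state and convert it into a non-visual cue:

        import json
        import time
        from urllib.request import urlopen

        SIGNAL_URL = "http://city.example/api/crosswalk/5th-and-main"  # hypothetical endpoint

        def poll_signal():
            # Assumed payload shape: {"state": "WALK", "seconds_left": 12}
            with urlopen(SIGNAL_URL, timeout=2) as resp:
                return json.load(resp)

        def advise(signal):
            # Only advise crossing when the walk phase has enough time left.
            if signal["state"] == "WALK" and signal["seconds_left"] >= 8:
                return "Walk signal on, %d seconds left: safe to start crossing" % signal["seconds_left"]
            return "Wait: do not start crossing"

        while True:
            try:
                print(advise(poll_signal()))  # spoken or haptic output in a real agent
            except OSError:
                print("Signal unavailable; fall back to other cues")
            time.sleep(1)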