6 research outputs found

    A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired

    The world has approximately 285 million visually impaired (VI) people according to a report by the World Health Organization. Thirty-nine million people are estimated to be blind, whereas 246 million are estimated to have impaired vision. An important factor motivating this research is that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life of VI people and to support their mobility. Unfortunately, none of these systems provides a complete solution, and they are very expensive. Therefore, this work presents an intelligent framework that includes several types of sensors embedded in a wearable device to support the visually impaired (VI) community. The proposed work integrates sensor-based and computer-vision-based techniques to produce an efficient and economical visual device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. A dataset of 30 videos, averaging 700 frames per video, was fed to the system for testing. The proposed sequence of techniques for the real-time detection component achieved a 96.53% accuracy rate, based on a wide detection view that uses two camera modules and a detection range of approximately 9 meters. A 98% accuracy rate was obtained on a larger dataset. However, the main contribution of this work is the proposed novel collision avoidance approach, which is based on image depth and fuzzy control rules. Using an x-y coordinate system, we mapped the input frames: each frame was divided into three areas vertically, and further into the lower third of its height horizontally, in order to specify the urgency of any obstacle within that frame.
In addition, we provide precise information, via fuzzy logic, to help the VI user avoid frontal obstacles. The strength of the proposed approach is that it aids VI users in avoiding 100% of detected objects. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device is therefore accurate, reliable, user-friendly, lightweight, and economical; it facilitates the mobility of VI people and requires no prior knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and shown to outperform them.
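The frame-partitioning scheme described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the depth thresholds (3 m and 6 m, within the stated ~9 m detection range) and the three-level urgency grading are assumptions standing in for the paper's fuzzy control rules.

```python
def classify_obstacle(x, y, depth_m, frame_w, frame_h):
    """Return (zone, urgency) for an obstacle centered at pixel (x, y)."""
    # Three vertical areas: left, center, right.
    if x < frame_w / 3:
        zone = "left"
    elif x < 2 * frame_w / 3:
        zone = "center"
    else:
        zone = "right"

    # The lower third of the frame corresponds to obstacles near the user's feet.
    in_lower_third = y > 2 * frame_h / 3

    # Coarse fuzzy-style grading on estimated depth (thresholds assumed).
    if in_lower_third and depth_m < 3:
        urgency = "high"
    elif depth_m < 6:
        urgency = "medium"
    else:
        urgency = "low"
    return zone, urgency

# An obstacle low in the left third of the frame at 2.5 m is urgent:
print(classify_obstacle(x=100, y=700, depth_m=2.5, frame_w=640, frame_h=960))
# → ('left', 'high')
```

A full fuzzy controller would replace the hard thresholds with membership functions over depth and position, but the zone-plus-urgency output shape is the same.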

    Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions

    The World Health Organization (WHO) reported that there are 285 million visually impaired people worldwide. Among these individuals, 39 million are totally blind. Several systems have been designed to support visually-impaired people and to improve the quality of their lives. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group. The contribution of this literature survey is to discuss in detail the most significant devices presented in the literature to assist this population, highlighting their improvements, advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues of these systems to pave the way for other researchers to design devices that ensure safety and independent mobility for visually-impaired people.
    https://doi.org/10.3390/s1703056

    GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework

    In the last few decades, a variety of assistive technologies (AT) have been developed to improve the quality of life of visually impaired people. These include providing an independent means of travel and thus better access to education and places of work. There is, however, no metric for comparing and benchmarking these technologies, especially multimodal systems. In this dissertation, we propose GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation, a framework that allows developers and consumers to assess their technologies in a functional and objective manner. The framework rests on three foundations: multimodality, gamification, and virtual reality. It facilitates fuller and more controlled data collection, rapid prototyping and testing of multimodal ATs, benchmarking of heterogeneous ATs, and conversion of these evaluation tools into simulation or training tools. Our contributions include: (1) a unified evaluation framework, via an evaluative approach for multimodal visual ATs; (2) sustainable evaluation, by employing virtual environments and gamification techniques to create engaging games for users while collecting experimental data for analysis; (3) a novel psychophysics evaluation, enabling researchers to conduct psychophysics evaluation even though the experiment is a navigational task; and (4) a novel collaborative environment, enabling developers to rapidly prototype and test their ATs with users, with early stakeholder involvement that fosters communication between developers and users. This dissertation first provides background on assistive technologies and motivation for the framework. This is followed by a detailed description of the GIVE-ME framework, with particular attention to its user interfaces, foundations, and components. Four applications are then presented that describe how the framework is applied, with results and discussion for each.
Finally, conclusions and a few directions for future work are presented in the last chapter.

    Semi-supervised Regression with Generative Adversarial Networks Using Minimal Labeled Data

    This work studies the generalization of semi-supervised generative adversarial networks (GANs) to regression tasks. A novel feature-layer contrasting optimization function, in conjunction with a feature-matching optimization, allows the adversarial network to learn from unannotated data and thereby reduces the number of labels required to train a predictive network. An analysis of simulated training conditions is performed to explore the capabilities and limitations of the method. In concert with the semi-supervised regression GANs, an improved label topology and an upsampling technique for multi-target regression tasks are shown to reduce data requirements. Improvements are demonstrated on a wide variety of vision tasks, including dense crowd counting, age estimation, and automotive steering-angle prediction. With training-data limitations arguably being the most restrictive component of deep learning, methods that reduce data requirements hold immense value. The methods proposed here are general-purpose and can be incorporated into existing network architectures with little or no modification to the existing structure.
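The feature-matching optimization mentioned above follows the standard formulation: the generator minimizes the distance between the batch-mean features that an intermediate discriminator layer produces for real versus generated data. A minimal NumPy sketch, with illustrative shapes and names (the paper's contrasting function and network details are not reproduced here):

```python
import numpy as np

def feature_matching_loss(real_features, fake_features):
    """Squared L2 distance between batch-mean feature vectors.

    real_features, fake_features: arrays of shape (batch, feature_dim),
    taken from an intermediate discriminator layer.
    """
    diff = real_features.mean(axis=0) - fake_features.mean(axis=0)
    return float(np.dot(diff, diff))

real = np.array([[1.0, 2.0], [3.0, 4.0]])   # batch mean = [2, 3]
fake = np.array([[2.0, 2.0], [2.0, 2.0]])   # batch mean = [2, 2]
print(feature_matching_loss(real, fake))     # → 1.0
```

Because the loss depends only on unlabeled real samples, it lets the generator train on unannotated data, which is what reduces the labeled-data requirement for the predictive network.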

    Vision-based Assistive Indoor Localization

    An indoor localization system is of significant importance to the visually impaired in their daily lives, helping them localize themselves and navigate indoor environments. In this thesis, a vision-based indoor localization solution is proposed and studied, with algorithms and implementations that maximize the use of the visual information surrounding the user for optimal, multi-stage localization. The contributions of this work include the following: (1) A novel combination of an everyday smartphone with a low-cost lens (GoPano) is used to provide an economical, portable, and robust indoor localization service for visually impaired people. (2) New omnidirectional features (omni-features), extracted from 360-degree field-of-view images, are proposed to represent visual landmarks of indoor positions and are then used as online query keys when a user asks for localization services. (3) A scalable and lightweight computation and storage solution is implemented by transferring the large database storage and the computationally heavy querying procedure to the cloud. (4) Real-time query performance of 14 fps is achieved over a Wi-Fi connection by identifying and implementing both data and task parallelism using many-core NVIDIA GPUs. (5) Refined localization is achieved via 2D-to-3D and 3D-to-3D geometric matching, with automatic path planning for efficient environmental modeling using architectural AutoCAD floor plans. This dissertation first provides a description of the assistive indoor localization problem, with its detailed connotations and the overall methodology. Related work in indoor localization and automatic path planning for environmental modeling is then surveyed. After that, the framework of omnidirectional-vision-based indoor assistive localization is introduced, followed by refined localization strategies such as the 2D-to-3D and 3D-to-3D geometric matching approaches.
Finally, conclusions and a few promising future research directions are provided.
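The online query step in contribution (2) can be illustrated as a nearest-neighbor search over stored omni-feature vectors, one per indexed indoor position. This sketch is an assumption about the retrieval shape only; the omni-feature extraction from 360-degree images, the cloud offloading, and the GPU parallelism are outside it, and the vectors and position labels below are hypothetical.

```python
import numpy as np

def locate(query_feature, db_features, db_positions):
    """Return the indexed position whose stored feature is closest in L2 distance."""
    dists = np.linalg.norm(db_features - query_feature, axis=1)
    return db_positions[int(np.argmin(dists))]

# Hypothetical database: one feature vector per indexed indoor position.
db_features = np.array([[0.0, 1.0],
                        [1.0, 0.0],
                        [0.7, 0.7]])
db_positions = ["corridor-A", "lobby", "stairwell-2"]

print(locate(np.array([0.9, 0.1]), db_features, db_positions))  # → lobby
```

In the thesis's multi-stage design, a coarse match like this would then be refined by the 2D-to-3D and 3D-to-3D geometric matching stages.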

    A Mobile Cyber-Physical System Framework for Aiding People with Visual Impairment

    It is a challenging problem for researchers and engineers in the assistive technology (AT) community to provide suitable solutions for visually impaired people (VIPs) that meet their orientation, navigation, and mobility (ONM) needs. Given the spectrum of assistive technologies currently available for aiding VIPs with ONM, our literature review and survey have shown that there is a reluctance to adopt these technological solutions in the VIP community. Motivated by these findings, we think it critical to re-examine and rethink the approaches that have been taken; it is our belief that a different and innovative approach is needed. We propose an integrated mobile cyber-physical system framework (MCPSF) with an 'agent' and a 'smart environment' to address VIPs' ONM needs in urban settings. For example, one of the essential needs of VIPs is to make street navigation easier and safer for them as pedestrians. In a busy city neighborhood, crossing a street is problematic for VIPs: knowing whether it is safe, knowing when to cross, and being sure to remain on path and not collide or interfere with objects and people. These issues keep VIPs from a truly independent lifestyle. In this dissertation, we propose a framework based on mobile cyber-physical systems (MCPS) to address these needs. The concept of mobile cyber-physical systems is intended to bridge the physical space we live in with a cyberspace filled with unique information coming from IoT (Internet of Things) devices that are part of smart-city infrastructure. The devices in the IoT may be embedded in different kinds of physical structures. People with vision loss or other special needs may have difficulty comprehending or perceiving signals directly in the physical space, but they can make such connections in cyberspace.
These cyber connections and real-time information exchanges will enable and enhance their interactions in the physical space and help them better navigate through city streets and street crossings. As part of the dissertation work, we designed and implemented a proof-of-concept prototype with essential functions to aid VIPs with their ONM needs. We believe our research and prototype experience open a new approach to further research that might enhance ONM functions beyond our prototype, with potential for commercial product development.
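The agent / smart-environment interaction for the street-crossing example can be sketched as below. This is purely illustrative and not taken from the dissertation: the class names, the signal states, and the 10-second crossing-time threshold are all hypothetical stand-ins for an IoT-backed pedestrian signal publishing its state into cyberspace and the mobile agent reading it.

```python
class SmartCrosswalk:
    """Stand-in for an IoT pedestrian signal publishing its state into cyberspace."""
    def __init__(self):
        self.state = "dont_walk"
        self.seconds_remaining = 0

    def update(self, state, seconds_remaining):
        self.state = state
        self.seconds_remaining = seconds_remaining

class MobileAgent:
    """Stand-in for the agent running on the VIP's mobile device."""
    MIN_CROSSING_TIME = 10  # assumed seconds needed to cross safely

    def advise(self, crosswalk):
        # Advise crossing only when the signal allows it AND enough time remains.
        if (crosswalk.state == "walk"
                and crosswalk.seconds_remaining >= self.MIN_CROSSING_TIME):
            return "cross now"
        return "wait"

signal = SmartCrosswalk()
agent = MobileAgent()
print(agent.advise(signal))   # → wait
signal.update("walk", 25)
print(agent.advise(signal))   # → cross now
```

A deployed system would receive these updates over a smart-city network rather than a local object, but the decision logic the agent applies in cyberspace has this shape.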