
    Real-Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor

    Any mobility aid for visually impaired people should be able to accurately detect and warn about nearby obstacles. In this paper, we present a method for a support system that detects obstacles in indoor environments using the Kinect sensor and 3D image processing. Color-depth data of the scene in front of the user is collected with the Kinect, supported by the standard 3D-sensing framework OpenNI, and processed with the PCL library to extract accurate 3D information about the obstacles. Experiments were performed on datasets covering multiple indoor scenarios and different lighting conditions. The results show that our system accurately detects four types of obstacle: walls, doors, stairs, and a residual class covering loose obstacles on the floor. Specifically, walls and loose obstacles on the floor are detected in practically all cases, whereas doors are detected in 90.69% of 43 positive image samples. For stair detection, upstairs cases were correctly detected in 97.33% of 75 positive images, while the rate for downstairs detection is lower at 89.47% of 38 positive images. Our method also allows the computation of the distance between the user and the obstacles.
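The user-to-obstacle distance computation this abstract mentions can be illustrated with a minimal sketch: drop the floor points from a depth point cloud, then take the range to the closest remaining point. This is not the authors' OpenNI/PCL pipeline; the floor-height threshold and sensor range below are assumed values.

```python
import numpy as np

def nearest_obstacle_distance(points, floor_height=0.05, max_range=4.0):
    """Given an N x 3 point cloud (x, y, z) in metres from a depth sensor
    (y up, z forward from the user), drop floor-level points and return
    the distance to the closest remaining obstacle point, or None."""
    pts = np.asarray(points, dtype=float)
    # Keep points above the assumed floor plane and within sensor range.
    mask = (pts[:, 1] > floor_height) & (pts[:, 2] > 0) & (pts[:, 2] < max_range)
    obstacles = pts[mask]
    if obstacles.size == 0:
        return None
    # Euclidean distance from the sensor origin to each obstacle point.
    dists = np.linalg.norm(obstacles, axis=1)
    return float(dists.min())

# Example cloud: a floor point, a wall fragment 2 m ahead, a point out of range.
cloud = [[0.0, 0.01, 1.0],   # floor
         [0.3, 1.20, 2.0],   # wall fragment
         [0.0, 1.00, 9.0]]   # beyond max_range
d = nearest_obstacle_distance(cloud)  # distance to the wall fragment
```

A real system would first fit the floor plane (e.g. with RANSAC, as PCL does) rather than using a fixed height threshold.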

    Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions

    The World Health Organization (WHO) reports that there are 285 million visually impaired people worldwide, 39 million of whom are totally blind. Several systems have been designed to support visually-impaired people and to improve their quality of life. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group. The contribution of this literature survey is to discuss in detail the most significant devices presented in the literature to assist this population, highlighting their improvements, advantages, disadvantages, and accuracy. Our aim is to address most of the issues of these systems and pave the way for other researchers to design devices that ensure safety and independent mobility for visually-impaired people.
    https://doi.org/10.3390/s1703056

    A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired

    The world has approximately 285 million visually impaired (VI) people according to a report by the World Health Organization: 39 million are estimated to be blind, and 246 million are estimated to have impaired vision. An important motivation for this research is that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life and support the mobility of VI people. Unfortunately, none of these systems provides a complete solution, and they are very expensive. Therefore, this work presents an intelligent framework that includes several types of sensors embedded in a wearable device to support the VI community. The proposed work integrates sensor-based and computer-vision-based techniques to introduce an efficient and economical visual device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios on a dataset of 30 videos averaging 700 frames per video. The sequence of techniques used for the real-time detection component achieved a 96.53% accuracy rate, based on a wide detection view using two camera modules and a detection range of approximately 9 meters; a 98% accuracy rate was obtained on a larger dataset. However, the main contribution of this work is a novel collision avoidance approach based on image depth and fuzzy control rules. Using an x-y coordinate system, we mapped the input frames such that each frame was divided into three areas vertically, with the bottom third of the frame's height marked off horizontally, in order to specify the urgency of any obstacle within that frame.
In addition, fuzzy logic was used to provide precise information that helps the VI user avoid front obstacles. The strength of this approach is that it aids VI users in avoiding 100% of all detected objects. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device can therefore be described as accurate, reliable, user-friendly, light, and economically accessible; it facilitates the mobility of VI people and does not require any prior knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and shown to outperform them.
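The frame-partition rule this abstract describes (three vertical zones, with the bottom third of the frame treated as the near, urgent region) can be sketched crisply. This is an illustrative stand-in with a hard-coded rule table, not the thesis's fuzzy rule base; the function name and advice strings are hypothetical.

```python
def obstacle_advice(cx, cy, frame_w, frame_h):
    """Map an obstacle's bounding-box centre (cx, cy), in pixels, to a
    zone, an urgency level, and a steering hint, following the partition
    described in the abstract: three vertical zones, with the bottom
    third of the frame treated as the urgent (near) region."""
    if cx < frame_w / 3:
        zone = "left"
    elif cx < 2 * frame_w / 3:
        zone = "center"
    else:
        zone = "right"
    # Obstacles low in the frame are close to the user's feet: urgent.
    urgency = "urgent" if cy > 2 * frame_h / 3 else "normal"
    # Crisp stand-in for the fuzzy rules: steer away from the zone.
    advice = {"left": "move right",
              "right": "move left",
              "center": "stop" if urgency == "urgent" else "slow down"}[zone]
    return zone, urgency, advice
```

A genuine fuzzy controller would replace the hard thresholds with overlapping membership functions over position and depth, then defuzzify the rule outputs into a single instruction.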

    Smart Assistive Technology for People with Visual Field Loss

    Visual field loss results in the lack of ability to clearly see objects in the surrounding environment, which affects the ability to determine potential hazards. In visual field loss, parts of the visual field are impaired to varying degrees, while other parts may remain healthy. This defect can be debilitating, making daily life activities very stressful. Unlike blind people, people with visual field loss retain some functional vision. It would be beneficial to intelligently augment this vision by adding computer-generated information to increase the users' awareness of possible hazards by providing early notifications. This thesis introduces a smart hazard attention system to help visual field impaired people with their navigation using smart glasses and a real-time hazard classification system. This takes the form of a novel, customised, machine learning-based hazard classification system that can be integrated into wearable assistive technology such as smart glasses. The proposed solution provides early notifications based on (1) the visual status of the user and (2) the motion status of the detected object. The presented technology can detect multiple objects at the same time and classify them into different hazard types. The system design in this work consists of four modules: (1) a deep learning-based object detector to recognise static and moving objects in real-time, (2) a Kalman Filter-based multi-object tracker to track the detected objects over time to determine their motion model, (3) a Neural Network-based classifier to determine the level of danger for each hazard using its motion features extracted while the object is in the user's field of vision, and (4) a feedback generation module to translate the hazard level into a smart notification to increase the user's cognitive perception using the healthy vision within the visual field. For qualitative system testing, normal and personalised defected vision models were implemented.
The personalised defected vision model was created to synthesise the visual function of people with visual field defects. Actual central and full-field test results were used to create a personalised model that is used in the feedback generation stage of this system, where the visual notifications are displayed in the user's healthy visual area. The proposed solution will enhance the quality of life for people suffering from visual field loss conditions. This non-intrusive, wearable hazard detection technology can provide an obstacle avoidance solution and prevent falls and collisions early with minimal information.
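Module (2) of the pipeline above, a Kalman Filter-based tracker that estimates each object's motion model over time, can be sketched for a single image coordinate under a constant-velocity assumption. This is a generic textbook filter, not the thesis's multi-object tracker; the noise parameters are assumed values.

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter matrices for one image coordinate.
    State is [position, velocity]; only position is measured."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # measurement model
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # Predict the next state and its covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured position z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object drifting ~2 px/frame; the velocity estimate, which is
# the kind of motion feature fed to the hazard classifier, approaches 2.
F, H, Q, R = make_cv_kalman()
x, P = np.zeros((2, 1)), np.eye(2) * 10.0
for z in [0.1, 2.0, 4.1, 5.9, 8.0, 10.1]:
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

A full multi-object tracker would additionally associate detections to tracks (e.g. by nearest neighbour or the Hungarian algorithm) before running one such filter per object.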

    Novel Framework for Outdoor Mobility Assistance and Auditory Display for Visually Impaired People

    Outdoor mobility of Visually Impaired People (VIPs) has always been challenging due to dynamically varying scenes and environmental states. A variety of systems have been introduced to assist VIPs' mobility, including sensor-mounted canes and systems using machine intelligence. However, these systems are not reliable when used to navigate VIPs in dynamically changing environments. The associated challenges are robustly sensing and avoiding diverse types of obstacles, dynamically modelling changing environmental states (e.g. moving objects, road works), and communicating effectively to convey the environmental states and hazards. In this paper, we propose an intelligent wearable auditory display framework that processes real-time video and multi-sensor data streams to: a) identify the type of obstacle, b) recognize the surrounding scene/objects and their attributes (e.g. geometry, size, shape, distance from the user), c) automatically generate descriptive information about the recognized obstacles/objects and attributes, and d) produce accurate, precise, and reliable spatial information and corresponding instructions in audio-visual form to assist and navigate VIPs safely, with or without the assistance of traditional means.
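Step d) of such an auditory display, turning an obstacle's spatial position into an audio cue, can be sketched as a mapping from bearing and distance to stereo pan and loudness. This is an illustrative rendering choice, not the paper's method; the ±90° field and linear loudness ramp are assumptions.

```python
import math

def audio_cue(bearing_deg, distance_m, max_range=10.0):
    """Turn an obstacle's bearing (negative = left, positive = right,
    degrees) and distance into per-channel stereo amplitudes in [0, 1]
    that grow louder as the obstacle gets closer."""
    pan = max(-1.0, min(1.0, bearing_deg / 90.0))
    loudness = max(0.0, min(1.0, 1.0 - distance_m / max_range))
    # Equal-power panning: split loudness between left/right channels.
    angle = (pan + 1.0) * math.pi / 4.0   # 0 .. pi/2
    left = loudness * math.cos(angle)
    right = loudness * math.sin(angle)
    return left, right
```

A production system would layer the generated verbal descriptions (steps b and c) on top of such spatialised tones, e.g. via head-related transfer functions for true 3D audio.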

    GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework

    In the last few decades, a variety of assistive technologies (AT) have been developed to improve the quality of life of visually impaired people, including providing an independent means of travel and thus better access to education and places of work. There is, however, no metric for comparing and benchmarking these technologies, especially multimodal systems. In this dissertation, we propose GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation, a framework that allows developers and consumers to assess their technologies in a functional and objective manner. This framework is based on three foundations: multimodality, gamification, and virtual reality. It facilitates fuller and more controlled data collection, rapid prototyping and testing of multimodal ATs, benchmarking of heterogeneous ATs, and conversion of these evaluation tools into simulation or training tools. Our contributions include: (1) a unified evaluation framework, via an evaluative approach for multimodal visual ATs; (2) sustainable evaluation, employing virtual environments and gamification techniques to create engaging games for users while collecting experimental data for analysis; (3) a novel psychophysics evaluation, enabling researchers to conduct psychophysics evaluation even though the experiment is a navigational task; and (4) a novel collaborative environment, enabling developers to rapidly prototype and test their ATs with users, an early stakeholder involvement that fosters communication between developers and users. This dissertation first provides background on assistive technologies and motivation for the framework. This is followed by a detailed description of the GIVE-ME Framework, with particular attention to its user interfaces, foundations, and components. Four applications are then presented that describe how the framework is applied, with results and discussion for each.
Finally, conclusions and a few directions for future work are presented in the last chapter.

    Clinical Decision Support Systems with Game-based Environments, Monitoring Symptoms of Parkinson’s Disease with Exergames

    Parkinson’s Disease (PD) is a malady caused by progressive neuronal degeneration, resulting in several physical and cognitive symptoms that worsen with time. Like many other chronic diseases, it requires constant monitoring to make medication and therapeutic adjustments, owing to the significant variability in PD symptomatology and progression between patients. At the moment, this monitoring requires substantial participation from caregivers and numerous clinic visits. Personal diaries and questionnaires are used as data sources for medication and therapeutic adjustments; the subjectivity of these data sources leads to suboptimal clinical decisions. Therefore, more objective data sources are required to better monitor the progress of individual PD patients. A potential contribution towards more objective monitoring of PD is clinical decision support systems. These systems employ sensors and classification techniques to provide caregivers with objective information for their decision-making, leading to more objective assessments of patient improvement or deterioration and, in turn, better adjusted medication and therapeutic plans. However, encouraging patients to actively and regularly provide data for remote monitoring remains a significant challenge. To address this challenge, the goal of this thesis is to combine clinical decision support systems with game-based environments. More specifically, serious games in the form of exergames, active video games that involve physical exercise, are used to deliver objective data for PD monitoring and therapy. Exergames increase engagement while combining physical and cognitive tasks. This combination, known as dual-tasking, has been shown to improve rehabilitation outcomes in PD: recent randomized clinical trials on exergame-based rehabilitation in PD show improvements in clinical outcomes that are equal or superior to those of traditional rehabilitation.
In this thesis, we present an exergame-based clinical decision support system model to monitor symptoms of PD. This model provides both objective information on PD symptoms and an engaging environment for patients. The model is elaborated, prototypically implemented, and validated in the context of two of the most prominent symptoms of PD: (1) balance and gait, and (2) hand tremor and slowness of movement (bradykinesia). While balance and gait impairments increase the risk of falling, hand tremor and bradykinesia affect hand dexterity. We employ Wii Balance Boards and Leap Motion sensors, and digitalize aspects of the current clinical standards used to assess PD symptoms. In addition, we present two dual-tasking exergames: PDDanceCity for balance and gait, and PDPuzzleTable for tremor and bradykinesia. We evaluate the capability of our system to assess the risk of falling and the severity of tremor in comparison with clinical standards, and explore the statistical significance and effect size of the data collected from PD patients and healthy controls. We demonstrate that the presented approach can predict an increased risk of falling and estimate tremor severity, and that the target population shows good acceptance of PDDanceCity and PDPuzzleTable. In summary, our results indicate clear feasibility of implementing this system for PD. Nevertheless, long-term randomized clinical trials are required to evaluate the potential of PDDanceCity and PDPuzzleTable for physical and cognitive rehabilitation effects.
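One way tremor severity can be estimated from a sensor trace like the Leap Motion hand-position stream is via the dominant oscillation frequency; resting PD tremor typically falls in the 4-6 Hz band. The sketch below is a generic spectral estimate, not the thesis's scoring method, and the synthetic trace parameters are assumed.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the dominant oscillation frequency (Hz) of a hand-position
    trace sampled at fs Hz, e.g. from a Leap Motion controller."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))       # magnitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Synthetic 5 Hz tremor sampled at 100 Hz for 2 s, with seeded noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 0.01)
trace = 0.8 * np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)
```

A clinical pipeline would also report band power in the tremor band and a comparable sway metric (e.g. centre-of-pressure path length from the balance board) for fall-risk prediction.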