
    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, as they address the societal issue of a growing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, an advantage for practical deployment and for users’ acceptance and compliance compared with other sensor technologies such as video cameras or wearables. Furthermore, combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors discussed in this paper.
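    The sensor fusion mentioned in this abstract is often realised as decision-level (late) fusion of per-sensor classifiers. The sketch below is a minimal illustration of that general idea, not a method from the review; the two classifiers it assumes, their weights, and the threshold are all hypothetical.

```python
def fuse_fall_scores(p_radar: float, p_depth: float,
                     w_radar: float = 0.5, w_depth: float = 0.5,
                     threshold: float = 0.5) -> bool:
    """Decision-level fusion of two fall-probability scores.

    p_radar and p_depth are assumed to come from separately trained
    classifiers (e.g. on radar micro-Doppler signatures and on RGB-D
    skeleton features); the weights and threshold are illustrative.
    """
    fused = w_radar * p_radar + w_depth * p_depth
    return fused >= threshold

# Example: radar is fairly confident a fall occurred, depth less so.
print(fuse_fall_scores(0.82, 0.40))  # True: 0.61 >= 0.5
```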

    Activity Recognition for IoT Devices Using Fuzzy Spatio-Temporal Features as Environmental Sensor Fusion

    The IoT is a fast-developing field in which new approaches and trends are in constant change. In this scenario, new devices and sensors offer higher precision in everyday life in an increasingly less invasive way. In this work, we propose the use of spatio-temporal features, computed by means of fuzzy logic, as a general descriptor for heterogeneous sensors. This fuzzy sensor representation is highly efficient and enables devices with low computing power to perform learning and evaluation tasks in activity recognition using light and efficient classifiers. To show the methodology's potential in real applications, we deploy an intelligent environment in which UWB location devices, inertial objects, wearable devices, and binary sensors are connected with each other and describe daily human activities. We then apply the proposed fuzzy logic-based methodology to obtain spatio-temporal features that fuse the data from the heterogeneous sensor devices. A case study developed in the UJAmI Smart Lab of the University of Jaen (Jaen, Spain) shows the encouraging performance of the methodology when recognizing the activities of an inhabitant using efficient classifiers.
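    One simple way to obtain fuzzy temporal features of this kind is to score how "recently" each sensor fired with a membership function over the elapsed time. The following sketch is only illustrative: the trapezoidal breakpoints and the aggregation by maximum are assumptions, not the paper's exact definitions.

```python
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership function: rises on [a, b], flat on [b, c],
    falls on [c, d], zero elsewhere."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuzzy_recent_activation(event_times, now, fade_start=30.0, horizon=120.0):
    """Degree to which a sensor was activated 'recently': elapsed times
    under `fade_start` seconds count fully, fading to 0 at `horizon`."""
    memberships = [trapezoid(now - t, -1.0, 0.0, fade_start, horizon)
                   for t in event_times]
    return max(memberships, default=0.0)

# Example: a binary door sensor fired 20 s and 200 s ago (now = 1000 s).
print(fuzzy_recent_activation([980.0, 800.0], 1000.0))  # 1.0 (the 20 s event)
```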

    Smart aging : utilisation of machine learning and the Internet of Things for independent living

    Smart aging utilises innovative approaches and technology to improve older adults’ quality of life, increasing their prospects of living independently. One of the major concerns for older adults living independently is a “serious fall”, as almost a third of people aged over 65 have a fall each year. Dementia, affecting nearly 9% of the same age group, poses another significant issue that needs to be identified as early as possible. Existing fall detection systems based on wearable sensors generate many false alarms; hence, a more accurate and secure system is necessary. Furthermore, there is a considerable gap in identifying the onset of cognitive impairment through remote monitoring of self-assisted seniors living in their own residences. Applying biometric security improves older adults’ confidence in using IoT and makes it easier for them to benefit from smart aging. Several publicly available datasets are pre-processed to extract distinctive features to address fall detection shortcomings, to identify the onset of dementia, and to enable biometric security for wearable sensors. These key features are used with novel machine learning algorithms to train models for the fall detection system, the system identifying the onset of dementia, and the biometric authentication system. Applying a quantitative approach, these models are tested and analysed on the test dataset. The fall detection approach proposed in this work, in multimodal mode, can achieve an accuracy of 99% in detecting a fall. Additionally, using 13 selected features, a system for detecting early signs of dementia is developed; it achieves an accuracy of 93% in identifying cognitive decline in older adults, using only selected aspects of their daily activities. Furthermore, the ML-based biometric authentication system uses physiological signals, such as ECG and photoplethysmogram, in a fusion mode to identify and authenticate a person, enhancing their privacy and security in a smart aging environment. The benefits offered by the fall detection system, the early detection of signs of dementia, and the biometric authentication system can improve the quality of life of seniors who prefer to live independently or by themselves.
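    As a rough sketch of the feature-extraction-plus-classifier pipeline described above (not the thesis's actual features, datasets, or models), a wearable-sensor fall detector could be trained on simple statistics of accelerometer windows; the synthetic data and the random-forest choice here are purely illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_window: np.ndarray) -> np.ndarray:
    """Simple statistics of a (n_samples, 3) accelerometer window."""
    mag = np.linalg.norm(acc_window, axis=1)  # acceleration magnitude
    return np.array([mag.mean(), mag.std(), mag.max(),
                     mag.min(), mag.max() - mag.min()])

# Hypothetical training data: calm windows (label 0) vs high-variance
# "fall-like" windows (label 1); a real system would use labelled datasets.
rng = np.random.default_rng(0)
labels = [0, 1] * 50
windows = [rng.normal(0.0, 1.0 + 3.0 * y, size=(100, 3)) for y in labels]

X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:2]))  # sanity check on two training windows
```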

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    HABITAT: An IoT Solution for Independent Elderly

    In this work, a flexible and extensive digital platform for Smart Homes is presented, exploiting the most advanced technologies of the Internet of Things, such as Radio Frequency Identification, wearable electronics, Wireless Sensor Networks, and Artificial Intelligence. The main novelty of the paper is the system-level description of the platform's flexibility, which allows the interoperability of different smart devices. This research was developed within the framework of the operative project HABITAT (Home Assistance Based on the Internet of Things for the Autonomy of Everybody), which aims at developing smart devices that support elderly people both in their own houses and in retirement homes, embedding them in everyday-life objects. This reduces healthcare expenses, thanks to a lower need for personal assistance, and provides a better quality of life to elderly users.

    Human activity mining in multi-occupancy contexts based on nearby interaction under a fuzzy approach

    Multi-occupancy refers to real-life environments in which several people interact in the same shared space. Recognizing the activities of each inhabitant in this context is challenging and complex. This work presents a fuzzy knowledge-based system for mining human activities in multi-occupancy contexts based on nearby interaction captured with Ultra-wideband (UWB) devices. First, the spatial location of interest zones is modelled using a straightforward fuzzy logic approach, which enables discriminating short-term event interactions. Second, linguistic protoforms use fuzzy rules to describe long-term events for mining human activities in a multi-occupancy context. A data set with multimodal sensors has been collected and labelled to demonstrate the application of the approach. The results show an encouraging performance (0.9 precision) in discriminating activities in multi-occupancy settings.
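    The zone-membership and protoform ideas can be sketched as follows; the circular zones, linear membership, and min-based fuzzy AND are assumptions for illustration, not the paper's actual rule base, and the positions and zone centre are hypothetical.

```python
import math

def zone_membership(pos, centre, radius=1.0):
    """Fuzzy membership of a UWB position to a circular interest zone:
    1 at the centre, decreasing linearly to 0 at `radius` metres."""
    return max(0.0, 1.0 - math.dist(pos, centre) / radius)

def protoform_together(pos_a, pos_b, zone_centre):
    """Protoform-style rule 'A and B are in the zone together', evaluated
    with the min operator as the fuzzy AND of the two memberships."""
    return min(zone_membership(pos_a, zone_centre),
               zone_membership(pos_b, zone_centre))

kitchen = (2.0, 3.0)  # hypothetical zone centre in metres
print(protoform_together((2.2, 3.1), (2.5, 2.8), kitchen))  # ~0.46
```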