Radar and RGB-depth sensors for fall detection: a review
This paper reviews recent literature on systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in recent years, addressing the societal issue of a growing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, an advantage for practical deployment and for users' acceptance and compliance compared with other sensor technologies such as video cameras or wearables. Furthermore, combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.
A novel monitoring system for fall detection in older people
Indexing: Scopus. This work was supported in part by CORFO - CENS 16CTTS-66390 through the National Center on Health Information Systems; in part by the National Commission for Scientific and Technological Research (CONICYT) through the Program STIC-AMSUD 17STIC-03 "MONITORing for ehealth", FONDEF ID16I10449 "Sistema inteligente para la gestión y análisis de la dotación de camas en la red asistencial del sector público", and MEC80170097 "Red de colaboración científica entre universidades nacionales e internacionales para la estructuración del doctorado y magister en informática médica en la Universidad de Valparaíso". The work of V. H. C. De Albuquerque was supported by the Brazilian National Council for Research and Development (CNPq) under Grant 304315/2017-6. Each year, more than 30% of people over 65 years old suffer a fall. Unfortunately, falls can cause physical and psychological damage, especially for those who live alone and are unable to get help. Several studies have aimed to detect and alert potential falls in older people using different types of sensors and algorithms. In this paper, we present a novel non-invasive monitoring system for fall detection in older people who live alone. Our proposal uses very-low-resolution thermal sensors to classify a fall and then alert the care staff. We also analyze the performance of three recurrent neural networks for fall detection: long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (Bi-LSTM). As with many learning algorithms, we performed a training phase using different test subjects. After several tests, we observed that the Bi-LSTM approach outperforms the other techniques, reaching 93% accuracy in fall detection. We believe the bidirectional nature of the Bi-LSTM algorithm yields excellent results because each output is influenced by both prior and subsequent information, in contrast to LSTM and GRU.
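The bidirectional intuition behind Bi-LSTM can be sketched with a minimal NumPy example: the same recurrent pass is run forward and backward over a sequence of flattened thermal frames, and the two hidden-state sequences are concatenated so every time step carries context from both past and future frames. This is a toy vanilla-RNN stand-in with random weights, not the paper's trained LSTM/GRU/Bi-LSTM models; all dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a sequence of T low-resolution thermal frames,
# each flattened to D features, processed by a vanilla bidirectional RNN.
T, D, H = 10, 64, 16          # e.g. 8x8 frames -> D = 64, hidden size H

# Randomly initialised weights stand in for trained parameters.
Wx = rng.normal(0, 0.1, (H, D))   # input-to-hidden
Wh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden

def rnn_pass(frames):
    """Run a vanilla tanh RNN over the frame sequence, return all hidden states."""
    h = np.zeros(H)
    states = []
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return np.stack(states)

frames = rng.normal(size=(T, D))            # stand-in thermal sequence
fwd = rnn_pass(frames)                      # left-to-right context
bwd = rnn_pass(frames[::-1])[::-1]          # right-to-left context, re-aligned
bidir = np.concatenate([fwd, bwd], axis=1)  # each step sees past AND future

print(bidir.shape)  # (10, 32): per-frame features with context from both directions
```

A classifier head on top of these concatenated states is what lets a bidirectional model exploit frames both before and after a candidate fall event.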
Information obtained using this system did not compromise the user's privacy, which constitutes an additional advantage of this alternative. © 2013 IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=842305
Adaptive thermal sensor array placement for human segmentation and occupancy estimation
Thermal sensor arrays (TSAs) are privacy-preserving, low-cost, and non-invasive, which makes them suitable for various indoor applications such as anomaly detection, health monitoring, home security, and energy-efficiency monitoring. Previous human-centred applications of TSAs usually relied on a fixed sensor location, so that the human-sensor distance and the shape of the human presence were fixed. However, placing the sensor in different locations and in new indoor environments poses a significant challenge. In this paper, a novel framework based on a deep convolutional encoder-decoder network is proposed to address this challenge in real-life deployment. The framework produces a semantic segmentation of the human presence and estimates occupancy in an indoor environment. It can also segment the human presence and count the number of people across different sensor locations, indoor environments, and human-to-sensor distances. Furthermore, the impact of distance on the human presence as observed by the TSA is investigated. The framework is evaluated for estimating occupancy across different sensor locations, numbers of occupants, environments, and human distances, using both classification and regression machine learning approaches. This paper shows that the classification approach using the adaptive boosting algorithm is accurate, achieving 98.43% and 100% accuracy for the vertical and overhead sensor locations, respectively.
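As a rough intuition for occupancy estimation from a TSA frame, a crude baseline (not the paper's encoder-decoder network) can threshold above-ambient pixels and count connected warm blobs. The frame size, threshold, and synthetic data below are illustrative assumptions:

```python
import numpy as np
from collections import deque

def count_occupants(frame, threshold=1.0):
    """Count connected warm blobs in a low-resolution thermal frame.

    Pixels more than `threshold` degrees above ambient are foreground,
    and each 4-connected blob is counted as one occupant.
    """
    hot = frame > threshold
    seen = np.zeros_like(hot, dtype=bool)
    count = 0
    rows, cols = hot.shape
    for r in range(rows):
        for c in range(cols):
            if hot[r, c] and not seen[r, c]:
                count += 1                      # new blob found: flood-fill it
                queue = deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and hot[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return count

# Synthetic 8x8 frame (degrees above ambient) with two warm regions.
frame = np.zeros((8, 8))
frame[1:3, 1:3] = 2.5    # occupant one
frame[5:7, 4:7] = 3.0    # occupant two
print(count_occupants(frame))  # 2
```

A learned segmentation network replaces the fixed threshold and blob heuristic precisely because those break down when sensor location, environment, and human-to-sensor distance vary.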
Privacy-preserving Social Distance Monitoring on Microcontrollers with Low-Resolution Infrared Sensors and CNNs
Low-resolution infrared (IR) array sensors offer a low-cost, low-power, and privacy-preserving alternative to optical cameras and smartphones/wearables for social distance monitoring in indoor spaces, permitting the recognition of basic shapes without revealing the personal details of individuals. In this work, we demonstrate that accurate detection of social distance violations can be achieved by processing the raw output of an 8×8 IR array sensor with a small-sized Convolutional Neural Network (CNN). Furthermore, the CNN can be executed directly on a Microcontroller (MCU)-based sensor node.
With results on a newly collected open dataset, we show that our best CNN achieves 86.3% balanced accuracy, significantly outperforming the 61% achieved by a state-of-the-art deterministic algorithm. By changing the architectural parameters of the CNN, we obtain a rich Pareto set of models, spanning 70.5-86.3% accuracy and 0.18-75k parameters. Deployed on an STM32L476RG MCU, these models have a latency of 0.73-5.33 ms, with an energy consumption per inference of 9.38-68.57 µJ.
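The kind of deterministic baseline such a CNN is compared against can be sketched as follows: given the centroids of warm blobs detected on the 8×8 grid, convert their pixel separation to metres with an assumed ground-plane scale and flag pairs below a distance threshold. The scale, threshold, and centroids here are illustrative assumptions, not values from the paper.

```python
import numpy as np
from itertools import combinations

def distance_violations(centroids, metres_per_pixel=0.5, min_distance=2.0):
    """Flag pairs of detected people standing too close.

    `centroids` are (row, col) blob positions on the 8x8 IR grid;
    `metres_per_pixel` is an assumed ground-plane scale for an overhead
    sensor (in practice it depends on mounting height and field of view).
    """
    violations = []
    for a, b in combinations(range(len(centroids)), 2):
        d = float(np.linalg.norm(np.subtract(centroids[a], centroids[b])))
        d *= metres_per_pixel
        if d < min_distance:
            violations.append((a, b, round(d, 2)))
    return violations

# Three people on the grid: two adjacent, one far away.
people = [(1.0, 1.0), (2.0, 2.0), (6.0, 7.0)]
print(distance_violations(people))  # [(0, 1, 0.71)]
```

Heuristics of this kind depend on clean blob detection and a fixed geometric calibration, which is one reason a learned model working on raw sensor output can outperform them.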
Device-free indoor localisation with non-wireless sensing techniques : a thesis by publications presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Electronics and Computer Engineering, Massey University, Albany, New Zealand
Global Navigation Satellite Systems provide accurate and reliable outdoor positioning to support a large number of applications across many sectors. Unfortunately, such systems do not operate reliably inside buildings due to the signal degradation caused by the absence of a clear line of sight with the satellites. The past two decades have therefore seen intensive research into the development of Indoor Positioning Systems (IPS). While considerable progress has been made in the indoor localisation discipline, there is still no widely adopted solution. The proliferation of Internet of Things (IoT) devices within the modern built environment provides an opportunity to localise human subjects by utilising such ubiquitous networked devices. This thesis presents the development, implementation and evaluation of several passive indoor positioning systems using ambient Visible Light Positioning (VLP), capacitive flooring, and thermopile sensors (low-resolution thermal cameras). These systems position the human subject in a device-free manner (i.e., the subject is not required to be instrumented). The developed systems improve upon state-of-the-art solutions by offering superior position accuracy whilst also using more robust and generalised test setups. The developed passive VLP system is one of the first reported solutions making use of ambient light to position a moving human subject. The capacitive-floor-based system improves upon the accuracy of existing flooring solutions and demonstrates the potential for automated fall detection. The system also requires very little calibration, i.e., variations of the environment or subject have very little impact upon it. The thermopile positioning system is likewise shown to be robust to changes in the environment and subjects. Improvements are made over the current literature by testing across multiple environments and subjects whilst using a robust ground truth system.
Finally, advanced machine learning methods were implemented and benchmarked against a thermopile dataset, which has been made available for other researchers to use.
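A simple non-ML baseline of the kind such machine learning methods are benchmarked against is a temperature-weighted centroid over a thermopile frame. The 8×8 grid, ambient temperature, and synthetic subject below are illustrative assumptions, not the thesis's actual setup:

```python
import numpy as np

def localise(frame, ambient=20.0):
    """Estimate a single subject's (row, col) position on the sensor grid
    as the temperature-weighted centroid of above-ambient pixels.

    Real device-free systems must additionally handle multiple subjects,
    sensor noise, and grid-to-world calibration.
    """
    excess = np.clip(frame - ambient, 0.0, None)   # heat above ambient only
    if excess.sum() == 0:
        return None                                # nobody detected
    rows, cols = np.indices(frame.shape)
    r = (rows * excess).sum() / excess.sum()
    c = (cols * excess).sum() / excess.sum()
    return (float(r), float(c))

frame = np.full((8, 8), 20.0)   # ambient background
frame[3:5, 5:7] += 6.0          # warm subject towards one corner
print(localise(frame))  # (3.5, 5.5)
```

Benchmarking learned regressors against a baseline like this shows how much of the positioning accuracy comes from the model rather than from the sensor geometry itself.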
Sensor System for Rescue Robots
A majority of rescue worker fatalities result from on-scene responses. Existing technologies help first responders in no-light scenarios, and robots even exist that can navigate radioactive areas. However, none are both quickly deployable and able to enter hard-to-reach or unsafe areas in an emergency such as an earthquake or a storm that damages a structure. In this project we created a sensor platform system to augment existing robotic solutions, so that rescue workers can search for people in danger while avoiding preventable injury or death and saving time and resources. Our results showed that we were able to build a 2D map of the room, updated as the robot moves, on a display, while also showing a live thermal image of the area in front of the system. The system is also capable of taking a digital picture on a triggering event and then displaying it on the computer screen. We discovered that data transfer plays a huge role in making different programs, such as Arduino and Processing, interact with each other; this needs to be accounted for when improving the project. In particular, our system is currently wired, but should deliver data wirelessly to be of any practical use. Furthermore, we dipped our toes into SLAM technologies; if the project were to become autonomous, more research into these algorithms would make that autonomy feasible.
Hardware for recognition of human activities: a review of smart home and AAL related technologies
Activity recognition (AR), from the applied perspective of ambient assisted living (AAL) and smart homes (SH), has become a subject of great interest. Promising a better quality of life, AR applied in contexts such as health, security, and energy consumption can lead to solutions capable of reaching even the people most in need. This study was strongly motivated by the fact that the development, deployment, and transfer of AR solutions to society and industry depend not only on software development but also on the hardware devices used. The current paper identifies contributions of hardware to activity recognition through a review of the scientific literature in the Web of Science (WoS) database. This work found four dominant groups of technologies used for AR in SH and AAL (smartphones, wearables, video, and electronic components) and two emerging technologies: Wi-Fi and assistive robots. Many of these technologies overlap across research works. Through bibliometric network analysis, the present review identified some gaps and new potential combinations of technologies for advances in this emerging worldwide field. The review also relates the use of these six technologies to health conditions, health care, emotion recognition, occupancy, mobility, posture recognition, localization, fall detection, and generic activity-recognition applications. The above can serve as a road map that allows readers to execute approachable projects, deploy applications in different socioeconomic contexts, and establish networks with the community involved in this topic. This analysis shows that the activity-recognition research field accepts that specific goals cannot be achieved with a single hardware technology, but can be with joint solutions; this paper shows how such technologies work together in this regard.