Classification and Decision-Theoretic Framework for Detecting and Reporting Unseen Falls
Detecting falls is critical for an activity recognition system to ensure the well-being of an individual. However, falls occur rarely, so sufficient data for them may not be available when training classifiers. Building a fall detection system in the absence of fall data is very challenging and can severely undermine the generalization capabilities of an activity recognition system. In this thesis, we present ideas from both classification and decision theory perspectives to handle scenarios in which training data for falls is not available. In traditional decision-theoretic approaches, the utilities (or, conversely, costs) of reporting or not reporting a fall or a non-fall are either treated as equal or deduced from the datasets; both practices are flawed, because realistic costs are difficult to compute and are typically available only from domain experts. Therefore, in a typical fall detection system, we have neither a good model for falls nor an accurate estimate of utilities. In this thesis, we make contributions to handle both of these situations.
In recent years, Hidden Markov Models (HMMs) have been used to model the temporal dynamics of human activities. HMMs are generally built for normal activities, and a threshold based on the log-likelihood of the training data is used to identify unseen falls. We show that this formulation for identifying unseen fall activities is ill-posed. We present a new approach for identifying falls using wearable devices in the absence of training data for falls but with plentiful data for normal Activities of Daily Living (ADL). We propose three 'X-Factor' Hidden Markov Model (XHMM) approaches, which are similar to traditional HMMs but have "inflated" output covariances (observation models). To estimate the inflated covariances, we propose a novel cross-validation method that removes 'outliers', or deviant sequences, from the ADL data; these sequences serve as proxies for unseen falls and allow the XHMMs to be learned using only normal activities. We tested the proposed XHMM approaches on three activity recognition datasets and show high detection rates for unseen falls. We also show that supervised classification methods perform poorly when very limited fall data is available during the training phase.
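The X-Factor idea can be illustrated with a minimal sketch: in place of the thesis's HMMs, a single diagonal Gaussian is fitted to ADL features, and an 'X-factor' class shares its mean but inflates the covariance (the inflation factor of 4 here is an arbitrary assumption, not the cross-validated value); a sample is flagged as a possible fall when the inflated model explains it better than the normal model.

```python
import numpy as np

def gauss_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

rng = np.random.default_rng(0)
adl = rng.normal(0.0, 1.0, size=(500, 3))   # features of normal activities (ADL)
mean, var = adl.mean(axis=0), adl.var(axis=0)
x_var = 4.0 * var                           # "inflated" covariance (assumed factor)

def is_unseen_fall(x):
    """Flag x as a possible fall if the inflated model fits it better."""
    return gauss_loglik(x, mean, x_var) > gauss_loglik(x, mean, var)

print(is_unseen_fall(np.zeros(3)))          # typical ADL sample
print(is_unseen_fall(np.full(3, 5.0)))      # far outside the ADL envelope
```

Because both classes share a mean, the decision reduces to a Mahalanobis-distance threshold: points close to the ADL distribution favour the tight covariance, outliers favour the inflated one.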
We present a novel decision-theoretic approach to fall detection (dtFall) that aims to tackle the core problem in which both the model for falls and information about the associated costs/utilities are unavailable. We theoretically show that the expected regret will always be positive when using dtFall instead of a maximum-likelihood classifier. We present a new method to parameterize unseen falls so that training situations with no fall data can be handled. We also identify problems with theoretical thresholding for identifying falls under decision-theoretic modelling when training data for falls is absent, and present an empirical thresholding technique to handle imperfect models for falls and non-falls. We further develop a new cost model based on the severity of falls to provide an operational range of utilities. We present results on three activity recognition datasets and show how the results may generalize to the difficult problem of fall detection in the real world. Under the condition that falls occur sporadically and rarely in the test set, the results show that (a) knowing the difference in cost between a reported fall and a false alarm is useful, (b) this becomes more significant as the cost of a false alarm grows, and (c) the difference in cost between a reported and a non-reported fall is less useful.
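The decision-theoretic reporting rule can be sketched as a simple expected-cost comparison; the cost values below are hypothetical, not the thesis's severity-based cost model.

```python
# Hypothetical costs: a missed fall is taken to be 20x worse than a false alarm.
C_FALSE_ALARM = 1.0   # cost of reporting when no fall occurred
C_MISSED_FALL = 20.0  # cost of staying silent during an actual fall

def report_fall(p_fall):
    """Report iff the expected cost of silence exceeds the cost of alerting."""
    cost_report = (1 - p_fall) * C_FALSE_ALARM
    cost_silent = p_fall * C_MISSED_FALL
    return cost_silent > cost_report

print(report_fall(0.02))  # below the implied threshold, stay silent
print(report_fall(0.10))  # above it, raise the alarm
```

Under these costs the rule is equivalent to thresholding the fall probability at C_FALSE_ALARM / (C_FALSE_ALARM + C_MISSED_FALL) = 1/21, which makes concrete why the cost ratio, rather than either cost alone, drives the decision.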
Multimodal radar sensing for ambient assisted living
Data acquired from health and behavioural monitoring of daily life activities can be exploited to provide real-time medical and nursing services at affordable cost and with higher efficiency. A variety of sensing technologies for this purpose have been developed and presented in the literature: for instance, wearable IMUs (Inertial Measurement Units) to measure the acceleration and angular speed of the person, cameras to record images or video sequences, PIR (pyroelectric infrared) sensors to detect the presence of a person via the pyroelectric effect, and radar to estimate the distance and radial velocity of the person.
Each sensing technology has pros and cons and may not be optimal for every task. It is possible to leverage the strengths of all these sensors through information fusion in a multimodal fashion. The fusion can take place at three different levels, namely: i) signal level, where commensurate data are combined; ii) feature level, where feature vectors from different sensors are concatenated; and iii) decision level, where the confidence levels or prediction labels of classifiers are used to generate a new output. For each level there are different fusion algorithms; the key challenge lies mainly in choosing the best existing fusion algorithm and in developing novel fusion algorithms that are more suitable for the application at hand.
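The decision-level option can be sketched as follows, with made-up per-class posteriors for a radar classifier and an IMU classifier; soft fusion averages the posteriors, while hard fusion votes over predicted labels.

```python
import numpy as np

# Hypothetical per-class posteriors from two independently trained classifiers.
radar_probs = np.array([0.6, 0.3, 0.1])
imu_probs = np.array([0.2, 0.7, 0.1])

# Feature-level fusion would instead concatenate the raw feature vectors:
#   fused_features = np.concatenate([radar_features, imu_features])

# Soft (confidence-based) decision fusion: average posteriors, then pick a class.
fused = (radar_probs + imu_probs) / 2
soft_label = int(np.argmax(fused))

# Hard decision fusion: majority vote over the individual predicted labels.
votes = [int(np.argmax(radar_probs)), int(np.argmax(imu_probs))]

print(soft_label)  # soft fusion resolves the disagreement between the sensors
```

Here the two classifiers disagree on the label, so a hard vote is tied, but soft fusion picks the class with the higher combined confidence; this sensitivity to confidence is one reason hybrid soft/hard schemes are attractive.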
The fundamental contribution of this thesis is therefore to explore possible information fusion between radar, primarily FMCW (Frequency Modulated Continuous Wave) radar, and wearable IMUs; between distributed radar sensors; and between UWB (ultra-wideband) impulse radar and a pressure-sensor array. The objective is to sense and classify daily activity patterns, gait styles, and micro-gestures, as well as to produce early warnings of high-risk events such as falls. Initially, only "snapshot" activities (a single activity within a short X-s measurement) were collected and analysed to verify the accuracy improvement due to information fusion. Then continuous activities (activities performed one after another with random durations and transitions) were collected to simulate real-world scenarios. To overcome the drawbacks of the conventional sliding-window approach on continuous data, a Bi-LSTM (Bidirectional Long Short-Term Memory) network is proposed to identify the transitions between daily activities. Meanwhile, a hybrid fusion framework is presented to exploit the power of both soft and hard fusion. Moreover, a trilateration-based signal-level fusion method has been successfully applied to the range information of three UWB impulse radars; the results show performance comparable to using the micro-Doppler signature at a fraction of the computational load. For classifying "snapshot" activities, fusion between radar and the wearable IMU shows approximately 12% accuracy improvement compared to using radar only, whereas for classifying continuous activities and gaits, the proposed hybrid fusion and trilateration-based signal-level fusion improve accuracy by roughly 6.8% (from 89% to 95.8%) and 7.3% (from 85.4% to 92.7%), respectively.
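The trilateration step can be sketched from the standard linearization of the range equations: subtracting the first radar's equation from the others yields a linear system in the target position. The anchor positions and ranges below are synthetic, not measurements from the thesis.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """2-D position from three (or more) anchor positions and measured ranges.

    Subtracting the first range equation from the rest linearizes
    |x - a_i|^2 = r_i^2 into A x = b, solved by least squares.
    """
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # radar positions (m)
target = np.array([1.0, 2.0])                             # true subject position
ranges = np.linalg.norm(anchors - target, axis=1)         # noiseless ranges
print(trilaterate(anchors, ranges))                       # recovers ~[1, 2]
```

With noisy ranges the least-squares solve simply returns the best linear fit, which is why this approach is cheap compared to processing full micro-Doppler signatures.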
Mobile Health Technologies
Mobile Health Technologies, also known as mHealth technologies, have emerged amongst healthcare providers as the technologies of choice for the 21st century, delivering not only transformative change in healthcare delivery but also critical health information to different communities of practice in integrated healthcare information systems. mHealth technologies provide seamless platforms and pragmatic tools for managing pertinent health information across the continuum of healthcare providers. They commonly utilize mobile medical devices, monitoring and wireless devices, and/or telemedicine in healthcare delivery and health research. Today, mHealth technologies provide opportunities to record and monitor the conditions of patients with chronic diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and diabetes mellitus. The intent of this book is to enlighten readers about the theories and applications of mHealth technologies in the healthcare domain.
Automated human fall recognition from visual data
Falls are one of the greatest risks for older adults living alone at home. This research presents a novel visual-based fall detection approach to support independent living for older adults in an indoor environment. The aim of the research was to investigate appropriate methods for detecting falls by analysing the motion and shape of the human body.
Several techniques for automatically detecting falls have been proposed. Existing technologies can be classified into three main groups of fall detectors: ambient device-based, wearable sensor-based, and computer vision-based techniques. Ambient device-based techniques use vibration or pressure sensors to capture sound and vibration for detecting the presence and position of a person. Although these devices are inexpensive and do not disturb the user, the detection rate is rather low and many false alarms are generated. Wearable devices use sensors such as accelerometers and gyroscopes to capture body-movement information and detect falls. However, older adults often forget to wear them, and wearable sensors are also considered too invasive, as they require wearing and carrying various uncomfortable devices. Much work has been undertaken to investigate the use of visual-based sensors for fall detection using single, multiple, and omnidirectional cameras.
The research reported in this thesis uses a single camera to detect a moving object using a background subtraction algorithm. The next step is to extract robust features that describe the change in human shape and discriminate falls from other activities such as lying down and sitting. These features are based on motion, change in the human shape, projection histogram features, and the temporal change of head position. Features extracted from the human silhouette are finally fed into various machine learning classifiers for fall detection evaluation.
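A minimal running-average background subtractor, the common textbook form of the technique, can sketch this first stage; the learning rate and difference threshold below are illustrative, not values from the thesis.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of the scene background."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Pixels that differ from the background model by more than `thresh`."""
    return np.abs(frame.astype(float) - bg) > thresh

bg = np.zeros((4, 4))          # learned background (toy scene, all dark)
frame = bg.copy()
frame[1:3, 1:3] = 200          # a bright moving object enters the scene
mask = foreground_mask(bg, frame)
print(mask.sum())              # number of foreground (moving-object) pixels
bg = update_background(bg, frame)  # slowly absorb scene changes into the model
```

The slow update lets gradual lighting changes fade into the background while a fast-moving person keeps registering as foreground, which is exactly the property silhouette extraction relies on.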
The ability to distinguish a fall depends mainly on the quality of the classifier inputs; therefore, the features of the extracted human silhouette play a key role in the effectiveness and robustness of detecting human falls. In this research, the timed Motion History Image (tMHI) method is applied for motion segmentation. In addition, the motion information is combined with other features extracted from an ellipse fitted around the human body to discriminate actual falls from other activities.
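The ellipse fit can be sketched via the second-order central moments of the binary silhouette, whose principal-axis orientation separates an upright figure from a lying one (the toy masks below are illustrative, not the thesis's data):

```python
import numpy as np

def silhouette_orientation(mask):
    """Orientation (radians, from the horizontal axis) of the ellipse fitted
    to a binary silhouette, computed from second-order central moments."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

upright = np.zeros((20, 20), dtype=bool)
upright[2:18, 9:11] = True     # tall, thin silhouette: a standing person
lying = upright.T              # the same silhouette rotated 90 degrees
print(round(silhouette_orientation(upright), 2))  # ~pi/2: vertical major axis
print(round(silhouette_orientation(lying), 2))    # ~0: horizontal major axis
```

A sudden swing of this angle from near pi/2 toward 0, combined with motion cues, is the kind of shape change such features capture.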
Fall detection methods can be divided into two main categories: threshold-based methods and machine-learning-based methods. This research presents threshold-based methods to distinguish between Activities of Daily Living (ADL) and falls. Fall events are detected when the measured feature values exceed pre-determined threshold values. Results show that falls can be distinguished from ADL with an accuracy of 99.82% on our recorded dataset. In addition, various machine learning methods were compared to evaluate their ability to accurately detect falls. Experimental results show the efficiency and reliability of the proposed fall detection approach, with a high fall detection rate of 99.60% and a low false-alarm rate of 2.62% when tested on the UR Fall Detection dataset. Additionally, a set of experiments conducted on our recorded dataset indicates that the proposed approach achieves a fall detection rate of 99.94% and a false-alarm rate of 0.02%.
Development of a human fall detection system based on depth maps
Assistive-care products are increasingly in demand with recent developments in health-sector technologies. Several studies are concerned with improving quality healthcare services and eliminating barriers to providing them to all people, especially elderly people who live alone and those who cannot leave their homes for various reasons, such as disability or obesity. Among these, human fall detection systems play an important role in daily life, because falls are the main obstacle to elderly people living independently and are also a major health concern due to the aging population. The three basic approaches used to develop human fall detection systems are wearable devices, ambient devices, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which are very often rejected by users due to high false-alarm rates and the difficulty of carrying them during daily life activities. Thus, this study proposes a non-invasive human fall detection system based on the height, velocity, statistical analysis, fall risk factors, and position of the subject, using depth information from a Microsoft Kinect sensor. Classification of human falls versus other activities of daily life is accomplished using the height and velocity of the subject extracted from the depth information, after considering the fall risk level of the user. Acceleration and activity detection are also employed if velocity and height fail to classify the activity. Finally, the position of the subject is identified for fall confirmation, or statistical analysis is conducted to verify the fall event. From the experimental results, the proposed system achieved an average accuracy of 98.3%, with a sensitivity of 100% and a specificity of 97.7%, and accurately distinguished all fall events from other activities of daily life.
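The height-and-velocity rule can be sketched as below; the thresholds, sampling interval, and height traces are illustrative guesses, not the values used in the thesis.

```python
def detect_fall(heights, dt=0.1, h_thresh=0.45, v_thresh=-1.5):
    """Flag a fall when the tracked height (metres, e.g. from a depth sensor)
    drops below h_thresh while descending faster than v_thresh (m/s)."""
    for i in range(1, len(heights)):
        velocity = (heights[i] - heights[i - 1]) / dt
        if heights[i] < h_thresh and velocity < v_thresh:
            return True
    return False

standing = [1.70, 1.70, 1.69, 1.70]   # height stays near standing level
falling = [1.70, 1.20, 0.60, 0.30]    # rapid descent toward the floor
print(detect_fall(standing), detect_fall(falling))
```

Requiring both conditions at once is what separates a fall from intentionally lying down, which reaches a low height but at a much lower velocity.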
Snoopy: Sniffing Your Smartwatch Passwords via Deep Sequence Learning
Demand for smartwatches has taken off in recent years, with new models that run independently of smartphones and provide more useful features, becoming first-class mobile platforms. One can access online banking or even make payments on a smartwatch without a paired phone. This makes smartwatches more attractive, and more vulnerable to malicious attacks, which to date have been largely overlooked. In this paper, we demonstrate Snoopy, a password extraction and inference system which is able to accurately infer passwords entered on Android/Apple watches within 20 attempts, just by eavesdropping on motion sensors. Snoopy uses a uniform framework to extract the segments of motion data recorded while passwords are entered, and uses novel deep neural networks to infer the actual passwords. We evaluate the proposed Snoopy system in the real world with data from 362 participants and show that our system offers a ~3-fold improvement in the accuracy of inferring passwords compared to the state of the art, without consuming excessive energy or computational resources. We also show that Snoopy is very resilient to user and device heterogeneity: it can be trained on crowd-sourced motion data (e.g. via Amazon Mechanical Turk) and then used to attack passwords from a new user, even one wearing a different watch model. This paper shows that, in the wrong hands, Snoopy could cause serious leaks of sensitive information. By raising awareness, we invite the community and manufacturers to revisit the risks of continuous motion sensing on smart wearable devices.
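Snoopy's segment-extraction stage can be approximated with a sliding-variance detector: windows of wrist-motion data with high variance are candidates for password entry. The window length, threshold, and synthetic signal below are assumptions; the real system uses a learned extraction framework.

```python
import numpy as np

def extract_segments(signal, win=25, thresh=0.5):
    """Start indices of non-overlapping windows whose variance exceeds thresh,
    a crude stand-in for detecting password-entry motion bursts."""
    active = []
    for i in range(0, len(signal) - win + 1, win):
        if np.var(signal[i:i + win]) > thresh:
            active.append(i)
    return active

rng = np.random.default_rng(1)
quiet = rng.normal(0, 0.05, 100)   # watch at rest: tiny sensor noise
typing = rng.normal(0, 2.0, 50)    # vigorous wrist motion while entering a PIN
stream = np.concatenate([quiet, typing, quiet])
print(extract_segments(stream))    # window starts inside the typing burst
```

Only after such segments are isolated would a sequence model attempt to map the motion to keys, which is why resilience of the extraction step to user and device variation matters so much.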
Instrumentation of a cane to detect and prevent falls
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics). Falls have become the main cause of injuries and deaths in the geriatric community, and the cost of treating fall-related injuries is rising accordingly. Thus, the development of fall-related strategies capable of real-time monitoring without restricting the user is imperative. Owing to their advantages, daily-life accessories can be a solution for embedding fall-related systems, and canes are no exception. Moreover, gait assessment may enhance the capability of cane usage for older cane users, thereby further reducing the possibility of falls amongst them. In summary, it is crucial to develop strategies that continuously recognize fall states, the step before a fall (the pre-fall step), and the different cane events throughout a stride. This thesis aims to develop strategies capable of identifying these situations based on a cane system that collects both inertial and force information, the Assistive Smart Cane (ASCane).
The strategy for fall detection consisted of testing data acquired with the ASCane against three different fixed multi-threshold fall detection algorithms, one dynamic multi-threshold algorithm, and machine learning methods from the literature. These were tested and modified to account for the use of a cane. The best performance was a sensitivity of 96.90% and a specificity of 98.98%.
For the detection of the different cane events in controlled and real-life situations, a state-of-the-art finite-state-machine gait event detector was modified to account for the use of a cane and benchmarked against a ground-truth system. Moreover, a machine learning study was completed involving eight feature selection methods and nine different machine learning classifiers. Results show that the accuracy of the classifiers was quite acceptable, with the best results reaching 98.32% overall accuracy in controlled situations and 94.82% in daily-life situations.
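A finite-state gait-event detector for a cane can be sketched as a two-state machine over the tip-force signal; the load threshold, event names, and force trace below are illustrative, not the ASCane's actual detector.

```python
def cane_events(forces, load=5.0):
    """Two-state machine over cane-tip force samples: emit 'cane-strike' when
    the tip takes up load and 'cane-off' when the load is released."""
    events, loaded = [], False
    for t, f in enumerate(forces):
        if not loaded and f > load:
            events.append((t, "cane-strike"))
            loaded = True
        elif loaded and f < load:
            events.append((t, "cane-off"))
            loaded = False
    return events

# Synthetic tip-force trace (N) over two strides.
forces = [0, 2, 8, 20, 15, 3, 0, 1, 9, 22, 4]
print(cane_events(forces))
```

Keeping explicit state prevents chattering around the threshold: a strike cannot be re-emitted until an off event has occurred, mirroring how finite-state gait detectors enforce a legal event ordering.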
Regarding pre-fall step detection, the same machine learning approach was applied. The models were very accurate (accuracy = 98.15%), and with the implementation of an online post-processing filter all false-positive detections were eliminated; a fall could be detected 1.019 s before the end of the corresponding pre-fall step and 2.009 s before impact.
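The online post-processing filter can be approximated by a debouncing rule that fires only after k consecutive positive predictions; k = 3 and the prediction sequence below are assumptions, not the dissertation's actual filter.

```python
def debounce(preds, k=3):
    """Suppress isolated positives: emit an alarm only after k consecutive 1s."""
    run, filtered = 0, []
    for p in preds:
        run = run + 1 if p else 0
        filtered.append(1 if run >= k else 0)
    return filtered

raw = [0, 1, 0, 1, 1, 0, 1, 1, 1, 1]   # isolated blips, then a sustained event
print(debounce(raw))                   # blips suppressed, sustained run kept
```

The trade-off is latency: each suppressed false positive costs k - 1 frames of delay before a true pre-fall step is confirmed, which is why the reported lead times (1.019 s and 2.009 s) already account for the filter.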