    Magnetic and radar sensing for multimodal remote health monitoring

    With increased life expectancy and the rise in health conditions related to aging, there is a need for new technologies that can routinely monitor vulnerable people and identify their daily pattern of activities and any anomalies or critical events such as falls. This paper aims to evaluate magnetic and radar sensors as suitable technologies for remote health monitoring, both individually and by fusing their information. After experiments collecting data from 20 volunteers, numerical features have been extracted in both the time and frequency domains. To validate the fusion method across different classifiers, a Support Vector Machine with a quadratic kernel and an Artificial Neural Network with one and multiple hidden layers have been implemented. Furthermore, for both classifiers, feature selection has been performed to obtain salient features. Using this technique along with fusion, both classifiers can detect 10 different activities with an accuracy rate of approximately 96%. In cases where the user is unknown to the classifier, an accuracy of approximately 92% is maintained.
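    The pipeline described above (feature-level fusion, salient-feature selection, quadratic-kernel SVM) can be sketched with scikit-learn. The dataset is not public, so the features below are synthetic stand-ins; the feature counts and class count are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 300, 3  # 3 activities here for brevity; the paper classifies 10

# Synthetic stand-ins for time/frequency-domain features from each sensor
y = rng.integers(0, n_classes, n)
magnetic = rng.normal(y[:, None], 1.0, (n, 8))     # magnetic-sensor features
radar = rng.normal(0.5 * y[:, None], 1.0, (n, 8))  # radar features
fused = np.hstack([magnetic, radar])               # feature-level fusion

# Feature selection to keep salient features, then a quadratic-kernel SVM
clf = make_pipeline(SelectKBest(f_classif, k=8),
                    SVC(kernel="poly", degree=2, coef0=1))
X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=0)
acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

    Wrapping selection and classification in one pipeline keeps the feature-selection step inside cross-validation, avoiding information leakage from the test set.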

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users’ acceptance and compliance compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.

    Multisensor Data Fusion for Human Activities Classification and Fall Detection

    Significant research exists on the use of wearable sensors in the context of assisted living for activity recognition and fall detection, whereas radar sensors have been studied only recently in this domain. This paper addresses the performance limitation of using individual sensors, especially for classification of similar activities, by implementing information fusion of features extracted from experimental data collected by different sensors, namely a tri-axial accelerometer, a micro-Doppler radar, and a depth camera. Preliminary results confirm that combining information from heterogeneous sensors improves the overall performance of the system. The classification accuracy attained by means of this fusion approach improves by 11.2% compared to radar-only use, and by 16.9% compared to the accelerometer. Furthermore, by adding features extracted from an RGB-D Kinect sensor, the overall classification accuracy increases up to 91.3%.
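    The benefit of fusing heterogeneous sensors can be seen in a toy example: below, synthetic "radar" features separate one class but confuse two others, while synthetic "accelerometer" features resolve the remaining ambiguity. This is an illustrative sketch with invented data, not the paper's experiment; the classifier (k-NN) is also an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 3, n)

# Radar features separate class 0 from {1, 2} but confuse 1 with 2;
# accelerometer features separate class 2 from the rest.
radar = rng.normal((y == 0)[:, None] * 3.0, 0.5, (n, 2))
accel = rng.normal((y == 2)[:, None] * 3.0, 0.5, (n, 2))

def accuracy(X):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    return KNeighborsClassifier().fit(X_tr, y_tr).score(X_te, y_te)

acc_radar = accuracy(radar)                       # limited by confusable classes
acc_fused = accuracy(np.hstack([radar, accel]))   # feature-level fusion
```

    On this construction the fused feature vector recovers all three classes, mirroring the accuracy gains the abstract reports for fusion over single sensors.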

    Micro-doppler-based in-home aided and unaided walking recognition with multiple radar and sonar systems

    Published in IET Radar, Sonar and Navigation, online first 21/06/2016. The potential for using micro-Doppler signatures as a basis for distinguishing between aided and unaided gaits is considered in this study, for the purpose of characterising normal elderly gait and assessing patient recovery. In particular, five different classes of mobility are considered: normal unaided walking, walking with a limp, walking using a cane or tripod, walking with a walker, and using a wheelchair. This presents a challenging classification problem, as the differences in micro-Doppler for these activities can be quite slight. Within this context, the performance of four different radar and sonar systems – a 40 kHz sonar, a 5.8 GHz wireless pulsed Doppler radar mote, a 10 GHz X-band continuous wave (CW) radar, and a 24 GHz CW radar – is evaluated using a broad range of features. Performance improvements using feature selection are addressed, as well as the impact on performance of sensor placement and potential occlusion due to household objects. Results show that nearly 80% correct classification can be achieved with 10 s observations from the 24 GHz CW radar, whereas 86% can be achieved with 5 s observations of sonar.

    Bi-LSTM network for multimodal continuous human activity recognition and fall detection

    This paper presents a framework based on a multi-layer bi-LSTM (bidirectional Long Short-Term Memory) network for multimodal sensor fusion, to sense and classify patterns of daily activities and high-risk events such as falls. The data collected in this work are continuous activity streams from an FMCW radar and three wearable inertial sensors on the wrist, waist, and ankle. Each activity has a variable duration in the data stream, so that transitions between activities can happen at random times within the stream, without resorting to conventional fixed-duration snapshots. The proposed bi-LSTM implements soft feature fusion between wearable sensor and radar data, as well as two robust hard-fusion methods using the confusion matrices of both sensors. A novel hybrid fusion scheme is then proposed to combine soft and hard fusion, pushing the classification performance to approximately 96% accuracy in identifying continuous activities and fall events. These fusion schemes implemented with the proposed bi-LSTM network are compared with the conventional sliding-window approach, and all are validated with a realistic “leaving one participant out” (L1PO) method (i.e. testing on subjects unknown to the classifier). The developed hybrid-fusion approach is capable of stabilizing the classification performance among different participants, reducing the accuracy variance by up to 18.1% and increasing the minimum, worst-case accuracy by up to 16.2%.
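    The hard-fusion idea (combining two sensor branches using their confusion matrices) can be illustrated without the bi-LSTM itself. The rule below is a sketch under stated assumptions, not necessarily the paper's exact scheme: when the two branches disagree, keep the prediction from the branch whose validation confusion matrix shows the higher precision for that predicted class.

```python
import numpy as np

def hard_fuse(pred_a, pred_b, cm_a, cm_b):
    """Illustrative confusion-matrix-based hard fusion (a sketch, not the
    paper's exact rule). cm_* are validation confusion matrices with
    rows = true class and columns = predicted class."""
    # Per-class precision estimated from each branch's confusion matrix
    prec_a = np.diag(cm_a) / np.maximum(cm_a.sum(axis=0), 1)
    prec_b = np.diag(cm_b) / np.maximum(cm_b.sum(axis=0), 1)
    # Keep the prediction whose branch is more precise for that class
    return np.where(prec_a[pred_a] >= prec_b[pred_b], pred_a, pred_b)

# Toy example: wearable branch is precise on class 0, radar branch on class 1
cm_wear = np.array([[8, 2], [1, 9]])   # precision per class: [8/9, 9/11]
cm_radar = np.array([[9, 1], [4, 6]])  # precision per class: [9/13, 6/7]
pred_wear = np.array([0, 1, 1])
pred_radar = np.array([1, 0, 1])
fused = hard_fuse(pred_wear, pred_radar, cm_wear, cm_radar)
```

    On the two disagreements the wearable branch wins (its precision for its predicted class is higher), while the agreement case passes through unchanged.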

    Personnel recognition and gait classification based on multistatic micro-doppler signatures using deep convolutional neural networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4 GHz multistatic radar system. Experimental results show that the Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% training data, and similar performance in two-class personnel recognition using 50% training data, both higher than the accuracy obtained by a DCNN on a single radar node.
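    The VMo-DCNN fusion step reduces to a majority vote over per-node predictions. A minimal sketch of that voting stage is below; the per-node DCNNs are replaced by given label arrays, since only the fusion rule is being illustrated.

```python
import numpy as np

def binary_vote(node_preds):
    """Majority vote over per-node class predictions, as in the VMo-DCNN
    fusion step (sketch: per-node DCNN outputs are assumed given).
    node_preds has shape (n_nodes, n_samples); ties go to the lower label."""
    node_preds = np.asarray(node_preds)
    n_classes = node_preds.max() + 1
    # Count votes per class for each sample (result: n_classes x n_samples)
    votes = np.apply_along_axis(np.bincount, 0, node_preds, None, n_classes)
    return votes.argmax(axis=0)

# Three radar receiver nodes voting on three samples
fused = binary_vote([[0, 1, 2],
                     [0, 1, 1],
                     [1, 1, 2]])
```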

    A multi-sensory approach for remote health monitoring of older people

    Growing life expectancy and the increasing incidence of multiple chronic health conditions are significant societal challenges. Different technologies have been proposed to address these issues, detect critical events such as stroke or falls, and automatically monitor human activities for health condition inference and anomaly detection. This paper aims to investigate two types of sensing technologies proposed for assisted living: wearable and radar sensors. First, different feature selection methods are validated and compared in terms of accuracy and computational load. Then, information fusion is applied to enhance activity classification accuracy by combining the two sensors. Improvements in classification accuracy of approximately 12% using feature-level fusion are achieved with both support vector machine (SVM) and k-nearest neighbour (KNN) classifiers. Decision-level fusion schemes are also investigated, yielding classification accuracy in the order of 97%-98%.
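    Decision-level fusion of SVM and KNN classifiers can be sketched as soft voting: each classifier emits class probabilities and the averaged probabilities decide. The data below are synthetic stand-ins, and scikit-learn's `VotingClassifier` is an assumed implementation choice, not necessarily the paper's fusion scheme.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 240
y = rng.integers(0, 3, n)
X = rng.normal(y[:, None], 0.8, (n, 6))   # synthetic fused sensor features

# Decision-level fusion: average the SVM's and KNN's class probabilities
fusion = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="soft")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
acc = fusion.fit(X_tr, y_tr).score(X_te, y_te)
```

    Soft voting requires `probability=True` on the SVC so that it exposes `predict_proba`; with `voting="hard"` the two classifiers would instead vote on labels directly.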

    Effect of sparsity-aware time–frequency analysis on dynamic hand gesture classification with radar micro-Doppler signatures

    Dynamic hand gesture recognition is of great importance in human-computer interaction. In this study, the authors investigate the effect of sparsity-driven time-frequency analysis on hand gesture classification. The time-frequency spectrogram is first obtained by sparsity-driven time-frequency analysis. Then three empirical micro-Doppler features are extracted from the time-frequency spectrogram, and a support vector machine is used to classify six kinds of dynamic hand gestures. The experimental results on measured data demonstrate that, compared to traditional time-frequency analysis techniques, sparsity-driven time-frequency analysis provides improved accuracy and robustness in dynamic hand gesture classification.
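    The feature-extraction stage can be sketched with a conventional STFT spectrogram as the baseline the paper compares against (the sparsity-driven representation itself is not reproduced here). The two per-frame features below, Doppler centroid and bandwidth, are common empirical micro-Doppler features and are illustrative choices, not necessarily the paper's three; the chirp signal is a toy stand-in for a gesture return.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                      # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (100 * t + 40 * t**2))  # toy chirp, ~100 -> 180 Hz

# Conventional STFT spectrogram (baseline; the paper uses a
# sparsity-driven time-frequency representation instead)
f, frames, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
P = S / S.sum(axis=0, keepdims=True)           # normalise each time frame

# Empirical micro-Doppler features per frame: centroid and bandwidth
centroid = (f[:, None] * P).sum(axis=0)
bandwidth = np.sqrt(((f[:, None] - centroid) ** 2 * P).sum(axis=0))
features = np.array([centroid.mean(), bandwidth.mean()])
```

    Summary statistics of such per-frame curves (means, slopes, extrema) form the fixed-length feature vector that a classifier such as an SVM consumes.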

    Radar Human Motion Recognition Using Motion States and Two-Way Classifications

    We perform classification of activities of daily living (ADL) using a Frequency-Modulated Continuous Waveform (FMCW) radar. In particular, we consider contiguous motions that are inseparable in time. Both the micro-Doppler signature and the range map are used to determine transitions from translation (walking) to in-place motions and vice versa, as well as to provide the motion onset and offset times. The possible classes of activities following and preceding the translation motion can be separately handled by forward and backward classifiers. The paper describes ADL in terms of states and transitioning actions, and sets out a framework to deal with separable and inseparable contiguous motions. It is shown that considering only the physically possible classes of motions stemming from the current motion state improves classification rates compared to incorporating all ADL at any given time.
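    The state-constrained idea (only physically possible activities are considered given the current motion state) can be sketched as masking a classifier's class probabilities with a feasibility table. The states, activities, and feasibility entries below are invented for illustration; the paper defines its own state/transition model.

```python
import numpy as np

ACTIVITIES = ["walk", "sit_down", "stand_up", "fall", "lie_still"]
STATES = ["walking", "sitting", "lying"]

# Assumed feasibility table (illustrative, not the paper's):
# feasible[i, j] = 1 if activity j can follow motion state i
feasible = np.array([
    [1, 1, 0, 1, 0],   # from walking: keep walking, sit down, or fall
    [0, 0, 1, 0, 0],   # from sitting: only stand up
    [0, 0, 1, 0, 1],   # from lying: stand up or remain lying
])

def constrained_classify(probs, state):
    """Zero out physically impossible classes, then take the argmax."""
    masked = probs * feasible[state]
    return int(masked.argmax())

# Classifier output that naively favours "stand_up"
probs = np.array([0.05, 0.1, 0.6, 0.2, 0.05])
from_walking = constrained_classify(probs, 0)  # "stand_up" infeasible -> "fall"
from_sitting = constrained_classify(probs, 1)  # "stand_up" allowed
```

    Restricting the decision to feasible successors is what the abstract credits for the improved classification rates over an unconstrained classifier.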

    Activity recognition with cooperative radar systems at C and K band

    Remote health monitoring is a key component of the future of healthcare, with predictive monitoring and fall-risk estimation applications urgently needed. Radar, through exploitation of the micro-Doppler effect, is able to generate signatures that can be classified automatically. In this work, features from two different radar systems, operating at C band and K band, have been used together cooperatively to classify ten indoor human activities, with data from 20 subjects and a support vector machine classifier. Feature selection has been applied to remove redundancies and find a set of salient features for the radar systems, individually and in the fused scenario. Using the aforementioned methods, we show improvements in classification accuracy from 75% and 70% for the radar systems individually, up to 89% when fused.
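    Removing redundancies from a fused C-band/K-band feature set can be sketched with greedy sequential feature selection wrapped around an SVM. The feature construction below (informative features plus near-duplicate and noise columns) is synthetic, and scikit-learn's `SequentialFeatureSelector` is an assumed stand-in for whichever selection method the paper uses.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 200
y = rng.integers(0, 2, n)

# Fused feature set: informative C- and K-band features plus redundant
# near-copies and pure noise, mimicking redundancy across the two radars
c_band = rng.normal(y[:, None], 1.0, (n, 2))
k_band = rng.normal(y[:, None], 1.0, (n, 2))
redundant = c_band + rng.normal(0, 0.05, (n, 2))  # near-duplicates of C band
noise = rng.normal(0, 1.0, (n, 4))
X = np.hstack([c_band, k_band, redundant, noise])

# Greedy forward selection of 4 salient features, scored by SVM accuracy
sfs = SequentialFeatureSelector(SVC(), n_features_to_select=4, cv=3)
selected = sfs.fit(X, y).get_support()
```

    The boolean mask `selected` marks the retained columns; downstream classifiers then train on `X[:, selected]` only.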