67 research outputs found

    Cross-Frequency Classification of Indoor Activities with DNN Transfer Learning

    Remote, non-contact recognition of human motion and activities is central to health monitoring in assisted living facilities, but current systems face the problems of training compatibility, minimal training datasets, and a lack of interoperability between radar sensors operating at different frequencies. This paper presents a first work considering the efficacy of deep neural networks (DNNs) and transfer learning to bridge the gap in phenomenology that results when multiple types of radars simultaneously observe human activity. Six different human activities are recorded indoors simultaneously with 5.8 GHz and 25 GHz radars. First, the bottleneck features of the DNNs achieve a baseline accuracy of 76%. When models trained only on 25 GHz data are tested with 5.8 GHz data, 81% accuracy is achieved. In the absence of a large dataset for a radar at a given frequency, we demonstrate that data from a radar at a different frequency is better suited for generating the classification models than optical images, and that, by using time-velocity diagrams (TVDs), a degree of interoperability can be achieved.
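
    A minimal sketch of the cross-frequency transfer-learning idea above, assuming the time-velocity diagrams are available as image arrays; the arrays tvd_25ghz, tvd_5p8ghz and their labels are hypothetical placeholders, a pretrained CNN serves only as a frozen bottleneck-feature extractor, and an SVM is trained on the 25 GHz features and tested on the 5.8 GHz ones. This illustrates the general pattern rather than the exact network used in the paper.

        import numpy as np
        from tensorflow.keras.applications import VGG16
        from tensorflow.keras.applications.vgg16 import preprocess_input
        from sklearn.svm import SVC
        from sklearn.metrics import accuracy_score

        # Hypothetical placeholders: time-velocity diagrams rendered as 224x224 RGB arrays.
        tvd_25ghz  = np.random.rand(120, 224, 224, 3).astype("float32")   # training radar
        y_25ghz    = np.random.randint(0, 6, 120)                         # six activity classes
        tvd_5p8ghz = np.random.rand(40, 224, 224, 3).astype("float32")    # test radar
        y_5p8ghz   = np.random.randint(0, 6, 40)

        # Pretrained CNN used as a frozen bottleneck-feature extractor.
        backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                         input_shape=(224, 224, 3))
        feat_train = backbone.predict(preprocess_input(tvd_25ghz * 255.0), verbose=0)
        feat_test  = backbone.predict(preprocess_input(tvd_5p8ghz * 255.0), verbose=0)

        # Classifier fitted only on 25 GHz bottleneck features, evaluated on 5.8 GHz data.
        clf = SVC(kernel="rbf", C=10.0)
        clf.fit(feat_train, y_25ghz)
        print("cross-frequency accuracy:", accuracy_score(y_5p8ghz, clf.predict(feat_test)))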

    Advanced Human Activity Recognition through Data Augmentation and Feature Concatenation of Micro-Doppler Signatures

    Developing accurate classification models for radar-based Human Activity Recognition (HAR), capable of solving real-world problems, depends heavily on the amount of available data. In this paper, we propose a simple, effective, and generalizable data augmentation strategy along with preprocessing for micro-Doppler signatures to enhance recognition performance. By leveraging the decomposition properties of the Discrete Wavelet Transform (DWT), new samples are generated with distinct characteristics that do not overlap with those of the original samples. The micro-Doppler signatures are projected onto the DWT space for the decomposition process using the Haar wavelet. The returned decomposition components are used in different configurations to generate new data. Three new samples are obtained from a single spectrogram, which increases the amount of training data without creating duplicates. Next, the augmented samples are processed using the Sobel filter. This step allows each sample to be expanded into three representations, including the gradient in the x-direction (Dx), y-direction (Dy), and both x- and y-directions (Dxy). These representations are used as input for training a three-input convolutional neural network-long short-term memory support vector machine (CNN-LSTM-SVM) model. We have assessed the feasibility of our solution by evaluating it on three datasets containing micro-Doppler signatures of human activities, including Frequency Modulated Continuous Wave (FMCW) 77 GHz, FMCW 24 GHz, and Impulse Radio Ultra-Wide Band (IR-UWB) 10 GHz datasets. Several experiments have been carried out to evaluate the model's performance with the inclusion of additional samples. The model was trained from scratch only on the augmented samples and tested on the original samples. Our augmentation approach has been thoroughly evaluated using various metrics, including accuracy, precision, recall, and F1-score. The results demonstrate a substantial improvement in the recognition rate and effectively alleviate the overfitting effect. Accuracies of 96.47%, 94.27%, and 98.18% are obtained for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively. The findings of the study demonstrate the utility of DWT to enrich micro-Doppler training samples to improve HAR performance. Furthermore, the processing step was found to be efficient in enhancing the classification accuracy, achieving 96.78%, 96.32%, and 100% for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively.
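
    A minimal sketch of the augmentation and preprocessing pipeline above, assuming a single micro-Doppler spectrogram is available as a 2-D array (the variable spectrogram and the chosen component combinations are illustrative); a one-level Haar DWT supplies decomposition components that are recombined into additional samples, and each sample is then expanded into its Sobel gradients Dx, Dy and Dxy. The exact recombination used in the paper may differ.

        import numpy as np
        import pywt
        from scipy.ndimage import sobel, zoom

        # Hypothetical placeholder: one micro-Doppler spectrogram (Doppler bins x time bins).
        spectrogram = np.random.rand(256, 256)

        # One-level 2-D Haar DWT: approximation plus horizontal/vertical/diagonal details.
        cA, (cH, cV, cD) = pywt.dwt2(spectrogram, "haar")

        # Three new samples assembled from the decomposition components (one possible
        # configuration), upsampled back to the original size for training.
        upsample = lambda comp: zoom(comp, 2, order=1)
        augmented = [upsample(cA), upsample(cA + cH + cV), upsample(cA + cD)]

        def sobel_views(sample):
            # Expand one sample into its Dx, Dy and combined Dxy gradient representations.
            dx = sobel(sample, axis=1)
            dy = sobel(sample, axis=0)
            return dx, dy, np.hypot(dx, dy)

        # Each original or augmented sample becomes a three-representation input for the
        # three-input CNN-LSTM-SVM classifier described in the abstract.
        inputs = [np.stack(sobel_views(s), axis=-1) for s in [spectrogram] + augmented]
        print([x.shape for x in inputs])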

    Interferometric Radar for Activity Recognition and Benchmarking in Different Radar Geometries

    Radar micro-Doppler signatures have been proposed for human activity classification for surveillance and ambient assisted living in healthcare-related applications. A known issue is the performance reduction when the target is moving tangentially to the line-of-sight of the radar. Multiple techniques have been proposed to address this, such as multistatic radar and, to some extent, interferometric radar. A simulator is presented to generate synthetic data representative of eight different radar systems (including monostatic, multistatic, and interferometric configurations) to quantify classification performance as a function of aspect angles and deployment geometries. This simulator allows an unbiased performance evaluation of the different radar systems. Six human activities are considered, with signatures originating from motion-capture data of 14 different subjects. The results show that interferometric radar data with fusion outperforms the other methods, with over 97.6% accuracy consistently across all aspect angles, as well as the potential for simplified indoor deployment.
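
    A minimal numerical sketch of the aspect-angle problem motivating the comparison above, assuming a single point scatterer moving at constant speed (carrier frequency and speed are illustrative); the observed monostatic Doppler shift collapses as the motion becomes tangential to the radar line of sight, which is the degradation that multistatic and interferometric geometries aim to mitigate.

        import numpy as np

        c  = 3e8     # speed of light (m/s)
        fc = 5.8e9   # illustrative carrier frequency (Hz)
        v  = 1.5     # illustrative walking speed of the point scatterer (m/s)

        # Aspect angle between the velocity vector and the radar line of sight.
        for angle_deg in (0, 30, 60, 90):
            v_radial = v * np.cos(np.deg2rad(angle_deg))   # radial velocity component
            doppler = 2 * v_radial * fc / c                # monostatic Doppler shift (Hz)
            print(f"aspect {angle_deg:2d} deg -> Doppler {doppler:6.1f} Hz")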

    The Human Activity Radar Challenge: benchmarking based on the ‘Radar signatures of human activities’ dataset from Glasgow University

    Radar is an extremely valuable sensing technology for detecting moving targets and measuring their range, velocity, and angular positions. When people are monitored at home, radar is more likely to be accepted by end-users, as they already use WiFi; it is perceived as privacy-preserving compared to cameras, and it does not require user compliance as wearable sensors do. Furthermore, it is not affected by lighting conditions and does not require artificial lights that could cause discomfort in the home environment. Radar-based human activity classification in the context of assisted living can therefore empower an aging society to live at home independently for longer. However, challenges remain as to the formulation of the most effective algorithms for radar-based human activity classification and their validation. To promote the exploration and cross-evaluation of different algorithms, our dataset released in 2019 was used to benchmark various classification approaches. The challenge was open from February 2020 to December 2020. A total of 23 organizations worldwide, forming 12 teams from academia and industry, participated in the inaugural Radar Challenge and submitted 188 valid entries. This paper presents an overview and evaluation of the approaches used in all primary contributions to this inaugural challenge. The proposed algorithms are summarized, and the main parameters affecting their performance are analyzed.

    Radar based discrete and continuous activity recognition for assisted living

    In an era of digital transformation, there is an appetite for automating the monitoring of motions and actions by individuals in a society that is, on average, getting older. 'Activity recognition' is where sensors capture motion information from participants who are wearing a sensor or are in the field of view of a remote sensor, which, coupled with machine learning algorithms, can automatically identify the movement or action the person is undertaking. Radar is a nascent sensor for this application, having been proposed in the literature as an effective privacy-compliant sensor that can track movements of the body effectively. The methods of recording movements are separated into two types: 'discrete' movements provide an overview of a single activity within a fixed interval of time, while 'continuous' activities present sequences of activities performed in series with variable durations and uncertain transitions, making them a challenging and yet much more realistic classification problem. In this thesis, an overview of the technology of continuous wave (CW) and frequency modulated continuous wave (FMCW) radars and of the machine learning algorithms and classification concepts is first provided. Following this, the state of the art for activity recognition with radar is presented, and the key papers and significant works are discussed. The remaining chapters of this thesis discuss the research topics where contributions were made. This commences with analysing the effect of the physiology of the subject under test, showing that age can have an effect on the radar readings of the target. This is followed by porting existing radar recognition technologies and presenting a novel use of radar-based gait recognition to detect lameness in animals. Returning to the human-centric application, improvements to activity recognition on humans and its accuracy were demonstrated by utilising features from different domains with feature selection and by using different sensing technologies cooperatively. Finally, using a bidirectional long short-term memory (Bi-LSTM) network, improved recognition of continuous activities and activity transitions without human-dependent feature extraction was demonstrated. An accuracy rate of 97% was achieved through sensor fusion and feature selection for discrete activities, and for continuous activities the Bi-LSTM achieved 92% accuracy with a sole radar sensor.
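
    A minimal sketch of the Bi-LSTM idea used for continuous activity recognition, assuming each recording is a sequence of per-time-bin Doppler feature vectors (the dimensions and placeholder arrays below are illustrative); the network emits a label per time step so that activity transitions are classified without hand-crafted features. The thesis architecture and training details are not reproduced here.

        import numpy as np
        from tensorflow.keras import layers, models

        n_timesteps, n_features, n_classes = 400, 128, 6   # illustrative dimensions

        # Bidirectional LSTM producing one activity label per time step, so that
        # transitions within a continuous recording are classified as well.
        model = models.Sequential([
            layers.Input(shape=(n_timesteps, n_features)),
            layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
            layers.TimeDistributed(layers.Dense(n_classes, activation="softmax")),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Hypothetical placeholders: radar feature sequences and per-time-step labels.
        x = np.random.rand(32, n_timesteps, n_features).astype("float32")
        y = np.random.randint(0, n_classes, (32, n_timesteps))
        model.fit(x, y, epochs=1, batch_size=8, verbose=0)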

    Sequential human gait classification with distributed radar sensor fusion

    This paper presents different information fusion approaches to classify human gait patterns and falls in a radar sensor network. The human gaits classified in this work are both individual and sequential, continuous gaits, collected by an FMCW radar and three UWB pulse radars placed at different spatial locations. Sequential gaits are those containing multiple gait styles performed one after the other, with natural transitions in between, including, in some cases, fall events developing from walking gait. The proposed information fusion approaches operate at signal level and decision level. For the signal-level combination, a simple trilateration algorithm is implemented on the range data from the three UWB radar sensors, achieving good classification results with the proposed Bi-LSTM (bidirectional LSTM neural network) as the classifier, without exploiting conventional micro-Doppler information. For the decision-level fusion, the classification results of the individual radars using the Bi-LSTM network are combined with a robust Naive Bayes Combiner (NBC), which showed a subsequent improvement compared to the single-radar case thanks to multi-perspective views of the subjects. Compared to conventional SVM and Random Forest classifiers, the proposed approach yields +20% and +17% improvement in the classification accuracy of individual gaits for the range-only trilateration method and the NBC decision fusion method, respectively. When classifying sequential gaits, the overall accuracy for the two proposed methods reaches 93% and 90%, with validation via a 'leaving one participant out' approach to test robustness with subjects unknown to the network.
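
    Two minimal sketches of the fusion steps above, with all positions, ranges, confusion matrices and decisions as illustrative placeholders: a least-squares trilateration of the target location from the three UWB range measurements (the signal-level combination), and a Naive Bayes Combiner that fuses per-radar class decisions using each radar's confusion matrix (the decision-level fusion).

        import numpy as np

        # --- Signal-level fusion: trilateration from three UWB range measurements ---
        anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # illustrative radar positions (m)
        ranges  = np.array([3.2, 3.2, 2.5])                        # illustrative measured ranges (m)

        # Linearised least squares: subtract the first range equation from the others.
        A = 2 * (anchors[1:] - anchors[0])
        b = (ranges[0] ** 2 - ranges[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
        target_xy, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("estimated target position:", target_xy)

        # --- Decision-level fusion: Naive Bayes Combiner over per-radar decisions ---
        n_classes = 6
        # Illustrative per-radar confusion matrices from validation data
        # (rows: true class, columns: predicted class).
        confusions = [np.full((n_classes, n_classes), 1.0) + 20 * np.eye(n_classes)
                      for _ in range(4)]
        decisions = [2, 2, 3, 2]            # class predicted by each of the four radars

        log_post = np.zeros(n_classes)      # uniform prior over the classes
        for cm, d in zip(confusions, decisions):
            likelihood = cm[:, d] / cm.sum(axis=1)   # P(radar predicts d | true class)
            log_post += np.log(likelihood)
        print("fused class:", int(np.argmax(log_post)))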