
    Feature diversity for optimized human micro-Doppler classification using multistatic radar

    This paper investigates the selection of different combinations of features at different multistatic radar nodes, depending on scenario parameters, such as aspect angle to the target and signal-to-noise ratio, and radar parameters, such as dwell time, polarisation, and frequency band. Two sets of experimental data collected with the multistatic radar system NetRAD are analysed for two separate problems, namely the classification of multiple personnel as unarmed vs potentially armed, and the recognition of individual personnel based on walking gait. The results show that the overall classification accuracy can be significantly improved by taking into account feature diversity at each radar node depending on the environmental parameters and target behaviour, in comparison with the conventional approach of selecting the same features for all nodes.
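The core idea above, picking a different feature subset at each radar node rather than one common subset, can be sketched with a simple per-node ranking. The Fisher ratio used here as the separability score, the node names, and the toy data are illustrative assumptions, not the paper's actual selection criterion:

```python
import numpy as np

def fisher_ratio(x0, x1):
    """Class-separability score for one feature: (mean gap)^2 / summed variance."""
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

def select_per_node(features, labels, k=2):
    """Rank features independently at each radar node and keep the top k.

    features: dict node -> (n_samples, n_features) array
    labels:   (n_samples,) binary array
    Returns dict node -> indices of the chosen features for that node.
    """
    chosen = {}
    for node, X in features.items():
        scores = [fisher_ratio(X[labels == 0, j], X[labels == 1, j])
                  for j in range(X.shape[1])]
        chosen[node] = np.argsort(scores)[::-1][:k]
    return chosen

rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)
# Toy data: feature 0 separates the classes at node A, feature 2 at node B,
# mimicking how aspect angle can make different features useful per node.
A = rng.normal(0.0, 1.0, (100, 3)); A[50:, 0] += 5.0
B = rng.normal(0.0, 1.0, (100, 3)); B[50:, 2] += 5.0
picks = select_per_node({"A": A, "B": B}, labels, k=1)
print(picks["A"][0], picks["B"][0])  # the best feature differs per node
```

The point of the sketch is only that the argsort runs per node, so each node ends up with its own feature list.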

    Personnel recognition and gait classification based on multistatic micro-Doppler signatures using deep convolutional neural networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that the Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% training data and similar performance in two-class personnel recognition using 50% training data, accuracies higher than those obtained by a DCNN on a single radar node.
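The VMo-DCNN fusion step described above, combining per-node network decisions by binary voting, reduces to a majority vote over the receiver nodes. A minimal sketch; the tie-breaking rule (fall back to the first node) is an assumption, as the letter does not specify it:

```python
from collections import Counter

def vote(per_node_predictions):
    """Fuse per-node classifier decisions by majority voting.

    per_node_predictions: list of class labels, one per receiver node.
    Ties fall back to the first-listed node's decision (an assumed rule).
    """
    counts = Counter(per_node_predictions)
    best = max(counts.values())
    winners = [c for c in per_node_predictions if counts[c] == best]
    return winners[0]

# Three receiver nodes classifying one walking sequence:
print(vote(["armed", "unarmed", "armed"]))   # majority wins: armed
print(vote(["armed", "unarmed"]))            # tie -> first node: armed
```

The contrast with Mul-DCNN is that this vote happens after each network has already decided, whereas Mul-DCNN moves the fusion inside the network so it is learned jointly.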

    Bistatic Human micro-Doppler Signatures for Classification of Indoor Activities

    This paper presents the analysis of human micro-Doppler signatures collected by a bistatic radar system to classify different indoor activities. Tools for automatic classification of different activities will enable the implementation and deployment of systems for monitoring life patterns of people and identifying fall events or anomalies that may be related to early signs of deteriorating physical health or cognitive capabilities. The preliminary results presented here show that the information within the micro-Doppler signatures can be successfully exploited for automatic classification, with accuracy up to 98%, and that the multi-perspective view of the target provided by bistatic data can contribute to enhancing the overall system performance.

    Practical classification of different moving targets using automotive radar and deep neural networks

    In this work, the authors present results for the classification of different classes of targets (car, single and multiple people, bicycle) using automotive radar data and different neural networks. A fast implementation of radar algorithms for detection, tracking, and micro-Doppler extraction is proposed in conjunction with the automotive radar transceiver TEF810X and microcontroller unit SR32R274 manufactured by NXP Semiconductors. Three different types of neural networks are considered, namely a classic convolutional network, a residual network, and a combination of convolutional and recurrent networks, for different classification problems across the four classes of targets recorded. Considerable accuracy (close to 100% in some cases) and low latency of the radar pre-processing prior to classification (∼0.55 s to produce a 0.5 s long spectrogram) are demonstrated in this study, and possible shortcomings and outstanding issues are discussed.
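The pre-processing latency figure above refers to producing a Doppler-time spectrogram from a short slow-time observation. A minimal sketch of that step via a short-time Fourier transform; the PRF, window length, hop size, and toy two-component target are all assumed values, not the paper's processing parameters:

```python
import numpy as np

def spectrogram(iq, win=128, hop=32):
    """Doppler-time spectrogram of a slow-time I/Q sequence: Hann-windowed
    STFT, FFT-shifted so zero Doppler sits in the middle column."""
    w = np.hanning(win)
    frames = [iq[i:i + win] * w for i in range(0, len(iq) - win + 1, hop)]
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)) ** 2

prf = 2000                              # pulses per second (assumed)
t = np.arange(int(0.5 * prf)) / prf     # a 0.5 s long observation
# Toy target: a constant body return plus a sinusoidally phase-modulated
# limb return, mimicking a micro-Doppler signature.
iq = np.exp(2j * np.pi * 100 * t) \
   + 0.5 * np.exp(2j * np.pi * 60 * np.sin(2 * np.pi * 2 * t))
S = spectrogram(iq)
print(S.shape)   # (time frames, Doppler bins)
```

The resulting matrix (time frames × Doppler bins) is exactly the kind of image the convolutional networks above take as input.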

    Radar for Assisted Living in the Context of Internet of Things for Health and Beyond

    This paper discusses the place of radar for assisted living in the context of IoT for Health and beyond. First, the context of assisted living and the urgency of addressing the problem is described. The second part gives a literature review of existing sensing modalities for assisted living and explains why radar is an upcoming preferred modality to address this issue. The third section presents developments in machine learning that help improve classification performance, especially deep learning, with a reflection on lessons learned from it. The fourth section introduces recently published work from our research group in the area that shows promise with multimodal sensor fusion for classification and long short-term memory applied to early stages of the radar signal processing chain. Finally, we conclude with open challenges still to be addressed in the area and future research directions, including animal welfare.

    Magnetic and radar sensing for multimodal remote health monitoring

    With increased life expectancy and the rise in health conditions related to aging, there is a need for new technologies that can routinely monitor vulnerable people, identify their daily pattern of activities and any anomaly or critical events such as falls. This paper aims to evaluate magnetic and radar sensors as suitable technologies for remote health monitoring purposes, both individually and by fusing their information. After experiments and collecting data from 20 volunteers, numerical features have been extracted in both the time and frequency domains. To analyse and validate the fusion method for different classifiers, a Support Vector Machine with a quadratic kernel and an Artificial Neural Network with one and multiple hidden layers have been implemented. Furthermore, for both classifiers, feature selection has been performed to obtain salient features. Using this technique along with fusion, both classifiers can detect 10 different activities with an accuracy rate of approximately 96%. In cases where the user is unknown to the classifier, an accuracy of approximately 92% is maintained.
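Two pieces of the pipeline above are easy to make concrete: feature-level fusion of the two modalities (concatenation of per-sample feature vectors) and the quadratic kernel behind the SVM. A minimal sketch; the offset parameter `c` and the toy feature values are assumptions:

```python
import numpy as np

def fuse(magnetic_feats, radar_feats):
    """Feature-level fusion: concatenate each sample's magnetic-sensor and
    radar feature vectors into one combined vector."""
    return np.hstack([magnetic_feats, radar_feats])

def quadratic_kernel(X, Y, c=1.0):
    """Gram matrix of the quadratic kernel k(x, y) = (x . y + c)^2, i.e.
    the kernel an SVM with a quadratic (degree-2 polynomial) kernel uses."""
    return (X @ Y.T + c) ** 2

mag = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy magnetic features, 2 samples
rad = np.array([[2.0], [3.0]])             # toy radar feature, same samples
X = fuse(mag, rad)
K = quadratic_kernel(X, X)
print(K)   # square, symmetric Gram matrix over the fused features
```

An SVM trained on `K` then operates on the fused representation, which is what lets the paper compare single-modality against fused performance with the same classifier.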

    DopNet: A Deep Convolutional Neural Network to Recognize Armed and Unarmed Human Targets

    The work presented in this paper aims to distinguish between armed and unarmed personnel using multi-static radar data and advanced Doppler processing. We propose two modified Deep Convolutional Neural Networks (DCNNs), termed SC-DopNet and MC-DopNet, for mono-static and multi-static micro-Doppler signature (μ-DS) classification. Differentiating armed and unarmed walking personnel is challenging due to the effects of aspect angle and channel diversity in real-world scenarios. In addition, a DCNN easily overfits the relatively small-scale μ-DS dataset. To address these problems, the work carried out in this paper makes three key contributions: first, two effective schemes, a data augmentation operation and a regularization term, are proposed to train SC-DopNet from scratch. Next, a factor analysis of SC-DopNet is conducted based on various operating parameters in both the processing and radar operations. Thirdly, to address the problem of aspect angle diversity in μ-DS classification, we design MC-DopNet for multi-static μ-DS, which embeds two new fusion schemes termed Greedy Importance Reweighting (GIR) and the ℓ2,1-norm. These two schemes are based on two different strategies and have been evaluated experimentally: GIR uses a "win by sacrificing the worst case" approach, whilst the ℓ2,1-norm adopts a "win by sacrificing the best case" approach. SC-DopNet outperforms the non-deep methods by 12.5% on average, and the proposed MC-DopNet with its two fusion methods outperforms conventional binary voting by 1.2% on average. We also discuss how the statistics of SC-DopNet results can inform the selection of fusion strategies for MC-DopNet under different experimental scenarios.
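The ℓ2,1-norm named above is a standard quantity: the sum of the L2 norms of a matrix's rows, which as a regulariser encourages whole rows to go to zero. A minimal sketch of the norm itself; tying each row to one radar channel is an illustrative assumption about how it would act in a multi-static fusion layer:

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm of a matrix: sum over rows of each row's L2 norm.
    Penalising this drives entire rows toward zero (row sparsity)."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

W = np.array([[3.0, 4.0],    # row norm 5
              [0.0, 0.0],    # a row (channel, in this illustration) fully off
              [0.0, 2.0]])   # row norm 2
print(l21_norm(W))   # 5 + 0 + 2 = 7.0
```

Contrast with an elementwise L1 penalty, which zeroes individual entries rather than whole rows; the row-level behaviour is what makes the ℓ2,1-norm natural for selecting among channels.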

    Radar signal processing for sensing in assisted living: the challenges associated with real-time implementation of emerging algorithms

    This article covers radar signal processing for sensing in the context of assisted living (AL). This is presented through three example applications: human activity recognition (HAR) for activities of daily living (ADL), respiratory disorders, and sleep stages (SSs) classification. The common challenge of classification is discussed within a framework of measurements/preprocessing, feature extraction, and classification algorithms for supervised learning. Then, the specific challenges of the three applications are detailed from a signal processing standpoint, in terms of their specific data processing and ad hoc classification strategies. Here, the focus is on recent trends in the field of activity recognition (multidomain, multimodal, and fusion), health-care applications based on vital signs (superresolution techniques), and comments related to outstanding challenges. Finally, this article explores challenges associated with the real-time implementation of signal processing/classification algorithms.

    Activity Classification Using Raw Range and I & Q Radar Data with Long Short-Term Memory Layers

    This paper presents initial results of using raw radar I & Q data and range profiles, combined with Long Short-Term Memory (LSTM) layers, to classify human activities. Although tested only on simple classification problems, this is an innovative approach that bypasses the conventional use of Doppler-time patterns (spectrograms) as inputs to the LSTM layers, adopting instead sequences of range profiles or even raw complex data as inputs. A maximum accuracy of 99.56% and a mean accuracy of 97.67% were achieved by treating the radar data as these time sequences, in an effective deep learning scheme that did not require pre-processing the radar data to generate spectrograms and treat them as images. The prediction time needed for a given input testing sample is also reported, showing a promising path for real-time implementation once the LSTM network is properly trained.
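The key move above is feeding a time sequence of range profiles straight into a recurrent layer instead of an image. A from-scratch numpy sketch of a single LSTM layer consuming such a sequence and emitting the final hidden state a classifier head would see; the dimensions, random weights, and gate layout are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

def lstm_forward(X, Wx, Wh, b):
    """Run one LSTM layer over a sequence and return the final hidden state.

    X: (T, d) sequence of range profiles; Wx: (4H, d), Wh: (4H, H),
    b: (4H,) with gates stacked in (input, forget, cell, output) order.
    """
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in X:                       # one step per range profile
        z = Wx @ x + Wh @ h + b
        i = sig(z[:H])                # input gate
        f = sig(z[H:2 * H])           # forget gate
        g = np.tanh(z[2 * H:3 * H])   # candidate cell update
        o = sig(z[3 * H:])            # output gate
        c = f * c + i * g             # update the cell memory
        h = o * np.tanh(c)            # emit the new hidden state
    return h

rng = np.random.default_rng(0)
T, d, H = 20, 8, 4                    # 20 range profiles of 8 bins (toy sizes)
X = rng.normal(size=(T, d))           # stand-in for a measured range-time map
h = lstm_forward(X,
                 rng.normal(size=(4 * H, d)) * 0.1,
                 rng.normal(size=(4 * H, H)) * 0.1,
                 np.zeros(4 * H))
print(h.shape)   # (H,) summary vector fed to a dense classification layer
```

No spectrogram is ever formed: the recurrence itself accumulates the temporal structure that the Doppler-time image would otherwise make explicit.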