
    Practical classification of different moving targets using automotive radar and deep neural networks

    In this work, the authors present results for the classification of different classes of targets (car, single and multiple people, bicycle) using automotive radar data and different neural networks. A fast implementation of radar algorithms for detection, tracking, and micro-Doppler extraction is proposed in conjunction with the automotive radar transceiver TEF810X and the microcontroller unit S32R274 manufactured by NXP Semiconductors. Three different types of neural networks are considered, namely a classic convolutional network, a residual network, and a combination of convolutional and recurrent networks, for different classification problems across the four classes of targets recorded. Considerable accuracy (close to 100% in some cases) and low latency of the radar pre-processing prior to classification (∼0.55 s to produce a 0.5 s long spectrogram) are demonstrated in this study, and possible shortcomings and outstanding issues are discussed.
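    To make the micro-Doppler extraction step above concrete, the sketch below forms a Doppler-time spectrogram from a slow-time radar signal with a short-time Fourier transform. This is not the authors' implementation: the signal is synthetic, and the PRF, window length, and overlap are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of micro-Doppler spectrogram
# formation: a short-time Fourier transform over a slow-time radar signal.
import numpy as np
from scipy.signal import stft

prf = 1000.0                       # pulse repetition frequency in Hz (assumed)
t = np.arange(0, 0.5, 1.0 / prf)   # 0.5 s of slow time, matching the spectrogram length quoted above

# Synthetic target return: a body Doppler line plus a sinusoidal micro-Doppler
# modulation standing in for limb or wheel motion.
body_doppler = 60.0                # Hz
micro_amp, micro_rate = 40.0, 2.0  # Hz deviation, Hz cadence
phase = 2 * np.pi * (body_doppler * t
                     + (micro_amp / (2 * np.pi * micro_rate)) * np.sin(2 * np.pi * micro_rate * t))
signal = np.exp(1j * phase) + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# STFT over slow time gives the Doppler-time spectrogram that the networks consume.
f, frames, Z = stft(signal, fs=prf, nperseg=128, noverlap=120, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
print(spectrogram_db.shape)        # (Doppler bins, time frames)
```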

    Personnel recognition and gait classification based on multistatic micro-Doppler signatures using deep convolutional neural networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that the Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% of the data for training, and similar performance in two-class personnel recognition using 50% of the data for training, both higher than the accuracies obtained by running a DCNN on a single radar node.
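    As an illustration of the VMo-DCNN fusion step described above, the sketch below performs binary (majority) voting over per-node class labels. The per-node predictions and the helper function are hypothetical stand-ins, not the authors' code.

```python
# Illustrative sketch (assumed, not the authors' code) of the voting step in
# VMo-DCNN: each receiver node's DCNN produces a class label independently and
# the final label is the majority vote across nodes.
import numpy as np

def majority_vote(node_predictions: np.ndarray) -> np.ndarray:
    """node_predictions: (num_nodes, num_samples) integer class labels."""
    num_classes = node_predictions.max() + 1
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=num_classes), 0, node_predictions)
    return votes.argmax(axis=0)    # (num_samples,) fused labels

# Three radar nodes classifying four test samples (armed = 1, unarmed = 0).
preds = np.array([[1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 0]])
print(majority_vote(preds))        # -> [1 0 1 0]
```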

    Micro-Doppler Based Human-Robot Classification Using Ensemble and Deep Learning Approaches

    Radar sensors can be used for analyzing the induced frequency shifts due to micro-motions in both the range and velocity dimensions, identified as micro-Doppler (μ-D) and micro-Range (μ-R), respectively. Different moving targets will have unique μ-D and μ-R signatures that can be used for target classification. Such classification can be used in numerous fields, such as gait recognition, safety, and surveillance. In this paper, a 25 GHz FMCW Single-Input Single-Output (SISO) radar is used in industrial safety for real-time human-robot identification. Due to the real-time constraint, joint Range-Doppler (R-D) maps are directly analyzed for our classification problem. Furthermore, a comparison between conventional learning approaches with handcrafted extracted features, ensemble classifiers, and deep learning approaches is presented. For the ensemble classifiers, restructured range and velocity profiles are passed directly to ensemble trees, such as gradient boosting and random forest, without feature extraction. Finally, a Deep Convolutional Neural Network (DCNN) is used and raw R-D images are directly fed into the constructed network. The DCNN shows a superior performance of 99% accuracy in identifying humans from robots on a single R-D map.
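    The ensemble branch described above passes flattened range-Doppler data straight to tree ensembles without handcrafted features. The sketch below illustrates that idea with synthetic stand-in data and off-the-shelf scikit-learn models; none of the values or model settings come from the paper.

```python
# Rough sketch (synthetic data, generic sklearn models) of the ensemble branch:
# range-Doppler maps are flattened into vectors and passed directly to tree
# ensembles with no handcrafted feature extraction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for measured R-D maps: 200 samples of 32 range bins x 64 Doppler bins.
X = rng.normal(size=(200, 32, 64))
y = rng.integers(0, 2, size=200)   # 0 = robot, 1 = human (labels are illustrative)
X[y == 1] += 0.5                   # give the "human" class a separable offset

X_flat = X.reshape(len(X), -1)     # restructure each map into a single profile vector
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.3, random_state=0)

for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))
```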

    Radar for Assisted Living in the Context of Internet of Things for Health and Beyond

    This paper discusses the place of radar for assisted living in the context of IoT for Health and beyond. First, the context of assisted living and the urgency of addressing the problem are described. The second part gives a literature review of existing sensing modalities for assisted living and explains why radar is emerging as a preferred modality to address this issue. The third section presents developments in machine learning that help improve classification performance, especially with deep learning, with a reflection on lessons learned from it. The fourth section introduces recently published work from our research group in the area, showing promise with multimodal sensor fusion for classification and with long short-term memory applied to early stages of the radar signal processing chain. Finally, we conclude with open challenges still to be addressed in the area and with future research directions, extending to applications such as animal welfare.

    Spectro-temporal modelling for human activity recognition using a radar sensor network


    Indoor person identification using a low-power FMCW radar

    Contemporary surveillance systems mainly use video cameras as their primary sensor. However, video cameras possess fundamental deficiencies, such as the inability to handle low-light environments, poor weather conditions, and concealing clothing. In contrast, radar devices are able to sense in pitch-dark environments and to see through walls. In this paper, we investigate the use of micro-Doppler (MD) signatures retrieved from a low-power radar device to identify a set of persons based on their gait characteristics. To that end, we propose a robust feature learning approach based on deep convolutional neural networks. Given that we aim at providing a solution for a real-world problem, people are allowed to walk around freely in two different rooms. In this setting, the "IDentification with Radar data" data set is constructed and published, consisting of 150 min of annotated MD data equally spread over five targets. Through experiments, we investigate the effectiveness of both the Doppler and time dimensions, showing that our approach achieves a classification error rate of 24.70% on the validation set and 21.54% on the test set for the five targets used. When experimenting with larger time windows, we are able to further lower the error rate.
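    The sketch below illustrates the general approach of feeding fixed-length micro-Doppler windows to a convolutional classifier, as described above. The architecture, window sizes, and five-class output are illustrative assumptions, not the network used for the data set above.

```python
# Hedged sketch (not the authors' network): a small convolutional classifier over
# fixed-length micro-Doppler windows, one score per target identity.
import torch
import torch.nn as nn

class MicroDopplerCNN(nn.Module):
    def __init__(self, num_targets: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_targets))

    def forward(self, x):          # x: (batch, 1, doppler_bins, time_frames)
        return self.classifier(self.features(x))

# One batch of 8 windows, each 128 Doppler bins x 45 time frames (illustrative sizes).
windows = torch.randn(8, 1, 128, 45)
logits = MicroDopplerCNN()(windows)
print(logits.shape)                # torch.Size([8, 5]) -> scores for five targets
```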

    Multistatic radar classification of armed vs unarmed personnel using neural networks

    This paper investigates an implementation of an array of distributed neural networks, operating together to classify between unarmed and potentially armed personnel in areas under surveillance using ground-based radar. Experimental data collected by the University College London (UCL) multistatic radar system NetRAD is analysed. Neural networks are applied to the extracted micro-Doppler data in order to classify between the two scenarios, and accuracy above 98% is demonstrated on the validation data, showing an improvement over methodologies based on classifiers where human intervention is required. The main advantage of using neural networks is the ability to bypass the manual extraction of handcrafted features from the radar data, where thresholds and parameters need to be tuned by human operators. Different network architectures are explored, from feed-forward networks to stacked auto-encoders, with the advantage that deep topologies are capable of classifying the spectrograms (Doppler-time patterns) directly. Significant parameters concerning the actual deployment of the networks are also investigated, for example the dwell time (i.e. how long the radar needs to focus on a target in order to achieve classification) and the robustness of the networks in classifying data from new people, whose signatures were unseen during the training stage. Finally, a data ensembling technique is presented which applies a weighted decision approach, established beforehand, utilising information from all three sensors and yielding stable classification accuracies of 99% or more across all monitored zones.
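    The weighted decision ensembling described above can be illustrated as follows: each of the three radar nodes outputs class probabilities, which are combined with weights established beforehand. The weights and scores below are made-up placeholders, not the paper's trained values.

```python
# Minimal sketch (assumed weights and scores) of weighted decision fusion across
# three radar nodes, each producing class probabilities for every test sample.
import numpy as np

def weighted_fusion(node_probs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """node_probs: (num_nodes, num_samples, num_classes); weights: (num_nodes,)."""
    weights = weights / weights.sum()
    fused = np.tensordot(weights, node_probs, axes=1)   # (num_samples, num_classes)
    return fused.argmax(axis=1)

# Three nodes, two test samples, classes {0: unarmed, 1: armed}.
probs = np.array([[[0.7, 0.3], [0.4, 0.6]],
                  [[0.6, 0.4], [0.2, 0.8]],
                  [[0.3, 0.7], [0.1, 0.9]]])
weights = np.array([0.5, 0.3, 0.2])    # per-node reliability fixed beforehand
print(weighted_fusion(probs, weights)) # -> [0 1]
```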

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users’ acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.

    Cross-Frequency Classification of Indoor Activities with DNN Transfer Learning

    Remote, non-contact recognition of human motion and activities is central to health monitoring in assisted living facilities, but current systems face the problems of training compatibility, minimal training data sets, and a lack of interoperability between radar sensors at different frequencies. This paper represents a first work to consider the efficacy of deep neural networks (DNNs) and transfer learning to bridge the gap in phenomenology that results when multiple types of radars simultaneously observe human activity. Six different human activities are recorded indoors simultaneously with 5.8 GHz and 25 GHz radars. First, the bottleneck-feature performance of the DNNs shows that a baseline accuracy of 76% is achieved. When models trained only with 25 GHz data are tested with 5.8 GHz data, 81% accuracy is achieved. In the absence of a large dataset for radar at a given frequency, we demonstrate that information from a radar at a different frequency is better suited for generating the classification models than optical images, and that, by using time-velocity diagrams (TVDs), a degree of interoperability can be achieved.
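    The transfer-learning idea above, reusing a model trained at one radar frequency for data from another, is sketched below: the convolutional layers of a network assumed to be trained on 25 GHz time-velocity diagrams are frozen, and only a new classifier head is trained on a small batch of 5.8 GHz data. The architecture, sizes, and training step are assumptions for illustration, not the paper's DNN.

```python
# Hedged sketch of cross-frequency transfer learning: freeze a source network's
# feature layers and retrain only the final classifier on the target-frequency data.
import torch
import torch.nn as nn

def make_tvd_net(num_classes: int = 6) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, num_classes))

source_net = make_tvd_net()        # pretend this was trained on 25 GHz TVDs

# Freeze everything, then replace the last layer so only it is updated on 5.8 GHz data.
for p in source_net.parameters():
    p.requires_grad = False
source_net[-1] = nn.Linear(32, 6)  # new head, trainable by default

optimizer = torch.optim.Adam(
    (p for p in source_net.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy fine-tuning step on a small batch of 5.8 GHz TVDs (synthetic stand-ins).
x, y = torch.randn(4, 1, 64, 64), torch.randint(0, 6, (4,))
loss = loss_fn(source_net(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```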