
    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent literature on systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, driven by the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, an advantage for practical deployment and for users' acceptance and compliance compared with other sensor technologies such as video cameras or wearables. Furthermore, combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
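The fusion of heterogeneous sensors mentioned above is often done at the decision level. As a minimal illustrative sketch (not the scheme of any specific system reviewed), each sensor branch outputs a fall probability and a weighted average produces the final decision; the weights and threshold here are assumed values:

```python
# Hypothetical decision-level (late) fusion of a radar branch and an
# RGB-D branch of a fall detector. Each branch is assumed to output a
# probability of a fall; weights and threshold are illustrative.

def fuse_fall_scores(radar_prob, rgbd_prob,
                     w_radar=0.5, w_rgbd=0.5, threshold=0.5):
    """Weighted late fusion of per-sensor fall probabilities."""
    fused = w_radar * radar_prob + w_rgbd * rgbd_prob
    return fused, fused >= threshold

# Example: radar is fairly confident, RGB-D somewhat less so
score, is_fall = fuse_fall_scores(0.9, 0.7)
```

In practice the weights would be learned or tuned per deployment; late fusion is attractive here because each sensor branch can fail or be absent independently.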

    A Multiple Radar Approach for Automatic Target Recognition of Aircraft using Inverse Synthetic Aperture Radar

    Along with improvements in radar technology, Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) has become an active research area. SAR/ISAR are radar techniques that generate a two-dimensional high-resolution image of a target. Unlike other experiments that apply Convolutional Neural Networks (CNNs) to this problem, we take an unusual approach that leads to better performance and faster training times. Our CNN is trained on complex values generated by a simulation; additionally, we use a multi-radar approach to increase the accuracy of the training and testing processes, resulting in higher accuracies than other work on SAR/ISAR ATR. We generated our dataset of 7 different aircraft models with a radar simulator we developed called RadarPixel, a Windows GUI program implemented in Matlab and Java that accurately replicates real SAR/ISAR configurations. Our objective is to apply our multi-radar technique and determine the optimal number of radars needed to detect and classify targets.
    Comment: 8 pages, 9 figures, International Conference for Data Intelligence and Security (ICDIS)
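A standard way to feed complex-valued ISAR images to a real-valued CNN, and to combine several radar views, is to stack real and imaginary parts as channels and concatenate across radars. The sketch below is an assumed illustration of that input layout, not the paper's actual pipeline; shapes and the number of radars are arbitrary:

```python
import numpy as np

def complex_to_channels(isar_image):
    """Stack real and imaginary parts of a complex ISAR image as
    two real-valued input channels."""
    return np.stack([isar_image.real, isar_image.imag], axis=0)

def stack_multi_radar(views):
    """Concatenate per-radar channel stacks along the channel axis,
    giving 2 * n_radars input channels for a multi-radar CNN."""
    return np.concatenate([complex_to_channels(v) for v in views], axis=0)

# Toy example: three radar views of a 64x64 complex image
views = [np.full((64, 64), 1 + 2j, dtype=np.complex64) for _ in range(3)]
x = stack_multi_radar(views)   # shape (6, 64, 64)
```

The resulting tensor can then be passed to any image-classification CNN whose first convolution accepts 2 * n_radars input channels.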

    Gait Analysis of Horses for Lameness Detection with Radar Sensors

    This paper presents a preliminary investigation of the use of radar signatures to detect lameness in horses and assess its severity. Radar sensors in this context can provide attractive contactless sensing capabilities, as a complementary or alternative technology to current techniques for lameness assessment using video-graphics and inertial sensors attached to the horse's body. The paper presents several examples of experimental data collected at the Weipers Centre Equine Hospital at the University of Glasgow, showing the micro-Doppler signatures of horses and preliminary results of their analysis.
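Micro-Doppler signatures such as those analysed above are typically obtained with a short-time Fourier transform of the complex radar return. The following sketch uses a synthetic signal and assumed parameters (sampling rate, window length) purely to show the processing shape, not the paper's actual chain:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                          # assumed pulse repetition frequency, Hz
t = np.arange(0, 2.0, 1.0 / fs)

# Toy radar return: a constant body Doppler at 60 Hz plus a weaker
# limb component whose Doppler oscillates at a 2 Hz gait rate
x = (np.exp(2j * np.pi * 60 * t)
     + 0.5 * np.exp(2j * np.pi * 40 * np.cos(2 * np.pi * 2 * t) * t))

# STFT-based spectrogram; two-sided because the input is complex
f, times, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96,
                            return_onesided=False)
# Rows of Sxx are Doppler bins, columns are time frames:
# this time-Doppler map is the micro-Doppler signature
```

The oscillating limb component shows up as a periodic modulation around the body line in the time-Doppler map, which is the kind of structure a lameness classifier would exploit.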

    Automatic Target Classification in Passive ISAR Range-Crossrange Images


    Enhancing Dynamic Hand Gesture Recognition using Feature Concatenation via Multi-Input Hybrid Model

    Radar-based hand gesture recognition is an important research area that provides suitable support for various applications, such as human-computer interaction and healthcare monitoring. Several deep learning algorithms for gesture recognition using Impulse Radio Ultra-Wide Band (IR-UWB) radar have been proposed. Most of them focus on achieving high performance, which requires a huge amount of data. The procedure of acquiring and annotating data remains a complex, costly, and time-consuming task. Moreover, processing a large volume of data usually requires a complex model with very many training parameters and high computation and memory consumption. To overcome these shortcomings, we propose a simple data processing approach along with a lightweight multi-input hybrid model structure to enhance performance. We aim to improve on the existing state-of-the-art results obtained on an available IR-UWB gesture dataset consisting of range-time images of dynamic hand gestures. First, these images are extended using the Sobel filter, which generates low-level feature representations for each sample: the gradient images in the x-direction, the y-direction, and both the x- and y-directions. Next, we apply these representations as inputs to a three-input Convolutional Neural Network, Long Short-Term Memory, and Support Vector Machine (CNN-LSTM-SVM) model. Each representation is provided to a separate CNN branch and then concatenated for further processing by the LSTM. This combination allows for the automatic extraction of richer spatiotemporal features of the target with no manual engineering or prior domain knowledge. To select the optimal classifier for our model and achieve a high recognition rate, the SVM hyperparameters are tuned using the Optuna framework. Our proposed multi-input hybrid model achieved high performance on several metrics, including 98.27% accuracy, 98.30% precision, 98.29% recall, and 98.27% F1-score, while ensuring low complexity.
    Experimental results indicate that the proposed approach improves accuracy and prevents the model from overfitting.
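The three Sobel-based representations described above can be sketched as follows. This is a minimal illustration of the feature-extraction step only (here with `scipy.ndimage.sobel` on a toy range-time image), not the authors' exact preprocessing code:

```python
import numpy as np
from scipy import ndimage

def sobel_inputs(range_time_image):
    """Return the three low-level feature maps fed to the three
    CNN branches: x-gradient, y-gradient, and combined magnitude."""
    gx = ndimage.sobel(range_time_image, axis=1)   # gradient along x
    gy = ndimage.sobel(range_time_image, axis=0)   # gradient along y
    gxy = np.hypot(gx, gy)                         # combined x/y gradients
    return gx, gy, gxy

# Toy range-time image with a single vertical edge
img = np.zeros((32, 32))
img[:, 16:] = 1.0
gx, gy, gxy = sobel_inputs(img)
```

For this vertical-edge input the y-gradient map is zero and the x-gradient map responds at the edge, which is exactly the complementary, direction-specific information the three branches are meant to capture.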