
    UWB Based Static Gesture Classification

    Our paper presents a robust framework for UWB-based static gesture recognition, leveraging proprietary UWB radar sensor technology. Extensive data collection efforts were undertaken to compile datasets containing five commonly used gestures. Our approach involves a comprehensive data pre-processing pipeline that encompasses outlier handling, aspect-ratio-preserving resizing, and false-color image transformation. Both CNN and MobileNet models were trained on the processed images. Remarkably, our best-performing model achieved an accuracy of 96.78%. Additionally, we developed a user-friendly GUI framework to assess the model's system resource usage and processing times, which revealed low memory utilization and real-time task completion in under one second. This research marks a significant step towards enhancing static gesture recognition using UWB technology, promising practical applications in various domains.
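    Under stated assumptions, the pre-processing pipeline described here might look like the following sketch: percentile clipping for outlier handling, an aspect-ratio-preserving resize with padding, and a false-color transform. The array shapes, percentile thresholds, the 224-pixel target, and the JET colormap are illustrative guesses, not values from the paper.

```python
# Hedged sketch of the described pre-processing pipeline. All numeric
# choices below are assumptions for illustration.
import numpy as np
import cv2

def preprocess_uwb_frame(frame: np.ndarray, target: int = 224) -> np.ndarray:
    # Outlier handling: clip amplitudes to the 1st-99th percentile range.
    lo, hi = np.percentile(frame, [1, 99])
    frame = np.clip(frame, lo, hi)

    # Normalize to 8-bit for image-style processing.
    frame = ((frame - frame.min()) / (np.ptp(frame) + 1e-9) * 255).astype(np.uint8)

    # Aspect-ratio-preserving resize, then pad to a square target.
    h, w = frame.shape
    scale = target / max(h, w)
    resized = cv2.resize(frame, (int(w * scale), int(h * scale)))
    canvas = np.zeros((target, target), dtype=np.uint8)
    canvas[: resized.shape[0], : resized.shape[1]] = resized

    # False-color transformation: map intensity to a 3-channel colormap.
    return cv2.applyColorMap(canvas, cv2.COLORMAP_JET)
```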

    Recognition of Radar-Based Deaf Sign Language Using Convolution Neural Network

    Get PDF
    The difficulties in communication between deaf and hearing people through sign language can be overcome by applying deep learning to gesture signal recognition. The use of a Convolutional Neural Network (CNN) to distinguish radar-based gesture signals of deaf sign language has not been investigated. This paper describes the recognition of deaf sign language gestures using radar and a CNN. Six gestures of deaf sign language were acquired from normal subjects using a radar system and processed. A Short-Time Fourier Transform was performed to extract the gesture features, and classification was performed using a CNN. The performance of the CNN was examined using two types of input: segmented and non-segmented spectrograms. The accuracy of recognising the gestures was higher (92.31%) using the non-segmented spectrograms than the segmented spectrograms. Radar-based deaf sign language can thus be recognised accurately using a CNN without segmentation.
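    A minimal sketch of the feature-extraction step described here, assuming placeholder signal parameters: a short-time Fourier transform converts a radar gesture recording into the spectrogram image that the CNN classifies. The sampling rate, window length, and random stand-in signal are not values from the paper.

```python
# Hedged sketch: STFT spectrogram as a CNN input image.
import numpy as np
from scipy.signal import stft

fs = 1_000                         # assumed sampling rate (Hz)
signal = np.random.randn(4 * fs)   # stand-in for one recorded gesture

f, t, Z = stft(signal, fs=fs, nperseg=128, noverlap=96)
spectrogram = 20 * np.log10(np.abs(Z) + 1e-12)  # dB magnitude

# `spectrogram` (freq bins x time frames) plays the role of the
# non-segmented input that the paper reports classifying most accurately.
print(spectrogram.shape)
```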

    Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge

    This work was supported in part by ITEA3 Unleash Potentials in Simulation (UPSIM) by the German Federal Ministry of Education and Research (BMBF) under Project 19006, in part by the Austrian Research Promotion Agency (FFG), in part by the Rijksdienst voor Ondernemend Nederland (Rvo), and in part by the Innovation Fund Denmark (IFD). Technological advances and scalability are leading Human-Computer Interaction (HCI) to evolve towards intuitive forms, such as gesture recognition. Among the various interaction strategies, radar-based recognition is emerging as a touchless, privacy-preserving, and versatile solution across different environmental conditions. Classical radar-based gesture HCI solutions involve deep learning but require training on large and varied datasets to achieve robust prediction. Innovative self-learning algorithms can help tackle this problem by recognizing patterns and adapting from similar contexts. Yet, such approaches are often computationally expensive and hard to integrate into hardware-constrained solutions. In this paper, we present a gesture recognition algorithm that is easily adaptable to new users and contexts. We exploit an optimization-based meta-learning approach to enable gesture recognition in learning sequences. This method aims to learn the best possible initialization of the model parameters, simplifying training on new contexts when small amounts of data are available. The reduction in computational cost is achieved by processing the radar-sensed gesture data in the form of time maps, minimizing the input data size. This approach enables the adaptation of a simple convolutional neural network (CNN) to new hand poses, easing the integration of the model into a hardware-constrained platform. Moreover, the use of a Variational Autoencoder (VAE) to reduce the gestures' dimensionality decreases the model size by an order of magnitude and halves the required adaptation time. The proposed framework, deployed on the Intel(R) Neural Compute Stick 2 (NCS 2), achieves an average accuracy of around 84% for unseen gestures when only one example per class is used at training time. The accuracy increases to 92.6% and 94.2% when three and five samples per class are used.
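    The gist of the optimization-based meta-learning step can be sketched as follows. This is a hedged illustration using the simple first-order Reptile update rather than the paper's exact algorithm; the CNN size, toy time-map shapes, and all hyperparameters are assumptions for demonstration.

```python
# Hedged sketch: learn an initialization that adapts quickly from few
# examples (Reptile-style first-order meta-learning, not the paper's method).
import copy
import torch
import torch.nn as nn

def make_cnn(n_classes: int = 5) -> nn.Module:
    # Small CNN over single-channel time-map inputs.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, n_classes),
    )

def reptile_step(model, task_batch, inner_steps=5, inner_lr=1e-2, meta_lr=1e-1):
    loss_fn = nn.CrossEntropyLoss()
    init_state = copy.deepcopy(model.state_dict())
    x, y = task_batch                       # few-shot data for one task/user
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):            # inner loop: adapt to the task
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Meta-update: move the initialization toward the adapted weights.
    adapted = model.state_dict()
    model.load_state_dict({
        k: init_state[k] + meta_lr * (adapted[k] - init_state[k])
        for k in init_state
    })

# Toy usage: random "time maps" (1x32x32), one example per gesture class.
model = make_cnn()
x = torch.randn(5, 1, 32, 32)
y = torch.arange(5)
reptile_step(model, (x, y))
```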

    Enhancing Dynamic Hand Gesture Recognition using Feature Concatenation via Multi-Input Hybrid Model

    Radar-based hand gesture recognition is an important research area that provides suitable support for various applications, such as human-computer interaction and healthcare monitoring. Several deep learning algorithms for gesture recognition using Impulse Radio Ultra-Wide Band (IR-UWB) have been proposed. Most of them focus on achieving high performance, which requires a huge amount of data. The procedure of acquiring and annotating data remains a complex, costly, and time-consuming task. Moreover, processing a large volume of data usually requires a complex model with very many training parameters and high computation and memory consumption. To overcome these shortcomings, we propose a simple data processing approach along with a lightweight multi-input hybrid model structure to enhance performance. We aim to improve on the existing state-of-the-art results obtained using an available IR-UWB gesture dataset consisting of range-time images of dynamic hand gestures. First, these images are extended using the Sobel filter, which generates low-level feature representations for each sample: the gradient images in the x-direction, the y-direction, and both the x- and y-directions. Next, we apply these representations as inputs to a three-input Convolutional Neural Network, Long Short-Term Memory, Support Vector Machine (CNN-LSTM-SVM) model. Each representation is fed to a separate CNN branch, and the branch outputs are concatenated for further processing by the LSTM. This combination allows for the automatic extraction of richer spatiotemporal features of the target without manual feature engineering or prior domain knowledge. To select the optimal classifier for our model and achieve a high recognition rate, the SVM hyperparameters are tuned using the Optuna framework. Our proposed multi-input hybrid model achieved high performance on several metrics, including 98.27% accuracy, 98.30% precision, 98.29% recall, and 98.27% F1-score, while ensuring low complexity. Experimental results indicate that the proposed approach improves accuracy and prevents the model from overfitting.
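    The input-expansion step lends itself to a short sketch: each range-time image is extended into three Sobel gradient representations (x, y, and combined magnitude), one per CNN branch. The image size and kernel size are illustrative assumptions; the downstream CNN-LSTM-SVM model and the Optuna tuning are not reproduced here.

```python
# Hedged sketch: Sobel-based input expansion for the three CNN branches.
import numpy as np
import cv2

def sobel_inputs(image: np.ndarray):
    gx = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)   # gradient in x
    gy = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)   # gradient in y
    gxy = cv2.magnitude(gx, gy)                        # combined x and y
    return gx, gy, gxy

range_time = np.random.rand(64, 64).astype(np.float32)  # stand-in sample
branch_inputs = sobel_inputs(range_time)                 # one per CNN branch
```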

    Unobtrusive hand gesture recognition using ultra-wide band radar and deep learning

    Hand function after stroke injuries is not regained rapidly and requires physical rehabilitation for at least 6 months. Due to the heavy burden on the healthcare system, assisted rehabilitation is prescribed only for a limited time, after which so-called home rehabilitation is offered. It is therefore essential to develop robust solutions that facilitate monitoring while preserving the privacy of patients in a home-based setting. To meet these expectations, an unobtrusive solution based on radar sensing and deep learning is proposed. The multi-input multi-output convolutional eXtra trees (MIMO-CxT) is a new deep hybrid model used for hand gesture recognition (HGR) with impulse-radio ultra-wide band (IR-UWB) radars. It consists of a lightweight architecture based on a multi-input convolutional neural network (CNN) used in a hybrid configuration with extremely randomized trees (ETs). The model takes data from multiple sensors as input and processes them separately. The outputs of the CNN branches are concatenated before the prediction is made by the ETs. Moreover, the model uses depthwise separable convolution layers, which reduce computational cost and learning time while maintaining high performance. The model is evaluated on a publicly available dataset of gestures collected by three IR-UWB radars and achieves an average accuracy of 98.86%.
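    A hedged sketch of the hybrid pattern behind MIMO-CxT follows: CNN branches built from depthwise separable convolutions turn each radar's frames into features, which are concatenated and classified by extremely randomized trees. The single shared branch, feature sizes, and toy data are simplifying assumptions, not the paper's exact model.

```python
# Hedged sketch: multi-input CNN feature extraction + extra-trees classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import ExtraTreesClassifier

# Depthwise separable conv: depthwise (groups=channels) then pointwise 1x1,
# the cost-saving layer the paper highlights.
def sep_conv(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),
        nn.Conv2d(cin, cout, 1), nn.ReLU(),
    )

branch = nn.Sequential(sep_conv(1, 8), nn.AdaptiveAvgPool2d(1), nn.Flatten())

def extract_features(radar_frames):                # list of (N,1,H,W) tensors
    with torch.no_grad():
        feats = [branch(x) for x in radar_frames]  # each radar processed separately
    return torch.cat(feats, dim=1).numpy()         # concatenated branch outputs

# Toy data: 3 radars, 40 samples, 4 gesture classes.
frames = [torch.randn(40, 1, 32, 32) for _ in range(3)]
labels = np.random.randint(0, 4, size=40)
ets = ExtraTreesClassifier(n_estimators=100).fit(extract_features(frames), labels)
```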

    Emerging Approaches for THz Array Imaging: A Tutorial Review and Software Tool

    Accelerated by the increasing attention drawn by 5G, 6G, and Internet of Things applications, communication and sensing technologies have rapidly evolved from millimeter-wave (mmWave) to terahertz (THz) in recent years. Enabled by significant advancements in electromagnetic (EM) hardware, the mmWave and THz frequency regimes, spanning 30 GHz to 300 GHz and 300 GHz to 3000 GHz, respectively, can be employed for a host of applications. The main feature of THz systems is high-bandwidth transmission, enabling ultra-high-resolution imaging and high-throughput communications; however, challenges in both the hardware and algorithmic arenas remain for the ubiquitous adoption of THz technology. Spectra comprising mmWave and THz frequencies are well-suited for synthetic aperture radar (SAR) imaging at sub-millimeter resolutions for a wide spectrum of tasks like material characterization and nondestructive testing (NDT). This article provides a tutorial review of systems and algorithms for THz SAR in the near-field with an emphasis on emerging algorithms that combine signal processing and machine learning techniques. As part of this study, an overview of classical and data-driven THz SAR algorithms is provided, focusing on object detection for security applications and SAR image super-resolution. We also discuss relevant issues, challenges, and future research directions for emerging algorithms and THz SAR, including standardization of system and algorithm benchmarking, adoption of state-of-the-art deep learning techniques, signal processing-optimized machine learning, and hybrid data-driven signal processing algorithms.
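    To ground the imaging side of the review, here is a minimal, hedged sketch of classical backprojection, the textbook near-field SAR reconstruction that the data-driven methods build on. The geometry, carrier frequency, and single point target are invented for the demo and bear no relation to the paper's software tool.

```python
# Hedged sketch: near-field SAR backprojection for a simulated point target.
import numpy as np

c, f = 3e8, 300e9                          # speed of light, 300 GHz carrier
k = 2 * np.pi * f / c                      # wavenumber
aperture = np.linspace(-0.05, 0.05, 101)   # 10 cm linear scan (m)
target = np.array([0.01, 0.10])            # point target at (x=1cm, z=10cm)

# Simulated monostatic echoes: phase of the two-way path to the target.
r_true = np.hypot(aperture - target[0], target[1])
echoes = np.exp(-1j * 2 * k * r_true)

# Backprojection onto an image grid: match the two-way phase and sum.
xs = np.linspace(-0.03, 0.03, 121)
zs = np.linspace(0.08, 0.12, 81)
X, Z = np.meshgrid(xs, zs)
image = np.zeros_like(X, dtype=complex)
for a, s in zip(aperture, echoes):
    r = np.hypot(X - a, Z)
    image += s * np.exp(1j * 2 * k * r)
print(np.unravel_index(np.abs(image).argmax(), image.shape))  # peaks at target
```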

    Contactless WiFi Sensing and Monitoring for Future Healthcare: Emerging Trends, Challenges and Opportunities

    WiFi sensing has recently received significant interest from academics, industry, healthcare professionals and other caregivers (including family members) as a potential mechanism to monitor our aging population at a distance, without deploying devices on users' bodies. In particular, these methods have gained significant interest for efficiently detecting critical events such as falls, sleep disturbances, wandering behavior, respiratory disorders, and abnormal cardiac activity experienced by vulnerable people. The interest in such WiFi-based sensing systems stems from their practical deployment in indoor settings and compliance from monitored persons, unlike other sensors such as wearable, camera-based, and acoustic-based solutions. This paper reviews state-of-the-art research on collecting and analysing channel state information, extracted using ubiquitous WiFi signals, describing a range of healthcare applications and identifying a series of open research challenges, untapped areas, and related trends. This work aims to provide an overarching view for understanding the technology and discusses its use-cases from a perspective that considers hardware, advanced signal processing, and data acquisition.
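    As a concrete anchor for the sensing principle the survey covers, the sketch below shows a toy CSI pipeline: complex per-subcarrier channel estimates are reduced to amplitudes, smoothed, and thresholded for activity. The shapes, window size, and threshold are invented; real systems use far richer features.

```python
# Hedged sketch: crude activity detection from WiFi channel state information.
import numpy as np

rng = np.random.default_rng(0)
csi = rng.standard_normal((1000, 30)) + 1j * rng.standard_normal((1000, 30))
# (packets x subcarriers) complex CSI, stand-in for a real CSI export

amplitude = np.abs(csi)                       # motion mostly shows up here
smooth = np.apply_along_axis(
    lambda s: np.convolve(s, np.ones(11) / 11, mode="same"), 0, amplitude)
variance = smooth.var(axis=1)                 # cross-subcarrier variability
event = variance > variance.mean() + 2 * variance.std()  # crude activity flag
print(f"{event.sum()} packets flagged as activity")
```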

    An inclusive survey of contactless wireless sensing: a technology used for remotely monitoring vital signs has the potential to combat COVID-19

    With the Coronavirus pandemic showing no signs of abating, companies and governments around the world are spending millions of dollars to develop contactless sensor technologies that minimize the need for physical interaction between patients and healthcare providers. As a result, healthcare research is rapidly progressing towards innovative contactless technologies, especially for infants and elderly people suffering from chronic diseases that require continuous, real-time monitoring and control. The fusion of sensing technology and wireless communication has emerged as a strong research candidate because patients find wearing sensor devices undesirable, as they cause anxiety and discomfort. Furthermore, physical contact exacerbates the spread of contagious diseases, which may lead to catastrophic consequences. For this reason, research has moved towards sensor-less, contactless technology: wireless signals are transmitted, and the reflected signals are analyzed and processed using techniques such as frequency-modulated continuous wave (FMCW) radar or channel state information (CSI). This makes it possible to monitor and measure a subject's vital signs remotely, without physical contact and without asking them to wear sensor devices. In this paper, we overview and explore state-of-the-art research in the field of contactless sensor technology in medicine, where we explain, summarize, and classify a plethora of contactless sensor technologies and techniques with the highest impact on contactless healthcare. Moreover, we overview the enabling hardware technologies and discuss the main challenges faced by these systems. This work is funded by the Scientific and Technological Research Council of Turkey (TÜBITAK) under grant 119E39.
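    To illustrate the FMCW principle mentioned above, the following hedged sketch simulates a beat signal, localizes the target with a range FFT, and reads respiration from the slow-time phase of the target's range bin. All radar parameters and the simulated breathing waveform are invented for the demo.

```python
# Hedged sketch: FMCW range FFT + slow-time phase for respiration sensing.
import numpy as np

fs, n_samples, n_chirps = 2e6, 256, 512       # assumed ADC rate / frame sizes
t = np.arange(n_samples) / fs
resp = 0.3 * np.sin(2 * np.pi * np.arange(n_chirps) / 128)  # breathing phase

# Simulated beat signal: fixed beat frequency plus a respiration phase term.
beat = np.array([np.exp(1j * (2 * np.pi * 50e3 * t + ph)) for ph in resp])

range_fft = np.fft.fft(beat, axis=1)          # fast time -> range bins
bin_idx = np.argmax(np.abs(range_fft).mean(axis=0))       # strongest target bin
chest_phase = np.unwrap(np.angle(range_fft[:, bin_idx]))  # slow-time phase
# `chest_phase` now oscillates with the simulated breathing waveform.
```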