12 research outputs found

    Radar Sensing in Assisted Living: An Overview

    Get PDF
    This paper gives an overview of trends in radar sensing for assisted living. It focuses on signal processing and classification, covering conventional approaches, deep learning, and fusion techniques. The final section shows examples of classification in human activity recognition and in medical applications, e.g., breathing disorder and sleep stage recognition.

    Radar signal processing for sensing in assisted living: the challenges associated with real-time implementation of emerging algorithms

    Get PDF
    This article covers radar signal processing for sensing in the context of assisted living (AL), presented through three example applications: human activity recognition (HAR) for activities of daily living (ADL), respiratory disorders, and sleep stage (SS) classification. The common challenge of classification is discussed within a framework of measurement/preprocessing, feature extraction, and classification algorithms for supervised learning. The specific challenges of the three applications from a signal processing standpoint are then detailed through their particular data processing and ad hoc classification strategies. The focus is on recent trends in activity recognition (multidomain, multimodal, and fusion), health-care applications based on vital signs (super-resolution techniques), and outstanding challenges. Finally, the article explores the challenges associated with real-time implementation of signal processing and classification algorithms.

    Continuous human motion recognition with a dynamic range-Doppler trajectory method based on FMCW radar

    Get PDF
    Radar-based human motion recognition is crucial for many applications, such as surveillance, search and rescue operations, smart homes, and assisted living. Continuous human motion recognition in real-life environments is necessary for practical deployment, i.e., classifying a sequence of activities transitioning one into another rather than individual activities. In this paper, a novel dynamic range-Doppler trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system is proposed to recognize continuous human motions under various conditions emulating a real-life environment. The method separates continuous motions and processes them as single events. First, range-Doppler frames consisting of a series of range-Doppler maps are obtained from the backscattered signals. Next, the DRDT is extracted from these frames to monitor human motions in the time, range, and Doppler domains in real time. Then, a peak-search method is applied to locate and separate each human motion on the DRDT map. Finally, range, Doppler, radar cross section (RCS), and dispersion features are extracted and combined in a multidomain fusion approach as inputs to a machine learning classifier. This achieves accurate and robust recognition under varying conditions of distance, view angle, direction, and individual diversity. Extensive experiments demonstrate the method's feasibility and superiority, with an average accuracy of 91.9% on continuous classification.
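
The peak-search segmentation step described in this abstract can be sketched as follows. This is a simplified, hypothetical stand-in that operates on a one-dimensional Doppler-energy trace rather than the full DRDT map; the `segment_motions` helper, the threshold rule, and the synthetic signal are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_motions(energy, min_separation=20, threshold=None):
    """Locate individual motion events in an energy trace.

    Each peak in the energy envelope is treated as one motion, and
    segment boundaries are placed at midpoints between peaks.
    """
    if threshold is None:
        # Simple adaptive threshold: one standard deviation above the mean.
        threshold = energy.mean() + energy.std()
    peaks, _ = find_peaks(energy, height=threshold, distance=min_separation)
    # Split the trace at midpoints between consecutive peaks.
    bounds = [0] + [(a + b) // 2 for a, b in zip(peaks[:-1], peaks[1:])] + [len(energy)]
    return [(bounds[i], bounds[i + 1]) for i in range(len(peaks))]

# Synthetic trace: three Gaussian "motions" over a quiet baseline.
t = np.arange(300)
energy = sum(np.exp(-0.5 * ((t - c) / 10.0) ** 2) for c in (50, 150, 250))
segments = segment_motions(energy)
```

Placing boundaries at midpoints is a crude rule that works only for well-separated motions; the paper's method instead tracks the trajectory jointly in time, range, and Doppler.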

    Ultra-Wideband Radar-Based Activity Recognition Using Deep Learning

    Get PDF
    With recent advances in the field of sensing, it has become possible to build better assistive technologies. This enables the strengthening of eldercare with regard to daily routines and the provision of personalised care to users. For instance, it is possible to detect a person’s behaviour based on wearable or ambient sensors; however, it is difficult for users to wear devices 24/7, as they would have to be recharged regularly because of their energy consumption. Similarly, although cameras have been widely used as ambient sensors, they carry the risk of breaching users’ privacy. This paper presents a novel sensing approach based on deep learning for human activity recognition using a non-wearable ultra-wideband (UWB) radar sensor. UWB sensors protect privacy better than RGB cameras because they do not collect visual data. In this study, UWB sensors were mounted on a mobile robot to monitor and observe subjects from a specific distance (namely, 1.5–2.0 m). Initially, data were collected in a lab environment for five different human activities. Subsequently, the data were used to train a model using a state-of-the-art deep learning approach, namely long short-term memory (LSTM). Conventional training approaches were also tested to validate the superiority of LSTM. As a UWB sensor collects many data points in a single frame, enhanced discriminant analysis was used to reduce the dimensionality of the features by applying principal component analysis to the raw dataset, followed by linear discriminant analysis. The enhanced discriminant features were fed into the LSTMs. Finally, the trained model was tested using new inputs. The proposed LSTM-based activity recognition approach performed better than conventional approaches, with an accuracy of 99.6%. We applied 5-fold cross-validation to test our approach and also validated it on a publicly available dataset. The proposed method can be applied in many prominent fields, including human–robot interaction for various practical applications, such as mobile robots for eldercare.
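
The "enhanced discriminant analysis" step in this abstract (PCA followed by LDA) can be sketched with scikit-learn. The dimensions, class means, and random data below are illustrative assumptions, not the paper's UWB dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for per-frame radar features: 5 classes whose
# means are shifted apart, 40 samples each, 120 raw features.
rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 40, 120, 5
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA first removes noise and collinearity; LDA then projects onto at
# most (n_classes - 1) axes that best separate the activity classes.
eda = make_pipeline(PCA(n_components=20),
                    LinearDiscriminantAnalysis(n_components=4))
Z = eda.fit_transform(X, y)
```

The resulting 4-dimensional features `Z` would then be fed to a sequence model such as an LSTM, as the abstract describes.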

    Radar-Based Hierarchical Human Activity Classification

    Get PDF
    Worldwide, the ageing population is increasing, and there are new requirements from governments to keep people at home longer. As a consequence, assisted living has been an active area of research, and radar has been identified as an emerging technology of choice for indoor activity monitoring. Activity classification has been investigated but is often limited by the classification accuracy in the most challenging yet realistic cases. This paper aims to evaluate and improve the accuracy of classifying six commonly performed indoor activities from the University of Glasgow open dataset. For activity classification, the selection of features that discriminate between activities is paramount. Activity classification is usually performed with a one-vs-all strategy, using one classifier and one set of features to distinguish between all the activities. In this paper, we propose to optimise the feature selection and classifier choice per activity using a hierarchical classification structure. This strategy reached 95.4% accuracy across all activities and about 100% for walking, opening the field for personnel recognition.
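
The per-activity optimisation of features and classifiers can be sketched as a cascade of binary classifiers, each stage with its own feature subset and estimator. The class below, the feature indices, the SVC choices, and the synthetic data are all hypothetical, not the configuration selected in the paper.

```python
import numpy as np
from sklearn.svm import SVC

class HierarchicalActivityClassifier:
    """Cascade of binary classifiers: each stage recognises one activity
    with its own feature subset and estimator; samples no stage claims
    fall through to a fallback label."""

    def __init__(self, stages, fallback):
        self.stages = stages      # [(label, feature_indices, estimator), ...]
        self.fallback = fallback  # label for samples rejected by every stage

    def fit(self, X, y):
        remaining = np.ones(len(y), dtype=bool)
        for label, feats, est in self.stages:
            # Train "this activity vs the rest" on still-unresolved samples.
            est.fit(X[remaining][:, feats], (y[remaining] == label).astype(int))
            remaining &= (y != label)
        return self

    def predict(self, X):
        pred = np.full(len(X), self.fallback)
        undecided = np.ones(len(X), dtype=bool)
        for label, feats, est in self.stages:
            hit = undecided & (est.predict(X[:, feats]) == 1)
            pred[hit] = label
            undecided &= ~hit
        return pred

# Three well-separated synthetic "activities" in a 4-feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(30, 4)) for m in (0.0, 4.0, 8.0)])
y = np.repeat([0, 1, 2], 30)

clf = HierarchicalActivityClassifier(
    stages=[(0, [0, 1], SVC()), (1, [2, 3], SVC())], fallback=2).fit(X, y)
acc = (clf.predict(X) == y).mean()
```

The point of the structure is that each stage is free to use whichever features and model best isolate its own activity, rather than one global feature set for all classes.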

    Radar-Based Human Motion Recognition by Using Vital Signs with ECA-CNN

    Get PDF
    Radar technologies hold considerable potential for human motion recognition (HMR). To address the challenge of quickly and accurately classifying various complex motions, an HMR algorithm combining an attention mechanism with a convolutional neural network (ECA-CNN) and using vital signs is proposed. First, the original radar signal is obtained from chest-wall displacement. The Chirp-Z Transform (CZT) algorithm is adopted to refine and amplify the narrow-band spectral region of interest within the global spectrum of the signal, extracting accurate information from the specific band. Second, six time-domain features are extracted for the neural network. Finally, an ECA-CNN is designed to improve classification accuracy, with a small size, fast speed, and a high accuracy of 98%. This method improves the classification accuracy and efficiency of the network to a large extent. Moreover, the network is only about 100 KB in size, which makes it convenient to integrate into embedded devices.
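
The CZT spectrum-zoom step described in this abstract can be sketched with SciPy's `czt` (available in `scipy.signal` since SciPy 1.8). The sampling rate, the respiration band, and the simulated chest-displacement signal below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.signal import czt

fs = 20.0                        # assumed slow-time sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)     # 30 s of chest-displacement signal
x = np.sin(2 * np.pi * 0.3 * t)  # 0.3 Hz respiration component

# Zoom the spectrum into an assumed 0.1-0.6 Hz respiration band:
# the CZT places all m output bins inside [f1, f2], giving far finer
# resolution there than a same-length FFT over the full band.
f1, f2, m = 0.1, 0.6, 256
w = np.exp(-2j * np.pi * (f2 - f1) / (fs * (m - 1)))  # ratio between bins
a = np.exp(2j * np.pi * f1 / fs)                      # starting point f1
spectrum = np.abs(czt(x, m=m, w=w, a=a))
freqs = f1 + np.arange(m) * (f2 - f1) / (m - 1)
peak = freqs[np.argmax(spectrum)]
```

Here the bin spacing inside the band is (f2 - f1)/(m - 1) ≈ 0.002 Hz, which is the "refine and amplify" effect the abstract refers to: the peak estimate lands on the 0.3 Hz respiration rate without computing a huge full-band FFT.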

    A Study of Types of Sensors used in Remote Sensing

    Get PDF
    Of late, the science of remote sensing has been gaining considerable interest and attention due to its wide variety of applications. Remotely sensed data can be used in fields such as medicine, agriculture, engineering, weather forecasting, military tactics, and disaster management, to name a few. This article presents a study of the two categories of sensors, namely optical and microwave, which are used for remotely sensing the occurrence of disasters such as earthquakes, floods, landslides, avalanches, tropical cyclones, and suspicious movements. The remotely sensed data, acquired either through satellites or through ground-based synthetic aperture radar systems, could be used to avert or mitigate a disaster or to perform post-disaster analysis.
