141 research outputs found

    Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

    The vast proliferation of sensor devices and the Internet of Things has enabled sensor-based activity recognition applications. However, substantial challenges can affect the performance of recognition systems in practical scenarios. As deep learning has recently demonstrated its effectiveness in many areas, a wide range of deep methods has been investigated to address these challenges. In this study, we present a survey of state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and describe public datasets that can be used for evaluation on different challenge tasks. We then propose a new taxonomy that structures the deep methods by the challenges they address. Challenges and challenge-related deep methods are summarized and analyzed to give an overview of current research progress. We conclude by discussing open issues and providing insights for future directions.

    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Improving lifestyles, encouraging healthy behaviours, and reducing chronic disease are urgently needed, yet tracking and monitoring critical conditions of elderly people and patients remains a great challenge. Healthcare services for these people are crucial to achieving a high level of safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of elderly people and patients. The main aim of this review is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
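    The recognition chain this review describes (preprocessing and segmentation, feature extraction, classification) can be sketched minimally. The window size, step, and time-domain feature set below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Segment a 1-D sensor stream into fixed-size, overlapping windows."""
    n = (len(signal) - window_size) // step + 1
    return np.stack([signal[i * step : i * step + window_size] for i in range(n)])

def extract_features(windows):
    """Simple time-domain features per window: mean, std, min, max."""
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        windows.min(axis=1),
        windows.max(axis=1),
    ])

# hypothetical accelerometer stream: 200 samples
acc = np.sin(np.linspace(0.0, 20.0, 200))
wins = sliding_windows(acc, window_size=50, step=25)   # 7 windows of 50 samples
feats = extract_features(wins)                         # 7 feature vectors of length 4
print(wins.shape, feats.shape)  # (7, 50) (7, 4)
```

    The resulting feature matrix would then be fed to any standard classifier; real HAR pipelines typically use richer features (frequency-domain, multi-axis) than this sketch.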

    Multimodal federated learning on IoT data

    Federated learning is proposed as an alternative to centralized machine learning, since its client-server structure provides better privacy protection and scalability in real-world applications. In many applications, such as smart homes with Internet-of-Things (IoT) devices, local data on clients are generated from different modalities, such as sensory, visual, and audio data. Existing federated learning systems only work on local data from a single modality, which limits the scalability of the systems. In this paper, we propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from different local data modalities on clients. In addition, we propose a multimodal FedAvg algorithm to aggregate local autoencoders trained on different data modalities. We use the learned global autoencoder for a downstream classification task with the help of auxiliary labelled data on the server. We empirically evaluate our framework on different modalities, including sensory data, depth camera videos, and RGB camera videos. Our experimental results demonstrate that introducing data from multiple modalities into federated learning can improve its classification performance. In addition, we can use labelled data from only one modality for supervised learning on the server and apply the learned model to testing data from other modalities to achieve decent F1 scores (e.g., with the best performance higher than 60%), especially when combining contributions from both unimodal clients and multimodal clients.
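    The aggregation step at the heart of FedAvg, which the paper extends to multiple modalities, is a size-weighted average of client parameters. The `fedavg` helper and the toy client weights below are hypothetical illustrations of that step, not the paper's implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighting every client by its share of the total training data."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# two hypothetical clients, each holding one weight matrix and one bias vector
w_a = [np.ones((2, 2)), np.zeros(2)]        # client A, 100 local samples
w_b = [3.0 * np.ones((2, 2)), np.ones(2)]   # client B, 300 local samples
avg = fedavg([w_a, w_b], client_sizes=[100, 300])
print(avg[0])  # every entry = 1*0.25 + 3*0.75 = 2.5
```

    In the multimodal setting described above, the same weighted average would be applied per modality to the corresponding autoencoder parameters before merging them into the global model.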

    Sensor-based datasets for human activity recognition - a systematic review of literature

    The research area of ambient assisted living has led to the development of activity recognition systems (ARS) based on human activity recognition (HAR). These systems improve the quality of life and the health care of elderly and dependent people. However, before making them available to end users, it is necessary to evaluate their performance in recognizing activities of daily living, using data set benchmarks in experimental scenarios. For that reason, the scientific community has developed and provided a large number of data sets for HAR. Identifying which ones to use in the evaluation process, and which techniques are most appropriate for HAR prediction in a specific context, is therefore not a trivial task and is key to further progress in this area of research. This work presents a systematic review of the literature on sensor-based data sets used to evaluate ARS. On the one hand, an analysis of different variables taken from indexed publications related to this field was performed. The sources of information are journals, proceedings, and books located in specialized databases. The analyzed variables characterize publications by year, database, type, quartile, country of origin, and destination, using scientometrics, which allowed identification of the data sets most used by researchers. On the other hand, the descriptive and functional variables were analyzed for each of the identified data sets: occupation, annotation, approach, segmentation, representation, feature selection, balancing and addition of instances, and classifier used for recognition. This paper provides an analysis of the sensor-based data sets used in HAR to date, identifying the most appropriate data set for evaluating ARS and the classification techniques that generate better results.

    Radar Sensing in Assisted Living: An Overview

    This paper gives an overview of trends in radar sensing for assisted living. It focuses on signal processing and classification, covering conventional approaches, deep learning, and fusion techniques. The last section shows examples of classification in human activity recognition and medical applications, e.g., breathing disorder and sleep stage recognition.