
    Domain Generalization for Activity Recognition via Adaptive Feature Fusion

    Human activity recognition requires building a generalizable model from training datasets in the hope of achieving good performance on test datasets. However, in real applications, the training and testing datasets may have totally different distributions for reasons such as different body shapes, acting styles, and habits, which damages the model's generalization performance. While such a distribution gap can be reduced by existing domain adaptation approaches, they typically assume that the test data can be accessed during training, which is not realistic. In this paper, we consider a more practical and challenging scenario: domain-generalized activity recognition (DGAR), where the test dataset \emph{cannot} be accessed during training. To this end, we propose \emph{Adaptive Feature Fusion for Activity Recognition~(AFFAR)}, a domain generalization approach that learns to fuse domain-invariant and domain-specific representations to improve the model's generalization performance. AFFAR takes the best of both worlds: domain-invariant representations enhance transferability across domains, while domain-specific representations preserve the discriminative power learned from each domain. Extensive experiments on three public HAR datasets show its effectiveness. Furthermore, we apply AFFAR to a real application, i.e., the diagnosis of children's Attention Deficit Hyperactivity Disorder~(ADHD), which also demonstrates the superiority of our approach.
    Comment: Accepted by ACM Transactions on Intelligent Systems and Technology (TIST) 2022; Code: https://github.com/jindongwang/transferlearning/tree/master/code/DeepD
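The fusion idea in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual architecture: random linear layers stand in for trained feature extractors, and a softmax over learned logits mixes the per-domain branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy feature extractors: random weights stand in for trained networks.
W_shared = rng.normal(size=(8, 16))                       # domain-invariant branch
W_domain = [rng.normal(size=(8, 16)) for _ in range(3)]   # one branch per source domain

def fused_features(x, alpha_logits):
    """Fuse a shared (domain-invariant) representation with a weighted
    mix of domain-specific representations, as the abstract describes."""
    shared = relu(x @ W_shared)
    # Softmax decides how much each source domain contributes.
    alpha = np.exp(alpha_logits) / np.exp(alpha_logits).sum()
    specific = sum(a * relu(x @ W) for a, W in zip(alpha, W_domain))
    return np.concatenate([shared, specific])

x = rng.normal(size=8)
z = fused_features(x, alpha_logits=np.array([0.2, 1.5, -0.3]))
print(z.shape)  # fused representation: shared + weighted specific features
```

In a trained model the `alpha_logits` would be produced by a small network conditioned on the input, so the fusion adapts per sample.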

    Cross Dataset Evaluation for IoT Network Intrusion Detection

    With the advent of Internet of Things (IoT) technology, ensuring the security of IoT networks has become important. Several intrusion detection systems (IDS) are available for analyzing and predicting network anomalies and threats. However, it is challenging to evaluate them in a way that realistically estimates their performance when deployed. A lot of research has been conducted in which training and testing are done on the same simulated dataset. Realistically, however, the network on which an intrusion detection model is deployed will be very different from the network on which it was trained. The aim of this research is to perform a cross-dataset evaluation of different machine learning models for IDS. This helps ensure that a model that performs well when evaluated on one dataset will also perform well when deployed. Two publicly available simulation datasets, IoTID20 and Bot-IoT, created to capture IoT network traffic under different attacks such as DoS and scanning, were used for training and testing. Machine learning models applied to these datasets were evaluated within each dataset, followed by cross-dataset evaluation. A significant difference was observed between the results obtained using the two datasets. Supervised machine learning models were built and evaluated for binary classification, to distinguish normal from anomalous instances, as well as for multiclass classification, to also categorize the type of attack on the IoT network.
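The within-dataset versus cross-dataset protocol described above can be sketched as follows. The datasets here are synthetic stand-ins (not IoTID20 or Bot-IoT), with a deliberate distribution shift between them, and the random-forest model is just one plausible choice:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def make_dataset(shift):
    """Synthetic stand-in for a flow-feature dataset; `shift` mimics the
    distribution gap between two capture environments."""
    X = rng.normal(loc=shift, size=(600, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # 0 = normal, 1 = anomaly
    return train_test_split(X, y, test_size=0.3, random_state=0)

datasets = {"A": make_dataset(0.0), "B": make_dataset(1.0)}

results = {}
for tr_name, (X_tr, _, y_tr, _) in datasets.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    for te_name, (_, X_te, _, y_te) in datasets.items():
        results[(tr_name, te_name)] = accuracy_score(y_te, clf.predict(X_te))
        print(f"train={tr_name} test={te_name} acc={results[(tr_name, te_name)]:.2f}")
```

The off-diagonal entries (train on one dataset, test on the other) drop sharply relative to the within-dataset scores, which is exactly the gap the abstract reports between the two real datasets.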

    ContrasGAN: unsupervised domain adaptation in Human Activity Recognition via adversarial and contrastive learning

    Human Activity Recognition (HAR) makes it possible to drive applications directly from embedded and wearable sensors. Machine learning, and especially deep learning, has made significant progress in learning sensor features from raw sensing signals with high recognition accuracy. However, most techniques need to be trained on a large labelled dataset, which is often difficult to acquire. In this paper, we present ContrasGAN, an unsupervised domain adaptation technique that addresses this labelling challenge by transferring an activity model from one labelled domain to other unlabelled domains. ContrasGAN uses bi-directional generative adversarial networks for heterogeneous feature transfer and contrastive learning to capture distinctive features between classes. We evaluate ContrasGAN on three commonly used HAR datasets under conditions of cross-body, cross-user, and cross-sensor transfer learning. Experimental results show superior performance of ContrasGAN on all these tasks over a number of state-of-the-art techniques, with relatively low computational cost.
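The role contrastive learning plays here (pulling same-class embeddings together and pushing different-class embeddings apart) can be illustrated with a generic pairwise margin loss. This is a textbook formulation, not ContrasGAN's exact objective; the features and margin below are made up for the example:

```python
import numpy as np

def contrastive_margin_loss(features, labels, margin=1.0):
    """Average pairwise contrastive loss: squared distance for same-class
    pairs, squared hinge on `margin` for different-class pairs."""
    n = len(labels)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # intra-class: minimise distance
            else:
                loss += max(0.0, margin - d) ** 2   # inter-class: enforce margin
            pairs += 1
    return loss / pairs

# Two tight, well-separated clusters -> loss is near zero.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [2.1, 2.0]])
labels = [0, 0, 1, 1]
print(round(contrastive_margin_loss(feats, labels), 4))
```

Minimising such a loss on target-domain features (with pseudo- or transferred labels) is what lets the model separate activity classes with similar signal patterns.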

    Unsupervised domain adaptation in sensor-based human activity recognition

    Sensor-based human activity recognition (HAR) recognises human daily activities through a collection of ambient and wearable sensors. It has a significant impact on a wide range of applications in smart cities, smart homes, and personal healthcare. Such wide deployment of HAR systems often faces the annotation-scarcity challenge: most HAR techniques, especially deep learning techniques, require large amounts of training data, while annotating sensor data is very time- and effort-consuming. Unsupervised domain adaptation has been successfully applied to tackle this challenge, whereby activity knowledge from a well-annotated domain can be transferred to a new, unlabelled domain. However, existing techniques do not perform well on highly heterogeneous domains. To address this problem, this thesis proposes unsupervised domain adaptation models for human activity recognition. The first model is a new knowledge- and data-driven technique that achieves coarse- and fine-grained feature alignment using variational autoencoders. It demonstrates high recognition accuracy and robustness against sensor noise compared with state-of-the-art domain adaptation techniques. However, its limitations are that knowledge-driven annotation can be inaccurate and that the model incurs extra knowledge engineering effort to map the source and target domains, which limits its applicability. To tackle these limitations, we then present two data-driven unsupervised domain adaptation techniques. The first is based on bidirectional generative adversarial networks (Bi-GAN). To improve the matching between the source and target domains, we employ Kernel Mean Matching (KMM) to correct covariate shift between the transformed source data and the original target data so that they can be better aligned. This technique works well, but it does not separate classes that have similar patterns. To tackle this problem, our second method adds contrastive learning during the adaptation process to minimise intra-class discrepancy and maximise the inter-class margin. Both methods are validated with high-accuracy results in various experiments using three HAR datasets and multiple transfer learning tasks, in comparison with 12 state-of-the-art techniques.
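The KMM step mentioned above can be sketched in isolation. The thesis applies KMM to Bi-GAN-transformed features; this standalone sketch instead solves only the ridge-regularised least-squares form of KMM on raw 2-D Gaussian data (no QP constraints, no GAN), with kernel bandwidth and ridge chosen arbitrarily:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmm_weights(X_src, X_tgt, gamma=0.5, ridge=0.1):
    """Unconstrained KMM: re-weight source samples so the weighted source
    mean in kernel space matches the target mean (least-squares solution
    of min_beta ||mean_s(beta * phi) - mean_t(phi)||^2, ridge-stabilised)."""
    n_s, n_t = len(X_src), len(X_tgt)
    K = rbf(X_src, X_src, gamma) + ridge * np.eye(n_s)
    kappa = (n_s / n_t) * rbf(X_src, X_tgt, gamma).sum(axis=1)
    beta = np.linalg.solve(K, kappa)
    return np.clip(beta, 0.0, None)  # importance weights must be non-negative

rng = np.random.default_rng(1)
X_src = rng.normal(0.0, 1.0, size=(100, 2))
X_tgt = rng.normal(1.0, 1.0, size=(100, 2))   # shifted target domain
beta = kmm_weights(X_src, X_tgt)

# Source points near the target distribution should receive larger weights.
d_to_tgt = np.linalg.norm(X_src - X_tgt.mean(0), axis=1)
order = np.argsort(d_to_tgt)
print(beta[order[:10]].mean(), beta[order[-10:]].mean())
```

Training the downstream classifier with these `beta` weights on the source samples is what corrects the covariate shift before class boundaries are learned.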