3 research outputs found

    Evaluation of an Optimized K-Means Algorithm Based on Real Data


    Multi-task Self-Supervised Learning for Human Activity Detection

    Deep learning methods are successfully used in applications pertaining to ubiquitous computing, health, and well-being. In particular, the area of human activity recognition (HAR) has been largely transformed by convolutional and recurrent neural networks, thanks to their ability to learn semantic representations from raw input. However, extracting generalizable features requires massive amounts of well-curated data, which are notoriously difficult to obtain owing to privacy issues and annotation costs. Unsupervised representation learning is therefore of prime importance for leveraging the vast amount of unlabeled data produced by smart devices. In this work, we propose a novel self-supervised technique for feature learning from sensory data that does not require access to any form of semantic labels. We train a multi-task temporal convolutional network to recognize transformations applied to an input signal. By exploiting these transformations, we demonstrate that simple binary-classification auxiliary tasks provide a strong supervisory signal for extracting features that are useful for the downstream task. We extensively evaluate the proposed approach on several publicly available datasets for smartphone-based HAR in unsupervised, semi-supervised, and transfer-learning settings. Our method achieves performance superior to or comparable with fully supervised networks, and it performs significantly better than autoencoders. Notably, in the semi-supervised case, the self-supervised features substantially boost the detection rate, attaining a kappa score between 0.7 and 0.8 with only 10 labeled examples per class. We obtain similarly impressive performance even when the features are transferred from a different data source. While this paper focuses on HAR as the application domain, the proposed technique is general and could be applied to a wide variety of problems in other areas.
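    The transformation-recognition idea described in this abstract can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' implementation: the window shape, the three example transformations (additive noise, amplitude scaling, time reversal), and the small PyTorch network are all assumptions chosen for brevity. Each transformation is applied at random to a sensor window, and a shared temporal convolutional encoder with one binary head per transformation learns to detect which transformations were applied.

```python
# Minimal sketch of transformation-recognition pretext training (assumed
# shapes and transformations, not the paper's exact architecture).
import numpy as np
import torch
import torch.nn as nn

def random_transforms(window: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Apply each transformation with probability 0.5; return signal + labels."""
    labels = np.random.randint(0, 2, size=3).astype(np.float32)
    out = window.copy()
    if labels[0]:                                   # additive Gaussian noise
        out = out + np.random.normal(0, 0.05, out.shape)
    if labels[1]:                                   # amplitude scaling
        out = out * np.random.uniform(0.7, 1.3)
    if labels[2]:                                   # time reversal
        out = out[::-1].copy()
    return out.astype(np.float32), labels

class MultiTaskTCN(nn.Module):
    """Shared temporal conv encoder with one binary head per pretext task."""
    def __init__(self, channels: int = 3, n_tasks: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(n_tasks)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                                   # (batch, 64)
        return torch.cat([h(z) for h in self.heads], dim=1)   # (batch, n_tasks)

# Toy training step on synthetic windows: (batch, time, 3) -> (batch, 3, time).
windows = np.random.randn(16, 128, 3).astype(np.float32)
xs, ys = zip(*(random_transforms(w) for w in windows))
x = torch.from_numpy(np.stack(xs)).permute(0, 2, 1)
y = torch.from_numpy(np.stack(ys))

model, loss_fn = MultiTaskTCN(), nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

    In a semi-supervised or transfer setting along the lines the abstract describes, the pretrained encoder would then be reused and a small classifier fitted on the few available labeled activity windows.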

    Enhanced context-aware framework for individual and crowd condition prediction

    A context-aware framework is a basic framework that utilizes contexts such as a user's individual activities, location, and time, which are hidden information derived from smartphone sensors. These data are used to monitor a situation in a crowd scenario. Applied through embedded sensors, such a framework has the potential to monitor situations that are otherwise practically difficult to access. Inaccuracies in individual activity recognition (IAR), caused by faulty accelerometer data and data classification problems, make it inefficient for prediction. This study addresses the problem with a feature extraction and selection method that provides higher accuracy by keeping only the relevant features and minimizing the false negative rate (FNR) of the IAR used for crowd condition prediction. The approach used was the enhanced context-aware framework (EHCAF) for the prediction of human movement activities during an emergency. Three new methods were introduced to ensure high accuracy and a low FNR. Firstly, an improved statistical-based time-frequency domain (SBTFD) representation that extracts hidden context information from sensor signals with improved accuracy. Secondly, a feature selection method (FSM) that achieves improved accuracy with the SBTFD features and a low FNR. Finally, a method for individual behaviour estimation (IBE) and crowd condition prediction, based on thresholding and crowd density determination (CDD), that achieves a low FNR. In this approach, individual behaviour estimation uses the best selected features, flow velocity estimation, and direction to determine the disparity value of abnormal individual behaviour in a crowd. These are used to evaluate individual and crowd density determination in terms of inflow, outflow, and crowd turbulence during an emergency. Classifiers were used to confirm the features' ability to differentiate IAR data classes. Experiments with SBTFD and a decision tree (J48) classifier produced a maximum accuracy of 99.2% and an FNR of 3.3%. Classifying the individual activities with the 7 best features reduced the feature dimension, achieved an accuracy of 99.1%, and kept the FNR as low as 2.8%. In conclusion, the enhanced context-aware framework developed in this research proved to be a viable solution for individual and crowd condition prediction.
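    As a rough illustration of the feature-extraction, feature-selection, and classification pipeline this abstract describes, the sketch below computes simple statistical features in the time and frequency domains (a stand-in for the paper's SBTFD representation), keeps the 7 best features with a univariate score, and trains a decision tree. scikit-learn's CART classifier is used here in place of WEKA's J48, and the data shapes and feature choices are assumptions rather than the authors' exact design.

```python
# Minimal sketch of an SBTFD-style pipeline (assumed features and data,
# not the paper's exact method).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def sbtfd_features(window: np.ndarray) -> np.ndarray:
    """Statistical features of one (time, 3-axis) window in both domains."""
    def stats(signal):
        return np.concatenate([signal.mean(0), signal.std(0),
                               signal.min(0), signal.max(0)])
    spectrum = np.abs(np.fft.rfft(window, axis=0))          # frequency-domain view
    return np.concatenate([stats(window), stats(spectrum)])  # 24 features total

# Synthetic stand-in data: 200 windows of 128 accelerometer samples, 4 classes.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 128, 3))
labels = rng.integers(0, 4, size=200)

X = np.stack([sbtfd_features(w) for w in windows])
X_best = SelectKBest(f_classif, k=7).fit_transform(X, labels)   # FSM step

tree = DecisionTreeClassifier(random_state=0)                   # J48 stand-in
print("cross-validated accuracy:",
      cross_val_score(tree, X_best, labels, cv=5).mean())
```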