
    Recognizing daily and sports activities in two open source machine learning environments using body-worn sensor units

    This study provides a comparative assessment of different techniques for classifying human activities performed while wearing inertial and magnetic sensor units on the chest, arms and legs. The gyroscope, accelerometer and magnetometer in each unit are tri-axial. A naive Bayesian classifier, artificial neural networks (ANNs), a dissimilarity-based classifier, three types of decision trees, Gaussian mixture models (GMMs) and support vector machines (SVMs) are considered. A feature set extracted from the raw sensor data using principal component analysis is used for classification. Three different cross-validation techniques are employed to validate the classifiers. A performance comparison of the classifiers is provided in terms of their correct differentiation rates, confusion matrices and computational cost. The highest correct differentiation rates are achieved with ANNs (99.2%), SVMs (99.2%) and a GMM (99.1%). GMMs may be preferable because of their lower computational requirements. Regarding the position of sensor units on the body, those worn on the legs are the most informative. Comparing the different sensor modalities indicates that if only a single sensor type is used, the highest classification rates are achieved with magnetometers, followed by accelerometers and gyroscopes. The study also provides a comparison between two commonly used open source machine learning environments (WEKA and PRTools) in terms of their functionality, manageability, classifier performance and execution times. © The British Computer Society 2013. All rights reserved.
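    The pipeline described in this abstract — PCA features from raw sensor data, cross-validated over several classifier families — can be sketched as follows. The dataset, component count and classifier settings below are illustrative stand-ins, not the study's actual configuration.

```python
# Sketch: PCA feature extraction followed by a cross-validated comparison
# of several of the classifier families named in the abstract.
# Synthetic data stands in for the tri-axial sensor features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=45, n_informative=20,
                           n_classes=4, random_state=0)  # stand-in sensor features

for name, clf in [("SVM", SVC()),
                  ("ANN", MLPClassifier(max_iter=1000, random_state=0)),
                  ("NB", GaussianNB())]:
    pipe = make_pipeline(PCA(n_components=10), clf)   # PCA features -> classifier
    acc = cross_val_score(pipe, X, y, cv=5).mean()    # cross-validated accuracy
    print(f"{name}: {acc:.3f}")
```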

    A Machine Learning Approach to Measure and Monitor Physical Activity in Children to Help Fight Overweight and Obesity

    Physical activity is important for maintaining a healthy lifestyle. Recommendations for physical activity levels are issued by most governments as part of public health measures, so reliable measurement of physical activity for regulatory purposes is vital. This has led researchers to explore standards for achieving this using wearable technology and artificial neural networks that produce classifications for specific physical activity events. Applied from a very early age, the ubiquitous capture of physical activity data using mobile and wearable technology may help us to understand how we can combat childhood obesity and the impact that this has in later life. A supervised machine learning approach is adopted in this paper that utilizes data obtained from accelerometer sensors worn by children in free-living environments. The paper presents a set of activities and features suitable for measuring physical activity and evaluates the use of a Multilayer Perceptron neural network to classify physical activities by activity type. A rigorous, reproducible data science methodology is presented for subsequent use in physical activity research. Our results show that it was possible to obtain an overall accuracy of 96%, with 95% sensitivity, 99% specificity and a kappa value of 94%, when three- and four-feature combinations were used.
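    The supervised approach described here — a Multilayer Perceptron trained on accelerometer-derived features, scored by accuracy and kappa — can be sketched as below. The features and labels are synthetic placeholders, not the children's free-living data.

```python
# Sketch: an MLP classifying activity types from accelerometer-style
# feature vectors, reporting accuracy and Cohen's kappa as in the paper.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=800, n_features=12, n_informative=8,
                           n_classes=3, random_state=1)  # placeholder features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```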

    Stratified Transfer Learning for Cross-domain Activity Recognition

    In activity recognition, it is often expensive and time-consuming to acquire sufficient activity labels. To solve this problem, transfer learning leverages labeled samples from the source domain to annotate a target domain that has few or no labels. Existing approaches typically learn a global domain shift while ignoring the intra-affinity between classes, which hinders the performance of the algorithms. In this paper, we propose a novel and general cross-domain learning framework that exploits the intra-affinity of classes to perform intra-class knowledge transfer. The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition. Specifically, STL first obtains pseudo labels for the target domain via a majority voting technique. Then, it performs intra-class knowledge transfer iteratively to transform both domains into the same subspaces. Finally, the labels of the target domain are obtained via a second annotation. To evaluate the performance of STL, we conduct comprehensive experiments on three large public activity recognition datasets (i.e. OPPORTUNITY, PAMAP2 and UCI DSADS), which demonstrate that STL significantly outperforms other state-of-the-art methods with respect to classification accuracy (an improvement of 7.68%). Furthermore, we extensively investigate the performance of STL across different degrees of similarity and activity levels between domains, and we discuss the potential of STL in other pervasive computing applications to provide empirical experience for future research. Comment: 10 pages; accepted by IEEE PerCom 2018 (camera-ready version of the full paper).
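    STL's first step — pseudo-labelling target-domain samples by majority voting over classifiers trained on the labelled source domain — can be sketched as below. The iterative intra-class subspace transfer and second annotation are omitted, and the data is synthetic.

```python
# Sketch: majority-vote pseudo-labelling of an unlabelled target domain
# using several base classifiers trained on the source domain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

Xs, ys = make_classification(n_samples=300, n_features=10, random_state=0)  # source
Xt, _ = make_classification(n_samples=100, n_features=10, random_state=1)   # target (labels unused)

preds = np.stack([clf.fit(Xs, ys).predict(Xt)
                  for clf in (SVC(), KNeighborsClassifier(),
                              DecisionTreeClassifier(random_state=0))])
# majority vote across the three classifiers, per target sample
pseudo = np.array([np.bincount(col).argmax() for col in preds.T])
print(pseudo[:10])
```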

    Subsampling Methods for Persistent Homology

    Persistent homology is a multiscale method for analyzing the shape of sets and functions from point cloud data arising from an unknown distribution supported on those sets. When the size of the sample is large, direct computation of the persistent homology is prohibitive due to the combinatorial nature of the existing algorithms. We propose to compute the persistent homology of several subsamples of the data and then combine the resulting estimates. We study the risk of two estimators and prove that the subsampling approach carries stable topological information while achieving a great reduction in computational complexity.
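    The subsample-and-combine strategy can be sketched as follows. For illustration, a simple geometric summary (mean pairwise distance) stands in for the persistence-diagram computation, which in practice would come from a TDA library such as GUDHI or Ripser.

```python
# Sketch: estimate a summary of a large point cloud by computing it on
# many small subsamples and combining (averaging) the results, instead
# of one prohibitive computation on the full cloud.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.standard_normal((10_000, 2))  # large point cloud

def summary(points):
    # Placeholder for the persistent-homology summary of one subsample.
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return d.mean()

subsample_size, n_subsamples = 200, 20
estimates = [summary(cloud[rng.choice(len(cloud), subsample_size, replace=False)])
             for _ in range(n_subsamples)]
combined = float(np.mean(estimates))  # combine the subsample estimates
print(combined)
```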

    The classification of skateboarding trick images by means of transfer learning and machine learning models

    The evaluation of trick execution in skateboarding is commonly carried out manually and subjectively: panels of judges rely on their prior experience to assess the quality of tricks performed during skateboarding competitions. This approach is impractical, particularly for large competitions, so an objective and unbiased means of evaluating a skateboarder's tricks is needed. This study aims to classify the flat-ground tricks Ollie, Kickflip, Pop Shove-it, Nollie Frontside Shove-it and Frontside 180 through camera vision and a combination of Transfer Learning (TL) and Machine Learning (ML). An amateur skateboarder (23 years of age with ± 5.0 years' experience) repeatedly executed five tricks of each type on an HZ skateboard, recorded by a YI action camera placed at a distance of 1.26 m on cemented ground. Features are extracted from the images automatically via 18 TL models, and the extracted features are then fed into different tuned ML classifiers, for instance Support Vector Machine (SVM), k-Nearest Neighbors (k-NN) and Random Forest (RF). A grid search optimization with five-fold cross-validation was used to tune the hyperparameters of the classifiers evaluated. The data (722 images) was split into training, validation and testing sets with a stratified 60:20:20 ratio. The study demonstrated that VGG16 + SVM and VGG19 + RF attained classification accuracies (CA) of 100% and 98%, respectively, on the test dataset, followed by VGG19 + k-NN and DenseNet201 + k-NN, which achieved a CA of 97%. To evaluate the developed pipelines, a robustness evaluation was carried out in the form of independent testing on augmented images (2250 images). It was found that VGG16 + SVM, VGG19 + k-NN and DenseNet201 + RF yield (on average) reasonable CAs of 99%, 98% and 97%, respectively. Conclusively, based on the robustness evaluation, it can be ascertained that the VGG16 + SVM pipeline classifies the tricks exceptionally well. The present study thus demonstrates that the proposed pipelines may help judges provide a more accurate evaluation of the tricks performed than the traditional method currently applied in competitions.
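    The evaluation protocol described in this abstract — a stratified 60:20:20 split and grid-search tuning of an SVM via five-fold cross-validation — can be sketched as below. Random vectors stand in for the CNN (e.g. VGG16) embeddings used in the study.

```python
# Sketch: stratified 60/20/20 train/validation/test split plus
# GridSearchCV (5-fold) hyperparameter tuning of an SVM classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=722, n_features=50, n_classes=5,
                           n_informative=20, random_state=0)  # stand-in TL features

# 60% train, then split the remaining 40% evenly into validation and test
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4,
                                              stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            stratify=y_rest, random_state=0)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
```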

    Ensemble residual network-based gender and activity recognition method with signals

    Nowadays, deep learning is one of the most popular research areas of computer science, and many deep networks have been proposed to solve artificial intelligence and machine learning problems. Residual networks (ResNets), for instance ResNet18, ResNet50 and ResNet101, are widely used deep networks in the literature. In this paper, a novel ResNet-based signal recognition method is presented. In this study, ResNet18, ResNet50 and ResNet101 are utilized as feature extractors, and each network extracts 1000 features. The extracted features are concatenated, yielding 3000 features. In the feature selection phase, the 1000 most discriminative features are selected using ReliefF, and these selected features are used as input for a third-degree polynomial (cubic) activation-based support vector machine. The proposed method achieved 99.96% and 99.61% classification accuracy rates for gender and activity recognition, respectively. These results clearly demonstrate that the proposed pre-trained ensemble ResNet-based method achieves a high success rate on sensor signals. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
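    The ensemble pipeline — features from three networks concatenated, the most discriminative subset selected, then a cubic SVM — can be sketched as below. Random vectors stand in for the ResNet18/50/101 features, and mutual information stands in for ReliefF, which scikit-learn does not provide.

```python
# Sketch: concatenate three feature blocks (stand-ins for the 3x1000
# ResNet features), select the most discriminative subset, and classify
# with a degree-3 polynomial ("cubic") SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X1, y = make_classification(n_samples=400, n_features=100, n_informative=15,
                            random_state=0)
X2 = X1 + np.random.default_rng(1).normal(0, 0.5, X1.shape)  # "second network"
X3 = X1 + np.random.default_rng(2).normal(0, 0.5, X1.shape)  # "third network"
X = np.hstack([X1, X2, X3])  # concatenated feature set

pipe = make_pipeline(SelectKBest(mutual_info_classif, k=100),  # ReliefF stand-in
                     SVC(kernel="poly", degree=3))             # cubic SVM
acc = cross_val_score(pipe, X, y, cv=3).mean()
print(acc)
```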

    Context-Aware Deep Sequence Learning with Multi-View Factor Pooling for Time Series Classification

    In this paper, we propose an effective, multi-view, multivariate deep classification model for time-series data. Multi-view methods show promise in their ability to learn correlation and exclusivity properties across different independent information resources. However, most current multi-view integration schemes employ only a linear model and, therefore, do not extensively utilize the relationships observed across different view-specific representations. Moreover, the majority of these methods rely exclusively on sophisticated, handcrafted features to capture local data patterns and, thus, depend heavily on large collections of labeled data. The multi-view, multivariate deep classification model for time-series data proposed in this paper makes important contributions to address these limitations. The proposed model derives an LSTM-based deep feature descriptor to model both the view-specific data characteristics and cross-view interaction in an integrated deep architecture, while driving the learning phase in a data-driven manner. The model employs a compact context descriptor that exploits view-specific affinity information to design a more insightful context representation. Finally, the model uses a multi-view factor-pooling scheme for a context-driven attention learning strategy to weigh the most relevant feature dimensions while eliminating noise from the resulting fused descriptor. As shown by experiments, compared to existing multi-view methods, the proposed multi-view deep sequential learning approach improves classification performance by roughly 4% on the UCI multi-view activity recognition dataset, while also showing significantly more robust generalized representation capacity than its single-view counterparts in classifying several large-scale multi-view light curve collections.
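    The context-driven attention idea can be illustrated schematically: per-view descriptors are weighted by softmax attention scores derived from a context vector, then fused. The dimensions and the dot-product scoring function below are illustrative choices, not the paper's exact architecture.

```python
# Sketch: context-driven attention pooling over view-specific descriptors.
import numpy as np

rng = np.random.default_rng(0)
views = rng.standard_normal((3, 16))   # 3 view-specific descriptors, dim 16
context = rng.standard_normal(16)      # compact context descriptor

scores = views @ context                          # relevance of each view to the context
weights = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights
fused = weights @ views                           # attention-weighted fused descriptor
print(fused.shape)  # (16,)
```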

    Stacked Autoencoder and Meta-Learning based Heterogeneous Domain Adaptation for Human Activity Recognition

    The field of human activity recognition (HAR) using machine learning approaches has gained considerable interest in the research community, both because it empowers automation and autonomous systems in industries and homes with respect to the given context and because of the increasing number of smart wearable devices. However, it is challenging to achieve considerable accuracy in recognizing actions with a diverse set of wearable devices because of their variance in feature spaces, sampling rates, units, sensor modalities and so forth. Furthermore, collecting annotated data has always been a serious issue in the machine learning community. Domain adaptation helps to cope with this issue by training on the source domain and labeling the samples in the target domain; however, due to the aforementioned variances (heterogeneity) in wearable sensor data, the action recognition accuracy remains on the lower side. Existing studies try to make the target domain feature space compliant with the source domain to improve the results, but they assume that the system has prior knowledge of the feature space of the target domain, which does not reflect real-world conditions. In this regard, we propose a stacked autoencoder and meta-learning based heterogeneous domain adaptation (SAM-HDD) network. The stacked autoencoder part is trained on the source domain feature space to extract the latent representation and train the employed classifiers accordingly. The classification probabilities from the classifiers are used to train a meta-learner to further improve the recognition performance. The data from the target domain undergoes the encoding layers of the trained stacked autoencoders to extract the latent representations, followed by classification with the trained classifiers and meta-learner. The results show that the proposed approach is efficient in terms of accuracy score and achieves the best results among the existing works.
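    The meta-learning stage — base classifiers' predicted probabilities feeding a meta-learner — follows the stacked-generalization pattern, which can be sketched as below. The autoencoder-derived latent representation is replaced by raw features for brevity, so this is a sketch of the ensembling idea only, not SAM-HDD itself.

```python
# Sketch: stacked generalization, where a logistic-regression meta-learner
# is trained on the base classifiers' predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),   # the meta-learner
    stack_method="predict_proba")           # feed it class probabilities
stack.fit(X_tr, y_tr)
print("accuracy:", stack.score(X_te, y_te))
```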