80 research outputs found

    Leveraging wearable sensors for human daily activity recognition with stacked denoising autoencoders

    Get PDF
    Activity recognition has received considerable attention in many research fields, such as industry and healthcare. However, most work in the current literature has focused on static and dynamic activities, while transitional activities, such as stand-to-sit and sit-to-stand, are more difficult to recognize than either, even though recognizing them may be important in real applications. This paper therefore proposes a novel framework that recognizes static, dynamic, and transitional activities using stacked denoising autoencoders (SDAE), which, as a deep learning model, extract features automatically rather than relying on the manual features used by conventional machine learning methods. Moreover, a resampling technique (random oversampling) is used to mitigate the class imbalance caused by the relatively short duration of transitional activities. An experimental protocol was designed to collect twelve daily activities (of three types) using wearable sensors from 10 adults in the smart lab of Ulster University. The results show strong performance on transitional activity recognition and an overall accuracy of 94.88% across the three activity types. Comparisons with other methods and results on three further public datasets verify the feasibility and superiority of our framework. This paper also explores the effect of multiple sensors (accelerometer and gyroscope) to determine the optimal combination for activity recognition.
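    The abstract does not include an implementation; the following is a minimal PyTorch sketch of one denoising-autoencoder layer of the kind such a framework stacks. The layer sizes, noise level, and training snippet are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One layer of a stacked denoising autoencoder (SDAE): it is
    trained to reconstruct the clean input from a corrupted copy,
    and its encoder output feeds the next layer in the stack."""
    def __init__(self, in_dim=128, hidden_dim=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)  # additive Gaussian noise
        code = self.encoder(corrupted)
        return self.decoder(code), code

# One layer-wise pretraining step on a batch of (synthetic) sensor windows.
dae = DenoisingAutoencoder()
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
x = torch.randn(32, 128)                 # 32 windows of 128 flattened samples
recon, _ = dae(x)
loss = nn.functional.mse_loss(recon, x)  # the target is the *clean* input
opt.zero_grad()
loss.backward()
opt.step()
```

    The random-oversampling step mentioned in the abstract could be reproduced with, for example, imblearn's RandomOverSampler applied to the training windows before pretraining.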

    Learning Human Motion Models for Long-term Predictions

    Full text link
    We propose a new architecture for learning predictive spatio-temporal motion models from data alone. Our approach, dubbed the Dropout Autoencoder LSTM, is capable of synthesizing natural-looking motion sequences over long time horizons without catastrophic drift or motion degradation. The model consists of two components: a 3-layer recurrent neural network that models temporal aspects, and a novel autoencoder that is trained to implicitly recover the spatial structure of the human skeleton by randomly removing information about joints during training. This Dropout Autoencoder (D-AE) is then used to filter each pose predicted by the LSTM, reducing the accumulation of error, and hence drift, over time. Furthermore, we propose new evaluation protocols to assess the quality of synthetic motion sequences even when no ground-truth data exist. The proposed protocols can be used to assess generated sequences of arbitrary length. Finally, we evaluate our method on two of the largest motion-capture datasets available to date and show that our model outperforms the state of the art on a variety of actions, including cyclic and acyclic motion, and that it can produce natural-looking sequences over longer time horizons than previous methods.
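    A minimal sketch of the two-component idea, assuming an LSTM predictor whose each predicted pose is filtered by a dropout autoencoder. Pose dimensionality, layer widths, dropout rate, and horizon are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class DropoutAutoencoder(nn.Module):
    """Denoises a predicted pose; trained with random joint-level dropout
    so it learns to recover the skeleton's spatial structure."""
    def __init__(self, pose_dim=54, hidden=128, p_drop=0.2):
        super().__init__()
        self.p_drop = p_drop
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, pose_dim))

    def forward(self, pose):
        if self.training:  # remove joint information only during training
            mask = (torch.rand_like(pose) > self.p_drop).float()
            pose = pose * mask
        return self.net(pose)

class MotionPredictor(nn.Module):
    def __init__(self, pose_dim=54, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, num_layers=3, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)
        self.dae = DropoutAutoencoder(pose_dim)

    def forward(self, seq, steps=25):
        """Autoregressively synthesize `steps` future poses, filtering
        each LSTM prediction through the D-AE to limit drift."""
        _, state = self.lstm(seq)           # encode the observed sequence
        pose, preds = seq[:, -1:, :], []
        for _ in range(steps):
            h, state = self.lstm(pose, state)
            pose = self.dae(self.out(h))    # denoised pose is fed back in
            preds.append(pose)
        return torch.cat(preds, dim=1)

model = MotionPredictor()
seed = torch.randn(2, 50, 54)      # 50 observed frames of a 54-dim pose
future = model(seed, steps=25)     # -> (2, 25, 54)
```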

    Adversarial Variational Embedding for Robust Semi-supervised Learning

    Full text link
    Semi-supervised learning is sought for leveraging unlabelled data when labelled data are difficult or expensive to acquire. Deep generative models (e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial Networks (GANs) have recently shown promising performance in semi-supervised classification owing to their excellent discriminative representation ability. However, the latent code learned by a traditional VAE is not exclusive (repeatable) for a specific input sample, which prevents it from achieving excellent classification performance. In particular, the learned latent representation depends on a non-exclusive component that is stochastically sampled from the prior distribution. Moreover, semi-supervised GAN models generate data from a pre-defined distribution (e.g., Gaussian noise) that is independent of the input data distribution, which may obstruct convergence and makes the distribution of the generated data difficult to control. To address these issues, we propose a novel Adversarial Variational Embedding (AVAE) framework for robust and effective semi-supervised learning, which leverages both the advantage of the GAN as a high-quality generative model and that of the VAE as a posterior-distribution learner. The proposed approach first produces an exclusive latent code with a model we call VAE++ and, at the same time, provides a meaningful prior distribution for the generator of the GAN. The approach is evaluated on four different real-world applications, and our method outperforms state-of-the-art models, confirming that the combination of VAE++ and GAN provides significant improvements in semi-supervised classification.
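    The exclusivity issue the abstract raises can be made concrete with a small PyTorch sketch: a standard VAE encoder samples its latent code stochastically, so two encodings of the same input differ, whereas a deterministic code is repeatable. Here the deterministic code is simply the posterior mean, a stand-in assumption for illustration only, not the paper's adversarially learned VAE++ code; all dimensions are likewise assumptions.

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Standard VAE encoder: the latent code z is stochastic, so the same
    input yields different codes across passes (the non-exclusive
    component the abstract criticizes)."""
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x, exclusive=False):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        if exclusive:
            return mu                       # deterministic, repeatable code
        eps = torch.randn_like(mu)          # stochastic reparameterization
        return mu + eps * (0.5 * logvar).exp()

x = torch.randn(4, 784)
enc = VAEEncoder()
z1, z2 = enc(x), enc(x)                     # differ: non-exclusive codes
e1, e2 = enc(x, exclusive=True), enc(x, exclusive=True)  # identical codes
```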

    Machine Learning in Robotic Navigation:Deep Visual Localization and Adaptive Control

    Get PDF
    The work conducted in this thesis contributes to the field of robotic navigation by focusing on different machine learning solutions: supervised learning with (deep) neural networks, unsupervised learning, and reinforcement learning. First, we propose a semi-supervised machine learning approach that can dynamically update the robot controller's parameters using situational analysis through feature extraction and unsupervised clustering. The results show that the robot can adapt to changes in its surroundings, resulting in a thirty percent improvement in navigation speed and stability. Then, we train multiple deep neural networks to estimate the robot's position in the environment, using ground-truth information provided by a classical localization and mapping approach. We prepare two image-based localization datasets in 3D simulation and compare the results of a traditional multilayer perceptron, a stacked denoising autoencoder, and a convolutional neural network (CNN). The experimental results show that our proposed Inception-based CNNs without pooling layers perform very well in all the environments. Finally, we propose a two-stage learning framework for visual navigation in which the experience gained by the agent while exploring one goal is shared to learn to navigate to other goals. The multi-goal Q-function learns to traverse the environment using the provided discretized map. Transfer learning is applied to the multi-goal Q-function from a maze structure to a 2D simulator, and the framework is finally deployed in a 3D simulator, where the robot uses the locations estimated by the position-estimator CNNs. The results show a significant improvement when multi-goal reinforcement learning is used.
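    The multi-goal Q-function on a discretized map can be sketched as a goal-conditioned Q-table, so that experience gathered while pursuing one goal updates a table shared across goals. The grid size, reward values, hyperparameters, and the convention that goals are grid cells 0..4 are all illustrative assumptions, not the thesis's setup.

```python
import numpy as np

# Toy goal-conditioned Q-table on a 5x5 discretized map:
# Q is indexed by (goal, state, action).
n_states, n_actions, n_goals = 25, 4, 5
Q = np.zeros((n_goals, n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Deterministic grid transition: up, down, left, right, with walls."""
    r, c = divmod(state, 5)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
    return min(max(r + dr, 0), 4) * 5 + min(max(c + dc, 0), 4)

for _ in range(500):
    goal = int(rng.integers(n_goals))      # sample which goal to pursue
    s = int(rng.integers(n_states))
    for _ in range(50):
        a = (int(rng.integers(n_actions)) if rng.random() < epsilon
             else int(Q[goal, s].argmax()))  # epsilon-greedy action
        s2 = step(s, a)
        reward = 1.0 if s2 == goal else -0.01
        # Q-learning update on the shared, goal-conditioned table.
        Q[goal, s, a] += alpha * (reward + gamma * Q[goal, s2].max()
                                  - Q[goal, s, a])
        s = s2
        if s == goal:
            break
```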

    A Review of Physical Human Activity Recognition Chain Using Sensors

    Get PDF
    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Improving lifestyles, encouraging healthy behaviours, and reducing chronic disease are urgently required, yet tracking and monitoring the critical conditions of elderly people and patients remains a great challenge, and healthcare services for them are crucial to achieving a high level of safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and of patients. The main aim of this review is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
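    The recognition chain the review describes can be sketched end to end: raw inertial samples, sliding-window segmentation, hand-crafted features, then a classifier. The window length, the specific features, the synthetic data, and the random-forest choice are illustrative assumptions, not recommendations from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment(signal, win=128, step=64):
    """Sliding-window segmentation of a (n_samples, n_axes) signal."""
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def extract_features(windows):
    """Simple time-domain features per window and axis."""
    return np.concatenate([windows.mean(axis=1),
                           windows.std(axis=1),
                           np.abs(windows).max(axis=1)], axis=1)

rng = np.random.default_rng(0)
acc = rng.standard_normal((10_000, 3))   # synthetic tri-axial accelerometer
windows = segment(acc)                   # (n_windows, 128, 3)
X = extract_features(windows)            # (n_windows, 9)
y = rng.integers(0, 4, size=len(X))      # synthetic activity labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```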

    Deep learning approaches for detecting freezing of gait in Parkinson’s disease patients through on-body acceleration sensors

    Get PDF
    Freezing of gait (FOG) is one of the most incapacitating motor symptoms in Parkinson’s disease (PD). The occurrence of FOG reduces patients’ quality of life and leads to falls. FOG assessment has usually been made through questionnaires; however, this method can be subjective and may not provide an accurate representation of the severity of the symptom. Sensor-based systems can provide accurate and objective information for tracking the evolution of symptoms, in order to optimize PD management and treatment. Several authors have proposed methods based on wearables and the analysis of inertial signals to detect FOG in laboratory conditions; however, their performance is usually lower when used in patients’ homes. This study presents a new approach based on a recurrent neural network (RNN) and a single waist-worn triaxial accelerometer to enhance FOG detection performance for use in real home environments. Several machine learning and deep learning approaches for FOG detection are also evaluated using leave-one-subject-out (LOSO) cross-validation. The results show that modeling the spectral information of adjacent windows through an RNN significantly improves FOG detection performance without increasing the length of the analysis window (a requirement for using it in a cueing system). This work has been supported by: (1) FCT (Fundação para a Ciência e Tecnologia) within the R&D Units Project Scope UIDB/00319/2020; (2) Grupo de Investigación en Instrumentación y Acústica Aplicada (I2A2), ETSI Industriales, Universidad Politécnica de Madrid. The authors would like to thank the Technical Research Centre for Dependency Care and Autonomous Living (CETpD) for sharing the freezing-of-gait data.
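    A minimal sketch of the core idea, assuming an LSTM that consumes per-window magnitude spectra from adjacent accelerometer windows and scores the current window. The number of context windows, the FFT bin count, the window length, and the network sizes are all assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpectralFOGDetector(nn.Module):
    """Models a short sequence of per-window spectral feature vectors
    with an LSTM and emits a FOG/no-FOG score for the current window."""
    def __init__(self, n_bins=64, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, spectra):          # (batch, n_windows, n_bins)
        out, _ = self.rnn(spectra)
        return torch.sigmoid(self.head(out[:, -1]))  # score for last window

# Context of 5 adjacent raw windows of 128 samples each, summarized
# by the first 64 magnitude-spectrum bins per window.
windows = torch.randn(8, 5, 128)
spectra = torch.abs(torch.fft.rfft(windows, dim=-1))[..., :64]
print(SpectralFOGDetector()(spectra).shape)           # torch.Size([8, 1])
```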