On-line Context Aware Physical Activity Recognition from the Accelerometer and Audio Sensors of Smartphones
Activity Recognition (AR) from smartphone sensors has become a hot topic in the mobile computing domain, since it can provide services directly to the user (health monitoring, fitness, context-awareness) as well as to third-party applications and social networks (performance sharing, profiling). Most of the research effort has focused on direct recognition from accelerometer sensors, and few studies have integrated the audio channel into their models, despite the fact that it is a sensor available on all kinds of smartphones. In this study, we show that audio features bring an important performance improvement over an accelerometer-based approach. Moreover, the study demonstrates the value of considering the smartphone's location for on-line context-aware AR and the predictive power of audio features for this task. Finally, another contribution of the study is the collected corpus, which is made available to the community for AR from audio and accelerometer sensors.
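As a rough illustration of the kind of sensor fusion this abstract describes, the Python sketch below computes simple statistical features from a 3-axis accelerometer window, MFCC features from the aligned audio window, and concatenates them for a standard classifier. The specific features (per-axis mean/std, 13 MFCCs) and the RandomForestClassifier are illustrative assumptions, not the exact pipeline used in the paper.

```python
# Hedged sketch of accelerometer + audio feature fusion for activity recognition.
# Feature choices and classifier are assumptions for illustration only.
import numpy as np
import librosa  # audio feature extraction
from sklearn.ensemble import RandomForestClassifier

def accel_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration."""
    mag = np.linalg.norm(window, axis=1)            # magnitude is orientation-invariant
    return np.concatenate([window.mean(axis=0),     # per-axis mean
                           window.std(axis=0),      # per-axis std
                           [mag.mean(), mag.std()]])

def audio_features(signal, sr=16000):
    """signal: 1-D audio samples aligned with the accelerometer window."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)                        # average MFCCs over the window

def fused_features(accel_window, audio_window, sr=16000):
    """Concatenate both modalities into one feature vector."""
    return np.concatenate([accel_features(accel_window),
                           audio_features(audio_window, sr)])

# Usage on paired windows X_acc, X_aud with activity labels y:
# X = np.stack([fused_features(a, s) for a, s in zip(X_acc, X_aud)])
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```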
A CNN Based Transfer Learning Model for Automatic Activity Recognition from Accelerometer Sensors
Accelerometers have become ubiquitous and are available in many devices such as smartphones, smartwatches, fitness trackers, and wearables. They are increasingly used to monitor human activities of daily living in different contexts, such as monitoring the activities of persons with cognitive deficits in smart homes and monitoring physical and fitness activities. Activity recognition is the core component of such monitoring applications. Activity recognition algorithms require a substantial amount of labeled data to produce satisfactory results under diverse circumstances. Several methods have been proposed for activity recognition from accelerometer data. However, very little work has been done on identifying connections and relationships between existing labeled datasets to perform transfer learning for new datasets. In this paper, we investigate a deep transfer learning algorithm based on convolutional neural networks (CNNs) that takes advantage of representations of activities of daily living learned from one dataset to recognize these activities in other datasets with different characteristics, including sensor modality, sampling rate, activity duration, and environment. We experimentally validated the proposed algorithm on several existing datasets and demonstrated its performance and suitability for activity recognition.
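A minimal sketch of the general idea, assuming a 1-D CNN over raw accelerometer windows: convolutional layers trained on a source dataset are copied into a model for the target dataset and frozen, and only a new classification head is retrained. The layer sizes and the freeze-and-retrain scheme are illustrative assumptions, not the paper's exact architecture or transfer procedure.

```python
# Hedged sketch of CNN-based transfer learning for accelerometer windows (PyTorch).
import torch
import torch.nn as nn

class AccelCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(                  # shared feature extractor
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)      # dataset-specific head

    def forward(self, x):                               # x: (batch, 3, window_len)
        return self.classifier(self.features(x).squeeze(-1))

# Transfer: reuse features learned on the source dataset, retrain only the head.
source = AccelCNN(n_classes=6)
# ... train `source` on the source dataset ...
target = AccelCNN(n_classes=8)
target.features.load_state_dict(source.features.state_dict())
for p in target.features.parameters():
    p.requires_grad = False                             # freeze transferred layers
optimizer = torch.optim.Adam(target.classifier.parameters(), lr=1e-3)
```

Freezing the transferred layers is one common choice; fine-tuning them with a small learning rate is an equally plausible variant when the target dataset is large enough.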
Latent feature learning for activity recognition using simple sensors in smart homes
Activity recognition is an important step towards monitoring and evaluating the functional health of an individual, and it potentially enables human-centric ubiquitous applications in smart homes, particularly for senior healthcare. The nature of human activity, characterized by a high degree of complexity and uncertainty, however, poses a great challenge to the design of good feature representations and the optimization of classifiers towards building a robust model for human activity recognition. In this study, we propose to exploit deep learning techniques to automatically learn high-level features from binary sensor data, under the assumption that there exist discriminative latent patterns inherent in the simple low-level features. Specifically, we extract high-level features with a stacked autoencoder that has a deep, hierarchical architecture, and combine feature learning and classifier construction into a unified framework to obtain a jointly optimized activity recognizer. In addition, we investigate two different original feature representations of the sensor data for latent feature learning. To evaluate the performance of the proposed method, we conduct extensive experiments on three publicly available smart home datasets and compare it with a range of shallow models in terms of time-slice accuracy and class accuracy. Experimental results show that our proposed model achieves better recognition rates and generalizes better across different original feature representations, indicating its applicability to real-world activity recognition.
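A minimal sketch of such a jointly optimized model, assuming binary sensor vectors as input: a stacked autoencoder supplies the latent features, a softmax head classifies activities, and reconstruction and classification losses are combined in a single objective. The layer widths and the loss weight alpha are illustrative assumptions, not the paper's reported configuration.

```python
# Hedged sketch: stacked autoencoder + classifier trained jointly (PyTorch).
import torch
import torch.nn as nn

class SAEClassifier(nn.Module):
    def __init__(self, n_sensors, n_activities, hidden=(128, 64)):
        super().__init__()
        self.encoder = nn.Sequential(                 # stacked encoding layers
            nn.Linear(n_sensors, hidden[0]), nn.Sigmoid(),
            nn.Linear(hidden[0], hidden[1]), nn.Sigmoid(),
        )
        self.decoder = nn.Sequential(                 # mirrors the encoder
            nn.Linear(hidden[1], hidden[0]), nn.Sigmoid(),
            nn.Linear(hidden[0], n_sensors), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(hidden[1], n_activities)

    def forward(self, x):                             # x: (batch, n_sensors) in {0, 1}
        z = self.encoder(x)                           # latent features
        return self.classifier(z), self.decoder(z)   # logits, reconstruction

# Joint objective: cross-entropy on activity labels plus binary reconstruction
# loss, weighted by an assumed hyperparameter `alpha`.
model = SAEClassifier(n_sensors=20, n_activities=10)
ce, bce, alpha = nn.CrossEntropyLoss(), nn.BCELoss(), 0.1
# logits, recon = model(x)
# loss = ce(logits, y) + alpha * bce(recon, x)
```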