14 research outputs found

    Seeking Optimum System Settings for Physical Activity Recognition on Smartwatches

    Physical activity recognition (PAR) using wearable devices can provide valuable information about an individual's degree of functional ability and lifestyle. In this regard, smartphone-based physical activity recognition is a well-studied area; research on smartwatch-based PAR, on the other hand, is still in its infancy. Through a large-scale exploratory study, this work investigates the smartwatch-based PAR domain. A detailed analysis of various feature banks and classification methods is carried out to find the optimum system settings for the best performance of any smartwatch-based PAR system, for both personal models (the classifier is built using data only from one specific user) and impersonal models (the classifier is built using data from every user except the one under study). To further validate our hypothesis for both model types, we tested a single-subject validation process for smartwatch-based activity recognition. Comment: 15 pages, 2 figures, Accepted in CVC'1
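The personal/impersonal distinction described in this abstract can be sketched as a data-splitting rule. This is a minimal illustration, not the paper's code; the `samples` structure and function names are hypothetical:

```python
def personal_split(samples, user):
    """Personal model: train and test only on the target user's own data."""
    return [s for s in samples if s["user"] == user]

def impersonal_split(samples, user):
    """Impersonal model: train on every user except the one under study."""
    return [s for s in samples if s["user"] != user]

# Toy dataset: each sample carries a user id, a feature vector, and a label.
samples = [
    {"user": "A", "features": [0.12], "label": "walking"},
    {"user": "B", "features": [0.91], "label": "running"},
    {"user": "A", "features": [0.15], "label": "walking"},
]

print(len(personal_split(samples, "A")))    # samples from user A only
print(len(impersonal_split(samples, "A")))  # samples from everyone but A
```

The impersonal split is the familiar leave-one-subject-out setting, which estimates how well a classifier generalizes to an unseen user.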

    On-line Context Aware Physical Activity Recognition from the Accelerometer and Audio Sensors of Smartphones

    Activity Recognition (AR) from smartphone sensors has become a hot topic in the mobile computing domain, since it can provide services directly to the user (health monitoring, fitness, context-awareness) as well as to third-party applications and social networks (performance sharing, profiling). Most of the research effort has focused on direct recognition from accelerometer sensors, and few studies have integrated the audio channel into their models, despite the fact that it is a sensor that is always available on all kinds of smartphones. In this study, we show that audio features bring an important performance improvement over an accelerometer-based approach. Moreover, the study demonstrates the value of considering the smartphone's location for on-line context-aware AR and the predictive power of audio features for this task. Finally, another contribution of the study is the collected corpus, which is made available to the community for AR from audio and accelerometer sensors.
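The accelerometer-plus-audio idea above can be sketched as windowed feature extraction per modality followed by concatenation into one vector for the classifier. This is a minimal illustration under assumed feature choices (magnitude statistics and log-energy; real systems typically use richer audio features such as MFCCs), and all function names are hypothetical:

```python
import math

def accel_features(ax, ay, az):
    """Magnitude mean and standard deviation over one accelerometer window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return [mean, math.sqrt(var)]

def audio_features(frame):
    """Short-term log-energy of one audio frame (epsilon avoids log(0))."""
    energy = sum(s * s for s in frame) / len(frame)
    return [math.log(energy + 1e-12)]

def fused_features(ax, ay, az, audio_frame):
    """Concatenate per-modality features into a single vector."""
    return accel_features(ax, ay, az) + audio_features(audio_frame)

vec = fused_features([1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.1, -0.1])
print(vec)  # [accel mean, accel std, audio log-energy]
```

Keeping the modalities in one flat vector lets a single classifier weigh audio against motion evidence, which is the kind of fusion the abstract reports as improving over accelerometer-only recognition.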

    Multi-channel Wireless Sensor Networks
