2 research outputs found

    CAVIAR: Context-driven Active and Incremental Activity Recognition

    Activity recognition on mobile device sensor data has been an active research area in mobile and pervasive computing for several years. While the majority of the proposed techniques are based on supervised learning, semi-supervised approaches are being considered to reduce the size of the training set required to initialize the model. These approaches usually apply self-training or active learning to incrementally refine the model, but their effectiveness seems to be limited to a restricted set of physical activities. We claim that the context surrounding the user (e.g., time, location, proximity to transportation routes), combined with common knowledge about the relationship between context and human activities, could significantly increase the set of recognized activities, including those that are difficult to discriminate using inertial sensors alone and those that are highly context-dependent. In this paper, we propose CAVIAR, a novel hybrid semi-supervised and knowledge-based system for real-time activity recognition. Our method applies semantic reasoning on context data to refine the predictions of an incremental classifier. The recognition model is continuously updated using active learning. Results on a real dataset obtained from 26 subjects show the effectiveness of our approach in increasing the recognition rate, extending the number of recognizable activities and, most importantly, reducing the number of queries triggered by active learning. In order to evaluate the impact of context reasoning, we also compare CAVIAR with a purely statistical version, considering features computed on context data as part of the machine learning process.
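The core loop the abstract describes — an incremental classifier whose predictions are refined by context rules, with active-learning queries fired only when the refined prediction remains uncertain — can be sketched as follows. This is not the authors' implementation: the nearest-centroid incremental classifier, the `CONTEXT_RULES` table, and the query threshold are all illustrative assumptions standing in for CAVIAR's actual components.

```python
import numpy as np

# Hypothetical context-activity compatibility knowledge (stand-in for
# CAVIAR's semantic reasoning): activity -> contexts where it is plausible.
CONTEXT_RULES = {
    "cycling": {"outdoor"},
    "elevator": {"indoor"},
    "walking": {"indoor", "outdoor"},
}

class ContextRefinedClassifier:
    """Incremental classifier refined by context rules; low-confidence
    refined predictions are flagged for an active-learning query."""

    def __init__(self, classes, query_threshold=0.6):
        self.classes = list(classes)
        self.threshold = query_threshold
        self.centroids = {c: None for c in self.classes}
        self.counts = {c: 0 for c in self.classes}

    def update(self, x, label):
        # Incremental (running-mean) centroid update; also used to
        # incorporate labels obtained from active-learning queries.
        n, c = self.counts[label], self.centroids[label]
        self.centroids[label] = x.copy() if c is None else (c * n + x) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, x, context):
        # Softmax over negative centroid distances as pseudo-probabilities.
        d = np.array([np.linalg.norm(x - self.centroids[c]) for c in self.classes])
        probs = np.exp(-d) / np.exp(-d).sum()
        # Context refinement: zero out activities incompatible with context.
        mask = np.array([context in CONTEXT_RULES[c] for c in self.classes], float)
        refined = probs * mask
        if refined.sum() == 0:
            refined = probs  # fall back if the rules exclude everything
        refined = refined / refined.sum()
        best = int(refined.argmax())
        # Trigger a query only when the *refined* prediction is uncertain,
        # which is how context reasoning can reduce the number of queries.
        return self.classes[best], bool(refined[best] < self.threshold)
```

The design point mirrored here is the ordering: context filtering happens before the uncertainty check, so a prediction that the classifier alone finds ambiguous may become confident once implausible activities are masked out, avoiding a query.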

    Robust and Deployable Gesture Recognition for Smartwatches

    Funding Information: This work was supported by the Department of Communications and Networking – Aalto University, Finnish Center for Artificial Intelligence (FCAI) and the Academy of Finland projects Human Automata (Project ID: 328813), BAD (Project ID: 318559), Huawei Technologies, and the Horizon 2020 FET program of the European Union (grant CHIST-ERA-20-BCI-001).

    Publisher Copyright: © 2022 ACM. Open Access fee has been paid, but the PDF version does not contain information on OA licence.

    Gesture recognition on smartwatches is challenging not only due to resource constraints but also due to the dynamically changing conditions of users. It is currently an open problem how to engineer gesture recognisers that are robust and yet deployable on smartwatches. Recent research has found that common everyday events, such as a user removing and wearing their smartwatch again, can deteriorate recognition accuracy significantly. In this paper, we suggest that prior understanding of causes behind everyday variability and false positives should be exploited in the development of recognisers. To this end, first, we present a data collection method that aims at diversifying gesture data in a representative way, in which users are taken through experimental conditions that resemble known causes of variability (e.g., walking while gesturing) and are asked to produce deliberately varied, but realistic gestures. Secondly, we review known approaches in machine learning for recogniser design on constrained hardware. We propose convolution-based network variations for classifying raw sensor data, achieving greater than 98% accuracy reliably under both individual and situational variations where previous approaches have reported significant performance deterioration. This performance is achieved with a model that is two orders of magnitude less complex than previous state-of-the-art models.
Our work suggests that deployable and robust recognition is feasible but requires systematic efforts in data collection and network design to address known causes of gesture variability.

    Peer reviewed
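The abstract's claim of a convolution-based classifier for raw sensor data that is orders of magnitude smaller than prior models can be illustrated with a minimal forward pass. This numpy sketch is not the paper's architecture; the window size, kernel width, filter count, and gesture count are illustrative assumptions, chosen only to show how small a conv → global-average-pool → linear model can be.

```python
import numpy as np

def conv1d_relu(x, kernels, bias):
    """Valid 1D convolution over time followed by ReLU.
    x: (T, C_in) raw sensor window; kernels: (K, C_in, C_out)."""
    K, _, C_out = kernels.shape
    T_out = x.shape[0] - K + 1
    out = np.zeros((T_out, C_out))
    for t in range(T_out):
        # Contract the (K, C_in) window against each output filter.
        out[t] = np.tensordot(x[t:t + K], kernels, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0)

def tiny_gesture_net(window, params):
    """Conv -> global average pool -> linear, on a raw IMU window (T, 3)."""
    h = conv1d_relu(window, params["conv_w"], params["conv_b"])
    pooled = h.mean(axis=0)                      # pool over time
    return pooled @ params["fc_w"] + params["fc_b"]  # class logits

def n_params(params):
    return sum(p.size for p in params.values())
```

With an assumed 50-sample window of 3-axis accelerometer data, 8 filters of width 5, and 4 gesture classes, the whole model has 5·3·8 + 8 + 8·4 + 4 = 164 parameters, making concrete how a compact convolutional design can fit smartwatch constraints.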