    Automatic Annotation for Human Activity Recognition in Free Living Using a Smartphone

    Data annotation is a time-consuming process that poses major limitations to the development of Human Activity Recognition (HAR) systems. Supervised Machine Learning (ML) approaches require large amounts of labeled data, especially online and personalized approaches, which need user-specific datasets to be labeled. The availability of such datasets can help address common problems of smartphone-based HAR, such as inter-person variability. In this work, we (i) present an automatic labeling method that facilitates the collection of labeled datasets in free-living conditions using the smartphone, and (ii) investigate the robustness of common supervised classification approaches in the presence of noisy labels. We evaluated the results on a dataset consisting of 38 days of manually labeled data collected in free living. Comparison between the manually and automatically labeled ground truth demonstrated that labels could be obtained automatically with an 80–85% average precision rate. Results also show that a supervised approach trained on automatically generated labels achieved an 84% f-score (using Neural Networks and Random Forests); however, label noise could lower the f-score to 64–74% depending on the classification approach (Nearest Centroid and Multi-Class Support Vector Machine).
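The label-noise effect described above can be reproduced in miniature: train the same classifiers once on clean labels and once on partially corrupted labels, and compare macro f-scores. This is a hedged sketch on synthetic data; the dataset, 20% noise rate, and exact classifier settings are illustrative choices, not the authors' setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Inject symmetric label noise into the training set, mimicking imperfect
# automatically generated annotations.
noise_rate = 0.2
flip = rng.random(len(y_tr)) < noise_rate
y_noisy = y_tr.copy()
y_noisy[flip] = rng.integers(0, 4, size=flip.sum())

scores = {}
for clf in (RandomForestClassifier(random_state=0), NearestCentroid()):
    name = type(clf).__name__
    scores[name, "clean"] = f1_score(y_te, clf.fit(X_tr, y_tr).predict(X_te),
                                     average="macro")
    scores[name, "noisy"] = f1_score(y_te, clf.fit(X_tr, y_noisy).predict(X_te),
                                     average="macro")
```

Comparing the `clean` and `noisy` entries per classifier shows how robust each model is to corrupted training labels, mirroring the f-score gap reported in the abstract.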

    ADMarker: A Multi-Modal Federated Learning System for Monitoring Digital Biomarkers of Alzheimer's Disease

    Alzheimer's Disease (AD) and related dementias are a growing global health challenge due to the aging population. In this paper, we present ADMarker, the first end-to-end system that integrates multi-modal sensors and new federated learning algorithms for detecting multidimensional AD digital biomarkers in natural living environments. ADMarker features a novel three-stage multi-modal federated learning architecture that can accurately detect digital biomarkers in a privacy-preserving manner. Our approach collectively addresses several major real-world challenges, such as limited data labels, data heterogeneity, and limited computing resources. We built a compact multi-modality hardware system and deployed it in a four-week clinical trial involving 91 elderly participants. The results indicate that ADMarker can accurately detect a comprehensive set of digital biomarkers with up to 93.8% accuracy and identify early AD with an average accuracy of 88.9%. ADMarker offers a new platform that allows AD clinicians to characterize and track the complex correlations between multidimensional interpretable digital biomarkers, patient demographic factors, and AD diagnosis in a longitudinal manner.
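The privacy-preserving aggregation underlying federated learning systems like the one above can be sketched with the basic federated averaging (FedAvg) step: each client trains locally and the server averages parameters weighted by local dataset size. This is a minimal illustration, not ADMarker's three-stage architecture; the model shapes and client sizes are invented.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three hypothetical clients, each holding a tiny linear model's parameters.
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
sizes = [10, 20, 70]
global_model = fedavg(clients, sizes)
# Weighted mean of the parameter values: 0.1*1 + 0.2*2 + 0.7*3 = 2.6
```

Only model parameters, never raw sensor data, leave the client, which is what makes the approach privacy-preserving.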

    Sensor-based activity recognition with dynamically added context

    An activity recognition system essentially processes raw sensor data and maps it into latent activity classes. Most previous systems are built with supervised learning techniques and pre-defined data sources, resulting in static models. However, in realistic and dynamic environments, original data sources may fail and new data sources become available; a robust activity recognition system should therefore be able to evolve automatically with dynamic sensor availability. In this paper, we propose methods that automatically incorporate dynamically available data sources to adapt and refine the recognition system at run-time. The system is built upon ensemble classifiers, which can automatically choose the features with the most discriminative power. Extensive experimental results on publicly available datasets demonstrate the effectiveness of our methods.
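One simple way an ensemble can cope with sensors appearing and disappearing at run-time is to train one base classifier per sensor and vote only over the members whose sensors are currently available. This sketch uses that per-sensor-voting idea on synthetic data; the sensor names, feature blocks, and voting rule are illustrative, not the paper's exact method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=12, n_informative=8,
                           n_classes=3, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Pretend each "sensor" contributes a block of 4 feature columns.
sensors = {"accel": slice(0, 4), "gyro": slice(4, 8), "audio": slice(8, 12)}
members = {name: DecisionTreeClassifier(random_state=1).fit(X_tr[:, cols], y_tr)
           for name, cols in sensors.items()}

def predict(X, available):
    """Majority vote over the ensemble members whose sensors are available."""
    votes = np.stack([members[s].predict(X[:, sensors[s]]) for s in available])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

acc_all = (predict(X_te, ["accel", "gyro", "audio"]) == y_te).mean()
acc_drop = (predict(X_te, ["accel", "gyro"]) == y_te).mean()  # "audio" failed
```

Dropping a sensor degrades accuracy gracefully instead of breaking the pipeline, which is the robustness property the abstract targets.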

    Revisiting “Recognizing Human Activities User-Independently on Smartphones Based on Accelerometer Data” – What Has Happened Since 2012?

    Our article “Recognizing human activities user-independently on smartphones based on accelerometer data” was published in the International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI) in 2012. In 2018, it was selected as the most outstanding article published during the first ten years of IJIMAI. To celebrate the journal's 10th anniversary, this article reviews what has happened in the field of human activity recognition and wearable sensor-based recognition since 2012, concentrating in particular on our own work since then.

    Travel Mode Identification with Smartphone Sensors

    Personal trips in a modern urban society typically involve multiple travel modes. Recognizing a traveller's transportation mode is not only critical to personal context-awareness in related applications, but also essential to urban traffic operations, transportation planning, and facility design. While the state of the art in travel mode recognition relies mainly on large-scale infrastructure-based fixed sensors or on individuals' GPS devices, the emergence of the smartphone provides a promising alternative with its ever-growing computing, networking, and sensing powers. In this thesis, we propose new algorithms for travel mode identification using smartphone sensors. The prototype system is built upon the latest Android and iOS platforms with multimodality sensors. It takes smartphone sensor data as input and aims to identify six travel modes: walking, jogging, bicycling, driving a car, riding a bus, and taking a subway. The methods and algorithms presented in our work are guided by two key design principles. First, smartphones' limited computing resources and battery capacity must be carefully considered. Second, the following dimensions must be carefully balanced: (i) user-adaptability, (ii) energy efficiency, and (iii) computation speed. There are three key challenges in travel mode identification with smartphone sensors, stemming from the three steps of a typical mobile mining procedure: (C1) data capturing and preprocessing, (C2) feature engineering, and (C3) model training and adaptation. This thesis is our response to these challenges. To address the first challenge (C1), in Chapter 4 we develop a smartphone app that collects a multitude of smartphone sensor measurements, and showcase a comprehensive set of de-noising techniques. To tackle challenge (C2), in Chapter 5 we design feature extraction methods that carefully balance prediction accuracy, computation time, and battery consumption.
    To answer challenge (C3), in Chapters 6, 7, and 8 we design different learning models to accommodate different situations in model training. A hierarchical model with dynamic sensor selection addresses the energy consumption issue. We propose a personalized model that adapts to each traveller's specific travel behavior using limited labeled data. We also propose an online model to address the problem of updating the model with large-scale data. In addressing these challenges and proposing solutions, this thesis provides a comprehensive study and a systematic solution for travel mode detection with smartphone sensors.
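The feature-engineering step (C2) described above typically reduces each window of raw accelerometer samples to a handful of cheap statistics. This is a hedged sketch of common time-domain features over the acceleration magnitude; these are standard choices for low-cost on-device extraction, not necessarily the thesis's exact feature set.

```python
import numpy as np

def extract_features(window):
    """Cheap time-domain features over one window of 3-axis accelerometer data.

    window: array of shape (n_samples, 3) holding x, y, z acceleration.
    """
    magnitude = np.linalg.norm(window, axis=1)  # orientation-independent signal
    return {
        "mean": magnitude.mean(),
        "std": magnitude.std(),
        "min": magnitude.min(),
        "max": magnitude.max(),
        # Zero-crossing rate of the mean-removed signal: a cheap periodicity
        # cue that helps separate, e.g., walking from riding a bus.
        "zcr": np.mean(np.diff(np.sign(magnitude - magnitude.mean())) != 0),
    }

# One 128-sample window of simulated accelerometer readings.
window = np.random.default_rng(0).normal(0.0, 1.0, size=(128, 3))
feats = extract_features(window)
```

Using the magnitude rather than the raw axes keeps the features invariant to phone orientation, and all five statistics are O(n) per window, which matters on a battery-constrained device.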