37,391 research outputs found

    Understanding and Improving Recurrent Networks for Human Activity Recognition by Continuous Attention

    Deep neural networks, including recurrent networks, have been successfully applied to human activity recognition. Unfortunately, the final representation learned by recurrent networks might encode some noise (irrelevant signal components, unimportant sensor modalities, etc.). Moreover, it is difficult to interpret recurrent networks to gain insight into the models' behavior. To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention. These two mechanisms adaptively focus on important signals and sensor modalities. To further improve the understandability and mean F1 score, we add continuity constraints, considering that continuous sensor signals are more robust than discrete ones. We evaluate the approaches on three datasets and obtain state-of-the-art results. Furthermore, qualitative analysis shows that the attention learned by the models agrees well with human intuition. Comment: 8 pages. Published in The International Symposium on Wearable Computers (ISWC) 201
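    The sketch below illustrates, under stated assumptions, how temporal attention with a continuity (smoothness) penalty could be wired on top of a recurrent encoder; the layer sizes, the hypothetical lambda_smooth weight, and the use of PyTorch are illustrative choices, not the authors' exact formulation.

```python
# Minimal sketch (PyTorch): temporal attention over an LSTM encoder with a
# continuity (smoothness) penalty on consecutive attention weights.
# Layer sizes and lambda_smooth are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionHAR(nn.Module):
    def __init__(self, n_channels=9, hidden=64, n_classes=6):
        super().__init__()
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)               # scores each time step
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                              # x: (batch, time, channels)
        h, _ = self.encoder(x)                          # (batch, time, hidden)
        alpha = F.softmax(self.attn(h).squeeze(-1), dim=1)    # attention weights
        context = (alpha.unsqueeze(-1) * h).sum(dim=1)         # weighted summary
        return self.classifier(context), alpha

def loss_with_continuity(logits, y, alpha, lambda_smooth=0.1):
    # Cross-entropy plus a penalty on abrupt changes between neighbouring
    # attention weights, so attention varies smoothly over time.
    ce = F.cross_entropy(logits, y)
    smooth = (alpha[:, 1:] - alpha[:, :-1]).pow(2).mean()
    return ce + lambda_smooth * smooth
```

    Sensor attention can be sketched analogously by scoring input channels instead of time steps; the smoothness term is one simple way to encode the idea that attention should not jump erratically between adjacent samples.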

    Improving activity recognition using a wearable barometric pressure sensor in mobility-impaired stroke patients.

    © 2015 Massé et al. Background: Stroke survivors often suffer from mobility deficits. Current clinical evaluation methods, including questionnaires and motor function tests, cannot provide an objective measure of a patient's mobility in daily life. Physical activity in daily life can be assessed using unobtrusive monitoring, for example with a single sensor module fixed on the trunk. Existing approaches based on inertial sensors have limited performance, particularly in detecting transitions between different activities and postures, due to the inherent inter-patient variability of kinematic patterns. To overcome these limitations, one possibility is to use additional information from a barometric pressure (BP) sensor. Methods: Our study aims at integrating BP and inertial sensor data into an activity classifier in order to improve the recognition of activities (sitting, standing, walking, lying) and of the corresponding body elevation (while climbing stairs or taking an elevator). Taking into account the trunk elevation changes during postural transitions (sit-to-stand, stand-to-sit), we devised an event-driven activity classifier based on fuzzy logic. Data were acquired from 12 stroke patients with impaired mobility, using a trunk-worn inertial and BP sensor. Events, including walking and lying periods and potential postural transitions, were first extracted. These events were then fed into a double-stage hierarchical Fuzzy Inference System (H-FIS). The first stage processed the events to infer activities, and the second stage improved activity recognition by applying behavioral constraints. Finally, the body elevation was estimated using a pattern-enhancing algorithm applied to the BP signal. The patients were videotaped for reference. The performance of the algorithm was estimated using the Correct Classification Rate (CCR) and F-score. The BP-based classification approach was benchmarked against a previously published fuzzy-logic classifier (FIS-IMU) and a conventional epoch-based classifier (EPOCH). Results: The algorithm's posture/activity detection performance, in terms of CCR, was 90.4 %, a 3.3 % and 5.6 % improvement over FIS-IMU and EPOCH, respectively. The proposed classifier essentially benefits from a better recognition of standing activity (70.3 % versus 61.5 % [FIS-IMU] and 42.5 % [EPOCH]), with 98.2 % CCR for body elevation estimation. Conclusion: The monitoring and recognition of daily activities in mobility-impaired stroke patients can be significantly improved using a trunk-fixed sensor that integrates BP, inertial sensors, and an event-based activity classifier.
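    As a rough illustration of the barometric side of such a pipeline, the sketch below converts a BP stream into a relative elevation signal and computes the CCR used for evaluation; the sampling rate, smoothing window, and function names are assumptions, the hypsometric constants come from the standard international barometric formula, and none of this reproduces the paper's pattern-enhancing algorithm or fuzzy inference system.

```python
# Minimal sketch: relative elevation from barometric pressure plus the CCR
# metric. The 25 Hz sampling rate and 2 s smoothing window are assumptions;
# the constants are from the standard international barometric formula,
# not from the paper.
import numpy as np

def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    # Altitude in metres from pressure in hPa (international barometric formula).
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def elevation_change(p_hpa, fs=25.0, win_s=2.0):
    # Smooth the pressure, convert to altitude, and return the elevation
    # change over the preceding win_s seconds at every sample (useful for
    # spotting stair climbing or elevator rides).
    k = max(1, int(win_s * fs))
    p_smooth = np.convolve(p_hpa, np.ones(k) / k, mode="same")
    alt = pressure_to_altitude(p_smooth)
    d = np.zeros_like(alt)
    d[k:] = alt[k:] - alt[:-k]
    return d

def correct_classification_rate(y_true, y_pred):
    # CCR: fraction of windows whose predicted activity matches the
    # video-based reference labels.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())
```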

    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving the performance of transportation mode recognition through fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser, built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network), that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method with the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing the eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the actual performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
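    The following sketch shows, with assumed array shapes and settings, how fixed fusion rules (sum, product, majority vote; Borda count omitted for brevity) and an adaptive fuser could combine three mono-modal classifiers' class-probability outputs; the random forest stands in for the learned second-stage classifier and its parameters are illustrative, not those used in the paper.

```python
# Minimal sketch: decision-level fusion of three mono-modal classifiers.
# Inputs p_motion, p_sound, p_vision are (n_samples, n_classes) probability
# arrays; class labels are assumed to be non-negative integers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sum_rule(p_motion, p_sound, p_vision):
    return np.argmax(p_motion + p_sound + p_vision, axis=1)

def product_rule(p_motion, p_sound, p_vision):
    return np.argmax(p_motion * p_sound * p_vision, axis=1)

def majority_vote(p_motion, p_sound, p_vision):
    votes = np.stack([p.argmax(axis=1) for p in (p_motion, p_sound, p_vision)])
    # per sample, take the most frequent of the three predicted classes
    return np.array([np.bincount(v).argmax() for v in votes.T])

def train_adaptive_fuser(p_motion, p_sound, p_vision, y_true):
    # Learn an enhanced prediction from the concatenated probability vectors
    # (a random forest is used here purely as an illustrative meta-classifier).
    X = np.hstack([p_motion, p_sound, p_vision])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_true)
```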

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods of real-time, unobtrusive human ambulation, activity, and wellness monitoring and data analysis using various algorithmic techniques have been the subject of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive study of the literature in the area in order to identify research gaps and prioritize future research directions.

    Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

    The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, there exist substantial challenges that could influence the performance of the recognition system in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, many deep learning methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of the state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy to structure the deep learning methods by challenge. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress. At the end of this work, we discuss the open issues and provide insights into future directions.

    Action recognition based on efficient deep feature learning in the spatio-temporal domain

    Hand-crafted feature functions are usually designed based on the domain knowledge of a presumably controlled environment and often fail to generalize, as the statistics of real-world data cannot always be modeled correctly. Data-driven feature learning methods, on the other hand, have emerged as an alternative that often generalizes better in uncontrolled environments. We present a simple yet robust 2D convolutional neural network, extended to a concatenated 3D network, that learns to extract features from the spatio-temporal domain of raw video data. The resulting network model is used for content-based recognition of videos. Relying on a 2D convolutional neural network allows us to exploit a pretrained network as a descriptor that yielded the best results on the large-scale, challenging ILSVRC-2014 dataset. Experimental results on commonly used benchmark video datasets demonstrate that our results are state-of-the-art in terms of accuracy and computational time without requiring any preprocessing (e.g., optic flow) or a priori knowledge of the data capture (e.g., camera motion estimation), which makes our approach more general and flexible than others. Our implementation is made available.
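    As a generic illustration of learning features directly in the spatio-temporal domain (not the authors' concatenated 2D-to-3D design, which builds on a pretrained 2D network), the sketch below defines a small 3D convolutional network over raw video clips; the channel counts, clip size, and class count are assumptions.

```python
# Minimal sketch (PyTorch): a small 3D convolutional network that extracts
# spatio-temporal features from a stack of RGB frames. Channel counts, clip
# size, and the number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class Small3DConvNet(nn.Module):
    def __init__(self, n_classes=101):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),   # input: (3, T, H, W)
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),           # pool in space only
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),                    # pool space and time
            nn.AdaptiveAvgPool3d(1),                        # global average pool
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, clip):                   # clip: (batch, 3, frames, H, W)
        return self.classifier(self.features(clip).flatten(1))

# Example: a batch of two 16-frame RGB clips at 112x112 resolution.
logits = Small3DConvNet()(torch.randn(2, 3, 16, 112, 112))
```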