Deep HMResNet Model for Human Activity-Aware Robotic Systems
Endowing robotic systems with cognitive capabilities for recognizing the daily activities of humans is an important challenge that requires sophisticated and novel approaches. Most proposed approaches explore pattern recognition techniques based on either hand-crafted or learned features. In this paper, a novel Hierarchical Multichannel Deep Residual Network (HMResNet) model is proposed for robotic systems to recognize daily human activities in ambient environments. The introduced model comprises multilevel fusion layers. The proposed Multichannel 1D Deep Residual Network model is combined, at the feature level, with a Bottleneck MLP neural network to automatically extract robust features regardless of the hardware configuration, and, at the decision level, with a fully connected MLP neural network to recognize daily human activities. Empirical experiments on real-world datasets and an online demonstration are used to validate the proposed model. The results demonstrate that the proposed model outperforms the baseline models in daily human activity recognition.
Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606)
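To make the two-level fusion concrete, the following is a minimal sketch of the architecture as described in the abstract, assuming PyTorch; the class names (ResBlock1D, HMResNetSketch), layer widths, kernel sizes, window length, and class count are illustrative assumptions, as the abstract gives no hyperparameters.

```python
# Minimal sketch of the HMResNet idea, assuming PyTorch. All sizes below
# are illustrative guesses, not the paper's actual configuration.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """1D residual block: two convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class HMResNetSketch(nn.Module):
    """One 1D ResNet branch per sensor channel, fused twice: a bottleneck
    MLP at the feature level and an MLP classifier at the decision level."""
    def __init__(self, n_channels=3, n_classes=6, width=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, width, kernel_size=7, padding=3),
                ResBlock1D(width),
                ResBlock1D(width),
                nn.AdaptiveAvgPool1d(1),          # -> (batch, width, 1)
            )
            for _ in range(n_channels)
        ])
        # Feature-level fusion: bottleneck MLP over concatenated branch features.
        self.bottleneck = nn.Sequential(
            nn.Linear(n_channels * width, width // 2),
            nn.ReLU(),
        )
        # Decision-level fusion: fully connected MLP classifier.
        self.head = nn.Sequential(
            nn.Linear(width // 2, width),
            nn.ReLU(),
            nn.Linear(width, n_classes),
        )

    def forward(self, x):                          # x: (batch, n_channels, time)
        feats = [b(x[:, i:i + 1, :]).flatten(1) for i, b in enumerate(self.branches)]
        return self.head(self.bottleneck(torch.cat(feats, dim=1)))

logits = HMResNetSketch()(torch.randn(4, 3, 128))  # 4 windows, 3 sensor axes
```

One branch per raw sensor channel is what would keep such a design agnostic to the hardware configuration: adding or dropping a sensor axis only changes n_channels, not the per-branch feature extractor.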
Transportation mode recognition fusing wearable motion, sound and vision sensors
We present the first work that investigates the potential of improving the performance of transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, one for each of the three sensor types. We then propose two schemes that fuse the classification results of the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting and Borda Count. The second scheme is an adaptive fuser, built as another classifier (Naive Bayes, Decision Tree, Random Forest or Neural Network), that learns enhanced predictions by combining the outputs of the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score improves remarkably to 94.5% and 95.5% with the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing the generalization of the model to unseen data, we show that while performance is reduced for each individual classifier, as expected, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the raw performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
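The four fixed fusion rules are compact enough to state in code. Below is a minimal sketch assuming each mono-modal classifier outputs a class-probability vector; the fuse helper and the random example inputs are illustrative, not the authors' implementation.

```python
# Minimal sketch of fixed-rule decision fusion, assuming each classifier
# emits class probabilities. Illustrative only; not the paper's code.
import numpy as np

def fuse(probs, rule="sum"):
    """probs: (n_classifiers, n_classes) probabilities for one sample.
    Returns the index of the fused predicted class."""
    if rule == "sum":                     # add the posteriors
        scores = probs.sum(axis=0)
    elif rule == "product":               # multiply the posteriors
        scores = probs.prod(axis=0)
    elif rule == "majority":              # one vote per classifier
        votes = probs.argmax(axis=1)
        scores = np.bincount(votes, minlength=probs.shape[1])
    elif rule == "borda":                 # rank-based points per classifier
        ranks = probs.argsort(axis=1).argsort(axis=1)  # 0 = lowest-ranked class
        scores = ranks.sum(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(scores.argmax())

# Three classifiers (motion, sound, vision) over the eight SHL classes:
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(8), size=3)     # each row sums to 1
for rule in ("sum", "product", "majority", "borda"):
    print(rule, fuse(p, rule))
```

The adaptive fuser of the second scheme would instead be trained on the stacked probability vectors, e.g. a scikit-learn RandomForestClassifier fit on the concatenated outputs of the three mono-modal classifiers.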
Instructor Activity Recognition Using Smartwatch and Smartphone Sensors
During a classroom session, an instructor performs several activities, such as writing on the board, speaking to the students, and gesturing to explain a concept. A record of the time spent on each of these activities could be valuable information for instructors, allowing them to virtually observe their own style of instruction. It can help identify the activities that engage students more, thereby enhancing teaching effectiveness and efficiency. In this work, we present a preliminary study on profiling the multiple classroom activities of an instructor using smartwatch and smartphone sensor data. We use two benchmark datasets to test the feasibility of classifying these activities. Comparing multiple machine learning techniques, we finally propose a hybrid approach based on a deep recurrent neural network that performs better than the other techniques.
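As an illustration of what a hybrid deep-recurrent classifier over wearable sensor windows might look like, here is a minimal PyTorch sketch; the HybridCNNRNN class, its layer sizes, the six sensor axes (accelerometer plus gyroscope), and the four-class output are assumptions, since the abstract gives no architectural details.

```python
# Minimal sketch of a hybrid convolutional-recurrent activity classifier,
# assuming PyTorch. Sizes and class count are illustrative assumptions.
import torch
import torch.nn as nn

class HybridCNNRNN(nn.Module):
    def __init__(self, n_sensor_axes=6, n_classes=4, hidden=64):
        super().__init__()
        # Convolutions extract short-term motion patterns per time step.
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensor_axes, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # The recurrent layer models longer-range temporal structure.
        self.rnn = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, n_sensor_axes, time)
        h = self.conv(x).transpose(1, 2)     # -> (batch, time, 32)
        _, (last, _) = self.rnn(h)           # final hidden state of the LSTM
        return self.fc(last[-1])             # -> (batch, n_classes)

# One 2-second window of accelerometer + gyroscope data sampled at 50 Hz:
logits = HybridCNNRNN()(torch.randn(1, 6, 100))
```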