Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine
Activity-Based Computing aims to capture the state of the user and of their environment by exploiting heterogeneous sensors, in order to provide adaptation to exogenous computing resources. When these sensors are attached to the subject's body, they permit continuous monitoring of numerous physiological signals. This has appealing uses in healthcare applications, e.g. the exploitation of Ambient Intelligence (AmI) in daily activity monitoring for elderly people. In this paper, we present a system for human physical Activity Recognition (AR) using smartphone inertial sensors. As mobile phones are limited in terms of energy and computing power, we propose a novel hardware-friendly approach to multiclass classification. This method adapts the standard Support Vector Machine (SVM) and exploits fixed-point arithmetic to reduce computational cost. A comparison with the traditional SVM shows a significant improvement in computational cost while maintaining similar accuracy, which can contribute to the development of more sustainable systems for AmI.
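A minimal sketch of the fixed-point idea behind such a hardware-friendly SVM: the decision value of a linear SVM is computed with integer multiply-accumulates instead of floating-point operations. The Q8 format, scale factors, and helper names below are assumptions for illustration, not the authors' actual implementation.

```python
Q = 8  # fixed-point fractional bits: a float x is stored as round(x * 2**Q)

def to_fixed(x, q=Q):
    """Quantize a float to a fixed-point integer with q fractional bits."""
    return int(round(x * (1 << q)))

def svm_decision_fixed(x_q, w_q, b_q):
    """Linear SVM decision value using only integer multiply-accumulate.
    x_q, w_q hold Q8 integers; b_q holds a Q16 integer, matching the
    2*Q fractional bits produced by each product."""
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q))
    return acc + b_q

# Floating-point reference decision value
w_f = [0.5, -1.25, 0.75]
b_f = 0.1
x_f = [1.0, 0.5, -2.0]
ref = sum(a * b for a, b in zip(x_f, w_f)) + b_f

# Fixed-point version: quantize, accumulate in integers, rescale once
w_q = [to_fixed(v) for v in w_f]
x_q = [to_fixed(v) for v in x_f]
b_q = to_fixed(b_f, 2 * Q)
approx = svm_decision_fixed(x_q, w_q, b_q) / (1 << (2 * Q))
```

With enough fractional bits the quantized decision value tracks the floating-point one closely, so the predicted class (the sign of the decision value) is preserved while all inner-loop arithmetic stays integer-only.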
Human physical activity recognition algorithm based on smartphone data and long short time memory neural network
The continuous advancement of smartphone sensors has brought more opportunities for the universal application of human motion recognition technology. Based on data from a mobile phone's three-axis acceleration sensor, combining a double-layer Long Short-Term Memory (LSTM) network with fully connected layers allows us to improve the recognition accuracy of human actions, including walking, jogging, sitting, standing, and going up and down stairs. This is helpful for smart assistive technology. The resulting physical activity classification accuracy is 98.4%.
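A numpy sketch of the double-layer LSTM idea: a single LSTM cell step applied over a window of three-axis accelerometer samples, with the first layer's hidden states feeding a second layer and a final dense layer producing activity scores. Layer sizes, window length, and the six-class output are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    H = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g          # cell state: gated memory update
    h_new = o * np.tanh(c_new)     # hidden state: gated output
    return h_new, c_new

def run_layer(seq, in_dim, hidden):
    """Run one LSTM layer over a sequence, returning all hidden states."""
    W = rng.normal(scale=0.1, size=(4 * hidden, in_dim))
    U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    out = []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        out.append(h)
    return np.array(out)

window = rng.normal(size=(128, 3))          # 128 samples of (ax, ay, az)
h1 = run_layer(window, 3, 32)               # first LSTM layer
h2 = run_layer(h1, 32, 32)                  # second LSTM layer, stacked
logits = h2[-1] @ rng.normal(size=(32, 6))  # dense layer over 6 activities
```

In practice such a model is trained end-to-end in a deep learning framework; this forward pass only illustrates how the two recurrent layers and the dense head compose.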
Seeking Optimum System Settings for Physical Activity Recognition on Smartwatches
Physical activity recognition (PAR) using wearable devices can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this regard, smartphone-based physical activity recognition is a well-studied area. Research on smartwatch-based PAR, on the other hand, is still in its infancy. Through a large-scale exploratory study, this work aims to investigate the smartwatch-based PAR domain. A detailed analysis of various feature banks and classification methods is carried out to find the optimum system settings for the best performance of any smartwatch-based PAR system, for both personal and impersonal models. To further validate our hypothesis for both personal (the classifier is built using data from only one specific user) and impersonal (the classifier is built using data from every user except the one under study) models, we tested a single-subject validation process for smartwatch-based activity recognition.
Comment: 15 pages, 2 figures, Accepted in CVC'1
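The personal vs. impersonal distinction above boils down to how training and test data are split by subject. A minimal sketch, assuming the data is a list of (user_id, window) pairs (a layout chosen here for illustration):

```python
def personal_split(samples, user):
    """Personal model: train and test only on the target user's own data."""
    own = [s for s in samples if s[0] == user]
    cut = int(0.8 * len(own))          # simple 80/20 split within one user
    return own[:cut], own[cut:]

def impersonal_split(samples, user):
    """Impersonal (leave-one-subject-out): train on every other user,
    test on the held-out user."""
    train = [s for s in samples if s[0] != user]
    test = [s for s in samples if s[0] == user]
    return train, test

# Toy dataset: 10 windows each from users A, B, C
samples = [(u, i) for u in ("A", "B", "C") for i in range(10)]
p_train, p_test = personal_split(samples, "A")
i_train, i_test = impersonal_split(samples, "A")
```

Personal models usually score higher because they learn one person's movement style, while impersonal models measure how well the system generalizes to unseen users.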
A Study and Estimation a Lost Person Behavior in Crowded Areas Using Accelerometer Data from Smartphones
As smartphones become more popular, applications are being developed with new and innovative ways to solve problems in users' day-to-day lives. One area of smartphone technology that has developed in recent years is human activity recognition (HAR). This technology uses various sensors built into the smartphone to sense a person's activity in real time. Applications that incorporate HAR can be used to track a person's movements and are very useful in areas such as health care. We use this type of motion-sensing technology, specifically data collected from the accelerometer sensor. The purpose of this study is to estimate whether a person may have become lost in a crowded area. The application is capable of estimating the movements of people in a crowded area, and whether or not a person is lost, based on his/her movements as detected by the smartphone. This will be of great benefit to anyone interested in crowd management strategies. In this paper, we review related literature and research that has given us the basis for our own work. We also detail research on lost person behavior: we looked at the typical movements a person will likely make when he/she is lost and used these movements to indicate lost person behavior. We then evaluate and describe the creation of the application, all of its components, and the testing process.
Classification of sporting activities using smartphone accelerometers
In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today's society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reported direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset comprised of soccer and field-hockey activities. An average maximum F-measure accuracy of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
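A sketch of DWT-based feature extraction from an accelerometer window, using a one-level Haar transform (the simplest mother wavelet). The per-sub-band energy feature chosen here is an assumption for illustration; the paper varies mother wavelets, window lengths and decomposition levels.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
    return approx, detail

def subband_energy_features(window):
    """Energy in each sub-band, a common compact feature for classifiers."""
    a, d = haar_dwt(window)
    return np.array([np.sum(a**2), np.sum(d**2)])

rng = np.random.default_rng(1)
window = rng.normal(size=64)            # one accelerometer axis, 64 samples
feats = subband_energy_features(window)
```

Because the Haar transform is orthonormal, the two sub-band energies sum to the energy of the original window, so the features summarize how signal power splits between slow (approximation) and fast (detail) movement components.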
Resource consumption analysis of online activity recognition on mobile phones and smartwatches
Most studies on human activity recognition using smartphones and smartwatches are performed in an offline manner. In such studies, collected data is analyzed in machine learning tools with little focus on the resource consumption these devices incur when running an activity recognition system. In this paper, we analyze the resource consumption of human activity recognition on both smartphones and smartwatches, considering six different classifiers, three different sensors, and different sampling rates and window sizes. We study CPU, memory and battery usage under different parameters, where the smartphone is used to recognize seven physical activities and the smartwatch is used to recognize smoking activity. As a result of this analysis, we report that the classification function takes a very small share of the app's total CPU time, while sensing and feature calculation consume most of it. When an additional sensor, such as a gyroscope, is used besides the accelerometer, CPU usage increases significantly. The analysis also shows that increasing the window size reduces resource consumption more than reducing the sampling rate does. As a final remark, we observe that when resource usage is to be reduced, a more complex model using only the accelerometer is a better option than a simple model using both the accelerometer and the gyroscope.
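A back-of-the-envelope sketch of one part of that trade-off: how often the feature-extraction and classification pipeline fires for a given sampling rate and window size, assuming non-overlapping windows (a simplification; real systems often use overlapping windows).

```python
def classifications_per_minute(sampling_hz, window_s):
    """Pipeline invocation rate and per-window sample count for
    non-overlapping windows of window_s seconds at sampling_hz Hz."""
    samples_per_window = sampling_hz * window_s
    return 60.0 / window_s, samples_per_window

# Doubling the window size halves how often the pipeline runs...
rate_a, _ = classifications_per_minute(50, 2)
rate_b, _ = classifications_per_minute(50, 4)
# ...while halving the sampling rate leaves the invocation rate unchanged
rate_c, _ = classifications_per_minute(25, 2)
```

This is consistent with the finding above: a larger window directly cuts the frequency of feature calculation and classification, whereas a lower sampling rate only shrinks the amount of data per window.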
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications for recognizing user-related social and cognitive activities in any situation and at any location. Awareness of context provides the capability of being conscious of the physical environment or situation around mobile device users, allowing network services to respond proactively and intelligently. The key idea behind context-aware applications is to encourage users to collect, analyze and share local sensory knowledge for large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and to assist individuals. However, many open challenges remain, which arise mostly because the middleware services provided on mobile devices have limited resources in terms of power, memory and bandwidth. Thus, it becomes critically important to study how these drawbacks can be analyzed and resolved, and at the same time to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from emerging concepts to applications of context-awareness on mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.
Deep HMResNet Model for Human Activity-Aware Robotic Systems
Endowing robotic systems with cognitive capabilities for recognizing the daily activities of humans is an important challenge that requires sophisticated and novel approaches. Most of the proposed approaches explore pattern recognition techniques that are generally based on hand-crafted or learned features. In this paper, a novel Hierarchical Multichannel Deep Residual Network (HMResNet) model is proposed for robotic systems to recognize daily human activities in ambient environments. The introduced model is comprised of multilevel fusion layers. The proposed Multichannel 1D Deep Residual Network model is, at the feature level, combined with a Bottleneck MLP neural network to automatically extract robust features regardless of the hardware configuration and, at the decision level, is fully connected with an MLP neural network to recognize daily human activities. Empirical experiments on real-world datasets and an online demonstration are used to validate the proposed model. Results demonstrate that the proposed model outperforms the baseline models in daily human activity recognition.
Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606)
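A minimal sketch of the residual building block underlying deep residual networks such as the model above: a small transform F(x) added back to its input through an identity shortcut. The two-layer MLP form and the sizes here are assumptions; the actual model uses multichannel 1D convolutions with multilevel fusion.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = relu(x + F(x)), where F is a small two-layer transform.
    The identity shortcut lets gradients flow past F during training."""
    out = relu(x @ W1) @ W2
    return relu(x + out)

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 16))                    # batch of 8 feature vectors
W1 = rng.normal(scale=0.1, size=(16, 16))
W2 = rng.normal(scale=0.1, size=(16, 16))
y = residual_block(x, W1, W2)
```

Stacking such blocks lets a network grow deep without the vanishing-gradient degradation of plain stacked layers, which is what makes "deep residual" architectures practical for activity recognition.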