Recognition of Crowd Behavior from Mobile Sensors with Pattern Analysis and Graph Clustering Methods
Mobile on-body sensing has distinct advantages for the analysis and
understanding of crowd dynamics: sensing is not geographically restricted to a
specific instrumented area, mobile phones offer on-body sensing and they are
already deployed on a large scale, and the rich set of sensors they contain
allows one to characterize the behavior of users through pattern recognition
techniques.
In this paper we present a methodological framework for the machine
recognition of crowd behavior from on-body sensors, such as those in mobile
phones. The recognition of crowd behaviors opens the way to the acquisition of
large-scale datasets for the analysis and understanding of crowd dynamics. It
has also practical safety applications by providing improved crowd situational
awareness in cases of emergency.
The framework comprises: behavioral recognition with the user's mobile
device, pairwise analyses of the activity relatedness of two users, and graph
clustering to uncover, globally, which users participate in a given
crowd behavior. We illustrate this framework for the identification of groups
of persons walking, using empirically collected data.
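The last two stages of this framework can be sketched in plain Python: compute a pairwise relatedness score between users' activity signals, link users whose score exceeds a threshold, and read off groups as connected components of the resulting graph. The function names, the use of Pearson correlation, and the threshold value are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: pairwise relatedness + graph clustering to find
# groups of users behaving alike (e.g. walking together).
from itertools import combinations

def walking_corr(a, b):
    """Pearson correlation between two users' activity signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def find_groups(signals, threshold=0.8):
    """Link users whose signals correlate above threshold, then return
    connected components as candidate co-behaving groups."""
    users = list(signals)
    adj = {u: set() for u in users}
    for u, v in combinations(users, 2):
        if walking_corr(signals[u], signals[v]) >= threshold:
            adj[u].add(v)
            adj[v].add(u)
    groups, seen = [], set()
    for u in users:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend(adj[w] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups
```

A real deployment would replace the correlation with a learned relatedness measure and the component search with a proper graph-clustering algorithm, but the overall data flow is the same.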
We discuss the challenges and research avenues for theoretical and applied
mathematics arising from the mobile sensing of crowd behaviors.
Human activity recognition using wearable sensors: a deep learning approach
In the past decades, Human Activity Recognition (HAR) has attracted considerable research attention from a wide range of pattern recognition and human-computer interaction researchers due to its prominent applications, such as smart-home health care. The wealth of information requires efficient classification and analysis methods. Deep learning represents a promising technique for large-scale data analytics. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among them, physical human activity recognition through wearable sensors provides valuable information about an individual's degree of functional ability and lifestyle. Much existing research relies on real-time processing, which increases the power consumption of mobile devices. Mobile phones are resource-limited devices, so implementing and evaluating different recognition systems on them is a challenging task.
This work proposes a Deep Belief Network (DBN) model for successful human activity recognition. Various experiments are performed on a real-world wearable sensor dataset to verify the effectiveness of the deep learning algorithm. The results show that the proposed DBN performs competitively in comparison with other algorithms and achieves satisfactory activity recognition performance. Some open problems and ideas are also presented that should be investigated in future research.
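Before any deep model such as a DBN sees wearable-sensor data, the raw stream is typically segmented into fixed-size, overlapping windows and summarized into feature vectors. The sketch below shows that common preprocessing step; the window length, overlap, and statistics are illustrative choices, not taken from this paper, and the DBN itself is omitted.

```python
# Sliding-window segmentation commonly used to prepare wearable-sensor
# streams for a deep classifier (window/step sizes are assumptions).
def sliding_windows(samples, width=128, step=64):
    """Split a 1-D sensor stream into fixed-size, 50%-overlapping windows."""
    return [samples[i:i + width]
            for i in range(0, len(samples) - width + 1, step)]

def window_features(window):
    """Simple per-window statistics often fed to the model's input layer."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return [mean, var, min(window), max(window)]
```

Each window then becomes one training example, labeled with the activity performed during it.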
Comparing the Performance of Machine Learning Algorithms for Human Activities Recognition using WISDM Dataset
Human activity recognition is an important area of machine learning research, as it has many uses in areas such as sports training, security, entertainment, ambient-assisted living, and health monitoring and management. The literature on human activity recognition shows that researchers are mostly interested in the daily activities of humans. Mobile phones have grown from luxury products into a near necessity in a fast-moving world with rapid development. Nowadays, a mobile phone is well equipped with an advanced processor, more memory, a powerful battery, and built-in sensors. This provides an opportunity to open up new areas of data mining for recognizing the activities of human daily living. In this paper, we ran experiments using tree-based classifiers (Decision Tree, J48, JRIP, and Random Forest) and rule-based classifiers (Naive Bayes and AD1) to classify six activities of daily life using the Weka tool. According to the results, the Random Forest classifier is more accurate than the other classifiers.
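The comparison methodology here is simple to reproduce outside Weka: train several classifiers on the same split and compare held-out accuracy. The sketch below does this in plain Python with two toy classifiers (1-nearest-neighbour versus a majority-class baseline) on made-up data; the classifiers, data, and helper names are illustrative and are not the WISDM experiments.

```python
# Hedged analogue of a classifier comparison on a held-out split.
def nn1_predict(train, x):
    """1-nearest-neighbour prediction over (features, label) pairs."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(train, key=lambda fl: dist(fl[0], x))[1]

def majority_predict(train, _x):
    """Baseline: always predict the most frequent training label."""
    labels = [l for _, l in train]
    return max(set(labels), key=labels.count)

def accuracy(predict, train, test):
    """Fraction of test examples a classifier labels correctly."""
    hits = sum(predict(train, x) == y for x, y in test)
    return hits / len(test)
```

Ranking classifiers by such a held-out accuracy score is exactly the kind of comparison the paper runs in Weka, just with stronger learners and real accelerometer data.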
Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine
Activity-Based Computing aims to capture the state of the user and their environment by exploiting heterogeneous sensors in order to provide adaptation to exogenous computing resources. When these sensors are attached to the subject's body, they permit continuous monitoring of numerous physiological signals. This has appealing uses in healthcare applications, e.g. the exploitation of Ambient Intelligence (AmI) in daily activity monitoring for elderly people. In this paper, we present a system for human physical Activity Recognition (AR) using smartphone inertial sensors. As these mobile phones are limited in terms of energy and computing power, we propose a novel hardware-friendly approach for multiclass classification. This method adapts the standard Support Vector Machine (SVM) and exploits fixed-point arithmetic to reduce computational cost. A comparison with the traditional SVM shows a significant improvement in computational cost while maintaining similar accuracy, which can contribute to developing more sustainable systems for AmI.
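The fixed-point idea can be illustrated with a linear decision function: quantize the trained weights and inputs to scaled integers so that evaluation needs only integer multiply-accumulates, which are cheap on phone hardware. The Q8 scale factor and the weights below are assumptions for illustration; the paper's actual quantization scheme and multiclass construction may differ.

```python
# Sketch: evaluate sign(w.x + b) using integer arithmetic only.
SCALE = 1 << 8  # Q8 fixed point (an assumption, not from the paper)

def to_fixed(values):
    """Quantize floats to scaled integers."""
    return [int(round(v * SCALE)) for v in values]

def svm_decide_fixed(w_fixed, b_fixed, x_fixed):
    """Sign of w.x + b. Products of two Q8 values are Q16, so the bias
    (stored as Q8) is rescaled by SCALE before being added."""
    acc = sum(wi * xi for wi, xi in zip(w_fixed, x_fixed)) + b_fixed * SCALE
    return 1 if acc >= 0 else -1
```

The trade-off is a small quantization error in exchange for avoiding floating-point entirely, which is what makes the approach hardware-friendly.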
FLSys: Toward an Open Ecosystem for Federated Learning Mobile Apps
This paper presents the design, implementation, and evaluation of FLSys, a
mobile-cloud federated learning (FL) system that supports deep learning models
for mobile apps. FLSys is a key component toward creating an open ecosystem of
FL models and apps that use these models. FLSys is designed to work with mobile
sensing data collected on smart phones, balance model performance with resource
consumption on the phones, tolerate phone communication failures, and achieve
scalability in the cloud. In FLSys, different DL models with different FL
aggregation methods in the cloud can be trained and accessed concurrently by
different apps. Furthermore, FLSys provides a common API for third-party app
developers to train FL models. FLSys is implemented in Android and AWS cloud.
We co-designed FLSys with a human activity recognition (HAR) in-the-wild FL
model. HAR sensing data was collected in two areas from the phones of 100+
college students during a five-month period. We implemented HAR-Wild, a CNN
model tailored to mobile devices, with a data augmentation mechanism to
mitigate the problem of non-Independent and Identically Distributed (non-IID)
data that affects FL model training in the wild. A sentiment analysis (SA)
model is used to demonstrate how FLSys effectively supports concurrent models,
and it uses a dataset with 46,000+ tweets from 436 users. We conducted
extensive experiments on Android phones and emulators showing that FLSys
achieves good model utility and practical system performance. Comment: the first two authors contributed equally to this work.
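The server-side aggregation step in such a system can be sketched with federated averaging (FedAvg), a standard FL aggregation method: each client's parameters are weighted by its number of local samples. The function and data structures below are illustrative, not FLSys's actual API or aggregation choice.

```python
# Minimal sketch of federated averaging on flat parameter vectors.
def fed_avg(client_updates):
    """Average client parameters weighted by local sample counts.

    client_updates: list of (params, n_samples), params a flat float list.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    merged = [0.0] * dim
    for params, n in client_updates:
        for i, p in enumerate(params):
            merged[i] += p * n / total
    return merged
```

Sample-count weighting matters in the non-IID, in-the-wild setting the paper describes: clients with very little data would otherwise pull the global model as hard as well-represented ones.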
An Exercise and Sports Equipment Recognition System
Most mobile health management applications today require manual input or use sensors like the accelerometer or GPS to record user data. The onboard camera remains underused. We propose an Exercise and Sports Equipment Recognition System (ESRS) that can recognize physical activity equipment from raw image data. This system can be integrated with mobile phones to allow the camera to become a primary input device for recording physical activity. We employ a deep convolutional neural network to train models capable of recognizing 14 different equipment categories. Furthermore, we propose a preprocessing scheme that uses color normalization and denoising techniques to improve recognition accuracy. Our best model achieves a top-3 accuracy of 83.3% on the test dataset. We demonstrate that our model improves upon GoogLeNet, the state-of-the-art network that won the ILSVRC 2014 challenge, on this dataset. Our work is extensible, as improving the quality and size of the training dataset can further boost predictive accuracy.
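One common form of the color normalization mentioned above is per-channel standardization: shift and scale each color channel to zero mean and unit variance before the network sees the image. The sketch below shows that form on a list-of-pixels representation; the exact scheme ESRS uses may differ, and the representation is chosen for clarity, not efficiency.

```python
# Sketch of per-channel color normalization (an assumed scheme).
def normalize_channels(image):
    """Zero-mean, unit-variance scaling applied per color channel.

    image: list of pixels, each a (r, g, b) tuple of floats.
    """
    channels = list(zip(*image))
    normed = []
    for ch in channels:
        mean = sum(ch) / len(ch)
        var = sum((v - mean) ** 2 for v in ch) / len(ch)
        std = var ** 0.5 or 1.0  # guard against constant channels
        normed.append([(v - mean) / std for v in ch])
    return [tuple(px) for px in zip(*normed)]
```

Normalizing per channel removes global lighting and color-cast differences between photos, which is why it tends to help recognition on images taken in uncontrolled conditions.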
Emotions in context: examining pervasive affective sensing systems, applications, and analyses
Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely "sensing", "analysis", and "application". Sensing investigates the different sensing modalities used in existing real-time affective applications; Analysis explores different approaches to emotion recognition and visualization based on different types of collected data; and Application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.
Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges
Today's mobile phones are far from the mere communication devices they were ten
years ago. Equipped with sophisticated sensors and advanced computing hardware,
phones can be used to infer users' location, activity, social setting and more.
As devices become increasingly intelligent, their capabilities evolve beyond
inferring context to predicting it, and then reasoning and acting upon the
predicted context. This article provides an overview of the current state of
the art in mobile sensing and context prediction paving the way for
full-fledged anticipatory mobile computing. We present a survey of phenomena
that mobile phones can infer and predict, and offer a description of machine
learning techniques used for such predictions. We then discuss proactive
decision making and decision delivery via the user-device feedback loop.
Finally, we discuss the challenges and opportunities of anticipatory mobile
computing. Comment: 29 pages, 5 figures.
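Among the prediction techniques such surveys cover, one of the simplest is a first-order Markov model over observed context labels: the predicted next context is the most frequent successor of the current one. The sketch below is a toy illustration with assumed labels and API, not a technique attributed to this particular survey.

```python
# Toy first-order Markov context predictor.
from collections import Counter, defaultdict

def train_markov(history):
    """Count observed transitions between consecutive context labels."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current):
    """Most frequent successor of the current context, if seen before."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]
```

Anticipatory systems build on exactly this kind of prediction, adding a decision layer that acts on the predicted context before it occurs.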