
    Important Considerations for Human Activity Recognition Using Sensor Data

    Automated human activity recognition has received much attention in recent years due to the increasing focus on interconnected devices in the Internet of Things (IoT) and the miniaturization and proliferation of sensor systems that has accompanied the adoption of smartphones. In this work, we survey the current status of human activity recognition across multiple studies, including methodology, accuracy of results, and current challenges to implementation. We also include some preliminary work we have completed on a sensor system for classifying treadmill usage.

    The Usage of Statistical Learning Methods on Wearable Devices and a Case Study: Activity Recognition on Smartwatches

    The aim of this study is to explore the use of statistical learning methods on wearable devices and to carry out an experimental study on the recognition of human activities from smartwatch sensor data. To this end, mobile applications running on a smartwatch and a smartphone were developed to collect training data and detect human activity on the fly; 500 patterns were obtained at 4-second intervals for each activity (walking, typing, stationary, running, standing, writing on a board, brushing teeth, cleaning, and writing). The resulting dataset was evaluated with five different statistical learning methods (Naive Bayes, k-nearest neighbour (kNN), logistic regression, Bayesian network, and multilayer perceptron) and their performances were compared.
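    As a minimal sketch (not the study's original code), the comparison of these learning methods could look as follows in Python with scikit-learn, assuming a hypothetical feature matrix X (one row per 4-second pattern) and activity labels y; the Bayesian network has no direct scikit-learn equivalent and is omitted here.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 12))    # placeholder features, one row per pattern
        y = rng.integers(0, 9, size=500)  # placeholder labels for the 9 activities

        models = {
            "Naive Bayes": GaussianNB(),
            "kNN": KNeighborsClassifier(n_neighbors=5),
            "Logistic regression": LogisticRegression(max_iter=1000),
            "Multilayer perceptron": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
        }
        for name, model in models.items():
            pipe = make_pipeline(StandardScaler(), model)  # scale features, then classify
            scores = cross_val_score(pipe, X, y, cv=5)     # 5-fold cross-validated accuracy
            print(f"{name}: {scores.mean():.3f}")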

    Seeking Optimum System Settings for Physical Activity Recognition on Smartwatches

    Physical activity recognition (PAR) using wearable devices can provide valuable information about an individual's degree of functional ability and lifestyle. In this regard, smartphone-based physical activity recognition is a well-studied area; research on smartwatch-based PAR, on the other hand, is still in its infancy. Through a large-scale exploratory study, this work investigates the smartwatch-based PAR domain. A detailed analysis of various feature banks and classification methods is carried out to find the optimum system settings for the best performance of a smartwatch-based PAR system under both personal and impersonal models. To further validate our hypothesis for both personal (the classifier is built using data only from one specific user) and impersonal (the classifier is built using data from every user except the one under study) models, we also tested a single-subject validation process for smartwatch-based activity recognition.
    Comment: 15 pages, 2 figures, Accepted in CVC'1
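    A minimal sketch of the two evaluation protocols defined above (not the authors' code), assuming hypothetical features X, labels y, per-sample subject ids, and a stand-in classifier:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 20))            # placeholder feature bank
        y = rng.integers(0, 6, size=600)          # placeholder activity labels
        subjects = np.repeat(np.arange(10), 60)   # 10 users, 60 samples each

        clf = RandomForestClassifier(n_estimators=100, random_state=0)  # stand-in classifier

        # Personal model: train and test on data from one specific user only.
        mask = subjects == 0
        personal = cross_val_score(clf, X[mask], y[mask], cv=3).mean()

        # Impersonal model: train on every user except the one under study.
        impersonal = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut()).mean()

        print(f"personal={personal:.3f}, impersonal={impersonal:.3f}")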

    Smartwatch-Based IoT Fall Detection Application

    This paper proposes using only the streaming accelerometer data from a commodity smartwatch (IoT) device to detect falls. The smartwatch is paired with a smartphone, which performs the computation necessary to predict falls in real time without incurring the latency of communicating with a cloud server, while also preserving data privacy. The majority of current fall detection applications require specially designed hardware and software, which makes them expensive and inaccessible to the general public. Moreover, a fall detection application that uses a wrist-worn smartwatch for data collection has the added benefit that it can be perceived as a piece of jewelry and is thus non-intrusive. We experimented with both Support Vector Machine and Naive Bayes machine learning algorithms for building the fall model. We demonstrated that by adjusting the sampling frequency of the streaming data, computing acceleration features over a sliding window, and using a Naive Bayes model, we can achieve a 93.33% true positive rate for fall detection in a real-world setting. Our results demonstrate that a commodity smartwatch sensor can yield fall detection results competitive with those of custom-made, expensive sensors.
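    A minimal sketch of the windowing idea (not the paper's implementation): compute simple acceleration-magnitude features over a sliding window and fit a Naive Bayes model. The window length, stride, feature set, and data are illustrative assumptions.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def window_features(acc, win=50, stride=25):
            """acc: (n, 3) accelerometer samples -> one feature row per window."""
            mag = np.linalg.norm(acc, axis=1)  # acceleration magnitude per sample
            rows = []
            for start in range(0, len(mag) - win + 1, stride):
                w = mag[start:start + win]
                rows.append([w.mean(), w.std(), w.min(), w.max()])
            return np.array(rows)

        rng = np.random.default_rng(0)
        fall_stream = rng.normal(0, 3, size=(1000, 3))  # placeholder "fall" stream
        adl_stream = rng.normal(0, 1, size=(1000, 3))   # placeholder daily-activity stream

        Xf, Xa = window_features(fall_stream), window_features(adl_stream)
        X = np.vstack([Xf, Xa])
        y = np.concatenate([np.ones(len(Xf)), np.zeros(len(Xa))])  # 1 = fall window

        model = GaussianNB().fit(X, y)
        print(model.predict(window_features(fall_stream)[:3]))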

    Optimized limited memory and warping LCSS for online gesture recognition or overlearning?

    In this paper, we present and evaluate a new algorithm for online gesture recognition in noisy streams. The technique relies on the proposed LM-WLCSS (Limited Memory and Warping LCSS) algorithm, which has demonstrated its efficiency in gesture recognition. The new method involves a quantization step (via the K-Means clustering algorithm) that transforms incoming data into a finite symbol set. In this way, each new sample can be compared against several templates (one per class), and gestures are rejected based on a previously trained rejection threshold. An algorithm called SearchMax then finds a local maximum within a sliding window and outputs whether or not the gesture has been recognized. To resolve conflicts that may occur, another classifier could be cascaded. As the K-Means clustering algorithm needs to be initialized with the number of clusters to create, we also introduce a straightforward optimization process; this operation also optimizes the window size for the SearchMax algorithm. To demonstrate the robustness of our algorithm, an experiment was performed on two different data sets. However, results on the tested data sets are only accurate when the training data are used as test data, which may indicate that the method is in an overlearning state.
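    A simplified sketch of two of the building blocks described above: the K-Means quantization and a SearchMax-style local-maximum scan. The LM-WLCSS matching itself is omitted, and the stream, alphabet size, window, and threshold are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        stream = rng.normal(size=(2000, 3))  # placeholder 3-axis sensor stream

        # Quantization step: map each raw sample to one of a finite set of symbols.
        kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(stream)
        symbols = kmeans.predict(stream)     # finite alphabet of 16 cluster indices

        def search_max(scores, window=30, threshold=5.0):
            """Report indices where the matching score is a windowed local maximum
            above the rejection threshold."""
            hits = []
            for i in range(window, len(scores) - window):
                w = scores[i - window:i + window + 1]
                if scores[i] >= threshold and scores[i] == w.max():
                    hits.append(i)
            return hits

        scores = rng.normal(size=2000)       # stand-in for per-sample matching scores
        print(search_max(scores, threshold=3.0)[:5])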

    Unobtrusive monitoring of behavior and movement patterns to detect clinical depression severity level via smartphone

    The number of individuals with mental disorders is increasing, and they are commonly found among people who avoid social interaction and prefer to live alone. Among such mental health disorders is depression, which is both common and serious. This paper introduces a method to assess an individual's depression level with a smartphone by monitoring their daily activities. Time-domain characteristics from the smartphone acceleration sensor were used alongside a support vector machine algorithm to classify physical activities. Additionally, geographical location information from the smartphone GPS sensor was clustered to simplify movement patterns. A total of 12 features were extracted from individuals' physical activity and movement patterns and were analyzed alongside their weekly depression scores from the nine-item Patient Health Questionnaire. Using a wrapper feature selection method, a subset of features was selected and applied to a linear regression model to estimate the depression score. A support vector machine algorithm was then used to classify the depression severity level (absence, moderate, severe) and achieved an accuracy of 87.2% on severe depression cases, outperforming other classification models including the k-nearest neighbor and artificial neural network. This method of identifying depression is a cost-effective solution for long-term use and can monitor individuals for depression without invading their personal space or creating other day-to-day disturbances.
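    A minimal sketch of this pipeline (not the paper's code): a wrapper-style feature selector driven by the regression model, then an SVM for severity. The data, the PHQ-9 severity cut-offs, and the choice of sequential selection as the wrapper method are illustrative assumptions.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.linear_model import LinearRegression
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 12))          # 12 activity/mobility features per week
        phq9 = rng.uniform(0, 27, size=120)     # placeholder weekly PHQ-9 scores
        severity = np.digitize(phq9, [10, 20])  # 0 = absence, 1 = moderate, 2 = severe

        # Wrapper feature selection: greedily pick the subset that best predicts the score.
        selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5)
        X_sel = selector.fit_transform(X, phq9)

        reg = LinearRegression().fit(X_sel, phq9)        # estimate the depression score
        clf = SVC(kernel="linear").fit(X_sel, severity)  # classify the severity level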

    Recognition of Daily Gestures with Wearable Inertial Rings and Bracelets

    Recognition of activities of daily living plays an important role in monitoring elderly people and helping caregivers detect changes in daily behavior. Thanks to the miniaturization and low cost of microelectromechanical systems (MEMS), in particular inertial measurement units, body-worn activity recognition has gained popularity in recent years. In this context, the proposed work aims to recognize nine different gestures involved in daily activities using hand- and wrist-worn sensors. The analysis also considered different combinations of wearable sensors, in order to find the best combination in terms of unobtrusiveness and recognition accuracy. To achieve these goals, extensive experimentation was performed in a realistic environment: twenty users were asked to perform the selected gestures, and the data were then analyzed offline to extract significant features. To corroborate the analysis, the classification problem was treated with two commonly used supervised machine learning techniques, namely Decision Tree and Support Vector Machine, evaluating both a personal model and Leave-One-Subject-Out cross-validation. The results show that the proposed system recognizes the proposed gestures with an accuracy of 89.01% under Leave-One-Subject-Out cross-validation and is therefore promising for further investigation in real-life scenarios.
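    A minimal sketch of the Leave-One-Subject-Out protocol with the two classifiers mentioned (not the study's code); the features, gesture labels, and subject ids are hypothetical placeholders.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 15))            # placeholder ring/bracelet IMU features
        y = rng.integers(0, 9, size=400)          # the nine daily gestures
        subjects = rng.integers(0, 20, size=400)  # twenty users

        logo = LeaveOneGroupOut()  # hold out one subject per fold
        for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                          ("SVM", SVC(kernel="rbf"))]:
            acc = cross_val_score(clf, X, y, groups=subjects, cv=logo).mean()
            print(f"{name}: LOSO accuracy = {acc:.3f}")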