8 research outputs found

    The Usage of Statistical Learning Methods on Wearable Devices and a Case Study: Activity Recognition on Smartwatches

    The aim of this study is to explore the usage of statistical learning methods on wearable devices and to carry out an experimental study on the recognition of human activities from smartwatch sensor data. To achieve this objective, mobile applications that run on a smartwatch and a smartphone were developed to collect training data and detect human activity momentarily; 500 pattern data were obtained at 4-second intervals for each activity (walking, typing, stationary, running, standing, writing on a board, brushing teeth, cleaning and writing). The created dataset was tested with five different statistical learning methods (Naive Bayes, k-nearest neighbour (kNN), logistic regression, Bayesian network and multilayer perceptron) and their performances were compared.
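    One of the five compared methods, kNN, can be sketched in a few lines. This is an illustrative, self-contained toy (the study's actual feature extraction and data are not reproduced; all features and values here are invented): each 4-second sensor window is reduced to a feature vector and classified by majority vote among its nearest training patterns.

```python
import math
import random

# Hypothetical sketch of a k-nearest-neighbour (kNN) activity classifier over
# feature vectors extracted from 4-second smartwatch sensor windows. The
# other four methods compared in the study (Naive Bayes, logistic regression,
# Bayesian network, multilayer perceptron) are omitted; data are synthetic.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training patterns.

    train: list of (feature_vector, activity_label) pairs.
    """
    nearest = sorted(train, key=lambda p: euclidean(p[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy data: invented mean/variance features for two of the nine activities.
random.seed(0)
train = [([random.gauss(0.2, 0.05), random.gauss(0.01, 0.005)], "stationary")
         for _ in range(20)]
train += [([random.gauss(1.5, 0.2), random.gauss(0.8, 0.1)], "running")
          for _ in range(20)]

print(knn_predict(train, [1.4, 0.75]))  # a running-like window
```

    In practice each of the five classifiers would be trained and evaluated on the same 500-pattern dataset so their accuracies can be compared directly.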

    Detection of Temporalis Muscle Activity through Mechanically Amplified Force Measurement in Glasses

    Doctoral dissertation, Department of Mechanical and Aerospace Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Kunwoo Lee. Recently, glasses have been widely adopted as wearable devices that provide virtual and augmented reality in addition to their natural function as a visual aid. These approaches, however, have not exploited the inherent kinematic structure of glasses, which is composed of the temple and the hinge. When glasses are worn, force is concentrated at the hinge, which connects the front piece and the temple, by the law of the lever. In addition, since the temple passes over the temporalis muscle, chewing and wink activity, anatomically driven by the contraction and relaxation of the temporalis muscle, can be detected from the mechanically amplified force measured at the hinge. This study presents a new and effective method for automatic and objective measurement of temporalis muscle activity through the natural lever mechanism of glasses. By implementing a load-cell-integrated wireless circuit module inserted into both hinges of a 3D-printed glasses frame, we developed a system that responds to temporalis muscle activity consistently, regardless of form factors that differ from person to person. This offers the potential to improve on previous studies by avoiding the morphological, behavioral, and environmental constraints of skin-attached, proximity, and sound sensors. In this study, we collected data featuring sedentary rest, chewing, walking, chewing while walking, talking and winking from a 10-subject user study. The collected data were transformed into a series of 84-dimensional feature vectors, each composed of statistical features from both the temporal and spectral domains. These feature vectors were then used to train a classifier model implemented with the support vector machine (SVM) algorithm. The model classified the featured activities (chewing, wink, and physical activity) with an average F1 score of 93.7%.
This study provides a novel approach to the monitoring of ingestive behavior (MIB) in a non-intrusive and unobtrusive manner. It makes it possible to apply MIB in daily life by distinguishing food intake from other physical activities, such as walking, talking, and winking, with high accuracy and wearability. Furthermore, by applying this approach to a sensor-integrated hair band, it could potentially be used for the medical monitoring of sleep bruxism or temporomandibular dysfunction.
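    The feature-extraction stage described above can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the exact 84-dimensional feature set is not specified here, so the particular temporal statistics, the naive DFT, and the band grouping are all assumptions.

```python
import cmath
import math

# Illustrative sketch: each windowed hinge-force signal is reduced to
# statistical features from the temporal and spectral domains, then
# concatenated into one feature vector (the thesis uses 84 dimensions;
# the specific features chosen here are assumed, not from the source).

def temporal_features(frame):
    n = len(frame)
    mean = sum(frame) / n
    var = sum((x - mean) ** 2 for x in frame) / n
    rms = math.sqrt(sum(x * x for x in frame) / n)
    return [mean, math.sqrt(var), rms, max(frame), min(frame)]

def spectral_features(frame, bands=4):
    """Naive DFT magnitudes grouped into coarse frequency-band energies."""
    n = len(frame)
    mags = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    band = max(1, len(mags) // bands)
    return [sum(mags[i:i + band]) for i in range(0, band * bands, band)]

def feature_vector(frame):
    return temporal_features(frame) + spectral_features(frame)

# A toy 16-sample frame resembling a periodic chewing burst.
frame = [math.sin(2 * math.pi * 3 * t / 16) for t in range(16)]
vec = feature_vector(frame)
print(len(vec))  # 5 temporal + 4 spectral features
```

    Vectors built this way would then be fed to the SVM classifier, with its hyperparameters chosen by grid search and cross-validation as outlined in chapter 6.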

    Using Hidden Markov Models to Segment and Classify Wrist Motions Related to Eating Activities

    Advances in body sensing and mobile health technology have created new opportunities for empowering people to take a more active role in managing their health. Measurements of dietary intake are commonly used for the study and treatment of obesity. However, the most widely used tools rely upon self-report and require considerable manual effort, leading to underreporting of consumption, non-compliance, and discontinued use over the long term. We are investigating the use of wrist-worn accelerometers and gyroscopes to automatically recognize eating gestures. In order to improve recognition accuracy, we studied the sequential dependency of actions during eating. In chapter 2 we first undertook the task of finding a set of wrist motion gestures small and descriptive enough to model the actions performed by an eater during consumption of a meal. We found a set of four actions: rest, utensiling, bite, and drink; any alternative gesture is referred to as the other gesture. The stability of the gesture definitions was evaluated using an inter-rater reliability test. Later, in chapter 3, 25 meals were hand labeled and used to study the existence of sequential dependence among the gestures. To study this, three types of classifiers were built: 1) a K-nearest neighbor (KNN) classifier, which uses no sequential context, 2) a hidden Markov model (HMM), which captures the sequential context of sub-gesture motions, and 3) HMMs that model inter-gesture sequential dependencies. We built first-order to sixth-order HMMs to evaluate the usefulness of increasing amounts of sequential dependence in aiding recognition. The first two were our baseline algorithms. We found that adding knowledge of the sequential dependence of gestures achieved an accuracy of 96.5%, an improvement of 20.7% and 12.2% over the KNN and the sub-gesture HMM, respectively.
    Lastly, in chapter 4, we automatically segmented a continuous wrist motion signal and assessed classification performance on it for each of the three classifiers. Again, knowledge of sequential dependence enhances the recognition of gestures in unsegmented data, achieving 90% accuracy and improving 30.1% and 18.9% over the KNN and the sub-gesture HMM, respectively.
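    The benefit of inter-gesture sequential context can be illustrated with a first-order HMM decoded by the Viterbi algorithm. This is a toy sketch, not the thesis's trained model: the transition and emission probabilities below are invented to show how a transition model (e.g. bite often following utensiling) can override an ambiguous per-gesture score.

```python
import math

# Hypothetical sketch of first-order HMM decoding over the gesture alphabet.
# All probabilities are illustrative, not estimated from the 25 meals.

STATES = ["rest", "utensiling", "bite", "drink", "other"]

def viterbi(obs_loglik, log_trans, log_init):
    """obs_loglik: list over time of {state: log P(observation | state)}."""
    path = [{s: (log_init[s] + obs_loglik[0][s], None) for s in STATES}]
    for t in range(1, len(obs_loglik)):
        layer = {}
        for s in STATES:
            best = max(STATES, key=lambda p: path[-1][p][0] + log_trans[p][s])
            layer[s] = (path[-1][best][0] + log_trans[best][s]
                        + obs_loglik[t][s], best)
        path.append(layer)
    # Backtrack from the best final state.
    state = max(STATES, key=lambda s: path[-1][s][0])
    seq = [state]
    for layer in reversed(path[1:]):
        state = layer[state][1]
        seq.append(state)
    return seq[::-1]

uniform = math.log(0.2)
log_init = {s: uniform for s in STATES}
log_trans = {p: {s: uniform for s in STATES} for p in STATES}
log_trans["utensiling"] = {s: math.log(0.1) for s in STATES}
log_trans["utensiling"]["bite"] = math.log(0.6)   # bite tends to follow

obs = [
    # t=0: clearly utensiling; t=1: ambiguous between bite and other.
    {s: math.log(0.9) if s == "utensiling" else math.log(0.025)
     for s in STATES},
    {"rest": math.log(0.05), "utensiling": math.log(0.05),
     "bite": math.log(0.4), "drink": math.log(0.05),
     "other": math.log(0.45)},
]
print(viterbi(obs, log_trans, log_init))
```

    Note that a context-free classifier would label the second observation "other" (its highest emission score), while the transition model resolves it to "bite", which is the kind of correction that yields the reported accuracy gains.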

    Context-Aware Complex Human Activity Recognition Using Hybrid Deep Learning Model

    Smart devices, such as smartphones and smartwatches, are promising platforms for the automatic recognition of human activities. However, it is difficult to accurately monitor complex human activities on these platforms due to interclass pattern similarities, which occur when different human activities exhibit similar signal patterns or characteristics. Current smartphone-based recognition systems depend on traditional sensors, such as accelerometers and gyroscopes, which are built into these devices. Therefore, apart from the information from the traditional sensors, these systems lack the contextual information needed to support automatic activity recognition. In this article, we explore environmental contexts, such as illumination (light conditions) and noise level, to supplement the sensory data obtained from the traditional sensors, using a hybrid of Convolutional Neural Network and Long Short-Term Memory (CNN–LSTM) learning models. The models performed sensor fusion by augmenting low-level sensor signals with rich contextual data to improve recognition accuracy and generalization. Two sets of experiments were performed to validate the proposed solution. The first set used triaxial inertial sensing signals to train baseline models, while the second combined the inertial signals with contextual information from environmental sensors. The results demonstrate that hybrid deep learning models augmented with contextual information, such as environmental noise level and light conditions, achieved better recognition accuracy than the baseline activity recognition models without contextual information.
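    The fusion step described above, augmenting low-level inertial windows with environmental context before they reach the CNN–LSTM, can be sketched as follows. The network itself is omitted; the feature layout, units, and normalisation ranges here are assumptions for illustration, not taken from the article.

```python
# Illustrative sketch of context-augmented sensor fusion: a triaxial inertial
# window is flattened and contextual features (light, noise) are appended.
# Ranges and names below are assumed, not from the source article.

def normalise(value, lo, hi):
    """Scale a raw sensor reading into [0, 1] for the given expected range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def fuse_window(accel_window, gyro_window, light_lux, noise_db):
    """Flatten one triaxial inertial window and append contextual features."""
    inertial = [v for sample in accel_window for v in sample]
    inertial += [v for sample in gyro_window for v in sample]
    context = [normalise(light_lux, 0, 1000),   # assumed indoor lux range
               normalise(noise_db, 30, 90)]     # quiet room .. loud street
    return inertial + context

# One 3-sample toy window (real windows would span seconds of samples).
accel = [(0.01, 0.02, 9.81), (0.02, 0.01, 9.79), (0.00, 0.03, 9.80)]
gyro = [(0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.1)]
vec = fuse_window(accel, gyro, light_lux=350, noise_db=55)
print(len(vec))  # 18 inertial values + 2 contextual features
```

    Normalising the contextual channels keeps them on a scale comparable to the inertial features, so neither dominates the learned representation.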

    Advancement in Dietary Assessment and Self-Monitoring Using Technology

    Although methods to assess or self-monitor intake may be considered similar, the intended function of each is quite distinct. For the assessment of dietary intake, methods aim to measure food and nutrient intake and/or to derive dietary patterns for determining diet-disease relationships, population surveillance or the effectiveness of interventions. In comparison, dietary self-monitoring primarily aims to create awareness of and reinforce individual eating behaviours, in addition to tracking foods consumed. Advancements in the capabilities of technologies, such as smartphones and wearable devices, have enhanced the collection, analysis and interpretation of dietary intake data in both contexts. This Special Issue invites submissions on the use of novel technology-based approaches for the assessment of food and/or nutrient intake and for self-monitoring eating behaviours. Submissions may document any part of the development and evaluation of the technology-based approaches. Examples may include:
    - web adaptation of existing dietary assessment or self-monitoring tools (e.g., food frequency questionnaires, screeners)
    - image-based or image-assisted methods
    - mobile/smartphone applications for capturing intake for assessment or self-monitoring
    - wearable cameras to record dietary intake or eating behaviours
    - body sensors to measure eating behaviours and/or dietary intake
    - technology-based methods that complement aspects of traditional dietary assessment or self-monitoring, such as portion size estimation

    A data fusion-based hybrid sensory system for older people’s daily activity recognition.

    The population aged 60 and over is growing fast. Ageing-related changes, such as physical or cognitive decline, can affect people's quality of life, resulting in injuries, mental health problems or a lack of physical activity. Sensor-based human activity recognition (HAR) has become one of the most promising assistive technologies for older people's daily life. The HAR literature suggests that each sensor modality has its strengths and limitations, and that single sensor modalities may not cope with complex situations in practice. This research aims to design and implement a hybrid sensory HAR system to provide more comprehensive, practical and accurate surveillance for older people, assisting them to live independently. This research:
    1) designs and develops a hybrid HAR system that provides spatio-temporal surveillance for older people by combining wrist-worn sensors and room-mounted ambient sensors (passive infrared); the wearable data are used to recognize the defined specific daily activities, and the ambient information is used to infer the occupant's room-level daily routine;
    2) proposes a unique and effective data fusion method to hybridize the two-source sensory data, in which the room-level location information captured by the ambient sensors also triggers sub-classification models pretrained on room-assigned wearable data;
    3) implements augmented features extracted from the attitude angles of the wearable device and explores the contribution of the new features to HAR;
    4) proposes a feature selection (FS) method, named mRMJR-KCCA, based on kernel canonical correlation analysis (KCCA), which maximizes the relevance between each feature candidate and the target class labels while simultaneously minimizing the joint redundancy between the candidate and the already selected features;
    5) demonstrates all the proposed methods with ground-truth data collected from recruited participants in home settings.
The proposed system has three function modes: 1) the pure wearable sensing mode (the whole classification model), which can identify all the defined specific daily activities on its own and continues to function when ambient sensing fails; 2) the pure ambient sensing mode, which delivers the occupant's room-level daily routine without wearable sensing; and 3) the data fusion mode (room-based sub-classification mode), which provides more comprehensive and accurate HAR surveillance when both wearable and ambient sensing function properly. The research also applies mutual information (MI)-based FS methods for feature selection, and Support Vector Machine (SVM) and Random Forest (RF) classifiers for classification. The experimental results demonstrate that the proposed hybrid sensory system improves recognition accuracy to 98.96% after applying data fusion with RF classification and mRMJR-KCCA feature selection. Furthermore, the improved results are achieved with a much smaller number of features compared with recognizing all the defined activities using wearable data alone. The work is not directly compared with others, since few similar existing works share the proposed data fusion method and the introduced feature set.
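    The dispatch logic of the data fusion mode can be sketched as follows: the ambient (PIR) layer supplies a room-level location, which selects the sub-classifier pretrained on wearable data for that room. This is a stubbed illustration only; the room names, activity labels, threshold feature, and dispatch API are all invented, and the real sub-models would be trained SVM/RF classifiers.

```python
# Minimal sketch of the room-triggered sub-classification (data fusion) mode.
# Sub-classifiers are stubbed with simple thresholds; everything here is
# illustrative, not the thesis's trained models.

def kitchen_model(features):
    """Stub for the kitchen sub-classifier (an SVM/RF in the real system)."""
    return "stirring" if features["wrist_rotation"] > 0.5 else "washing up"

def bedroom_model(features):
    """Stub for the bedroom sub-classifier."""
    return "sleeping" if features["wrist_rotation"] < 0.1 else "dressing"

SUB_MODELS = {"kitchen": kitchen_model, "bedroom": bedroom_model}

def recognise(room, features):
    """Fusion mode: route wearable features to the room-specific model.

    room comes from the ambient (PIR) layer; features from the wrist sensor.
    """
    model = SUB_MODELS.get(room)
    if model is None:  # ambient sensing failed: fall back to other modes
        raise LookupError(f"no sub-model for room {room!r}")
    return model(features)

print(recognise("kitchen", {"wrist_rotation": 0.8}))
```

    Restricting each sub-model to the activities plausible in its room is what lets the fused system reach high accuracy with far fewer features than a single whole-house classifier.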

    A Study of Temporal Action Sequencing During Consumption of a Meal

    No full text