1,047 research outputs found

    Recognition of elementary upper limb movements in an activity of daily living using data from wrist mounted accelerometers

    In this paper we present a methodology as a proof of concept for recognizing fundamental movements of the human arm (extension, flexion and rotation of the forearm) involved in ‘making-a-cup-of-tea’, a typical activity of daily living (ADL). The movements are initially performed in a controlled environment as part of a training phase and the data are grouped into three clusters using k-means clustering. Movements performed during the ADL, forming part of the testing phase, are associated with each cluster label using a minimum distance classifier in a multi-dimensional feature space, comprising features selected from a ranked set of 30 features, using Euclidean and Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and our results show that the proposed methodology can detect the three movements with an overall average accuracy of 88% across all subjects and arm movement types using the Euclidean distance classifier.
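
    A minimal sketch (not the authors' code) of the classification step described above: test feature vectors are assigned the label of the nearest k-means cluster centre, under either Euclidean or Mahalanobis distance. The feature dimensionality and variable names are illustrative assumptions.

    import numpy as np
    from scipy.spatial.distance import mahalanobis
    from sklearn.cluster import KMeans

    def train_clusters(X_train, k=3):
        """Group training-phase feature vectors into k movement clusters."""
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train)
        return km.cluster_centers_

    def classify(x, centers, cov_inv=None):
        """Label a test vector by its minimum-distance cluster centre."""
        if cov_inv is None:  # Euclidean distance
            d = np.linalg.norm(centers - x, axis=1)
        else:                # Mahalanobis distance
            d = np.array([mahalanobis(x, c, cov_inv) for c in centers])
        return int(np.argmin(d))

    # Example with a hypothetical 2-D feature space (e.g. mean and RMS
    # of the acceleration signal).
    X_train = np.random.randn(60, 2)
    centers = train_clusters(X_train)
    cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
    label = classify(np.array([0.1, -0.3]), centers, cov_inv)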

    Recognizing upper limb movements with wrist worn inertial sensors using k-means clustering classification

    In this paper we present a methodology for recognizing three fundamental movements of the human forearm (extension, flexion and rotation) using pattern recognition applied to the data from a single wrist-worn, inertial sensor. We propose that this technique could be used as a clinical tool to assess rehabilitation progress in neurodegenerative pathologies such as stroke or cerebral palsy by tracking the number of times a patient performs specific arm movements (e.g. prescribed exercises) with their paretic arm throughout the day. We demonstrate this with healthy subjects and stroke patients in a simple proof of concept study in which these arm movements are detected during an archetypal activity of daily-living (ADL) – ‘making-a-cup-of-tea’. Data are collected from a tri-axial accelerometer and a tri-axial gyroscope located proximal to the wrist. In a training phase, movements are initially performed in a controlled environment and are represented by a ranked set of 30 time-domain features. Using a sequential forward selection technique, for each set of feature combinations three clusters are formed using k-means clustering, followed by 10 runs of 10-fold cross-validation on the training data to determine the best feature combinations. For the testing phase, movements performed during the ADL are associated with each cluster label using a minimum distance classifier in a multi-dimensional feature space, comprising the best-ranked features, using Euclidean or Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and four stroke survivors, and our results show that the proposed methodology can detect the three movements performed during the ADL with an overall average accuracy of 88% using the accelerometer data and 83% using the gyroscope data across all healthy subjects and arm movement types. The average accuracy across all stroke survivors was 70% using accelerometer data and 66% using gyroscope data. We also use a Linear Discriminant Analysis (LDA) classifier and a Support Vector Machine (SVM) classifier in association with the same set of features to detect the three arm movements, and compare the results to demonstrate the effectiveness of our proposed methodology.
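
    A hedged sketch of the feature-selection loop described above: sequential forward selection over the ranked time-domain features, scoring each candidate feature set with 10 runs of 10-fold cross-validation of a k-means-based nearest-centroid labelling. The cluster-to-label mapping and the cap on the number of selected features are assumptions, not the paper's exact procedure.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import RepeatedKFold

    def cv_score(X, y, n_splits=10, n_repeats=10):
        """Mean accuracy of k-means nearest-centroid labelling over
        repeated cross-validation; y holds integer movement labels."""
        rkf = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats,
                            random_state=0)
        accs = []
        for tr, te in rkf.split(X):
            km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[tr])
            # Map each cluster to the majority training label it contains.
            to_label = {c: np.bincount(y[tr][km.labels_ == c]).argmax()
                        for c in range(3)}
            pred = np.array([to_label[c] for c in km.predict(X[te])])
            accs.append((pred == y[te]).mean())
        return float(np.mean(accs))

    def forward_select(X, y, max_features=5):
        """Greedily add the feature that most improves the CV score."""
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            scores = {f: cv_score(X[:, selected + [f]], y) for f in remaining}
            best = max(scores, key=scores.get)
            selected.append(best)
            remaining.remove(best)
        return selected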

    Continuous Estimation of Smoking Lapse Risk from Noisy Wrist Sensor Data Using Sparse and Positive-Only Labels

    Estimating the imminent risk of adverse health behaviors provides opportunities for developing effective behavioral intervention mechanisms to prevent the occurrence of the target behavior. One of the key goals is to find opportune moments for intervention by passively detecting the rising risk of an imminent adverse behavior. Significant progress in mobile health research and the ability to continuously sense internal and external states of individual health and behavior has paved the way for detecting diverse risk factors from mobile sensor data. The next frontier in this research is to account for the combined effects of these risk factors to produce a composite risk score of adverse behaviors using wearable sensors convenient for daily use. Developing a machine learning-based model for assessing the risk of smoking lapse in the natural environment faces significant outstanding challenges requiring the development of novel and unique methodologies for each of them. The first challenge is constructing an accurate representation of noisy and incomplete sensor data to encode the present and historical influence of behavioral cues, mental states, and the interactions of individuals with their ever-changing environment. The next noteworthy challenge is the absence of confirmed negative labels for low-risk states and of adequately precise annotations of high-risk states. Finally, the model should work on convenient wearable devices to facilitate widespread adoption in research and practice. In this dissertation, we develop methods that account for the multi-faceted nature of smoking lapse behavior to train and evaluate a machine learning model capable of estimating composite risk scores in the natural environment. We first develop mRisk, which combines the effects of various mHealth biomarkers such as stress, physical activity, and location history in producing the risk of smoking lapse using sequential deep neural networks. We propose an event-based encoding of sensor data to reduce the effect of noise and then present an approach to efficiently model the historical influence of recent and past sensor-derived contexts on the likelihood of smoking lapse. To circumvent the lack of confirmed negative labels (i.e., annotated low-risk moments) and only a few positive labels (i.e., sensor-based detection of smoking lapse corroborated by self-reports), we propose a new loss function to accurately optimize the models. We build the mRisk models using biomarker (stress, physical activity) streams derived from chest-worn sensors. Adapting the models to work with less invasive and more convenient wrist-based sensors requires adapting the biomarker detection models to work with wrist-worn sensor data. To that end, we develop robust stress and activity inference methodologies from noisy wrist-sensor data. We first propose CQP, which quantifies wrist-sensor collected PPG data quality. Next, we show that integrating CQP within the inference pipeline improves accuracy-yield trade-offs associated with stress detection from wrist-worn PPG sensors in the natural environment. mRisk also requires sensor-based precise detection of smoking events and confirmation through self-reports to extract positive labels. Hence, we develop rSmoke, an orientation-invariant smoking detection model that is robust to the variations in sensor data resulting from orientation switches in the field. We train the proposed mRisk risk estimation models using the wrist-based inferences of lapse risk factors.
    To evaluate the utility of the risk models, we simulate the delivery of intelligent smoking interventions to at-risk participants as informed by the composite risk scores. Our results demonstrate the envisaged impact of machine learning-based models operating on wrist-worn wearable sensor data to output continuous smoking lapse risk scores. The novel methodologies we propose throughout this dissertation help instigate a new frontier in smoking research that can potentially improve the smoking abstinence rate in participants willing to quit.
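
    The dissertation's exact loss function is not reproduced above, so the following is only an illustrative stand-in: a standard non-negative positive-unlabeled (PU) risk estimator of the kind commonly used when only sparse positive labels and no confirmed negatives are available. The positive-class prior pi is an assumed hyperparameter.

    import torch
    import torch.nn.functional as F

    def nn_pu_loss(scores_pos, scores_unl, pi=0.05):
        """Non-negative PU risk (Kiryo et al., 2017) with a logistic
        surrogate loss; inputs are raw model scores (logits)."""
        loss_pos = F.softplus(-scores_pos).mean()        # positives scored as +1
        loss_pos_as_neg = F.softplus(scores_pos).mean()  # positives scored as -1
        loss_unl_as_neg = F.softplus(scores_unl).mean()  # unlabeled scored as -1
        # Estimated negative risk, clamped at zero so the model cannot
        # overfit the few positive labels.
        neg_risk = loss_unl_as_neg - pi * loss_pos_as_neg
        return pi * loss_pos + torch.clamp(neg_risk, min=0.0)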

    A 'one-size-fits-most' walking recognition method for smartphones, smartwatches, and wearable accelerometers

    The ubiquity of personal digital devices offers unprecedented opportunities to study human behavior. Current state-of-the-art methods quantify physical activity using 'activity counts,' a measure which overlooks specific types of physical activities. We proposed a walking recognition method for sub-second tri-axial accelerometer data, in which activity classification is based on the inherent features of walking: intensity, periodicity, and duration. We validated our method against 20 publicly available, annotated datasets on walking activity data collected at various body locations (thigh, waist, chest, arm, wrist). We demonstrated that our method can estimate walking periods with high sensitivity and specificity: average sensitivity ranged between 0.92 and 0.97 across various body locations, and average specificity for common daily activities was typically above 0.95. We also assessed the method's algorithmic fairness to demographic and anthropometric variables and measurement contexts (body location, environment). Finally, we have released our method as open-source software in MATLAB and Python.
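
    A minimal sketch of the three walking cues named above (intensity, periodicity, duration) applied to windows of tri-axial accelerometer data. The thresholds, window length, and the 1.4-2.3 Hz cadence band are illustrative choices, not the paper's validated parameters.

    import numpy as np

    def is_walking_window(acc, fs=50.0, band=(1.4, 2.3), min_std=0.1):
        """acc: (n, 3) accelerometer window in g; fs: sampling rate in Hz."""
        mag = np.linalg.norm(acc, axis=1) - 1.0  # remove the 1 g gravity offset
        if mag.std() < min_std:                  # intensity cue
            return False
        spec = np.abs(np.fft.rfft(mag * np.hanning(len(mag))))
        freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
        peak = freqs[spec[1:].argmax() + 1]      # dominant frequency, skipping DC
        return bool(band[0] <= peak <= band[1])  # periodicity cue

    def walking_windows(acc, fs=50.0, win_s=3.0):
        """Flag consecutive non-overlapping windows; the duration cue then
        keeps only sufficiently long runs of flagged windows."""
        n = int(win_s * fs)
        return [is_walking_window(acc[i:i + n], fs)
                for i in range(0, len(acc) - n + 1, n)]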

    Instructor Activity Recognition Using Smartwatch and Smartphone Sensors

    During a classroom session, an instructor performs several activities, such as writing on the board, speaking to the students, and gesturing to explain a concept. A record of the time spent on each of these activities could be valuable information for instructors to virtually observe their own style of instruction. It can help in identifying activities that engage the students more, thereby enhancing teaching effectiveness and efficiency. In this work, we present a preliminary study on profiling multiple activities of an instructor in the classroom using smartwatch and smartphone sensor data. We use two benchmark datasets to test the feasibility of classifying the activities. Comparing multiple machine learning techniques, we finally propose a hybrid deep recurrent neural network-based approach that performs better than the other techniques.
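
    A hedged sketch of a hybrid deep recurrent classifier of the kind proposed above: convolutional layers extract local motion features from raw sensor windows and a recurrent layer models their temporal order. The layer sizes, window length, channel count (accelerometer plus gyroscope), and number of activity classes are assumptions.

    from tensorflow.keras import layers, models

    def build_model(window_len=128, n_channels=6, n_classes=5):
        """CNN front end for local motion features, GRU for temporal order."""
        return models.Sequential([
            layers.Input(shape=(window_len, n_channels)),  # accel + gyro axes
            layers.Conv1D(64, 5, activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(64, 5, activation="relu"),
            layers.GRU(64),                                # recurrent stage
            layers.Dense(n_classes, activation="softmax"),
        ])

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])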

    Gesture recognition by learning local motion signatures using smartphones

    In recent years, gesture and activity recognition has become an important area of research for modern health care systems. An activity is recognized by learning from human body postures and signatures. Presently, all smartphones are equipped with accelerometer and gyroscope sensors, and the readings of these sensors can be utilized as input to a classifier to predict human activity. Although human activity recognition has gained notable scientific interest in recent years, accuracy, scalability and robustness still need significant improvement before it can serve as a solution to most real-world problems. This paper aims to fill the identified research gap and proposes a multistage prediction model based on Grid Search-tuned Logistic Regression and a Gradient Boosting Decision Tree. The UCI-HAR dataset has been used to perform gesture recognition by learning local motion signatures. The proposed approach exhibits improved accuracy over pre-existing techniques for human activity recognition.
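
    A hedged sketch of a multistage pipeline like the one proposed: a grid-searched logistic regression as the first stage, with a gradient boosting decision tree as the second. Routing only low-confidence first-stage predictions to the second stage is an assumed design; the paper's exact staging rule may differ.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    def fit_two_stage(X, y):
        """Stage 1: grid-searched logistic regression. Stage 2: GBDT."""
        grid = GridSearchCV(LogisticRegression(max_iter=1000),
                            {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
        lr = grid.fit(X, y).best_estimator_
        gbdt = GradientBoostingClassifier().fit(X, y)
        return lr, gbdt

    def predict_two_stage(lr, gbdt, X, conf=0.8):
        """Defer samples the first stage is unsure about to the second."""
        proba = lr.predict_proba(X)
        pred = lr.classes_[proba.argmax(axis=1)]
        unsure = proba.max(axis=1) < conf
        if unsure.any():
            pred[unsure] = gbdt.predict(X[unsure])
        return pred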