    Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer

    Computational Approaches for Remote Monitoring of Symptoms and Activities

    We now have a unique phenomenon in which significant computational power, storage, connectivity, and built-in sensors are willingly carried by many people as part of their lifestyle; two billion people now use smartphones. Rising health care costs in both the developed and the developing world motivate innovative smartphone-based solutions. In this work, a methodology for building a remote symptom monitoring system for rural people in developing countries is explored. The design, development, deployment, and evaluation of e-ESAS are described, and the system's performance is assessed through user feedback. A smartphone-based prototype activity detection system that can detect basic human activities for remote observers was also developed and explored. A majority-voting fusion technique, together with decision tree learners, was used to classify eight activities in a multi-sensor framework; this multimodal approach was examined in detail and evaluated for both single-subject and multi-subject cases. Time-delay embedding with expectation-maximization for a Gaussian mixture model was explored as a way to build an activity detection system with fewer sensors and a lower computational cost. The systems and algorithms developed in this work focus on remote monitoring using smartphones: the smartphone-based remote symptom monitoring system e-ESAS serves as a working tool for doctors to monitor essential symptoms of patients with breast cancer, and the activity detection system allows a remote observer to monitor basic human activities. For the activity detection system, the majority-voting fusion technique in the multi-sensor architecture is evaluated on eight activities in both single- and multiple-subject cases, and time-delay embedding with the expectation-maximization algorithm for a Gaussian mixture model is studied using data from multiple single-sensor cases.
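
    The multi-sensor fusion step mentioned above can be pictured as one decision tree learner per sensor stream, with per-window predictions combined by a simple majority vote. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the class name, the use of scikit-learn decision trees, and the integer-coded activity labels are all assumed.

```python
# Hypothetical sketch: per-sensor decision trees fused by majority voting.
# Assumes each sensor stream has already been windowed into feature vectors
# and that the eight activities are integer-coded labels (e.g. 0..7).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class MajorityVotingFusion:
    def __init__(self, n_sensors):
        self.trees = [DecisionTreeClassifier() for _ in range(n_sensors)]

    def fit(self, per_sensor_features, labels):
        # per_sensor_features: list of (n_windows, n_features) arrays, one per sensor
        for tree, X in zip(self.trees, per_sensor_features):
            tree.fit(X, labels)
        return self

    def predict(self, per_sensor_features):
        # Each sensor's tree votes on every window; the most frequent label wins.
        votes = np.stack(
            [tree.predict(X) for tree, X in zip(self.trees, per_sensor_features)],
            axis=1,
        )
        return np.array([np.bincount(row.astype(int)).argmax() for row in votes])
```

    Ties are broken here toward the lowest label index, which is only one of several reasonable choices.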

    A 'one-size-fits-most' walking recognition method for smartphones, smartwatches, and wearable accelerometers

    The ubiquity of personal digital devices offers unprecedented opportunities to study human behavior. Current state-of-the-art methods quantify physical activity using 'activity counts,' a measure that overlooks specific types of physical activity. We proposed a walking recognition method for sub-second tri-axial accelerometer data in which activity classification is based on the inherent features of walking: intensity, periodicity, and duration. We validated our method against 20 publicly available, annotated datasets of walking activity collected at various body locations (thigh, waist, chest, arm, wrist). We demonstrated that our method can estimate walking periods with high sensitivity and specificity: average sensitivity ranged between 0.92 and 0.97 across body locations, and average specificity for common daily activities was typically above 0.95. We also assessed the method's algorithmic fairness with respect to demographic and anthropometric variables and measurement contexts (body location, environment). Finally, we have released our method as open-source software in MATLAB and Python. Comment: 39 pages, 4 figures (incl. 1 supplementary), and 5 tables (incl. 2 supplementary).
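
    The walking cues named above (intensity, periodicity, duration) can be operationalized on windowed tri-axial data roughly as follows. This is only an illustrative sketch; the window length, amplitude threshold, cadence band, and minimum-duration values are assumptions, not the parameters of the published MATLAB/Python software.

```python
# Illustrative sketch: flag walking windows from tri-axial accelerometer data
# using intensity, periodicity, and duration cues. All thresholds are assumed.
import numpy as np

def walking_mask(acc, fs, win_s=3.0, amp_thresh=0.1,
                 cadence_band=(1.4, 2.3), min_dur_s=6.0):
    """acc: (n_samples, 3) acceleration in g; returns one bool per window."""
    win = int(win_s * fs)
    n_win = acc.shape[0] // win
    flags = np.zeros(n_win, dtype=bool)
    for i in range(n_win):
        seg = acc[i * win:(i + 1) * win]
        mag = np.linalg.norm(seg, axis=1) - 1.0          # deviation from gravity (~1 g)
        if mag.std() < amp_thresh:                       # intensity cue: enough movement?
            continue
        spec = np.abs(np.fft.rfft(mag - mag.mean()))
        freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
        dom = freqs[spec[1:].argmax() + 1]               # dominant (step) frequency
        flags[i] = cadence_band[0] <= dom <= cadence_band[1]   # periodicity cue
    # Duration cue: drop runs of walking-like windows that are too short.
    min_run, out, i = int(np.ceil(min_dur_s / win_s)), flags.copy(), 0
    while i < n_win:
        if flags[i]:
            j = i
            while j < n_win and flags[j]:
                j += 1
            if j - i < min_run:
                out[i:j] = False
            i = j
        else:
            i += 1
    return out
```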

    Activity Recognition for Incomplete Spinal Cord Injury Subjects Using Hidden Markov Models

    Successful activity recognition in patients with motor disabilities can improve patient care by providing researchers and clinicians with valuable information on patient movements and quality of life in real-world settings. Understanding the everyday activities of patients is important for rehabilitation. For researchers, having convenient, objective, and continuous data can drastically improve outcome measures to better compare therapies and, ultimately, make recommendations. For clinicians, individual assessment of compliance and outcomes outside the clinic can be more objective, permitting much more tailored recommendations to patients. Most importantly, for individual patients, activity recognition can make this improved health care possible by simply having patients wear a small sensor, minimizing the need for clinical visits while retaining the benefits of tailored healthcare. There are many activity trackers available on the market, but most of them have been designed for healthy subjects. Studies have shown that activity tracking systems designed for healthy subjects can perform poorly on mobility-impaired populations, such as those with incomplete spinal cord injury (iSCI), because of their unique patterns of movement. Because iSCI patient populations move in distinct ways, algorithms can and should be specifically tailored to them. By applying machine learning to movement data collected from this specific patient population, we demonstrate how an iSCI-specific system can improve activity recognition. Traditional activity recognition approaches analyze individual clips of accelerometer data to perform activity recognition. These static classifiers are easier to construct, as each clip of data is treated independently, but the structure of events over time is lost. This thesis attempts to improve upon the standard static classification method by augmenting these static classifiers with a dynamic state estimation model, a hidden Markov model (HMM). An HMM takes into account not only the information present in a clip of sensor data but also the context of that clip over time, which leads to higher classification accuracy. By using an HMM to post-process the predictions made by the static classifier, unlikely sequences of events can be detected and corrected. Data were collected from thirteen ambulatory incomplete spinal cord injury subjects who were instructed to perform a standardized set of activities while wearing a waist-worn accelerometer in the clinic. Activities included lying, sitting, standing, walking, wheeling, and stair climbing. The accelerometer data were parsed into two-second clips, and a standard set of time-series features was extracted from each clip. Those features were then analyzed by a static classifier to produce probabilistic estimates of the activity the subject was most likely performing. Those estimates were then input as observations into the HMM to reclassify ambiguous or improbable sequences of activities produced by the static classifier. Multiple classifiers and validation methods were used to assess the performance of the machine learning techniques. Using within-subject cross-validation, static classifiers achieved a classification accuracy of 86.3%; adding the hidden Markov model layer improved the accuracy by a further 2.6 percentage points to 88.9%. In subject-wise cross-validation, the hybrid static classifier and HMM model gave the highest classification accuracy of 64.3%, a 1.2 percentage-point improvement over the model using only static classifiers. Prediction accuracy was limited because some of the activities are nearly indistinguishable: sitting and wheeling, walking and stair climbing. Individuals with impaired movement can benefit from improved activity recognition to measure patient outcomes more objectively, conveniently, and continuously. Such measures help therapists, clinicians, and clinical researchers select the right physical or drug therapies and further refine those therapies to improve mobility in patients.
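
    The hybrid scheme described above feeds the static classifier's clip-level class probabilities into an HMM so that improbable activity transitions are corrected by decoding the most likely sequence. The sketch below shows one way to do this with a Viterbi pass; the activity list comes from the abstract, while the transition matrix and smoothing strength are illustrative assumptions rather than the thesis's fitted model.

```python
# Sketch: smooth per-clip class probabilities with an HMM (Viterbi decoding).
# The self-biased transition matrix discourages single-clip activity flips.
import numpy as np

ACTIVITIES = ["lying", "sitting", "standing", "walking", "wheeling", "stair climbing"]

def viterbi_smooth(clip_probs, transition, prior=None):
    """clip_probs: (T, K) per-clip class probabilities from the static classifier."""
    T, K = clip_probs.shape
    prior = np.full(K, 1.0 / K) if prior is None else prior
    log_obs = np.log(clip_probs + 1e-12)
    log_trans = np.log(transition + 1e-12)
    delta = np.log(prior + 1e-12) + log_obs[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans       # scores[i, j]: prev state i -> state j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path                                   # most likely activity index per clip

# Assumed transition structure: strong self-transitions, small chance of switching.
K = len(ACTIVITIES)
transition = np.full((K, K), 0.02)
np.fill_diagonal(transition, 1.0 - 0.02 * (K - 1))
```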

    Context and activity recognition for personalized mobile recommendations

    Through the use of mobile devices, contextual information about users can be derived and used as an additional information source for traditional recommendation algorithms. This paper presents a framework for detecting the context and activity of users by analyzing sensor data from a mobile device. The recognized activity and context serve as input for a recommender system, which is built on top of the framework. Through context-aware recommendations, users receive a personalized content offer consisting of relevant information such as points of interest, train schedules, and tourist information. An evaluation of the recommender system and the underlying context-recognition framework demonstrates the impact of the response times of external information providers. The data traffic required on the mobile device for the recommendations proves to be limited. A user evaluation confirms the usability and attractiveness of the recommender, and the recommendations are experienced as effective and useful for discovering new venues and relevant information.
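
    As a rough sketch of how recognized context might feed such a recommender, the snippet below ranks candidate items (points of interest, train schedules, tourist information) by how well they match the detected activity. The item structure, context fields, and scoring are invented for illustration and are not part of the paper's framework.

```python
# Hypothetical sketch: rank content items using the recognized activity as context.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    kind: str                      # e.g. "poi", "train_schedule", "tourist_info"
    relevant_activities: set

def recommend(items, context, top_k=5):
    """context: e.g. {'activity': 'walking'} as produced by the recognition framework."""
    def score(item):
        # Boost items whose relevant activities include the detected one.
        return 1.0 if context.get("activity") in item.relevant_activities else 0.0
    return sorted(items, key=score, reverse=True)[:top_k]

candidates = [
    Item("Museum of Fine Arts", "poi", {"walking", "standing"}),
    Item("Next trains from Central Station", "train_schedule", {"walking"}),
    Item("Guided city walking tour", "tourist_info", {"walking"}),
    Item("Highway traffic report", "tourist_info", {"driving"}),
]
print([i.name for i in recommend(candidates, {"activity": "walking"})])
```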

    Objective and subjective measurement of sedentary behavior in human adults: a toolkit.

    Objectives: Human biologists are increasingly interested in measuring and comparing physical activity across different societies. Sedentary behavior, which refers to time spent sitting or lying down while awake, is a large component of daily 24-hour movement patterns in humans and has been linked to poor health outcomes, such as risk of all-cause and cardiovascular mortality, independently of physical activity. As such, it is important for researchers aiming to measure human movement patterns to use the resources available to them as effectively as possible to capture sedentary behavior. Methods: This toolkit outlines objective (device-based) and subjective (self-report) methods for measuring sedentary behavior in free-living contexts, the benefits and drawbacks of each, and novel options for combined use to maximize scientific rigor. Throughout this toolkit, emphasis is placed on considerations for the use of these methods in various field conditions and varying cultural contexts. Results: Objective measures such as inclinometers are the gold standard for measuring total sedentary time, but they typically cannot capture contextual information or determine which specific behaviors are taking place. Subjective measures such as questionnaires and 24-hour recall methods can provide measurements of time spent in specific sedentary behaviors but are subject to measurement error and response bias. Conclusions: We recommend that researchers use the method(s) that suit the research question; inclinometers are recommended for measuring total sedentary time, while self-report methods are recommended for measuring time spent in particular contexts of sedentary behavior.