
    Smartphone Sensor-Based Activity Recognition by Using Machine Learning and Deep Learning Algorithms

    Article originally published in the International Journal of Machine Learning and Computing. Smartphones are widely used today, and their embedded sensors make it possible to detect changes in the user's environment. In this paper we propose a method that identifies human activities with reasonably high accuracy from smartphone sensor data. First, raw smartphone sensor data are collected for two categories of human activity: motion-based, e.g., walking and running; and phone-movement-based, e.g., left-right, up-down, clockwise, and counterclockwise movement. Two types of feature extraction are designed over the raw sensor data, and activity recognition is analyzed using machine learning classification models based on these features. Second, activity recognition performance is analyzed with a Convolutional Neural Network (CNN) model using only the raw data. Our experiments show substantial improvement in the results from the addition of well-designed features and from applying the CNN model to the raw smartphone sensor data.
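
    The following is a minimal, hypothetical sketch of this two-stage approach (the data, window size, feature set, and model shapes are invented for illustration, not taken from the paper): hand-crafted statistical features feed a classical classifier, while a small 1-D CNN consumes the raw windows directly.

```python
# Illustrative sketch, not the authors' code: windowed statistical features
# from tri-axial accelerometer data, plus a minimal 1-D CNN on raw windows.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """window: (n_samples, 3) accelerometer window -> hand-crafted features."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(),
                  np.abs(np.fft.rfft(x)).argmax()]  # dominant frequency bin
    return np.array(feats)

# Synthetic stand-in data: 200 windows of 128 samples, 4 activity classes.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 128, 3)).astype(np.float32)
y = rng.integers(0, 4, size=200)

# Stage 1: classifier on hand-crafted features.
X_feat = np.stack([extract_features(w) for w in X_raw])
clf = RandomForestClassifier().fit(X_feat, y)

# Stage 2: CNN on raw windows (channels-first layout for Conv1d).
cnn = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 4))
logits = cnn(torch.from_numpy(X_raw).permute(0, 2, 1))  # (200, 4) class scores
```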

    Human activity recognition on smartphones for mobile context awareness

    Activity-Based Computing [1] aims to capture the state of the user and their environment by exploiting heterogeneous sensors in order to adapt exogenous computing resources. When these sensors are attached to the subject's body, they permit continuous monitoring of numerous physiological signals. This has appealing uses in healthcare applications, e.g., exploiting Ambient Intelligence (AmI) in daily activity monitoring for elderly people. In this paper, we present a system for human physical Activity Recognition (AR) using smartphone inertial sensors. As mobile phones are limited in terms of energy and computing power, we propose a novel hardware-friendly approach to multiclass classification. This method adapts the standard Support Vector Machine (SVM) to exploit fixed-point arithmetic. In addition to the clear computational advantages of fixed-point arithmetic, it is easy to show the regularization effect of the number of bits and its connection with Statistical Learning Theory. A comparison with the traditional SVM shows a significant improvement in computational cost while maintaining similar accuracy, which can contribute to more sustainable systems for AmI.
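
    A hedged sketch of the core idea, not the paper's exact method: quantize a trained linear SVM's weights to b-bit integers so the decision function can be evaluated with integer arithmetic only, then check how often its sign agrees with the floating-point model. Fewer bits act like a coarser, more strongly regularized model.

```python
# Fixed-point SVM sketch (assumed quantization scheme, synthetic data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
svm = LinearSVC(dual=False).fit(X, y)

def to_fixed_point(v, bits):
    """Scale values into the signed b-bit integer range and round."""
    scale = (2 ** (bits - 1) - 1) / np.abs(v).max()
    return np.round(v * scale).astype(np.int64), scale

bits = 8  # fewer bits => coarser model, stronger regularization effect
w_q, sw = to_fixed_point(svm.coef_[0], bits)
x_q, sx = to_fixed_point(X, bits)
b_q = int(round(svm.intercept_[0] * sw * sx))

# Integer-only decision values; the sign should match the float SVM
# for most samples, up to quantization error.
scores_int = x_q @ w_q + b_q
agreement = np.mean((scores_int > 0) == (svm.decision_function(X) > 0))
print(f"{bits}-bit fixed-point agreement with float SVM: {agreement:.3f}")
```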

    Monitoring Functional Capability of Individuals with Lower Limb Amputations Using Mobile Phones

    To be effective, a prescribed prosthetic device must match the functional requirements and capabilities of each patient. These capabilities are usually assessed by a clinician and reported via the Medicare K-level designation of mobility. However, it is not clear how the K-level designation objectively relates to the use of prostheses outside of a clinical environment. Here, we quantify participant activity using mobile phones and relate activity measured in the real world to the assigned K-levels. We observe a correlation between K-level and the proportion of moderate-to-high activity over the course of a week. This relationship suggests that accelerometry-based technologies such as mobile phones can be used to evaluate real-world activity for mobility assessment. Quantifying everyday activity promises to improve assessment of real-world prosthesis use, leading to better matching of prostheses to individuals and enabling better evaluation of future prosthetic devices.
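
    A minimal sketch of the reported analysis, using synthetic data (the activity threshold, the simulated activity distribution, and the choice of a Spearman rank test are assumptions, not details from the paper):

```python
# Correlating K-level with the weekly fraction of moderate-to-high activity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
k_levels = rng.integers(1, 5, size=20)  # K1..K4 for 20 hypothetical participants

def high_activity_fraction(k):
    # One week of per-minute activity magnitudes; higher K-levels are
    # simulated as more active purely for illustration.
    counts = rng.gamma(shape=k, scale=1.0, size=7 * 24 * 60)
    return np.mean(counts > 2.0)  # assumed "moderate-to-high" threshold

fractions = np.array([high_activity_fraction(k) for k in k_levels])
rho, p = spearmanr(k_levels, fractions)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")
```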

    Towards Automated Smart Mobile Crowdsensing for Tinnitus Research

    Tinnitus is a disorder that is not entirely understood, and many of its correlations are still unknown. Meanwhile, smartphones have become ubiquitous. Their modern versions provide high computational capabilities, reasonable battery size, and a range of embedded high-quality sensors, combined with an accepted user interface and an application ecosystem. For tinnitus, as for many other health problems, a number of apps try to help patients, therapists, and researchers gain insights into personal characteristics as well as into scientific correlations. In this paper, we present a first approach to an app in this context, called TinnituSense, which automatically senses related characteristics and enables correlations with the patient's current condition through combined participatory sensing, e.g., a questionnaire. For tinnitus, there is a strong hypothesis that weather conditions have some influence. Our proof-of-concept implementation records weather-related sensor data and correlates them with the standard Tinnitus Handicap Inventory (THI) questionnaire. TinnituSense thus enables therapists and researchers to collect evidence for open questions, as this is the first opportunity to correlate weather with patient condition on a larger scale. Our concept is limited neither to tinnitus nor to built-in sensors; in the tinnitus domain, for example, we are experimenting with mobile EEG sensors. TinnituSense faces several challenges, of which we have already solved the principal ones: architecture, sensor management, and energy consumption.
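
    The weather-to-questionnaire correlation step could look like the following sketch (entirely synthetic data; the variable names and the use of barometric pressure as the weather signal are assumptions for illustration):

```python
# Pair daily barometric-pressure readings with THI scores and correlate them.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
pressure_hpa = rng.normal(1013, 8, size=60)  # 60 days of simulated readings
# THI is scored 0..100; a weak negative pressure effect is injected here
# purely so the toy example has something to find.
thi_score = np.clip(50 - 0.3 * (pressure_hpa - 1013)
                    + rng.normal(0, 5, size=60), 0, 100)

r, p = pearsonr(pressure_hpa, thi_score)
print(f"pressure vs. THI: r={r:.2f}, p={p:.3g}")
```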

    Fall Classification by Machine Learning Using Mobile Phones

    Fall prevention is a critical component of health care; falls are a common source of injury in the elderly and are associated with significant levels of mortality and morbidity. Automatically detecting falls allows rapid response to potential emergencies; in addition, knowing the cause or manner of a fall can benefit prevention studies or enable a more tailored emergency response. The purpose of this study is to demonstrate techniques that not only reliably detect a fall but also automatically classify its type. We asked 15 subjects to simulate four different types of falls (left and right lateral, forward trips, and backward slips) while wearing mobile phones and previously validated, dedicated accelerometers. Nine subjects also wore the devices for ten days to provide data for comparison with the simulated falls. We applied five machine learning classifiers to a large time-series feature set to detect falls. Support vector machines and regularized logistic regression were able to identify a fall with 98% accuracy and classify the type of fall with 99% accuracy. This work demonstrates how current machine learning approaches can simplify data collection in fall-prevention research as well as improve rapid response to potential injuries due to falls.
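
    A hedged sketch of this pipeline on synthetic windows (the features and data are invented; only the choice of SVM and regularized logistic regression follows the abstract):

```python
# Time-series features from accelerometer windows, then fall-type
# classification with the two classifiers highlighted in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# 4 fall types: left lateral, right lateral, forward trip, backward slip.
y = rng.integers(0, 4, size=120)
windows = rng.normal(size=(120, 256, 3)) + y[:, None, None]  # toy class signal

def features(w):
    mag = np.linalg.norm(w, axis=1)  # acceleration magnitude per sample
    return [mag.mean(), mag.std(), mag.max(),
            np.percentile(mag, 90), np.abs(np.diff(mag)).mean()]

X = np.array([features(w) for w in windows])
# LogisticRegression is L2-regularized by default in scikit-learn.
for clf in (SVC(), LogisticRegression(max_iter=1000)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"accuracy: {acc:.2f}")
```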

    Wearable System for Daily Activity Recognition Using Inertial and Pressure Sensors of a Smart Band and Smart Shoes

    Human Activity Recognition (HAR) is a challenging task in the field of human-related signal processing. Owing to the development of wearable sensing technology, an emerging research approach in HAR is to identify user-performed tasks using data collected from wearable sensors. In this paper, we propose a novel system for monitoring and recognizing daily living activities using an off-the-shelf smart band and two smart shoes. The system aims to provide a useful tool for addressing body-part placement, fusion of multimodal sensors, and feature selection for a specific set of activities. The system collects inertial and plantar-pressure data at the wrist and foot, then extracts and selects the features most important for recognition. We construct and compare two predictive models for classifying activities from the reduced feature set. A comparison of classification for each wearable device and for a fusion scheme identifies the best body part for activity recognition, either the wrist or the feet, and also demonstrates the effective HAR performance of the proposed system.
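
    The placement-versus-fusion comparison could be sketched as follows (synthetic feature matrices; the selector, model, and feature counts are assumptions rather than the paper's choices):

```python
# Compare wrist-only, feet-only, and fused feature sets, each with a
# simple univariate feature-selection step before classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
y = rng.integers(0, 5, size=200)                       # 5 daily activities
wrist = rng.normal(size=(200, 30)) + 0.3 * y[:, None]  # smart-band features
feet = rng.normal(size=(200, 40)) + 0.5 * y[:, None]   # smart-shoe features

candidates = {"wrist": wrist, "feet": feet,
              "fusion": np.hstack([wrist, feet])}
for name, X in candidates.items():
    model = make_pipeline(SelectKBest(f_classif, k=20),
                          RandomForestClassifier())
    print(name, f"{cross_val_score(model, X, y, cv=5).mean():.2f}")
```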

    Online Human Activity Recognition using Low-Power Wearable Devices

    Human activity recognition (HAR) has attracted significant research interest due to its applications in health monitoring and patient rehabilitation. Recent research on HAR focuses on smartphones due to their widespread use. However, this leads to inconvenient use, a limited choice of sensors, and inefficient use of resources, since smartphones are not designed for HAR. This paper presents the first HAR framework that can perform both online training and inference. The proposed framework starts with a novel technique that generates features using the fast Fourier and discrete wavelet transforms of a textile-based stretch sensor and an accelerometer. Using these features, we design an artificial neural network classifier that is trained online using the policy gradient algorithm. Experiments on a low-power IoT device (TI-CC2650 MCU) with nine users show 97.7% accuracy in identifying six activities and their transitions with less than 12.5 mW power consumption. Comment: In the proceedings of ICCAD 2018. The datasets are available at https://github.com/gmbhat/human-activity-recognitio
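
    A sketch of the feature-generation step alone (synthetic one-channel windows; the wavelet, decomposition level, and summary statistics are assumptions, and the online policy-gradient training is omitted):

```python
# FFT and discrete-wavelet features from a stretch-sensor channel and one
# accelerometer axis, concatenated into a single feature vector.
import numpy as np
import pywt  # PyWavelets

def fft_dwt_features(signal, wavelet="haar", level=3):
    fft_mag = np.abs(np.fft.rfft(signal))
    fft_feats = [fft_mag.max(), fft_mag.argmax(), fft_mag.mean()]
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    dwt_feats = [np.abs(c).mean() for c in coeffs]  # per-band energy proxy
    return np.array(fft_feats + dwt_feats)

rng = np.random.default_rng(5)
stretch = rng.normal(size=128)  # textile stretch-sensor window
accel_x = rng.normal(size=128)  # one accelerometer axis
feature_vector = np.concatenate([fft_dwt_features(stretch),
                                 fft_dwt_features(accel_x)])
print(feature_vector.shape)
```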

    CAVIAR: Context-driven Active and Incremental Activity Recognition

    Activity recognition on mobile-device sensor data has been an active research area in mobile and pervasive computing for several years. While the majority of proposed techniques are based on supervised learning, semi-supervised approaches are being considered to reduce the size of the training set required to initialize the model. These approaches usually apply self-training or active learning to incrementally refine the model, but their effectiveness seems limited to a restricted set of physical activities. We claim that the context surrounding the user (e.g., time, location, proximity to transportation routes), combined with common knowledge about the relationship between context and human activities, can significantly increase the set of recognized activities, including those that are difficult to discriminate using inertial sensors alone and those that are highly context-dependent. In this paper, we propose CAVIAR, a novel hybrid semi-supervised and knowledge-based system for real-time activity recognition. Our method applies semantic reasoning to context data to refine the predictions of an incremental classifier. The recognition model is continuously updated using active learning. Results on a real dataset obtained from 26 subjects show the effectiveness of our approach in increasing the recognition rate, extending the number of recognizable activities and, most importantly, reducing the number of queries triggered by active learning. To evaluate the impact of context reasoning, we also compare CAVIAR with a purely statistical version that treats features computed on context data as part of the machine learning process.
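
    The refine-then-query loop might look like this sketch (the consistency rules, threshold, and activity set are invented for illustration; the paper's semantic reasoning is far richer than these toy rules):

```python
# Context reasoning rescales classifier probabilities; an active-learning
# query is triggered only when confidence remains low after refinement.
import numpy as np

ACTIVITIES = ["walking", "cycling", "riding_bus", "jogging"]

def context_consistency(activity, context):
    # Toy stand-in for semantic reasoning: riding a bus is implausible far
    # from a transportation route; jogging before 5 a.m. is unlikely.
    if activity == "riding_bus" and not context["near_bus_route"]:
        return 0.1
    if activity == "jogging" and context["hour"] < 5:
        return 0.3
    return 1.0

def refine(probs, context, query_threshold=0.6):
    weights = np.array([context_consistency(a, context) for a in ACTIVITIES])
    refined = probs * weights
    refined /= refined.sum()
    query_user = refined.max() < query_threshold  # active-learning trigger
    return refined, query_user

probs = np.array([0.30, 0.10, 0.45, 0.15])  # incremental classifier output
ctx = {"near_bus_route": False, "hour": 14}
refined, ask = refine(probs, ctx)
print(ACTIVITIES[refined.argmax()], "query user:", ask)
```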