10 research outputs found

    Feature Selection Analysis of Chewing Activity Based on Contactless Food Intake Detection

    This paper presents feature selection methods for chewing-activity detection; chewing detection is typically used in food-intake monitoring applications. The work analyzes how optimal feature selection improves the accuracy of chewing detection. The raw chewing data are collected with a proximity sensor and pre-processed using normalization and a bandpass filter. A search for the bandpass-filter parameters, such as the lower cut-off frequency (Fc1) and the steepness, that yield the best accuracy is also included: Fc1 was set to 0.5 Hz, 1.0 Hz, and 1.2 Hz, while the steepness was varied from 0.75 to 0.9 in steps of 0.05. Using a [1 Hz, 5 Hz] bandpass filter with a steepness of 0.8, the system's accuracy improves by 1.2% over the previous work, which used [0.5 Hz, 5 Hz] with a steepness of 0.85. The accuracy using all 40 extracted features is 98.5%. Two feature selection approaches, based on feature domain and on feature ranking, are analyzed. Selecting by feature domain gives 95.8% accuracy with 10 time-domain features, while combining time-domain and frequency-domain features gives 98% accuracy with 13 features. Three feature-ranking methods are used in this paper: minimum redundancy maximum relevance (MRMR), the t-Test, and the receiver operating characteristic (ROC). With 10 features, the ranking methods achieve 98.2% (MRMR), 85.8% (t-Test), and 98% (ROC) accuracy; with 20 features, they achieve 98.3%, 97.9%, and 98.3%, respectively. It can be concluded that feature selection reduces the number of features while preserving good accuracy.
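    The pre-processing and ranking steps above translate directly into code. Below is a minimal Python sketch, assuming z-score normalization and a Butterworth bandpass whose filter order stands in for the steepness parameter the paper tunes; scikit-learn scores serve as stand-ins for the paper's ranking methods (mutual information for MRMR-style relevance, the ANOVA F-test for the t-Test). The sampling rate FS and all names are illustrative, not the paper's.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt
        from sklearn.feature_selection import mutual_info_classif, f_classif

        FS = 50.0  # assumed proximity-sensor sampling rate (Hz); not given in the abstract

        def preprocess(raw, fs=FS, fc1=1.0, fc2=5.0, order=4):
            # Normalize, then keep only the [fc1, fc2] Hz chewing band.
            x = (raw - raw.mean()) / raw.std()
            sos = butter(order, [fc1, fc2], btype="bandpass", fs=fs, output="sos")
            return sosfiltfilt(sos, x)  # zero-phase filtering

        def rank_features(X, y, k=10):
            # Rank the extracted features and return the top-k indices per criterion.
            mi = mutual_info_classif(X, y, random_state=0)   # MRMR-style relevance
            f_scores, _ = f_classif(X, y)                    # t-Test-style separation
            return np.argsort(mi)[::-1][:k], np.argsort(f_scores)[::-1][:k]

    Keeping only the top-ranked columns of X before training the classifier is what lets 10-20 features approach the 98.5% accuracy of the full 40-feature set.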

    Hardwired… to Self-Destruct? Using Technology to Improve Behavior Change Science

    Many societal problems are related to human behavior. To change behavior, it is crucial to be aware of Lewin's formula, B = f(P, E), which states that behavior is a function of the person and their environment. Technology provides opportunities with regard to (the measurement of) all three elements of this formula. This raises the question of how existing technologies can be used to improve behavior change science. This article provides two answers: application and innovation of theory. Technology can be used to apply behavior change methods in practice, for example by providing computer-tailored feedback based on a social-cognitive profile. Technology can also be used to innovate theory, which is less common but yields more progress; for example, it makes it possible to triangulate ecological momentary assessment (EMA) with smartphones' native sensor data to track behavior and environmental factors. If the opportunities provided by technology are combined with a rationale for which data to collect and how, these data can be used to answer theoretically driven questions. Answering such questions yields better theories for both explaining and changing behavior, which is highly relevant for more effective and more efficient solutions to all societal problems related to human behavior.
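    As a concrete illustration of the EMA-sensor triangulation idea, the Python sketch below time-aligns hypothetical self-report prompts with a passively sensed stream; every column name and value here is invented for illustration, not taken from the article.

        import pandas as pd

        # Hypothetical EMA prompts (self-reported state) and a per-minute sensor stream.
        ema = pd.DataFrame({
            "t": pd.to_datetime(["2024-01-01 09:00", "2024-01-01 13:00"]),
            "reported_state": [2, 5],
        }).sort_values("t")
        sensor = pd.DataFrame({
            "t": pd.date_range("2024-01-01 08:00", periods=360, freq="min"),
            "steps_per_min": 0,  # placeholder values
        })

        # Attach the nearest preceding sensor reading (within 5 minutes) to each
        # prompt, so self-reports can be modeled against the sensed context.
        linked = pd.merge_asof(ema, sensor, on="t",
                               tolerance=pd.Timedelta("5min"), direction="backward")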

    Detecting Eating Episodes with an Ear-mounted Sensor

    In this paper, we propose Auracle, a wearable earpiece that can automatically recognize eating behavior. More specifically, in free-living conditions, we can recognize when and for how long a person is eating. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the bone and tissue of the head. This audio data is then processed by a custom analog/digital circuit board. To ensure reliable (yet comfortable) contact between microphone and skin, all hardware components are incorporated into a 3D-printed behind-the-head framework. We collected field data with 14 participants for 32 hours in free-living conditions and additional eating data with 10 participants for 2 hours in a laboratory setting. We achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection. Moreover, Auracle successfully detected 20-24 eating episodes (depending on the metric) out of 26 in free-living conditions. We demonstrate that our custom device can sense, process, and classify audio data in real time. Additionally, we estimate Auracle can last 28.1 hours on a 110 mAh battery while communicating its observations of eating behavior to a smartphone over Bluetooth.
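    Two quick checks follow from the abstract. The battery claim implies an average draw of roughly 110 mAh / 28.1 h ≈ 3.9 mA. And the frame-level detection step can be sketched in Python as below; the window length, sampling rate, and choice of features (RMS energy and spectral centroid) are illustrative assumptions, not Auracle's published pipeline.

        import numpy as np

        def frame_features(audio, fs=8000, win_s=3.0):
            # Split the contact-mic signal into fixed windows and compute two
            # cues that chewing sounds modulate: RMS energy and spectral centroid.
            n = int(fs * win_s)
            frames = audio[: len(audio) // n * n].reshape(-1, n)
            rms = np.sqrt((frames ** 2).mean(axis=1))
            spec = np.abs(np.fft.rfft(frames, axis=1))
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            centroid = (spec * freqs).sum(axis=1) / np.maximum(spec.sum(axis=1), 1e-12)
            return np.column_stack([rms, centroid])  # one feature row per window

    A classifier over such per-window features, followed by smoothing of the window-level decisions, is the usual route from frame predictions to eating-episode boundaries.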

    Detection of Health-Related Behaviours Using Head-Mounted Devices

    The detection of health-related behaviors is the basis of many mobile-sensing applications for healthcare and can trigger other inquiries or interventions. Wearable sensors have been widely used for mobile sensing due to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring. In this dissertation, we develop a generalizable approach to sensing eating-related behavior. First, we developed Auracle, a wearable earpiece that can automatically detect eating episodes. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the head. This audio data is then processed by a custom circuit board. We collected data with 14 participants for 32 hours in free-living conditions and achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection with 1-minute resolution. Second, we adapted Auracle for measuring children's eating behavior and improved the accuracy and robustness of the eating-activity detection algorithms. We used this improved prototype in a laboratory study with a sample of 10 children across 60 total sessions and collected 22.3 hours of data in both meal and snack scenarios. Overall, we achieved 95.5% accuracy and a 95.7% F1 score for eating detection with 1-minute resolution. Third, we developed a computer-vision approach for eating detection in free-living scenarios. Using a miniature head-mounted camera, we collected data with 10 participants for about 55 hours. The camera was fixed under the brim of a cap, pointing at the mouth of the wearer and continuously recording video (but not audio) throughout their normal daily activity. We evaluated eating-detection performance with four different Convolutional Neural Network (CNN) models; the best model achieved 90.9% accuracy and a 78.7% F1 score with 1-minute resolution. Finally, we validated the feasibility of deploying the 3D CNN model on wearable or mobile platforms given computation, memory, and power constraints.
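    For the third contribution, a hedged sketch of what a small 3D CNN over short video clips looks like in Python (PyTorch); the layer sizes, 16-frame input, and 112x112 resolution are assumptions for illustration, not the dissertation's architecture.

        import torch
        import torch.nn as nn

        class Tiny3DCNN(nn.Module):
            # Binary eating/non-eating classifier over (batch, 3, frames, H, W) clips.
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(3, 16, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool3d(2),               # halves time and space
                    nn.Conv3d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),       # global pooling keeps parameters low
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        logits = Tiny3DCNN()(torch.randn(1, 3, 16, 112, 112))  # example 16-frame clip

    The global-average-pooling design is one way to keep computation and memory within the on-device constraints the dissertation evaluates.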