
    Detecting Eating Episodes with an Ear-mounted Sensor

    In this paper, we propose Auracle, a wearable earpiece that can automatically recognize eating behavior. More specifically, in free-living conditions, we can recognize when and for how long a person is eating. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the bone and tissue of the head. This audio data is then processed by a custom analog/digital circuit board. To ensure reliable (yet comfortable) contact between microphone and skin, all hardware components are incorporated into a 3D-printed behind-the-head framework. We collected field data with 14 participants for 32 hours in free-living conditions and additional eating data with 10 participants for 2 hours in a laboratory setting. We achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection. Moreover, Auracle successfully detected 20-24 eating episodes (depending on the metric) out of 26 in free-living conditions. We demonstrate that our custom device can sense, process, and classify audio data in real time. Additionally, we estimate that Auracle can last 28.1 hours on a 110 mAh battery while communicating its observations of eating behavior to a smartphone over Bluetooth.
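The frame-and-classify approach described above can be illustrated with a deliberately simplified sketch. This is not Auracle's actual classifier; the frame length, energy threshold, and synthetic audio below are all hypothetical, standing in for the real contact-microphone pipeline.

```python
import numpy as np

def frame_energy(signal, frame_len):
    """Split a 1-D audio signal into non-overlapping frames; return per-frame RMS energy."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def detect_chewing(signal, frame_len=1000, threshold=0.1):
    """Label a frame as chewing (True) when its RMS energy exceeds a fixed threshold."""
    return frame_energy(signal, frame_len) > threshold

# Synthetic contact-mic capture: 2 s of quiet noise, then 2 s of louder chewing-like
# bursts, sampled at 1 kHz, so each 1000-sample frame covers one second.
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(2000)
chewing = 0.5 * rng.standard_normal(2000)
labels = detect_chewing(np.concatenate([quiet, chewing]))  # one label per second
```

A real system would add band-limiting, spectral features, and a trained classifier, but the frame-then-threshold skeleton is the common starting point.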

    Feature Selection Analysis of Chewing Activity Based on Contactless Food Intake Detection

    This paper presents feature selection methods for chewing-activity detection. Chewing detection is typically used in food-intake monitoring applications. This work aims to analyze the effect of optimum feature selection on the accuracy of chewing detection. The raw chewing data are collected using a proximity sensor. Pre-processing is applied to the data using normalization and bandpass filtering. A search for a suitable combination of bandpass-filter parameters, such as the lower cut-off frequency (Fc1) and steepness, targeting the best accuracy was also included. The Fc1 values were 0.5 Hz, 1.0 Hz, and 1.2 Hz, while the steepness varied from 0.75 to 0.9 with an interval of 0.05. Using the bandpass filter with cut-offs of [1 Hz, 5 Hz] and a steepness of 0.8, the system's accuracy improves by 1.2% compared to the previous work, which used [0.5 Hz, 5 Hz] with a steepness of 0.85. The accuracy using all 40 extracted features is 98.5%. Two feature selection approaches, based on feature domain and feature ranking, are analyzed. The feature-domain approach gives an accuracy of 95.8% using 10 time-domain features, while a combination of time-domain and frequency-domain features gives an accuracy of 98% with 13 features. Three feature ranking methods were used in this paper: minimum redundancy maximum relevance (MRMR), t-test, and receiver operating characteristic (ROC). With 10 features, the feature ranking methods achieve accuracies of 98.2%, 85.8%, and 98% for MRMR, t-test, and ROC, respectively, while with 20 features the accuracies are 98.3%, 97.9%, and 98.3%. It can be concluded that feature selection helps to reduce the number of features while maintaining good accuracy.
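The band-pass pre-processing step above can be sketched as follows. The `chewing_bandpass` helper, sampling rate, and synthetic proximity signal are illustrative assumptions; a standard Butterworth design stands in for the paper's exact filter, whose "steepness" parameter suggests a different design tool.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def chewing_bandpass(x, fs, low=1.0, high=5.0, order=4):
    """Band-pass a proximity-sensor signal to keep a rough chewing band (1-5 Hz)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)  # zero-phase filtering, so peaks stay aligned

fs = 100  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
chew = np.sin(2 * np.pi * 2.0 * t)          # ~2 Hz chewing component (in band)
drift = 0.8 * np.sin(2 * np.pi * 0.1 * t)   # slow sensor drift (below band)
hum = 0.5 * np.sin(2 * np.pi * 20.0 * t)    # high-frequency interference (above band)
filtered = chewing_bandpass(chew + drift + hum, fs)
```

After filtering, the 2 Hz chewing component dominates the output while the drift and interference are strongly attenuated, which is what makes the downstream features more discriminative.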

    Methods for monitoring the human circadian rhythm in free-living

    Our internal clock, the circadian clock, determines at which time we have our best cognitive abilities, are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim-light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information.
    We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart-eyeglasses applications could support users in synchronising their circadian clock to external clocks, thus living a healthier lifestyle.
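As a toy illustration of inferring sleep opportunity from smartphone context (not the thesis's expert-knowledge fusion method), one can simply look for the longest idle gap between phone interactions. The timestamps and the `longest_idle_gap` helper below are hypothetical.

```python
import numpy as np

def longest_idle_gap(interaction_times):
    """Return (start, end) of the longest gap between consecutive smartphone
    interactions - a crude proxy for the nightly sleep opportunity."""
    gaps = np.diff(interaction_times)
    i = int(np.argmax(gaps))
    return interaction_times[i], interaction_times[i + 1]

# Hypothetical interaction timestamps in hours since midnight of day 1
# (evening phone use, an idle night, then morning use on day 2).
times = np.array([19.5, 20.2, 21.0, 22.8, 23.4, 31.1, 31.9, 33.0])
sleep_start, sleep_end = longest_idle_gap(times)  # 23.4 -> 31.1, a 7.7 h window
```

The reported onset/wake errors of tens of minutes show why the thesis fuses several classifiers with expert rules rather than relying on a single heuristic like this.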

    Embedding a Grid of Load Cells into a Dining Table for Automatic Monitoring and Detection of Eating Events

    This dissertation describes a “smart dining table” that can detect and measure consumption events. This work is motivated by the growing problem of obesity, which is a global problem and an epidemic in the United States and Europe. Chapter 1 gives a background on the economic burden of obesity and its comorbidities. For the assessment of obesity, we briefly describe the classic dietary assessment tools and discuss their drawbacks and the necessity of using more objective, accurate, low-cost, and in-situ automatic dietary assessment tools. We then outline various technologies used for automatic dietary assessment, such as acoustic-, motion-, or image-based systems. This is followed by a literature review of prior works related to the detection of weights and locations of objects sitting on a table surface. Finally, we state the novelty of this work. In chapter 2, we describe the construction of a table that uses an embedded grid of load cells to sense the weights and positions of objects. The main challenge is aligning the tops of adjacent load cells to within a tolerance of a few micrometers, which we accomplish using a novel inversion process during construction. Experimental tests found that object weights distributed across 4 to 16 load cells could be measured with 99.97±0.1% accuracy. Testing the surface for flatness at 58 points showed that we achieved approximately 4.2±0.5 µm deviation among adjacent 2x2 grids of tiles. Through empirical measurements we determined that the table has a signal-to-noise ratio of 40.2 when detecting the smallest expected intake amount (0.5 g) from a normal meal (approximate total weight 560 g), indicating that a tiny amount of intake can be detected well above the noise level of the sensors. In chapter 3, we describe a pilot experiment that tests the capability of the table to monitor eating. Eleven human subjects were video recorded for ground truth while eating a meal on the table using a plate, bowl, and cup.
    To detect consumption events, we describe an algorithm that analyzes the grid of weight measurements in the format of an image. The algorithm segments the image into multiple objects, tracks them over time, and uses a set of rules to detect and measure individual bites of food and drinks of liquid. On average, each meal consisted of 62 consumption events. Event detection accuracy was very high, with an F1-score per subject of 0.91 to 1.0, and an F1-score per container of 0.97 for the plate and bowl, and 0.99 for the cup. The experiment demonstrates that our device is capable of detecting and measuring individual consumption events during a meal. Chapter 4 compares the capability of our new tool to monitor eating against previous works that have also monitored table surfaces. We completed a literature search and identified the three state-of-the-art methods to be used for comparison. The main limitation of all previous methods is that they used only one load cell for monitoring, so only the total surface weight can be analyzed. To simulate their operation, the weights of our grid of load cells were summed to reduce the 2D data to 1D. Data were prepared according to the requirements of each method. Four metrics were used to evaluate the comparison: precision, recall, accuracy, and F1-score. Our method scored the highest in recall, accuracy, and F1-score: compared to all other methods, it scored 13-21% higher for recall, 8-28% higher for accuracy, and 10-18% higher for F1-score. For precision, our method scored 97%, just 1% lower than the highest precision of 98%. In summary, this dissertation describes novel hardware, a pilot experiment, and a comparison against current state-of-the-art tools. We also believe our methods could be used to build a similar surface for other applications besides monitoring consumption.
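The Chapter 4 comparison, collapsing the 2-D load-cell grid into a single total weight and looking for bite-sized drops, can be sketched as below. The `detect_bites` rule, the grid shape, and the readings are simplified assumptions, not the dissertation's segmentation-and-tracking algorithm.

```python
import numpy as np

def detect_bites(total_weight, min_drop=0.5):
    """Detect consumption events as downward steps in the total surface weight;
    returns the size of each drop in grams (0.5 g = smallest expected intake)."""
    steps = np.diff(total_weight)
    return [float(-s) for s in steps if s <= -min_drop]

# Hypothetical per-second readings from a 4x4 load-cell grid over 6 time steps:
# a 560 g meal rests over one cell while three bites are removed.
grid = np.zeros((4, 4, 6))
grid[1, 1, :] = [560, 560, 548, 548, 530, 522]
total = grid.sum(axis=(0, 1))  # collapse the 2-D grid to 1-D, as in the comparison
bites = detect_bites(total)    # drops of 12 g, 18 g, and 8 g
```

Working on the 1-D total is exactly what limits the single-load-cell baselines: once the grid is summed, per-container information (plate vs. bowl vs. cup) is no longer recoverable.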

    DETECTION OF HEALTH-RELATED BEHAVIOURS USING HEAD-MOUNTED DEVICES

    The detection of health-related behaviors is the basis of many mobile-sensing applications for healthcare and can trigger other inquiries or interventions. Wearable sensors have been widely used for mobile sensing due to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring. In this dissertation, we develop a generalizable approach to sensing eating-related behavior. First, we developed Auracle, a wearable earpiece that can automatically detect eating episodes. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the head. This audio data is then processed by a custom circuit board. We collected data with 14 participants for 32 hours in free-living conditions and achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection with 1-minute resolution. Second, we adapted Auracle for measuring children’s eating behavior, and improved the accuracy and robustness of the eating-activity detection algorithms. We used this improved prototype in a laboratory study with a sample of 10 children for 60 total sessions and collected 22.3 hours of data in both meal and snack scenarios. Overall, we achieved 95.5% accuracy and a 95.7% F1 score for eating detection with 1-minute resolution. Third, we developed a computer-vision approach for eating detection in free-living scenarios. Using a miniature head-mounted camera, we collected data with 10 participants for about 55 hours. The camera was fixed under the brim of a cap, pointing at the mouth of the wearer and continuously recording video (but not audio) throughout their normal daily activity. We evaluated performance for eating detection using four different Convolutional Neural Network (CNN) models. The best model achieved 90.9% accuracy and a 78.7% F1 score for eating detection with 1-minute resolution.
    Finally, we validated the feasibility of deploying the 3D CNN model in wearable or mobile platforms when considering computation, memory, and power constraints.
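Reporting results "with 1-minute resolution" implies aggregating frame-level predictions into minute-level labels. A plausible aggregation (assumed here, not stated in the abstract) is a per-minute majority vote:

```python
import numpy as np

def minute_labels(frame_preds, frames_per_min=60):
    """Aggregate per-frame eating predictions (0/1, one per second) into
    1-minute labels by majority vote."""
    n = len(frame_preds) // frames_per_min
    mins = np.asarray(frame_preds[:n * frames_per_min]).reshape(n, frames_per_min)
    return (mins.mean(axis=1) >= 0.5).astype(int)

# Two synthetic minutes: a few scattered false positives, then a mostly-eating minute.
preds = [0] * 55 + [1] * 5 + [1] * 40 + [0] * 20
labels = minute_labels(preds)  # first minute rejected, second accepted
```

Voting over a minute suppresses isolated classifier errors, which is one reason minute-level metrics can exceed frame-level ones.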

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly widely available, typically in the form of smart watches and other connected devices. Consequently, devices to assist in measurements such as electroencephalography (EEG), electrocardiogram (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more available, and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these devices and technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, and diagnostic and therapeutic techniques. However, a pertinent issue is that recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movements and physical conditions. In order to obtain accurate and meaningful indicators, the signal has to be processed and conditioned such that the measurements are accurate and free from noise and disturbances. In this context, many researchers have utilized recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field of study includes research on filtering, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine learning-based methods.
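A minimal example of the kind of quality checking mentioned above: flag analysis windows whose peak-to-peak amplitude is physiologically implausible before extracting features. The `quality_mask` helper, window size, amplitude bounds, and synthetic waveform are all illustrative assumptions.

```python
import numpy as np

def quality_mask(signal, win, lo, hi):
    """Flag each analysis window as usable (True) when its peak-to-peak amplitude
    lies inside a plausible physiological range - a crude artefact check."""
    n = len(signal) // win
    windows = signal[:n * win].reshape(n, win)
    ptp = windows.max(axis=1) - windows.min(axis=1)
    return (ptp >= lo) & (ptp <= hi)

# Synthetic PPG-like waveform at 100 Hz with a motion artefact in the second second.
t = np.arange(0, 4, 0.01)
sig = np.sin(2 * np.pi * 1.2 * t)
sig[150:180] += 8.0  # simulated motion spike
mask = quality_mask(sig, win=100, lo=0.5, hi=4.0)  # one usability decision per second
```

Discarding or repairing such windows before feature extraction is what keeps downstream indicators (heart rate, respiration rate, etc.) trustworthy under movement.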