11 research outputs found

    Embedding a Grid of Load Cells into a Dining Table for Automatic Monitoring and Detection of Eating Events

    This dissertation describes a "smart dining table" that can detect and measure consumption events. This work is motivated by the growing problem of obesity, which is a global problem and an epidemic in the United States and Europe. Chapter 1 gives a background on the economic burden of obesity and its comorbidities. For the assessment of obesity, we briefly describe the classic dietary assessment tools, discuss their drawbacks, and argue the necessity of more objective, accurate, low-cost, and in-situ automatic dietary assessment tools. We briefly explain the various technologies used for automatic dietary assessment, such as acoustic-, motion-, or image-based systems. This is followed by a literature review of prior work related to detecting the weights and locations of objects sitting on a table surface. Finally, we state the novelty of this work.

    In Chapter 2, we describe the construction of a table that uses an embedded grid of load cells to sense the weights and positions of objects. The main challenge is aligning the tops of adjacent load cells to within a tolerance of a few micrometers, which we accomplish using a novel inversion process during construction. Experimental tests found that object weights distributed across 4 to 16 load cells could be measured with 99.97 ± 0.1% accuracy. Testing the surface for flatness at 58 points showed approximately 4.2 ± 0.5 µm deviation among adjacent 2x2 grids of tiles. Through empirical measurements we determined that the table has a signal-to-noise ratio of 40.2 when detecting the smallest expected intake amount (0.5 g) from a normal meal (approximate total weight 560 g), indicating that a tiny amount of intake can be detected well above the noise level of the sensors.

    In Chapter 3, we describe a pilot experiment that tests the capability of the table to monitor eating. Eleven human subjects were video recorded for ground truth while eating a meal on the table using a plate, bowl, and cup. To detect consumption events, we describe an algorithm that analyzes the grid of weight measurements in the format of an image. The algorithm segments the image into multiple objects, tracks them over time, and uses a set of rules to detect and measure individual bites of food and drinks of liquid. On average, each meal consisted of 62 consumption events. Event detection accuracy was very high, with an F1-score per subject of 0.91 to 1.0, and an F1-score per container of 0.97 for the plate and bowl and 0.99 for the cup. The experiment demonstrates that our device is capable of detecting and measuring individual consumption events during a meal.

    Chapter 4 compares the capability of our new tool to monitor eating against previous works that have also monitored table surfaces. We completed a literature search and identified the three state-of-the-art methods to use for comparison. The main limitation of all previous methods is that they used only one load cell for monitoring, so only the total surface weight can be analyzed. To simulate their operation, the weights of our grid of load cells were summed to reduce the 2D data to 1D. Data were prepared according to the requirements of each method. Four metrics were used for the comparison: precision, recall, accuracy, and F1-score. Our method scored the highest in recall, accuracy, and F1-score: compared to all other methods, our method scored 13-21% higher for recall, 8-28% higher for accuracy, and 10-18% higher for F1-score. For precision, our method scored 97%, just 1% lower than the highest precision, which was 98%.

    In summary, this dissertation describes novel hardware, a pilot experiment, and a comparison against current state-of-the-art tools. We also believe our methods could be used to build a similar surface for other applications besides monitoring consumption.
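    To make the grid-as-image idea concrete, here is a minimal sketch of segmenting one frame of load-cell readings into objects; it is not the dissertation's implementation, and the grid size, noise threshold, and use of scipy's connected-component labeling are illustrative assumptions.

```python
# Minimal sketch: treating one frame of load-cell weights as an image.
# Assumptions (not from the dissertation): a 2D numpy array of per-cell
# weights in grams, a fixed noise threshold, and scipy connected-component
# labeling as the segmentation step.
import numpy as np
from scipy import ndimage

NOISE_THRESHOLD_G = 5.0  # hypothetical: ignore cells reading below this

def segment_frame(frame: np.ndarray):
    """Return (labels, object_weights) for one grid of weight readings."""
    mask = frame > NOISE_THRESHOLD_G          # suppress sensor noise
    labels, n = ndimage.label(mask)           # group adjacent loaded cells
    weights = ndimage.sum(frame, labels, index=range(1, n + 1))
    return labels, dict(enumerate(weights, start=1))

# A bite could then be flagged when a tracked object's summed weight
# drops by more than the sensor noise between consecutive frames.
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 120.0   # e.g., a plate spread over 4 cells
labels, weights = segment_frame(frame)
print(weights)            # {1: 480.0}
```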

    Advancement in Dietary Assessment and Self-Monitoring Using Technology

    Although methods to assess or self-monitor intake may be considered similar, the intended function of each is quite distinct. For the assessment of dietary intake, methods aim to measure food and nutrient intake and/or to derive dietary patterns for determining diet-disease relationships, population surveillance, or the effectiveness of interventions. In comparison, dietary self-monitoring primarily aims to create awareness of and reinforce individual eating behaviours, in addition to tracking foods consumed. Advancements in the capabilities of technologies, such as smartphones and wearable devices, have enhanced the collection, analysis, and interpretation of dietary intake data in both contexts. This Special Issue invites submissions on the use of novel technology-based approaches for the assessment of food and/or nutrient intake and for self-monitoring eating behaviours. Submissions may document any part of the development and evaluation of the technology-based approaches. Examples may include: web adaptation of existing dietary assessment or self-monitoring tools (e.g., food frequency questionnaires, screeners); image-based or image-assisted methods; mobile/smartphone applications for capturing intake for assessment or self-monitoring; wearable cameras to record dietary intake or eating behaviours; body sensors to measure eating behaviours and/or dietary intake; and the use of technology-based methods to complement aspects of traditional dietary assessment or self-monitoring, such as portion size estimation.

    Raw Versus Linear Acceleration in the Recognition of Wrist Motions Related to Eating During Everyday Life

    This thesis investigates the difference between raw and linear acceleration of wrist motion for detecting eating episodes. In previous work, our group developed a classifier that analyzed linear acceleration and achieved good accuracy. However, the classifier can be volatile in the sense that when retrained and tested on the same data, accuracy varies, especially when trained on small amounts of data, such as for a single individual. We hypothesize that this may be due in part to the noise in linear acceleration, which is significantly larger relative to normal human wrist motions than the noise in raw acceleration. We therefore perform a set of experiments to determine whether classifier accuracy and/or stability can be improved by analyzing raw acceleration instead of linear acceleration.

    The dataset used for this work is the Clemson All-Day Eating (CAD) dataset, collected over a period of one year in 2014. During data collection, 351 participants were recruited and 354 days of wrist data were recorded. The recorded data contained 1,133 meals spread over 250 hours of eating, and its total length was nearly 4,680 hours. In this work, the CAD dataset was reduced to 342 days and 1,034 meals because raw acceleration data was not saved for some recordings.

    Previous work developing a classifier based on linear acceleration achieved a time-based weighted accuracy of 80%, a true positive rate (TPR) of 89% on eating episodes, and a false positive per true positive (FP/TP) rate of 1.7. However, these results were based on a single run of train and test. Recently we discovered that model accuracy varies somewhat between runs. We therefore perform a replication experiment on the linear classifier, rerunning the entire experiment 10 times and reporting the average and standard deviation of all metrics across these runs. This establishes a better baseline for comparison with our new classifier that analyzes raw acceleration. We next analyze the same set of data, using the same neural network model and general approach as for the linear acceleration-based classifier, to compare its accuracy and stability.

    Evaluating all results, we found that the linear acceleration classifier achieved (average ± standard deviation across 10 runs) a TPR of 86% ± 1.2%, an FP/TP of 1.7 ± 0.3, and a weighted accuracy of 79% ± 0.5%. We therefore concluded that the results of the original experiment were above average and could be due either to an anomalous training and testing run or to contamination of the testing data. These results set a new baseline against which we compare the raw acceleration model. We found that the raw acceleration model achieved a TPR of 84% ± 1.3%, an FP/TP of 1.7 ± 0.3, and a weighted accuracy of 78% ± 0.4%. Thus, on average, we found that linear acceleration performed slightly better than raw acceleration at episode detection. The time metrics for raw and linear acceleration were similar, but we did see a higher standard deviation for the raw models.

    Our results indicate that linear acceleration provides slightly greater accuracy than raw acceleration. Even though raw acceleration has a higher signal-to-noise ratio than linear acceleration with respect to normal human wrist motions, our classifier model has roughly equal volatility when analyzing either signal. We conclude that the main source of model volatility is still unknown. Thus, we found that linear acceleration is, overall, a better predictor of eating than raw acceleration, though the difference in accuracies is very minor and volatility in the training process could account for some of it.
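    As background for the raw-versus-linear distinction, here is a minimal sketch of one common way to derive linear acceleration from raw accelerometer samples: estimate the gravity component with a low-pass filter and subtract it. The smoothing constant, noise level, and function names are illustrative assumptions, not the thesis's pipeline.

```python
# Minimal sketch (not the thesis's pipeline): deriving linear acceleration
# from raw accelerometer data by low-pass filtering out the gravity component.
import numpy as np

ALPHA = 0.95  # hypothetical smoothing constant for the gravity estimate

def linear_from_raw(raw: np.ndarray) -> np.ndarray:
    """raw: (N, 3) accelerometer samples in g. Returns (N, 3) linear accel."""
    gravity = np.empty_like(raw)
    gravity[0] = raw[0]
    for i in range(1, len(raw)):
        # Exponential low-pass filter tracks the slowly changing gravity vector.
        gravity[i] = ALPHA * gravity[i - 1] + (1.0 - ALPHA) * raw[i]
    return raw - gravity  # what remains is motion-induced (linear) acceleration

# Example: a resting wrist reads ~1 g on one axis; linear accel should be ~0.
raw = np.tile([0.0, 0.0, 1.0], (100, 1)) + np.random.normal(0, 0.06, (100, 3))
linear = linear_from_raw(raw)
print(np.abs(linear.mean(axis=0)))  # near zero for a stationary sensor
```

    Note that the subtraction step adds the filter's estimation error to the signal, which is one plausible reason linear acceleration can be noisier than raw acceleration relative to small wrist motions.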

    Detecting Periods of Eating in Everyday Life by Tracking Wrist Motion - What is a Meal?

    Eating is one of the most basic activities observed in sentient animals, a behavior so natural that humans often eat without giving the activity a second thought. Unfortunately, this often leads to consuming more calories than are expended, which can cause weight gain, a leading cause of disease and death. This proposal describes research into methods for automatically detecting periods of eating by tracking wrist motion so that calorie consumption can be tracked. We first briefly discuss how obesity is caused by an imbalance between calorie intake and expenditure. Calorie consumption and expenditure can be tracked manually using tools like paper diaries; however, it is well known that human bias can affect the accuracy of such tracking. Researchers in the emerging field of automated dietary monitoring (ADM) are attempting to track diet using electronic methods in an effort to mitigate this bias.

    We attempt to replicate a previous algorithm that detects eating by tracking wrist motion electronically. The previous algorithm was evaluated on data collected from 43 subjects using an iPhone as the sensor. Periods of time are segmented first and then classified using a naive Bayesian classifier (a minimal sketch of this segment-then-classify pattern appears below). For replication, we describe the collection of the Clemson all-day dataset (CAD), a free-living eating activity dataset containing 4,680 hours of wrist motion collected from 351 participants, the largest of its kind known to us. We learn that while different sensors are available to log wrist acceleration data, no unified convention exists, and this data must therefore be transformed between conventions. We learn that the performance of the eating detection algorithm is affected by changes in the sensors used to track wrist motion, by increased variability in behavior due to a larger participant pool, and by the ratio of eating to non-eating in the dataset. We learn that commercially available acceleration sensors contain noise in their reported readings, which affects wrist tracking specifically because of the low magnitude of wrist acceleration. Commercial accelerometers can have noise up to 0.06 g, which is acceptable in applications like automobile crash testing or pedestrian indoor navigation, but not in applications using wrist motion. We quantify linear acceleration noise in our free-living dataset, explain its sources, describe a method to mitigate it, and evaluate its effect on the eating detection algorithm.

    By visualizing periods of eating in the collected dataset, we learn that people often conduct secondary activities while eating, such as walking, watching television, working, and doing household chores. These secondary activities cause wrist motions that obscure the wrist motions associated with eating, which increases the difficulty of detecting periods of eating (meals). Subjects reported conducting secondary activities in 72% of meals. Analysis of wrist motion data revealed that the wrist was resting 12.8% of the time during self-reported meals, compared to only 6.8% of the time in a cafeteria dataset. Walking motion was found 5.5% of the time during meals in free-living, compared to 0% in the cafeteria. Augmenting an eating detection classifier to include walking and resting detection improved the average per-person accuracy from 74% to 77% on our free-living dataset (t[353] = 7.86, p < 0.001). This suggests that future data collections for eating activity detection should also collect detailed ground truth on secondary activities conducted during eating.
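    The following is a minimal sketch of the segment-then-classify pattern referenced above: split wrist motion into fixed windows, compute simple features, and classify each window with a naive Bayes model. The window length, sampling rate, and feature choices are hypothetical, not those of the replicated algorithm.

```python
# Minimal sketch (assumed, not the original study's code) of the
# segment-then-classify pattern: split wrist motion into fixed windows,
# compute simple features, and classify each window with naive Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

WINDOW = 15 * 60  # hypothetical: 15-second windows at 60 Hz

def window_features(accel: np.ndarray) -> np.ndarray:
    """accel: (N, 3) wrist acceleration. Returns one feature row per window."""
    n = len(accel) // WINDOW
    feats = []
    for w in np.split(accel[: n * WINDOW], n):
        mag = np.linalg.norm(w, axis=1)
        # Mean level, variability, and total motion as toy features.
        feats.append([mag.mean(), mag.std(), np.abs(np.diff(mag)).sum()])
    return np.array(feats)

# Train on labeled windows (1 = eating, 0 = not eating), then predict.
rng = np.random.default_rng(0)
train = rng.normal(0, 0.1, (WINDOW * 20, 3))
labels = rng.integers(0, 2, 20)                  # placeholder ground truth
clf = GaussianNB().fit(window_features(train), labels)
print(clf.predict(window_features(train))[:5])
```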
    Finally, learning from this data collection, we describe a convolutional neural network (CNN) that detects periods of eating by tracking wrist motion during everyday life. Eating uses hand-to-mouth gestures for ingestion, each of which lasts approximately 1-5 seconds. The novelty of our new approach is that we analyze a much longer window (0.5-15 min) that can contain other gestures related to eating, such as cutting or manipulating food, preparing foods for consumption, and resting between ingestion events. The context of these other gestures can improve the detection of periods of eating. We found that accuracy at detecting eating increased by 15% for longer windows compared to shorter windows. Overall results on CAD were 89% detection of meals with 1.7 false positives for every true positive (FP/TP), and a time-weighted accuracy of 80%.
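    A minimal sketch of a 1D CNN that classifies one long wrist-motion window as eating or not eating follows; the layer sizes, window length, and sampling rate are illustrative assumptions rather than the network described above.

```python
# Minimal sketch (assumed architecture, not the one in the dissertation):
# a 1D CNN that labels a multi-minute window of wrist motion as eating or not.
import torch
import torch.nn as nn

SAMPLE_HZ = 15          # hypothetical sensor rate
WINDOW_SEC = 6 * 60     # hypothetical 6-minute window
N_CHANNELS = 6          # accelerometer x, y, z + gyroscope yaw, pitch, roll

class EatingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # collapse the time axis
        )
        self.classify = nn.Linear(32, 2)   # eating vs. not eating

    def forward(self, x):                  # x: (batch, channels, time)
        return self.classify(self.features(x).squeeze(-1))

window = torch.randn(1, N_CHANNELS, SAMPLE_HZ * WINDOW_SEC)
logits = EatingCNN()(window)
print(logits.shape)  # torch.Size([1, 2])
```

    The long input window is what lets convolutional features pick up context gestures (cutting, resting) surrounding the 1-5 second ingestion gestures themselves.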

    Segmentation and Recognition of Eating Gestures from Wrist Motion Using Deep Learning

    This research considers training a deep learning neural network to segment and classify eating-related gestures from recordings of subjects eating unscripted meals in a cafeteria environment. It is inspired by the recent trend of success in deep learning for solving a wide variety of machine learning tasks such as image annotation, classification, and segmentation. Image segmentation is a particularly important inspiration, and this work proposes a novel deep learning classifier for segmenting time-series data based on the work done in [25] and [30]. While deep learning has established itself as the state-of-the-art approach in image segmentation, particularly in works such as [2], [25], and [31], very little work has been done on segmenting time-series data using deep learning models.

    Wrist-mounted IMU sensors such as accelerometers and gyroscopes can record activity from a subject in a free-living environment while being encapsulated in a watch-like device, and thus remain inconspicuous. Such a device can be used to monitor eating-related activities and is thought to be useful for monitoring energy intake for healthy individuals as well as those afflicted with conditions such as being overweight or obese.

    The dataset used for this research study is known as the Clemson Cafeteria Dataset, available publicly at [14]. It contains data for 276 people eating a meal at the Harcombe Dining Hall at Clemson University, which is a large cafeteria environment. The data includes wrist motion measurements (accelerometer x, y, z; gyroscope yaw, pitch, roll) recorded while the subjects each ate an unscripted meal. Each meal consisted of 1-4 courses, of which 488 were used as part of this research. The ground truth labelings of gestures were created by a set of 18 trained human raters and consist of labels such as 'bite', used to indicate when the subject starts to put food in their mouth and later moves the hand away for more 'bites' or other activities. Other labels include 'drink' for liquid intake, 'rest' for stationary hands, and 'utensiling' for actions such as cutting food into bite-size pieces, stirring a liquid, or dipping food in sauce, among other things. All other activities are labeled 'other' by the human raters.

    Previous work in our group focused on recognizing these gesture types from manually segmented data using hidden Markov models [24], [27]. This thesis builds on that work by considering a deep learning classifier that automatically segments and recognizes gestures. The neural network classifier proposed as part of this research performs satisfactorily at recognizing intake gestures, with 79.6% of 'bite' and 80.7% of 'drink' gestures recognized correctly on average per meal. Overall, 77.7% of all gestures were recognized correctly on average per meal, indicating that a deep learning classifier can successfully be used to simultaneously segment and identify eating gestures from wrist motion measured through IMU sensors.
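    To illustrate the difference between window classification and time-series segmentation, here is a minimal sketch of a fully convolutional network that assigns a gesture label to every sample of an IMU stream; the five-class label set mirrors the dataset's labels, but the architecture and sizes are assumptions, not the thesis's network.

```python
# Minimal sketch (assumed, not the thesis's network): a fully convolutional
# model that labels every time step of a 6-channel IMU stream with one of
# five gesture classes, so segmentation and recognition happen together.
import torch
import torch.nn as nn

GESTURES = ["bite", "drink", "rest", "utensiling", "other"]

class GestureSegmenter(nn.Module):
    def __init__(self, channels=6, n_classes=len(GESTURES)):
        super().__init__()
        self.net = nn.Sequential(
            # 'Same' padding keeps one output label per input sample.
            nn.Conv1d(channels, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, n_classes, kernel_size=1),  # per-sample class scores
        )

    def forward(self, x):          # x: (batch, 6, time)
        return self.net(x)         # (batch, n_classes, time)

stream = torch.randn(1, 6, 2000)                 # ~2000 IMU samples
per_sample = GestureSegmenter()(stream).argmax(dim=1)
print(per_sample.shape)                          # torch.Size([1, 2000])
# Contiguous runs of the same label form the predicted gesture segments.
```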