
    Collecting Shared Experiences through Lifelogging: Lessons Learned

    The emergence of widespread pervasive sensing, personal recording technologies, and systems for the quantified self are creating an environment in which one can capture fine-grained activity traces. Such traces have wide applicability in domains such as human memory augmentation, behavior change, and healthcare. However, obtaining these traces for research is nontrivial, especially those containing photographs of everyday activities. To source data for their own work, the authors created an experimental setup in which they collected detailed traces of a group of researchers over 2.75 days. They share their experiences of this process and present a series of lessons learned for other members of the research community conducting similar studies.

    Detecting Periods of Eating in Everyday Life by Tracking Wrist Motion — What is a Meal?

    Eating is one of the most basic activities observed in sentient animals, a behavior so natural that humans often eat without giving the activity a second thought. Unfortunately, this often leads to consuming more calories than are expended, which can cause weight gain, a leading cause of disease and death. This proposal describes research into methods for automatically detecting periods of eating by tracking wrist motion so that calorie consumption can be tracked. We first briefly discuss how obesity is caused by an imbalance between calorie intake and expenditure. Calorie consumption and expenditure can be tracked manually using tools such as paper diaries; however, it is well known that human bias can affect the accuracy of such tracking. Researchers in the emerging field of automated dietary monitoring (ADM) are attempting to track diet using electronic methods in an effort to mitigate this bias. We attempt to replicate a previous algorithm that detects eating by tracking wrist motion electronically. The previous algorithm was evaluated on data collected from 43 subjects using an iPhone as the sensor. Periods of time are first segmented and then classified using a naive Bayesian classifier. For replication, we describe the collection of the Clemson all-day dataset (CAD), a free-living eating activity dataset containing 4,680 hours of wrist motion collected from 351 participants, the largest of its kind known to us. We learn that while different sensors are available to log wrist acceleration data, no unified convention exists, and the data must therefore be transformed between conventions. We learn that the performance of the eating detection algorithm is affected by changes in the sensors used to track wrist motion, by increased variability in behavior due to a larger participant pool, and by the ratio of eating to non-eating in the dataset. We learn that commercially available acceleration sensors contain noise in their reported readings, which affects wrist tracking in particular because of the low magnitude of wrist acceleration. Commercial accelerometers can exhibit noise of up to 0.06 g, which is acceptable in applications such as automobile crash testing or pedestrian indoor navigation, but not in ones based on wrist motion. We quantify linear acceleration noise in our free-living dataset, explain its sources, describe a method to mitigate it, and evaluate its effect on the eating detection algorithm. By visualizing periods of eating in the collected dataset, we learn that people often conduct secondary activities while eating, such as walking, watching television, working, and doing household chores. These secondary activities cause wrist motions that obfuscate the wrist motions associated with eating, which increases the difficulty of detecting periods of eating (meals). Subjects reported conducting secondary activities during 72% of meals. Analysis of wrist motion data revealed that the wrist was resting 12.8% of the time during self-reported meals, compared to only 6.8% of the time in a cafeteria dataset. Walking motion was found 5.5% of the time during meals in free living, compared to 0% in the cafeteria. Augmenting an eating detection classifier to include walking and resting detection improved the average per-person accuracy from 74% to 77% on our free-living dataset (t[353] = 7.86, p < 0.001). This suggests that future data collections for eating activity detection should also collect detailed ground truth on the secondary activities being conducted during eating.
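    As a minimal illustration of the pipeline described above (axis-convention remapping, noise smoothing, and naive Bayes classification of segmented windows), the following Python sketch shows one plausible shape it could take; every function name, window size, and feature here is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: function names, window sizes, and features are
# assumptions for illustration, not the implementation described above.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def to_common_convention(acc_g, axis_map=(0, 1, 2), signs=(1, 1, 1)):
    """Remap axes and flip signs so recordings from different sensors
    share a single coordinate convention (units: g)."""
    return acc_g[:, list(axis_map)] * np.asarray(signs, dtype=float)

def smooth(acc_g, window=15):
    """Moving-average filter, one simple way to mitigate the ~0.06 g
    noise floor of commercial accelerometers."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(acc_g[:, i], kernel, mode="same")
                            for i in range(acc_g.shape[1])])

def window_features(acc_g, fs=15, win_s=60):
    """Segment into fixed-length windows and compute simple per-window
    features (mean, variability, and total change of the magnitude)."""
    n = fs * win_s
    feats = []
    for start in range(0, len(acc_g) - n + 1, n):
        mag = np.linalg.norm(acc_g[start:start + n], axis=1)
        feats.append([mag.mean(), mag.std(), np.abs(np.diff(mag)).sum()])
    return np.asarray(feats)

# Hypothetical usage: fit a naive Bayes model on labeled windows, then
# predict eating vs. non-eating windows on held-out data.
# clf = GaussianNB().fit(window_features(train_acc), train_window_labels)
# predictions = clf.predict(window_features(test_acc))
```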
Finally, learning from this data collection, we describe a convolutional neural network (CNN) that detects periods of eating by tracking wrist motion during everyday life. Eating involves hand-to-mouth gestures for ingestion, each of which lasts approximately 1-5 seconds. The novelty of our new approach is that we analyze a much longer window (0.5-15 minutes) that can contain other gestures related to eating, such as cutting or manipulating food, preparing food for consumption, and resting between ingestion events. The context of these other gestures can improve the detection of periods of eating. We found that accuracy at detecting eating increased by 15% for longer windows compared to shorter ones. Overall results on CAD were 89% detection of meals, 1.7 false positives for every true positive (FP/TP), and a time-weighted accuracy of 80%.
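A hypothetical sketch of such a long-window classifier, written as a small 1-D CNN in PyTorch, appears below; the layer sizes, sampling rate, and window length are assumptions, not the architecture reported here.

```python
# Hypothetical sketch: a 1-D CNN classifying long wrist-motion windows
# (e.g. a 6-minute window at 15 Hz = 5400 samples x 3 axes). Layer sizes,
# sampling rate, and window length are assumptions, not the reported design.
import torch
import torch.nn as nn

class EatingCNN(nn.Module):
    def __init__(self, n_axes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_axes, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),  # pool over whatever temporal length remains
        )
        self.classifier = nn.Linear(32, 2)  # eating vs. not eating

    def forward(self, x):  # x: (batch, n_axes, window_len)
        return self.classifier(self.features(x).squeeze(-1))

# Hypothetical usage on a batch of eight 6-minute windows:
# logits = EatingCNN()(torch.randn(8, 3, 5400))
```

    Pooling adaptively over the time axis is one way to let a single model accept the range of window lengths (0.5-15 minutes) the abstract describes.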

    Media of things: supporting the production and consumption of object-based media with the internet of things

    Ph.D. Thesis. Visual media consumption habits are in a constant state of flux; predicting which platforms and consumption mediums will succeed and which will fail is a fateful business. Virtual Reality and Augmented Reality could go the way of the 3D TVs that came before them, or they could push forward a new level of content immersion and radically change media production forever. Content producers are constantly trying to adapt to these shifts in habits and respond to new technologies. Smaller independent studios, buoyed by their new-found audience penetration through sites like YouTube and Facebook, can inherently respond to these emerging technologies faster, not weighed down by the “legacy” that many broadcasters carry. Broadcasters such as the BBC are keen to evolve their content to respond to the challenges of this new world, producing content that is both more compelling in terms of immersion and more responsive to technological advances in terms of input and output mediums. This is where the concept of Object-based Broadcasting was born: content that responds to a user consuming it on a phone over a short period of time, whilst also providing an immersive multi-screen experience in a smart home environment. One of the primary barriers to the development of Object-based Media is the lack of a feasible set of mechanisms to generate supporting assets and adequately exploit the input and output mediums of the modern home. The underlying question here is how we build these experiences; we obviously cannot produce content for each of the thousands of combinations of devices and hardware available to us. I view this challenge to content makers as one of a distinct lack of descriptive and abstract detail at both ends of the production pipeline. In investigating the contribution that the Internet of Things may make to this space, I first look to create well-described assets in productions using embedded sensing, detecting non-visual actions and generating detail not possible from vision alone. I then look to exploit existing datasets from production and consumption environments to gain a greater understanding of generated media assets and a means to coordinate input/output in the home. Finally, I investigate the opportunities for rich and expressive interaction with devices and content in the home, exploiting favourable characteristics of existing interfaces to construct a compelling control interface to Smart Home devices and Object-based experiences. I resolve that the Internet of Things is vital to the development of Object-based Broadcasting and its wider roll-out. British Broadcasting Corporation.