146 research outputs found
Scraping social media photos posted in Kenya and elsewhere to detect and analyze food types
Monitoring population-level changes in diet could be useful for education and for implementing interventions to improve health. Research has shown that data from social media sources can be used to monitor dietary behavior. We propose a scrape-by-location methodology for creating food image datasets from Instagram posts, which we used to collect 3.56 million images over a period of 20 days in March 2019. We also propose a scrape-by-keywords methodology, which we used to scrape approximately 30,000 images and their captions covering 38 Kenyan food types. We publish two datasets of 104,000 and 8,174 image/caption pairs, respectively. With the first dataset, Kenya104K, we train a Kenyan Food Classifier, called KenyanFC, to distinguish Kenyan food from non-food images posted in Kenya. With the second dataset, KenyanFood13, we train KenyanFTR, short for Kenyan Food Type Recognizer, a multimodal deep neural network that recognizes 13 popular Kenyan food types using both images and their corresponding captions. Experiments show an average top-1 accuracy of 99% for KenyanFC over 10,400 tested Instagram images and 81% for KenyanFTR over 8,174 tested data points. Ablation studies show that three of the 13 food types are particularly difficult to categorize from image content alone, and that adding caption analysis to the image analysis yields a classifier that is 9 percentage points more accurate than one that relies only on images. Our food trend analysis revealed that cakes and roasted meats were the most popular foods in photographs on Instagram in Kenya in March 2019.
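The abstract does not specify how KenyanFTR combines the two modalities; the late-fusion sketch below (all labels and probabilities are hypothetical, not from the paper) only illustrates why adding caption analysis can rescue food types that are ambiguous from image content alone.

```python
# Hypothetical late-fusion sketch: combine per-class probabilities from
# an image model and a caption model. The actual KenyanFTR architecture
# is not described here; this only illustrates the multimodal idea.

def fuse_predictions(image_probs, caption_probs, weight=0.5):
    """Weighted average of per-class probabilities from two modalities."""
    assert len(image_probs) == len(caption_probs)
    return [weight * p_img + (1 - weight) * p_cap
            for p_img, p_cap in zip(image_probs, caption_probs)]

def predict(image_probs, caption_probs, labels, weight=0.5):
    """Return the label with the highest fused probability."""
    fused = fuse_predictions(image_probs, caption_probs, weight)
    return labels[max(range(len(fused)), key=fused.__getitem__)]

# Example: the caption disambiguates a dish the image model is unsure about.
labels = ["ugali", "chapati", "pilau"]      # hypothetical class names
img = [0.40, 0.35, 0.25]                    # image model is uncertain
cap = [0.10, 0.80, 0.10]                    # caption strongly suggests class 1
print(predict(img, cap, labels))            # -> chapati
```

With equal weights the fused scores are [0.25, 0.575, 0.175], so the caption evidence flips the decision the image model would have made on its own.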
Browse-to-search
This demonstration presents a novel interactive online shopping application based on visual search technologies. When users want to buy something on a shopping site, they often need related information from other websites, forcing them to switch between the page being browsed and pages of search results. The proposed application enables users to naturally search for products of interest while they browse a web page, so that even casual purchase intent is easily satisfied. The interactive shopping experience is characterized by: 1) in session - it allows users to express purchase intent within the browsing session, instead of leaving the current page and navigating to other websites; 2) in context - the browsed web page provides implicit context information which helps infer user purchase preferences; 3) in focus - users easily specify their search interest using gestures on touch devices and do not need to formulate queries in a search box; 4) natural - gesture input and visual search provide users with a natural shopping experience. The system is evaluated against a dataset consisting of several million commercial product images.
Smart Kitchens for People with Cognitive Impairments: A Qualitative Study of Design Requirements
Individuals with cognitive impairments currently rely on extensive human support during their transition from assisted living to independent living. In Western Europe, many government-supported volunteer organizations provide sheltered living facilities: supervised environments in which people with cognitive impairments collaboratively learn daily living skills. In this paper, we describe communal cooking practices in sheltered living facilities and identify opportunities for supporting them with interactive technology to reduce volunteer workload. We conducted two contextual observations of twelve people with cognitive impairments cooking in sheltered living facilities and supplemented this data with interviews with four employees and volunteers who supervise them. Through thematic analysis, we identified four themes that inform design requirements for communal cooking activities: work organization, community, supervision, and practicalities. Based on these, we present five design implications for assistive kitchen systems for people with cognitive impairments.
FoodFab: Creating Food Perception Illusions using Food 3D Printing
Personalizing eating so that everyone consumes only what they need can improve our management of food waste. In this paper, we explore the use of food 3D printing to create perceptual illusions that control the level of perceived satiety for a defined amount of calories. We present FoodFab, a system that allows users to control their food intake by modifying a food's internal structure via two 3D printing parameters: infill pattern and infill density. In two experiments with a total of 30 participants, we studied the effect of these parameters on users' chewing time, which is known to affect people's feeling of satiety. Our results show that we can indeed modify chewing time by varying infill pattern and density, and thus control perceived satiety. Based on the results, we propose two computational models and integrate them into a user interface that simplifies the creation of personalized food structures.
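The paper's fitted computational models are not reproduced in this abstract. As a purely hypothetical illustration of how such a model could be inverted inside a user interface, assume (for a fixed infill pattern) that chewing time grows linearly with infill density; the constants below are invented for the sketch.

```python
# Illustrative only: the relation and constants are hypothetical, not the
# models fitted in the FoodFab paper. Given a target chewing time, invert
# an assumed linear model to choose an infill density for the printer.

def density_for_chew_time(target_s, base_s=10.0, slope_s_per_pct=0.2):
    """Return the infill density (%) that a hypothetical linear model
    chew_time = base_s + slope_s_per_pct * density
    maps to the target chewing time (seconds), clamped to [0, 100]."""
    density = (target_s - base_s) / slope_s_per_pct
    return max(0.0, min(100.0, density))

print(density_for_chew_time(20.0))   # -> 50.0 (% infill)
```

A real system would fit `base_s` and `slope_s_per_pct` per infill pattern from experimental data, as the paper's two models presumably do for pattern and density.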
Using Natural Language Processing and Artificial Intelligence to Explore the Nutrition and Sustainability of Recipes and Food
Copyright © 2021 van Erp, Reynolds, Maynard, Starke, Ibáñez Martín, Andres, Leite, Alvarez de Toledo, Schmidt Rivera, Trattner, Brewer, Adriano Martins, Kluczkovski, Frankowska, Bridle, Levy, Rauber, Tereza da Silva and Bosma. In this paper, we discuss the use of natural language processing and artificial intelligence to analyze nutritional and sustainability aspects of recipes and food. We present the state of the art and some use cases, followed by a discussion of challenges. Our perspective is that while these challenges are typically technical in nature, addressing them nevertheless requires an interdisciplinary approach combining natural language processing and artificial intelligence with expert domain knowledge to create practical tools and comprehensive analyses for the food domain.
Funding: Research Councils UK, the University of Manchester, the University of Sheffield, the STFC Food Network+ and the HEFCE Catalyst-funded N8 AgriFood Resilience Programme with matched funding from the N8 group of universities; the AHRC-funded US-UK Food Digital Scholarship Network (grant reference AH/S012591/1); the STFC GCRF-funded project "Trends in greenhouse gas emissions from Brazilian foods using GGDOT" (ST/S003320/1); the STFC-funded project "Piloting Zooniverse for food, health and sustainability citizen science" (ST/T001410/1); the STFC Food Network+ awarded scoping project "Piloting Zooniverse to help us understand citizen food perceptions"; ESRC via the University of Sheffield Social Sciences Partnerships, Impact and Knowledge Exchange fund for "Recipe environmental impact calculator"; Research England via the University of Sheffield QR Strategic Priorities Fund projects "Cooking as part of a Sustainable Food System - creating a wider evidence base for policy makers" and "Food based citizen science in the UK as a policy tool"; the N8 AgriFood-funded project "Greenhouse Gas and Dietary choices Open-source Toolkit (GGDOT) hacknights"; Brunel University internal funding; the Research England GCRF QR Fund; the University of Manchester GCRF QR Visiting Researcher Fellowship; and the National Institute of Informatics, Japan.
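As a toy illustration of the kind of recipe text processing the paper discusses (this code is not from the paper, and the line format is an assumption), parsing ingredient lines into structured quantity/unit/ingredient triples is a typical first step before nutrition or sustainability lookups:

```python
# Hypothetical minimal ingredient-line parser: turns lines such as
# "200 g basmati rice" into (quantity, unit, name) triples that could
# later be matched against a nutrition or emissions database.
import re

LINE = re.compile(
    r"^\s*(?P<qty>\d+(?:\.\d+)?)"      # numeric quantity
    r"\s*(?P<unit>g|kg|ml|l|tbsp|tsp|cups?)?"  # optional unit
    r"\s+(?P<name>.+)$"                # ingredient name
)

def parse_ingredient(line):
    """Parse one ingredient line; return (qty, unit, name) or None."""
    m = LINE.match(line.lower())
    if not m:
        return None
    return float(m.group("qty")), m.group("unit"), m.group("name").strip()

print(parse_ingredient("200 g basmati rice"))    # -> (200.0, 'g', 'basmati rice')
print(parse_ingredient("2 tbsp vegetable oil"))  # -> (2.0, 'tbsp', 'vegetable oil')
```

Real recipe corpora need far more than a regex (ranges, fractions, multilingual units, free-text preparation notes), which is precisely the kind of challenge the paper argues calls for NLP combined with domain expertise.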
Human activity recognition for pervasive interaction
PhD thesis. This thesis addresses the challenge of computing food preparation context in the kitchen. The automatic recognition of fine-grained human activities and food ingredients is realized through pervasive sensing, which we achieve by instrumenting kitchen objects such as knives, spoons, and chopping boards with sensors. Context recognition in the kitchen lies at the heart of a broad range of real-world applications. In particular, activity and food ingredient recognition in the kitchen is an essential component of situated services such as automatic prompting for cognitively impaired kitchen users and digital situated support for healthier eating interventions. Previous work, however, has addressed the activity recognition problem by exploring high-level human activities using wearable sensing (i.e., sensors worn on the human body) or using technologies that raise privacy concerns (i.e., computer vision). Although such approaches have yielded significant results for a number of activity recognition problems, they are not applicable to our domain of investigation, for which we argue that the technology itself must be genuinely "invisible", allowing users to perform their activities in a completely natural manner.
In this thesis we describe the development of pervasive sensing technologies and algorithms for fine-grained human activity and food ingredient recognition in the kitchen. After reviewing previous work on food and activity recognition, we present three systems that constitute increasingly sophisticated approaches to the challenge of kitchen context recognition. Two of these systems, Slice&Dice and Class-based Threshold Dynamic Time Warping (CBT-DTW), recognize fine-grained food preparation activities. Slice&Dice is a proof-of-concept application, whereas CBT-DTW is a real-time application that also addresses the problem of recognising unknown activities. The final system, KitchenSense, is a real-time context recognition framework that deals with the recognition of a more complex set of activities and includes the recognition of food ingredients and events in the kitchen. For each system, we describe the prototyping of pervasive sensing technologies and algorithms, as well as real-world experiments and empirical evaluations that validate the proposed solutions.
Vietnamese government's 322 project, executed by the Vietnamese Ministry of Education and Training
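CBT-DTW builds on dynamic time warping. The sketch below shows only the core DTW alignment between two 1-D sensor sequences, not the thesis's class-based thresholding, multi-sensor features, or real-time pipeline.

```python
# Minimal dynamic time warping (DTW): the classic dynamic-programming
# distance that tolerates sequences performed at different speeds, the
# alignment at the heart of DTW-based activity recognizers.

def dtw_distance(a, b):
    """Cumulative alignment cost between 1-D sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])        # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # step in a
                                 cost[i][j - 1],      # step in b
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# The same gesture shape performed slower aligns at zero cost.
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 2, 1, 0]))  # -> 0.0
```

A class-based-threshold recognizer could then compare an incoming window's DTW distance to each activity template against a per-class threshold, rejecting the window as "unknown" when every distance exceeds its threshold.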
- …