2 research outputs found

    Recognizing food places in egocentric photo-streams using multi-scale atrous convolutional networks and self-attention mechanism.

    Wearable sensors (e.g., lifelogging cameras) are useful tools for monitoring people's daily habits and lifestyle. Wearable cameras continuously capture different moments of their wearers' day, including their environment and their interactions with objects, people, and places, reflecting their personal lifestyle. The food places where people eat, drink, and buy food, such as restaurants, bars, and supermarkets, directly affect their daily dietary intake and behavior. Consequently, an automated monitoring system that analyzes a person's food habits from daily recorded egocentric photo-streams of food places offers a valuable means for people to improve their eating habits, for example by classifying the captured food place images into categories and generating a detailed report of the time spent in specific food places. In this paper, we propose a self-attention mechanism with multi-scale atrous convolutional networks to generate discriminative features from image streams and recognize a predetermined set of food place categories. We apply our model to an egocentric food place dataset called 'EgoFoodPlaces', which comprises 43,392 images captured by 16 individuals using a lifelogging camera. The proposed model achieved an overall classification accuracy of 80% on the 'EgoFoodPlaces' dataset, outperforming baseline methods such as VGG16, ResNet50, and InceptionV3.
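
    The abstract above names two architectural ingredients: multi-scale atrous (dilated) convolutions and a spatial self-attention layer. The following is a minimal PyTorch sketch of those two building blocks only; the channel sizes, dilation rates, and the non-local-style attention formulation are illustrative assumptions, not the authors' exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleAtrousBlock(nn.Module):
        """Parallel 3x3 convolutions at several dilation rates, fused by a 1x1 conv."""
        def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):  # rates are assumed values
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
            )
            self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

        def forward(self, x):
            # Each branch sees the same input with a different receptive field.
            feats = [F.relu(b(x)) for b in self.branches]
            return F.relu(self.fuse(torch.cat(feats, dim=1)))

    class SelfAttention2d(nn.Module):
        """Non-local-style self-attention over all spatial positions."""
        def __init__(self, ch):
            super().__init__()
            self.query = nn.Conv2d(ch, ch // 8, 1)
            self.key = nn.Conv2d(ch, ch // 8, 1)
            self.value = nn.Conv2d(ch, ch, 1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
            k = self.key(x).flatten(2)                    # (b, c//8, hw)
            attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw) affinities
            v = self.value(x).flatten(2)                  # (b, c, hw)
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x  # residual: starts out as the identity

    # Example: refine an intermediate CNN feature map.
    x = torch.randn(1, 64, 32, 32)
    y = SelfAttention2d(64)(MultiScaleAtrousBlock(64, 64)(x))  # shape (1, 64, 32, 32)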

    Food places classification in egocentric images using Siamese neural networks.

    Wearable cameras have become more popular in recent years for capturing unscripted moments from a first-person perspective, which helps in analyzing the user's lifestyle. In this work, we aim to identify a person's daily food patterns by recognizing food-related places in egocentric (first-person) images. This opens the way to a system that can assist with improving eating habits and preventing diet-related conditions. In this paper, we use Siamese Neural Networks (SNNs) to learn similarities between images for one-shot "food places" classification. We tested the proposed method on "MiniEgoFoodPlaces", a dataset covering 15 food-related place categories. The proposed SNN model with a MobileNet backbone achieved overall classification accuracies of 76.74% and 77.53% on the validation and test sets of the "MiniEgoFoodPlaces" dataset, outperforming base models such as ResNet50, InceptionV3, and InceptionResNetV2.
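
    As a rough illustration of the approach described above, the sketch below pairs a shared encoder with an absolute-difference similarity head, and classifies a query image one-shot by comparing it against one reference image per category. It is a minimal PyTorch sketch under assumed details: torchvision's MobileNetV2 stands in for the paper's MobileNet backbone, and the 256-dimensional embedding and sigmoid head are illustrative choices, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2

    class SiameseNet(nn.Module):
        def __init__(self, embed_dim=256):
            super().__init__()
            backbone = mobilenet_v2()
            backbone.classifier = nn.Linear(1280, embed_dim)  # replace ImageNet head
            self.encoder = backbone  # one encoder, shared by both inputs
            self.head = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())

        def forward(self, a, b):
            ea, eb = self.encoder(a), self.encoder(b)  # shared weights
            return self.head(torch.abs(ea - eb))       # P(same place category)

    def classify_one_shot(model, query, references):
        """query: (1, 3, H, W); references: one image per category, (n, 3, H, W)."""
        scores = model(query.expand(references.size(0), -1, -1, -1), references)
        return scores.argmax().item()  # index of the best-matching category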
