
    Machine learning approaches for dietary assessment

    For individuals with dietary limitations, automatic food recognition and assessment are paramount. Smartphone-oriented applications are convenient and handy when a dish and its components need to be recognized. Machine learning (deep learning) applied to image recognition, alongside other classification techniques (for example, bag-of-words), are possible approaches to this problem. The most promising approach to the classification problem is currently deep learning, which requires heavy computation for training but yields an extremely fast and computationally light classifier. Since classifiers are required to be as accurate as possible, humans must also be considered as classifiers. This work tests and compares deep-learning methods, bag-of-words applied to computer vision, and the human visual system. Results show that deep learning performs better when the number of food categories is low; with more food categories, however, humans outperform the machine algorithms.
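
    As a rough illustration of the bag-of-words baseline mentioned in this abstract, the sketch below builds a minimal bag-of-visual-words image classifier with scikit-learn. The patch size, vocabulary size, and synthetic data are assumptions made for the example and are not taken from the paper.

```python
# Minimal bag-of-visual-words image classifier (illustrative sketch only).
# Patch size, vocabulary size, and the synthetic data are assumptions for
# demonstration; the paper does not specify its bag-of-words configuration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.image import extract_patches_2d

RNG = np.random.default_rng(0)
PATCH = (8, 8)        # local descriptor = flattened 8x8 grayscale patch
VOCAB = 32            # number of "visual words" (KMeans centroids)

def describe(image, n_patches=50):
    """Sample local patches and flatten them into descriptor vectors."""
    patches = extract_patches_2d(image, PATCH, max_patches=n_patches,
                                 random_state=0)
    return patches.reshape(len(patches), -1)

def bow_histogram(image, vocab):
    """Quantize an image's patch descriptors against the visual vocabulary."""
    words = vocab.predict(describe(image))
    hist, _ = np.histogram(words, bins=np.arange(VOCAB + 1))
    return hist / max(hist.sum(), 1)  # normalized word-frequency histogram

# Synthetic stand-in data: two "food categories" of 32x32 grayscale images.
images = RNG.random((40, 32, 32))
labels = np.repeat([0, 1], 20)
images[labels == 1] *= 0.5  # make the two classes trivially separable

# 1) Build the visual vocabulary from all training patches.
vocab = KMeans(n_clusters=VOCAB, n_init=5, random_state=0)
vocab.fit(np.vstack([describe(img) for img in images]))

# 2) Encode every image as a bag-of-words histogram and train a linear SVM.
X = np.array([bow_histogram(img, vocab) for img in images])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```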

    Material Measurement Units: Foundations Through a Survey

    Long-term availability of minerals and industrial materials is a necessary condition for sustainable development, as they are the constituents of any manufactured product. In particular, technologies with increasing demand, such as GPUs and photovoltaic panels, are made of critical raw materials. To enhance the efficiency of material management, in this paper we make three main contributions: first, we identify in the literature an emerging computer-vision-enabled material monitoring technology, which we call the Material Measurement Unit (MMU); second, we provide a survey of works relevant to the development of MMUs; third, we describe a material stock monitoring sensor network deploying multiple MMUs. Comment: In preparation for submission to ACM Computing Surveys.
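
    The abstract above does not prescribe a concrete data model, so the following is purely a hypothetical sketch of how readings from multiple MMUs in a sensor network might be represented and aggregated into network-wide material-stock totals; all field names and the aggregation rule are assumptions for illustration.

```python
# Hypothetical illustration of the MMU sensor-network idea described above.
# The field names and aggregation logic are assumptions for this sketch; the
# paper itself is a survey and does not prescribe a concrete data model.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class MMUReading:
    """One computer-vision-derived estimate of material stock at a site."""
    unit_id: str        # which Material Measurement Unit produced the reading
    material: str       # e.g. "copper", "silicon"
    mass_kg: float      # estimated mass observed in the monitored stock
    timestamp: float    # seconds since epoch

def aggregate_stock(readings):
    """Sum the latest reading per (unit, material) into network-wide totals."""
    latest = {}
    for r in readings:
        key = (r.unit_id, r.material)
        if key not in latest or r.timestamp > latest[key].timestamp:
            latest[key] = r
    totals = defaultdict(float)
    for r in latest.values():
        totals[r.material] += r.mass_kg
    return dict(totals)

readings = [
    MMUReading("mmu-01", "copper", 120.0, 1000.0),
    MMUReading("mmu-01", "copper", 118.5, 2000.0),  # newer reading supersedes
    MMUReading("mmu-02", "copper", 40.0, 1500.0),
    MMUReading("mmu-02", "silicon", 12.3, 1500.0),
]
print(aggregate_stock(readings))  # {'copper': 158.5, 'silicon': 12.3}
```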

    Mobile multi-food recognition using deep learning

    In this article, we propose a mobile food recognition system that uses a picture of the food, taken by the user's mobile device, to recognize multiple food items in the same meal, such as steak and potatoes on the same plate, in order to estimate the calories and nutrition of the meal. To speed up the process and make it more accurate, the user is asked to quickly identify the general area of the food by drawing a bounding circle on the food picture by touching the screen. The system then uses image processing and computational intelligence for food item recognition. The advantage of recognizing items, instead of the whole meal, is that the system can be trained with single-item food images only. At the training stage, we first use region proposal algorithms to generate candidate regions and extract the convolutional neural network (CNN) features of all regions. Second, we perform region mining to select positive regions for each food category using maximum cover, by our proposed submodular optimization method. At the testing stage, we first generate a set of candidate regions. For each region, a classification score is computed based on its extracted CNN features, and the food names of the selected regions are predicted. Since fast response is important for a user who is about to eat the meal, certain computationally heavy parts of the application are offloaded to the cloud; hence, food recognition and calorie estimation are performed on a cloud server. Our experiments, conducted on the FooDD dataset, show an average recall rate of 90.98%, a precision rate of 93.05%, and an accuracy of 94.11%, compared to the 50.8% to 88% accuracy of other existing food recognition systems.
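
    The region-mining step described above selects positive regions via maximum cover with a submodular optimization method. The sketch below shows the standard greedy maximum-coverage heuristic commonly used for such submodular problems; it is an illustration on assumed toy data, not the authors' exact formulation.

```python
# Sketch of the region-mining step as a greedy maximum-coverage selection.
# This is the textbook greedy heuristic for submodular coverage problems,
# shown only to illustrate the idea; the paper's exact formulation and the
# example data below are assumptions, not details taken from the article.

def greedy_max_cover(region_coverage, budget):
    """Pick up to `budget` regions whose covered pixel sets jointly cover
    as many pixels as possible (classic 1 - 1/e approximation guarantee)."""
    selected, covered = [], set()
    candidates = dict(region_coverage)  # region id -> set of covered pixels
    for _ in range(budget):
        # Choose the region adding the most not-yet-covered pixels.
        best = max(candidates, key=lambda r: len(candidates[r] - covered),
                   default=None)
        if best is None or not (candidates[best] - covered):
            break  # no remaining region adds new coverage
        covered |= candidates.pop(best)
        selected.append(best)
    return selected, covered

# Toy candidate regions, each covering a set of pixel indices on one plate.
regions = {
    "r1": {1, 2, 3, 4},
    "r2": {3, 4, 5},
    "r3": {6, 7},
    "r4": {1, 2},
}
chosen, covered = greedy_max_cover(regions, budget=2)
print(chosen)        # ['r1', 'r3'] -- together cover 6 of the 7 pixels
print(len(covered))  # 6
```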