1,458 research outputs found

    Vision Based Extraction of Nutrition Information from Skewed Nutrition Labels

    An important component of a healthy diet is the comprehension and retention of nutritional information and an understanding of how different food items and nutritional constituents affect our bodies. In the U.S. and many other countries, nutritional information is primarily conveyed to consumers through nutrition labels (NLs), which can be found on all packaged food products. However, even health-conscious consumers can find it challenging to use this information: they may be unfamiliar with nutritional terms, or unable to fit nutritional data collection into their daily activities for lack of time, motivation, or training. Automating this data collection and interpretation with computer vision algorithms that extract nutritional information from NLs therefore improves the user's ability to engage in continuous nutritional data collection and analysis. To make nutritional data collection more manageable and enjoyable for users, we present a Proactive NUTrition Management System (PNUTS). PNUTS seeks to shift current research and clinical practices in nutrition management toward persuasion, automated nutritional information processing, and context-sensitive nutrition decision support. PNUTS consists of two modules. The first is a barcode scanning module that runs on smartphones and is capable of vision-based localization of one-dimensional (1D) Universal Product Code (UPC) and International Article Number (EAN) barcodes with relaxed pitch, roll, and yaw camera alignment constraints. The algorithm localizes barcodes in images by computing Dominant Orientations of Gradients (DOGs) of image segments and grouping smaller segments with similar DOGs into larger connected components. Connected components that pass given morphological criteria are marked as potential barcodes. The algorithm is implemented in a distributed, cloud-based system. The system's front end is a smartphone application that runs on Android 4.2 or higher; the back end is deployed on a five-node Linux cluster where images are processed. The algorithm was evaluated on a corpus of 7,545 images extracted from 506 videos of bags, bottles, boxes, and cans in a supermarket. The DOG algorithm was coupled to our in-place scanner for 1D UPC and EAN barcodes. The scanner receives from the DOG algorithm the rectangular planar dimensions of a connected component and the component's dominant gradient orientation angle, referred to as the skew angle, and draws several scan lines at that skew angle within the component to recognize the barcode in place, without any rotations. The scanner coupled to the localizer was tested on the same corpus of 7,545 images. Laboratory experiments indicate that the system can localize and scan barcodes of any orientation in the yaw plane, of up to 73.28 degrees in the pitch plane, and of up to 55.5 degrees in the roll plane. The videos have been made public for interested research communities to replicate our findings or to use in their own research. The front-end Android application is available for free download on Google Play under the title NutriGlass. This module is also coupled to a comprehensive NL database, currently comprising more than 230,000 products, from which nutritional information can be retrieved on demand.
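    As a rough illustration of the localization step described above, the sketch below computes a dominant gradient orientation per image block and flags blocks whose gradients concentrate around one angle, as barcode bars do; grouping such blocks into connected components and applying morphological filters would follow. This is a minimal reading of the abstract, not the PNUTS source code, and all function names and thresholds are illustrative.

```python
# Hedged sketch of the dominant-orientation-of-gradients (DOG) idea for
# barcode localization; assumes a grayscale image as a NumPy array.
import numpy as np

def dominant_orientation(block: np.ndarray) -> tuple[float, float]:
    """Return (dominant gradient angle in degrees, coherence) for one block."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0   # fold to [0, 180)
    hist, edges = np.histogram(ang, bins=18, range=(0, 180), weights=mag)
    k = int(np.argmax(hist))
    coherence = hist[k] / (hist.sum() + 1e-9)  # how concentrated the orientations are
    return 0.5 * (edges[k] + edges[k + 1]), coherence

def candidate_blocks(img: np.ndarray, block: int = 32, min_coherence: float = 0.5):
    """Yield (row, col, angle) for blocks whose gradients share one orientation,
    as barcode bars do; neighboring hits with similar angles would then be
    grouped into connected components and filtered by morphological criteria."""
    h, w = img.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            angle, coh = dominant_orientation(img[r:r + block, c:c + block])
            if coh >= min_coherence:
                yield r, c, angle
```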
    The second module of PNUTS is an algorithm that determines the text skew angle of an NL image without constraining the angle's magnitude. The horizontal, vertical, and diagonal matrices of the two-dimensional (2D) Haar Wavelet Transform are used to identify 2D points with significant intensity changes. The set of points is bounded with a minimum-area rectangle whose rotation angle is the text's skew. The algorithm's performance is compared with that of five text skew detection algorithms on 1,001 U.S. nutrition label images and 2,200 single- and multi-column document images in multiple languages. To ensure the reproducibility of the reported results, the source code of the algorithm and the image data have been made publicly available. Once the skew angle is estimated correctly, optical character recognition (OCR) techniques can be used to extract the nutrition information.
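    A hedged sketch of the skew-estimation idea as the abstract describes it: one level of the 2D Haar wavelet transform, a percentile threshold on the combined detail coefficients to pick significant points, and a minimum-area rectangle whose rotation gives the skew. PyWavelets and OpenCV are assumed; the percentile and the angle normalization are illustrative choices, not the published implementation.

```python
# Sketch of Haar-wavelet-based text skew detection, under the assumptions above.
import cv2
import numpy as np
import pywt

def estimate_skew(gray: np.ndarray, keep: float = 99.0) -> float:
    """Return an estimated text skew angle (degrees) for a grayscale image."""
    _, (ch, cv_, cd) = pywt.dwt2(gray.astype(float), "haar")
    detail = np.abs(ch) + np.abs(cv_) + np.abs(cd)            # H + V + D responses
    ys, xs = np.where(detail >= np.percentile(detail, keep))  # significant points
    pts = np.column_stack([xs, ys]).astype(np.float32)
    (_, _), (w, h), angle = cv2.minAreaRect(pts)              # rotated bounding box
    # Normalization heuristic; OpenCV's angle convention varies across versions.
    return angle if w >= h else angle - 90.0
```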

    Crowdsourcing for Engineering Design: Objective Evaluations and Subjective Preferences

    Crowdsourcing enables designers to reach out to large numbers of people who may not have been previously considered when designing a new product, and to listen to their input by aggregating their preferences and evaluations over potential designs, aiming to improve "good" and catch "bad" design decisions during the early-stage design process. This approach puts human designers, be they industrial designers, engineers, marketers, or executives, at the forefront, with computational crowdsourcing systems on the back end to aggregate subjective preferences (e.g., which next-generation Brand A design best competes stylistically with next-generation Brand B designs?) or objective evaluations (e.g., which military vehicle design has the best situational awareness?). These crowdsourcing aggregation systems are built using probabilistic approaches that account for the irrationality of human behavior (i.e., violations of reflexivity, symmetry, and transitivity), approximated by modern machine learning algorithms and optimization techniques as necessitated by the scale of the data (millions of data points, hundreds of thousands of dimensions). This dissertation presents research findings suggesting that current off-the-shelf crowdsourcing aggregation algorithms are unsuitable for real engineering design tasks owing to the sparsity of expertise in the crowd, along with methods that mitigate this limitation by incorporating appropriate information for expertise prediction. Next, we introduce and interpret a number of new probabilistic models for crowdsourced design that provide large-scale preference prediction and full design space generation, building on statistical and machine learning techniques such as sampling methods, variational inference, and deep representation learning. Finally, we show how these models and algorithms can advance crowdsourcing systems by abstracting away the underlying appropriate yet unwieldy mathematics into easier-to-use visual interfaces practical for engineering design companies and governmental agencies engaged in complex engineering systems design.
    PhD, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133438/1/aburnap_1.pd
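    For readers unfamiliar with preference aggregation, a Bradley-Terry-style model over pairwise comparisons illustrates the kind of probabilistic aggregation the abstract refers to. This is a generic stand-in, not one of the dissertation's models; the learning rate and iteration count are arbitrary.

```python
# Illustrative Bradley-Terry aggregation of pairwise design preferences.
import numpy as np

def bradley_terry(n_items: int, wins: list[tuple[int, int]],
                  iters: int = 200, lr: float = 0.05) -> np.ndarray:
    """Fit log-strengths s so that P(i beats j) = sigmoid(s_i - s_j)."""
    s = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for i, j in wins:                        # i was preferred over j
            p = 1.0 / (1.0 + np.exp(s[j] - s[i]))
            grad[i] += 1.0 - p                   # gradient of the log-likelihood
            grad[j] -= 1.0 - p
        s += lr * grad
    return s - s.mean()                          # identifiable only up to a constant

# Toy usage: design 0 beats 1 twice, design 1 beats 2 once.
print(bradley_terry(3, [(0, 1), (0, 1), (1, 2)]))
```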

    Audio-based classroom activity detection for primary school lessons

    Classroom Activity Detection (CAD) is a challenging task, especially for primary school lessons, where student participation is fragmented, short, and often concurrent with teacher speech and background noise. This thesis proposes and evaluates three CAD models: two based on supervised audio classification (trained on a proprietary dataset annotated for this work) and one based on unsupervised diarization. The models are assessed through visualization of the estimated label density rather than the typical CAD segment visualizations, an approach that proves more effective for the highly fragmented segments observed in this use case. The main metric for comparing the models is the correlation coefficient between estimated and ground-truth label densities; the density and correlation together measure how accurately the models capture the temporal distribution of the different classroom activities. Complementary to that, the error in the total time estimated for each label (e.g., the estimated Teacher Talking Time, or TTT) is also reported. The supervised models, based on an LSTM neural network and a decision tree classifier, achieve similar classification performance and outperform the unsupervised diarization pipeline. Even a small amount of training data is enough for the supervised models to match the performance of the diarization system, and they generalize well to previously unseen voices. The unsupervised diarization model requires no training data for this task, but it detects the teacher's voice less accurately than the supervised models and cannot properly distinguish between the labels "single student" and "group work". Overall, the supervised CAD models proposed in this thesis demonstrate promising results for primary school lessons, even with limited training data, and could be used to build valuable tools to support classroom observation and evaluation.
    ANI Master's scholarship (Beca de Maestría ANI)
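    The density-based evaluation can be made concrete with a small sketch: smooth per-frame activity labels into a sliding-window density, then compare estimated and ground-truth densities via a correlation coefficient and a total-time error. The window size, label ids, and synthetic data below are illustrative, not the thesis's setup.

```python
# Sketch of density-based CAD evaluation on synthetic label sequences.
import numpy as np

def label_density(labels: np.ndarray, target: int, win: int = 25) -> np.ndarray:
    """Fraction of frames carrying `target` inside a sliding window."""
    hits = (labels == target).astype(float)
    return np.convolve(hits, np.ones(win) / win, mode="same")

# Toy data with labels 0=teacher, 1=single student, 2=group work.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=500)
est = np.where(rng.random(500) < 0.8, truth, rng.integers(0, 3, size=500))

d_true = label_density(truth, target=0)
d_est = label_density(est, target=0)
corr = np.corrcoef(d_true, d_est)[0, 1]       # main comparison metric
ttt_err = abs(d_est.mean() - d_true.mean())   # error in total teacher-talk share
print(f"correlation={corr:.3f}  TTT error={ttt_err:.3f}")
```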

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.

    Quantitative Techniques in Participatory Forest Management

    Forest management has evolved from a mercantilist view to a multi-functional one that integrates economic, social, and ecological aspects. However, the issue of sustainability is not yet resolved. Quantitative Techniques in Participatory Forest Management brings together global research in three areas of application: inventory of the forest variables that determine the main environmental indices, description and design of new environmental indices, and the application of sustainability indices for regional implementations. All these quantitative techniques create the basis for the development of scientific methodologies of participatory sustainable forest management.

    Anomaly detection in brain imaging

    Modern healthcare systems employ a variety of medical imaging technologies, such as X-ray, MRI, and CT, to improve patient outcomes and time and cost efficiency, and to enable further research. Artificial intelligence and machine learning have shown promise in enhancing medical image analysis systems, leading to a proliferation of research in the field. However, many proposed approaches, such as image classification or segmentation, require large amounts of professional annotations, which are costly and time-consuming to acquire. Anomaly detection is an approach that requires less manual effort and can thus benefit from scaling to datasets of ever-increasing size. In this thesis, we focus on anomaly localisation for pathology detection with models trained on healthy data without dense annotations. We identify two key weaknesses of current image reconstruction-based anomaly detection methods: poor image reconstruction and over-dependence on pixel/voxel intensity for the identification of anomalies. To address these weaknesses, we develop two novel methods: a denoising autoencoder and context-to-local feature matching, respectively. Finally, we apply both methods to in-hospital data in collaboration with NHS Greater Glasgow and Clyde. We discuss the issues of data collection, filtering, processing, and evaluation that arise when applying anomaly detection methods beyond curated datasets. We design and run a clinical evaluation contrasting our proposed methods and revealing difficulties in gauging the performance of anomaly detection systems. Our findings suggest that further research is needed to fully realise the potential of anomaly detection for practical medical imaging applications. Specifically, we suggest investigating anomaly detection methods that can take advantage of more types of supervision (e.g., weak labels) and more context (e.g., prior scans), and that make structured end-to-end predictions (e.g., bounding boxes).
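    A minimal sketch of the reconstruction-based approach the thesis builds on, using a denoising autoencoder: train it to reconstruct healthy images from corrupted inputs, then score test images by per-pixel reconstruction error. The architecture, noise level, and random tensors standing in for scans are illustrative, not the thesis's models or data.

```python
# PyTorch sketch of denoising-autoencoder anomaly localisation.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 2, stride=2))

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
healthy = torch.rand(8, 1, 64, 64)              # stand-in for healthy scans

for _ in range(5):                              # training: denoise corrupted input
    noisy = healthy + 0.2 * torch.randn_like(healthy)
    loss = nn.functional.mse_loss(model(noisy), healthy)
    opt.zero_grad(); loss.backward(); opt.step()

test = torch.rand(1, 1, 64, 64)                 # stand-in for an unseen scan
anomaly_map = (model(test) - test).abs()        # per-pixel anomaly score
```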

    Evolution of A Common Vector Space Approach to Multi-Modal Problems

    A set of methods to address computer vision problems has been developed. Video understanding has been an active area of research in recent years. If one can accurately identify salient objects in a video sequence, these components can be used in information retrieval and scene analysis. This research started with the development of a coarse-to-fine framework to extract salient objects in video sequences. Previous work on image and video frame background modeling ranged from simple and efficient to accurate but computationally complex. It is shown in this research that the novel approach to object extraction is efficient and effective, outperforming existing state-of-the-art methods; its drawback, however, is an inability to deal with non-rigid motion. With the rapid development of artificial neural networks, deep learning approaches are explored as a solution to computer vision problems in general. Focusing on image and text, image (or video frame) understanding can be achieved using a common vector space (CVS). With this concept, modality generation and other relevant applications, such as automatic image description and text paraphrasing, can be explored. Specifically, video sequences can be modeled by Recurrent Neural Networks (RNNs); greater RNN depth leads to smaller error but makes the gradient in the network unstable during training. To overcome this problem, a Batch-Normalized Recurrent Highway Network (BNRHN) was developed and tested on the image captioning (image-to-text) task. In BNRHN, the highway layers incorporate batch normalization, which diminishes the gradient vanishing and exploding problem. In addition, a sentence-to-vector encoding framework suitable for advanced natural language processing is developed. This semantic text embedding makes use of an encoder-decoder model trained on sentence paraphrase pairs (text-to-text). With this scheme, the latent representation of the text is shown to encode sentences with common semantic information into similar vector representations. In addition to image-to-text and text-to-text, an image generation model is developed to generate an image from text (text-to-image) or from another image (image-to-image) based on the semantics of the content. The developed model, referred to as the Multi-Modal Vector Representation (MMVR), builds and encodes different modalities into a common vector space, achieving the goal of preserving semantics and keeping conversion between text and image bidirectional. The concept of the CVS is introduced in this research to deal with multi-modal conversion problems. In theory, this method works not only on text and image but can also be generalized to other modalities, such as video and audio. The characteristics and performance are supported by both theoretical analysis and experimental results. Interestingly, the MMVR model is only one of many possible ways to build a CVS. In the final stages of this research, a simple and straightforward framework for building a CVS, considered an alternative to the MMVR model, is presented.
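    As a toy illustration of the CVS idea, the sketch below projects image and text feature vectors into one shared space and compares them with cosine similarity. The random projection matrices are untrained placeholders for the learned encoders (e.g., MMVR); the dimensions are arbitrary.

```python
# Toy common-vector-space (CVS) sketch with placeholder linear encoders.
import numpy as np

rng = np.random.default_rng(0)
W_img = rng.standard_normal((256, 2048))        # image encoder -> 256-d CVS
W_txt = rng.standard_normal((256, 300))         # text encoder  -> 256-d CVS

def to_cvs(features: np.ndarray, W: np.ndarray) -> np.ndarray:
    z = W @ features
    return z / np.linalg.norm(z)                # unit norm for cosine similarity

img_vec = to_cvs(rng.standard_normal(2048), W_img)
txt_vec = to_cvs(rng.standard_normal(300), W_txt)
print("cross-modal similarity:", float(img_vec @ txt_vec))
```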

    Focused image search in the social Web.

    Recently, social multimedia-sharing websites, which allow users to upload, annotate, and share online photo or video collections, have become increasingly popular. The user tags or annotations constitute the new multimedia metadata. We present an image search system that exploits both image textual and visual information. First, we use focused crawling and DOM-tree-based web data extraction methods to extract image textual features from social networking image collections. Second, we propose the concept of visual words to handle the image's visual content for fast indexing and searching. We also develop several user-friendly search options that allow users to query the index using words and image feature descriptions (visual words). The developed image search system tries to bridge the gap between scalable industrial image search engines, which are based on keyword search, and the slower content-based image retrieval systems developed mostly in academia and designed to search based on image content only. We have implemented a working prototype by crawling and indexing over 16,056 images from flickr.com, one of the most popular image-sharing websites. Our experimental results on the working prototype confirm the efficiency and effectiveness of the proposed methods.
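    A hedged sketch of the visual-words idea described above: cluster local descriptors into a codebook with k-means, then index each image as a normalized histogram of visual-word occurrences. The random descriptors stand in for real local features (e.g., SIFT or ORB), and the codebook size is arbitrary.

```python
# Bag-of-visual-words sketch: k-means codebook plus per-image histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((1000, 32))   # pooled local descriptors, all images
codebook = KMeans(n_clusters=50, n_init=10, random_state=0).fit(descriptors)

def bag_of_visual_words(desc: np.ndarray) -> np.ndarray:
    """Histogram of visual-word ids, L1-normalized for indexing and search."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()

query = bag_of_visual_words(rng.standard_normal((120, 32)))  # one image's signature
```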