
    A survey of comics research in computer science

    Full text link
    Graphic novels such as comics and manga are well known all over the world. The digital transition has started to change the way people read comics: more and more on smartphones and tablets, and less and less on paper. In recent years, a wide variety of research about comics has been proposed that may change the way comics are created, distributed and read in the coming years. Early work focused on low-level document image analysis, since comic books are complex documents that contain text, drawings, balloons, panels, onomatopoeia, etc. Different fields of computer science, such as multimedia, artificial intelligence and human-computer interaction, have covered research about user interaction and content generation, each with a different set of values. In this paper we review previous research about comics in computer science, state what has been done, and give some insights about the main outlooks.
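
    As an illustration of the low-level document image analysis mentioned above, the sketch below locates candidate panels on a scanned comic page by finding large external contours with OpenCV. This is a minimal, generic example assuming a page image `page.png`; it is not a method taken from the survey itself.

    ```python
    # Minimal sketch: candidate panel detection on a scanned comic page.
    # Assumes OpenCV (cv2) and a local image "page.png"; illustrative only.
    import cv2

    def detect_panels(path, min_area_ratio=0.01):
        page = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Invert so dark line art becomes foreground for contour detection.
        _, binary = cv2.threshold(page, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        page_area = page.shape[0] * page.shape[1]
        panels = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h >= min_area_ratio * page_area:  # keep only large regions
                panels.append((x, y, w, h))
        # Rough reading order: top-to-bottom, then left-to-right.
        return sorted(panels, key=lambda r: (r[1], r[0]))

    if __name__ == "__main__":
        for box in detect_panels("page.png"):
            print(box)
    ```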

    Processing Color in Astronomical Imagery

    Get PDF
    Every year, hundreds of images from telescopes on the ground and in space are released to the public, making their way into popular culture through everything from computer screens to postage stamps. These images span the entire electromagnetic spectrum from radio waves to infrared light to X-rays and gamma rays, most of which is undetectable to the human eye without technology. Once these data are collected, one or more specialists must process the data to create an image. Therefore, the creation of astronomical imagery involves a series of choices. How do these choices affect the comprehension of the science behind the images? What is the best way to represent data to a non-expert? Should these choices be based on aesthetics, scientific veracity, or is it possible to satisfy both? This paper reviews just one choice out of the many made by astronomical image processors: color. The choice of color is one of the most fundamental when creating an image taken with modern telescopes. We briefly explore the concept of the image as translation, particularly in the case of astronomical images from invisible portions of the electromagnetic spectrum. After placing modern astronomical imagery, and photography in general, in the context of their historical beginnings, we review the standards (or lack thereof) in making the basic choice of color. We discuss the possible implications of selecting one color palette over another in the context of the appropriateness of using these images as science communication products, with a specific focus on how the non-expert perceives these images and how that affects their trust in science. Finally, we share new data sets that begin to look at these issues in scholarly research and discuss the need for a more robust examination of this and other related topics in the future to better understand the implications for science communication.
    Comment: 10 pages, 6 figures, published in Studies in Media and Communication
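
    The core choice discussed above is how a single-band exposure, recorded outside the visible spectrum, is mapped to color for the viewer. The sketch below renders the same frame with a grayscale and a false-color palette using matplotlib colormaps. The data are synthetic; a real frame would instead be loaded from a FITS file (e.g. with astropy), and the log stretch is just one common way to compress the dynamic range.

    ```python
    # Minimal sketch: one synthetic "telescope" frame shown with two palettes.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # Fake single-band frame: diffuse Poisson background plus a bright source.
    frame = rng.poisson(2.0, size=(256, 256)).astype(float)
    yy, xx = np.mgrid[0:256, 0:256]
    frame += 200.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 8.0 ** 2))

    # Log stretch to compress the dynamic range before display.
    stretched = np.log1p(frame)

    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    for ax, cmap in zip(axes, ["gray", "inferno"]):
        im = ax.imshow(stretched, cmap=cmap, origin="lower")
        ax.set_title(f"cmap = {cmap}")
        fig.colorbar(im, ax=ax, shrink=0.8)
    plt.tight_layout()
    plt.show()
    ```

    Whether the "inferno" rendering reads as more or less trustworthy than the grayscale one is exactly the kind of perception question the paper raises.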

    Fourteenth Biennial Status Report: March 2017 - February 2019

    No full text

    Scraping social media photos posted in Kenya and elsewhere to detect and analyze food types

    Full text link
    Monitoring population-level changes in diet could be useful for education and for implementing interventions to improve health. Research has shown that data from social media sources can be used for monitoring dietary behavior. We propose a scrape-by-location methodology to create food image datasets from Instagram posts. We used it to collect 3.56 million images over a period of 20 days in March 2019. We also propose a scrape-by-keywords methodology and used it to scrape ~30,000 images and their captions covering 38 Kenyan food types. We publish two datasets of 104,000 and 8,174 image/caption pairs, respectively. With the first dataset, Kenya104K, we train a Kenyan Food Classifier, called KenyanFC, to distinguish Kenyan food from non-food images posted in Kenya. We used the second dataset, KenyanFood13, to train a classifier KenyanFTR, short for Kenyan Food Type Recognizer, to recognize 13 popular food types in Kenya. KenyanFTR is a multimodal deep neural network that can identify 13 types of Kenyan foods using both images and their corresponding captions. Experiments show that the average top-1 accuracy of KenyanFC is 99% over 10,400 tested Instagram images and of KenyanFTR is 81% over 8,174 tested data points. Ablation studies show that three of the 13 food types are particularly difficult to categorize based on image content only, and that adding analysis of captions to the image analysis yields a classifier that is 9 percentage points more accurate than a classifier that relies only on images. Our food trend analysis revealed that cakes and roasted meats were the most popular foods in photographs on Instagram in Kenya in March 2019.
    Accepted manuscript
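
    To make the multimodal idea behind KenyanFTR concrete, the sketch below combines an image CNN backbone with a simple caption encoder and a fused classification head in PyTorch. The dimensions, the ResNet-18 backbone, and the mean-pooled embedding text branch are assumptions for illustration, not the authors' exact architecture.

    ```python
    # Minimal sketch of an image+caption food-type classifier (13 classes).
    # Illustrative architecture only; not the published KenyanFTR model.
    import torch
    import torch.nn as nn
    from torchvision import models

    class ImageCaptionClassifier(nn.Module):
        def __init__(self, vocab_size, num_classes=13, text_dim=128):
            super().__init__()
            # Image branch: ResNet-18 with its classifier head removed.
            backbone = models.resnet18(weights=None)
            img_dim = backbone.fc.in_features          # 512 for ResNet-18
            backbone.fc = nn.Identity()
            self.image_encoder = backbone
            # Text branch: token embeddings mean-pooled over the caption.
            self.embed = nn.Embedding(vocab_size, text_dim, padding_idx=0)
            # Fusion head over concatenated image and text features.
            self.head = nn.Sequential(
                nn.Linear(img_dim + text_dim, 256),
                nn.ReLU(),
                nn.Linear(256, num_classes),
            )

        def forward(self, images, token_ids):
            img_feat = self.image_encoder(images)          # (B, 512)
            txt_feat = self.embed(token_ids).mean(dim=1)   # (B, text_dim)
            return self.head(torch.cat([img_feat, txt_feat], dim=1))

    # Example forward pass with dummy tensors (batch of 4).
    model = ImageCaptionClassifier(vocab_size=5000)
    logits = model(torch.randn(4, 3, 224, 224), torch.randint(1, 5000, (4, 20)))
    print(logits.shape)  # torch.Size([4, 13])
    ```

    Dropping the text branch from such a model mirrors the ablation described above, where captions contribute several percentage points of accuracy for food types that are hard to distinguish from images alone.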