3 research outputs found

    On-Device Information Extraction from Screenshots in form of tags

    We propose a method to make mobile screenshots easily searchable. In this paper, we present a workflow in which we: 1) preprocessed a collection of screenshots, 2) identified the script present in each image, 3) extracted unstructured text from the images, 4) identified the language of the extracted text, 5) extracted keywords from the text, 6) identified tags based on image features, 7) expanded the tag set by identifying related keywords, and 8) ranked the tags, associated them with the relevant images, and indexed them to make the screenshots searchable on device. The pipeline supports multiple languages and executes entirely on-device, which addresses privacy concerns. We developed novel architectures for components in the pipeline and optimized performance and memory for on-device computation. Our experiments show that the solution reduces overall user effort and improves the end-user experience while searching.
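    The abstract describes the eight steps but no code. As a rough, non-authoritative sketch of how such a pipeline might be wired together, the following uses Tesseract (via pytesseract) for OCR and langdetect for language identification; the keyword extractor and the tag/index record are simple stand-ins, not the authors' models.

```python
# Minimal sketch of a screenshot-tagging pipeline, assuming:
#   pip install pillow pytesseract langdetect  (plus a Tesseract install).
# The helpers here are illustrative stand-ins for the paper's components.
from collections import Counter
from dataclasses import dataclass, field

from PIL import Image
import pytesseract
from langdetect import detect

@dataclass
class IndexedScreenshot:
    path: str
    language: str = "unknown"
    text: str = ""
    tags: list = field(default_factory=list)

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def extract_keywords(text: str, k: int = 10) -> list:
    """Toy keyword extractor: top-k frequent non-stopword tokens
    (the paper uses dedicated keyword and tag-expansion models)."""
    tokens = [t.lower().strip(".,!?") for t in text.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

def index_screenshot(path: str) -> IndexedScreenshot:
    img = Image.open(path)                   # steps 1-3: load and OCR the image
    text = pytesseract.image_to_string(img)  # (script handling is left to the OCR engine here)
    lang = detect(text) if text.strip() else "unknown"  # step 4: language ID
    tags = extract_keywords(text)            # steps 5-7: keywords -> tag set
    return IndexedScreenshot(path=path, language=lang, text=text, tags=tags)  # step 8: index record

if __name__ == "__main__":
    record = index_screenshot("screenshot.png")
    print(record.language, record.tags)
```

    In an on-device deployment the OCR, language-ID, and tagging models would be replaced by the compact architectures the paper optimizes for memory and latency; the control flow, however, follows the same eight-step order.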

    A Robust Algorithm for Emoji Detection in Smartphone Screenshot Images

    The increasing use of smartphones and social media apps for communication results in a massive number of screenshot images. These images enrich written language through text and emojis. Several studies in the image analysis field have considered the text in such images; however, they ignored the use of emojis. In this study, a robust two-stage algorithm for detecting emojis in screenshot images is proposed. The first stage localizes the regions of candidate emojis by using the proposed RGB-channel analysis method, followed by a connected component method with a set of proposed rules. In the second, verification stage, emojis and non-emojis are classified by using the proposed features with a decision tree classifier. Experiments were conducted to evaluate each stage independently and to assess the overall performance of the proposed algorithm on a self-collected dataset. The results showed that the proposed RGB-channel analysis method achieved better performance than the Niblack and Sauvola methods. Moreover, the proposed feature extraction method with a decision tree classifier performed better than LBP feature extraction combined with Bayesian network, perceptron neural network, and decision table classifiers. Overall, the proposed algorithm exhibited high efficiency in detecting emojis in screenshot images.
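    The paper's exact RGB-channel analysis and feature set are not reproduced in the abstract. As a hedged sketch of the two-stage structure only (localize colorful candidate regions via connected components, then verify with a decision tree), one might write something like the following with OpenCV and scikit-learn; the channel-spread heuristic, plausibility rules, and features below are simple stand-ins of my own, not the authors' method.

```python
# Sketch of a two-stage emoji detector: candidate localization + verification.
# Assumes: pip install opencv-python numpy scikit-learn
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def candidate_regions(bgr: np.ndarray, min_area: int = 100):
    """Stage 1 (stand-in): flag pixels with a large spread across the
    B/G/R channels (emojis tend to be colorful against grayscale text),
    then group them into connected components as candidate boxes."""
    spread = bgr.max(axis=2).astype(np.int16) - bgr.min(axis=2).astype(np.int16)
    mask = (spread > 40).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area and 0.5 <= w / h <= 2.0:  # crude plausibility rules
            boxes.append((x, y, w, h))
    return boxes

def region_features(bgr: np.ndarray, box) -> list:
    """Stage 2 features (illustrative): shape plus per-channel color stats."""
    x, y, w, h = box
    patch = bgr[y:y + h, x:x + w]
    return [w / h, w * h, *patch.mean(axis=(0, 1)), *patch.std(axis=(0, 1))]

if __name__ == "__main__":
    img = cv2.imread("chat_screenshot.png")
    if img is not None:
        boxes = candidate_regions(img)
        feats = [region_features(img, b) for b in boxes]
        # Verification stage: a decision tree trained on labeled
        # emoji/non-emoji regions from an annotated dataset, e.g.:
        #   clf = DecisionTreeClassifier(max_depth=6)
        #   clf.fit(X_train, y_train)
        #   keep = clf.predict(feats)
        print(f"{len(boxes)} candidate emoji regions")
```

    The design mirrors the paper's split: a cheap, high-recall localizer proposes regions, and a lightweight classifier filters false positives, so the expensive step runs only on a handful of candidates per screenshot.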

    Screenomics: a new approach for observing and studying individuals' digital lives

    This study describes when and how adolescents engage with their fast-moving and dynamic digital environment as they go about their daily lives. We illustrate a new approach—screenomics—for capturing, visualizing, and analyzing screenomes, the record of individuals’ day-to-day digital experiences. The sample includes over 500,000 smartphone screenshots provided by four Latino/Hispanic youth, age 14 to 15 years, from low-income, racial/ethnic minority neighborhoods. Screenomes, collected from smartphones for 1 to 3 months as sequences of screenshots captured every 5 seconds that the device was active, are analyzed using computational machinery for processing images and text, machine learning algorithms, human labeling, and qualitative inquiry. Adolescents’ digital lives differ substantially across persons, days, hours, and minutes. Screenomes highlight the extent of switching among multiple applications, and how each adolescent is exposed to different content at different times for different durations—with apps, food-related content, and sentiment as illustrative examples. We propose that the screenome provides the fine granularity of data needed to study individuals’ digital lives, to test existing theories about media use, and to generate new theory about the interplay between digital media and development.
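    The study's smartphone collection software is not described in code. As a desktop analogue of the capture protocol only (one timestamped screenshot every 5 seconds of activity, accumulated for later sequencing and analysis), a minimal loop using the mss screen-capture library might look like this; the output directory and filename scheme are my own assumptions.

```python
# Desktop sketch of a screenome capture loop; assumes: pip install mss
import time
from datetime import datetime, timezone
from pathlib import Path

from mss import mss

OUT = Path("screenome")
OUT.mkdir(exist_ok=True)

with mss() as grabber:
    while True:
        # Timestamped filenames preserve the ordering needed to
        # reconstruct app-switching sequences later.
        ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        grabber.shot(mon=1, output=str(OUT / f"{ts}.png"))  # primary display
        time.sleep(5)  # the study sampled every 5 seconds of active use
```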