
    Recognizing faces in news photographs on the web

    We propose a graph-based method to recognize faces that appear on the web using a small training set. First, relevant pictures of the desired people are collected by querying the name in a text-based search engine in order to construct the data set. Then, the faces detected in these photographs are represented using SIFT features extracted from facial features. The similarities of the faces are encoded in a graph, which is then used in the random walk with restart algorithm to provide links between faces. These links are then used for recognition via two different methods. © 2009 IEEE
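    The core of the approach is a random walk with restart (RWR) over a face-similarity graph. Below is a minimal sketch of RWR on a precomputed similarity matrix; it illustrates the general technique rather than the authors' implementation, and the matrix `S`, the seed indices and the restart probability are assumptions made for the example.

```python
# Minimal sketch of random walk with restart (RWR) on a face-similarity graph.
# Assumes `S` is a symmetric (n x n) similarity matrix (e.g. from SIFT matches)
# and `seed_idx` indexes faces believed to belong to the queried person.
import numpy as np

def random_walk_with_restart(S, seed_idx, restart_prob=0.15, tol=1e-8, max_iter=1000):
    """Return steady-state relevance scores of every face w.r.t. the seed faces."""
    # Column-normalize the similarity matrix to obtain a transition matrix.
    col_sums = S.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # avoid division by zero for isolated faces
    W = S / col_sums

    # Restart vector: uniform mass on the seed faces.
    r = np.zeros(S.shape[0])
    r[seed_idx] = 1.0 / len(seed_idx)

    p = r.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * (W @ p) + restart_prob * r
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Faces with high scores can then be linked to the queried person,
# e.g. by thresholding or by taking the top-k scores.
```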

    Complex Event Recognition from Images with Few Training Examples

    We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework to discover event concept attributes from the web and use them to extract semantic features from images and classify them into social event categories with few training examples. The discovered concepts include a variety of objects, scenes, actions and event sub-types, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept and we use (pretrained) CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines that use deep CNN features directly in classifying images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example. Comment: Accepted to the IEEE Winter Conference on Applications of Computer Vision (WACV '17).
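    A plausible reading of the pipeline is: retrieve web images per discovered concept, extract pretrained CNN features, train one classifier per concept, and describe each event image by its vector of concept scores. The sketch below illustrates that idea under assumptions of our own (per-concept feature dictionaries, logistic-regression classifiers); the paper's exact features and classifiers may differ.

```python
# Sketch: build a compact concept-score representation from pretrained CNN features.
# `concept_features` maps a concept name to an (n_i, d) array of CNN features of
# web images retrieved for that concept (an assumed input format).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_concept_classifiers(concept_features):
    """Train one binary classifier per discovered event concept."""
    concepts = sorted(concept_features)
    classifiers = {}
    for c in concepts:
        pos = concept_features[c]
        neg = np.vstack([concept_features[o] for o in concepts if o != c])
        X = np.vstack([pos, neg])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
        classifiers[c] = LogisticRegression(max_iter=1000).fit(X, y)
    return concepts, classifiers

def concept_representation(cnn_feature, concepts, classifiers):
    """Map one image's CNN feature to a vector of concept scores."""
    x = cnn_feature.reshape(1, -1)
    return np.array([classifiers[c].predict_proba(x)[0, 1] for c in concepts])

# An event classifier with few training examples (e.g. nearest neighbour or a
# linear model) can then be trained on these concept-score vectors.
```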

    Brand Logos More Prevalent In Recent News Sports Photos

    The exposure is non-intrusive, serving as a backdrop to the sports action occurring at the arena. Because it cannot be separated from the action, this form of communication is difficult to tune out perceptually.

    Can we ID from CCTV? Image quality in digital CCTV and face identification performance

    CCTV is used for an increasing number of purposes, and the new generation of digital systems can be tailored to serve a wide range of security requirements. However, configuration decisions are often made without considering specific task requirements, e.g. the video quality needed for reliable person identification. Our study investigated the relationship between video quality and the ability of untrained viewers to identify faces from digital CCTV images. The task required 80 participants to identify 64 faces belonging to 4 different ethnicities. Participants compared face images taken from high-quality photographs with low-quality CCTV stills, which were recorded at 4 different video quality bit rates (32, 52, 72 and 92 Kbps). We found that the number of correct identifications decreased by 12 (~18%) as MPEG-4 quality decreased from 92 to 32 Kbps, and by 4 (~6%) as Wavelet video quality decreased from 92 to 32 Kbps. To achieve reliable and effective face identification, we recommend that MPEG-4 CCTV systems be used over Wavelet, and that video quality should not be lowered below 52 Kbps during video compression. We discuss the practical implications of these results for security, and contribute a contextual methodology for assessing CCTV video quality.

    Interesting faces: A graph-based approach for finding people in news

    In this study, we propose a method for finding people in large news photograph and video collections. Our method exploits the multi-modal nature of these data sets to recognize people and does not require any supervisory input. It first uses the name of the person to populate an initial set of candidate faces. From this set, which is likely to include the faces of other people, it selects the group of most similar faces corresponding to the queried person in a variety of conditions. Our main contribution is to transform the problem of recognizing the faces of the queried person in a set of candidate faces into the problem of finding the highly connected sub-graph (the densest component) in a graph representing the similarities of faces. We also propose a novel technique for finding the similarities of faces by matching interest points extracted from the faces. The proposed method further allows the classification of new faces without needing to re-build the graph. The experiments are performed on two data sets: thousands of news photographs from Yahoo! News and over 200 news videos from TRECVid 2004. The results show that the proposed method provides significant improvements over text-based methods. © 2009 Elsevier Ltd. All rights reserved.
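    Finding a dense component of a weighted similarity graph is commonly approximated by greedy peeling: repeatedly removing the node with the smallest weighted degree and keeping the densest intermediate subgraph. The sketch below shows that generic technique; it is not necessarily the paper's exact algorithm, and the graph construction and edge weights are assumptions.

```python
# Sketch: greedy peeling (Charikar-style approximation) to extract a dense
# component from a face-similarity graph. Edge weights are assumed to be face
# similarities, e.g. counts of matching interest points.
import networkx as nx

def densest_component(similarity_graph):
    """Return the node set of the densest intermediate subgraph found by peeling."""
    G = similarity_graph.copy()

    def density(H):
        total = sum(d["weight"] for _, _, d in H.edges(data=True))
        return total / H.number_of_nodes() if H.number_of_nodes() else 0.0

    best_nodes, best_density = list(G.nodes), density(G)
    while G.number_of_nodes() > 1:
        # Peel off the face with the smallest weighted degree.
        weakest = min(G.nodes, key=lambda n: G.degree(n, weight="weight"))
        G.remove_node(weakest)
        d = density(G)
        if d > best_density:
            best_density, best_nodes = d, list(G.nodes)
    return best_nodes  # faces most likely to show the queried person
```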

    Level Playing Field for Million Scale Face Recognition

    Face recognition is often perceived as a solved problem; however, when tested at the million scale it exhibits dramatic variation in accuracy across different algorithms. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address these questions, we created a benchmark, MF2, that requires all algorithms to be trained on the same data and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos, created with the goal of leveling the playing field for large-scale face recognition. We contrast our results with findings from the other two large-scale benchmarks, the MegaFace Challenge and MS-Celeb-1M, where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms trained on MF2 were able to achieve state-of-the-art results comparable to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracy, as in MegaFace, identifying the need for larger age variations, possibly within identities, or adjustment of algorithms in future testing.
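    For context, MegaFace-style identification is typically scored as rank-1 accuracy against a gallery padded with up to a million distractor faces. The sketch below is an assumption about that common protocol, not the benchmark's official evaluation code; the feature normalization and label conventions are illustrative.

```python
# Sketch: rank-1 identification accuracy with a large distractor gallery.
# Assumes all feature arrays hold L2-normalized row vectors (so the dot product
# is cosine similarity) and that identity labels are non-negative integers.
import numpy as np

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids, distractor_feats):
    """Fraction of probes whose nearest gallery face shares their identity."""
    gallery = np.vstack([gallery_feats, distractor_feats])
    # Distractors get negative labels that can never match a probe identity.
    labels = np.concatenate([gallery_ids, -np.arange(1, len(distractor_feats) + 1)])
    correct = 0
    for feat, pid in zip(probe_feats, probe_ids):
        scores = gallery @ feat        # similarity of the probe to every gallery face
        correct += int(labels[np.argmax(scores)] == pid)
    return correct / len(probe_feats)
```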

    Cultural Diffusion and Trends in Facebook Photographs

    Online social media is a vehicle through which people share various moments of their lives with their friends, such as playing sports, cooking dinner or just taking a selfie for fun, via visual means, that is, photographs. Our study takes a closer look at the popular visual concepts illustrating various cultural lifestyles in aggregated, de-identified photographs. We perform analysis at both macroscopic and microscopic levels to gain novel insights about global and local visual trends as well as the dynamics of interpersonal cultural exchange and diffusion among Facebook friends. We processed images by automatically classifying the visual content with a convolutional neural network (CNN). Through various statistical tests, we find that socially tied individuals are more likely to post images showing similar cultural lifestyles. To further identify the main cause of the observed social correlation, we use the Shuffle test and the Preference-based Matched Estimation (PME) test to distinguish the effects of influence and homophily. The results indicate that the visual content of each user's photographs is temporally, although not necessarily causally, correlated with the photographs of their friends, which may suggest an effect of influence. Our paper demonstrates that Facebook photographs exhibit diverse cultural lifestyles and preferences and that the social interaction mediated through the visual channel in social media can be an effective mechanism for cultural diffusion. Comment: 10 pages, to appear in ICWSM 2017 (Full Paper).
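    A shuffle test of this kind typically compares a temporal social-correlation statistic computed on the real timeline against its distribution when posting times are randomly permuted among users. The sketch below uses a simple statistic (the mean time gap between friends' first posts of a concept) purely as an illustration; it is not the paper's exact procedure, and all names below are hypothetical.

```python
# Sketch of a shuffle-style test: if friends' adoptions of a visual concept
# cluster in time more tightly than expected under shuffled adoption times,
# the temporal correlation is consistent with influence rather than homophily alone.
import random

def mean_friend_gap(adopt_time, friend_pairs):
    """adopt_time: dict user -> time of first post of the concept (adopters only).
    friend_pairs: list of (u, v) friendships."""
    gaps = [abs(adopt_time[u] - adopt_time[v])
            for u, v in friend_pairs if u in adopt_time and v in adopt_time]
    return sum(gaps) / len(gaps) if gaps else float("inf")

def shuffle_test(adopt_time, friend_pairs, n_shuffles=1000, seed=0):
    rng = random.Random(seed)
    observed = mean_friend_gap(adopt_time, friend_pairs)
    users, times = list(adopt_time), list(adopt_time.values())
    smaller = 0
    for _ in range(n_shuffles):
        rng.shuffle(times)
        if mean_friend_gap(dict(zip(users, times)), friend_pairs) <= observed:
            smaller += 1
    # A small p-value means friends adopt unusually close in time on the real timeline.
    return observed, smaller / n_shuffles
```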