585 research outputs found

    Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues

    We explore the task of recognizing people's identities in photo albums in an unconstrained setting. To facilitate this, we introduce the new People In Photo Albums (PIPA) dataset, consisting of over 60,000 instances of 2,000 individuals collected from public Flickr photo albums. With only about half of the person images containing a frontal face, the recognition task is very challenging due to the large variations in pose, clothing, camera viewpoint, image resolution and illumination. We propose the Pose Invariant PErson Recognition (PIPER) method, which accumulates the cues of poselet-level person recognizers trained by deep convolutional networks to discount for the pose variations, combined with a face recognizer and a global recognizer. Experiments on three different settings confirm that in our unconstrained setup PIPER significantly improves on the performance of DeepFace, which is one of the best face recognizers as measured on the LFW dataset.
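    The cue-accumulation idea above can be sketched as a weighted combination of per-cue confidence scores. This is a minimal illustration, not the authors' implementation: the cue names, weights, and score fusion rule are all hypothetical stand-ins for PIPER's learned combination.

    ```python
    # Sketch: fuse identity scores from several part-level recognizers.
    # Cue names and weights are illustrative, not from the paper.

    def fuse_cues(cue_scores, weights):
        """Combine per-cue identity scores into a single score per identity.

        cue_scores: dict cue_name -> dict identity -> confidence in [0, 1]
        weights:    dict cue_name -> importance weight
        Only cues that fired contribute, mirroring how pose-specific
        recognizers only respond when their body part is visible.
        """
        fused = {}
        for cue, scores in cue_scores.items():
            w = weights.get(cue, 0.0)
            for identity, s in scores.items():
                fused[identity] = fused.get(identity, 0.0) + w * s
        return fused

    scores = fuse_cues(
        {"face": {"alice": 0.9, "bob": 0.2},
         "torso": {"alice": 0.6, "bob": 0.7}},
        {"face": 0.7, "torso": 0.3},
    )
    best = max(scores, key=scores.get)
    ```

    The key property this toy version shares with the described method is graceful degradation: when the face cue is absent (no frontal face), the remaining part-level cues still produce a ranking.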

    MediAssist: Using content-based analysis and context to manage personal photo collections

    We present work which organises personal digital photo collections based on contextual information, such as time and location, combined with content-based analysis such as face detection and other feature detectors. The MediAssist demonstration system illustrates the results of our research into digital photo management, showing how a combination of automatically extracted context and content-based information, together with user annotation, facilitates efficient searching of personal photo collections.

    Automatic text searching for personal photos

    This demonstration presents the MediAssist prototype system for organisation of personal digital photo collections based on contextual information, such as time and location of image capture, and content-based analysis, such as face detection and recognition. This metadata is used directly for identification of photos which match specified attributes, and also to create text surrogates for photos, allowing for text-based queries of photo collections without relying on manual annotation. MediAssist illustrates our research into digital photo management, showing how a combination of automatically extracted context and content-based information, together with user annotation and traditional text indexing techniques, facilitates efficient searching of personal photo collections.
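    The text-surrogate idea above can be sketched as rendering extracted metadata into plain text that a standard text index can consume. The field names and vocabulary here are illustrative assumptions, not MediAssist's actual schema.

    ```python
    # Sketch: turn photo capture metadata into a text surrogate for indexing.
    # Field names ("location", "weekday", ...) are hypothetical.

    def text_surrogate(meta):
        """Render photo metadata as searchable text for a standard text index."""
        parts = []
        if "location" in meta:
            parts.append(meta["location"])
        if "weekday" in meta:
            parts.append(meta["weekday"])
        if "daytime" in meta:
            parts.append(meta["daytime"])
        if meta.get("faces", 0) > 0:
            parts.append(f"{meta['faces']} people")
        parts.extend(meta.get("names", []))       # user or system annotations
        return " ".join(parts)

    doc = text_surrogate({"location": "Dublin", "weekday": "Saturday",
                          "daytime": "afternoon", "faces": 2, "names": ["Alice"]})
    ```

    A query such as "Dublin Saturday" then matches this photo through ordinary text retrieval, without any manual captioning.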

    Identifying person re-occurrences for personal photo management applications

    Automatic identification of "who" is present in individual digital images within a photo management system using only content-based analysis is an extremely difficult problem. The authors present a system which enables identification of person re-occurrences within a personal photo management application by combining image content-based analysis tools with context data from image capture. This combined system employs automatic face detection and body-patch matching techniques, which collectively facilitate identifying person re-occurrences within images grouped into events based on context data. The authors introduce a face detection approach combining a histogram-based skin detection model and a modified BDF face detection method to detect multiple frontal faces in colour images. Corresponding body patches are then automatically segmented relative to the size, location and orientation of the detected faces in the image. The authors investigate the suitability of using different colour descriptors, including MPEG-7 colour descriptors, color coherent vectors (CCV) and color correlograms, for effective body-patch matching. The system has been successfully integrated into the MediAssist platform, a prototype Web-based system for personal photo management, and runs on over 13000 personal photos.
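    Body-patch matching of the kind described above can be sketched with a plain quantized RGB histogram and histogram intersection. This is a deliberately minimal stand-in: the actual system evaluated MPEG-7 descriptors, CCVs and correlograms, which capture more structure than a global histogram.

    ```python
    # Sketch: match body patches by color histogram similarity.
    # A toy stand-in for the MPEG-7 / CCV / correlogram descriptors above.

    def rgb_histogram(pixels, bins=4):
        """Quantize (r, g, b) pixels into a normalized bins^3 histogram."""
        hist = [0.0] * (bins ** 3)
        step = 256 // bins
        for r, g, b in pixels:
            idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
            hist[idx] += 1
        total = sum(hist) or 1.0
        return [h / total for h in hist]

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1.0 means identical color distributions."""
        return sum(min(a, b) for a, b in zip(h1, h2))

    patch_a = [(200, 30, 30)] * 10   # mostly red clothing
    patch_b = [(210, 40, 25)] * 10   # similar red clothing, same person?
    patch_c = [(20, 30, 200)] * 10   # blue clothing
    ha, hb, hc = (rgb_histogram(p) for p in (patch_a, patch_b, patch_c))
    sim_ab = histogram_intersection(ha, hb)
    sim_ac = histogram_intersection(ha, hc)
    ```

    Within a single event, clothing rarely changes, which is why even a crude color match between body patches is a useful re-occurrence cue.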

    Context-aware person identification in personal photo collections

    Identifying the people in photos is an important need for users of photo management systems. We present MediAssist, one such system which facilitates browsing, searching and semi-automatic annotation of personal photos, using analysis of both image content and the context in which the photo is captured. This semi-automatic annotation includes annotation of the identity of people in photos. In this paper, we focus on such person annotation, and propose person identification techniques based on a combination of context and content. We propose language modelling and nearest neighbour approaches to context-based person identification, in addition to novel face colour and image colour content-based features (used alongside face recognition and body patch features). We conduct a comprehensive empirical study of these techniques using the real private photo collections of a number of users, and show that combining context- and content-based analysis improves performance over content or context alone.
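    The nearest-neighbour, context-based identification mentioned above can be sketched as follows. The feature design (hour-of-week plus a location identifier) and the distance weights are illustrative assumptions, not the paper's actual features.

    ```python
    # Sketch: nearest-neighbour person suggestion from capture context.
    # The (hour_of_week, location_id) features and penalties are hypothetical.

    def context_distance(a, b):
        """Distance between two (hour_of_week, location_id) contexts.
        Hours wrap around the 168-hour week; a location mismatch adds a
        fixed penalty so same-place photos are preferred."""
        dh = abs(a[0] - b[0])
        dh = min(dh, 168 - dh)
        return dh + (0.0 if a[1] == b[1] else 50.0)

    def suggest_person(query_ctx, annotated):
        """Return the identity attached to the nearest annotated photo.
        annotated: list of ((hour_of_week, location_id), person) pairs."""
        return min(annotated,
                   key=lambda item: context_distance(query_ctx, item[0]))[1]

    history = [((18, "home"), "alice"), ((90, "office"), "bob")]
    result = suggest_person((20, "home"), history)
    ```

    The intuition is the one the abstract relies on: the same people tend to recur at the same places and times, so context alone already ranks candidates before any pixels are examined.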

    Sharing, privacy and trust issues for photo collections

    Digital libraries are quickly being adopted by the masses. Technological developments now allow community groups, clubs, and even ordinary individuals to create their own, publicly accessible collections. However, users may not be fully aware of the potential privacy implications of submitting their documents to a digital library, and may hold misconceptions about the technological support for preserving their privacy. We present results from 18 autoethnographic investigations and 19 observations/interviews into privacy issues that arise when people make their personal photo collections available online. Adams' privacy model is used to discuss the findings according to information receiver, information sensitivity, and information usage. Further issues of trust and of ad hoc, poorly supported protection strategies are presented. Ultimately, while photographic data is potentially highly sensitive, the privacy risks are often hidden and the protection mechanisms are limited.

    Mean shift clustering for personal photo album organization

    In this paper we propose a probabilistic approach for the automatic organization of pictures in personal photo albums. Images are analyzed in terms of faces and low-level visual features of the background. The description of the background is based on an RGB color histogram and on Gabor filter energy accounting for texture information. The face descriptor is obtained by projection of detected and rectified faces onto a common low-dimensional eigenspace. Vectors representing faces and background are clustered in an unsupervised fashion exploiting a mean shift clustering technique. We observed that, given the peculiarity of the domain of personal photo libraries, where most of the pictures contain faces of a relatively small number of different individuals, clusters tend to be not only visually but also semantically significant. Experimental results are reported.
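    The mean shift step above can be sketched with a flat-kernel implementation on toy 2-D points standing in for the face/background descriptors. Bandwidth, iteration count, and the feature vectors themselves are illustrative choices, not the paper's settings.

    ```python
    # Sketch: flat-kernel mean shift on 2-D feature vectors.
    # A toy stand-in for clustering eigenface / background descriptors.

    def mean_shift(points, bandwidth=1.0, iters=30):
        """Shift every point toward the mean of its neighbours; points that
        settle on the same mode belong to one cluster. Returns labels."""
        modes = [list(p) for p in points]
        for _ in range(iters):
            for i, m in enumerate(modes):
                neigh = [p for p in points
                         if sum((a - b) ** 2 for a, b in zip(p, m))
                         <= bandwidth ** 2]
                modes[i] = [sum(c) / len(neigh) for c in zip(*neigh)]
        labels, centers = [], []
        for m in modes:                      # group coincident modes
            for j, c in enumerate(centers):
                if sum((a - b) ** 2 for a, b in zip(m, c)) < 1e-3:
                    labels.append(j)
                    break
            else:
                centers.append(m)
                labels.append(len(centers) - 1)
        return labels

    pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
    labels = mean_shift(pts, bandwidth=1.0)
    ```

    Mean shift suits this domain because, as the abstract notes, the number of distinct individuals is small but unknown in advance, and mean shift does not require the number of clusters as an input.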

    On the Design and Exploitation of User's Personal and Public Information for Semantic Personal Digital Photograph Annotation

    Automating the process of semantic annotation of digital personal photographs is a crucial step towards efficient and effective management of this increasingly high volume of content. However, this is still a highly challenging task for the research community. This paper proposes a novel solution. Our solution integrates all contextual information available to and from the users, such as their daily emails, schedules, chat archives, web browsing histories, documents, online news, Wikipedia data, and so forth. We then analyze this information and extract important semantic terms, using them as semantic keyword suggestions for their photos. Those keywords are in the form of named entities, such as names of people, organizations, locations, and date/time, as well as high-frequency terms. Experiments conducted with 10 subjects and a total of 313 photos showed that our proposed approach can significantly help users with the annotation process: we achieved a 33% reduction in annotation time compared to manual annotation. We also obtained very positive results for the accuracy of our suggested keywords.
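    The high-frequency-term half of the keyword extraction above can be sketched with a simple frequency counter over a user's text streams. This omits the named-entity recognition the paper also uses; the stop-word list and example corpus are purely illustrative.

    ```python
    # Sketch: suggest annotation keywords from a user's text streams by
    # term frequency. A stand-in for the frequency-based part of the
    # extraction; named-entity recognition is not modelled here.

    from collections import Counter

    STOP = {"the", "a", "to", "and", "of", "at", "on", "in", "with", "we"}

    def suggest_keywords(texts, top_n=3):
        """Return the top_n most frequent non-stop-word terms in the texts."""
        counts = Counter(
            word
            for text in texts
            for word in text.lower().split()
            if word.isalpha() and word not in STOP
        )
        return [w for w, _ in counts.most_common(top_n)]

    emails = [
        "Meeting with Alice at Trinity College on Friday",
        "Alice sent the Trinity College photos",
        "Dinner with Bob on Friday",
    ]
    result = suggest_keywords(emails)
    ```

    Terms that recur across a user's emails around a photo's capture time are likely to name the people and places in that photo, which is what makes them useful annotation suggestions.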