
    Familiarization through Ambient Images Alone

    The term “ambient images” has begun to appear in much of the current literature on facial recognition. Ambient images are naturally occurring views of a face that capture the idiosyncratic ways in which a target face may vary (Ritchie & Burton, 2017). Much of the literature on ambient images has concluded that exposing people to ambient images of a target face can improve facial recognition for that face. Some studies have even suggested that familiarity is the result of increased exposure to ambient images of a target face (Burton, Kramer, Ritchie, & Jenkins, 2016). The current study extended the literature on ambient images. Using the face sorting paradigm from Jenkins, White, Van Montfort, and Burton (2011), the current study served three purposes. First, it examined whether there was an incremental benefit to showing ambient images; specifically, we observed whether performance improved as participants were shown a low, medium, or high number of ambient images. Second, it attempted to provide a manipulation strong enough that participants would be able to perform the face sorting task perfectly after being exposed to a high number (45 total) of ambient images. Lastly, it introduced response-time data as a measure of face familiarity. The results supported one aim of the study and partially supported another. Time data were found to be an effective quantitative measure of familiarity. There was also some evidence of an incremental benefit of ambient images, but that benefit disappeared after viewing around 15 unique exemplar presentations of a novel identity’s face. Exposing participants to 45 ambient images alone, however, did not lead them to reach perfect performance. The paper concludes with a discussion of the need to move beyond ambient images to understand how best to mimic natural familiarity in a lab setting.

    Learning scale-variant and scale-invariant features for deep image classification

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. Variation in image resolutions, in the sizes of the objects and patterns depicted, and in image scales hampers CNN training and performance, because the task-relevant information varies over spatial scales. Previous work attempting to deal with such scale variations focused on encouraging scale-invariant CNN representations. However, scale-invariant representations are incomplete representations of images, because images contain scale-variant information as well. This paper addresses the combined development of scale-invariant and scale-variant representations. We propose a multi-scale CNN method to encourage the recognition of both types of features and evaluate it on a challenging image classification task involving task-relevant characteristics at multiple scales. The results show that our multi-scale CNN outperforms a single-scale CNN. This leads to the conclusion that encouraging the combined development of a scale-invariant and scale-variant representation in CNNs is beneficial to image recognition performance.
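    The abstract omits architectural details, but the core idea — applying the same shared filter at several image scales so that both scale-invariant and scale-variant responses are available to the classifier — can be illustrated with a toy NumPy sketch (the average-pool downsampling, global max pooling, and scale factors here are illustrative assumptions, not the paper's design):

```python
import numpy as np

def downsample(img, factor):
    """Average-pool downsampling by an integer factor (assumes the
    image dimensions are divisible by the factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation -- stands in for a shared
    convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def multiscale_features(img, kernel, scales=(1, 2, 4)):
    """Apply the SAME kernel at several image scales and pool each
    response map; the concatenated vector carries scale-variant
    information (the responses differ across scales) while the shared
    filter provides the scale-invariant component."""
    feats = []
    for s in scales:
        resp = conv2d(downsample(img, s), kernel)
        feats.append(resp.max())  # global max pooling per scale
    return np.array(feats)
```

    A real multi-scale CNN would learn the kernels and feed the per-scale feature maps into further layers; the sketch only shows why the same filter yields different, complementary responses at different scales.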

    Sparse Radial Sampling LBP for Writer Identification

    In this paper we present the use of Sparse Radial Sampling Local Binary Patterns, a variant of Local Binary Patterns (LBP), for text-as-texture classification. By adapting and extending the standard LBP operator to the particularities of text, we obtain a generic text-as-texture classification scheme and apply it to writer identification. In experiments on the CVL and ICDAR 2013 datasets, the proposed feature set demonstrates State-Of-the-Art (SOA) performance. Among the SOA, the proposed method is the only one based on dense extraction of a single local feature descriptor. This makes it fast and applicable at the earliest stages in a DIA pipeline without the need for segmentation, binarization, or extraction of multiple features.
    Comment: Submitted to the 13th International Conference on Document Analysis and Recognition (ICDAR 2015).
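    As background for the variant described above, the standard LBP operator it extends compares each pixel with its eight neighbours and histograms the resulting binary codes as a texture descriptor. A minimal NumPy sketch of that classic operator (not the paper's Sparse Radial Sampling variant) might look like:

```python
import numpy as np

def lbp_codes(img):
    """Standard 8-neighbour LBP codes for the interior pixels of a
    2-D grayscale array: each neighbour >= centre contributes one bit.
    (This is the classic operator, NOT the Sparse Radial Sampling
    variant proposed in the paper.)"""
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.int32) << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes -- the texture feature vector
    that a classifier (e.g. for writer identification) would consume."""
    codes = lbp_codes(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

    Treating handwriting as texture then amounts to computing such a descriptor densely over a document image and comparing the resulting histograms across writers.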

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must also derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems — providing high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components — image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures.
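    For the non-blind, spatially invariant case the review distinguishes, the classic frequency-domain Wiener-style deconvolution is a minimal baseline. The sketch below is a generic illustration, not any specific method from the review; it assumes the blur kernel is known, the blur is circular (periodic boundaries), and a small constant `k` regularises the ill-posed inverse:

```python
import numpy as np

def wiener_deblur(blurred, kernel, k=0.01):
    """Wiener-style frequency-domain deconvolution for non-blind,
    spatially invariant deblurring. Assumes circular convolution;
    `k` keeps the inverse well-posed where the kernel's frequency
    response is near zero."""
    H = np.fft.fft2(kernel, s=blurred.shape)       # kernel spectrum
    B = np.fft.fft2(blurred)                       # blurred spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + k)          # regularised inverse
    return np.real(np.fft.ifft2(W * B))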