3 research outputs found

    Cross-Age LFW: A Database for Studying Cross-Age Face Recognition in Unconstrained Environments

    The Labeled Faces in the Wild (LFW) database has been widely used as the benchmark for unconstrained face verification, and thanks to big-data-driven machine learning methods, performance on the database now approaches 100%. However, we argue that this accuracy may be too optimistic because of several limiting factors. Besides pose, illumination, occlusion, and expression, cross-age variation is another challenge in face recognition. Images of the same person at different ages produce large intra-class variation, and the aging process is unavoidable in real-world face verification, yet LFW pays little attention to it. We therefore construct Cross-Age LFW (CALFW), which deliberately searches for and selects 3,000 positive face pairs with age gaps in order to add the intra-class variance caused by aging. Negative pairs with the same gender and race are also selected, to reduce the influence of attribute differences between positive and negative pairs and to ensure the task is face verification rather than attribute classification. We evaluate several metric learning and deep learning methods on the new database. Compared to the accuracy on LFW, accuracy drops by about 10%-17% on CALFW.
    Comment: 10 pages, 9 figures
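    To make the verification protocol concrete, the following is a minimal sketch of pair-based face verification accuracy as it is commonly measured on LFW-style benchmarks: the embeddings of the two faces in each pair are compared by cosine similarity and a decision threshold is swept. This is an illustration only, not the CALFW evaluation code; the embedding dimension, the threshold sweep, and reporting a single best threshold (rather than LFW's 10-fold protocol) are assumptions.

        import numpy as np

        def verification_accuracy(emb1, emb2, same, thresholds=np.linspace(-1.0, 1.0, 401)):
            # Cosine similarity between the two embeddings of each pair, then the
            # best accuracy over a sweep of decision thresholds.
            e1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
            e2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
            sims = np.sum(e1 * e2, axis=1)
            return max(np.mean((sims > t) == same) for t in thresholds)

        # Toy usage: random 128-D embeddings standing in for a face model's output
        # on 3,000 genuine and impostor pairs.
        rng = np.random.default_rng(0)
        emb1 = rng.normal(size=(3000, 128))
        emb2 = rng.normal(size=(3000, 128))
        same = rng.integers(0, 2, size=3000).astype(bool)
        print(verification_accuracy(emb1, emb2, same))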

    A Fast and Accurate Unconstrained Face Detector

    We propose a method that addresses the challenges of unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. The NPD feature is computed as the difference-to-sum ratio of two pixel values, inspired by the Weber fraction in experimental psychology. The new feature is scale invariant, bounded, and able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that NPD features can be efficiently obtained from a lookup table and that the detection template can easily be scaled, making the proposed face detector very fast. Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes.
    Comment: This paper has been accepted by TPAMI. The source code is available on the project page http://www.cbsr.ia.ac.cn/users/scliao/projects/npdface/index.htm
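    The NPD feature itself is simple enough to sketch. Below is a minimal Python illustration of the difference-to-sum ratio f(x, y) = (x - y) / (x + y) and of precomputing it in a 256x256 lookup table, as the abstract describes; the function names, the patch size, and the choice of pixel pairs are illustrative assumptions, and the deep quadratic tree and soft-cascade stages are not shown.

        import numpy as np

        def build_npd_table(levels=256):
            # Precompute f(x, y) = (x - y) / (x + y) for all pairs of 8-bit pixel
            # values; f(0, 0) is defined as 0 so the table contains no NaNs.
            v = np.arange(levels, dtype=np.float64)
            with np.errstate(divide="ignore", invalid="ignore"):
                table = (v[:, None] - v[None, :]) / (v[:, None] + v[None, :])
            table[0, 0] = 0.0
            return table  # every entry lies in [-1, 1]

        def npd_features(patch, pairs, table):
            # Evaluate NPD features for a flattened grayscale patch.
            # `pairs` holds (i, j) pixel-index pairs to compare.
            flat = patch.ravel()
            return table[flat[pairs[:, 0]], flat[pairs[:, 1]]]

        # Toy usage: all pairwise NPD features of a random 4x4 patch.
        rng = np.random.default_rng(0)
        patch = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
        idx = np.arange(patch.size)
        pairs = np.array([(a, b) for a in idx for b in idx if a < b])
        feats = npd_features(patch, pairs, build_npd_table())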

    Scalable Kernel K-Means Clustering with Nystrom Approximation: Relative-Error Bounds

    Kernel k-means clustering can correctly identify and extract a far more varied collection of cluster structures than the linear k-means clustering algorithm. However, kernel k-means clustering is computationally expensive when the non-linear feature map is high-dimensional and there are many input points. Kernel approximation, e.g., the Nyström method, has been applied in previous works to approximately solve kernel learning problems when both of the above conditions are present. This work analyzes the application of this paradigm to kernel k-means clustering and shows that applying the linear k-means clustering algorithm to (k/ε)(1 + o(1)) features constructed using a so-called rank-restricted Nyström approximation results in cluster assignments that satisfy a 1 + ε approximation ratio in terms of the kernel k-means cost function, relative to the guarantee provided by the same algorithm without the use of the Nyström method. As part of the analysis, this work establishes a novel 1 + ε relative-error trace norm guarantee for low-rank approximation using the rank-restricted Nyström approximation. Empirical evaluations on the 8.1 million instance MNIST8M dataset demonstrate the scalability and usefulness of kernel k-means clustering with Nyström approximation. This work argues that spectral clustering using Nyström approximation, a popular and computationally efficient but theoretically unsound approach to non-linear clustering, should be replaced with the efficient and theoretically sound combination of kernel k-means clustering with Nyström approximation; the superior performance of the latter approach is verified empirically.
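    The pipeline the abstract analyzes, Nyström features followed by linear k-means, can be sketched with standard scikit-learn components. Note that scikit-learn's Nystroem transformer implements the ordinary (not rank-restricted) Nyström method, so this is only an approximation of the construction studied in the paper; the RBF kernel, gamma, the placeholder data, and the choice of roughly k/ε components are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.kernel_approximation import Nystroem

        rng = np.random.default_rng(0)
        X = rng.normal(size=(10_000, 20))   # placeholder data; the paper evaluates on the 8.1M-instance MNIST8M

        k = 10                               # number of clusters
        eps = 0.5                            # target approximation parameter epsilon
        s = int(np.ceil(k / eps))            # on the order of k/epsilon Nystroem features

        # Ordinary Nystroem feature map (the paper's analysis uses a rank-restricted variant).
        Z = Nystroem(kernel="rbf", gamma=0.1, n_components=s, random_state=0).fit_transform(X)

        # Linear k-means on the Nystroem features stands in for kernel k-means on X.
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)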