591 research outputs found

    On-line Learning of Mutually Orthogonal Subspaces for Face Recognition by Image Sets

    We address the problem of face recognition by matching image sets. Each set of face images is represented by a subspace (or linear manifold) and recognition is carried out by subspace-to-subspace matching. In this paper, 1) a new discriminative method that maximises orthogonality between subspaces is proposed; it improves the discrimination power of subspace-angle-based face recognition by maximising the angles between different classes. 2) We propose a method for on-line updating of the discriminative subspaces as a mechanism for continuously improving recognition accuracy. 3) A further enhancement, called the locally orthogonal subspace method, is presented to maximise the orthogonality between competing classes. Experiments using 700 face image sets show that the proposed method outperforms relevant prior art and that on-line learning effectively boosts its accuracy. The on-line learning method delivers the same solution as the batch computation at far lower computational cost, and the locally orthogonal method exhibits improved accuracy. We also demonstrate the merit of the proposed face recognition method on portal scenarios from the Multiple Biometric Grand Challenge.
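The subspace-to-subspace matching described above rests on principal angles between the linear spans of two image sets. A minimal numpy sketch of that matching step (the paper's discriminative orthogonalisation and on-line update are omitted; array shapes and the basis dimension are illustrative assumptions):

```python
import numpy as np

def orthonormal_basis(images, dim=3):
    # images: (n_samples, n_pixels); columns of U span the image set
    U, _, _ = np.linalg.svd(images.T, full_matrices=False)
    return U[:, :dim]

def principal_angles(Q1, Q2):
    # cosines of the principal angles are the singular values of Q1^T Q2
    s = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
    return np.arccos(s)

rng = np.random.default_rng(0)
set_a = rng.normal(size=(20, 100))                       # 20 "face images" of 100 pixels
set_b = set_a + 0.001 * rng.normal(size=(20, 100))       # slightly perturbed copy of the set
Qa, Qb = orthonormal_basis(set_a), orthonormal_basis(set_b)
print(principal_angles(Qa, Qb))                          # small angles -> similar sets
```

Recognition then reduces to assigning a probe set to the class whose subspace yields the smallest principal angles; the paper's contribution is to learn subspaces that push these angles apart across classes.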

    Assessing similarity of feature selection techniques in high-dimensional domains

    Recent research efforts attempt to combine multiple feature selection techniques instead of using a single one. However, this combination is often made on an "ad hoc" basis, depending on the specific problem at hand, without considering the degree of diversity/similarity of the methods involved. Moreover, though it is recognized that different techniques may return quite dissimilar outputs, especially in high-dimensional/small-sample-size domains, few direct comparisons exist that quantify these differences and their implications for classification performance. This paper aims to provide a contribution in this direction by proposing a general methodology for assessing the similarity between the outputs of different feature selection methods in high-dimensional classification problems. Using the genomics domain as a benchmark, an empirical study has been conducted to compare some of the most popular feature selection methods, and useful insight has been obtained about their pattern of agreement.
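A common way to quantify the agreement between two feature selectors is set overlap between their top-k selected features. A toy sketch under assumed data shapes, using two simple rankers (variance and univariate correlation) as stand-ins for the methods compared in the paper:

```python
import numpy as np

def jaccard(a, b):
    # overlap between two selected-feature index sets, in [0, 1]
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def top_k_by_variance(X, k):
    # rank features by sample variance
    return np.argsort(X.var(axis=0))[::-1][:k]

def top_k_by_corr(X, y, k):
    # rank features by absolute Pearson correlation with the target
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(r)[::-1][:k]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))            # small-sample, high-dimensional regime
y = X[:, 0] + 0.1 * rng.normal(size=50)   # target driven mainly by feature 0
s = jaccard(top_k_by_variance(X, 10), top_k_by_corr(X, y, 10))
print(s)                                  # agreement score between the two selectors
```

In the high-dimensional/small-sample regime the abstract describes, such agreement scores are often strikingly low, which is precisely the phenomenon the proposed methodology sets out to measure.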

    Surface representations for 3D face recognition


    Hashing Neural Video Decomposition with Multiplicative Residuals in Space-Time

    We present a video decomposition method that facilitates layer-based editing of videos with spatiotemporally varying lighting and motion effects. Our neural model decomposes an input video into multiple layered representations, each comprising a 2D texture map, a mask for the original video, and a multiplicative residual characterizing the spatiotemporal variations in lighting conditions. A single edit on the texture maps can be propagated to the corresponding locations across all video frames while preserving the consistency of the remaining content. Our method efficiently learns the layer-based neural representations of a 1080p video in 25s per frame via coordinate hashing and allows real-time rendering of the edited result at 71 fps on a single GPU. Qualitatively, we run our method on various videos to show its effectiveness in generating high-quality editing effects. Quantitatively, we propose to adopt feature-tracking evaluation metrics for objectively assessing the consistency of video editing. Project page: https://lightbulb12294.github.io/hashing-nvd
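The layered representation above composites each frame from per-layer texture maps modulated by masks and multiplicative lighting residuals. A minimal numpy sketch of that compositing step (shapes, normalization, and the function name are our assumptions; the learned neural fields are replaced by plain arrays):

```python
import numpy as np

def composite(textures, masks, residuals):
    # textures:  (L, H, W, 3)    static per-layer texture maps
    # masks:     (L, T, H, W, 1) per-frame layer opacities (sum to 1 over L)
    # residuals: (L, T, H, W, 1) multiplicative lighting residuals
    # broadcast the static textures over time, modulate, and sum over layers
    return (masks * residuals * textures[:, None]).sum(axis=0)

L, T, H, W = 2, 4, 8, 8
rng = np.random.default_rng(2)
textures = rng.uniform(size=(L, H, W, 3))
masks = rng.uniform(size=(L, T, H, W, 1))
masks /= masks.sum(axis=0, keepdims=True)   # normalize opacities over layers
residuals = np.ones((L, T, H, W, 1))        # identity residual: no lighting change
video = composite(textures, masks, residuals)
# editing one texture map changes every frame through this same compositing,
# which is why a single texture edit propagates across the whole video
```

Because each layer's texture is shared by all frames, the per-frame variation lives entirely in the masks and residuals, which is what makes a single texture edit consistent over time.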

    Técnicas embedding para clasificación de imágenes en grandes bancos de datos

    In this work we consider the problem of large-scale image classification using linear embeddings. In an embedding model, representations are generated for both the images (inputs) and the classes or concepts of interest (outputs). By comparing these intermediate representations (images and classes) in a common representation space, problems such as classification and content-based image retrieval can be addressed in a unified manner. Embedding methods are particularly attractive because they project onto spaces of reduced dimensionality, which makes it possible to handle large-scale problems (millions of images, hundreds of thousands of concepts) efficiently. In particular, we analyze the WSABIE algorithm proposed by [Weston et al., 2011b] which, unlike traditional schemes, approaches the learning problem through the optimization of an objective function that considers not only whether a sample was correctly classified, but also where its true label ranked among the k best predictions in an ordered list of possible annotations.
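The rank-sensitive objective described for WSABIE is the WARP (Weighted Approximate-Rank Pairwise) loss: negatives are sampled until one violates the margin, the number of trials gives a rank estimate, and the hinge term is weighted by a function of that rank. A toy numpy sketch under our own simplifications (function names, the single-sample interface, and the trial cap are assumptions):

```python
import numpy as np

def warp_weight(rank):
    # rank-sensitive weight L(k) = sum_{i=1..k} 1/i, so high ranks of the
    # true label (bad placements) are penalized more heavily
    return sum(1.0 / i for i in range(1, rank + 1))

def warp_loss(scores, true_label, rng, margin=1.0, max_trials=100):
    # scores: (num_labels,) model scores for one sample
    n = len(scores)
    for trial in range(1, max_trials + 1):
        neg = rng.integers(n)                 # sample a candidate negative label
        if neg == true_label:
            continue
        if scores[neg] + margin > scores[true_label]:
            # violating negative found: estimate the true label's rank from
            # how many trials the search took, then weight the hinge term
            rank = max(1, (n - 1) // trial)
            return warp_weight(rank) * (margin + scores[neg] - scores[true_label])
    return 0.0  # no violation found: the true label is safely ranked at the top

rng = np.random.default_rng(0)
print(warp_loss(np.array([10.0, 0.0, 0.0, 0.0]), 0, rng))  # well-ranked: zero loss
```

A confidently top-ranked true label needs many trials to find a violator, so it receives a small rank estimate and a small weight; a poorly ranked one is penalized after only a few trials, which is exactly the "where did the true label rank among the top k" behaviour the abstract describes.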