29 research outputs found

    Facial identity across the lifespan

    Get PDF
    We can recognise people we know across their lifespan. We see family members age, and we can recognise celebrities across long careers. How is this possible, despite the very large facial changes that occur as people get older? Here we analyse the statistical properties of faces as they age, sampling photos of the same people from their 20s to their 70s. Across a number of simulations, we observe that individuals’ faces retain some idiosyncratic physical properties across the adult lifespan that can be used to support moderate levels of age-independent recognition. However, models based exclusively on image similarity achieve only limited success in recognising faces across age. In contrast, more robust recognition is achieved with the introduction of a minimal top-down familiarisation procedure. Such models can incorporate the within-person variability associated with a particular individual to show a surprisingly high level of generalisation, even across the lifespan. The analysis of this variability reveals a powerful statistical tool for understanding recognition, and demonstrates how visual representations may support operations typically thought to require conceptual properties.
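    A minimal sketch of the two kinds of model contrasted above may help: a matcher that relies purely on image similarity versus one given a minimal familiarisation step that pools several photos of each identity. The vector representation, cosine metric, noise levels and function names below are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (not the authors' code) contrasting a pure image-similarity matcher
# with a "familiarised" matcher that averages several photos of each known identity.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_by_image_similarity(probe, gallery):
    """The single most similar gallery image wins; no identity-level model."""
    return max(gallery, key=lambda item: cosine(probe, item[1]))[0]

def familiarise(gallery):
    """One template per identity: the mean of its available photos, folding
    within-person variability (age, lighting, expression) into the template."""
    grouped = {}
    for label, vec in gallery:
        grouped.setdefault(label, []).append(vec)
    return {label: np.mean(vecs, axis=0) for label, vecs in grouped.items()}

def match_by_familiarity(probe, templates):
    return max(templates, key=lambda label: cosine(probe, templates[label]))

# Toy usage: four "photos" per person at different ages, one unseen probe photo.
rng = np.random.default_rng(0)
identities = {f"person_{i}": rng.normal(size=1024) for i in range(5)}
gallery = [(name, base + rng.normal(scale=0.8, size=1024))
           for name, base in identities.items() for _ in range(4)]
probe = identities["person_0"] + rng.normal(scale=0.8, size=1024)
print(match_by_image_similarity(probe, gallery))
print(match_by_familiarity(probe, familiarise(gallery)))
```

    Averaging over a person's photos captures some of their within-person variability, which is why a familiarised matcher of this kind tends to generalise better to photos taken at unseen ages.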

    Estimación eficiente de atributos demográficos del rostro humano en imágenes

    Full text link
    The human face undoubtedly provides much more information than we think. Without our consent, the face conveys nonverbal cues, arising from facial interactions, that reveal our emotional state, cognitive activity, personality and disease. Recent studies [OFT14, TODMS15] show that many of our social and interpersonal decisions derive from a prior analysis of the face that lets us judge whether a person is trustworthy, hardworking, intelligent, etc. This error-prone interpretation stems from the innate human ability to find and interpret these signals, and it has motivated research into methods that automatically estimate such signals, or attributes, associated with the face. Interest in facial attribute estimation has grown rapidly in recent years owing to the many applications in which these methods can be used: targeted marketing, security systems, human-computer interaction, etc. However, current methods are far from being perfect and robust across problem domains. The main difficulty is the high intra-class variability caused by changes in imaging conditions (lighting, occlusions, facial expressions, age, gender, ethnicity, etc.) that are frequently found in images acquired in uncontrolled environments. This research studies image analysis techniques to estimate facial attributes such as gender, age and pose, using linear methods and exploiting the statistical dependencies between these attributes. In addition, our proposal focuses on building estimators that offer a strong balance between performance and computational cost. With respect to the latter point, we study a set of strategies for gender classification and compare them with a proposal based on a Bayesian classifier and a suitable feature extraction based on Linear Discriminant Analysis. We analyse in depth why linear techniques have failed to provide competitive results to date and show how to obtain performance similar to the best non-linear techniques. A second algorithm is proposed for age estimation, based on a K-NN regressor and a feature selection analogous to the one proposed for gender classification. Our experiments show that classifier performance drops significantly when classifiers are trained and tested on different databases. We find that one cause is the existence of dependencies between facial attributes that are not considered in the construction of the classifiers. Our results demonstrate that intra-class variability can be reduced by modelling the statistical dependencies between the attributes gender, age and pose, improving the performance of our facial attribute classifiers at a small computational cost.
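    As a rough illustration of the two estimators described above, the sketch below pairs an LDA-style linear projection with a simple Gaussian (Bayesian) classifier for gender, and uses a K-NN regressor for age. The scikit-learn components and toy data are stand-ins; the thesis' exact features and its modelling of dependencies between attributes are not reproduced here.

```python
# Hedged sketch: LDA feature extraction + Gaussian Bayesian classifier for gender,
# and a K-NN regressor for age. Data and shapes are toy assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 256))          # face descriptors (e.g. flattened crops)
gender = rng.integers(0, 2, size=200)    # 0 = female, 1 = male (toy labels)
age = rng.uniform(18, 75, size=200)      # ages in years (toy labels)

# Gender: project onto the discriminant direction, then classify with a Gaussian model.
gender_clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), GaussianNB())
gender_clf.fit(X, gender)

# Age: K-NN regression over the same descriptors.
age_reg = KNeighborsRegressor(n_neighbors=5).fit(X, age)

probe = rng.normal(size=(1, 256))
print(gender_clf.predict(probe), age_reg.predict(probe))
```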

    Psychometric properties of Addenbrooke's Cognitive Examination III (ACE-III): An item response theory approach

    No full text
    The Addenbrooke's Cognitive Examination III is one of the most widely used tests for assessing cognitive impairment. Although previous studies have shown adequate levels of diagnostic utility for detecting severe impairment, the test has not shown sufficient sensitivity to detect mild decline. The aim of this study was to evaluate the psychometric properties of the Addenbrooke's Cognitive Examination III in a large sample of elderly people through Item Response Theory, given the lack of studies using this approach. A cross-sectional study was conducted with 1164 people aged 60 and over, of whom 63 had a prior diagnosis of Alzheimer's dementia. The results showed that, globally, the Addenbrooke's Cognitive Examination III possesses adequate psychometric properties. Furthermore, the test information function shows that the subscales have different sensitivity at different levels of impairment. These results can contribute to determining patterns of cognitive deterioration for the adequate detection of different levels of dementia. An optimized version is suggested, which may be an economical alternative in the applied field.
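    For readers unfamiliar with the test information function mentioned above, the equations below show one common IRT formulation, the two-parameter logistic model; the abstract does not state which model was actually fitted, so this is illustrative only.

```latex
% Illustrative only: a 2PL formulation; the abstract does not name the fitted IRT model.
% Item i has discrimination a_i and difficulty b_i; theta is the latent cognitive level.
\[
  P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
  I_i(\theta) = a_i^2 \, P_i(\theta)\bigl(1 - P_i(\theta)\bigr)
\]
\[
  I(\theta) = \sum_i I_i(\theta), \qquad
  \mathrm{SE}(\theta) = \frac{1}{\sqrt{I(\theta)}}
\]
```

    Because each item's information peaks near its difficulty, a subscale built from harder items is most precise at higher ability levels (mild or no impairment), while easier items discriminate best among more severely impaired respondents, which is consistent with the subscale-level differences reported above.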

    Gender Recognition Using Cognitive Modeling

    No full text

    Gender Classification in Large Databases

    No full text

    Recognition of Facial Attributes Using Adaptive Sparse Representations of Random Patches

    No full text
    It is well known that some facial attributes – like soft biometric traits – can increase the performance of traditional biometric systems and help recognition based on human descriptions. In addition, other facial attributes – like facial expressions – can be used in human–computer interfaces, image retrieval, talking heads and human emotion analysis. This paper addresses the problem of automated recognition of facial attributes by proposing a new general approach called Adaptive Sparse Representation of Random Patches (ASR+). In the learning stage, random patches are extracted from representative face images of each class (e.g., in gender recognition – a two-class problem – images of females/males) in order to construct representative dictionaries. In the testing stage, random test patches of the query image are extracted, and for each test patch a dictionary is built by concatenating the ‘best’ representative dictionary of each class. Using this adapted dictionary, each test patch is classified following the Sparse Representation Classification (SRC) methodology. Finally, the query image is classified by patch voting. Thus, our approach is able to learn a model for each recognition task dealing with a larger degree of variability in ambient lighting, pose, expression, occlusion, face size and distance from the camera. Experiments were carried out on seven face databases in order to recognize facial expression, gender, race and disguise. Results show that ASR+ deals well with unconstrained conditions, outperforming various representative methods in the literature in many complex scenarios.
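    The sketch below condenses the patch-voting idea into runnable form: per-class dictionaries of random patches, per-patch sparse coding over the concatenated dictionary, and a majority vote over patches. It is an approximation, not the authors' ASR+ implementation; in particular, the adaptive selection of the 'best' sub-dictionary per class is simplified to using each class's full dictionary.

```python
# Simplified patch-voting sketch in the spirit of ASR+/SRC (not the authors' code).
import numpy as np
from sklearn.linear_model import orthogonal_mp

def random_patches(img, n, size, rng):
    """Extract n random, unit-normalised square patches from a 2D image."""
    h, w = img.shape
    out = []
    for _ in range(n):
        y, x = rng.integers(0, h - size + 1), rng.integers(0, w - size + 1)
        p = img[y:y + size, x:x + size].ravel().astype(float)
        out.append(p / (np.linalg.norm(p) + 1e-12))
    return np.array(out)

def build_dictionaries(images_by_class, n_patches=200, size=8, seed=0):
    """Learning stage: one dictionary per class, columns are patch atoms."""
    rng = np.random.default_rng(seed)
    return {c: np.vstack([random_patches(im, n_patches, size, rng) for im in imgs]).T
            for c, imgs in images_by_class.items()}

def classify(query, dicts, n_patches=50, size=8, sparsity=5, seed=1):
    """Testing stage: sparse-code each query patch, vote by class residual."""
    rng = np.random.default_rng(seed)
    classes = list(dicts)
    D = np.hstack([dicts[c] for c in classes])                    # concatenated dictionary
    owner = np.concatenate([[c] * dicts[c].shape[1] for c in classes])
    votes = {c: 0 for c in classes}
    for patch in random_patches(query, n_patches, size, rng):
        coef = orthogonal_mp(D, patch, n_nonzero_coefs=sparsity)
        # SRC rule: the class whose atoms reconstruct the patch best gets the vote.
        residuals = {c: np.linalg.norm(patch - D[:, owner == c] @ coef[owner == c])
                     for c in classes}
        votes[min(residuals, key=residuals.get)] += 1
    return max(votes, key=votes.get)

# Toy usage: 32x32 "face images", two classes.
rng = np.random.default_rng(3)
data = {c: [rng.random((32, 32)) for _ in range(3)] for c in ("female", "male")}
dicts = build_dictionaries(data, n_patches=100)
print(classify(rng.random((32, 32)), dicts))
```

    The class-residual rule inside the loop is the standard SRC decision: each patch votes for the class whose atoms reconstruct it with the smallest error, and the query image takes the majority vote.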

    MEG: Multi-Expert Gender Classification from Face Images in a Demographics-Balanced Dataset

    No full text
    In this paper we focus on gender classification from face images, which is still a challenging task in unrestricted scenarios. This task can be useful in a number of ways, e.g., as a preliminary step in biometric identity recognition supported by demographic information. We compare a feature-based approach with two score-based ones. In the former, we stack a number of feature vectors obtained by different operators and train an SVM on them. In the latter, we separately compute the individual scores from the same operators, then either feed them to an SVM or exploit a likelihood ratio based on a pairwise comparison of their answers. Experiments use the EGA database, which presents a good balance with respect to the demographic features of the stored face images. As expected, feature-level fusion often achieves better classification performance, but it is also quite computationally expensive. Our contribution has a threefold value: 1) the proposed score-level fusion approaches, though less demanding, achieve results that are rather similar to or slightly better than feature-level fusion, especially when a particular set of experts is fused; since the experts are trained individually, it is not necessary to evaluate a complex multi-feature distribution and the training process is more efficient; 2) the number of uncertain cases significantly decreases; 3) the operators used are not computationally expensive in themselves.
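    A hedged sketch of the two fusion schemes being compared: feature-level fusion stacks all descriptors into a single SVM, while score-level fusion trains one expert SVM per descriptor and fuses their decision scores with a second classifier. The descriptor names, toy data and the choice of an SVM as the fusing stage are assumptions; the likelihood-ratio variant is not shown.

```python
# Feature-level vs score-level fusion for gender classification (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
descriptors = {"lbp": rng.normal(size=(300, 59)),    # per-operator feature matrices
               "hog": rng.normal(size=(300, 128)),
               "gabor": rng.normal(size=(300, 40))}
y = rng.integers(0, 2, size=300)                     # toy gender labels

# Feature-level fusion: one large stacked vector per face, a single SVM.
X_stack = np.hstack(list(descriptors.values()))
feature_fusion = SVC(kernel="rbf").fit(X_stack, y)

# Score-level fusion: one expert per descriptor, then a fusing SVM over their scores.
experts = {name: SVC(kernel="rbf").fit(X, y) for name, X in descriptors.items()}
scores = np.column_stack([experts[n].decision_function(descriptors[n])
                          for n in descriptors])
fusion = SVC(kernel="linear").fit(scores, y)

# Classify a probe face with both schemes.
probe = {name: rng.normal(size=(1, X.shape[1])) for name, X in descriptors.items()}
probe_scores = np.column_stack([experts[n].decision_function(probe[n])
                                for n in descriptors])
print(feature_fusion.predict(np.hstack(list(probe.values()))),
      fusion.predict(probe_scores))
```

    Training each expert on its own descriptor keeps the fusion stage small (one score per expert), which is the efficiency argument made above: no joint multi-feature distribution needs to be modelled.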