54 research outputs found

    Large-scale geo-facial image analysis

    Get PDF
    While face analysis from images is a well-studied area, little work has explored the dependence of facial appearance on the geographic location from which the image was captured. To fill this gap, we constructed GeoFaces, a large dataset of geotagged face images, and used it to examine the geo-dependence of facial features and attributes, such as ethnicity, gender, or the presence of facial hair. Our analysis illuminates the relationship between raw facial appearance, facial attributes, and geographic location, both globally and in selected major urban areas. Some of our experiments, and the resulting visualizations, confirm prior expectations, such as the predominance of ethnically Asian faces in Asia, while others highlight novel information that can be obtained with this type of analysis, such as the major city with the highest percentage of people with a mustache.
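    A minimal sketch of the kind of aggregation such an analysis involves: grouping geotagged attribute labels into geographic grid cells and computing per-cell frequencies. The record fields and cell size below are illustrative assumptions, not the GeoFaces schema.

```python
# Hypothetical sketch: aggregate facial-attribute frequencies by geographic grid cell.
# The record fields (lat, lon, has_mustache) are illustrative, not the GeoFaces schema.
from collections import defaultdict

def attribute_rate_by_cell(records, cell_deg=1.0, attr="has_mustache"):
    """Return {(lat_cell, lon_cell): fraction of faces with `attr`}."""
    counts = defaultdict(lambda: [0, 0])            # cell -> [positives, total]
    for r in records:
        cell = (int(r["lat"] // cell_deg), int(r["lon"] // cell_deg))
        counts[cell][0] += int(r[attr])
        counts[cell][1] += 1
    return {c: pos / tot for c, (pos, tot) in counts.items() if tot > 0}

# Example usage with toy data:
records = [
    {"lat": 40.4, "lon": -3.7, "has_mustache": True},
    {"lat": 40.6, "lon": -3.6, "has_mustache": False},
]
print(attribute_rate_by_cell(records))
```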

    Building a Sound Localization System for a Robot Head

    No full text
    Sound localization plays a very important role in the perceptual function of living beings, being vital for the survival of many animal species. It can also play an important role in human-computer interaction and in a robot's interaction with its environment. In a typical scenario, two sound signals picked up by two microphones mounted on either side of a head are processed to extract significant features from which the approximate horizontal location of the sound source can be derived. This article describes a new feature-extraction method for sound localization, developed for a robot head currently under construction. The proposed method is compared in off-line experiments with another feature-extraction method developed for the humanoid robot Cog, showing better performance on all the signals tested.
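    For illustration, a minimal sketch of one classic binaural feature, the interaural time difference (ITD) estimated by cross-correlation and mapped to an approximate azimuth. This is a generic stand-in, not the feature-extraction method proposed in the paper; the microphone spacing, sampling rate and toy signal are assumed values.

```python
# Minimal sketch of one classic binaural cue (interaural time difference, ITD)
# estimated by cross-correlation; not the feature-extraction method of the paper.
import numpy as np

def estimate_itd(left, right, fs):
    """Return the delay (seconds) of `left` relative to `right` via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)        # lag in samples
    return lag / fs

def itd_to_azimuth(itd, mic_distance=0.18, c=343.0):
    """Map ITD to an approximate azimuth with a simple two-microphone far-field model."""
    x = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(x))

# Toy example: the same noise signal reaches one microphone a few samples later.
fs = 16000
sig = np.random.randn(4000)
delay = 4                                           # samples
left, right = sig[:-delay], sig[delay:]
print(itd_to_azimuth(estimate_itd(left, right, fs)))
```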

    A Simple Habituation Mechanism for Perceptual User Interfaces

    No full text
    Complex human-machine interfaces increasingly make use of high-level concepts extracted from sensory data to detect aspects related to emotional states such as fatigue, surprise, boredom, etc. Repetitive sensory patterns, for example, will almost always mean that the robot or agent switches to a "bored" state, or shifts its attention to another entity. Novel structure in the sensory data will usually cause surprise, increased attention, or even defensive reactions. The goal of this work is to introduce a simple mechanism for detecting such repetitive patterns in sensory data. Essentially, sensory data can exhibit two kinds of monotonous patterns: constant frequency (whether zero or greater than zero, whether a single frequency or a broad spectrum of them) and repetitive changes of the frequency spectrum. Both types are handled by the proposed method within a computationally and conceptually simple framework. Experiments with sensory data from the visual and auditory domains show the validity of the method.
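    As a rough illustration of one way such a habituation signal could be computed (not the paper's method), the sketch below measures novelty as the distance between the current short-time spectrum and a slow running average, so a sustained constant-frequency pattern drives the novelty, and hence attention, towards zero. The window size and smoothing factor are assumptions.

```python
# Minimal sketch of a habituation signal: novelty = distance between the current
# short-time spectrum and a slow running average; sustained low novelty -> "bored".
# Window size and smoothing factor are illustrative, not those of the paper.
import numpy as np

def habituation_curve(signal, frame=256, alpha=0.9):
    """Return per-frame novelty values for a 1-D sensory signal."""
    novelty = []
    avg_spec = None
    for start in range(0, len(signal) - frame, frame):
        spec = np.abs(np.fft.rfft(signal[start:start + frame]))
        spec /= (np.linalg.norm(spec) + 1e-9)             # scale-invariant comparison
        if avg_spec is None:
            avg_spec = spec
        novelty.append(np.linalg.norm(spec - avg_spec))
        avg_spec = alpha * avg_spec + (1 - alpha) * spec  # slow running average
    return np.array(novelty)

# Toy example: a steady tone becomes "boring" (novelty decays), then a new tone appears.
fs = 8000
t = np.arange(fs) / fs
sig = np.concatenate([np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t)])
print(habituation_curve(sig).round(3))
```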

    ENCARA2: Real-time detection of multiple faces at different resolutions in video streams

    No full text
    This paper describes a face detection system which goes beyond traditional face detection approaches normally designed for still images. The system described in this paper has been designed taking into account the temporal coherence contained in a video stream in order to build a robust detector. Multiple and real-time detection is achieved by means of cue combination. The resulting system builds a feature-based model for each detected face and searches for each of them in the next frame using that model information. The experiments have been focused on video streams, where our system can actually exploit the benefits of temporal coherence integration. The results achieved for video stream processing outperform Rowley-Kanade's and Viola-Jones' solutions, providing eye and face data in real time with a notable correct detection rate, approx. 99.9% of faces and 87.5% of eye pairs on 26,338 images.
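    A minimal sketch of the temporal-coherence idea, using OpenCV's stock Haar cascade as a stand-in detector rather than ENCARA2's cue combination: once a face is found, the next frame is searched only in a window around it, with a full-frame scan as fallback. The margin and cascade parameters are illustrative.

```python
# Minimal sketch of temporal coherence in video face detection (not ENCARA2 itself).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect(gray, roi=None, margin=40):
    """Detect faces, optionally restricted to a window around the previous hit."""
    x0 = y0 = 0
    if roi is not None:
        x, y, w, h = roi
        x0, y0 = max(0, x - margin), max(0, y - margin)
        gray = gray[y0:y + h + margin, x0:x + w + margin]
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(x + x0, y + y0, w, h) for (x, y, w, h) in faces]

cap = cv2.VideoCapture(0)                       # any video source
last = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detect(gray, roi=last)
    last = faces[0] if faces else None          # keep the first face as the tracked model
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break
cap.release()
```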

    Identity and Gender Recognition Using the ENCARA Real-Time Face Detector. X Conferencia de la Asociación Española para la Inteligencia Artificial

    No full text
    This paper presents identity and gender recognition results based on a PCA representation, classification with SVMs, and temporal coherence.
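    A minimal sketch of a comparable PCA + SVM pipeline, shown on the Olivetti faces bundled with scikit-learn rather than the paper's ENCARA-detected data, and without the temporal-coherence step; the number of components and the kernel are illustrative choices.

```python
# Minimal sketch of a PCA + SVM face recognition pipeline (not the paper's setup).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()                      # downloads on first use
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

model = make_pipeline(
    PCA(n_components=50, whiten=True, random_state=0),   # eigenface-style projection
    SVC(kernel="linear"))                                 # identity classifier
model.fit(X_train, y_train)
print("identity accuracy:", model.score(X_test, y_test))
```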

    An Incremental Learning Algorithm for Face Recognition

    No full text
    In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images accumulate while the recognition system is running, and for practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. In addition, it makes use of a combination of classification results for the images in the input sequence. Experiments with sequences obtained with a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
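    A minimal sketch of the two ingredients mentioned above, built from stand-in components rather than the paper's algorithm: incremental updates via scikit-learn's partial_fit, and a sequence-level decision obtained by majority vote over per-frame predictions. The identities and Gaussian feature clusters are toy data.

```python
# Minimal sketch: incremental learning plus per-sequence vote (not the paper's algorithm).
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1, 2])                       # known identities (toy)
clf = SGDClassifier(random_state=0)

def update(clf, frames, label):
    """Incrementally train on a labelled sequence of feature vectors."""
    clf.partial_fit(frames, np.full(len(frames), label), classes=classes)

def predict_sequence(clf, frames):
    """Combine per-frame predictions by majority vote."""
    votes = clf.predict(frames)
    return np.bincount(votes, minlength=len(classes)).argmax()

# Toy example: three identities as Gaussian clusters in a 16-D feature space.
rng = np.random.default_rng(0)
for label in classes:                               # initial pass over labelled sequences
    update(clf, rng.normal(loc=label * 3.0, size=(20, 16)), label)
test_seq = rng.normal(loc=3.0, size=(10, 16))       # a new sequence of identity 1
print(predict_sequence(clf, test_seq))
```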

    A Proposal of a Homeostatic Regulation Mechanism for a Vision System

    No full text
    In this work, we propose the introduction of a homeostatic regulation mechanism in a vision system. This homeostatic mechanism controls the luminance, white balance, contrast and size of the object of interest in the image, using naive methods except for the contrast, for which we have implemented a method that avoids the hill-climbing search for the best focus position. We carry out experiments to test the possible performance gain in a face detection application.
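    A minimal sketch of one such homeostatic loop, restricted to luminance: a proportional controller nudges an exposure/gain parameter towards a target mean brightness. The set-point, controller gain and clipping range are assumed values, and the paper's white-balance, contrast (focus) and object-size loops are not shown.

```python
# Minimal sketch of a luminance homeostasis loop (illustrative parameters only).
import numpy as np

def regulate_luminance(frame, exposure, target=128.0, k=0.002):
    """Proportional update of an exposure/gain value from the frame's mean grey level."""
    error = target - frame.mean()                   # positive -> image too dark
    return float(np.clip(exposure + k * error, 0.1, 4.0))

# Toy example: a dark scene drives the exposure up over a few iterations.
base = np.full((120, 160), 40.0)                    # stand-in for scene brightness
exposure = 1.0
for _ in range(6):
    frame = np.clip(base * exposure, 0, 255)        # simulated captured frame
    exposure = regulate_luminance(frame, exposure)
    print(round(exposure, 3))
```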

    Cue Combination for Robust Real-Time Multiple Face Detection at Different Resolutions

    No full text
    The face detection problem, defined as determining the location and extent of any faces, if any, present in an image [Yang et al., 2002], seems to be solved according to some recent works [Schneiderman and Kanade, 2000] [Viola and Jones, 2001]. Particularly for video stream processing, these approaches
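    A minimal sketch of a simple cue combination (not the one proposed here): candidate windows from a stock Haar cascade are kept only if a skin-colour cue agrees. The HSV thresholds and the skin-coverage requirement are illustrative assumptions.

```python
# Minimal sketch of combining a cascade detector with a skin-colour cue.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_with_skin_cue(frame_bgr, min_skin_ratio=0.3):
    """Keep cascade detections whose window contains enough skin-coloured pixels."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # rough skin-tone band
    kept = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        ratio = np.count_nonzero(skin[y:y + h, x:x + w]) / float(w * h)
        if ratio >= min_skin_ratio:                          # both cues agree
            kept.append((x, y, w, h))
    return kept

# Usage: kept = detect_with_skin_cue(cv2.imread("group_photo.jpg"))
```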