
    Mouth features extraction for emotion classification


    Face Detection in Intelligent Ambiences with Colored Illumination

    Human face detection is an essential step in the creation of intelligent lighting ambiences, but the constantly changing multi-color illumination makes reliable face detection more challenging. Therefore, we introduce a new face detection and localization algorithm, which retains a high performance under various indoor illumination conditions. The method is based on the creation of a robust skin mask, using general color constancy techniques, and the application of the Viola-Jones face detector on the candidate face areas. Extensive experiments, using a challenging state-of-the-art database and a new one with a wider variation in colored illumination and cluttered background, show significantly better performance for the newly proposed algorithm than for the most widely used face detection algorithms.
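    The abstract gives only the outline of the two-stage idea, but a minimal sketch of that general pipeline (a color-constancy step, a skin mask, then Viola-Jones run only on candidate regions) might look like the following in OpenCV. The grey-world normalization, the YCrCb skin thresholds, and the blob-size cutoff are illustrative assumptions, not the authors' exact color-constancy method.

```python
import cv2
import numpy as np

def grey_world(img_bgr):
    """Grey-world normalization as a simple stand-in for a color-constancy step."""
    img = img_bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / (means + 1e-6)
    return np.clip(img, 0, 255).astype(np.uint8)

def skin_mask(img_bgr):
    """Rough skin segmentation in YCrCb; the thresholds are illustrative."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

def detect_faces(img_bgr):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    normalized = grey_world(img_bgr)
    gray = cv2.cvtColor(normalized, cv2.COLOR_BGR2GRAY)
    faces = []
    # Run the Viola-Jones detector only on skin-colored candidate regions.
    contours, _ = cv2.findContours(skin_mask(normalized),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:   # skip tiny skin blobs (assumed cutoff)
            continue
        roi = gray[y:y + h, x:x + w]
        for (fx, fy, fw, fh) in cascade.detectMultiScale(roi, 1.1, 5):
            faces.append((x + fx, y + fy, fw, fh))
    return faces
```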

    Evidence Theory for Face Tracking

    This paper deals with real-time face detection and tracking by a video camera. The method is based on a simple and fast initializing stage for learning. The transferable belief model is used to deal with the incompleteness of the prior face model due to the lack of exhaustiveness of the learning stage. The algorithm works in two steps. The detection phase synthesizes an evidential face model by merging basic beliefs elaborated from the Viola and Jones face detector with colour mass functions modelling a skin-tone detector. These functions are computed from information sources in a logarithmic colour space. To deal with the dependence of the colour sources in the fusion process, we propose a compromise operator close to the Denœux cautious rule. As regards the tracking phase, the pignistic probabilities derived from the face model guarantee the compatibility between the belief and probability formalisms. They are the inputs of a classical particle filter which ensures face tracking at video rate. The influence of the parameters of the evidential model on tracking quality is discussed.
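    The abstract does not detail the transferable belief model, but the pignistic transform it relies on is standard: each belief mass assigned to a set of hypotheses is spread evenly over the elements of that set. Below is a small sketch under an assumed two-hypothesis frame {face, background}; the mass values are made up for illustration and are not the paper's fitted model.

```python
def pignistic(mass):
    """Pignistic transform: BetP(x) = sum over focal sets A containing x
    of m(A) / (|A| * (1 - m(empty set))).

    `mass` maps frozensets of hypotheses to belief masses summing to 1.
    """
    m_empty = mass.get(frozenset(), 0.0)
    universe = set().union(*(A for A in mass if A))
    betp = {x: 0.0 for x in universe}
    for A, m in mass.items():
        if not A:
            continue
        share = m / (len(A) * (1.0 - m_empty))
        for x in A:
            betp[x] += share
    return betp

# Hypothetical evidential face model over the frame {"face", "background"}:
m = {
    frozenset({"face"}): 0.5,                 # e.g. evidence from a detector cue
    frozenset({"background"}): 0.2,           # e.g. evidence from a colour cue
    frozenset({"face", "background"}): 0.3,   # unassigned mass (ignorance)
}
print(pignistic(m))   # {'face': 0.65, 'background': 0.35}
```

    Probabilities obtained this way can then weight the particles of a conventional particle filter, which is how the abstract describes bridging the credal and probabilistic frameworks.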

    Expression of Joyful Emotions for Virtual Characters through Laughter and Smiling

    Facial animation remains one of the unresolved research topics in both human-machine interaction and computer graphics. Expressions of joy associated with laughter and smiling are, because of their meaning and importance, a fundamental part of these fields. This thesis approaches the representation of the different types of laughter in facial animation and presents a new method capable of reproducing all of these types. The method is validated by recreating film sequences and by using both generic facial-expression databases and smile-specific ones. Additionally, a new database is created that collects the different types of laughter classified and generated in this work. Based on this database, the most representative expressions of each of the laughs and smiles considered in the study are generated.

    Incorporating Machine Vision in Precision Dairy Farming Technologies

    The inclusion of precision dairy farming technologies in dairy operations is an area of increasing research and industry interest. Machine vision based systems are suitable for the dairy environment as they do not inhibit workflow, are capable of continuous operation, and can be fully automated. The research in this dissertation developed and tested three machine vision based precision dairy farming technologies tailored to the latest generation of RGB+D cameras. The first system tested various imaging approaches for the potential use of machine vision in automated dairy cow feed intake monitoring. The second system monitored the gradual change in body condition score (BCS) for 116 cows over a nearly 7 month period. Several automated BCS systems have been proposed by other researchers, but none have monitored the gradual change in BCS over a duration of this magnitude. These gradual changes provide a great deal of beneficial and immediate information about the health condition of every individual cow being monitored. The third system focused on automated dairy cow feature detection using Haar cascade classifiers to detect anatomical features, including the tailhead, hips, and rear regions of the cow body. These features were chosen to help machine vision applications determine whether and where a cow is present in an image or video frame. Once detected, the cow must then be automatically identified to keep the system fully automated; this identification was also studied with a machine vision based approach as a complementary aspect of cow detection. Such systems have the potential to catch developing health problems early, aid in balancing the diet of the individual cow, and help farm management allocate resources, monetary and otherwise, appropriately and efficiently. Several applications of this research are discussed along with future research directions, including additional automated precision dairy farming technologies, the integration of many of these technologies into a unified system, and the use of alternative, potentially more robust machine vision cameras.
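    As a rough illustration of the Haar-cascade feature-detection step described above, the sketch below runs a trained cascade over video frames with OpenCV. The cascade file name (`tailhead_cascade.xml`), the video source, and the detection parameters are hypothetical placeholders, not artifacts of the original work.

```python
import cv2

# Hypothetical cascade trained on cow tailhead images (e.g. with opencv_traincascade);
# the file name is a placeholder, not part of the original dissertation.
cascade = cv2.CascadeClassifier("tailhead_cascade.xml")

cap = cv2.VideoCapture("barn_camera.mp4")   # illustrative video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)            # even out uneven barn lighting
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=4, minSize=(60, 60))
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tailhead detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```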

    3D Face Tracking Using Stereo Cameras with Whole Body View

    All visual tracking tasks associated with people tracking are in great demand for modern applications dedicated to making human life easier and safer. In this thesis, a special case of people tracking, 3D face tracking in whole body view video, is explored. Whole body view video means that the tracked face typically occupies no more than 5-10% of the frame area. Currently there is no reliable tracker that can track a face in long-term whole body view videos with luminance cameras in 3D space. I followed a non-classical approach to designing a 3D tracker: first a 2D face tracking algorithm was developed for a single view and then extended to stereo tracking. I recorded and annotated my own extensive dataset specifically for 2D face tracking in whole body view video and evaluated 17 state-of-the-art 2D tracking algorithms. Based on the TLD tracker, I developed a face-adapted median flow tracker that shows superior results compared to state-of-the-art generic trackers. I explored different ways of extending 2D tracking into 3D and developed a method of using the epipolar constraint to check the consistency of 3D tracking results. This method allows tracking failures to be detected early and improves overall 3D tracking accuracy. I demonstrated how a Kinect based method can be compared to visual tracking methods, and compared four different visual tracking methods running on low resolution fisheye stereo video with the Kinect face tracking application. My main contributions are: (1) a face adaptation of generic trackers that improves tracking performance in long-term whole body view videos, and (2) a method of using the epipolar constraint to check the consistency of 3D tracking results.
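    The epipolar consistency check named in the contributions can be sketched as follows: given the fundamental matrix of the calibrated stereo pair, the tracked face position in one view must lie close to the epipolar line induced by the position in the other view, and a large residual signals a likely tracking failure. The pixel threshold and the way the check is wired into the tracker are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def epipolar_residual(F, pt_left, pt_right):
    """Distance (in pixels) from the right-view point to the epipolar line
    of the left-view point, for a fundamental matrix F with x_r^T F x_l = 0."""
    x_l = np.array([pt_left[0], pt_left[1], 1.0])
    x_r = np.array([pt_right[0], pt_right[1], 1.0])
    line = F @ x_l                    # epipolar line a*x + b*y + c = 0 in the right image
    return abs(x_r @ line) / np.hypot(line[0], line[1])

def tracking_consistent(F, pt_left, pt_right, max_px=3.0):
    """Flag the pair of 2D track positions as inconsistent (likely tracking
    failure) when the epipolar residual exceeds an assumed pixel threshold."""
    return epipolar_residual(F, pt_left, pt_right) < max_px
```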