53 research outputs found

    Assessing the existence of visual clues of human ovulation

    Get PDF
    Is concealed human ovulation a myth? The author tries to answer this question using a medium-sized database of facial images specially created and tagged for the purpose. Analysing possible facial modifications over the menstrual cycle provides a formal tool to assess whether ovulation is truly concealed. At first sight, human ovulation remains concealed: there is no visible external sign of the fertile period in humans. Such external signs are clearly visible in many animals, such as baboons, dogs or elephants; some are visual (baboons) and others biochemical (dogs). Insects use pheromones, and other animals may use sounds to inform partners of their fertile period. The objective is not only to study visual signs of female ovulation but also to understand and explain automatic image-processing methods that could be used to extract precise landmarks from facial pictures. These could later be applied to studies of fluctuating asymmetry, a growing field in evolutionary biology whose progress is held back by the time needed to extract landmarks manually. In this work we tried to see whether any perceptible sign is present in the human face during ovulation and how formal changes in facial appearance over the cycle, if any, can be detected. We photographed 50 women daily for 32 days, taking several photos of each participant per day. In the end we kept a set of 30 photos per participant covering the whole cycle; from these, 600 photos were chosen to be manually tagged for verification purposes. The photos were organised in a rating software that let human raters view them and choose the two best-looking pictures of each participant. These results were then checked to highlight the relationship between the chosen photos and the ovulation period in the cycle. The results indicated that there are indeed cues in the human face that could hint at ovulation. Different automatic landmark detection methods were then applied to the pictures to highlight possible modifications of the face over the cycle. Although the precision of the tested methods is far from perfect, comparing these measurements with state-of-the-art beauty indices shows a slight shift towards a prettier face during ovulation. The automatic methods tested were the Active Appearance Model (AAM), deep neural networks and regression trees; for this kind of application, regression trees proved the best. Future work is needed to firmly confirm these data: the number of human raters should be increased, and a proper training database should be developed to allow a learning process specific to this problem. We also think that low-level image processing will be necessary to achieve the final precision that could reveal more details of possible changes in human faces.

    Human ovulation is generally considered "concealed", that is, without external signs. Yet ovulation, and the menstrual cycle more generally, is an extremely important hormonal change that repeats every cycle, and to believe that it leaves no visible sign seems simplistic. Such external signs are very visible in animals such as baboons, dogs or elephants; some are visual (baboons) and others biochemical (dogs). Insects use pheromones, and other animals may use sounds to inform partners of their fertile period. Humans have come to hide, or at least camouflage, such signs over the course of evolution. The reasons for concealing or camouflaging ovulation in humans are not clear and are not discussed in this dissertation. In the first part of this work, after creating and annotating a medium-sized database of facial images, the author verifies whether signs of ovulation can be detected by other people, that is, whether modifications that are a priori invisible can be perceived unconsciously by an observer. In the second part, the author formally analyses possible facial modifications during the cycle using facial measurements; automatic image analysis methods provide the necessary data. An image database was created from scratch, since no such database existed in the literature. Fifty women agreed to take part in its creation; each was photographed daily for 32 days, with several photos taken per session. The photos were then culled to at most 30 per participant, and 600 of these were chosen for manual annotation. These 600 annotated photos constitute the verification database against which automatically obtained measurements can be checked. The objective of this work is not only to study the visual signs of female ovulation but also to test and explain automatic image-processing methods that could be used to extract points of interest from facial images. Automating landmark extraction could later be applied to studies of fluctuating asymmetry, a growing field in evolutionary biology that is hard to develop because the time needed to extract landmarks manually is prohibitive. Moreover, fluctuating-asymmetry studies often rely on a single photograph and may prove invalid if temporary facial modifications exist; temporary modifications, such as those during the menstrual cycle, suggest that phenotypic studies based on a single photograph cannot constitute a viable basis for establishing genotype-phenotype links. To test whether any perceptible sign is present in the human face during ovulation, the photos were organised in a presentation software that lets a human observer choose the two most attractive photos of each participant. These results were then analysed to highlight the relationship between the chosen photos and the ovulation period in the cycle. The results suggested that there are indeed cues in the face that could give information about the ovulation period: observers chose as most attractive the photos taken in the days immediately before or after ovulation. In other words, the same woman appeared more attractive on the days close to her ovulation date. The software also collects data about the observers for later analysis of their behaviour towards the photographs; such observer data may shed light on the evolutionary reasons for concealed ovulation.
    Next, different automatic landmark detection methods were applied to the images to detect the kind of facial modifications occurring during the cycle. The precision of the tested methods, although not perfect, allows some relationships between the modifications and attractiveness indices to be observed. The automatic methods tested were the Active Appearance Model (AAM), Convolutional Neural Networks (CNN) and regression trees (Dlib-Rt). AAM and CNN were implemented in Python using the Keras library; Dlib-Rt was implemented in C++ using OpenCV. All the methods used are learning-based and sacrifice precision. Comparing the automatic results with the manually obtained ones indicates that learning-based methods may not have the precision required for studies of fluctuating asymmetry or of fine facial modifications. Despite this lack of precision, regression trees proved to be the best of the tested methods for this type of application. The data and measurements obtained constitute a database, with cycle dates, facial measurements, social data and attractiveness data, that can be used in later work. Future work must firmly confirm these data: the number of human raters should be increased, and an appropriate training database should be developed to allow a learning process specific to this problem. It was also observed that low-level image processing will be necessary to reach the final precision that could reveal fine details of changes in human faces. Mapping the data and measurements onto an attractiveness index and applying data-mining methods could reveal exactly which modifications are involved during the menstrual cycle. The author also anticipates that a true-depth camera, which provides depth and volume data, could refine the studies. Skin pigmentation and texture data should also be considered in order to capture all types of facial modification during the cycle. The data should also separate participants using chemical contraception, since such methods can interfere with hormone levels and introduce errors of judgement. Finally, the same study could be carried out on men: since men do not undergo these hormonal changes, the appearance of any repeatable facial modification could indicate the existence of hidden factors.
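    The regression-tree landmark detector referred to above is available through the dlib library. As a minimal sketch, assuming dlib's standard 68-point predictor file rather than the author's own model or code, landmark extraction looks roughly like this:

        # Sketch of regression-tree landmark extraction with dlib's Python bindings
        # (the thesis used the C++ API with OpenCV; the model file below is dlib's
        # publicly distributed 68-point predictor, an assumption for illustration).
        import dlib
        import cv2

        detector = dlib.get_frontal_face_detector()
        predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

        img = cv2.imread("face.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        for face in detector(gray):
            shape = predictor(gray, face)
            # Collect the 68 (x, y) landmarks for later symmetry/attractiveness metrics.
            landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]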

    Automatic analysis of facial actions: a survey

    Get PDF
    As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has recently received significant attention. Over the past 30 years, extensive research has been conducted by psychologists and neuroscientists on various aspects of facial expression analysis using FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues for understanding how we communicate through facial expressions. Such an automated process could also increase the reliability, precision and temporal resolution of coding. This paper provides a comprehensive survey of research into machine analysis of facial actions. We systematically review all components of such systems: pre-processing, feature extraction and machine coding of facial actions. In addition, the existing FACS-coded facial expression databases are summarised. Finally, the challenges that must be addressed to make automatic facial action analysis applicable in real-life situations are discussed extensively. There are two underlying motivations for writing this survey: the first is to provide an up-to-date review of the existing literature, and the second is to offer insights into the future of machine recognition of facial actions, namely the challenges and opportunities that researchers in the field face.

    Multi-View Face Recognition From Single RGBD Models of the Faces

    Get PDF
    This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
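    As an illustration of the weighted-voting step described above, the sketch below assumes hypothetical per-view classifier outputs of (identity, confidence, view weight) triples; it is not the paper's exact formulation:

        # Illustrative weighted-voting fusion across viewpoints. The input format
        # and weight scheme are assumptions for the sketch, not the paper's method.
        from collections import defaultdict

        def fuse_views(per_view_predictions):
            """per_view_predictions: list of (identity, confidence, view_weight)."""
            votes = defaultdict(float)
            for identity, confidence, view_weight in per_view_predictions:
                votes[identity] += confidence * view_weight
            return max(votes, key=votes.get)

        # Example: three views casting weighted votes on an identity.
        print(fuse_views([("alice", 0.9, 1.0), ("bob", 0.6, 0.8), ("alice", 0.7, 0.5)]))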

    Markerless facial motion capture: deep learning approaches on RGBD data

    Get PDF
    Facial expressions are a series of fast, complex and interconnected movements that cause an array of skin deformations, such as stretching, compressing and folding. Identifying expressions is a natural process in human vision, but due to the diversity of faces it poses many challenges for computer vision. Research in markerless facial motion capture using a single Red Green Blue (RGB) camera has gained popularity due to wide access to such data, for example from mobile phones. The motivation behind this work is that much of the existing work attempts to infer 3-Dimensional (3D) data from 2-Dimensional (2D) images; in motion capture, for instance, multiple 2D cameras are calibrated to allow some depth prediction. In contrast, the inclusion of Red Green Blue Depth (RGBD) sensors that give ground-truth depth data could yield a better understanding of the human face and how expressions are visualised. The aim of this thesis is to investigate and develop novel methods of markerless facial motion capture, focusing on the inclusion of RGBD data to provide 3D information. The contributions are: a tool to aid the annotation of 3D facial landmarks; a novel neural network that demonstrates the ability to predict 2D and 3D landmarks by merging RGBD data; a working application demonstrating a complex deep learning network on portable handheld devices; a review of existing neural-network methods for denoising fine detail in depth maps; and a network for the complete analysis of facial landmarks and expressions in 3D. The 3D annotator was developed to overcome the difficulty of relying on existing 3D modelling software, which made feature identification hard. The technique of predicting 2D and 3D landmarks with auxiliary information allowed highly accurate 3D landmarking without the need for full model generation, and outperformed other recent landmarking techniques. The networks running on handheld devices show, as a proof of concept, that even without much optimisation a complex task can be performed in near real time. Denoising Time-of-Flight (ToF) depth maps proved much more complex than traditional RGB denoising, and we reviewed and applied an array of techniques to the task. The full facial analysis showed that training neural networks on a wide range of related tasks, with auxiliary information, allows a deep understanding of the overall task. Research on facial processing is vast, but there are still many new problems and challenges to face and improve upon. While RGB cameras are widely used, accurate and cost-effective depth-sensing devices are becoming available; these new devices allow a better understanding of facial features and expressions. By merging RGB and depth data, facial landmarking and expression intensity recognition can be improved.
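    A minimal Keras sketch of the central idea, merging RGB and depth into one 4-channel input and regressing 3D landmarks, might look like the following; the architecture, input size and landmark count are illustrative assumptions, not the thesis's network:

        # Sketch of an RGBD landmark regressor: RGB and an aligned depth map are
        # stacked into a 4-channel input, and the network regresses (x, y, z) per
        # landmark. All layer sizes here are assumptions for illustration.
        import tensorflow as tf
        from tensorflow.keras import layers, Model

        NUM_LANDMARKS = 68  # assumed landmark count

        inputs = layers.Input(shape=(128, 128, 4))       # RGB + aligned depth
        x = layers.Conv2D(32, 3, activation="relu")(inputs)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation="relu")(x)
        x = layers.MaxPooling2D()(x)
        x = layers.Flatten()(x)
        x = layers.Dense(256, activation="relu")(x)
        outputs = layers.Dense(NUM_LANDMARKS * 3)(x)     # (x, y, z) per landmark

        model = Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")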

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    No full text
    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting; however, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results using principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into subject and facial-expression modes, which we manipulate using singular value decomposition on sub-tensors representing one variation mode each. This framework can deal with the shortcomings of PCA in less constrained environments while preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that describe facial expression most effectively, and found that the best placement of landmarks for distinguishing different facial expressions is in the areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We also looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions, proposing a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise them. The synthesised facial expressions are visually more realistic than those generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes, and the recognition results showed a slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
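    As a rough illustration of the sub-tensor manipulation described above, the following sketch unfolds a (subjects x expressions x features) data tensor along one mode and applies singular value decomposition; the shapes and data are placeholders, not the thesis's:

        # Mode-wise SVD on a 3-way face-data tensor, in the spirit of the
        # multilinear framework above (an illustration, not the thesis code).
        import numpy as np

        def mode_unfold(tensor, mode):
            """Unfold a 3-way tensor along the given mode into a matrix."""
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        data = np.random.rand(10, 6, 204)       # 10 subjects, 6 expressions, 68*3 coords
        subject_matrix = mode_unfold(data, 0)   # subjects x (expressions*features)
        U, s, Vt = np.linalg.svd(subject_matrix, full_matrices=False)
        # Columns of U span the subject mode; the analogous unfolding along mode 1
        # gives the expression-mode basis used to factor identity from expression.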

    Drunk Selfie Detection

    Get PDF
    The goal of this project was to extract key features from photographs of faces and use machine learning to classify subjects as either sober or drunk. To do this we analyzed photographs of 53 subjects taken after drinking wine and extracted key features, which we used to classify drunkenness. Using random forest machine learning we achieved 81% accuracy, and we built an Android application that uses our classifiers to estimate a subject's drunkenness from a selfie.
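    A minimal sketch of the random-forest classification step, assuming facial features have already been extracted into a numeric matrix (the feature matrix and labels below are random placeholders, not the project's data):

        # Random-forest sober/drunk classification with scikit-learn.
        # X would hold extracted facial features (e.g. eye openness, redness,
        # symmetry measures); here it is random data for illustration only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X = np.random.rand(200, 12)             # placeholder feature matrix
        y = np.random.randint(0, 2, 200)        # 0 = sober, 1 = drunk

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        print("accuracy:", clf.score(X_test, y_test))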

    Facial Texture Super-Resolution by Fitting 3D Face Models

    Get PDF
    This book proposes to solve the low-resolution (LR) facial analysis problem with 3D face super-resolution (FSR). A complete processing chain is presented towards effective 3D FSR in the real world. To deal with the extreme challenges of incorporating 3D modeling under the ill-posed LR condition, a novel workflow coupling automatic localization of 2D facial feature points and 3D shape reconstruction is developed, leading to a robust pipeline for pose-invariant hallucination of the 3D facial texture.

    3D hand pose estimation using convolutional neural networks

    Get PDF
    3D hand pose estimation plays a fundamental role in natural human-computer interaction. The problem is challenging due to complicated variations caused by complex articulations, multiple viewpoints, self-similar parts, severe self-occlusions, and different shapes and sizes. To handle these challenges, the thesis makes the following contributions. First, the problem of multiple viewpoints and complex articulations is tackled by decomposing and transforming the input and output spaces via spatial transformations that follow the hand structure; the transformation reduces the variation of both the input and the output, which makes learning easier. The second contribution is a probabilistic framework integrating all the hierarchical regressions: variants with and without sampling, using different regressors and optimisation methods, are constructed and compared to provide insight into the components of this framework. The third contribution is based on the observation that for images with occlusions there exist multiple plausible configurations of the occluded parts. A hierarchical mixture density network is proposed to handle the multi-modality of the locations of occluded hand joints. It leverages state-of-the-art hand pose estimators based on Convolutional Neural Networks to facilitate feature learning, while modelling the multiple modes in a two-level hierarchy to reconcile single-valued (for visible joints) and multi-valued (for occluded joints) mappings in its output. In addition, a completely labeled real hand dataset is collected with a tracking system using six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joint hand pose annotations of depth maps.
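    As a sketch of the mixture-density idea for occluded joints, the head below emits K candidate 3D locations with mixture weights instead of a single point estimate; the layer sizes and the value of K are illustrative assumptions, not the thesis's exact design:

        # Mixture-density output head for one hand joint: mixture weights (pi),
        # K candidate 3D means (mu) and per-component scales (sigma). Built in
        # Keras for illustration; the thesis's network differs in detail.
        import tensorflow as tf
        from tensorflow.keras import layers

        K = 5                                     # assumed number of mixture components
        features = layers.Input(shape=(256,))     # CNN features for one hand image
        pi = layers.Dense(K, activation="softmax")(features)        # mixture weights
        mu = layers.Dense(K * 3)(features)                          # K means in 3D
        sigma = layers.Dense(K, activation="softplus")(features)    # component scales
        mdn_head = tf.keras.Model(features, [pi, mu, sigma])
        # A visible joint collapses to one dominant component; an occluded joint
        # can keep several plausible modes alive.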

    Facial expression recognition and intensity estimation.

    Get PDF
    Doctoral Degree, University of KwaZulu-Natal, Durban. Facial expression is one of the profound non-verbal channels through which the human emotional state is inferred from the deformation or movement of face components when facial muscles are activated. Facial Expression Recognition (FER) is a relevant research field in Computer Vision (CV) and Human-Computer Interaction (HCI). Its applications include, but are not limited to, robotics, gaming, medicine, education, security and marketing. FER carries a wealth of information, and categorising that information into primary emotion states alone limits its performance. This thesis investigates an approach that simultaneously predicts the emotional state of facial expression images and the corresponding degree of intensity. The task also extends to resolving FER's ambiguous nature and annotation inconsistencies with a label distribution learning method that considers correlation among data. We first propose a multi-label approach for FER and its intensity estimation using advanced machine learning techniques; according to our findings, this approach has not previously been considered for emotion and intensity estimation in the field. The approach uses problem transformation to present FER as a multilabel task, such that every facial expression image has unique emotion information alongside the corresponding degree of intensity at which the emotion is displayed. A Convolutional Neural Network (CNN) with a sigmoid function at the final layer is the classifier for the model. The model, termed ML-CNN (Multilabel Convolutional Neural Network), successfully achieves concurrent prediction of emotion and intensity. ML-CNN's predictions are challenged by overfitting and by intraclass and interclass variations. We employ the Visual Geometry Group-16 (VGG-16) pretrained network to resolve the overfitting challenge, and an aggregation of island loss and binary cross-entropy loss to minimise the effect of intraclass and interclass variations. The enhanced ML-CNN model shows promising results and outperforms other standard multilabel algorithms. Finally, we approach data annotation inconsistency and ambiguity in FER data using isomap manifold learning with Graph Convolutional Networks (GCN). The GCN uses the distance along the isomap manifold as the edge weight, which appropriately models the similarity between adjacent nodes for emotion prediction. The proposed method produces promising results in comparison with state-of-the-art methods.
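    A minimal sketch of the multi-label output idea follows: a CNN ending in a sigmoid layer trained with binary cross-entropy, so each emotion/intensity label is predicted independently rather than through a softmax over mutually exclusive classes. The layer sizes and label count are illustrative assumptions:

        # Multi-label CNN with a sigmoid final layer (the core ML-CNN idea);
        # the architecture details here are placeholders, not the thesis's model.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        NUM_LABELS = 13   # assumed: emotion classes plus intensity bins

        model = models.Sequential([
            layers.Input(shape=(64, 64, 1)),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(NUM_LABELS, activation="sigmoid"),  # multi-label output
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")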