
    Analysis of 3D Face Reconstruction

    This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face image. Face reconstruction from a single 2D face image is an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, light parameters, shape parameters and texture parameters. The proposed approach has many potential applications in law enforcement, surveillance, medicine, computer games and the entertainment industry. The problem is addressed within an analysis-by-synthesis framework by reconstructing a 3D face model from identity photographs. Identity photographs are a widely used medium for face identification and can be found on identity cards and passports. The novel contribution of this thesis is a new technique for creating 3D face models from a single 2D face image. The proposed method uses improved dense 3D correspondence obtained using rigid and non-rigid registration techniques, whereas existing reconstruction methods use the optical flow method to establish 3D correspondence. The resulting 3D face database is used to create a statistical shape model. Existing reconstruction algorithms recover shape by optimizing over all the parameters simultaneously. The proposed algorithm simplifies the reconstruction problem by using a stepwise approach, thus reducing the dimension of the parameter space and simplifying the optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face image by using anatomical landmarks. The texture is then warped onto the 3D model by using the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over the shape parameters while matching a texture-mapped model to the target image. This approach has a number of advantages. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately recover the illumination parameters. Third, there is no need to recover the texture parameters through a texture synthesis approach. Fourth, quantitative analysis is used to improve the quality of reconstruction by improving the cost function; previous methods used qualitative measures, such as visual analysis and face recognition rates, to evaluate reconstruction accuracy. The improvement in the performance of the cost function results from an improvement in the feature space comprising the landmark and intensity features. Previously, the feature space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate assumptions about its behaviour. The proposed approach simplifies the reconstruction problem by using only identity images, rather than expending effort on overcoming pose, illumination and expression (PIE) variations. This is a reasonable choice, as frontal face images under standard illumination conditions are widely available and can be used for accurate reconstruction. The reconstructed, textured 3D models can then be used to overcome the PIE variations.
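
    The stepwise fitting described above can be made concrete with a short sketch: estimate the pose from landmarks first, then optimise only the shape coefficients against a texture-mapped rendering. This is an illustrative outline under stated assumptions, not the thesis implementation; `project`, `render`, `mean_shape` and `shape_basis` are hypothetical placeholders for the projection, renderer and statistical shape model.

```python
# Illustrative sketch of the stepwise fitting (not the thesis code).
# `project` and `render` are hypothetical callables; `mean_shape` and
# `shape_basis` stand for the statistical shape model's mean and PCA basis.
import numpy as np
from scipy.optimize import minimize

def landmark_cost(pose, lm3d, lm2d, project):
    """Alignment step: rigid pose estimated from anatomical landmarks only."""
    return np.sum((project(lm3d, pose) - lm2d) ** 2)

def shape_cost(coeffs, mean_shape, shape_basis, pose, target_image, render):
    """Shape step: optimise shape coefficients with the pose held fixed."""
    shape = mean_shape + shape_basis @ coeffs     # linear PCA shape model
    return np.sum((render(shape, pose) - target_image) ** 2)

# Step 1: align a generic face to the image via landmarks (6-DOF pose).
# pose = minimize(landmark_cost, np.zeros(6), args=(lm3d, lm2d, project)).x
# Step 2: warp the image texture onto the aligned model (omitted here).
# Step 3: recover shape only, over a much smaller parameter space:
# coeffs = minimize(shape_cost, np.zeros(k),
#                   args=(mean_shape, shape_basis, pose, image, render)).x
```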

    Automatic analysis of facial actions: a survey

    As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has recently received significant attention. Over the past 30 years, extensive research has been conducted by psychologists and neuroscientists on various aspects of facial expression analysis using FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Such an automated process can also potentially increase the reliability, precision and temporal resolution of coding. This paper provides a comprehensive survey of research into machine analysis of facial actions. We systematically review all components of such systems: pre-processing, feature extraction and machine coding of facial actions. In addition, the existing FACS-coded facial expression databases are summarised. Finally, the challenges that have to be addressed to make automatic facial action analysis applicable in real-life situations are extensively discussed. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the future of machine recognition of facial actions, namely the challenges and opportunities that researchers in the field face.
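
    As a minimal illustration of the three stages such systems share (pre-processing, feature extraction, machine coding of action units), the sketch below uses HOG features and a one-vs-rest linear SVM; both are stand-ins chosen for brevity, and all variable names are assumptions rather than any surveyed system's code.

```python
# Illustrative three-stage pipeline: pre-processing is assumed to have
# produced aligned grayscale face crops; HOG and a one-vs-rest linear SVM
# stand in for the many feature/classifier choices the survey covers.
import numpy as np
from skimage.feature import hog
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

def extract_features(face_crop):
    # face_crop: aligned grayscale face, e.g. a 128x128 array
    return hog(face_crop, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# AU coding is multi-label: several action units can be active at once.
# X = np.stack([extract_features(f) for f in aligned_faces])
# Y = au_labels  # binary matrix, one column per action unit
# coder = OneVsRestClassifier(LinearSVC()).fit(X, Y)
# predicted_aus = coder.predict(X_new)
```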

    Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposals for less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected of suffering from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way, in order to test a scenario close to an OSA assessment application running on a mobile device (e.g., a smartphone or tablet). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation called the i-vector. A set of local craniofacial features related to OSA is extracted from the images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied to the facial features and i-vectors to estimate the AHI. The activities in this paper were funded by the Spanish Ministry of Economy and Competitiveness and the European Union (FEDER) as part of the TEC2012-37585-C02 (CMC-V2) project. The authors also thank Sonia Martinez Diaz for her effort in collecting the OSA database used in this study.
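
    The regression step lends itself to a brief sketch: SVR over the concatenated craniofacial measurements and speech i-vectors. The feature matrices, the RBF kernel and the hyperparameters here are illustrative assumptions; the front ends (AAM landmarking, i-vector extraction) are presumed already run.

```python
# Sketch of AHI estimation by support vector regression over the fused
# features; `facial_feats`, `ivectors` and `ahi` are assumed precomputed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# facial_feats: (n_subjects, n_craniofacial); ivectors: (n_subjects, ivec_dim)
# X = np.hstack([facial_feats, ivectors])   # early fusion of the two views
# y = ahi                                   # apnea-hypopnea index per subject
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
# model.fit(X, y); ahi_pred = model.predict(X_test)
```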

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting; however, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results using principal component analysis (PCA) when expressions are added to a data set. In order to achieve an expression-invariant face recognition system, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors, each representing one variation mode. This framework is able to deal with the shortcomings of PCA in less constrained environments while still preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that describe facial expression most effectively; we found that the best placement of landmarks for distinguishing different facial expressions is in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We also investigated the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise facial expressions. The synthesised facial expressions are visually more realistic than those generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes; the recognition results showed a slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
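
    The core tensor idea can be sketched in a few lines: arrange the vectorised scans in a subject x expression x coordinate tensor and take the SVD of each mode unfolding to obtain subject and expression subspaces (HOSVD-style). The shapes and data below are synthetic placeholders, not the thesis pipeline.

```python
# Illustrative HOSVD-style sketch: organise 3D face data in a
# (subject x expression x vertex-coordinate) tensor and apply SVD to each
# mode unfolding. Shapes and random data are assumptions for illustration.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# faces: vectorised 3D scans, shape (n_subjects, n_expressions, 3*n_vertices)
rng = np.random.default_rng(0)
faces = rng.standard_normal((10, 6, 300))

# SVD on the subject-mode unfolding gives a subject basis; the same applied
# to mode 1 gives an expression basis, separating the two variation factors.
U_subj, _, _ = np.linalg.svd(unfold(faces, 0), full_matrices=False)
U_expr, _, _ = np.linalg.svd(unfold(faces, 1), full_matrices=False)
print(U_subj.shape, U_expr.shape)  # (10, 10) (6, 6)
```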

    Generative Interpretation of Medical Images


    Assessing the existence of visual clues of human ovulation

    Is concealed human ovulation a myth? The author of this work tries to answer this question using a medium-sized database of facial images, specially created and tagged for the purpose. Analyzing possible facial changes over the menstrual cycle provides a formal tool for assessing the claim of concealed ovulation. Ordinarily, human ovulation remains concealed: there is no visible external sign of the fertile period in humans. Such external signs are clearly visible in many animals, such as baboons, dogs or elephants; some are visual (baboons) and others biochemical (dogs). Insects use pheromones, and other animals may use sounds to inform partners of their fertile period. The objective is not just to study visual signs of female ovulation, but also to understand and explain automatic image processing methods that could be used to extract precise landmarks from facial pictures. This could later be applied to studies of fluctuating asymmetry, a growing field in evolutionary biology whose development is hampered by the time needed to extract landmarks manually. In this work we tried to see whether any perceptible sign is present in the human face during ovulation, and how we can detect formal changes, if any, in facial appearance over the menstrual cycle. We photographed 50 young women daily for 32 days, taking several photos of each participant per day, and finally selected a set of 30 photos per participant representing the whole cycle. Of these, 600 photos were chosen to be manually tagged for verification purposes. The photos were organized in rating software that allowed human raters to view them and choose the two best-looking pictures of each participant; these results were then checked to highlight the relation between the chosen photos and the ovulation phase of the cycle. The results indicate that there are indeed some cues in the human face that could hint at ovulation. Subsequently, different automatic landmark detection methods were applied to the pictures to highlight possible changes in the face over the cycle. Although the precision of the tested methods is far from perfect, comparing these measurements with state-of-the-art beauty indices shows a slight change towards a prettier face during ovulation. The automatic methods tested were Active Appearance Models (AAM), deep neural networks and regression trees; for this kind of application, regression trees proved to be the best method. Future work is needed to firmly confirm these data: the number of human raters should be increased, and a proper training database should be developed to allow a learning process specific to this problem. We also think that low-level image processing will be necessary to achieve the final precision that could reveal more details of possible changes in human faces.
    Human ovulation is generally considered "concealed", that is, without external signs. Yet ovulation and the menstrual cycle constitute an extremely important hormonal change that repeats in every cycle, and believing that this change has no visible sign at all seems simplistic. Such external signs are very visible in animals such as baboons, dogs or elephants; some are visual (baboons) and others biochemical (dogs). Insects use pheromones, and other animals may use sounds to inform partners of their fertile period. Humans have come to hide, or at least camouflage, such signs over the course of evolution; the reasons for hiding or camouflaging ovulation in humans are not clear and are not discussed in this dissertation. In the first part of this work, after creating and annotating a medium-sized database of facial images, the author verifies whether signs of ovulation can be detected by other people, that is, whether changes that are a priori invisible can be perceived unconsciously by an observer. In the second part, the author analyses possible facial changes during the cycle in a formal way, using facial measurements; automatic image analysis methods provide the necessary data. An image database for this work was created from scratch, since no such database existed in the literature. Fifty young women agreed to participate in its creation; each was photographed daily for 32 days, with several photos taken per session. The photos were then culled to leave at most 30 per participant, and 600 photos were subsequently chosen to be manually annotated. These 600 annotated photos define the verification database, against which automatically obtained measurements can be checked. The objective of this work is not only to study the visual signs of female ovulation, but also to test and explain automatic image processing methods that could be used to extract landmarks from facial images. Automated landmark extraction could later be applied to studies of fluctuating asymmetry, a growing field in evolutionary biology that cannot develop easily because the time needed to extract reference points and landmarks manually is prohibitive. Moreover, fluctuating-asymmetry studies based on a single photograph may prove invalid if temporary facial changes exist; temporary changes, such as those during the menstrual cycle, show that phenotypic studies based on a single photograph cannot constitute a viable basis for establishing genotype-phenotype links. To see whether any perceptible sign is present in the human face during ovulation, the photos were organized in presentation software that allowed human observers to choose the two most attractive photos of each participant; these results were then analysed to highlight the relation between the chosen photos and the ovulation phase of the monthly cycle. The results suggested that there are indeed some cues in the face that could provide information about the ovulation period: observers chose as most attractive of each participant the photos taken in the days immediately before or after ovulation. In other words, it was clearly established that the same woman looked more attractive on the days close to her ovulation date. The software also collects data about the observers for later analysis of their behaviour towards the photographs; these observer data may give indications about the reasons why concealed ovulation developed during evolution.
    Next, different automatic landmark detection methods were applied to the images to detect the kind of changes occurring in the face during the cycle. The precision of the tested methods, although not perfect, allows some relations between the changes and attractiveness indices to be observed. The automatic methods tested were Active Appearance Models (AAM), Convolutional Neural Networks (CNN) and regression trees (Dlib-Rt); AAM and CNN were implemented in Python using the Keras library, while Dlib-Rt was implemented in C++ using OpenCV. The methods used are all learning-based and sacrifice precision. Comparing the results of the automatic methods with those obtained manually indicated that learning-based methods may not have the precision required for fluctuating-asymmetry studies or for studies of fine facial changes. Despite the lack of precision, it was observed that, for this kind of application, the best of the tested methods was regression trees. The data and measurements obtained constitute a database of cycle dates, facial measurements, social data and attractiveness data that can be used in later work. Future work must be conducted to firmly confirm these data: the number of human raters should be increased, and a suitable training database should be developed to allow the definition of a learning process specific to this problem. It was also observed that low-level image processing will be necessary to reach the final precision that could reveal fine details of changes in human faces. Transcribing the data and measurements into attractiveness indices and applying data-mining methods may reveal exactly which changes are involved during the menstrual cycle. The author also envisages that using a true-depth camera to obtain depth and volume data could refine these studies. Skin pigmentation and texture data should also be considered, in order to capture and observe every kind of facial change over the cycle. The data should also separate participants using chemical contraception, since such methods can interfere with hormone levels and introduce assessment errors. Finally, the same study could be carried out on men: since men do not undergo these hormonal changes, the appearance of any repeatable facial change could indicate the existence of hidden confounding factors.
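
    Since the regression-tree detector performed best, a minimal sketch of that step using dlib's ensemble-of-regression-trees shape predictor may be helpful; the pre-trained 68-point model file and the image path are assumptions, and this is not the study's own code.

```python
# Sketch of landmark extraction with dlib's regression-tree shape predictor.
# The pre-trained 68-point model must be downloaded separately; file names
# here are placeholders.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray):
    shape = predictor(gray, rect)
    # 68 (x, y) landmarks; facial measurements and asymmetry indices
    # can be derived from these points.
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```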

    Automatic Detection and Intensity Estimation of Spontaneous Smiles

    Both the occurrence and intensity of facial expressions are critical to what the face reveals. While much progress has been made towards the automatic detection of expression occurrence, controversy exists about how best to estimate expression intensity. Broadly, one approach is to adapt classifiers trained on binary ground truth to estimate expression intensity; an alternative is to explicitly train classifiers for the estimation of expression intensity. We investigated this issue by comparing multiple methods for binary smile detection and smile intensity estimation using two large databases of spontaneous expressions. SIFT and Gabor were used for feature extraction; Laplacian Eigenmap and PCA were used for dimensionality reduction; and binary SVM margins, multiclass SVMs, and ε-SVR models were used for prediction. Both multiclass SVMs and ε-SVR classifiers explicitly trained on intensity ground truth outperformed binary SVM margins for smile intensity estimation. A surprising finding was that multiclass SVMs also outperformed binary SVM margins on binary smile detection. This suggests that training on intensity ground truth is worthwhile even for binary expression detection.
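
    The compared estimators map naturally onto scikit-learn stand-ins, sketched below: a multiclass SVM and an ε-SVR, both trained on intensity labels, alongside the binary-margin baseline they outperform. Feature extraction and dimensionality reduction are assumed already done; variable names are illustrative, not the paper's code.

```python
# Sketch of the compared estimators; X and y_intensity are assumed to be
# precomputed features and ordinal smile-intensity labels (e.g. 0..5).
from sklearn.svm import SVC, SVR

# Estimators trained directly on intensity ground truth:
multiclass_svm = SVC(kernel="rbf", decision_function_shape="ovr")
epsilon_svr = SVR(kernel="rbf", epsilon=0.1)
# multiclass_svm.fit(X, y_intensity); epsilon_svr.fit(X, y_intensity)

# Binary-margin baseline: train on presence/absence only and reuse the
# margin as a proxy for intensity (the approach the models above beat).
binary_svm = SVC(kernel="rbf")
# binary_svm.fit(X, y_intensity > 0)
# intensity_proxy = binary_svm.decision_function(X_test)
```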

    Shape-appearance-correlated active appearance model

    Among the challenges faced by current active shape or appearance models, facial-feature localization in the wild, with occlusion, in a novel face image, i.e. in a generic environment, is regarded as one of the most difficult computer-vision tasks. In this paper, we propose an Active Appearance Model (AAM) to tackle this generic-environment problem. First, a fast face-model initialization scheme is proposed, based on the idea that the local appearance of feature points can be accurately approximated with locality constraints: nearest neighbours with poses and textures similar to the test face are retrieved from a training set to construct the initial face model. To further improve the fit of the initial model to the test face, an orthogonal CCA (oCCA) is employed to increase the correlation between shape features and appearance features, each represented by principal component analysis (PCA). With these two contributions, we propose a novel AAM, the shape-appearance-correlated AAM (SAC-AAM), whose optimization is solved using the recently proposed fast simultaneous inverse compositional (Fast-SIC) algorithm. Experimental results demonstrate a 5–10% improvement in fitting accuracy on controlled and semi-controlled datasets, and around a 10% improvement on wild face datasets, compared with other state-of-the-art AAM models.
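
    A rough sketch of the shape-appearance correlation step follows, using standard CCA from scikit-learn as a stand-in for the paper's orthogonal CCA: it maximises the correlation between PCA shape coefficients and PCA appearance coefficients. The data and dimensions here are random placeholders, not AAM training data.

```python
# Sketch of correlating PCA shape and appearance features with CCA
# (a stand-in for the paper's orthogonal CCA). Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
shapes = rng.standard_normal((200, 136))      # 68 landmarks x 2, per face
textures = rng.standard_normal((200, 1000))   # shape-normalised appearance

shape_pcs = PCA(n_components=20).fit_transform(shapes)
app_pcs = PCA(n_components=20).fit_transform(textures)

cca = CCA(n_components=10)
shape_c, app_c = cca.fit_transform(shape_pcs, app_pcs)  # correlated subspaces
```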

    Reconstruction of three-dimensional facial geometric features related to fetal alcohol syndrome using adult surrogates

    Fetal alcohol syndrome (FAS) is a condition caused by prenatal alcohol exposure. The diagnosis of FAS is based on the presence of central nervous system impairments, evidence of growth abnormalities and abnormal facial features. Direct anthropometry has traditionally been used to obtain facial data to assess the FAS facial features. Research efforts have focused on indirect anthropometry, such as 3D surface imaging systems, to collect facial data for facial analysis. However, 3D surface imaging systems are costly. As an alternative, this research study explored approaches for 3D reconstruction from a single 2D image of the face using a 3D morphable model (3DMM). The research project was accomplished in several steps. 3D facial data were obtained from the publicly available BU-3DFE database, developed by the State University of New York. The 3D face scans in the training set were landmarked by different observers, and the reliability and precision of the 3D landmark selection were evaluated: the intraclass correlation coefficients for intra- and inter-observer reliability were greater than 0.95, the average intra-observer error was 0.26 mm and the average inter-observer error was 0.89 mm. A rigid registration was performed on the 3D face scans in the training set. Following rigid registration, a dense point-to-point correspondence across the set of aligned face scans was computed using the Gaussian process model fitting approach. A 3DMM of the face was constructed from the fully registered 3D face scans and evaluated in terms of generalization, specificity and compactness; the quantitative evaluations show that the constructed 3DMM achieves reliable results. 3D face reconstructions from single 2D images were estimated based on the 3DMM: the Metropolis-Hastings algorithm was used to fit the 3DMM features to 2D image features to generate the 3D face reconstruction. Finally, the geometric accuracy of the reconstructed 3D faces was evaluated against ground-truth 3D face scans; the average root mean square error for the surface-to-surface comparisons between the reconstructed faces and the ground-truth face scans was 2.99 mm. In conclusion, a framework to estimate 3D face reconstructions from single 2D facial images was developed and the reconstruction errors were evaluated. The geometric accuracy of the 3D face reconstructions was comparable to that found in the literature. However, future work should consider minimizing reconstruction errors to acceptable clinical standards in order for the framework to be useful for 3D-from-2D reconstruction in general, and for developing FAS applications in particular. Future work should also consider estimating a 3D face from multi-view 2D images to increase the information available for 3D-from-2D reconstruction.
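
    The Metropolis-Hastings fitting step can be sketched as a random-walk sampler over 3DMM shape coefficients scored against 2D landmarks. The `project_landmarks` function, the Gaussian likelihood and the standard-normal prior below are assumptions for illustration, not the study's exact formulation.

```python
# Illustrative Metropolis-Hastings fit of 3DMM shape coefficients to 2D
# landmarks. `project_landmarks` (coefficients -> projected 2D landmarks)
# is a hypothetical placeholder for the 3DMM projection.
import numpy as np

def metropolis_hastings(lm2d, project_landmarks, n_coeffs=40,
                        n_iters=5000, step=0.05, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_coeffs)

    def log_post(t):
        resid = project_landmarks(t) - lm2d   # (N, 2) landmark residuals
        log_lik = -np.sum(resid ** 2) / (2 * sigma ** 2)
        log_prior = -0.5 * np.sum(t ** 2)     # standard-normal 3DMM prior
        return log_lik + log_prior

    lp = log_post(theta)
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal(n_coeffs)  # random walk
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:             # accept/reject
            theta, lp = prop, lp_prop
    return theta
```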