
    Gaussian processes for modeling of facial expressions

    Automated analysis of facial expressions has been gaining significant attention over the past years. This stems from the fact that it constitutes the first step toward developing some of the next-generation computer technologies that can make an impact in many domains, ranging from medical imaging and health assessment to marketing and education. Regardless of the target application, there is a pressing need for systems that can be deployed under demanding real-world conditions and that generalize well across the population. Hence, numerous factors must be carefully considered before designing such a system. The work presented in this thesis focuses on tackling two important problems in automated analysis of facial expressions: (i) view-invariant facial expression analysis, and (ii) modeling of the structural patterns in the face, in terms of well-coordinated facial muscle movements. Driven by the necessity for efficient and accurate inference mechanisms, we explore machine learning techniques based on the probabilistic framework of Gaussian processes (GPs). Our ultimate goal is to design powerful models that can efficiently handle imagery with spontaneously displayed facial expressions, and explain in detail the complex configurations of the human face in real-world situations. To effectively decouple head pose and expression in the presence of large out-of-plane head rotations, we introduce a manifold learning approach based on multi-view learning strategies. Contrary to the majority of existing methods, which typically treat the numerous poses as individual problems, in this model we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Hence, the pose normalization problem is solved by aligning the facial expressions from different poses in a common latent space.
We demonstrate that the recovered manifold can efficiently generalize to various poses and expressions even from a small amount of training data, while also being largely robust to image features corrupted by illumination variations. State-of-the-art performance is achieved in the task of facial expression classification of basic emotions. The methods that we propose for learning the structure in the configuration of the muscle movements represent some of the first attempts in the field of analysis and intensity estimation of facial expressions. In these models, we extend our multi-view approach to exploit relationships not only in the input features but also in the multi-output labels. The structure of the outputs is imposed on the recovered manifold either through heuristically defined hard constraints, or in an auto-encoded manner, where the structure is learned automatically from the input data. The resulting models prove robust to data with imbalanced expression categories, owing to our proposed Bayesian learning of the target manifold. We also propose a novel regression approach based on a product of GP experts, in which we take into account people's individual expressiveness in order to adapt the learned models to each subject. We demonstrate the superior performance of our proposed models on the tasks of facial expression recognition and intensity estimation.
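The product-of-GP-experts idea mentioned above can be illustrated with a minimal sketch (not the thesis model itself; the data, kernel settings, and two-expert split are invented for illustration): each expert is a plain GP regressor, and their predictions are fused by the standard product-of-experts rule, in which precisions add and means are precision-weighted.

```python
# Illustrative sketch of a product of GP experts (toy 1-D data, not the
# thesis model): two GPs trained on different regions are fused so that
# each prediction is weighted by its confidence (inverse variance).
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_tr, y_tr, x_te, noise=1e-2):
    """Standard GP regression posterior mean and per-point variance."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_te, x_tr)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_tr
    var = np.diag(rbf(x_te, x_te) - Ks @ Kinv @ Ks.T) + noise
    return mu, var

rng = np.random.default_rng(0)
f = np.sin
x1 = rng.uniform(0, 3, 20)            # expert 1 sees the left region
x2 = rng.uniform(3, 6, 20)            # expert 2 sees the right region
y1 = f(x1) + 0.1 * rng.standard_normal(20)
y2 = f(x2) + 0.1 * rng.standard_normal(20)
x_te = np.linspace(0, 6, 50)

mu1, v1 = gp_predict(x1, y1, x_te)
mu2, v2 = gp_predict(x2, y2, x_te)

# Product of experts: precisions add, means are precision-weighted, so each
# expert dominates where it is confident (inside its own training region).
prec = 1 / v1 + 1 / v2
mu_poe = (mu1 / v1 + mu2 / v2) / prec
var_poe = 1 / prec
```

Because the combined precision is the sum of the experts' precisions, the fused variance is always smaller than either individual variance, and the fused mean is a convex combination of the experts' means.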

    Multi-Conditional Latent Variable Model for Joint Facial Action Unit Detection

    We propose a novel multi-conditional latent variable model for simultaneous facial feature fusion and detection of facial action units. In our approach we exploit the structure-discovery capabilities of generative models, such as Gaussian processes, and the discriminative power of classifiers, such as the logistic function. This leads to superior performance compared to existing classifiers for the target task that exploit either the discriminative or the generative property, but not both. Model learning is performed via an efficient, newly proposed Bayesian learning strategy based on Monte Carlo sampling. Consequently, the learned model is robust to overfitting, regardless of the number of input features and jointly estimated facial action units. Extensive qualitative and quantitative experimental evaluations are performed on three publicly available datasets (CK+, Shoulder-pain and DISFA). We show that the proposed model outperforms state-of-the-art methods for the target task in (i) feature fusion, and (ii) detection of multiple facial action units.
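The generative-plus-discriminative pattern described above can be loosely sketched as follows. This is not the paper's model: as stand-ins, a linear-Gaussian latent projection (PCA) plays the role of the generative structure-discovery step over fused features, and a logistic classifier trained by gradient descent plays the discriminative role; the synthetic data and all dimensions are invented.

```python
# Loose illustration (stand-ins, not the paper's model): fuse two feature
# sets by concatenation, discover latent structure generatively (PCA here),
# then detect a binary "action unit" with a discriminative logistic classifier.
import numpy as np

rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 2, n)                      # binary AU present/absent
# Two synthetic "descriptors" that both carry the label signal plus noise.
feat_a = labels[:, None] + 0.3 * rng.standard_normal((n, 5))
feat_b = labels[:, None] + 0.3 * rng.standard_normal((n, 8))
X = np.hstack([feat_a, feat_b])                     # feature fusion

# Generative step: project fused features onto the top principal components.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                                   # 2-D latent representation

# Discriminative step: logistic regression on the latent codes.
Zb = np.hstack([Z, np.ones((n, 1))])                # add a bias column
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-Zb @ w))                   # predicted probabilities
    w -= 0.1 * Zb.T @ (p - labels) / n              # logistic-loss gradient step

pred = (1 / (1 + np.exp(-Zb @ w)) > 0.5).astype(int)
accuracy = (pred == labels).mean()
```

On this strongly separable toy data the two-stage pipeline recovers the label from the fused latent codes; the paper's contribution is to couple the two stages in one multi-conditional model rather than running them sequentially.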

    Sparse Gaussian Processes with Spherical Harmonic Features Revisited

    We revisit the Gaussian process model with spherical harmonic features and study connections between the associated RKHS, its eigenstructure, and deep models. Based on this, we introduce a new class of kernels that correspond to deep models of continuous depth. In our formulation, depth can be estimated as a kernel hyperparameter by optimizing the evidence lower bound. Further, we introduce sparsity in the eigenbasis by variational learning of the spherical harmonic phases. This enables scaling to larger input dimensions than previously possible, while also allowing for learning of high-frequency variations. We validate our approach on machine learning benchmark datasets.
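The underlying principle, treating a model property as a kernel hyperparameter scored by model evidence, can be sketched on a toy problem. The sketch below uses the exact GP log marginal likelihood of a plain RBF kernel over a grid of lengthscales (not the paper's ELBO over spherical-harmonic features, and with invented data); the mechanics of evidence-based selection are the same.

```python
# Minimal sketch of evidence-based hyperparameter selection: score candidate
# RBF lengthscales by the exact GP log marginal likelihood on toy data.
# (The paper instead optimizes an ELBO with spherical harmonic features and
# treats depth as the hyperparameter; this shows only the selection principle.)
import numpy as np

def log_marginal_likelihood(x, y, ls, noise=0.05):
    """log p(y | x, ls) for a zero-mean GP with an RBF kernel."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / ls) ** 2) + noise * np.eye(len(x))
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * (y @ alpha + logdet + len(x) * np.log(2 * np.pi))

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)
y = np.sin(x) + 0.1 * rng.standard_normal(60)   # smooth data, lengthscale ~1

candidates = [0.01, 0.1, 1.0, 10.0]
scores = {ls: log_marginal_likelihood(x, y, ls) for ls in candidates}
best = max(scores, key=scores.get)              # evidence picks the lengthscale
```

The evidence automatically trades off data fit against model complexity: the far-too-small and far-too-large lengthscales both score poorly, and a value matching the data's true smoothness wins.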

    DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding

    The human face exhibits an inherent hierarchy in its representations (i.e., holistic facial expressions can be encoded via a set of facial action units (AUs) and their intensities). Variational (deep) auto-encoders (VAEs) have shown great results in unsupervised extraction of hierarchical latent representations from large amounts of image data, while being robust to noise and other undesired artifacts. This potentially makes VAEs a suitable approach for learning facial features for AU intensity estimation. Yet, most existing VAE-based methods apply classifiers learned separately from the encoded features. By contrast, non-parametric (probabilistic) approaches, such as Gaussian processes (GPs), typically outperform their parametric counterparts, but cannot easily deal with large amounts of data. To this end, we propose a novel semi-parametric VAE modeling framework, named DeepCoder, which combines the modeling power of parametric (convolutional) and nonparametric (ordinal GP) VAEs, for joint learning of (1) latent representations at multiple levels in a task hierarchy, and (2) classification of multiple ordinal outputs. We show on benchmark datasets for AU intensity estimation that the proposed DeepCoder outperforms the state-of-the-art approaches, as well as related VAEs and deep learning models. (Accepted at ICCV 2017.)
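The parametric-encoder-plus-nonparametric-predictor pattern can be sketched with heavily simplified stand-ins. Here a linear PCA projection plays the role of the parametric encoder (instead of a convolutional VAE), a GP regressor maps latent codes to intensity, and the ordinal output is obtained by rounding onto the discrete levels 0 to 5; the data and every setting are invented for illustration and this is not the DeepCoder architecture.

```python
# Hedged sketch in the spirit of a semi-parametric pipeline (all components
# are simplified stand-ins): a parametric linear encoder (PCA) compresses the
# inputs, and a nonparametric GP regressor predicts an ordinal AU intensity,
# rounded onto the discrete levels {0, ..., 5}.
import numpy as np

rng = np.random.default_rng(3)
n, d = 150, 20
intensity = rng.integers(0, 6, n)                  # ordinal intensity labels
X = intensity[:, None] * np.ones((1, d)) + 0.5 * rng.standard_normal((n, d))

# Parametric stage: 1-D linear encoding via the top principal component.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                                     # latent code per sample

# Nonparametric stage: GP regression from latent code to intensity.
def rbf(a, b, ls=2.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

K = rbf(z, z) + 0.1 * np.eye(n)                    # kernel plus noise jitter
alpha = np.linalg.solve(K, intensity - intensity.mean())
pred_cont = rbf(z, z) @ alpha + intensity.mean()   # continuous GP prediction
pred_ord = np.clip(np.round(pred_cont), 0, 5).astype(int)
exact = (pred_ord == intensity).mean()             # exact-match rate (train)
```

Rounding a continuous GP output is a crude substitute for the ordinal GP likelihood the paper uses, but it shows why a flexible nonparametric predictor on top of compact learned codes is attractive for intensity estimation.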

    Oxidative Stress in Patients Undergoing Peritoneal Dialysis: A Current Review of the Literature

    Peritoneal dialysis (PD) patients manifest excessive oxidative stress (OS) compared to the general population and to predialysis chronic kidney disease patients, mainly due to the composition of the PD solution (high glucose content, low pH, elevated osmolality, increased lactate concentration, and glucose degradation products). However, PD could be considered a more biocompatible form of dialysis than hemodialysis (HD), since several studies showed that the latter results in an excess accumulation of oxidative products and loss of antioxidants. OS in PD is tightly linked with chronic inflammation, atherogenesis, peritoneal fibrosis, and loss of residual renal function. Although exogenous supplementation of antioxidants, such as vitamins E and C, N-acetylcysteine, and carotenoids, has in some cases shown potentially beneficial effects in PD patients, relevant recommendations have not yet been adopted in everyday clinical practice.

    Self-reported risk of obstructive sleep apnea syndrome, and awareness about it in the community of 4 insular complexes comprising 41 Greek Islands

    Obstructive Sleep Apnea Syndrome (OSAS) is a chronic disease that significantly increases the morbidity and mortality of the affected population. There is a lack of data concerning the prevalence of OSAS in the insular part of Greece. The purpose of this study was to investigate the self-reported prevalence of OSAS in 4 Greek insular complexes comprising 41 islands, and to assess the population's awareness of OSAS and its diagnosis. Our study comprised 700 participants from 41 islands of the Ionian, Cyclades, Dodecanese and Northeast Aegean island complexes, who were studied by means of questionnaires in a randomized telephone survey (response rate of 25.74%). Participants were assessed with the Berlin Questionnaire (BQ) for evaluation of OSA risk, with the Epworth Sleepiness Scale (ESS) for evaluation of excessive daytime sleepiness, and with 3 questions regarding knowledge and diagnosis of OSAS. The percentage of participants at high risk according to the BQ was 27.29%, and the percentage at high risk according to the ESS was 15.43%. A percentage of 6.29% of the population was at high risk for OSAS (high risk on both the BQ and the ESS). A high percentage (73.43%) were aware of OSAS as a syndrome; however, a significantly smaller percentage (28.00%) was aware of how a diagnosis of OSAS is established. The community prevalence of OSAS in the Greek islands, in combination with the low awareness of OSAS diagnostic methods, highlights the need to develop health promotion programs aimed at increasing the detection of patients at risk while increasing awareness of OSAS.

    Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multi-view and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multi-view and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in a view-invariant manner (using only a single view of the expression) or in a multi-view manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, Labeled Face Parts in the Wild, and Static Facial Expressions in the Wild). We show that this model outperforms state-of-the-art methods for multi-view and view-invariant facial expression classification, as well as several state-of-the-art methods for multi-view learning and feature fusion.
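The shared-manifold idea, aligning different views of the same expression in one latent space and classifying there, can be illustrated with classical canonical correlation analysis as a stand-in (CCA is a linear method, not the DS-GPLVM; the two "views", class structure, and all dimensions below are synthetic).

```python
# Illustrative stand-in for the shared-manifold idea (not DS-GPLVM itself):
# align two linear "views" of the same expressions in a common latent space
# with classical CCA, then classify across views by nearest class mean.
import numpy as np

rng = np.random.default_rng(4)
n = 120
labels = rng.integers(0, 3, n)                     # three expression classes
shared = np.eye(3)[labels] + 0.2 * rng.standard_normal((n, 3))
# Each view is a different random linear "rendering" of the shared content.
A1 = rng.standard_normal((3, 6))
A2 = rng.standard_normal((3, 6))
view1 = shared @ A1 + 0.1 * rng.standard_normal((n, 6))
view2 = shared @ A2 + 0.1 * rng.standard_normal((n, 6))

def cca(X, Y, k=3, reg=1e-3):
    """Top-k CCA projections via SVD of the whitened cross-covariance."""
    m = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / m + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / m + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / m
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T  # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, _, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Xc @ Wx @ U[:, :k], Yc @ Wy @ Vt.T[:, :k]

Z1, Z2 = cca(view1, view2)

# Cross-view classification: class means from view-1 latents, applied to
# view-2 latents, which is possible because the latent space is shared.
means = np.stack([Z1[labels == c].mean(0) for c in range(3)])
pred = np.argmin(((Z2[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
```

Because both projections maximize correlation with each other, samples from either view land near the same latent location, so a classifier trained in one view transfers to the other; DS-GPLVM achieves this nonlinearly and discriminatively.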