370 research outputs found

    Gauge Invariant Framework for Shape Analysis of Surfaces

    This paper describes a novel framework for computing geodesic paths in shape spaces of spherical surfaces under an elastic Riemannian metric. The novelty lies in defining this Riemannian metric directly on the quotient (shape) space, rather than inheriting it from the pre-shape space, and using it to formulate a path energy that measures only the normal components of velocities along the path. In other words, this paper defines and solves for geodesics directly on the shape space and avoids complications resulting from the quotient operation. This comprehensive framework is invariant to arbitrary parameterizations of surfaces along paths, a property termed gauge invariance. Additionally, this paper links the different elastic metrics used in the computer science literature with those in the mathematical literature, and provides a geometrical interpretation of the terms involved. Examples using real and simulated 3D objects are provided to help illustrate the main ideas.
    Comment: 15 pages, 11 figures; to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence in a better resolution
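As a toy illustration of the path-energy idea above, the following sketch computes a first-order discrete path energy for a sampled path of shape representations. It is a deliberate simplification, assuming shapes are flattened Euclidean vectors and penalizing the full velocity rather than only its normal component, so it shows the generic notion of path energy rather than the paper's gauge-invariant metric:

```python
import numpy as np

def path_energy(path):
    """Discrete path energy E = N * sum_i ||x_{i+1} - x_i||^2.

    `path` is an (N+1, d) array of shape representations sampled along
    the path; among paths with the same endpoints, the straight line
    (the Euclidean geodesic) minimizes this energy.
    """
    path = np.asarray(path, dtype=float)
    diffs = np.diff(path, axis=0)          # discrete velocities v_i
    n = len(diffs)
    return n * float(np.sum(diffs ** 2))

# A straight path has lower energy than a detour between the same endpoints.
straight = np.linspace([0.0, 0.0], [1.0, 0.0], 5)
detour = straight + np.array([[0, 0], [0, .3], [0, .5], [0, .3], [0, 0]])
```

Minimizing this energy over interior points of the path is the discrete analogue of solving for a geodesic.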

    Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

    In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features, extracted from still images, in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of Symmetric Positive Definite (SPD) matrices. By conducting the classification of static facial expressions using a Support Vector Machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. In addition, we propose a novel solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment for deep covariance trajectory classification. By performing extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches.
    Comment: A preliminary version of this work appeared in "Otberdout N, Kacem A, Daoudi M, Ballihi L, Berretti S. Deep Covariance Descriptors for Facial Expression Recognition, in British Machine Vision Conference 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018. ; 2018 :159." arXiv admin note: substantial text overlap with arXiv:1805.0386
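The covariance-descriptor pipeline above can be sketched in a few lines. This is a minimal illustration, assuming feature vectors are already extracted; the kernel shown is the standard log-Euclidean Gaussian kernel on SPD matrices, which is one valid choice and may differ from the kernel used in the paper:

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Compact covariance descriptor of a set of feature vectors.

    `features` is (n_samples, d); a small ridge keeps the matrix
    strictly positive definite, i.e. a point on the SPD manifold.
    """
    f = np.asarray(features, dtype=float)
    return np.cov(f, rowvar=False) + eps * np.eye(f.shape[1])

def log_euclidean_gaussian_kernel(a, b, gamma=1.0):
    """A valid Gaussian kernel on SPD matrices via the matrix logarithm."""
    def logm(m):
        w, v = np.linalg.eigh(m)          # symmetric eigendecomposition
        return (v * np.log(w)) @ v.T      # v diag(log w) v^T
    d2 = float(np.sum((logm(a) - logm(b)) ** 2))
    return np.exp(-gamma * d2)
```

The kernel values can then be fed to any kernel SVM implementation as a precomputed Gram matrix.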

    Flood Risk and Vulnerability of Jeddah City, Saudi Arabia

    Coastal cities are often vulnerable to the risks associated with floods, hence the need to sensitize decision-makers to the threats posed by climate hazards and uncontrolled urbanization. This study follows that logic and aims to identify and map the flood zones of the city of Jeddah in order to reduce their vulnerability and to integrate them into flood-risk prevention and mitigation strategies. The recent floods of 2009 and 2011 caused heavy human and material losses that will permanently mark the collective memory of the city's inhabitants. The multisource and diachronic data used, together with the methodology adopted, made it possible to perform a multi-criteria spatial analysis combining optical satellite imagery and radar DEM, topographic and geological maps, rainfall records, and available statistical data. Risk factors were thus identified and combined to understand and assess the gravity of recent disasters and to provide planners and decision-makers with tools for effective and adequate management of the ever-changing urban space, in a context of climate change and increased anthropogenic pressure on coastal cities.
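A common way to combine risk factors in such a multi-criteria spatial analysis is a weighted overlay of normalized factor rasters. The sketch below is a generic illustration under assumed factor names and weights, not the specific weighting scheme used in the study:

```python
import numpy as np

def flood_susceptibility(factors, weights):
    """Weighted linear combination of normalized risk-factor rasters.

    `factors` maps a factor name (e.g. slope, rainfall intensity,
    drainage density) to a 2D array; each raster is min-max scaled
    to [0, 1] before being combined with its normalized weight.
    """
    total = sum(weights.values())
    out = None
    for name, raster in factors.items():
        r = np.asarray(raster, dtype=float)
        lo, hi = r.min(), r.max()
        norm = (r - lo) / (hi - lo) if hi > lo else np.zeros_like(r)
        term = (weights[name] / total) * norm
        out = term if out is None else out + term
    return out   # susceptibility index in [0, 1] per cell
```

Higher index values flag cells where several unfavorable factors coincide, which can then be cross-checked against observed flood extents.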

    Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets

    In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmark motion as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics of different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed to sequences of landmarks and then to image sequences by editing the texture information using another conditional Generative Adversarial Network. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our proposed approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating realistic videos with continuous motion, realistic appearance, and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer, and data augmentation for training improved emotion recognition models.
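The key representation above is a landmark-motion curve encoded as a point on a hypersphere. The sketch below illustrates one simple way to obtain such an encoding (per-frame centring plus unit-norm scaling); it is a simplified stand-in for the paper's curve representation, not the exact construction:

```python
import numpy as np

def curve_to_hypersphere_point(landmark_seq):
    """Encode a landmark-motion sequence as a unit-norm vector.

    `landmark_seq` is (T, n_landmarks, 2). Centring each frame removes
    translation; scaling the flattened trajectory to unit norm places it
    on a hypersphere, making curves of different amplitude comparable.
    """
    x = np.asarray(landmark_seq, dtype=float)
    x = x - x.mean(axis=1, keepdims=True)      # per-frame centring
    v = x.reshape(-1)
    return v / np.linalg.norm(v)

def sphere_geodesic_distance(p, q):
    """Great-circle distance between two points on the unit hypersphere."""
    return float(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))
```

Because translation and global scale are factored out, two sequences that differ only by those transforms map to (nearly) the same point on the sphere.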

    Designing Islamic Finance Programmes in a Competitive Educational Space: The Islamic Economics Institute Experiment

    This paper explores the experience of the Islamic Economics Institute (IEI) of King Abdulaziz University in designing the first-ever Islamic finance higher-education programme at a Saudi public university. An evaluative analytical framework has been utilized to meet this goal. Results show that the Institute has pursued a 'glocalization' approach, thinking globally and acting locally, in designing the programme. This approach aims at providing learners with cutting-edge skills that will enhance their chances of employment in local as well as regional markets. What are the advantages of this approach? And how can the Institute preserve the 'distinctive research' positioning it has gained over the years while, at the same time, providing 'world-class' educational programmes?

    Blocking Adult Images Based on Statistical Skin Detection

    This work is aimed at the detection of adult images that appear on the Internet. Skin detection is of paramount importance in the detection of adult images. We build a maximum entropy model for this task. This model, called the First Order Model in this paper, is subject to constraints on the color gradients of neighboring pixels. Parameter estimation as well as optimization cannot be tackled without approximations. With the Bethe tree approximation, parameter estimation is greatly simplified, and the Belief Propagation algorithm yields an exact and fast solution for skin probabilities at pixel locations. We show via Receiver Operating Characteristic (ROC) curves that our skin detection improves on previous work in terms of skin pixel detection rate and false-positive rate. The output of skin detection is a grayscale skin map, with the gray level indicating the belief of skin. We then calculate 9 simple features from this map, which form a feature vector. We use fit ellipses to capture the characteristics of the skin distribution. Two fit ellipses are used for each skin map: the fit ellipse of all skin regions and the fit ellipse of the largest skin region, called respectively the Global Fit Ellipse and the Local Fit Ellipse in this paper. A multi-layer perceptron classifier is trained on these features. Extensive experimental results are presented, including photographs and a ROC curve calculated over a test set of 5,084 photographs, which show promising performance for such simple features.
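A fit ellipse of a skin map can be computed from the second moments of the detected skin pixels: the centroid gives the ellipse centre, and the eigenvectors/eigenvalues of the pixel-coordinate covariance give its orientation and axes. The sketch below illustrates the Global Fit Ellipse idea under that moment-based assumption; the paper's exact fitting procedure may differ:

```python
import numpy as np

def fit_ellipse(skin_map, threshold=0.5):
    """Moment-based fit ellipse of all pixels whose skin belief
    exceeds `threshold` (the 'Global Fit Ellipse' idea).

    Returns (centroid, semi-axes, axis directions).
    """
    ys, xs = np.nonzero(np.asarray(skin_map) > threshold)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)     # ascending eigenvalues
    # semi-axes ~ 2*sqrt(eigenvalue), as for a uniform-density ellipse
    axes = 2.0 * np.sqrt(np.maximum(evals, 0.0))
    return centroid, axes, evecs
```

Applying the same routine to only the largest connected skin region would give the Local Fit Ellipse.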

    Transformer-based Self-supervised Multimodal Representation Learning for Wearable Emotion Recognition

    Recently, wearable emotion recognition based on peripheral physiological signals has drawn massive attention due to its less invasive nature and its applicability in real-life scenarios. However, how to effectively fuse multimodal data remains a challenging problem. Moreover, traditional fully supervised approaches suffer from overfitting given limited labeled data. To address the above issues, we propose a novel self-supervised learning (SSL) framework for wearable emotion recognition, where efficient multimodal fusion is realized with temporal convolution-based modality-specific encoders and a transformer-based shared encoder, capturing both intra-modal and inter-modal correlations. Extensive unlabeled data is automatically assigned labels by five signal transforms, and the proposed SSL model is pre-trained with signal transformation recognition as a pretext task, allowing the extraction of generalized multimodal representations for emotion-related downstream tasks. For evaluation, the proposed SSL model was first pre-trained on a large-scale self-collected physiological dataset, and the resulting encoder was subsequently frozen or fine-tuned on three public supervised emotion recognition datasets. Ultimately, our SSL-based method achieved state-of-the-art results in various emotion classification tasks. Meanwhile, the proposed model proved to be more accurate and robust than fully supervised methods in low-data regimes.
    Comment: Accepted by IEEE Transactions on Affective Computing
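The pretext task above assigns each unlabeled signal a label equal to the index of the transform applied to it. The sketch below shows the general pattern with five illustrative transforms; the transform names and parameters here are assumptions, not the paper's exact set:

```python
import numpy as np

# Five simple signal transforms used as pretext-task classes
# (illustrative choices; the paper's transform set may differ).
def add_noise(x, rng): return x + 0.05 * rng.standard_normal(x.shape)
def scale(x, rng):     return x * rng.uniform(0.7, 1.3)
def flip(x, rng):      return -x
def reverse(x, rng):   return x[::-1].copy()
def permute(x, rng):
    segs = np.array_split(x, 4)    # shuffle the signal in 4 segments
    rng.shuffle(segs)
    return np.concatenate(segs)

TRANSFORMS = [add_noise, scale, flip, reverse, permute]

def make_pretext_example(signal, rng):
    """Pick a transform at random; its index is the self-supervised label."""
    label = int(rng.integers(len(TRANSFORMS)))
    return TRANSFORMS[label](signal, rng), label
```

A classifier trained to recognize which transform was applied learns signal representations without any emotion labels; the encoder is then frozen or fine-tuned for the downstream emotion task.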

    Local 3D Shape Analysis for Facial Expression Recognition (Analyse locale de la forme 3D pour la reconnaissance d'expressions faciales)

    In this paper we propose a novel approach for identity-independent 3D facial expression recognition. Our approach is based on shape analysis of local patches extracted from a 3D facial shape model. A Riemannian framework is applied to compute geodesic distances between corresponding patches belonging to different faces of the BU-3DFE database and conveying different expressions. Quantitative measures of similarity are obtained and then used as inputs to several classification methods. Using Multiboosting and Support Vector Machine (SVM) classifiers, we achieved average recognition rates of 98.81% and 97.75%, respectively.
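The classification input described above is a vector of per-patch distances between two faces. The sketch below shows that pattern in its simplest form, with a plain Euclidean distance standing in for the Riemannian geodesic distance (an assumption; the actual metric in the paper is geodesic):

```python
import numpy as np

def patch_distance_features(face_a, face_b, metric):
    """Feature vector of distances between corresponding local patches.

    `face_a`, `face_b` are sequences of per-patch representations;
    `metric` is any pairwise distance function (here Euclidean is a
    stand-in for the geodesic distance on the shape manifold).
    """
    return np.array([metric(p, q) for p, q in zip(face_a, face_b)])

def euclid(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
```

Each such vector, computed between a probe face and reference faces of known expression, becomes one training/test sample for the Multiboosting or SVM classifier.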