16 research outputs found

    Knowledge of soft biometrics and recognition of facial expressions

    Since soft biometric traits can provide sufficient evidence to precisely determine a person's identity, face-based soft biometric identification has received increasing attention in recent years. Among face-based soft biometrics, gender and ethnicity are both key demographic attributes of human beings and play a fundamental role in automatic machine-based face analysis. Meanwhile, facial expression recognition is another challenging problem in face analysis because of the diversity and hybridity of human expressions across subjects, cultures, genders and contexts. This Ph.D. thesis combines 2D facial texture and 3D face morphology to estimate people's soft biometrics (gender, ethnicity, etc.) and to recognize facial expressions. For gender and ethnicity recognition, we present an effective and efficient approach that combines boosted local texture and shape features extracted from 3D face models, in contrast to existing methods that depend only on either the 2D texture or the 3D shape of faces. To comprehensively represent the differences between genders or ethnic groups, we propose a novel local descriptor, namely local circular patterns (LCP). LCP improves the widely used local binary patterns (LBP) and its variants by replacing the binary quantization with a clustering-based one, resulting in higher discriminative power as well as better robustness to noise. The subsequent Adaboost-based feature selection then finds the most discriminative gender- and ethnicity-related features and assigns them different weights to highlight their importance in classification, which not only further raises performance but also reduces time and memory cost. Experimental results on the FRGC v2.0 and BU-3DFE data sets clearly demonstrate the advantages of the proposed method.
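The binary-vs-clustering quantization contrast between LBP and LCP can be sketched roughly as follows; the LCP variant below is an illustrative assumption (plain Lloyd k-means over the 8-neighbour difference vectors), not the thesis's exact formulation:

```python
import numpy as np

def lbp_codes(img):
    """Classic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and pack the 8 comparison bits into one code per interior pixel."""
    c = img[1:-1, 1:-1]
    # 8 neighbours, clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def lcp_codes(img, k=8, iters=10, seed=0):
    """Toy LCP-style variant (assumption: the thesis's exact formulation may
    differ): quantize the 8-D neighbour-difference vectors with k-means
    instead of per-neighbour binary thresholding."""
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[1:-1, 1:-1].astype(np.float64)
    diffs = np.stack(
        [img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx].astype(np.float64) - c
         for dy, dx in shifts], axis=-1).reshape(-1, 8)
    rng = np.random.default_rng(seed)
    centers = diffs[rng.choice(len(diffs), size=k, replace=False)]
    for _ in range(iters):  # plain Lloyd iterations
        labels = np.argmin(((diffs[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = diffs[labels == j].mean(axis=0)
    return labels.reshape(c.shape)
```

Because cluster centres adapt to the actual distribution of neighbour differences, small noise-induced sign flips around the centre value no longer flip a hard bit, which is the intuition behind the claimed robustness gain.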
For facial expression recognition, we present a fully automatic multi-modal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors to achieve efficiency and robustness. First, a large set of fiducial facial landmarks is localized on the 2D face images and their corresponding 3D face scans using a novel algorithm, the incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor, in conjunction with the widely used first-order gradient-based SIFT descriptor, is employed to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors built from first-order and second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve accuracy. Comprehensive experimental results demonstrate impressive complementarity between the 2D and 3D descriptors. On the BU-3DFE benchmark, our multi-modal feature-based approach outperforms the state-of-the-art methods with an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.
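The score-level part of this fusion can be sketched in a few lines; the weights, score matrices, and function names here are illustrative assumptions, not the thesis's actual code:

```python
import numpy as np

def fuse_features(feat_list):
    """Feature-level fusion: concatenate the descriptor vectors of each sample
    before training a single classifier on the joint representation."""
    return np.hstack(feat_list)

def fuse_scores(score_list, weights=None):
    """Score-level fusion sketch (assumption: the thesis may use a different
    rule): average the per-class score matrices produced by the per-descriptor
    SVMs, optionally weighted, then pick the top-scoring class per sample."""
    weights = np.ones(len(score_list)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    fused = sum(w * np.asarray(s, float) for w, s in zip(weights, score_list))
    return fused, fused.argmax(axis=1)
```

In this toy rule, a 2D descriptor and a 3D descriptor that err on different samples can correct each other after averaging, which is the complementarity the experiments above point to.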

    Bivariate polynomial-based secret sharing schemes with secure secret reconstruction

    A (t,n)-threshold scheme with secure secret reconstruction, or a (t,n)-SSR scheme for short, is a (t,n)-threshold scheme secure against an outside adversary who holds no valid share but can impersonate a participant in the secret reconstruction phase. We point out that previous bivariate polynomial-based (t,n)-SSR schemes, such as those of Harn et al. (Information Sciences 2020), are insecure, because the outside adversary may obtain the secret by solving a system of [Formula presented] linear equations. We revise the scheme of Harn et al. and obtain, for the first time, a secure (t,n)-SSR scheme based on a symmetric bivariate polynomial, where t⩽n⩽2t-1. To widen the range of n for a given t, we construct, for the first time, a secure (t,n)-SSR scheme based on an asymmetric bivariate polynomial, where n⩾t. The share sizes of our schemes are the same or almost the same as those of other existing, insecure bivariate polynomial-based (t,n)-SSR schemes. Moreover, our asymmetric bivariate polynomial-based (t,n)-SSR scheme is easier to construct than the Chinese Remainder Theorem-based (t,n)-SSR scheme, which imposes a stringent condition on the moduli, while their share sizes are almost the same.
    Ministry of Education (MOE). The work of Jian Ding and Changlu Lin was supported in part by the National Natural Science Foundation of China under Grant Nos. U1705264 and 61572132, in part by the Natural Science Foundation of Fujian Province under Grant No. 2019J01275, in part by the Guangxi Key Laboratory of Trusted Software under Grant No. KX202039, and in part by the University Natural Science Research Project of Anhui Province under Grant No. KJ2020A0779. The work of Pinhui Ke was supported by the National Natural Science Foundation of China under Grant Nos. 61772292 and 61772476. The work of Huaxiong Wang was supported by the Singapore Ministry of Education under Grant Nos. RG12/19 and RG21/18 (S).
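The symmetric-bivariate-polynomial machinery underlying such schemes can be sketched as follows; this toy version (hypothetical field size and function names, and none of the secure-reconstruction countermeasures that are the paper's actual contribution) only shows share generation and reconstruction:

```python
import random

P = 2_147_483_647  # a Mersenne prime; toy field, not production-grade

def make_shares(secret, t, n, seed=1):
    """F(x,y) = sum a[i][j] x^i y^j with a[i][j] == a[j][i] and a[0][0] = secret.
    Participant k (at point x = k) receives f_k(y) = F(k, y) as t coefficients.
    The symmetry gives pairwise keys F(i, j) = F(j, i) between participants."""
    rnd = random.Random(seed)
    a = [[0] * t for _ in range(t)]
    for i in range(t):
        for j in range(i, t):
            a[i][j] = a[j][i] = rnd.randrange(P)
    a[0][0] = secret % P
    shares = {}
    for k in range(1, n + 1):
        # coefficient of y^j in F(k, y) is sum_i a[i][j] * k^i
        shares[k] = [sum(a[i][j] * pow(k, i, P) for i in range(t)) % P
                     for j in range(t)]
    return shares

def reconstruct(shares, t):
    """Any t participants evaluate their share polynomial at y = 0 and
    Lagrange-interpolate g(x) = F(x, 0) at x = 0: g(0) = F(0,0) = secret."""
    pts = [(k, coeffs[0]) for k, coeffs in list(shares.items())[:t]]
    secret = 0
    for k, gk in pts:
        lag = 1
        for m, _ in pts:
            if m != k:
                lag = lag * m % P * pow(m - k, P - 2, P) % P
        secret = (secret + gk * lag) % P
    return secret
```

The attack discussed above targets exactly this reconstruction phase: the values revealed there are linear in the polynomial's coefficients, so an impersonating outsider who collects enough of them can solve for the secret, which is what the revised schemes prevent.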

    Full threshold change range of threshold changeable secret sharing

    A threshold changeable secret sharing (TCSS) scheme is designed to change the initial threshold pair, consisting of the privacy threshold and the reconstruction threshold, to a given threshold pair after the dealer has distributed shares to participants, while a universal threshold changeable secret sharing (uTCSS) scheme is threshold changeable to multiple new threshold pairs. We focus on threshold changeability in a dealer-free scenario with an outside adversary and no secure channels among participants. Some threshold change regimes are known to be realized by (optimal) TCSS schemes or (optimal) uTCSS schemes. In this work, by combining two methods frequently used in previous constructions, folding shares of a given secret sharing scheme and packing shares of multiple secret sharing schemes, we construct an optimal TCSS scheme and an optimal uTCSS scheme, each with a new threshold change regime. This lets us determine the full threshold change range that can be realized by optimal TCSS schemes and by optimal uTCSS schemes, respectively. Moreover, we construct some near optimal TCSS schemes to show that the full threshold change range of TCSS schemes (without requiring optimality) is completely covered by the threshold change regimes of our near optimal TCSS schemes together with the full threshold change range of optimal TCSS schemes.
    National Research Foundation (NRF). Submitted/Accepted version. This research of Wang is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative.
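The share-folding method mentioned above can be illustrated with a toy Shamir-based sketch; the parameters and helper names are hypothetical, and the optimality and security analysis of the paper is omitted. Each participant holds a fold of v subshares of one Shamir scheme; by revealing fewer subshares per participant, the effective reconstruction threshold rises without redealing:

```python
import random

P = 2_147_483_647  # toy prime field

def shamir_shares(secret, num_coeffs, points, seed=7):
    """Evaluate a random polynomial of degree num_coeffs - 1 with constant
    term `secret` at the given points; num_coeffs points reconstruct it."""
    rnd = random.Random(seed)
    coeffs = [secret % P] + [rnd.randrange(P) for _ in range(num_coeffs - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in points}

def interpolate_at_zero(pts):
    """Lagrange interpolation of (x, y) pairs evaluated at x = 0."""
    s = 0
    for x, y in pts:
        lag = 1
        for m, _ in pts:
            if m != x:
                lag = lag * m % P * pow(m - x, P - 2, P) % P
        s = (s + y * lag) % P
    return s

def fold(shares, v):
    """Give participant k the fold of v consecutive subshares."""
    xs = sorted(shares)
    return {k: [(x, shares[x]) for x in xs[k * v:(k + 1) * v]]
            for k in range(len(xs) // v)}
```

With a degree-5 polynomial (6 coefficients) and folds of 3 subshares, two participants suffice when each reveals all subshares; if each reveals only two, three participants are needed, i.e., the reconstruction threshold has changed after dealing.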

    Local circular patterns for multi-modal facial gender and ethnicity classification

    Gender and ethnicity are both key demographic attributes of human beings and play a fundamental role in automatic machine-based face analysis; consequently, face-based gender and ethnicity classification has received increasing attention in recent years. In this paper, we present an effective and efficient approach to this problem by combining boosted local texture and shape features extracted from 3D face models, in contrast to existing methods that depend only on either the 2D texture or the 3D shape of faces. To comprehensively represent the differences between genders or ethnicities, we propose a novel local descriptor, namely local circular patterns (LCP). LCP improves the widely used local binary patterns (LBP) and its variants by replacing the binary quantization with a clustering-based one, resulting in higher discriminative power as well as better robustness to noise. Meanwhile, the subsequent Adaboost-based feature selection finds the most discriminative gender- and race-related features and assigns them different weights to highlight their importance in classification, which not only further raises performance but also reduces time and memory cost. Experimental results on the FRGC v2.0 and BU-3DFE datasets clearly demonstrate the advantages of the proposed method.
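The Adaboost-based selection step can be sketched with single-feature decision stumps; this is an illustrative minimal boosted selector (hypothetical function names, toy settings), not the paper's implementation. The alpha accumulated per feature serves as the importance weight described above:

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Axis-aligned decision stump on one feature, labels in {-1, +1}."""
    return sign * np.where(X[:, feat] >= thresh, 1, -1)

def adaboost_select(X, y, rounds=10):
    """Minimal discrete AdaBoost over one-feature stumps. Each round picks the
    stump with the lowest weighted error, reweights the samples, and credits
    the chosen feature with the stump's alpha; the normalized per-feature
    alphas act as a crude importance score."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    importance = np.zeros(d)
    for _ in range(rounds):
        best = None
        for f in range(d):
            for t in np.unique(X[:, f]):
                for s in (1, -1):
                    err = w[stump_predict(X, f, t, s) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, s)
        err, f, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # clip to keep alpha finite
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * stump_predict(X, f, t, s))
        w /= w.sum()
        importance[f] += alpha
    return importance / importance.sum()
```

Features that are never picked end up with weight zero, which is how boosting both selects features and reduces the downstream time and memory cost.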

    Permutation polynomials of the form (x^p - x + δ)^s + L(x)

    Recently, several classes of permutation polynomials of the form (x^2 + x + δ)^s + x over F_{2^m} have been discovered. They are related to Kloosterman sums. In this paper, the permutation behavior of polynomials of the form (x^p - x + δ)^s + L(x) over F_{p^m} is investigated, where L(x) is a linearized polynomial with coefficients in F_p. Six classes of permutation polynomials over F_{2^m} are derived. Three classes of permutation polynomials over F_{3^m} are also presented.
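The permutation property is easy to check by brute force over a small field; the sketch below fixes p = 2, m = 4 and L(x) = x, an illustrative choice rather than one of the paper's derived classes:

```python
M = 4
MOD = 0b10011  # x^4 + x + 1, irreducible over F_2

def gf_mul(a, b):
    """Carry-less multiplication in F_{2^4}, reducing modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def gf_pow(a, s):
    """Repeated multiplication; fine for a 16-element field."""
    r = 1
    for _ in range(s):
        r = gf_mul(r, a)
    return r

def is_permutation(s, delta):
    """Brute-force check: does f(x) = (x^2 + x + delta)^s + x permute F_{2^4}?
    (In characteristic 2, x^p - x = x^2 + x and addition is XOR.)"""
    images = {gf_pow(gf_mul(x, x) ^ x ^ delta, s) ^ x for x in range(1 << M)}
    return len(images) == (1 << M)
```

For s = 1 the map collapses to x^2 + δ, the Frobenius automorphism plus a constant, so it is a permutation for every δ; the brute-force test confirms this, and the same loop can probe other (s, δ, L) choices.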

    Children Facial Expression Production: Influence of Age, Gender, Emotion Subtype, Elicitation Condition and Culture

    The production of facial expressions (FEs) is an important skill that allows children to share and adapt emotions with their relatives and peers during social interactions. These skills are impaired in children with Autism Spectrum Disorder. However, the way in which typical children develop and master their production of FEs has still not been clearly assessed. This study aimed to explore factors that could influence the production of FEs in childhood, such as age, gender, emotion subtype (sadness, anger, joy, and neutral), elicitation task (on request, imitation), area of recruitment (French Riviera and Parisian) and emotion multimodality. A total of one hundred fifty-seven children aged 6–11 years were enrolled in Nice and Paris, France. We asked them to produce FEs in two different tasks: imitation with an avatar model and production on request without a model. Results from a multivariate analysis revealed that: (1) children performed better with age; (2) positive emotions were easier to produce than negative emotions; (3) children produced better FEs on request (as opposed to imitation); and (4) Riviera children performed better than Parisian children, suggesting regional influences on emotion production. We conclude that facial emotion production is a complex developmental process influenced by several factors that need to be acknowledged in future research.