
    Evaluation of Deep Learning Models Using Data-Guided Neural Networks for Nonlinear Materials

    Nonlinear materials are often difficult to model with classical methods such as the Finite Element Method: they have a complex and sometimes inaccurate physical and mathematical description, or we simply do not know how to describe such materials in terms of relations between external and internal variables. In many disciplines, neural network methods have arisen as powerful tools to deal with nonlinear problems. In this work, the recently developed concept of Physically-Guided Neural Networks with Internal Variables (PGNNIV) is applied to nonlinear materials, providing a tool to add physically meaningful constraints to deep neural networks from a model-free perspective. These networks outperform classical simulation methods in terms of computational power for the prediction of external and especially internal variables, since they are less computationally intensive and easily scalable. Furthermore, compared with classical neural networks, they filter numerical noise, converge faster, demand less data, and can have improved extrapolation capacity. In addition, as they are not based on conventional parametric models (their model-free character), they reduce the time required to develop material models compared with methods such as Finite Elements. This work shows that the same PGNNIV achieves good predictions regardless of the nature of the elastic material considered (linear, or with hardening or softening behavior), and is able to unravel the constitutive law of the material and explain its nature. The results show that PGNNIV is a useful tool for problems in solid mechanics, both for predicting the response to new load situations and for explaining the behavior of materials, placing the method within what is known as Explainable Artificial Intelligence (XAI).
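    The general idea of physically guiding a network, fitting measurable external variables while penalizing violations of known physics on the predicted internal variables, can be sketched as a two-term loss. This is a minimal illustration under stated assumptions: the function name, the hypothetical `physics_residual_fn` callable, and the weight `alpha` are illustrative choices, not the authors' PGNNIV implementation.

    ```python
    import numpy as np

    def pgnniv_style_loss(u_pred, u_true, internal_pred, physics_residual_fn, alpha=1.0):
        """Illustrative two-term loss: data mismatch on measurable (external)
        outputs plus a penalty forcing the predicted internal variables to
        satisfy a known physical constraint (e.g. equilibrium).

        u_pred, u_true    : arrays of predicted / measured external variables
        internal_pred     : array of predicted internal variables (e.g. stresses)
        physics_residual_fn: hypothetical callable returning the constraint
                             residual for the internal state (zero when satisfied)
        alpha             : weight balancing data fit against physics (assumed)
        """
        # Data term: match what can actually be measured.
        data_term = np.mean((u_pred - u_true) ** 2)
        # Physics term: penalize violation of the known constraint.
        residual = physics_residual_fn(internal_pred)
        physics_term = np.mean(np.asarray(residual) ** 2)
        return data_term + alpha * physics_term
    ```

    With a residual of zero (constraint satisfied) the loss reduces to the ordinary data mismatch, which is the sense in which the physics acts as a soft constraint rather than a hard-coded parametric model.
    
    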

    Improving Person-Independent Facial Expression Recognition Using Deep Learning

    Over the past few years, deep learning methods, e.g., Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have shown promise in facial expression recognition. However, performance degrades dramatically, especially in close-to-real-world settings, due to high intra-class variations and high inter-class similarities introduced by subtle facial appearance changes, head pose variations, illumination changes, occlusions, and identity-related attributes, e.g., age, race, and gender. In this work, we developed two novel CNN frameworks and one novel GAN approach to learn discriminative features for facial expression recognition. First, a novel island loss is proposed to enhance the discriminative power of learned deep features. Specifically, the island loss is designed to reduce intra-class variations while simultaneously enlarging inter-class differences. Experimental results on three posed facial expression datasets and, more importantly, two spontaneous facial expression datasets have shown that the proposed island loss outperforms baseline CNNs with the traditional softmax loss or the center loss, and achieves better or at least comparable performance compared with state-of-the-art methods. Second, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to identity-related attributes, where the final features are less affected by the attributes. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data.
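    The island-loss idea, pulling features toward their class center while pushing the centers of different classes apart, can be sketched as a NumPy computation. This is a rough illustration only: the function name, the weight `lam1`, and the exact shifted-cosine form of the pairwise term are assumptions for the sketch, not the thesis's implementation.

    ```python
    import numpy as np

    def island_loss_sketch(features, labels, centers, lam1=0.5):
        """Illustrative island-style loss: a center-loss term plus a penalty
        on pairwise cosine similarity between class centers.

        features : (N, D) deep features for a batch
        labels   : (N,)   integer class labels
        centers  : (K, D) one learnable center per class
        lam1     : weight of the center-separation term (assumed value)
        """
        # Center-loss term: reduce intra-class variation by pulling each
        # feature toward the center of its own class.
        center_term = 0.5 * np.sum((features - centers[labels]) ** 2)
        # Island term: enlarge inter-class differences by penalizing the
        # cosine similarity between every pair of distinct class centers,
        # shifted by +1 so each pairwise penalty is non-negative.
        unit = centers / np.linalg.norm(centers, axis=1, keepdims=True)
        cos = unit @ unit.T
        k = centers.shape[0]
        island_term = np.sum(cos[~np.eye(k, dtype=bool)] + 1.0)
        return center_term + lam1 * island_term
    ```

    When the centers are mutually orthogonal the pairwise penalties bottom out at their shifted minimum, which is what drives the "islands" of classes apart in feature space.
    
    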
Experimental results on three posed facial expression datasets as well as four spontaneous facial expression datasets have demonstrated that the proposed PAT-CNN achieves the best performance compared with state-of-the-art methods by explicitly modeling attributes. Impressively, the PAT-CNN using a single model achieves the best performance on the SFEW test dataset, compared with state-of-the-art methods using an ensemble of hundreds of CNNs. Last, we present a novel Identity-Free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce high inter-subject variations caused by identity-related attributes, e.g., age, race, and gender, for facial expression recognition. Specifically, for any given input facial expression image, a conditional generative model was developed to transform it into an "average" identity expressive face with the same expression as the input face image. Since the generated images share the same synthetic "average" identity, they differ from each other only in the displayed expressions and thus can be used for identity-free facial expression classification. In this work, an end-to-end system was developed to perform facial expression generation and facial expression recognition in the IF-GAN framework. Experimental results on four well-known facial expression datasets, including a spontaneous facial expression dataset, have demonstrated that the proposed IF-GAN outperforms the baseline CNN model and achieves the best performance compared with state-of-the-art methods for facial expression recognition.