20 research outputs found

    Facial Expression Recognition Based on TensorFlow Platform


    New Approach of Estimating Sarcasm Based on the Percentage of Happiness of Facial Expression Using Fuzzy Inference System

    Detecting micro expressions is given high priority in most settings, because these expressions reveal the genuine feelings beneath the surface even when a person tries to conceal them. This study presents a novel approach to estimating sarcasm using a fuzzy inference system, based on analysing a person's facial expressions to evaluate the degree of happiness they convey. Five separate facial regions can be distinguished, and precise active distances are computed from the outline points of each region; the regions comprise the eyebrows on both sides of the face, the eyes, and the lips. Within the proposed fuzzy inference system, membership functions are first applied to the computed distances to represent the individual's degree of happiness. The outputs of these membership functions are then fed into a further membership function to estimate the sarcasm percentage. The proposed method is validated on face images from the standard SMIC, SAMM, and CAS(ME)² datasets, confirming its effectiveness.
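    Below is a minimal, illustrative sketch of the two-stage fuzzy idea described in the abstract: landmark distances from the facial regions are fuzzified into a happiness degree, which a second membership function maps to a sarcasm percentage. It is not the authors' implementation; the triangular membership functions, their breakpoints, and the averaging rule are assumptions made for illustration.

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (illustrative)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def happiness_degree(brow_dist, eye_dist, lip_dist):
    # Fuzzify each normalised active distance (0..1) into a "happy" degree.
    # The breakpoints below are placeholders, not values from the paper.
    happy_brow = trimf(brow_dist, 0.3, 0.6, 0.9)
    happy_eye  = trimf(eye_dist,  0.2, 0.5, 0.8)
    happy_lip  = trimf(lip_dist,  0.4, 0.7, 1.0)
    # Aggregate with a simple average; the paper's exact rule base may differ.
    return (happy_brow + happy_eye + happy_lip) / 3.0

def sarcasm_percentage(happiness):
    # Second-stage membership function: map the happiness degree to a sarcasm
    # percentage (here, mid-range happiness is treated as most "sarcastic").
    return 100.0 * trimf(happiness, 0.2, 0.5, 0.8)

if __name__ == "__main__":
    h = happiness_degree(0.5, 0.4, 0.6)
    print(f"happiness={h:.2f}, sarcasm={sarcasm_percentage(h):.1f}%")
```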

    Validation of Neural Network Predictions for the Outcome of Refractive Surgery for Myopia

    Background: Refractive surgery (RS) for myopia has made great progress in terms of safety and predictability of outcome. Still, a small percentage of operations require retreatment; therefore, both legally and ethically, patients should be informed that a corrective RS may sometimes be required. We addressed this issue using neural networks (NNs) in RS for myopia, in a validation study of a previously developed NN. Methods: We anonymously searched the Ophthalmica Institute of Ophthalmology and Microsurgery database for patients who underwent RS with PRK, LASEK, Epi-LASIK or LASIK between 2010 and 2018, using a total of 13 factors related to RS. The data were divided into four sets: successful RS outcomes used for training the NN, successful RS outcomes used for testing NN performance, RS outcomes that required retreatment used for training the NN, and RS outcomes that required retreatment used for testing NN performance. We created eight independent Learning Vector Quantization (LVQ) networks, each responding to a specific query with 0 (retreatment class) or 1 (correct class). The results of the eight LVQs were then averaged to obtain the best estimate of NN performance, and a voting procedure was used to reach a conclusion. Results: There was a statistically significant agreement (Cohen's Kappa = 0.7658) between the predicted and the actual results regarding the need for retreatment. Our predictions had good sensitivity (0.8836) and specificity (0.9186). Conclusion: We validated our previously published results and confirmed our expectations for the NN we developed. These results allow us to be optimistic about the future of NNs in predicting the outcome of, and eventually in planning, RS.
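    The following sketch illustrates the ensemble scheme described in the abstract: eight independently trained LVQ classifiers each output 0 (retreatment) or 1 (successful outcome), and their predictions are averaged into a majority vote. It is not the study's code; the LVQ1 update rule, prototype counts, learning rate, and synthetic 13-factor data are assumptions made for illustration.

```python
import numpy as np

def train_lvq1(X, y, n_prototypes_per_class=2, lr=0.05, epochs=50, seed=0):
    """Train a basic LVQ1 classifier: prototypes are attracted to same-class
    samples and repelled from other-class samples (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_prototypes_per_class, replace=False)
        protos.append(X[idx].copy())
        labels.append(np.full(n_prototypes_per_class, c))
    P, L = np.vstack(protos), np.concatenate(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(P - X[i], axis=1))  # winning prototype
            step = lr if L[j] == y[i] else -lr               # LVQ1: attract or repel
            P[j] += step * (X[i] - P[j])
    return P, L

def predict_lvq(P, L, X):
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return L[np.argmin(d, axis=1)]

def ensemble_vote(networks, X):
    # Average the 0/1 outputs of all networks and take a majority vote.
    votes = np.stack([predict_lvq(P, L, X) for P, L in networks])
    return (votes.mean(axis=0) >= 0.5).astype(int)

if __name__ == "__main__":
    # Synthetic stand-in for the 13 surgery-related input factors.
    rng = np.random.default_rng(1)
    X_train, y_train = rng.normal(size=(200, 13)), rng.integers(0, 2, 200)
    nets = [train_lvq1(X_train, y_train, seed=s) for s in range(8)]
    print(ensemble_vote(nets, rng.normal(size=(5, 13))))
```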

    Fusing dynamic deep learned features and handcrafted features for facial expression recognition

    The automated recognition of facial expressions has been actively researched owing to its wide-ranging applications. Recent advances in deep learning have improved the performance of facial expression recognition (FER) methods. In this paper, we propose a framework that combines discriminative features learned using convolutional neural networks with handcrafted shape- and appearance-based features to further improve the robustness and accuracy of FER. In addition, texture information is extracted from facial patches to enhance the discriminative power of the extracted features. By encoding shape, appearance, and deep dynamic information, the proposed framework achieves high performance and outperforms state-of-the-art FER methods on the CK+ dataset.
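    A minimal sketch of the feature-level fusion idea, assuming a PyTorch setting: deep features from a small CNN are concatenated with precomputed handcrafted shape/appearance descriptors (e.g. HOG or LBP histograms) before the expression classifier. The architecture, feature sizes, and 7-class output are illustrative choices, not the paper's network.

```python
import torch
import torch.nn as nn

class FusionFER(nn.Module):
    def __init__(self, handcrafted_dim=128, num_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(                       # learned deep features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                               # -> 64-d deep feature
        )
        self.classifier = nn.Sequential(                # fused feature -> expression
            nn.Linear(64 + handcrafted_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, face_img, handcrafted_feat):
        deep_feat = self.cnn(face_img)
        # Feature-level fusion: concatenate deep and handcrafted descriptors.
        fused = torch.cat([deep_feat, handcrafted_feat], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    # Illustrative use: 48x48 grayscale face crops plus 128-d handcrafted vectors.
    model = FusionFER()
    logits = model(torch.randn(4, 1, 48, 48), torch.randn(4, 128))
    print(logits.shape)  # torch.Size([4, 7])
```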