
    Efficient smile detection by Extreme Learning Machine

    Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user experience analysis, and patient monitoring. As one of the most important and informative expressions, a smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach based on the Extreme Learning Machine (ELM) is proposed. Faces are first detected, and a holistic flow-based face registration is applied that does not need any manual labeling or key-point detection. An ELM is then trained as the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases, including real-world face images. Comparisons against benchmark classifiers, including the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to state-of-the-art smile detectors, the proposed method achieves competitive results without preprocessing or manual registration.
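    The abstract does not include implementation details, so the following is only a minimal sketch of a generic ELM classifier of the kind described: a random hidden layer whose output weights are solved in closed form with the Moore-Penrose pseudo-inverse. The class name, layer size, and the assumption that faces arrive as precomputed feature vectors are illustrative, not the authors' code.

```python
import numpy as np

class ELMClassifier:
    """Minimal Extreme Learning Machine sketch: random hidden layer,
    output weights solved in closed form with the pseudo-inverse."""

    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random projection followed by a sigmoid activation.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # X: (n_samples, n_features) feature vectors (e.g. descriptors of face crops).
        # y: (n_samples,) binary smile labels in {0, 1}.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(2)[y]                      # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

    Training reduces to a single linear solve, which is the usual source of the efficiency advantage ELM has over iteratively trained classifiers such as the SVM.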

    Every Smile is Unique: Landmark-Guided Diverse Smile Generation

    Each smile is unique: a single person smiles in different ways (e.g., closing or opening the eyes or mouth). Given one input image of a neutral face, can we generate multiple smile videos with distinctive characteristics? To tackle this one-to-many video generation problem, we propose a novel deep learning architecture named the Conditional Multi-Mode Network (CMM-Net). To better encode the dynamics of facial expressions, CMM-Net explicitly exploits facial landmarks for generating smile sequences. Specifically, a variational auto-encoder is used to learn a facial landmark embedding. This single embedding is then exploited by a conditional recurrent network which generates a sequence of landmark embeddings conditioned on a specific expression (e.g., spontaneous smile). Next, the generated landmark embeddings are fed into a multi-mode recurrent landmark generator, producing a set of landmark sequences still associated with the given smile class but clearly distinct from each other. Finally, these landmark sequences are translated into face videos. Our experimental results demonstrate the effectiveness of CMM-Net in generating realistic videos of multiple smile expressions. Comment: Accepted as a poster at the Conference on Computer Vision and Pattern Recognition (CVPR), 201
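    The abstract describes the architecture only at a high level. The fragment below is a rough PyTorch-flavored sketch of the first two components it names (a landmark VAE and a class-conditioned recurrent generator over the embedding space); every layer size, class name, and design detail is an assumption, and the multi-mode landmark generator and the landmark-to-video translation stages are omitted entirely.

```python
import torch
import torch.nn as nn

class LandmarkVAE(nn.Module):
    """Toy VAE over flattened 2D facial landmarks (e.g. 68 points -> 136 dims)."""
    def __init__(self, n_landmarks=68, z_dim=32):
        super().__init__()
        d = n_landmarks * 2
        self.enc = nn.Sequential(nn.Linear(d, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, d))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

class ConditionalLandmarkRNN(nn.Module):
    """Unrolls a sequence of landmark embeddings conditioned on a smile-class label."""
    def __init__(self, z_dim=32, n_classes=2, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(z_dim + n_classes, hidden, batch_first=True)
        self.out = nn.Linear(hidden, z_dim)

    def forward(self, z0, cls_onehot, steps=16):
        z, state, seq = z0, None, []
        for _ in range(steps):
            inp = torch.cat([z, cls_onehot], dim=-1).unsqueeze(1)
            h, state = self.rnn(inp, state)
            z = self.out(h[:, -1])
            seq.append(z)
        return torch.stack(seq, dim=1)  # (batch, steps, z_dim)
```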

    Can You Spot the Fake?

    The ability to interpret smiles correctly is a skill that can be helpful in many aspects of life. One key cue people rely on is the smile, but smiles are not always genuine. In our study, we focused on the detection of genuine and fake smiles and trained subjects to detect deception. The first training group was given relevant information on distinguishing between smiles via a PowerPoint presentation, along with two videos presenting a genuine and a fake smile. The second group viewed the informational PowerPoint without the videos. The third group viewed a PowerPoint containing only the videos. Our control group was asked to think about situations in which a fake or genuine smile would be used. Before training, participants viewed 10 smile videos and indicated whether each smile was genuine or fake. Following training, the participants viewed 10 new videos and again indicated whether each smile was genuine or fake. We hypothesized that the training groups would identify more smiles correctly than the control group. One week later, all groups viewed the same 20 videos again to determine whether the training had a lasting effect.

    FX volatility smile construction

    The foreign exchange options market is one of the largest and most liquid OTC derivative markets in the world. Surprisingly, very little is known in the academic literature about the construction of the most important object in this market: the implied volatility smile. The smile construction procedure and the volatility quoting mechanisms are FX specific and differ significantly from other markets. We give a detailed overview of these quoting mechanisms and introduce the resulting smile construction problem. Furthermore, we provide a new formula which can be used for an efficient and robust FX smile construction.
    Keywords: FX quotations, FX smile construction, risk reversal, butterfly, strangle, delta conventions, Malz formula
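    The paper's new construction formula is not reproduced in the abstract. For orientation only, the classical Malz interpolation referenced in the keywords writes the smile as a quadratic in the call delta, built from the three standard FX quotes (at-the-money volatility, 25-delta risk reversal, and 25-delta butterfly):

```latex
\sigma(\Delta) \;=\; \sigma_{\mathrm{ATM}}
  \;-\; 2\,\mathrm{RR}_{25}\left(\Delta - \tfrac{1}{2}\right)
  \;+\; 16\,\mathrm{BF}_{25}\left(\Delta - \tfrac{1}{2}\right)^{2}
```

    By construction this recovers the ATM volatility at Delta = 1/2 and reproduces RR25 = sigma_25C - sigma_25P and BF25 = (sigma_25C + sigma_25P)/2 - sigma_ATM at the 25-delta points; the paper's new formula is aimed at the FX-specific delta and quoting conventions that such a simple interpolation does not handle.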

    Environmental Kuznets curve (EKC): Times series evidence from Portugal

    The paper provides empirical evidence of an EKC (a relationship between income and environmental degradation) for Portugal by applying the autoregressive distributed lag (ARDL) approach to time series data. In order to capture the effects of Portugal's historical experience, demographic changes, and international trade on CO2 emissions, we augment the traditional income-emissions model with variables such as energy consumption, urbanization, and trade openness in a time series framework. There is evidence of an EKC in both the short run and the long run. All variables carry the expected sign except trade openness, which has the wrong sign and is statistically insignificant in both the short run and the long run. Despite Portugal's success in containing CO2 emissions so far, it is important to note that emissions have risen in recent years. In order to comply with the 1992 Kyoto Protocol on CO2 emissions, policies are needed that focus on the top five sectors responsible for about 55 percent of CO2 emissions: the extraction of crude petroleum, the manufacturing of refined products, electricity distribution, construction, and land transport and transport via pipeline services.
    Keywords: Cointegration; Causality; Environmental Kuznets Curve
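    The abstract does not state the estimating equation. A typical EKC specification of the kind described (log income and its square, plus energy consumption, urbanization, and trade openness), given here purely as an assumed general form rather than the authors' exact model, is:

```latex
\ln \mathrm{CO2}_t = \alpha
  + \beta_1 \ln Y_t + \beta_2 (\ln Y_t)^2
  + \beta_3 \ln \mathrm{EC}_t + \beta_4 \ln \mathrm{URB}_t
  + \beta_5 \ln \mathrm{TO}_t + \varepsilon_t
```

    An inverted-U EKC corresponds to beta_1 > 0 and beta_2 < 0, with the turning point at ln Y* = -beta_1/(2 beta_2); the ARDL bounds test is then used to check for a long-run (cointegrating) relationship among these variables.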

    Asymptotics of forward implied volatility

    We prove here a general closed-form expansion formula for forward-start options and the forward implied volatility smile in a large class of models, including the Heston stochastic volatility and time-changed exponential Lévy models. This expansion applies to both small and large maturities and is based solely on the properties of the forward characteristic function of the underlying process. The method is based on sharp large deviations techniques, and allows us to recover (in particular) many results for the spot implied volatility smile. In passing we (i) show that the forward-start date has to be rescaled in order to obtain non-trivial small-maturity asymptotics, (ii) prove that the forward-start date may influence the large-maturity behaviour of the forward smile, and (iii) provide some examples of models with finite quadratic variation where the small-maturity forward smile does not explode. Comment: 37 pages, 13 figures
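    The abstract takes the notion of a forward smile as given. As a reminder of the standard definitions only (not the paper's expansion), and assuming zero interest rates for simplicity, a forward-start call with forward-start date t, maturity t + tau, and log-strike k pays (S_{t+tau}/S_t - e^k)^+; the forward implied volatility sigma_{t,tau}(k) is the Black-Scholes volatility that matches its price:

```latex
\mathbb{E}\!\left[\left(\frac{S_{t+\tau}}{S_t} - e^{k}\right)^{+}\right]
  = C_{\mathrm{BS}}\bigl(k,\, \tau,\, \sigma_{t,\tau}(k)\bigr)
```

    Setting t = 0 recovers the spot implied volatility smile, which is why results for the spot smile reappear as special cases of the forward expansion.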

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    Full text link
    The smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histograms of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either `spontaneous' or `posed' categories using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database show promising results compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
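    The abstract identifies HOG features with an SVM as the best-performing combination. The fragment below is only a minimal sketch of such a pipeline on aligned per-frame face crops; the pooling step, kernel choice, and every parameter are assumptions rather than the authors' settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def video_hog_descriptor(frames):
    """frames: iterable of same-sized grayscale face crops (H, W).
    Returns one descriptor per video by averaging per-frame HOG vectors."""
    feats = [hog(f, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for f in frames]
    return np.mean(feats, axis=0)

def train_posed_vs_spontaneous(videos, labels):
    # videos: list of frame lists; labels: 1 = spontaneous, 0 = posed.
    descriptors = np.stack([video_hog_descriptor(v) for v in videos])
    clf = SVC(kernel="linear", C=1.0)   # linear SVM on pooled HOG features
    clf.fit(descriptors, labels)
    return clf
```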