
    Less is More: Micro-expression Recognition from Video using Apex Frame

    Despite recent interest and advances in facial micro-expression research, there is still plenty of room for improvement in micro-expression recognition. Conventional feature extraction approaches for micro-expression video consider either the whole video sequence or a part of it for representation. However, with the high-speed video capture of micro-expressions (100-200 fps), are all frames necessary to provide a sufficiently meaningful representation? Is the luxury of data a bane to accurate recognition? A novel proposition is presented in this paper, whereby we utilize only two images per video: the apex frame and the onset frame. The apex frame of a video contains the highest intensity of expression changes among all frames, while the onset frame, with its neutral expression, is the natural choice of reference frame. A new feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF), is proposed to encode the essential expressiveness of the apex frame. We evaluated the proposed method on five micro-expression databases: CAS(ME)$^2$, CASME II, SMIC-HS, SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with the proposed technique achieving state-of-the-art F1-score recognition performance of 61% and 62% on the high frame rate CASME II and SMIC-HS databases respectively.
    Comment: 14 pages double-column, author affiliations updated, acknowledgment of grant support added
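
    The essence of the approach is a histogram of optical-flow orientations between the onset and apex frames, with each flow vector's contribution weighted by its magnitude. A minimal sketch of that idea follows, using OpenCV's Farneback flow; the grid size, bin count, and the mean-magnitude block weighting are illustrative assumptions, not the authors' exact Bi-WOOF configuration:

```python
import cv2
import numpy as np

def bi_woof_sketch(onset_gray, apex_gray, grid=6, bins=8):
    """Magnitude-weighted orientation histogram between two grayscale frames.

    A simplified take on Bi-WOOF: dense optical flow is computed from the
    onset (neutral) frame to the apex frame, then each spatial block
    contributes a histogram of flow orientations weighted by flow magnitude.
    """
    flow = cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    h, w = mag.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            m = mag[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel()
            a = ang[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 2*np.pi), weights=m)
            # Weight the whole block by its mean flow magnitude, a stand-in
            # for the paper's second (global) weighting scheme.
            feats.append(hist * m.mean())
    return np.concatenate(feats)
```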

    Spontaneous Subtle Expression Detection and Recognition based on Facial Strain

    Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing the minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expressions, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical strain features and optical strain weighted features. The two sets of features are then concatenated to form the resultant feature histogram. Experiments were performed on the CASME II and SMIC databases. We demonstrate on both databases the usefulness of optical strain information and, more importantly, that our best approaches are able to outperform the original baseline results for both detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented.
    Comment: 21 pages (including references), single column format, accepted to Signal Processing: Image Communication journal
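
    Optical strain is derived from the spatial derivatives of the flow field: for flow components $(u, v)$, the per-pixel strain magnitude is $\varepsilon_m = \sqrt{\varepsilon_{xx}^2 + \varepsilon_{yy}^2 + \varepsilon_{xy}^2 + \varepsilon_{yx}^2}$ with $\varepsilon_{xx} = \partial u/\partial x$, $\varepsilon_{yy} = \partial v/\partial y$ and $\varepsilon_{xy} = \varepsilon_{yx} = \frac{1}{2}(\partial u/\partial y + \partial v/\partial x)$. A minimal numpy sketch of this computation; the dense flow field itself is assumed to come from any standard optical flow method:

```python
import numpy as np

def optical_strain_magnitude(u, v):
    """Per-pixel optical strain magnitude from a dense flow field (u, v).

    The strain tensor is the symmetric part of the flow gradient:
        e_xx = du/dx, e_yy = dv/dy, e_xy = e_yx = 0.5*(du/dy + dv/dx)
    and the magnitude is the root of the sum of squared components.
    """
    du_dy, du_dx = np.gradient(u)   # np.gradient returns (d/drow, d/dcol)
    dv_dy, dv_dx = np.gradient(v)
    e_xx = du_dx
    e_yy = dv_dy
    e_xy = 0.5 * (du_dy + dv_dx)
    return np.sqrt(e_xx**2 + e_yy**2 + 2.0 * e_xy**2)
```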

    Automatic recognition of micro-expressions using local binary patterns on three orthogonal planes and extreme learning machine

    A dissertation submitted in fulfilment of the requirements for the degree of Master of Science to the Faculty of Science, University of the Witwatersrand, Johannesburg, September 2017.
    Recognition of micro-expressions is a growing research area as a result of its application in revealing subtle intentions of humans, especially in high-stakes situations. Owing to micro-expressions' short duration and low intensity, efforts to train humans in their recognition have resulted in very low performance. The use of temporal methods (on image sequences) and static methods (on apex frames) was explored for feature extraction. Supervised machine learning algorithms, namely Support Vector Machines (SVM) and Extreme Learning Machines (ELM), were used for classification. The ELM, which has the ability to learn fast, was compared with the SVM, which acted as the baseline model. For experimentation, samples from the Chinese Academy of Sciences Micro-Expression (CASME II) database were used. Results revealed that the use of temporal features outperformed the use of static features for micro-expression recognition with both the SVM and ELM models. Static and temporal features gave average testing accuracies of 94.08% and 97.57% respectively for five classes of micro-expressions using the ELM model. A significance test carried out on these two average means suggested that temporal features outperformed static features using ELM. A comparison of SVM and ELM learning times also revealed that ELM learns faster than SVM: for the five selected micro-expression classes, the average training time was 0.3405 seconds for SVM against 0.0409 seconds for ELM. Hence we suggest that micro-expressions can be recognised successfully by using temporal features together with a machine learning algorithm that has a fast learning speed.
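
    The speed advantage of ELM reported above comes from its training procedure: the input-to-hidden weights are drawn at random and never updated, so training reduces to a single least-squares solve (via the Moore-Penrose pseudoinverse) for the output weights. A minimal sketch; the sigmoid activation and hidden-layer size are illustrative choices, not the dissertation's settings:

```python
import numpy as np

class ELMClassifier:
    """Minimal Extreme Learning Machine: random hidden layer, one-shot
    least-squares solve for the output weights (no iterative training)."""

    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid

    def fit(self, X, y):
        n_classes = y.max() + 1
        T = np.eye(n_classes)[y]                   # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T          # Moore-Penrose solve
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```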

    Optimal Dynamic Taxes

    We study optimal labor and savings distortions in a lifecycle model with idiosyncratic shocks. We show a tight connection between its recursive formulation and a static Mirrlees model with two goods, which allows us to derive elasticity-based expressions for the dynamic optimal distortions. We derive a generalization of the savings distortion to non-separable preferences and show that, under certain conditions, the labor wedge tends to zero for sufficiently high skills. We estimate skill distributions using individual U.S. data on taxes and labor incomes. The computed optimal distortions decrease for sufficiently high incomes and increase with age.
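
    For context, the savings distortion in this literature is usually built on the inverse Euler equation, which holds under separable preferences; the paper's contribution is a generalization beyond this baseline. A hedged illustration of the standard separable-utility result, not the paper's own expression:

```latex
% Inverse Euler equation under separable utility, standard in dynamic
% Mirrlees models. The implied savings wedge \tau_s, defined by
% u'(c_t) = \beta R (1-\tau_s) E_t[u'(c_{t+1})], is positive whenever
% future consumption is stochastic, by Jensen's inequality.
\frac{1}{u'(c_t)} \;=\; \frac{1}{\beta R}\,
    \mathbb{E}_t\!\left[\frac{1}{u'(c_{t+1})}\right],
\qquad
1 - \tau_s \;=\; \frac{u'(c_t)}{\beta R \,\mathbb{E}_t\!\left[u'(c_{t+1})\right]}.
```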

    PhyloPattern: regular expressions to identify complex patterns in phylogenetic trees

    Background: To effectively apply evolutionary concepts in genome-scale studies, large numbers of phylogenetic trees have to be automatically analysed, at a level approaching human expertise. Complex architectures must be recognized within the trees, so that associated information can be extracted.
    Results: Here, we present a new software library, PhyloPattern, for automating tree manipulations and analysis. PhyloPattern includes three main modules, which address essential tasks in high-throughput phylogenetic tree analysis: node annotation, pattern matching, and tree comparison. PhyloPattern thus allows the programmer to focus on: i) the use of predefined or user-defined annotation functions to perform immediate or deferred evaluation of node properties, ii) the search for user-defined patterns in large phylogenetic trees, iii) the pairwise comparison of trees by dynamically generating patterns from one tree and applying them to the other.
    Conclusion: PhyloPattern greatly simplifies and accelerates the work of the computer scientist in the evolutionary biology field. The library has been used to automatically identify phylogenetic evidence for domain shuffling or gene loss events in the evolutionary histories of protein sequences. However, any workflow that relies on phylogenetic tree analysis could be automated with PhyloPattern.
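
    The core operation, matching a user-defined structural pattern against every node of a tree, can be illustrated independently of PhyloPattern's own pattern syntax. A hedged Python sketch on binary trees encoded as nested tuples, with '*' as a wildcard leaf; this toy encoding and wildcard convention are illustrative assumptions, not PhyloPattern's actual API:

```python
def matches(pattern, tree):
    """True if `pattern` matches the root of a binary `tree`.

    Trees and patterns are nested 2-tuples whose leaves are strings;
    the special leaf '*' matches any subtree. Children are compared
    in both orders, so (A, B) also matches a (B, A) node.
    """
    if pattern == '*':
        return True
    if isinstance(pattern, str) or isinstance(tree, str):
        return pattern == tree
    left, right = pattern
    return ((matches(left, tree[0]) and matches(right, tree[1])) or
            (matches(left, tree[1]) and matches(right, tree[0])))

def find_all(pattern, tree):
    """Yield every subtree of `tree` whose root matches `pattern`."""
    if matches(pattern, tree):
        yield tree
    if not isinstance(tree, str):
        for child in tree:
            yield from find_all(pattern, child)

# Example: find clades grouping 'human' with any sister subtree.
tree = ((('human', 'chimp'), 'mouse'), ('fly', 'worm'))
print(list(find_all(('human', '*'), tree)))   # [('human', 'chimp')]
```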

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
    Comment: 20 pages, 7 figures, 2 tables
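
    The greedy selection with submodular guarantees can be made concrete: if the utility of a feature set is, say, the log-determinant of the accumulated information matrix (a common monotone submodular surrogate in landmark/sensor selection; the paper's actual task-driven metric may differ), greedy selection achieves the classic (1 - 1/e) approximation bound. A minimal numpy sketch under that assumption:

```python
import numpy as np

def greedy_select(info_mats, k):
    """Greedily pick k features maximizing log det(I + sum of info matrices).

    log det of (identity + summed per-feature information matrices) is
    monotone submodular, so the greedy set is within (1 - 1/e) of the
    best k-subset. Each info_mats[i] is a PSD matrix for feature i.
    """
    n = info_mats[0].shape[0]
    total = np.eye(n)             # prior information (regularizer)
    chosen = []
    for _ in range(k):
        gains = [np.linalg.slogdet(total + M)[1] if i not in chosen else -np.inf
                 for i, M in enumerate(info_mats)]
        best = int(np.argmax(gains))
        chosen.append(best)
        total += info_mats[best]
    return chosen

# Toy usage: 10 candidate features with rank-1 information each, pick 3.
rng = np.random.default_rng(0)
mats = [np.outer(v, v) for v in rng.normal(size=(10, 4))]
print(greedy_select(mats, k=3))
```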

    A coherent method for the detection and estimation of continuous gravitational wave signals using a pulsar timing array

    The use of a high precision pulsar timing array is a promising approach to detecting gravitational waves in the very low frequency regime ($10^{-6}$-$10^{-9}$ Hz) that is complementary to the ground-based efforts (e.g., LIGO, Virgo) at high frequencies ($\sim 10$-$10^{3}$ Hz) and space-based ones (e.g., LISA) at low frequencies ($10^{-4}$-$10^{-1}$ Hz). One of the target sources for pulsar timing arrays is individual supermassive black hole binaries that are expected to form in galactic mergers. In this paper, a likelihood based method for detection and estimation is presented for a monochromatic continuous gravitational wave signal emitted by such a source. The so-called pulsar terms in the signal, which arise due to the breakdown of the long-wavelength approximation, are explicitly taken into account in this method. In addition, the method accounts for equality and inequality constraints involved in the semi-analytical maximization of the likelihood over a subset of the parameters. The remaining parameters are maximized over numerically using Particle Swarm Optimization. Thus, the method presented here solves the monochromatic continuous wave detection and estimation problem without invoking some of the approximations that have been used in earlier studies.
    Comment: 33 pages, 10 figures, submitted to Ap
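
    The numerical stage of such a pipeline, maximizing a likelihood over the remaining parameters with Particle Swarm Optimization, follows a standard update rule. A minimal sketch of global-best PSO applied to a stand-in objective; the inertia and acceleration coefficients are common textbook values, and the toy signal model is illustrative, not the paper's timing-residual likelihood:

```python
import numpy as np

def pso_maximize(f, lo, hi, n_particles=40, n_iter=200, seed=0):
    """Standard global-best PSO: maximize f over the box [lo, hi]^d.

    Velocity update combines inertia with cognitive/social pulls toward
    each particle's best and the swarm's best positions so far.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, d))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmax()].copy()
    w, c1, c2 = 0.72, 1.49, 1.49          # common textbook coefficients
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmax()].copy()
    return g, pbest_val.max()

# Stand-in objective: least-squares log-likelihood of a monochromatic
# signal in noise, maximized over (frequency, phase); toy data only.
t = np.linspace(0, 10, 500)
data = np.sin(2*np.pi*0.8*t + 1.0) \
       + 0.3*np.random.default_rng(1).normal(size=t.size)
loglike = lambda p: -np.sum((data - np.sin(2*np.pi*p[0]*t + p[1]))**2)
best, val = pso_maximize(loglike, lo=[0.1, 0.0], hi=[2.0, 2*np.pi])
print(best)
```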