    A Monte Carlo method for the spread of mobile malware

    A new model for the spread of mobile malware based on proximity contacts (e.g. Bluetooth, ad-hoc WiFi, or NFC) is introduced. The spread of malware is analyzed using a Monte Carlo method, and the results of the simulation are compared with those from mean-field theory. Comment: 11 pages, 2 figures
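
    As a minimal sketch of the kind of proximity-based Monte Carlo simulation the abstract describes (not the authors' actual model), the snippet below spreads an infection among randomly moving devices; the device count, mobility step, infection radius r, and per-contact probability p are all assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters (not from the paper): N devices on the unit square,
# infection possible within a Bluetooth-like radius r, per-contact probability p.
N, r, p, steps = 500, 0.05, 0.3, 200

pos = rng.random((N, 2))           # device positions
infected = np.zeros(N, dtype=bool)
infected[0] = True                 # single initial infection

for t in range(steps):
    # devices take a small random step, so proximity contacts change over time
    pos = (pos + rng.normal(0, 0.01, (N, 2))) % 1.0
    # pairwise distances between infected and susceptible devices
    d = np.linalg.norm(pos[infected, None, :] - pos[None, ~infected, :], axis=-1)
    # each susceptible device within range of any infected one may catch the malware
    in_range = (d < r).any(axis=0)
    idx = np.flatnonzero(~infected)[in_range]
    newly = rng.random(idx.size) < p
    infected[idx[newly]] = True

print(f"infected after {steps} steps: {infected.sum()}/{N}")
```

    Averaging the final infection count over many random seeds is the quantity one would then compare against a mean-field prediction.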

    Decays into π+π− of the f0(1370) scalar glueball candidate in pp central exclusive production experiments

    The existence and properties of the f0(1370) scalar meson are rather well established from data on antiproton annihilations at rest. However, conflicting results from Central Exclusive Production (CEP) experiments of the last millennium, together with the neglect of data from antiproton annihilations at rest in H2 and D2 bubble chambers, have cast doubt on the very existence of the f0(1370). Properties of π+π− pairs produced in CEP reactions observed in old data, together with data collected in the current decade at high-energy colliders, show that π+π− decays of the f0(1370) meson are directly observable as an isolated peak between 1.1 and 1.6 GeV. Consequences of this observation and prospects for the identification of the scalar glueball ground state are discussed. Comment: 20 pages, 11 figures

    Bryuno Function and the Standard Map

    For the standard map, the homotopically non-trivial invariant curves with rotation number satisfying the Bryuno condition are shown to be analytic in the perturbative parameter, provided the latter is small enough. The radius of convergence of the Lindstedt series, sometimes called the critical function of the standard map, is studied, and its relation with the Bryuno function is derived: the logarithm of the radius of convergence plus twice the Bryuno function is proved to be bounded (from below and from above) uniformly in the rotation number. Comment: 33 pages
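
    Restated symbolically (the notation ρ(ω) for the radius of convergence and B(ω) for the Bryuno function is assumed here, not taken from the paper), the abstract's main bound reads:

```latex
\exists\, C > 0 \text{ (uniform in } \omega \text{)}: \qquad
\bigl|\, \log \rho(\omega) + 2\, B(\omega) \,\bigr| \le C
\quad \text{for all } \omega \text{ satisfying the Bryuno condition.}
```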

    Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

    In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features, extracted from still images, in compact local and global covariance descriptors. The space of covariance matrices has the geometry of Symmetric Positive Definite (SPD) matrices. By classifying static facial expressions with a Support Vector Machine (SVM) using a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than standard classification with fully connected layers and softmax. In addition, we propose a novel solution for modeling the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. Extending the classification pipeline of covariance descriptors, we apply SVM with valid positive-definite kernels derived from global alignment to classify deep covariance trajectories. Through extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches. Comment: A preliminary version of this work appeared in "Otberdout N, Kacem A, Daoudi M, Ballihi L, Berretti S. Deep Covariance Descriptors for Facial Expression Recognition, in British Machine Vision Conference 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018; 2018: 159." arXiv admin note: substantial text overlap with arXiv:1805.0386
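
    As a rough illustration of the covariance-descriptor idea (not the paper's pipeline: the paper classifies with a Gaussian kernel defined directly on the SPD manifold, while this sketch uses the common log-Euclidean simplification), here is how one might turn a set of DCNN feature vectors into an SPD descriptor and then into a Euclidean vector for a standard SVM; the feature dimensions are hypothetical.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features, eps=1e-6):
    """SPD covariance descriptor from a set of DCNN feature vectors.

    features: (n, d) array, one d-dimensional feature per local region.
    eps regularizes the estimate so the matrix is strictly positive definite.
    """
    c = np.cov(features, rowvar=False)
    return c + eps * np.eye(c.shape[0])

def log_euclidean_vector(spd):
    """Map an SPD matrix to a Euclidean vector via the matrix logarithm.

    A common simplification; the paper instead uses a Gaussian kernel
    defined directly on the SPD manifold.
    """
    L = logm(spd)
    # the log of an SPD matrix is symmetric: keep the upper triangle only
    iu = np.triu_indices(L.shape[0])
    return L[iu].real

# Hypothetical usage with random stand-in "deep features":
feats = np.random.default_rng(1).normal(size=(49, 64))  # e.g. a 7x7 spatial grid
x = log_euclidean_vector(covariance_descriptor(feats))
print(x.shape)  # (64 * 65 // 2,) = (2080,)
```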

    Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets

    In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the motion of facial landmarks as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics of different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed to sequences of landmarks and then to image sequences by editing the texture information using another conditional Generative Adversarial Network. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our proposed approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating realistic videos with continuous motion, realistic appearance, and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer, and data augmentation for training improved emotion recognition models.
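
    One standard way to realize "landmark motion as curves encoded as points on a hypersphere" is the square-root velocity function (SRVF); whether the paper uses exactly this encoding is an assumption here, so treat the following as an illustrative sketch only, with hypothetical frame and landmark counts.

```python
import numpy as np

def srvf(landmark_seq, eps=1e-8):
    """Square-root velocity representation of a facial-landmark trajectory.

    landmark_seq: (T, K, 2) array, K 2-D landmarks over T frames.
    Returns q of shape (T-1, K*2); after global normalization the curve has
    unit L2 norm, i.e. it lies on a unit hypersphere, which is one standard
    way to realize the spherical encoding the abstract describes.
    """
    x = landmark_seq.reshape(landmark_seq.shape[0], -1)   # flatten landmarks
    v = np.diff(x, axis=0)                                # velocity along the curve
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    q = v / np.sqrt(speed + eps)                          # SRVF: v / sqrt(|v|)
    return q / (np.linalg.norm(q) + eps)                  # project onto the unit sphere

# Hypothetical usage: 30 frames of 68 standard facial landmarks.
seq = np.random.default_rng(2).normal(size=(30, 68, 2))
q = srvf(seq)
print(q.shape, np.linalg.norm(q))  # (29, 136) 1.0
```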

    SPEAKER VGG CCT: Cross-corpus Speech Emotion Recognition with Speaker Embedding and Vision Transformers

    In recent years, Speech Emotion Recognition (SER) has been investigated mainly by transforming the speech signal into spectrograms, which are then classified using Convolutional Neural Networks pretrained on generic images and fine-tuned with spectrograms. In this paper, we start from this general idea and develop a new learning solution for SER based on Compact Convolutional Transformers (CCTs) combined with a speaker embedding. In CCTs, the learning power of Vision Transformers (ViT) is combined with a reduced need for large volumes of data, made possible by the convolutions. This is important in SER, where large corpora of data are usually not available. The speaker embedding allows the network to extract an identity representation of the speaker, which is then integrated, by means of a self-attention mechanism, with the features that the CCT extracts from the spectrogram. Overall, the solution is capable of operating in real time, showing promising results in a cross-corpus scenario, where training and test datasets are kept separate. Experiments have been performed on several benchmarks in a cross-corpus setting, which is rarely used in the literature, with results that are comparable or superior to those obtained with state-of-the-art network architectures. Our code is available at https://github.com/JabuMlDev/Speaker-VGG-CCT.
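
    A hedged sketch of the preprocessing step the abstract starts from, turning a speech waveform into a spectrogram "image" suitable for an image-style network; the sampling rate, mel-band count, and normalization below are illustrative choices, not the paper's.

```python
import numpy as np
import librosa

def speech_to_logmel(path, sr=16000, n_mels=128):
    """Convert a speech file to a normalized log-mel spectrogram.

    The parameters (sr, n_mels, dB scaling) are illustrative, not the
    ones used in the paper.
    """
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    # normalize to [0, 1] so it can be fed to an image-style network (e.g. a CCT)
    return (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-8)

# spec = speech_to_logmel("utterance.wav")   # hypothetical input file
# spec.shape -> (n_mels, n_frames)
```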