
    Latent-based adversarial neural networks for facial affect estimations

    Communication presented at the 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), held 16–20 November 2020 in Buenos Aires, Argentina.

    There is growing interest in affective computing research given its crucial role in bridging humans and computers. Progress has recently accelerated thanks to the emergence of larger datasets. One recent advance in this field is the use of adversarial learning to improve model learning through augmented samples. However, the use of latent features, which adversarial learning makes feasible, has not yet been widely explored. This technique may also improve the performance of affective models, as has been demonstrated in related fields such as computer vision. To expand this analysis, in this work we explore the use of latent features through our proposed adversarial-based networks for valence and arousal recognition in the wild. Specifically, our models operate by aggregating several modalities into our discriminator, which is further conditioned on the latent features extracted by the generator. Our experiments on the recently released SEWA dataset show progressive improvements in our results. Finally, we report competitive results on the Affective Behavior Analysis in-the-Wild (ABAW) challenge dataset.

    This work is partly supported by the Spanish Ministry of Economy and Competitiveness under project grant TIN2017-90124-P, the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), and the donation bahi2018-19 to the CMTech at UPF. Further funding has been received from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 826506 (sustAGE).
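    The core idea of the discriminator described above — aggregating several modality features and conditioning on a latent code produced on the generator side — can be sketched roughly as follows. This is a toy NumPy sketch, not the authors' implementation: the `encoder` function, the single hidden layer, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Hypothetical generator-side encoder: projects input features to a latent code."""
    return np.tanh(x @ W)

def discriminator(modalities, latent, W1, W2):
    """Conditional discriminator: aggregates modality features and is
    conditioned on the generator's latent code by concatenation."""
    fused = np.concatenate(modalities + [latent])  # modality aggregation + latent conditioning
    h = np.maximum(0.0, fused @ W1)                # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))         # sigmoid real/fake probability

# Toy dimensions (illustrative only)
d_vis, d_aud, d_lat, d_hid = 8, 4, 6, 16
W_enc = rng.standard_normal((d_vis, d_lat))
W1 = rng.standard_normal((d_vis + d_aud + d_lat, d_hid))
W2 = rng.standard_normal(d_hid)

visual = rng.standard_normal(d_vis)   # e.g. face-image features
audio = rng.standard_normal(d_aud)    # e.g. speech features
z = encoder(visual, W_enc)            # latent code from the generator side

p_real = discriminator([visual, audio], z, W1, W2)
assert 0.0 < p_real < 1.0
```

    In training, this output would feed an adversarial loss so that the discriminator's multimodal judgment is shaped by the generator's latent representation; the sketch only shows the forward pass.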

    SiaMemory: Target Tracking

    This paper proposes, develops and evaluates a novel object-tracking algorithm that outperforms state-of-the-art methods in terms of robustness. The proposed method combines Siamese networks, Recurrent Convolutional Neural Networks (RCNNs) and Long Short-Term Memory (LSTM) and performs short-term target tracking in real time. Because Siamese networks generate the current frame's tracking target based only on the previous frame's image information, they are less effective at handling target appearance and disappearance, rapid movement, or deformation. Hence, we propose a novel tracking method that integrates improved fully convolutional Siamese networks based on all-CNN, RCNN and LSTM. To improve the training efficiency of the deep learning network, a strategy of segmented training based on transfer learning is proposed. For test video sequences featuring background clutter, deformation, motion blur, fast motion and out-of-view targets, our method achieves the best tracking performance. Using 41 videos from the Object Tracking Benchmark (OTB) dataset and considering the area under the curve for precision and success rate, our method outperforms the second best by 18.5% and 14.9% respectively.
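    The Siamese matching step underlying trackers like the one above can be sketched in a much-simplified form: embed the target template and every candidate window with a shared embedding, and pick the best-correlating location. This is a toy NumPy sketch, not the paper's method; `embed` is a stand-in for the shared convolutional branch, and the LSTM memory is replaced by a plainly labeled exponential-moving-average template update.

```python
import numpy as np

def embed(patch):
    """Hypothetical shared Siamese embedding: here just flatten + L2-normalize."""
    v = patch.ravel().astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def track(frame, template_vec, win):
    """Slide a window over the frame and return the location whose embedding
    correlates best with the template (the Siamese matching step)."""
    H, W = frame.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(H - win + 1):
        for j in range(W - win + 1):
            score = embed(frame[i:i + win, j:j + win]) @ template_vec
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(1)
frame0 = rng.random((12, 12))
target = frame0[3:7, 3:7]              # target patch in the first frame
tmpl = embed(target)

frame1 = rng.random((12, 12))
frame1[4:8, 4:8] = target              # target moved by (1, 1) in the next frame
pos, score = track(frame1, tmpl, 4)
assert pos == (4, 4)

# Toy stand-in for the recurrent (LSTM) memory: blend the template with the
# newly found appearance (an exponential moving average, not a real LSTM).
found = frame1[pos[0]:pos[0] + 4, pos[1]:pos[1] + 4]
tmpl = embed(0.9 * target + 0.1 * found)
```

    A full tracker would replace the exhaustive window scan with a convolutional cross-correlation over feature maps and replace the moving-average update with a learned recurrent state, which is where the RCNN/LSTM components of the paper come in.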