84 research outputs found

    Generating 3D faces using Convolutional Mesh Autoencoders

    Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation, capturing non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://github.com/anuragranj/com
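    The core building block described above is a spectral convolution defined directly on the mesh graph. Below is a minimal, hypothetical sketch of such a Chebyshev-style spectral convolution over mesh vertices in NumPy; the toy 4-vertex mesh, filter order, and channel sizes are illustrative assumptions, not the authors' actual CoMA architecture or code.

import numpy as np

def normalized_laplacian(adj):
    """Scaled graph Laplacian L_hat = 2L/lmax - I used by Chebyshev filters."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    lmax = np.linalg.eigvalsh(lap).max()
    return 2.0 * lap / lmax - np.eye(adj.shape[0])

def cheb_conv(x, lap_hat, weights):
    """Chebyshev convolution: sum_k T_k(L_hat) x W_k, with x of shape (V, F_in)."""
    K = len(weights)
    t_prev, t_curr = x, lap_hat @ x
    out = t_prev @ weights[0]
    if K > 1:
        out += t_curr @ weights[1]
    for k in range(2, K):
        t_next = 2.0 * lap_hat @ t_curr - t_prev   # Chebyshev recurrence
        out += t_next @ weights[k]
        t_prev, t_curr = t_curr, t_next
    return out

# Toy example: 4-vertex mesh patch with 3D coordinates as per-vertex input features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
verts = rng.normal(size=(4, 3))                         # per-vertex xyz features
w = [rng.normal(size=(3, 8)) * 0.1 for _ in range(3)]   # order-3 filter, 8 output channels
features = cheb_conv(verts, normalized_laplacian(adj), w)
print(features.shape)                                   # (4, 8)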

    Can Musical Emotion Be Quantified With Neural Jitter Or Shimmer? A Novel EEG Based Study With Hindustani Classical Music

    The terms jitter and shimmer have long been used in speech and acoustic signal analysis as parameters for speaker identification and other prosodic features. In this study, we use the same parameters in the neural domain to identify and categorize emotional cues in different musical clips. We chose two ragas of Hindustani music conventionally known to portray contrasting emotions, and conducted an EEG study on 5 participants who listened to a 3-minute clip of each raga with a sufficient resting period in between. The neural jitter and shimmer components were evaluated for each experimental condition. The results reveal interesting information about domain-specific arousal of the human brain in response to musical stimuli, as well as about trait characteristics of individuals. This novel study can have far-reaching implications for the modeling of emotional appraisal. The results and implications are discussed in detail.
    Comment: 6 pages, 12 figures. Presented at the 4th International Conference on Signal Processing and Integrated Networks (SPIN) 201
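    As a rough illustration of the signal-level quantities involved, the sketch below computes jitter (relative cycle-to-cycle variation of peak-to-peak periods) and shimmer (relative variation of peak amplitudes) for a 1-D quasi-periodic signal. The peak-picking, sampling rate, and synthetic test signal are assumptions for illustration only, not the authors' EEG processing pipeline.

import numpy as np

def jitter_shimmer(signal, fs):
    """Return (jitter, shimmer) from local maxima of a quasi-periodic signal."""
    # crude peak picking: samples strictly larger than both neighbours
    peaks = np.flatnonzero((signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])) + 1
    periods = np.diff(peaks) / fs                  # cycle lengths in seconds
    amps = signal[peaks]                           # peak amplitudes
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    shimmer = np.mean(np.abs(np.diff(amps))) / np.mean(np.abs(amps))
    return jitter, shimmer

# Synthetic quasi-periodic test signal with slow amplitude and frequency modulation.
fs = 256.0
t = np.arange(0.0, 3.0, 1.0 / fs)
freq = 10.0 + 0.3 * np.sin(2 * np.pi * 0.7 * t)    # slightly varying instantaneous frequency
phase = 2 * np.pi * np.cumsum(freq) / fs
sig = (1.0 + 0.05 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(phase)
print(jitter_shimmer(sig, fs))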

    ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations

    Graph Neural Networks (GNNs) have been shown to work effectively for modeling graph-structured data to solve tasks such as node classification, link prediction and graph classification. There has been some recent progress in defining the notion of pooling in graphs, whereby the model generates a graph-level representation by downsampling and summarizing the information present in the nodes. Existing pooling methods either fail to effectively capture graph substructure or do not scale easily to large graphs. In this work, we propose ASAP (Adaptive Structure Aware Pooling), a sparse and differentiable pooling method that addresses the limitations of previous graph pooling architectures. ASAP utilizes a novel self-attention network along with a modified GNN formulation to capture the importance of each node in a given graph. It also learns a sparse soft cluster assignment for nodes at each layer to effectively pool the subgraphs into the pooled graph. Through extensive experiments on multiple datasets and theoretical analysis, we motivate our choice of the components used in ASAP. Our experimental results show that combining existing GNN architectures with ASAP leads to state-of-the-art results on multiple graph classification benchmarks. ASAP achieves an average improvement of 4% over the current sparse hierarchical state-of-the-art method.
    Comment: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020)
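    To make the pooling idea concrete, here is a minimal, hypothetical sketch of hierarchical graph pooling in the spirit described above: score nodes/clusters, keep the top-k, and coarsen features and adjacency through a row-normalized soft assignment matrix. The tanh scoring function, the pooling ratio, and the toy graph are assumptions; ASAP's actual attention-based cluster formation and LEConv scoring are not reproduced here.

import numpy as np

def soft_cluster_pool(x, adj, w_score, ratio=0.5):
    """Pool a graph (x: V x F features, adj: V x V) down to ceil(ratio*V) clusters."""
    n = x.shape[0]
    k = max(1, int(np.ceil(ratio * n)))
    # fitness score per node/cluster (stand-in for an attention-based score)
    scores = np.tanh(x @ w_score).ravel()
    keep = np.argsort(scores)[-k:]                 # indices of retained clusters
    # soft assignment: each node contributes to retained neighbours, row-normalised
    s = (adj + np.eye(n))[:, keep]
    s = s / np.maximum(s.sum(axis=1, keepdims=True), 1e-9)
    # scale retained features by their scores so the scoring weights stay differentiable
    x_pool = s.T @ (x * scores[:, None])
    adj_pool = s.T @ adj @ s                       # coarsened connectivity
    return x_pool, adj_pool

# Toy undirected 6-node graph with random features.
rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.4).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # symmetric, no self-loops
x = rng.normal(size=(6, 4))
w = rng.normal(size=(4, 1))
x_p, a_p = soft_cluster_pool(x, adj, w)
print(x_p.shape, a_p.shape)                        # (3, 4) (3, 3)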