Semi-Implicit Stochastic Recurrent Neural Networks
Stochastic recurrent neural networks with latent random variables of complex
dependency structures have been shown to be more successful in modeling
sequential data than deterministic deep models. However, the majority of
existing methods
have limited expressive power due to the Gaussian assumption of latent
variables. In this paper, we advocate learning implicit latent representations
using semi-implicit variational inference to further increase model
flexibility. A semi-implicit stochastic recurrent neural network (SIS-RNN) is
developed to enrich inferred model posteriors that may have no analytic density
functions, as long as independent random samples can be generated via
reparameterization. Extensive experiments in different tasks on real-world
datasets show that SIS-RNN outperforms the existing methods.
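
To illustrate the sampling scheme the abstract describes, the sketch below
draws a latent variable by pushing simple noise through a neural network (the
implicit mixing distribution) and then reparameterizing an explicit Gaussian
layer conditioned on it: the resulting marginal has no analytic density, yet
independent samples stay differentiable. This is a minimal PyTorch sketch, not
the paper's SIS-RNN architecture; the names (SemiImplicitSampler, noise_dim,
etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class SemiImplicitSampler(nn.Module):
    """Hypothetical sketch: semi-implicit sampling with reparameterization.

    An implicit mixing distribution (noise pushed through an MLP) sets the
    parameters of an explicit Gaussian layer, so the marginal of z has no
    analytic density but samples remain differentiable."""

    def __init__(self, noise_dim, hidden_dim, latent_dim):
        super().__init__()
        self.noise_dim = noise_dim
        # Implicit mixing network: maps simple noise to a mixing variable psi.
        self.mixing = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Explicit conditional layer: Gaussian parameters depend on psi.
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, batch_size):
        eps = torch.randn(batch_size, self.noise_dim)
        psi = self.mixing(eps)                      # implicit mixing sample
        mu, logvar = self.mu(psi), self.logvar(psi)
        # Reparameterization trick on the explicit Gaussian layer.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z


# Usage: each call yields fresh, reparameterized latent samples.
sampler = SemiImplicitSampler(noise_dim=8, hidden_dim=64, latent_dim=16)
z = sampler(batch_size=32)  # shape: (32, 16)
```
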
Bayesian Graph Neural Networks with Adaptive Connection Sampling
We propose a unified framework for adaptive connection sampling in graph
neural networks (GNNs) that generalizes existing stochastic regularization
methods for training GNNs. The proposed framework not only alleviates
over-smoothing and over-fitting tendencies of deep GNNs, but also enables
learning with uncertainty in graph analytic tasks with GNNs. Instead of using
fixed sampling rates or hand-tuning them as model hyperparameters in existing
stochastic regularization methods, our adaptive connection sampling can be
trained jointly with GNN model parameters in both global and local fashions.
GNN training with adaptive connection sampling is shown to be mathematically
equivalent to an efficient approximation of training Bayesian GNNs.
Experimental results with ablation studies on benchmark datasets validate that
adaptively learning the sampling rate given graph training data is the key to
boosting the performance of GNNs in semi-supervised node classification,
making them less prone to over-smoothing and over-fitting and yielding more
robust predictions.
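
To make the idea of trainable sampling rates concrete, here is one way such a
mechanism could be implemented: each connection is kept with a learnable
probability, relaxed via a binary Concrete (Gumbel-sigmoid) distribution so
the rate receives gradients jointly with the GNN weights. This is an
illustrative sketch under that relaxation, not the paper's exact formulation;
the names (AdaptiveEdgeSampler, keep_logit, temperature) are hypothetical.

```python
import torch
import torch.nn as nn

class AdaptiveEdgeSampler(nn.Module):
    """Hypothetical sketch: adaptive connection sampling for a GNN layer.

    The keep probability of each edge is a trainable parameter, relaxed
    with a binary Concrete distribution so it can be learned jointly with
    the GNN weights instead of being hand-tuned as a hyperparameter."""

    def __init__(self, init_keep_prob=0.9, temperature=0.1):
        super().__init__()
        p = torch.tensor(float(init_keep_prob))
        # Learn the keep probability through its unconstrained logit.
        self.keep_logit = nn.Parameter(torch.log(p) - torch.log1p(-p))
        self.temperature = temperature

    def forward(self, edge_weights):
        keep_prob = torch.sigmoid(self.keep_logit)
        if self.training:
            # Binary Concrete relaxation of Bernoulli(keep_prob): a
            # differentiable soft mask in (0, 1) for each edge.
            u = torch.rand_like(edge_weights).clamp(1e-6, 1 - 1e-6)
            logits = (torch.log(keep_prob) - torch.log1p(-keep_prob)
                      + torch.log(u) - torch.log1p(-u))
            mask = torch.sigmoid(logits / self.temperature)
        else:
            mask = keep_prob  # use the expected mask at test time
        return edge_weights * mask


# Usage: mask the non-zero entries of a sparse adjacency during training.
sampler = AdaptiveEdgeSampler(init_keep_prob=0.9)
edge_weights = torch.ones(500)   # one weight per edge
dropped = sampler(edge_weights)  # soft-masked edges, gradients flow to the rate
```
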