Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling
We propose a generic confidence-based approximation that can be plugged in
to simplify the auto-regressive generation process, with proven convergence.
We first assume that the priors of future samples can be generated in an
independently and identically distributed (i.i.d.) manner using an efficient
predictor. Given the past samples and future priors, the mother AR model can
post-process the priors while an accompanying confidence predictor decides
whether each sample needs resampling. Thanks to the i.i.d.
assumption, the post-processing can update all samples in parallel, which
substantially accelerates the mother model. Our experiments on different data
domains, including sequences and images, show that the proposed method
successfully captures the complex structures of the data and generates
meaningful future samples at lower computational cost, while preserving the
sequential relationships in the data.
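The generation loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `prior_predictor`, `ar_post_process`, `confidence`, and the threshold value are all hypothetical stand-ins chosen to show the control flow (parallel proposal, confidence check, selective resampling).

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_predictor(n):
    # Hypothetical efficient predictor: draws n i.i.d. prior proposals.
    return rng.normal(size=n)

def ar_post_process(past, priors):
    # Hypothetical "mother" AR step: because the priors are i.i.d., it can
    # condition every proposal on the past context in one parallel pass.
    context = past.mean() if len(past) else 0.0
    return 0.9 * context + 0.1 * priors

def confidence(samples):
    # Hypothetical confidence predictor: scores each sample in [0, 1].
    return np.exp(-samples ** 2)

def generate(n_steps, threshold=0.5, max_rounds=10):
    past = np.array([])
    samples = ar_post_process(past, prior_predictor(n_steps))
    for _ in range(max_rounds):
        low = confidence(samples) < threshold
        if not low.any():
            break
        # Resample only the low-confidence positions; the i.i.d. assumption
        # lets all of them be updated together rather than one by one.
        samples[low] = ar_post_process(past, prior_predictor(low.sum()))
    return samples
```

The key point the sketch captures is that resampling touches only the positions the confidence predictor flags, so the expensive AR model runs over the whole block in parallel instead of strictly sample by sample.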