Iterative Amortized Inference
Inference models are a key component in scaling variational inference to deep
latent variable models, most notably as encoder networks in variational
auto-encoders (VAEs). By replacing conventional optimization-based inference
with a learned model, inference is amortized over data examples and therefore
more computationally efficient. However, standard inference models are
restricted to direct mappings from data to approximate posterior estimates. The
failure of these models to reach fully optimized approximate posterior
estimates results in an amortization gap. We aim to close this gap by
proposing iterative inference models, which learn to perform inference
optimization through repeatedly encoding gradients. Our approach generalizes
standard inference models in VAEs and provides insight into several empirical
findings, including top-down inference techniques. We demonstrate the inference
optimization capabilities of iterative inference models and show that they
outperform standard inference models on several benchmark data sets of images
and text.
Comment: International Conference on Machine Learning (ICML) 2018
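Below is a minimal sketch of the iterative inference loop the abstract describes: rather than mapping data directly to posterior estimates, an inference network repeatedly encodes the current estimate together with the gradient of the objective and outputs an update. It assumes a PyTorch setting with a crude point-estimate surrogate for the ELBO; the layer sizes, iteration count, and names such as update_net are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

latent_dim, data_dim = 8, 32

# Hypothetical generative model: a linear decoder for p(x|z) with a standard
# normal prior, which acts as an L2 penalty on the latent estimate below.
decoder = nn.Linear(latent_dim, data_dim)

def neg_elbo(x, mu):
    # Point-estimate surrogate for the negative ELBO: reconstruction error
    # plus the prior penalty on mu.
    return ((decoder(mu) - x) ** 2).sum() + 0.5 * (mu ** 2).sum()

# Iterative inference model: encodes (current estimate, its gradient) and
# produces an additive update, instead of mapping x directly to the posterior.
update_net = nn.Linear(2 * latent_dim, latent_dim)

x = torch.randn(data_dim)
mu = torch.zeros(latent_dim)

for _ in range(5):  # a few inference iterations per data example
    mu_leaf = mu.detach().requires_grad_()
    grad, = torch.autograd.grad(neg_elbo(x, mu_leaf), mu_leaf)
    mu = mu + update_net(torch.cat([mu.detach(), grad.detach()]))

# Training the inference model: backpropagate the final objective through the
# accumulated updates into update_net (and the decoder).
loss = neg_elbo(x, mu)
loss.backward()
print(float(loss))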
Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
It is desirable to train convolutional networks (CNNs) to run more
efficiently during inference. In many cases, however, the computational budget
available for inference cannot be known beforehand during training, or the
inference budget depends on changing real-time resource availability. It is
therefore inadequate to train only inference-efficient CNNs, whose inference
costs are fixed and cannot adapt to varying inference budgets. We propose a
novel approach for cost-adjustable inference in CNNs:
Stochastic Downsampling Point (SDPoint). During training, SDPoint applies
feature map downsampling to a random point in the layer hierarchy, with a
random downsampling ratio. The different stochastic downsampling configurations
of the same model, known as SDPoint instances, have different computational
costs yet are trained to minimize the same prediction loss. Sharing network
parameters across instances provides a significant regularization boost. During
inference, one can handpick the SDPoint instance that best fits the inference
budget. The effectiveness of SDPoint, as
both a cost-adjustable inference approach and a regularizer, is validated
through extensive experiments on image classification …
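Below is a minimal sketch of the stochastic downsampling mechanism the abstract describes: at each training step one point in the layer stack and one ratio are sampled, and the feature maps are downsampled there, while all SDPoint instances share the same parameters. It assumes a PyTorch setting; the block definitions, the candidate ratios, and the forward/sdpoint names are illustrative assumptions, not the paper's implementation.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy stack of convolutional blocks standing in for the layer hierarchy.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()),
])

def forward(x, sdpoint=None):
    # sdpoint is (block_index, ratio); None means full-cost inference
    # with no extra downsampling.
    for i, block in enumerate(blocks):
        if sdpoint is not None and sdpoint[0] == i and sdpoint[1] < 1.0:
            x = F.interpolate(x, scale_factor=sdpoint[1], mode='bilinear',
                              align_corners=False)
        x = block(x)
    return x

# Training: sample a fresh SDPoint instance per step, so configurations with
# different costs are trained to minimize the same loss with shared weights.
x = torch.randn(2, 3, 32, 32)
sdpoint = (random.randrange(len(blocks)), random.choice([0.5, 0.75, 1.0]))
y = forward(x, sdpoint)

# Inference: hand-pick the instance that fits the available budget.
y_cheap = forward(x, (0, 0.5))
print(y.shape, y_cheap.shape)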