
PARAMETRIC INFORMATION BOTTLENECK TO OPTIMIZE STOCHASTIC NEURAL NETWORKS

Abstract

In this thesis, we present a layer-wise learning method for Stochastic Neural Networks (SNNs) from an information-theoretic perspective. For each layer of an SNN, compression and relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all layers of an SNN to better exploit the neural network's representation. The Information Bottleneck (IB) [1] extracts the information in an input variable that is relevant to a target variable. Here, we propose the Parametric Information Bottleneck (PIB) for a neural network, which uses only the model parameters explicitly to approximate the compression and the relevance. We show that the PIB framework can be considered an extension of the Maximum Likelihood Estimate (MLE) principle to every layer. We also show that, compared to the MLE principle, PIB (i) improves the generalization of neural networks in classification tasks, (ii) generates better samples in multi-modal prediction, and (iii) exploits a neural network's representation more efficiently by pushing it closer to the optimal information-theoretic representation more quickly. Our PIB framework therefore shows, from an information-theoretic perspective, great potential for exploiting the representative power of neural networks that has not yet been fully utilized.
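As a rough sketch of the kind of objective described above (not the thesis's exact formulation), the layer-wise IB trade-off for stochastic representations Z_1, ..., Z_L of input X and target Y can be written in LaTeX as below, where the trade-off coefficient \beta and the simple sum over layers are assumptions for illustration:

    \min_{\theta} \; \sum_{l=1}^{L} \Big[ \underbrace{I(X; Z_l)}_{\text{compression}} \;-\; \beta \, \underbrace{I(Z_l; Y)}_{\text{relevance}} \Big]

Here \theta denotes the network parameters and Z_l the stochastic output of layer l; the thesis's PIB approximates both mutual-information terms using only these model parameters, though its precise weighting and estimators may differ from this sketch.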
