431 research outputs found
Efficient learning of Bayesian networks with bounded tree-width
Learning Bayesian networks with bounded tree-width has attracted much attention recently, because low tree-width allows exact inference to be performed efficiently. Some existing methods [24,29] tackle the problem by using k-trees to learn the optimal Bayesian network with tree-width up to k. Finding the best k-tree, however, is computationally intractable. In this paper, we propose a sampling method that efficiently finds representative k-trees by introducing an informative score function to characterize the quality of a k-tree. To further improve the quality of the k-trees, we propose a probabilistic hill climbing approach that locally refines the sampled k-trees. The proposed algorithm can efficiently learn a high-quality Bayesian network with tree-width at most k. Experimental results demonstrate that our approach is more computationally efficient than the exact methods, with comparable accuracy, and outperforms most existing approximate methods.
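The sample-then-refine pattern described in this abstract can be sketched in a few lines. The sketch below is a toy stand-in, not the paper's algorithm: "structures" are bounded parent sets rather than true k-trees, and the score function is a hypothetical placeholder (a real method would score against data, e.g. with BIC or BDeu). It only illustrates sampling candidates and probabilistically hill-climbing on them, occasionally accepting worse moves to escape local optima.

```python
import random

random.seed(1)

def sample_structure(n_vars, k):
    # Toy stand-in for sampling a k-tree: each node gets at most k parents
    # drawn from lower-indexed nodes (keeps the graph acyclic).
    return [sorted(random.sample(range(i), min(i, random.randint(0, k))))
            for i in range(n_vars)]

def score(structure):
    # Hypothetical score for illustration only; rewards one parent per node.
    return -sum(abs(len(parents) - 1) for parents in structure)

def hill_climb(structure, k, steps=200, p_accept_worse=0.1):
    # Probabilistic hill climbing: greedy moves, but worse moves are
    # sometimes accepted, so the search can leave local optima.
    best, best_s = structure, score(structure)
    cur, cur_s = best, best_s
    for _ in range(steps):
        cand = [list(p) for p in cur]
        i = random.randrange(1, len(cand))  # resample one node's parents
        cand[i] = sorted(random.sample(range(i), min(i, random.randint(0, k))))
        s = score(cand)
        if s > cur_s or random.random() < p_accept_worse:
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
    return best, best_s

# Sample many candidate structures, then locally refine each one.
samples = [sample_structure(8, k=2) for _ in range(20)]
refined = [hill_climb(s, k=2) for s in samples]
print(max(s for _, s in refined))
```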
Adversarial Reprogramming of Text Classification Neural Networks
Adversarial Reprogramming has demonstrated success in utilizing pre-trained
neural network classifiers for alternative classification tasks without
modification to the original network. An adversary in such an attack scenario
trains an additive contribution to the inputs to repurpose the neural network
for the new classification task. While this reprogramming approach works for
neural networks with a continuous input space such as that of images, it is not
directly applicable to neural networks trained for tasks such as text
classification, where the input space is discrete. Repurposing such
classification networks would require the attacker to learn an adversarial
program that maps inputs from one discrete space to the other. In this work, we
introduce a context-based vocabulary remapping model to reprogram neural
networks trained on a specific sequence classification task, for a new sequence
classification task desired by the adversary. We propose training procedures
for this adversarial program in both white-box and black-box settings. We
demonstrate the application of our model by adversarially repurposing various
text-classification models including LSTM, bi-directional LSTM and CNN for
alternate classification tasks.
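The repurposing idea can be illustrated with a deliberately tiny stand-in. The paper learns a context-based remapping with white-box and black-box training procedures; the sketch below simplifies both (a context-free mapping table, trained by black-box random search), and the "victim" classifier, vocabularies, and new task are all hypothetical. It shows only the core mechanic: mapping tokens of the adversary's task into the frozen victim's vocabulary so the victim's output solves the new task.

```python
import random

random.seed(0)

# Frozen "victim" classifier (hypothetical): a bag-of-words sentiment rule.
POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "awful"}
def victim(tokens):
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return int(score > 0)

# Adversary's new task: does a 9-bit string contain more 1s than 0s?
def make_example():
    bits = [random.choice("01") for _ in range(9)]
    return bits, int(bits.count("1") > bits.count("0"))

train = [make_example() for _ in range(200)]
victim_vocab = sorted(POSITIVE | NEGATIVE)

def accuracy(mapping):
    # Remap each adversary token into the victim's vocabulary, query the
    # unmodified victim, and check its output against the new task's label.
    return sum(victim([mapping[b] for b in bits]) == y for bits, y in train) / len(train)

# Black-box search over remapping tables (a stand-in for the paper's
# gradient-based training of a context-based remapping model).
best, best_acc = None, 0.0
for _ in range(500):
    mapping = {"0": random.choice(victim_vocab), "1": random.choice(victim_vocab)}
    acc = accuracy(mapping)
    if acc > best_acc:
        best, best_acc = mapping, acc
print(best, best_acc)
```

The search converges on mapping "1" to a positive token and "0" to a negative one, so the victim's sentiment decision reproduces the majority-bit label exactly.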
Random deep neural networks are biased towards simple functions
We prove that the binary classifiers of bit strings generated by random wide
deep neural networks with ReLU activation function are biased towards simple
functions. The simplicity is captured by the following two properties. For any
given input bit string, the average Hamming distance of the closest input bit
string with a different classification is at least $\sqrt{n / (2\pi \log n)}$,
where n is the length of the string. Moreover, if the bits of the initial
string are flipped randomly, the average number of flips required to change the
classification grows linearly with n. These results are confirmed by numerical
experiments on deep neural networks with two hidden layers, and settle the
conjecture stating that random deep neural networks are biased towards simple
functions. This conjecture was proposed and numerically explored in [Valle
P\'erez et al., ICLR 2019] to explain the unreasonably good generalization
properties of deep learning algorithms. The probability distribution of the
functions generated by random deep neural networks is a good choice for the
prior probability distribution in the PAC-Bayesian generalization bounds. Our
results constitute a fundamental step forward in the characterization of this
distribution, therefore contributing to the understanding of the generalization
properties of deep learning algorithms.
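The bit-flipping experiment described in this abstract can be reproduced in miniature. The sketch below uses a small pure-Python two-hidden-layer ReLU network with hypothetical widths and 1/sqrt(fan-in) Gaussian initialization (not necessarily the paper's exact setup), encodes bits as ±1, and measures how many random flips are needed on average before the network's sign changes.

```python
import math
import random

random.seed(0)

def random_relu_net(n, width):
    # Random two-hidden-layer ReLU network; weights ~ N(0, 1/fan_in).
    W1 = [[random.gauss(0, 1 / math.sqrt(n)) for _ in range(n)] for _ in range(width)]
    W2 = [[random.gauss(0, 1 / math.sqrt(width)) for _ in range(width)] for _ in range(width)]
    w3 = [random.gauss(0, 1 / math.sqrt(width)) for _ in range(width)]
    def f(x):
        h1 = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
        h2 = [max(0.0, sum(w * hi for w, hi in zip(row, h1))) for row in W2]
        return sum(w * hi for w, hi in zip(w3, h2))
    return f

def flips_to_change(f, n, trials=30):
    # Flip bits of a random ±1 string one at a time (in random order) and
    # count how many flips it takes to change the sign of f, on average.
    total = 0
    for _ in range(trials):
        x = [random.choice([-1.0, 1.0]) for _ in range(n)]
        s0 = f(x) >= 0
        for k, i in enumerate(random.sample(range(n), n), 1):
            x[i] = -x[i]
            if (f(x) >= 0) != s0:
                total += k
                break
        else:
            total += n  # classification never changed
    return total / trials

net = random_relu_net(32, 32)
avg = flips_to_change(net, 32)
print(avg)
```

If the simplicity bias holds, the average should grow roughly linearly as n is increased, rather than staying O(1).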
- …