Improved Dropout for Shallow and Deep Learning
Dropout has achieved great success in training deep neural
networks by independently zeroing out the outputs of neurons at random. It has
also received a surge of interest for shallow learning, e.g., logistic
regression. However, the independent sampling for dropout could be suboptimal
for the sake of convergence. In this paper, we propose to use multinomial
sampling for dropout, i.e., sampling features or neurons according to a
multinomial distribution with different probabilities for different
features/neurons. To derive the optimal dropout probabilities, we analyze
shallow learning with multinomial dropout and establish the risk bound for
stochastic optimization. By minimizing a sampling dependent factor in the risk
bound, we obtain a distribution-dependent dropout with sampling probabilities
dependent on the second order statistics of the data distribution. To tackle
the issue of evolving distribution of neurons in deep learning, we propose an
efficient adaptive dropout (named \textbf{evolutional dropout}) that computes
the sampling probabilities on-the-fly from a mini-batch of examples. Empirical
studies on several benchmark datasets demonstrate that the proposed dropouts
achieve not only much faster convergence but also a smaller testing error
than the standard dropout. For example, on the CIFAR-100 data, the evolutional
dropout achieves relative improvements over 10\% on the prediction performance
and over 50\% on the convergence speed compared to the standard dropout.
Comment: In NIPS 201
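The abstract describes computing sampling probabilities on-the-fly from second-order statistics of a mini-batch. A minimal NumPy sketch of that idea follows; the exact form of the probabilities and the inverse-probability rescaling are assumptions for illustration, not the paper's precise algorithm:

```python
import numpy as np

def evolutional_dropout(X, keep=0.5, rng=None):
    """Sketch of distribution-dependent dropout on a mini-batch X (n x d).

    Assumed scheme: sampling probabilities p_i proportional to the square
    root of each feature's second moment over the mini-batch; kept entries
    are rescaled by 1/keep_prob so the output is unbiased in expectation.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # second-order statistic of each feature over the mini-batch
    second_moment = np.mean(X ** 2, axis=0)
    p = np.sqrt(second_moment)
    p /= p.sum()                       # multinomial probabilities over features
    k = max(1, int(keep * d))          # expected number of kept features
    # keep feature i with probability min(1, k * p_i)
    keep_prob = np.minimum(1.0, k * p)
    mask = rng.random((n, d)) < keep_prob
    # inverse-probability scaling keeps E[output] == X
    return np.where(mask, X / keep_prob, 0.0)
```

Because the probabilities depend only on the current mini-batch, they adapt as the distribution of neuron outputs evolves during training, which is the motivation the abstract gives for the "evolutional" variant.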
Evolving neural networks with genetic algorithms to study the String Landscape
We study possible applications of artificial neural networks to examine the
string landscape. Since the field of application is rather versatile, we
propose to dynamically evolve these networks via genetic algorithms. This means
that we start from basic building blocks and combine them such that the neural
network performs best for the application we are interested in. We study three
areas in which neural networks can be applied: to classify models according to
a fixed set of (physically) appealing features, to find a concrete realization
for a computation for which the precise algorithm is known in principle but
very tedious to actually implement, and to predict or approximate the outcome
of some involved mathematical computation that is too inefficient to apply
directly, e.g. in model scans within the string landscape. We present simple
examples that arise in string phenomenology for all three types of problems and
discuss how they can be addressed by evolving neural networks from genetic
algorithms.
Comment: 17 pages, 7 figures, references added, typos corrected, extended
introductory section