Adversarial Data Programming: Using GANs to Relax the Bottleneck of Curated Labeled Data
Paucity of large curated hand-labeled training data for every domain-of-interest forms a major bottleneck in the deployment of machine learning models in computer vision and other fields. Recent work (Data Programming) has shown how distant supervision signals in the form of labeling functions can be used to obtain labels for given data in near-constant time. In this work, we present Adversarial Data Programming (ADP), an adversarial methodology that generates data together with a curated aggregated label, given a set of weak labeling functions. We validated our method on the MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, where it outperformed many state-of-the-art models. We conducted extensive experiments to study its usefulness, and showed how the proposed ADP framework can be used for transfer learning as well as multi-task learning, where data from two domains are generated simultaneously using the framework along with the label information. Our future work will involve understanding the theoretical implications of this new framework from a game-theoretic perspective, as well as exploring the performance of the method on more complex datasets.
Comment: CVPR 2018 main conference paper
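As a rough illustration of the weak supervision signal this abstract refers to, the sketch below defines a few hand-written labeling functions over toy images and aggregates their votes by simple majority. The labeling functions, the abstain convention, and the vote-based aggregation are all assumptions made for illustration; ADP itself learns the aggregation adversarially together with a generative model, which this sketch does not attempt to reproduce.

```python
# Minimal sketch (not the ADP implementation): toy weak labeling functions
# and a simple majority-vote aggregation, illustrating the kind of noisy
# supervision signal that ADP consumes. Heuristics and thresholds are
# illustrative assumptions only.
import numpy as np

ABSTAIN = -1  # a labeling function may decline to label an example

def lf_bright(img):
    # Hypothetical heuristic: very bright images are labeled class 1.
    return 1 if img.mean() > 0.7 else ABSTAIN

def lf_dark(img):
    # Hypothetical heuristic: very dark images are labeled class 0.
    return 0 if img.mean() < 0.3 else ABSTAIN

def lf_edges(img):
    # Hypothetical heuristic based on local variation (a crude edge proxy).
    return 1 if np.abs(np.diff(img, axis=0)).mean() > 0.2 else ABSTAIN

def aggregate(votes, num_classes=2):
    # Majority vote over non-abstaining labeling functions; returns ABSTAIN
    # if every function abstained. ADP instead learns this aggregation
    # jointly with the data generator, which this sketch does not attempt.
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return int(np.bincount(votes, minlength=num_classes).argmax())

rng = np.random.default_rng(0)
images = rng.random((5, 28, 28))  # stand-in for MNIST-like inputs
labeling_functions = [lf_bright, lf_dark, lf_edges]

for img in images:
    votes = [lf(img) for lf in labeling_functions]
    print(votes, "->", aggregate(votes))
```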
On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks
Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years. In this work, we theoretically study the importance of noise in the trajectories of gradient descent towards optimal solutions in multi-layer neural networks. We show that adding noise (in different ways) to a neural network while training increases the rank of the product of the weight matrices of a multi-layer linear neural network. We then study how adding noise can assist in reaching a global optimum when the product matrix is full-rank (under certain conditions). We establish theoretical connections between the noise introduced into the neural network - whether to the gradient, to the architecture, or to the input/output of the network - and the rank of the product of the weight matrices. We corroborate our theoretical findings with empirical results.
Comment: 4 pages + 1 figure (main, excluding references), 5 pages + 4 figures (appendix)
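A small numerical illustration of the rank claim, under assumptions of our own choosing (a toy 3-layer linear network with 10x10 weight matrices and Gaussian noise of scale 0.1 added directly to the weights; the paper's analysis also covers noise on the gradient and on the input/output):

```python
# Minimal numerical sketch (all shapes, ranks, and the noise scale are
# illustrative assumptions; this is not the paper's experimental setup).
# It shows one instance of the claim: perturbing otherwise rank-deficient
# weight matrices with Gaussian noise makes their product full rank.
import numpy as np

rng = np.random.default_rng(0)

# Three deliberately rank-deficient weight matrices (rank 2 each) of a
# toy 3-layer linear network; their product has rank at most 2.
W = [rng.normal(size=(10, 2)) @ rng.normal(size=(2, 10)) for _ in range(3)]
product = W[2] @ W[1] @ W[0]
print("rank of product without noise:", np.linalg.matrix_rank(product))

# Add small Gaussian noise to each weight matrix (loosely analogous to the
# architectural noise considered in the paper). Each perturbed square
# matrix is full rank almost surely, and so is their product.
noisy = [Wi + 0.1 * rng.normal(size=Wi.shape) for Wi in W]
noisy_product = noisy[2] @ noisy[1] @ noisy[0]
print("rank of product with noise:   ", np.linalg.matrix_rank(noisy_product))
```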
ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent
Two major momentum-based techniques that have achieved tremendous success in optimization are Polyak's heavy ball method and Nesterov's accelerated gradient. A crucial step in all momentum-based methods is the choice of the momentum parameter m, which is always suggested to be set to less than 1. Although the choice of m < 1 is justified only under very strong theoretical assumptions, it works well in practice even when the assumptions do not necessarily hold. In this paper, we propose a new momentum-based method, ADINE, which relaxes the constraint of m < 1 and allows the learning algorithm to use adaptive higher momentum. We motivate our hypothesis on m by experimentally verifying that a higher momentum (≥ 1) can help escape saddles much faster. Using this motivation, we propose our method ADINE, which helps weigh the previous updates more (by setting the momentum parameter > 1). We evaluate our proposed algorithm on deep neural networks and show that ADINE helps the learning algorithm to converge much faster without compromising on the generalization error.
Comment: 8 + 1 pages, 12 figures, accepted at CoDS-COMAD 2018
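The sketch below shows a heavy-ball-style SGD update in which the momentum parameter is allowed to exceed 1 and is chosen adaptively from the observed loss, in the spirit described above. The specific adaptation rule (1.05 while the mini-batch loss keeps decreasing, 0.9 otherwise), the learning rate, and the least-squares problem are illustrative assumptions, not the published ADINE rule.

```python
# Minimal sketch of heavy-ball SGD with a momentum parameter allowed to
# exceed 1, in the spirit of ADINE. The adaptation rule and all constants
# are illustrative assumptions, not the authors' method; the prints let
# the behaviour be inspected rather than claiming any particular result.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=100)

def loss_and_grad(x, idx):
    # Mini-batch least-squares loss and its gradient.
    Ai, bi = A[idx], b[idx]
    r = Ai @ x - bi
    return 0.5 * np.mean(r ** 2), Ai.T @ r / len(idx)

x = np.zeros(20)
v = np.zeros(20)
lr = 0.01
prev_loss = np.inf
for step in range(500):
    idx = rng.choice(len(b), size=20, replace=False)  # mini-batch
    loss, g = loss_and_grad(x, idx)
    # Assumed adaptive rule: use a higher momentum (> 1) while the
    # stochastic loss keeps improving, otherwise fall back to 0.9.
    m = 1.05 if loss < prev_loss else 0.9
    prev_loss = loss
    v = m * v - lr * g          # heavy-ball velocity update
    x = x + v
    if step % 100 == 0:
        print(f"step {step:3d}  minibatch loss {loss:.4f}  momentum {m}")
print("final distance to x_true:", np.linalg.norm(x - x_true))
```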
- …