
    Comparing Information-Theoretic Measures of Complexity in Boltzmann Machines

    In the past three decades, many theoretical measures of complexity have been proposed to help understand complex systems. In this work, for the first time, we place these measures on a level playing field, to explore the qualitative similarities and differences between them, and their shortcomings. Specifically, using the Boltzmann machine architecture (a fully connected recurrent neural network) with uniformly distributed weights as our model of study, we numerically measure how complexity changes as a function of network dynamics and network parameters. We apply an extension of one such information-theoretic measure of complexity to understand incremental Hebbian learning in Hopfield networks, a fully recurrent architecture model of autoassociative memory. In the course of Hebbian learning, the total information flow reflects a natural upward trend in complexity as the network attempts to learn more and more patterns. Comment: 16 pages, 7 figures; appears in Entropy, Special Issue "Information Geometry II".
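    As a rough illustration of the incremental Hebbian learning setting this abstract refers to, the sketch below stores ±1 patterns in a Hopfield network with the standard outer-product Hebbian rule and recalls one from a corrupted cue via asynchronous updates. The function names, the normalization by the number of patterns, and the network size are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def hebbian_store(patterns):
        """Store +1/-1 patterns with the outer-product Hebbian rule."""
        n_patterns, n_units = patterns.shape
        W = np.zeros((n_units, n_units))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)          # no self-connections
        return W / n_patterns

    def recall(W, state, sweeps=10):
        """Asynchronous updates; the state settles into a stored attractor."""
        state = state.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(len(state)):
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    # Example: store two random patterns, then recall one from a noisy cue.
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(2, 64))
    W = hebbian_store(patterns)
    cue = patterns[0].copy()
    cue[:8] *= -1                       # corrupt a few units
    print(np.array_equal(recall(W, cue), patterns[0]))
    ```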

    Accuracies, run times, and statistics for genetic algorithms with various mutation rates solving the N-queens problem.

    In the nqueensnomut, nqueensconstmut, and nqueenssigmoid tabs, the end heuristic (the number of queens attacking each other on the board, with 0 indicating a valid solution) found by that genetic algorithm is displayed along with the time the algorithm took to converge to that solution (in generations). In the stats tabs, t-tests are run to compare the accuracies and convergence generations among the genetic algorithms with different mutation rates.
    Key: mut rate: mutation rate; converg: convergence (generations); end h: end heuristic; nomut: no mutation; constmut: constant mutation.
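    A minimal sketch of the kind of end heuristic described above (counting pairs of queens that attack each other, with 0 meaning a solved board); the column-based board encoding and the function name are assumptions for illustration, not the authors' implementation.

    ```python
    def attacking_pairs(board):
        """Count pairs of queens attacking each other (same row or diagonal).

        board[c] is the row of the queen placed in column c, so column
        conflicts are impossible by construction; 0 means a valid solution.
        """
        count = 0
        for i in range(len(board)):
            for j in range(i + 1, len(board)):
                same_row = board[i] == board[j]
                same_diag = abs(board[i] - board[j]) == j - i
                if same_row or same_diag:
                    count += 1
        return count

    # Example: a known 8-queens solution scores 0; random boards usually do not.
    print(attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3]))   # 0
    ```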

    Accuracies, run times, and statistics for search algorithms minimizing the Rastrigin function.

    In the Nelder-Mead Results, Hill Climber Results, Random Search, and Genetic Algorithm tabs, the minimum value found by that method is displayed along with the time the algorithm took to arrive at that minimum. In the Statistics and Graph tabs, t-tests are run to compare the accuracies of the three search algorithms to the adaptive genetic algorithm.
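    For context, the sketch below shows the standard Rastrigin function (many local minima, global minimum 0 at the origin) and one Nelder-Mead run via SciPy. The A = 10 constant and the [-5.12, 5.12] search box are the usual textbook formulation; the starting points and settings used in the study are not given here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rastrigin(x, A=10.0):
        """Rastrigin function: global minimum 0 at x = 0, many local minima."""
        x = np.asarray(x, dtype=float)
        return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

    # Nelder-Mead from a random start in the usual [-5.12, 5.12] search box.
    rng = np.random.default_rng(1)
    x0 = rng.uniform(-5.12, 5.12, size=2)
    result = minimize(rastrigin, x0, method="Nelder-Mead")
    print(result.x, result.fun)   # often a local minimum, not the global one
    ```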

    A Closer Look at Memorization in Deep Networks

    We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that, with appropriately tuned explicit regularization (e.g., dropout), we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization. Comment: Appears in Proceedings of the 34th International Conference on Machine Learning (ICML 2017); Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, and David Krueger contributed equally to this work.