25,290 research outputs found

    Birth of a Learning Law

    Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-1309)

    Mathematical problems for complex networks

    Copyright © 2012 Zidong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article is made available through the Brunel Open Access Publishing Fund.

    Complex networks do exist in our lives. The brain is a neural network. The global economy is a network of national economies. Computer viruses routinely spread through the Internet. Food webs, ecosystems, and metabolic pathways can be represented by networks. Energy is distributed through transportation networks in living organisms, man-made infrastructures, and other physical systems. Dynamic behaviors of complex networks, such as stability, periodic oscillation, bifurcation, or even chaos, are ubiquitous in the real world and often reconfigurable. Networks have been studied in the context of dynamical systems in a range of disciplines. However, until recently there has been relatively little work that treats dynamics as a function of network structure, where the states of both the nodes and the edges can change, and the topology of the network itself often evolves in time. Some major problems have not been fully investigated, such as the behavior of stability, synchronization, and chaos control for complex networks, as well as their applications in, for example, communication and bioinformatics.
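
    The dynamic behaviors listed above, synchronization in particular, are easy to make concrete. Below is a minimal sketch (ours, not from the article) that integrates Kuramoto phase oscillators coupled over a random Erdos-Renyi graph; the network size, coupling strength, and step size are all illustrative assumptions.

    # A minimal sketch (not from the article) of one dynamic behavior on a
    # complex network: synchronization of Kuramoto phase oscillators on a
    # random Erdos-Renyi graph. All parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                        # number of nodes
    A = (rng.random((n, n)) < 0.1).astype(float)  # random adjacency matrix
    A = np.triu(A, 1)                             # keep upper triangle, no self-loops
    A = A + A.T                                   # make it symmetric (undirected)

    omega = rng.normal(0.0, 1.0, n)               # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)          # initial phases
    K, dt = 2.0, 0.01                             # coupling strength, time step

    for _ in range(5000):                         # Euler integration of dtheta/dt
        # coupling[i] = sum_j A[i, j] * sin(theta[j] - theta[i])
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)

    # Order parameter r in [0, 1]: r -> 1 indicates phase synchronization.
    r = abs(np.exp(1j * theta).mean())
    print(f"order parameter r = {r:.3f}")

    With strong enough coupling the order parameter r approaches 1 (the phases lock), while K near 0 leaves r close to the incoherent value of roughly 1/sqrt(n). This dependence of collective dynamics on network structure is exactly the kind of behavior the editorial flags as underexplored.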

    The Theory Behind Overfitting, Cross Validation, Regularization, Bagging, and Boosting: Tutorial

    Full text link
    In this tutorial paper, we first define mean squared error, variance, covariance, and bias for both random variables and classification/predictor models. Then, we formulate the true and generalization errors of the model for both training and validation/test instances, making use of Stein's Unbiased Risk Estimator (SURE). We define overfitting, underfitting, and generalization using the obtained true and generalization errors. We introduce cross validation and two well-known examples, namely K-fold and leave-one-out cross validation. We briefly introduce generalized cross validation and then move on to regularization, where we use SURE again. We work on both ℓ2 and ℓ1 norm regularizations. Then, we show that bootstrap aggregating (bagging) reduces the variance of estimation. Boosting, specifically AdaBoost, is introduced and explained both as an additive model and as a maximum margin model, i.e., a Support Vector Machine (SVM). An upper bound on the generalization error of boosting is also provided to show why boosting prevents overfitting. As examples of regularization, the theory of ridge and lasso regression, weight decay, noise injection to inputs/weights, and early stopping are explained. Random forests, dropout, the histogram of oriented gradients, and the single shot multi-box detector are explained as examples of bagging in machine learning and computer vision. Finally, boosting tree and SVM models are mentioned as examples of boosting.
    Comment: 23 pages, 9 figures
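
    Two of the tutorial's themes, K-fold cross validation and ℓ2 (ridge) regularization, fit together in a few lines. The following sketch is ours rather than the paper's code; the synthetic data, the choice K = 5, and the lambda grid are illustrative assumptions.

    # A minimal sketch (ours, not the paper's code): K-fold cross validation
    # used to select the ridge (ell_2) penalty on synthetic linear data.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 10
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + rng.normal(scale=0.5, size=n)   # noisy linear targets

    def ridge_fit(X, y, lam):
        # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def kfold_mse(X, y, lam, K=5):
        # Average validation MSE over K train/validation splits.
        folds = np.array_split(np.arange(len(y)), K)
        errs = []
        for k in range(K):
            val = folds[k]
            trn = np.hstack([folds[j] for j in range(K) if j != k])
            w = ridge_fit(X[trn], y[trn], lam)
            errs.append(np.mean((X[val] @ w - y[val]) ** 2))
        return np.mean(errs)

    for lam in [0.0, 0.1, 1.0, 10.0]:
        print(f"lambda = {lam:5.1f}   CV mse = {kfold_mse(X, y, lam):.4f}")

    The lambda with the lowest cross-validated MSE balances the bias introduced by the penalty against the variance of the unregularized fit, which is the bias-variance trade-off the tutorial formalizes.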