6 research outputs found

    On the Depth of Deep Neural Networks: A Theoretical View

    People believe that depth plays an important role in the success of deep neural networks (DNNs). However, as far as we know, this belief lacks solid theoretical justification. We investigate the role of depth from the perspective of the margin bound, in which the expected error is upper bounded by the empirical margin error plus a Rademacher Average (RA) based capacity term. First, we derive an upper bound for the RA of DNNs and show that it increases with depth, which indicates a negative impact of depth on test performance. Second, we show that deeper networks tend to have larger representation power (measured by Betti-numbers-based complexity) than shallower networks in the multi-class setting, and thus can achieve smaller empirical margin error; this implies a positive impact of depth. Together, these two results show that for DNNs with a restricted number of hidden units, increasing depth is not always beneficial, since there is a trade-off between its positive and negative impacts. These results inspire us to seek alternative ways to realize the positive impact of depth, e.g., imposing margin-based penalty terms on the cross-entropy loss so as to reduce the empirical margin error without increasing depth. Our experiments show that in this way we achieve significantly better test performance.
    Comment: AAAI 201
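    The proposed remedy, adding a margin-based penalty to the cross-entropy loss, can be sketched in a few lines. This is a minimal sketch: the hinge-style penalty form and the constants gamma and lam below are illustrative assumptions, since the abstract does not give the paper's exact penalty terms.

```python
import numpy as np

def margin_penalized_ce(logits, label, gamma=1.0, lam=0.1):
    """Cross-entropy plus a hinge-style margin penalty (illustrative sketch).

    The exact penalty form, gamma, and lam are assumptions, not the
    paper's values. logits: 1-D score vector; label: true class index.
    """
    # Numerically stable log-softmax cross-entropy.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]

    # Multi-class margin: true-class score minus the best rival score.
    rival = np.max(np.delete(logits, label))
    margin = logits[label] - rival

    # Penalize small margins, shrinking the empirical margin error
    # without adding depth (and hence without inflating the RA term).
    return ce + lam * max(0.0, gamma - margin)
```

    For example, margin_penalized_ce(np.array([2.0, 0.5, -1.0]), 0) returns the plain cross-entropy, since the margin 1.5 already exceeds gamma.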

    Statistical Learning Theory for Location Fingerprinting in Wireless LANs

    In this paper, techniques and algorithms developed in the framework of statistical learning theory are analyzed and applied to the problem of determining the location of a wireless device by measuring the signal strengths from a set of access points (location fingerprinting). Statistical learning theory provides a rich theoretical basis for developing models from a set of examples. Signal strength measurement is part of the normal operating mode of wireless equipment, in particular Wi-Fi, so no custom hardware is required. The proposed techniques, based on the Support Vector Machine paradigm, have been implemented and compared, on the same data set, with other approaches considered in the literature. Tests performed in a real-world environment show that the results are comparable, with the advantage of low algorithmic complexity in the normal operating phase. Moreover, the algorithm is particularly suitable for classification, where it outperforms the other techniques.
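    As a minimal sketch of the fingerprinting-as-classification idea, an SVM can be trained on signal-strength vectors labeled with locations. The data, kernel, and hyperparameters below are illustrative assumptions; the paper's data set and settings are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical fingerprints: each row holds RSSI readings (dBm) from a
# fixed set of three access points; labels identify the room/cell.
X_train = np.array([
    [-40, -70, -85],   # observed near location 0
    [-42, -68, -88],
    [-80, -45, -60],   # observed near location 1
    [-78, -47, -62],
])
y_train = np.array([0, 0, 1, 1])

# SVM classifier; the kernel and hyperparameters here are defaults,
# not the values used in the paper.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# In the operating phase, locating a device is a single classification
# of its current RSSI vector, which keeps run-time complexity low.
print(clf.predict([[-41, -69, -86]]))  # -> [0]
```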

    On Best Approximation by Ridge Functions

    We consider the best approximation of some function classes by the manifold M_n consisting of sums of n arbitrary ridge functions. It is proved that the deviation of the Sobolev class W_2^{r,d} from the manifold M_n in the space L_2 behaves asymptotically as n^{-r/(d-1)}.
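    Written out (a reconstruction; the standard definition of M_n as sums of ridge functions g_i(a_i · x) is assumed here, since the abstract does not spell it out):

```latex
M_n = \Bigl\{ \sum_{i=1}^{n} g_i(a_i \cdot x) \;:\; a_i \in \mathbb{R}^d,\ g_i\colon \mathbb{R} \to \mathbb{R} \Bigr\},
\qquad
\operatorname{dist}\bigl(W_2^{r,d}, M_n\bigr)_{L_2} \asymp n^{-r/(d-1)} .
```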

    A data-based approach to power capacity optimization


    Polynomial Bounds for VC Dimension of Sigmoidal Neural Networks

    We introduce a new method for proving explicit upper bounds on the VC dimension of general functional basis networks, and prove as an application, for the first time, that the VC dimension of analog neural networks with the sigmoid activation function σ(y) = 1/(1 + e^{-y}) is bounded by a quadratic polynomial in the number of programmable parameters.
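    In symbols, with W the number of programmable parameters and c an unspecified constant (the quadratic dependence is the abstract's claim; the constant is not stated):

```latex
\sigma(y) = \frac{1}{1 + e^{-y}},
\qquad
\operatorname{VCdim}(\mathcal{N}_\sigma) \le c\,W^2 .
```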