
    Approximation in $L^p(\mu)$ with deep ReLU neural networks

    We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation-theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$ for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
    Comment: Accepted for presentation at SampTA 2019
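
    As a minimal sketch (not from the paper) of the kind of object studied here: a fixed-depth ReLU network, here with a single hidden layer, that interpolates a target function, with its $L^2(\mu)$ error estimated by Monte Carlo sampling from a non-uniform probability measure. The target $x^2$, the breakpoints, and the Beta measure are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    # A fixed-depth (one hidden layer) ReLU network that realizes the
    # piecewise-linear interpolant of f(x) = x**2 at n+1 uniform
    # breakpoints on [0, 1].
    def relu_interpolant(x, n=8):
        knots = np.linspace(0.0, 1.0, n + 1)
        vals = knots**2
        # slope of the interpolant on each sub-interval
        slopes = np.diff(vals) / np.diff(knots)
        # network weights: vals[0] + sum_k c_k * relu(x - knots[k])
        coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
        return vals[0] + relu(x[:, None] - knots[:-1]) @ coeffs

    # Estimate the L^2(mu) error by Monte Carlo for mu = Beta(2, 5),
    # a non-uniform probability measure standing in for the data
    # distribution P; only samples from mu are needed.
    x = rng.beta(2.0, 5.0, size=100_000)
    err = np.sqrt(np.mean((relu_interpolant(x) - x**2) ** 2))
    print(f"estimated L^2(mu) approximation error: {err:.2e}")

    Because the error is an expectation with respect to $\mu$, the same estimate works for any finite Borel measure one can sample from, which is the point of the generalization described above.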

    Training Behavior of Sparse Neural Network Topologies

    Improvements in the performance of deep neural networks have often come through the design of larger and more complex networks. As a result, fast memory is a significant limiting factor in our ability to improve network performance. One approach to overcoming this limit is the design of sparse neural networks, which can be both very large and efficiently trained. In this paper we experiment with training sparse neural network topologies. We test pruning-based topologies, which are derived from an initially dense network whose connections are pruned, as well as RadiX-Nets, a class of network topologies with proven connectivity and sparsity properties. Results show that sparse networks obtain accuracies comparable to dense networks, but extreme levels of sparsity cause instability in training, which merits further study.
    Comment: 6 pages. Presented at the 2019 IEEE High Performance Extreme Computing (HPEC) Conference. Received "Best Paper" award.
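
    A minimal numpy sketch (not the paper's code) of one-shot magnitude pruning, one standard way to derive a pruning-based topology from an initially dense layer; the layer shape and the 90% sparsity level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def magnitude_prune(weights, sparsity):
        """Return a binary mask keeping the largest-magnitude entries.

        sparsity is the fraction of weights to remove (0.9 keeps 10%).
        """
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return np.ones_like(weights, dtype=bool)
        # k-th smallest absolute value becomes the pruning threshold
        threshold = np.partition(flat, k - 1)[k - 1]
        return np.abs(weights) > threshold

    # Prune a dense layer to 90% sparsity; during (re)training, the mask
    # is reapplied after every weight update so that pruned connections
    # stay at zero and the sparse topology is preserved.
    w = rng.standard_normal((256, 256))
    mask = magnitude_prune(w, 0.9)
    w_sparse = w * mask
    print(f"fraction of weights kept: {mask.mean():.3f}")

    RadiX-Nets, by contrast, fix the sparse topology before training rather than deriving it from a trained dense network; the mask-based training loop sketched in the comments applies to either case.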