5,100 research outputs found
Approximation in $L^p(\mu)$ with deep ReLU neural networks
We discuss the expressive power of neural networks which use the non-smooth
ReLU activation function by analyzing the
approximation theoretic properties of such networks. The existing results
mainly fall into two categories: approximation using ReLU networks with a fixed
depth, or using ReLU networks whose depth increases with the approximation
accuracy. After reviewing these findings, we show that the results concerning
networks with fixed depth (which up to now only consider approximation in
$L^p(\lambda)$ for the Lebesgue measure $\lambda$) can be generalized to
approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular,
the generalized results apply in the usual setting of statistical learning
theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the
probability measure $\mathbb{P}$ describing the distribution of the data.
Comment: Accepted for presentation at SampTA 2019
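For reference, "approximation in $L^p(\mu)$" carries its standard meaning here; a minimal formulation (the symbols $f$, $\Omega$, $\mathcal{N}$, and $R(\Phi)$ are generic and not fixed by the abstract itself) is the best achievable error

\[
  \inf_{\Phi \in \mathcal{N}} \big\| f - R(\Phi) \big\|_{L^p(\mu)}
  \;=\; \inf_{\Phi \in \mathcal{N}} \Big( \int_{\Omega} \big| f(x) - R(\Phi)(x) \big|^p \, d\mu(x) \Big)^{1/p},
\]

where $\mathcal{N}$ is a class of ReLU networks of fixed depth, $R(\Phi)$ is the function realized by the network $\Phi$, and $\mu$ is a finite Borel measure on the domain $\Omega \subset \mathbb{R}^d$; the statistical learning setting corresponds to $p = 2$ and $\mu = \mathbb{P}$, the distribution of the data.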
Training Behavior of Sparse Neural Network Topologies
Improvements in the performance of deep neural networks have often come
through the design of larger and more complex networks. As a result, fast
memory is a significant limiting factor in our ability to improve network
performance. One approach to overcoming this limit is the design of sparse
neural networks, which can be both very large and efficiently trained. In this
paper we experiment with training on sparse neural network topologies. We test
pruning-based topologies, which are derived from an initially dense network
whose connections are pruned, as well as RadiX-Nets, a class of network
topologies with proven connectivity and sparsity properties. Results show that
sparse networks obtain accuracies comparable to dense networks, but extreme
levels of sparsity cause instability in training, which merits further study.
Comment: 6 pages. Presented at the 2019 IEEE High Performance Extreme
Computing (HPEC) Conference. Received "Best Paper" award
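As a generic illustration of the pruning-based topologies mentioned above, the following is a minimal magnitude-pruning sketch in NumPy; it is not the authors' exact procedure, and the function name and the 90% sparsity level are assumptions made for the example:

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    # Zero out the smallest-magnitude entries so that roughly `sparsity`
    # (a fraction in [0, 1]) of the weights become zero. Illustrative only;
    # not the pruning schedule used in the paper.
    k = int(sparsity * weights.size)               # number of weights to remove
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)

# Example: derive a roughly 90% sparse topology from a random dense layer.
w = np.random.randn(256, 128)
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"fraction of zero weights: {np.mean(w_sparse == 0):.2f}")

A mask of this kind, applied to an initially dense layer, yields the kind of pruning-derived sparse topology that the abstract contrasts with RadiX-Nets.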