Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons

Abstract

This paper develops simple feed-forward neural networks that achieve the universal approximation property for all continuous functions with a fixed finite number of neurons. These neural networks are simple because they are designed with a simple, computable continuous activation function $\sigma$ leveraging a triangular-wave function and the softsign function. We prove that $\sigma$-activated networks with width $36d(2d+1)$ and depth $11$ can approximate any continuous function on a $d$-dimensional hypercube within an arbitrarily small error. Hence, for supervised learning and its related regression problems, the hypothesis space generated by these networks with a size not smaller than $36d(2d+1)\times 11$ is dense in the continuous function space $C([a,b]^d)$ and therefore dense in the Lebesgue spaces $L^p([a,b]^d)$ for $p\in [1,\infty)$. Furthermore, classification functions arising from image and signal classification are in the hypothesis space generated by $\sigma$-activated networks with width $36d(2d+1)$ and depth $12$, when there exist pairwise disjoint bounded closed subsets of $\mathbb{R}^d$ such that the samples of the same class are located in the same subset. Finally, we use numerical experiments to show that replacing the ReLU activation function with ours improves the experimental results.
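The abstract describes an activation $\sigma$ built from a triangular-wave function and the softsign function $x/(1+|x|)$. The sketch below is a hypothetical illustration of such a construction, assuming a period-2 triangular wave and a simple piecewise combination of the two pieces; the paper's exact definition of $\sigma$ may differ.

```python
import math

def tri(x):
    # Period-2 triangular wave: tri(0) = 0, tri(1) = 1, tri(2) = 0, ...
    return abs(x - 2 * math.floor((x + 1) / 2))

def softsign(x):
    # Softsign: a bounded, sigmoid-like function x / (1 + |x|).
    return x / (1 + abs(x))

def sigma(x):
    # Hypothetical combination: triangular wave on [0, inf),
    # softsign on (-inf, 0). Illustrative only; not the paper's
    # verbatim definition.
    return tri(x) if x >= 0 else softsign(x)
```

Such a $\sigma$ is continuous, bounded, and cheap to evaluate, which is the point of the "simple and computable" claim: the periodic piece supplies the oscillation that fixed-size networks exploit, while the softsign piece keeps the function well behaved on the negative axis.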
