This paper develops simple feed-forward neural networks that achieve the
universal approximation property for all continuous functions with a fixed
finite number of neurons. These neural networks are simple because they are
designed with a simple and computable continuous activation function σ
leveraging a triangular-wave function and the softsign function. We prove that
σ-activated networks with width 36d(2d+1) and depth 11 can
approximate any continuous function on a d-dimensional hypercube within an
arbitrarily small error. Hence, for supervised learning and its related
regression problems, the hypothesis space generated by these networks with a
size not smaller than 36d(2d+1)×11 is dense in the continuous function
space C([a,b]^d) and therefore dense in the Lebesgue spaces L^p([a,b]^d)
for p ∈ [1,∞). Furthermore, classification functions arising from image
and signal classification are in the hypothesis space generated by
σ-activated networks with width 36d(2d+1) and depth 12, when there
exist pairwise disjoint bounded closed subsets of R^d such that the
samples of the same class are located in the same subset. Finally, we use
numerical experiments to show that replacing the ReLU activation function
with ours improves the experimental results.
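
To give a concrete picture of the ingredients named above, the sketch below (in NumPy) shows one plausible way to combine a period-2 triangular wave with the softsign x/(1+|x|) into a single piecewise activation, together with a helper returning the width 36d(2d+1) appearing in the approximation result. The piecewise split at zero and all numerical details are illustrative assumptions; the paper's precise definition of σ may differ.

    import numpy as np

    def triangular_wave(x):
        # Period-2 triangular wave with values in [0, 1], peaking at odd integers.
        return np.abs(x - 2.0 * np.floor((x + 1.0) / 2.0))

    def softsign(x):
        # Softsign: x / (1 + |x|), a bounded sigmoidal function.
        return x / (1.0 + np.abs(x))

    def sigma(x):
        # Illustrative activation: triangular wave on [0, inf), softsign on (-inf, 0).
        # This piecewise combination is an assumption made for illustration only.
        x = np.asarray(x, dtype=float)
        return np.where(x >= 0.0, triangular_wave(x), softsign(x))

    def network_width(d):
        # Width 36d(2d+1) of the sigma-activated networks in the approximation theorem.
        return 36 * d * (2 * d + 1)

    # Example: evaluate the activation at a few points and report the width for d = 2.
    print(sigma(np.array([-2.0, -0.5, 0.0, 0.5, 1.0, 2.5])))
    print(network_width(2))  # 36 * 2 * 5 = 360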