It is known that superpositions of ridge functions (single hidden-layer feedforward neural networks) may give good approximations to certain kinds of multivariate functions. It remains unclear, however, how to effectively obtain such approximations. In this paper, we use ideas from harmonic analysis to attack this question. We introduce a special admissibility condition for neural activation functions. The new condition is not satisfied by the sigmoid activation in current use by the neural networks community; instead, our condition requires that the neural activation function be oscillatory. Using an admissible activation function, we construct linear transforms which represent quite general functions f as a superposition of ridge functions. We develop:

- a continuous transform which satisfies a Parseval-like relation
- a discrete transform which satisfies frame bounds

Both transforms represent f in a stable and effective way. The discrete transform is more challenging to construct and involves...
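To make the central object concrete, the following is a minimal numerical sketch of a finite superposition of ridge functions. The oscillatory activation shown (a Mexican-hat profile) is an illustrative stand-in for an admissible activation, not the paper's specific admissibility condition; all function and variable names are hypothetical.

```python
import numpy as np

def psi(t):
    # Oscillatory, wavelet-like activation (Mexican hat), in contrast to
    # the monotone sigmoid. Chosen here only for illustration.
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def ridge_superposition(x, directions, biases, coeffs):
    """Evaluate f(x) = sum_j coeffs[j] * psi(directions[j] . x - biases[j]).

    x          : (n, d) evaluation points
    directions : (m, d) ridge directions k_j
    biases     : (m,)   offsets b_j
    coeffs     : (m,)   superposition weights c_j
    """
    t = x @ directions.T - biases   # (n, m) ridge coordinates k_j . x - b_j
    return psi(t) @ coeffs          # (n,)  weighted sum over the m "neurons"

# A single ridge function is constant along hyperplanes orthogonal to its
# direction: translating x by a vector v with k . v = 0 leaves it unchanged.
rng = np.random.default_rng(0)
k = np.array([[1.0, 2.0]])
b, c = np.array([0.5]), np.array([1.0])
x = rng.standard_normal((5, 2))
v = np.array([2.0, -1.0])           # orthogonal to k
f1 = ridge_superposition(x, k, b, c)
f2 = ridge_superposition(x + v, k, b, c)
assert np.allclose(f1, f2)
```

The check at the end illustrates the defining property of a ridge function: it varies only along its direction k, so any translation orthogonal to k leaves its values unchanged.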