16 research outputs found

    On Function Recovery by Neural Networks Based on Orthogonal Expansions

    this paper is to discuss neural networks based on orthogonal expansions (OE-nets) of unknown functions. Due to the results of Donoho and Johnstone [7], one can look at S-nets and RBF-nets in a unifying manner, using orthogonal expansions as a mathematical tool. It is also clear that P-nets can be analyzed by orthogonal expansions, while W-nets fall directly into this class. In this context, it seems desirable to consider a net architecture that directly reflects orthogonal expansions. We should add that the net architecture proposed here is not intended to mimic any biological neural net. It can be treated as a tool for analyzing other networks that are closer to biological counterparts. On the other hand, we hope that the proposed net structure can be hardwired in the future, providing a useful tool for engineering applications. The second reason for constructing OE-nets lies in the well-known difficulties of learning S- and RBF-nets, which manifest in a training process that usually needs hundreds of epochs. In contrast, in learning OE-nets one can use classical results from the theory of least squares. We put emphasis on a fast and reliable learning process, since in engineering applications the only reason for constructing specialized net hardware for function approximation is when an unknown function changes from time to time and one needs a fast update of its current approximation. The paper is organized as follows. In the next section we formulate a number of questions and requirements that should influence our decision concerning a proper choice of a functional net. Then, in Section 3 the problem of constructing a net based on orthogonal expansions is stated and the net architecture is discussed. In Section 4 we consider the learning...
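    The abstract's key claim is that, unlike iterative training of S- or RBF-nets, an OE-net's coefficients can be learned in one shot by classical least squares. A minimal sketch of that idea follows, assuming a Legendre-polynomial basis on [-1, 1] and the names `design_matrix`, `fit_oe_net`, and `predict` (all hypothetical; the paper's actual basis and architecture may differ):

    ```python
    import numpy as np

    def design_matrix(x, n_terms):
        # Legendre polynomials as the fixed orthogonal basis (an assumption;
        # the paper may use another orthogonal system).
        return np.polynomial.legendre.legvander(x, n_terms - 1)

    def fit_oe_net(x, y, n_terms):
        # One-shot least-squares estimate of the expansion coefficients --
        # no iterative epochs, in contrast to S- or RBF-net training.
        A = design_matrix(x, n_terms)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def predict(coef, x):
        # Net output: linear combination of the basis functions.
        return design_matrix(x, len(coef)) @ coef

    # Recover sin(pi*x) from noisy samples.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 200)
    y = np.sin(np.pi * x) + rng.normal(0.0, 0.05, x.size)

    coef = fit_oe_net(x, y, n_terms=8)
    x_test = np.linspace(-1.0, 1.0, 101)
    err = np.max(np.abs(predict(coef, x_test) - np.sin(np.pi * x_test)))
    ```

    The fast-update property mentioned in the abstract follows from the same structure: when the unknown function changes, refitting is a single linear solve rather than a retraining run.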

    Context-Dependent Neural Nets—Structures and Learning
