
    On the approximation by weighted ridge functions

    We characterize the best $L_{2}$ approximation to a multivariate function by linear combinations of ridge functions multiplied by some fixed weight functions. In the special case when the weight functions are constants, we give explicit formulas for both the best approximation and the approximation error.
    Comment: 8 pages
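    The abstract does not reproduce the explicit formulas, but the underlying computation is easy to illustrate numerically. The sketch below (with arbitrary ridge directions, a polynomial ridge basis, and constant weights, none of which come from the paper) finds the discrete $L_{2}$-best linear combination of ridge functions on a sampled square by ordinary least squares.

        import numpy as np

        # Hypothetical illustration, not the paper's method: discrete L2-best
        # approximation of a bivariate function by a linear combination of
        # ridge functions g_j(a_j . x) with constant weights.
        xs, ys = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
        pts = np.column_stack([xs.ravel(), ys.ravel()])
        f = np.exp(pts[:, 0] * pts[:, 1])                 # arbitrary target

        directions = [np.array([1.0, 0.0]),               # illustrative a_j
                      np.array([0.0, 1.0]),
                      np.array([1.0, 1.0]) / np.sqrt(2)]
        deg = 5                                           # ridge polynomial degree

        # Columns are monomial ridge functions t -> (a_j . x)^m.
        A = np.column_stack([(pts @ a) ** m
                             for a in directions for m in range(deg + 1)])
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)      # discrete L2 fit
        rms = np.sqrt(np.mean((A @ coef - f) ** 2))
        print(f"RMS error with {A.shape[1]} ridge terms: {rms:.2e}")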

    A note on the representation of continuous functions by linear superpositions

    We consider the problem of representing real continuous functions by linear superpositions $\sum_{i=1}^{k} g_{i} \circ p_{i}$ with continuous $g_{i}$ and $p_{i}$. This problem has been considered by many authors, but complete, and at the same time explicit and practical, solutions were given only for the case $k=2$. For $k>2$, a rather practical sufficient condition for the representation can be found in Sternfeld [17] and Sproston, Strauss [16]. In this short note, we give a necessary condition of this kind for the representability of continuous functions.
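    For intuition, the standard $k=2$ example is the identity $xy = \frac{(x+y)^{2}}{4} - \frac{(x-y)^{2}}{4}$, i.e. $g_{1}\circ p_{1} + g_{2}\circ p_{2}$ with $p_{1}=x+y$, $p_{2}=x-y$, $g_{1}(t)=t^{2}/4$, $g_{2}(t)=-t^{2}/4$. A quick numerical check (purely illustrative):

        import numpy as np

        # Classical k = 2 superposition: xy = g1(p1) + g2(p2)
        # with p1 = x + y, p2 = x - y, g1(t) = t**2/4, g2(t) = -t**2/4.
        rng = np.random.default_rng(0)
        x, y = rng.normal(size=1000), rng.normal(size=1000)
        assert np.allclose(x * y, (x + y) ** 2 / 4 - (x - y) ** 2 / 4)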

    Inverse scattering problem on the half-axis for a first order system of ordinary differential equations

    In this article, the inverse scattering problem (ISP) of recovering the matrix coefficient of a first-order system of ordinary differential equations on the half-axis from its scattering matrix is considered. In the case of a triangular structure of the matrix coefficient, the system has a Volterra-type integral transformation operator at infinity. This type of transformation operator allows one to determine the scattering matrix on the half-axis via the matrix Riemann-Hilbert factorization in the case where the contour is the real axis, the normalization is canonical, and all the partial indices are zero. The ISP on the half-axis is solved by reducing it to the ISP on the whole axis for the considered system, with the coefficients extended to the whole axis by zero.

    On the proximinality of ridge functions

    Using two results of Garkavi, Medvedev and Khavinson, we give sufficient conditions for the proximinality of sums of two ridge functions, with bounded and continuous summands, in the spaces of bounded and continuous multivariate functions, respectively. In the first case, we give an example showing that the corresponding sufficient condition cannot be weakened for some subsets of $\mathbb{R}^{n}$. In the second case, we also obtain a necessary condition for proximinality. All the results are illustrated with numerous examples. The results, examples, and ensuing discussion naturally lead us to a conjecture on the proximinality of the considered class of ridge functions. The main purpose of the paper is to draw readers' attention to this conjecture.
    Comment: 8 pages

    On the representation by linear superpositions

    In a number of papers, Y. Sternfeld investigated problems of representing continuous and bounded functions by linear superpositions. In particular, he proved that if such a representation holds for continuous functions, then it holds for bounded functions. We consider the same problem without involving any topology and establish a rather practical necessary and sufficient condition for the representability of an arbitrary function by linear superpositions. In particular, we show that if some representation by linear superpositions holds for continuous functions, then it holds for all functions. This leads us to an analogue of the well-known Kolmogorov superposition theorem for multivariate functions on the $d$-dimensional unit cube.
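    For reference, the classical Kolmogorov superposition theorem mentioned above states that every continuous function $f$ on $[0,1]^{d}$ can be written as
    $$ f(x_{1},\dots,x_{d}) = \sum_{q=0}^{2d} g_{q}\Bigl(\sum_{p=1}^{d} \varphi_{pq}(x_{p})\Bigr), $$
    where the continuous inner functions $\varphi_{pq}$ are independent of $f$ and only the continuous outer functions $g_{q}$ depend on $f$.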

    A nonlinear evolution equation in 2 + 1 dimensions related to a nonstationary Dirac-type system

    In this paper, the inverse scattering problem for the nonstationary Dirac-type system on the whole plane is considered. A nonlinear evolution system of equations related to the nonstationary Dirac-type system is introduced, and the solvability of this system by the inverse scattering transform (IST) method is studied.

    Normal Extensions of a Singular Multipoint Differential Operator of First Order

    In this work, firstly, in the direct sum of Hilbert spaces of vector-functions $L^{2}(H,(-\infty,a_{1})) \oplus L^{2}(H,(a_{2},b_{2})) \oplus L^{2}(H,(a_{3},+\infty))$, $-\infty < a_{1} < a_{2} < b_{2} < a_{3} < +\infty$, all normal extensions of the minimal operator generated by the linear singular multipoint formally normal differential expression $l=(l_{1},l_{2},l_{3})$, $l_{k} = \frac{d}{dt}+A_{k}$, with self-adjoint operator coefficients $A_{k}$, $k=1,2,3$, in a Hilbert space $H$, are described in terms of boundary values. Then the structure of the spectrum of these extensions is investigated.
    Comment: 9 pages

    On the approximation by single hidden layer feedforward neural networks with fixed weights

    Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property provided that the approximated functions are univariate. But this result places no restriction on the number of neurons in the hidden layer: the larger this number, the more likely the network is to give precise results. In this note, we constructively prove that SLFNs with the fixed weight $1$ and two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The applicability of this result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all continuous multivariate functions.
    Comment: 17 pages, 5 figures, submitted; for the associated SageMath worksheet, see https://sites.google.com/site/njguliyev/papers/monic-sigmoida
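    The specially constructed activation function is the heart of the paper and is not reproduced in the abstract; the sketch below only shows the architecture in question, a network $N(x) = c_{0} + c_{1}\sigma(x-\theta_{1}) + c_{2}\sigma(x-\theta_{2})$ with input weight fixed to $1$, using an ordinary logistic sigmoid and arbitrary shifts as stand-ins.

        import numpy as np

        # Architecture sketch only: fixed input weight 1, two hidden neurons.
        # The logistic sigmoid and the shifts are placeholders; the paper
        # constructs a special activation to make two neurons suffice.
        sigma = lambda t: 1.0 / (1.0 + np.exp(-t))

        x = np.linspace(0.0, 1.0, 200)
        target = np.sin(2 * np.pi * x)                    # arbitrary target
        theta1, theta2 = 0.3, 0.7                         # illustrative shifts

        A = np.column_stack([np.ones_like(x),
                             sigma(x - theta1), sigma(x - theta2)])
        c, *_ = np.linalg.lstsq(A, target, rcond=None)    # fit c0, c1, c2
        print("uniform deviation:", np.max(np.abs(A @ c - target)))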

    A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

    The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in the hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hidden layer. We algorithmically construct a smooth, sigmoidal, almost monotone activation function $\sigma$ providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of $\sigma$ at any reasonable point of the real axis.
    Comment: 12 pages, 1 figure; to be published in Neural Computation; for the associated SageMath worksheet, see http://sites.google.com/site/njguliyev/papers/sigmoida

    On the Diliberto-Straus algorithm for the uniform approximation by a sum of two algebras

    In 1951, Diliberto and Straus proposed a levelling algorithm for the uniform approximation of a bivariate function, defined on a rectangle with sides parallel to the coordinate axes, by sums of univariate functions. In the current paper, we consider the problem of approximating a continuous function defined on a compact Hausdorff space by sums from two closed algebras containing the constants. Under reasonable assumptions, we show the convergence of the Diliberto-Straus algorithm. For approximation by sums of univariate functions, it follows that Diliberto and Straus's original result holds for a large class of compact convex sets.
    Comment: 16 pages
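    The original 1951 levelling iteration is simple to state: alternately subtract from the residual the midrange over $y$ (a function of $x$) and the midrange over $x$ (a function of $y$); the subtracted pieces accumulate into the approximating sum $u(x)+v(y)$, and the uniform norm of the residual decreases towards the best-approximation error. A minimal sketch on a discretized rectangle (the paper's compact-Hausdorff, two-algebra setting is not reproduced here):

        import numpy as np

        # Classical Diliberto-Straus levelling on a grid: approximate f(x, y)
        # uniformly by u(x) + v(y). F holds the running residual f - u - v.
        x = np.linspace(0.0, 1.0, 60)
        y = np.linspace(0.0, 1.0, 60)
        F = np.exp(np.outer(x, y))                        # arbitrary f(x, y)

        for _ in range(50):
            # subtract the midrange of each row (updates u(x))
            F -= (F.max(axis=1, keepdims=True) + F.min(axis=1, keepdims=True)) / 2
            # subtract the midrange of each column (updates v(y))
            F -= (F.max(axis=0, keepdims=True) + F.min(axis=0, keepdims=True)) / 2

        print("uniform error after levelling:", np.abs(F).max())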