7 research outputs found

    A Comparison between Fixed-Basis and Variable-Basis Schemes for Function Approximation and Functional Optimization

    Fixed-basis and variable-basis approximation schemes are compared for the problems of function approximation and functional optimization (also known as infinite programming). Classes of problems are investigated for which variable-basis schemes with sigmoidal computational units perform better than fixed-basis ones, in terms of the minimum number of computational units needed to achieve a desired error in function approximation or approximate optimization. Previously known bounds on the accuracy are extended, with better rates, to families of d-variable functions whose actual dependence is on a subset of d′ < d variables, where the indices of these d′ variables are not known a priori.
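
    As a toy illustration of the fixed-basis versus variable-basis distinction, the Python sketch below fits the same one-dimensional target first with a least-squares combination of fixed monomial basis functions and then with an equal number of sigmoidal units whose inner weights and biases are also optimized. The target function, unit counts, and optimizer are illustrative choices, not taken from the paper.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 200)
        f = np.tanh(8.0 * x)  # target with a sharp transition (illustrative choice)

        def fixed_basis_error(n):
            """Best linear combination of n fixed monomials 1, x, ..., x^(n-1)."""
            Phi = np.vander(x, n, increasing=True)
            coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
            return np.max(np.abs(Phi @ coef - f))

        def variable_basis_error(n):
            """Sum of n sigmoidal units a_i*tanh(w_i*x + b_i); inner parameters trained too."""
            def model(theta):
                a, w, b = np.split(theta, 3)
                return np.tanh(np.outer(x, w) + b) @ a
            theta0 = rng.normal(scale=1.0, size=3 * n)
            res = minimize(lambda th: np.mean((model(th) - f) ** 2), theta0, method="L-BFGS-B")
            return np.max(np.abs(model(res.x) - f))

        for n in (3, 5, 9):
            print(f"{n} units: fixed-basis error {fixed_basis_error(n):.3f}, "
                  f"variable-basis error {variable_basis_error(n):.3f}")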

    High-Dimensional Function Approximation with Neural Networks for Large Volumes of Data

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data on which the approximated function is defined reside on a low-dimensional manifold, and in principle approximating the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural-network approximation of the function over this space, provides a more precise approximation than approximating the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results by considering the practical neural-network approximation of a set of functions defined on high-dimensional data, including real-world data as well.
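
    A minimal sketch of the workflow described above, assuming a Swiss-roll manifold embedded in 50 dimensions, a PCA projection, and a small scikit-learn MLP as stand-ins for the paper's data, projection method, and networks: the projection is fitted on a sparse, uniformly drawn sample only, and the test error of a network trained on the projected features is compared with that of a network trained in the ambient space.

        import numpy as np
        from sklearn.datasets import make_swiss_roll
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # A 3-D manifold (Swiss roll) embedded in a 50-D ambient space stands in for
        # high-dimensional data residing on a low-dimensional manifold.
        X3, t = make_swiss_roll(n_samples=10000, noise=0.05, random_state=0)
        Q, _ = np.linalg.qr(rng.normal(size=(50, 3)))   # random orthonormal embedding
        X = X3 @ Q.T                                    # 50-D ambient coordinates
        y = np.sin(t)                                   # function defined on the manifold

        # Build the projection from a sparse, uniformly drawn sample only.
        sparse_idx = rng.choice(len(X), size=500, replace=False)
        projector = PCA(n_components=3).fit(X[sparse_idx])
        Z = projector.transform(X)                      # all data mapped to the projection space

        def test_mse(features):
            Xtr, Xte, ytr, yte = train_test_split(features, y, random_state=0)
            net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
            return np.mean((net.fit(Xtr, ytr).predict(Xte) - yte) ** 2)

        print("trained in the 50-D ambient space:", test_mse(X))
        print("trained on the projected features:", test_mse(Z))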

    Physics-informed machine learning in asymptotic homogenization of elliptic equations

    We apply physics-informed neural networks (PINNs) to first-order, two-scale, periodic asymptotic homogenization of the property tensor in a generic elliptic equation. The lack of differentiability of property tensors at the sharp phase interfaces is circumvented by a diffuse-interface approach. Periodic boundary conditions are incorporated strictly through the introduction of an input-transfer layer (Fourier feature mapping), which takes the sine and cosine of the inner product of position and the reciprocal lattice vectors. This, together with the absence of Dirichlet boundary conditions, results in a lossless application of the boundary conditions. Consequently, the sole contributors to the loss are the locally scaled differential-equation residuals. We use crystalline arrangements defined via Bravais lattices to demonstrate the versatility of the reciprocal-lattice-based formulation. We also show that considering integer multiples of the reciprocal basis in the Fourier mapping leads to improved convergence for high-frequency functions. We consider applications in one, two, and three dimensions, including periodic composites composed of embeddings of monodisperse inclusions in the form of disks/spheres, as well as stochastic monodisperse disk arrangements.
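
    As a minimal sketch of the input-transfer layer described above (in NumPy rather than the authors' framework), the code below builds the reciprocal lattice vectors of a 2-D Bravais lattice, maps positions to sines and cosines of their inner products with integer multiples of those vectors, and checks that the resulting features are invariant under lattice translations, which is what makes any network fed with them periodic by construction. The square lattice and the number of multiples are assumptions for illustration.

        import numpy as np

        def reciprocal_basis(a1, a2):
            """Reciprocal lattice vectors b_j with a_i . b_j = 2*pi*delta_ij (2-D case)."""
            A = np.column_stack([a1, a2])
            return 2.0 * np.pi * np.linalg.inv(A).T     # columns are b1, b2

        def fourier_features(x, B, n_multiples=2):
            """x: (N, 2) positions; B: reciprocal basis as columns; uses integer multiples."""
            waves = [m1 * B[:, 0] + m2 * B[:, 1]
                     for m1 in range(-n_multiples, n_multiples + 1)
                     for m2 in range(-n_multiples, n_multiples + 1)
                     if (m1, m2) != (0, 0)]
            phase = x @ np.stack(waves).T               # (N, n_waves) inner products
            return np.concatenate([np.sin(phase), np.cos(phase)], axis=1)

        # Periodicity check: shifting positions by a lattice vector leaves the
        # features, and hence any network fed with them, unchanged.
        a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        B = reciprocal_basis(a1, a2)
        x = np.random.default_rng(0).random((5, 2))
        assert np.allclose(fourier_features(x, B), fourier_features(x + a1, B))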

    Some comparisons of complexity in dictionary-based and linear computational models

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best-approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and so-called variable-basis types, which include neural networks, radial-basis, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
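
    To make the flavour of such complexity comparisons concrete, the short Python sketch below tabulates two textbook-style rates: a dimension-independent O(n^(-1/2)) upper bound of the kind available for variable-basis approximation of suitable function classes, against a generic linear-approximation rate of order n^(-s/d) on a smoothness class of order s. The exponent s, the dimension d, and the omitted constants are placeholder assumptions, not the bounds proved in the paper.

        # d: number of variables; s: assumed smoothness order of the function class.
        d, s = 20, 2
        for n in (10, 100, 1000, 10000):             # number of computational units
            variable_basis = n ** -0.5               # dimension-independent rate
            fixed_linear = n ** (-s / d)             # rate degrades with dimension d
            print(f"n={n:>6}   variable-basis ~ {variable_basis:.4f}   linear ~ {fixed_linear:.4f}")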