527 research outputs found

    A potential theoretic minimax problem on the torus

    We investigate an extension of an equilibrium-type result, conjectured by Ambrus, Ball and Erd\'elyi, and proved recently by Hardin, Kendall and Saff. These results were formulated on the torus, hence we also work on the torus, but one of the main motivations for our extension comes from an analogous setup on the unit interval, investigated earlier by Fenton. Basically, the problem is a minimax one, i.e. to minimize the maximum of a function $F$, defined as the sum of arbitrary translates of certain fixed "kernel functions", the minimization being understood with respect to the translates. If these kernels are assumed to be concave, having certain singularities or cusps at zero, then translates by $y_j$ will have singularities at $y_j$ (while in between these nodes the sum function still behaves relatively regularly). So one can consider the maxima $m_i$ on the subintervals between the nodes $y_j$, and look for the minimization of $\max F = \max_i m_i$. Here also the dual question of maximizing $\min_i m_i$ arises. This type of minimax problem was treated under some additional assumptions on the kernels. Also, the problem is normalized so that $y_0=0$. In particular, Hardin, Kendall and Saff assumed that we have one single kernel $K$ on the torus or circle, and $F=\sum_{j=0}^n K(\cdot-y_j)= K + \sum_{j=1}^n K(\cdot-y_j)$. Fenton considered situations on the interval with two fixed kernels $J$ and $K$, also satisfying additional assumptions, and $F= J + \sum_{j=1}^n K(\cdot-y_j)$. Here we consider the situation (on the circle) when \emph{all the kernel functions can be different}, and $F=\sum_{j=0}^n K_j(\cdot-y_j) = K_0 + \sum_{j=1}^n K_j(\cdot-y_j)$. Also an emphasis is put on relaxing all other technical assumptions and giving alternative, rather minimal variants of the set of conditions on the kernel functions.
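    As an illustration of the setup only (not taken from the paper), the following minimal Python sketch minimizes $\max_i m_i$ numerically for a sum of translated concave log-type kernels on the torus, with the normalization $y_0=0$; the particular kernels, the weights and the use of a Nelder-Mead search are assumptions made for this example.

        import numpy as np
        from scipy.optimize import minimize

        def log_kernel(t, c=1.0):
            # concave "log-sine" kernel on the torus R/Z with a singularity at 0 (an assumed example kernel)
            return c * np.log(np.abs(2.0 * np.sin(np.pi * np.mod(t, 1.0))) + 1e-300)

        def interval_maxima(nodes, weights, grid=4000):
            # maxima m_i of F(x) = sum_j K_j(x - y_j) on the arcs between consecutive nodes
            nodes = np.mod(nodes, 1.0)
            x = np.linspace(0.0, 1.0, grid, endpoint=False)
            vals = sum(log_kernel(x - y, c) for y, c in zip(nodes, weights))
            ys = np.sort(nodes)
            arc = np.searchsorted(ys, x, side="right") % len(ys)   # arc label of each grid point
            return np.array([vals[arc == i].max() if np.any(arc == i) else -np.inf
                             for i in range(len(ys))])

        def max_F(free_nodes, weights):
            nodes = np.concatenate(([0.0], free_nodes))             # normalization y_0 = 0
            return interval_maxima(nodes, weights).max()

        weights = [1.0, 1.0, 2.0]          # three different kernels K_0, K_1, K_2 (different weights)
        res = minimize(max_F, x0=[0.3, 0.6], args=(weights,), method="Nelder-Mead")
        print("nodes:", np.mod(res.x, 1.0), "min of max F:", res.fun)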

    An analysis of training and generalization errors in shallow and deep networks

    This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that, in order to take full advantage of the compositional structure, the minimal expected value of the square loss is inappropriate for measuring the generalization error in the approximation of compositional functions. Instead, we measure the generalization error in the sense of maximum loss, and sometimes, as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error as well as a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error and estimate how much error to expect at which test data.
    Comment: 21 pages; accepted for publication in Neural Networks
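    A hedged illustration of the distinction drawn here (not the paper's construction): the sketch below fits a shallow model with periodic (Fourier) features by least squares so that the training data are fit essentially exactly, and then reports the generalization error both as the mean square loss and as the maximum (sup-norm) loss over a dense test grid; the target function, sample size and number of frequencies are assumptions for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        target = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)      # toy periodic target (assumed)

        def fourier_features(x, n_freq=12):
            # design matrix [1, cos(kx), sin(kx)] for k = 1..n_freq ("periodic activations")
            cols = [np.ones_like(x)]
            for k in range(1, n_freq + 1):
                cols += [np.cos(k * x), np.sin(k * x)]
            return np.column_stack(cols)

        # training data: as many parameters as samples, so the fit interpolates (near-zero training error)
        x_train = rng.uniform(0, 2 * np.pi, 25)
        y_train = target(x_train)
        coef, *_ = np.linalg.lstsq(fourier_features(x_train), y_train, rcond=None)

        # generalization error on a dense grid: expected square loss vs. maximum (sup-norm) loss
        x_test = np.linspace(0, 2 * np.pi, 2000)
        err = fourier_features(x_test) @ coef - target(x_test)
        print("training max error :", np.abs(fourier_features(x_train) @ coef - y_train).max())
        print("test mean sq. loss :", np.mean(err ** 2))
        print("test max (sup) loss:", np.abs(err).max())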

    Numerical Methods -- Lecture Notes 2014-2015

    In these notes some basic numerical methods will be described. The following topics are addressed: 1. Nonlinear Equations, 2. Linear Systems, 3. Polynomial Interpolation and Approximation, 4. Trigonometric Interpolation with DFT and FFT, 5. Numerical Integration, 6. Initial Value Problems for ODEs, 7. Stiff Initial Value Problems, 8. Two-Point Boundary Value Problems
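    As a small illustration of the first topic listed above (nonlinear equations), here is a minimal Newton iteration in Python; the test equation x^2 - 2 = 0, the tolerance and the iteration cap are chosen for this example and are not taken from the lecture notes.

        # Newton's method for a single nonlinear equation f(x) = 0 (illustrative sketch)
        def newton(f, fprime, x0, tol=1e-12, max_iter=50):
            # iterate x_{k+1} = x_k - f(x_k) / f'(x_k) until the step is below tol
            x = x0
            for _ in range(max_iter):
                step = f(x) / fprime(x)
                x -= step
                if abs(step) < tol:
                    return x
            raise RuntimeError("Newton's method did not converge")

        root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
        print(root)   # approximately sqrt(2) = 1.41421356...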