A potential theoretic minimax problem on the torus
We investigate an extension of an equilibrium-type result, conjectured by
Ambrus, Ball and Erd\'elyi, and proved recently by Hardin, Kendall and Saff.
These results were formulated on the torus, hence we also work on the torus,
but one of the main motivations for our extension comes from an analogous setup
on the unit interval, investigated earlier by Fenton.
Basically, the problem is a minimax one, i.e. to minimize the maximum of a
function $F$, defined as the sum of arbitrary translates of certain fixed
"kernel functions" $K_j$, minimization understood with respect to the
translates. If these kernels are assumed to be concave, having certain
singularities or cusps at zero, then the translates by $t_j$ will have
singularities at the nodes $t_j$ (while in between these nodes the sum function
still behaves relatively regularly). So one can consider the maxima $m_j$ on
each of the subintervals between the nodes $t_j$, and look for the minimization
of $\max_j m_j$. Here also the dual question of maximization of $\min_j m_j$
arises. This type of minimax problem has been treated under some additional
assumptions on the kernels, and under a suitable normalization of the problem.
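Schematically, and with notation chosen here purely for illustration (not
necessarily the paper's own symbols), the setup reads
\[
  F(t_1,\dots,t_n;x) \;=\; \sum_{j=1}^{n} K_j(x - t_j), \qquad
  m_j \;:=\; \sup_{x \in I_j} F(t_1,\dots,t_n;x),
\]
where $I_j$ is the arc between the consecutive nodes $t_j$ and $t_{j+1}$; the
minimax problem asks for $\inf_{t_1,\dots,t_n}\max_j m_j$, while the dual
question asks for $\sup_{t_1,\dots,t_n}\min_j m_j$.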
In particular, Hardin, Kendall and Saff assumed that we have one single
kernel $K$ on the torus or circle. Fenton considered situations on the interval
with two fixed kernels $J$ and $K$, also satisfying additional assumptions.
Here we consider the situation (on the circle) when \emph{all the kernel
functions can be different}. Also an emphasis is put on relaxing all other
technical assumptions and giving alternative, rather minimal variants of the
set of conditions on the kernel functions.
An analysis of training and generalization errors in shallow and deep networks
This paper is motivated by an open problem around deep networks, namely, the
apparent absence of over-fitting despite large over-parametrization which
allows perfect fitting of the training data. In this paper, we analyze this
phenomenon in the case of regression problems when each unit evaluates a
periodic activation function. We argue that the minimal expected value of the
square loss is inappropriate for measuring the generalization error in the
approximation of compositional functions if one wishes to take full advantage
of the compositional structure. Instead, we measure the generalization error in the
sense of maximum loss, and sometimes, as a pointwise error. We give estimates
on exactly how many parameters ensure both zero training error as well as a
good generalization error. We prove that a solution of a regularization problem
is guaranteed to yield a good training error as well as a good generalization
error and estimate how much error to expect at which test data.
Comment: 21 pages; Accepted for publication in Neural Networks
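As a rough illustration of the contrast drawn above between the expected square
loss and the maximum (sup-norm) loss, here is a minimal sketch with a toy
one-hidden-layer network using a periodic activation and synthetic data; none
of the names, data, or parameter choices below come from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d regression target on [0, 2*pi); both target and data are made up.
def target(x):
    return np.sin(3 * x) + 0.5 * np.cos(x)

x_train = rng.uniform(0.0, 2.0 * np.pi, size=40)
y_train = target(x_train)

# Shallow network: one hidden layer of cosine (periodic) units with random
# input weights and phases; output weights fitted by least squares. With as
# many units as training points the training data are fitted essentially
# perfectly, i.e. the training error is (numerically) zero.
n_units = 40
w = rng.normal(size=n_units)
b = rng.uniform(0.0, 2.0 * np.pi, size=n_units)

def features(x):
    return np.cos(np.outer(x, w) + b)

coef, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

def predict(x):
    return features(x) @ coef

# Two ways of reporting the generalization error on a dense test grid.
x_test = np.linspace(0.0, 2.0 * np.pi, 2000)
err = predict(x_test) - target(x_test)
print("training mean squared loss:", np.mean((predict(x_train) - y_train) ** 2))
print("test mean squared loss    :", np.mean(err ** 2))    # expected square loss
print("test maximum loss         :", np.max(np.abs(err)))  # sup-norm / pointwise error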
Numerical Methods -- Lecture Notes 2014-2015
In these notes some basic numerical methods will be described. The
following topics are addressed: 1. Nonlinear Equations, 2. Linear
Systems, 3. Polynomial Interpolation and Approximation, 4. Trigonometric
Interpolation with DFT and FFT, 5. Numerical Integration, 6. Initial
Value Problems for ODEs, 7. Stiff Initial Value Problems, 8. Two-Point
Boundary Value Problems.
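For instance, the first topic (nonlinear equations) is typically introduced in
such notes via Newton's method; a minimal sketch follows, where the test
equation is chosen here for illustration and is not taken from the notes.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k) for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: f(x) = x**2 - 2, whose positive root is sqrt(2).
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))  # 1.4142135623730951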