377 research outputs found
Evaluating matrix functions for exponential integrators via Carathéodory-Fejér approximation and contour integrals
Among the fastest methods for solving stiff PDEs are exponential integrators, which require the evaluation of f(A), where A is a negative semidefinite matrix and f is the exponential function or one of the related ``phi functions'' such as phi_1(z) = (e^z - 1)/z. Building on previous work by Trefethen and Gutknecht, Gonchar and Rakhmanov, and Lu, we propose two methods for the fast evaluation of f(A) that are especially useful when shifted systems (A - zI)x = b can be solved efficiently, e.g. by a sparse direct solver. The first method is based on best rational approximations to f on the negative real axis computed via the Carathéodory-Fejér procedure, and we conjecture that the accuracy scales as O(9.28903^{-2n}), where n is the number of complex matrix solves. In particular, three matrix solves suffice to evaluate f(A) to approximately six digits of accuracy. The second method is an application of the trapezoid rule on a Talbot-type contour.
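The contour-integral route of the second method can be sketched numerically: exp(A)b is recovered from a Cauchy integral, discretized by the trapezoid rule, at the cost of one shifted linear solve per quadrature node. The sketch below uses a left-opening parabolic contour whose coefficients are an assumption borrowed from the contour-quadrature literature, not this paper's optimized Talbot contour; the function name is illustrative.

```python
import numpy as np

def expm_times_b_contour(A, b, N=32):
    """Approximate exp(A) @ b for a real matrix A with eigenvalues on the
    negative real axis, via the trapezoid rule on a parabolic contour.
    Illustrative sketch; contour coefficients are assumed, not optimized."""
    n = A.shape[0]
    theta = np.pi * (2 * np.arange(N) + 1 - N) / N           # midpoints in (-pi, pi)
    z = N * (0.1309 - 0.1194 * theta**2 + 0.2500j * theta)   # contour points z(theta)
    dz = N * (-0.2388 * theta + 0.2500j)                     # derivative z'(theta)
    h = 2 * np.pi / N
    acc = np.zeros(n, dtype=complex)
    for zk, dzk in zip(z, dz):
        # One shifted solve per quadrature node, as in the abstract.
        acc += np.exp(zk) * dzk * np.linalg.solve(zk * np.eye(n) - A, b)
    return (h / (2j * np.pi) * acc).real   # A real => imaginary parts cancel

A = np.diag([-1.0, -4.0, -9.0])
b = np.ones(3)
approx = expm_times_b_contour(A, b)
exact = np.exp(np.diag(A))        # exp(A) b for a diagonal A, for comparison
print(np.max(np.abs(approx - exact)))   # max abs error; small for modest N
```

Because the contour is symmetric about the real axis, a production implementation would sum only over nodes in the upper half-plane and take twice the real part, halving the number of solves.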
Dimension reduction for functionals on solenoidal vector fields
We study integral functionals constrained to divergence-free vector fields in L^p
on a thin domain, under standard p-growth and coercivity assumptions,
with 1 < p < infinity. We prove that as the thickness of the domain goes to zero, the
Gamma-limit with respect to weak convergence in L^p is always given by the
associated functional with convexified energy density wherever it is finite.
Remarkably, this happens despite the fact that relaxation of nonconvex
functionals subject to the limiting constraint can give rise to a nonlocal
functional, as illustrated in an example. Comment: 25 pages.
Effective H^{\infty} interpolation constrained by Hardy and Bergman weighted norms
Given a finite set sigma of the unit disc D and a holomorphic
function f in D which belongs to a class X, we look for a
function g in another class Y which minimizes the norm ||g||_Y among all
functions g such that g = f on sigma. Generally speaking, the
interpolation constant considered is c(sigma, X, Y), the supremum over f in the
unit ball of X of inf{ ||g||_Y : g = f on sigma }. When X = Y = H^infinity, our
interpolation problem includes those of Nevanlinna-Pick (1916) and
Carathéodory-Schur (1908). Moreover, Carleson's free
interpolation (1958) also has an interpretation in terms of our constant
c(sigma, X, Y). If Y is a Hilbert space belonging to the
scale of Hardy and Bergman weighted spaces, we give an upper bound for
c(sigma, X, Y) in terms of n = #sigma and of the
norm of the evaluation functional on the space Y. The upper
bound is sharp over sets sigma with given cardinality and radius. If Y is a general
Hardy-Sobolev space or a general weighted Bergman space (not necessarily of
Hilbert type), we also find upper and lower bounds for c(sigma, X, Y)
(sometimes for special sets sigma), but with some gaps between
these bounds. This constrained interpolation is motivated by some applications
in matrix analysis and in operator theory.
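The Nevanlinna-Pick problem mentioned above has a classical solvability criterion that is easy to state concretely: prescribed values w_j at points z_j of the unit disc admit an interpolant f in H^infinity with ||f||_infinity <= 1 if and only if the Pick matrix with entries (1 - w_i conj(w_j))/(1 - z_i conj(z_j)) is positive semidefinite. A small sketch of this check (the function name is illustrative):

```python
import numpy as np

def pick_interpolable(z, w, tol=1e-12):
    """Nevanlinna-Pick criterion: values w[j] at points z[j] in the unit
    disc admit an interpolant f in H^infinity with sup-norm <= 1 iff the
    Pick matrix P[i,j] = (1 - w[i]*conj(w[j])) / (1 - z[i]*conj(z[j]))
    is positive semidefinite."""
    z = np.asarray(z, dtype=complex)
    w = np.asarray(w, dtype=complex)
    P = (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))
    return bool(np.min(np.linalg.eigvalsh(P)) >= -tol)

# f(z) = z interpolates (0, 0) and (0.5, 0.5) with norm at most 1 -> solvable.
print(pick_interpolable([0, 0.5], [0, 0.5]))    # True
# By the Schwarz lemma, f(0) = 0 forces |f(0.5)| <= 0.5, so 0.9 is infeasible.
print(pick_interpolable([0, 0.5], [0, 0.9]))    # False
```

The Pick matrix is Hermitian by construction, so `eigvalsh` applies; the tolerance guards against harmless rounding in the zero eigenvalues that occur in degenerate (uniquely solvable) cases.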
On weighted compositions preserving the Carathéodory class
This is a post-peer-review, pre-copyedit version of an article published in Monatshefte für Mathematik. The final authenticated version is available online at: http://dx.doi.org/10.1007/s00605-017-1093-3
We characterize in various ways the weighted composition transformations which preserve the class P of normalized analytic functions in the disk with positive real part. We analyze the meaning of the criteria obtained for various special cases of symbols and identify the fixed points of such transformations.
Arévalo, Martín, and Vukotić are supported by MTM2015-65792-P from MINECO and FEDER/EU and partially by the Thematic Research Network MTM2015-69323-REDT, MINECO, Spain. Hernández and Martín are supported by FONDECYT 1150284, Chile. Martín is also supported by Academy of Finland Grant 26800.
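For a concrete feel for the class P: it consists of analytic functions f on the unit disk with f(0) = 1 and Re f > 0, and one simple family of compositions that preserves it is f -> f o phi for an analytic self-map phi of the disk with phi(0) = 0 (composition keeps the real part positive and the normalization intact). A numerical spot-check of that special case (function names are illustrative, and a grid check is of course not a proof):

```python
import numpy as np

def in_class_P_on_grid(f, n_radii=20, n_angles=64, tol=1e-9):
    """Spot-check membership in the Caratheodory class P on a polar grid:
    f(0) = 1 and Re f > 0 sampled over the open unit disk."""
    if abs(f(0) - 1) > tol:
        return False
    r = np.linspace(0.05, 0.95, n_radii)
    t = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    zs = np.outer(r, np.exp(1j * t)).ravel()
    return bool(np.all(np.vectorize(f)(zs).real > 0))

# The half-plane map f(z) = (1+z)/(1-z) is the prototypical member of P.
f = lambda z: (1 + z) / (1 - z)
phi = lambda z: z**2 / 2      # analytic self-map of the disk with phi(0) = 0
print(in_class_P_on_grid(f))                      # True
print(in_class_P_on_grid(lambda z: f(phi(z))))    # True: composition stays in P
print(in_class_P_on_grid(lambda z: 1 + 2 * z))    # False: Re < 0 near z = -1
```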
Carathéodory sampling for stochastic gradient descent
Many problems require optimizing empirical risk functions over large data sets. Gradient descent methods that calculate the full gradient in every descent step do not scale to such data sets. Various flavours of Stochastic Gradient Descent (SGD) replace the expensive summation that computes the full gradient by approximating it with a small sum over a randomly selected subsample of the data set, which in turn suffers from high variance. We present a different approach that is inspired by classical results of Tchakaloff and Carathéodory about measure reduction. These results allow one to replace an empirical measure with another, carefully constructed probability measure that has a much smaller support but preserves certain statistics, such as the expected gradient. To turn this into scalable algorithms we firstly adaptively select the descent steps where the measure reduction is carried out; secondly, we combine this with Block Coordinate Descent so that the measure reduction can be done very cheaply. This makes the resulting methods scalable to high-dimensional spaces. Finally, we provide an experimental validation and comparison.
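The measure-reduction idea admits a short linear-algebra sketch: by Carathéodory's theorem, a probability measure on N points in R^d can be replaced by a measure supported on at most d+1 of those points with the same total mass and the same mean (e.g. the same expected gradient). The routine below (name and structure are illustrative, not the paper's adaptive scheme) repeatedly removes points by shifting weight along a null direction of the moment matrix until only d+1 support points remain:

```python
import numpy as np

def caratheodory_reduce(points, weights):
    """Reduce a discrete measure on N points in R^d to one supported on at
    most d+1 of the points, preserving total mass and mean. Illustrative
    sketch of Caratheodory/Tchakaloff measure reduction."""
    points = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float).copy()
    d = points.shape[1]
    idx = np.arange(len(w))
    while len(idx) > d + 1:
        # Rows (x_i, 1): any c with A^T c = 0 shifts the weights without
        # changing the preserved moments (total mass and mean).
        A = np.hstack([points[idx], np.ones((len(idx), 1))])
        c = np.linalg.svd(A.T)[2][-1]        # null vector of A^T
        if c.max() < -c.min():
            c = -c                           # ensure a sizeable positive entry
        # Largest step keeping all weights nonnegative; one weight reaches
        # zero and its point is dropped from the support.
        pos = c > 1e-14
        ratios = np.where(pos, w[idx] / np.where(pos, c, 1.0), np.inf)
        j = int(np.argmin(ratios))
        w[idx] -= ratios[j] * c
        w[idx[j]] = 0.0
        idx = idx[w[idx] > 0]
    return idx, w[idx]

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
w0 = np.full(200, 1 / 200)
idx, w = caratheodory_reduce(pts, w0)
print(len(idx), w.sum())                     # support size <= d+1, mass preserved
print(np.allclose(w @ pts[idx], w0 @ pts))   # mean preserved on the small support
```

In the SGD setting the "mean" would be the averaged per-sample gradient, so a descent step over the reduced measure reproduces the full-gradient step at a fraction of the summation cost.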
Fast and accurate con-eigenvalue algorithm for optimal rational approximations
The need to compute small con-eigenvalues and the associated con-eigenvectors
of positive-definite Cauchy matrices naturally arises when constructing
rational approximations with a (near) optimally small error.
Specifically, given a rational function with n poles in the unit disk, a
rational approximation with m < n poles in the unit disk may be obtained
from the m-th con-eigenvector of an n x n Cauchy matrix, where the
associated con-eigenvalue gives the approximation error in the
L^infinity norm. Unfortunately, standard algorithms do not accurately compute
small con-eigenvalues (and the associated con-eigenvectors) and, in particular,
yield few or no correct digits for con-eigenvalues smaller than the machine
roundoff. We develop a fast and accurate algorithm for computing
con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices,
yielding even the tiniest con-eigenvalues with high relative accuracy. The
algorithm computes the m-th con-eigenvalue in O(m^2 n) operations
and, since the con-eigenvalues of positive-definite Cauchy matrices decay
exponentially fast, we obtain (near) optimal rational approximations in
O(n (log delta^{-1})^2) operations, where delta is the
approximation error in the L^infinity norm. We derive error bounds
demonstrating high relative accuracy of the computed con-eigenvalues and the
high accuracy of the unit con-eigenvectors. We also provide examples of using
the algorithm to compute (near) optimal rational approximations of functions
with singularities and sharp transitions, where approximation errors close to
machine precision are obtained. Finally, we present numerical tests on random
(complex-valued) Cauchy matrices to show that the algorithm computes all the
con-eigenvalues and con-eigenvectors with nearly full precision.
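For context, a con-eigenpair of a matrix A satisfies A conj(u) = lambda u with lambda >= 0, so for a complex symmetric A the con-eigenvalues are the diagonal entries of a Takagi factorization A = U diag(s) U^T with U unitary, and squaring turns the problem into an ordinary eigenproblem for A conj(A). The sketch below builds a matrix with known con-eigenvalues and recovers them by this naive eigenvalue route, which (as the abstract notes) loses relative accuracy once con-eigenvalues approach roundoff; the construction is illustrative, not the paper's Cauchy-matrix algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a complex symmetric A = U diag(s) U^T with U unitary (a Takagi
# factorization), so the con-eigenpairs A conj(u_i) = s_i u_i are known.
n = 4
s = np.array([1.0, 1e-2, 1e-4, 1e-6])
U = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]
A = U @ np.diag(s) @ U.T

# Check the defining relation for the largest con-eigenpair.
u0 = U[:, 0]
print(np.allclose(A @ u0.conj(), s[0] * u0))

# Naive route: the con-eigenvalues squared are eigenvalues of A @ conj(A).
lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(A @ A.conj()))))[::-1]
print(lam)   # approximates s, but tiny con-eigenvalues lose relative accuracy
```

Squaring the spectrum pushes the smallest con-eigenvalue toward the absolute eigenvalue error of order machine epsilon, which is exactly why con-eigenvalues below roundoff come out with no correct digits in the naive approach and why a structured high-relative-accuracy algorithm is needed.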