
    Structured Hölder condition numbers for multiple eigenvalues

    The sensitivity of a multiple eigenvalue of a matrix under perturbations can be measured by its Hölder condition number. Various extensions of this concept are considered. A meaningful notion of structured Hölder condition numbers is introduced, and it is shown that many existing results on structured condition numbers for simple eigenvalues carry over to multiple eigenvalues. The structures investigated in more detail include real, Toeplitz, Hankel, symmetric, skew-symmetric, Hamiltonian, and skew-Hamiltonian matrices. Furthermore, unstructured and structured Hölder condition numbers for multiple eigenvalues of matrix pencils are introduced. Particular attention is given to symmetric/skew-symmetric, Hermitian, and palindromic pencils. It is also shown how matrix polynomial eigenvalue problems can be covered within this framework. © by SIAM
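
    The Hölder exponent behind this condition number can be seen numerically: a defective eigenvalue belonging to a single n×n Jordan block moves like eps**(1/n) rather than O(eps) under a perturbation of size eps. The following sketch (illustrative, not taken from the paper) checks this for a 2×2 Jordan block:

```python
import numpy as np

# A single 2x2 Jordan block with eigenvalue 0; the perturbed matrix
# [[0, 1], [eps, 0]] has eigenvalues +/- sqrt(eps), so the eigenvalue
# moves like eps**(1/2), not O(eps) -- the Hölder exponent 1/n for n = 2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
E = np.array([[0.0, 0.0],
              [1.0, 0.0]])      # perturbation direction

for eps in (1e-4, 1e-8):
    lams = np.linalg.eigvals(A + eps * E)
    print(eps, np.max(np.abs(lams)))   # grows like sqrt(eps)
```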

    Cut distance identifying graphon parameters over weak* limits

    The theory of graphons comes with the so-called cut norm and the derived cut distance. The cut norm is finer than the weak* topology. Doležal and Hladký [Cut-norm and entropy minimization over weak* limits, J. Combin. Theory Ser. B 137 (2019), 232-263] showed that, given a sequence of graphons, a cut distance accumulation graphon can be pinpointed in the set of weak* accumulation points as a minimizer of the entropy. Motivated by this, we study graphon parameters with the property that their minimizers or maximizers identify cut distance accumulation points over the set of weak* accumulation points. We call such parameters cut distance identifying. Of particular importance are cut distance identifying parameters coming from subgraph densities, t(H,*). This concept is closely related to the emerging field of graph norms, and the notions of the step Sidorenko property and the step forcing property introduced by Král, Martins, Pach and Wrochna [The step Sidorenko property and non-norming edge-transitive graphs, J. Combin. Theory Ser. A 162 (2019), 34-54]. We prove that a connected graph is weakly norming if and only if it is step Sidorenko, and that if a graph is norming then it is step forcing. Further, we study convexity properties of cut distance identifying graphon parameters, and find a way to identify cut distance limits using spectra of graphons. We also show that continuous cut distance identifying graphon parameters have the "pumping property", and thus can be used in the proof of the Frieze-Kannan regularity lemma. Comment: 48 pages, 3 figures. Correction when treating disconnected norming graphs, and a new section 3.2 on index pumping in the regularity lemma.
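
    The parameters t(H,*) mentioned above are homomorphism densities of a fixed graph H in a graphon W. A minimal sketch (not from the paper; grid discretization and the particular graphon are assumptions for illustration) of how such densities are evaluated for the edge K2 and the triangle K3:

```python
import numpy as np

# Discretize a graphon W: [0,1]^2 -> [0,1] on an n x n grid of midpoints
# and evaluate the subgraph densities t(K2, W) and t(K3, W), the simplest
# instances of the parameters t(H, .) discussed in the abstract.
n = 200
x = (np.arange(n) + 0.5) / n
W = 0.5 * (1 + np.outer(x, x))     # a smooth symmetric graphon, values in [0,1]

t_edge = W.mean()                                       # t(K2, W)
t_triangle = np.einsum('ij,jk,ik->', W, W, W) / n**3    # t(K3, W)
```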

    Spectral tensor-train decomposition

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. We also show that the regularity of the target function is preserved by the univariate functions (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm TT-DMRG-cross to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online. Comment: 33 pages, 19 figures
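
    To make the TT format concrete, here is a brute-force sketch of the discrete decomposition via sequential SVDs (the classical TT-SVD construction); the paper instead builds TT decompositions from samples with TT-DMRG-cross, so this is only an illustration of what the cores look like:

```python
import numpy as np

def tt_svd(T, tol=1e-12):
    """Decompose a full tensor T into TT cores of shape (r_prev, n_k, r_k)."""
    shape, d = T.shape, T.ndim
    cores, r, M = [], 1, T.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > tol)))          # truncate tiny singular values
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    T = cores[0]
    for c in cores[1:]:
        T = np.tensordot(T, c, axes=([-1], [0]))
    return T.reshape(T.shape[1:-1])

# a rank-1 test tensor f(i, j, k) = a_i * b_j * c_k: all TT ranks should be 1
a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T)
err = np.max(np.abs(tt_to_full(cores) - T))
```

    The key point of the format is that storage scales linearly in the dimension d (one three-way core per input variable) instead of exponentially, which is what the spectral extension in the paper inherits.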

    Computing all Pairs (λ,μ) Such That λ is a Double Eigenvalue of A + μB

    Double eigenvalues are not generic for matrices without any particular structure. A matrix depending linearly on a scalar parameter, A + μB, will, however, generically have double eigenvalues for some values of the parameter μ. In this paper, we consider the problem of finding those values. More precisely, we construct a method to accurately find all scalar pairs (λ,μ) such that A + μB has a double eigenvalue λ, where A and B are given arbitrary complex matrices. The general idea of the globally convergent method is that if μ is close to a solution, then A + μB has two eigenvalues which are close to each other. We fix the relative distance between these two eigenvalues and observe that the resulting problem can be stated as a two-parameter eigenvalue problem, which has already been studied in the literature. The method, which we call the method of fixed relative distance (MFRD), involves solving a two-parameter eigenvalue problem which returns approximations of all solutions. It is unfortunately not possible to get full accuracy with MFRD. In order to compute solutions with full accuracy, we present an iterative method which returns a very accurate solution, given a sufficiently good starting value. The approach is illustrated with one academic example and one application to a simple problem in computational quantum mechanics. Copyright 2011 Society for Industrial and Applied Mathematics
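
    For 2×2 matrices the problem can be solved directly, which makes the setup easy to check: a double eigenvalue of A + μB occurs exactly where the discriminant tr(A + μB)² − 4 det(A + μB) vanishes, a quadratic in μ. The sketch below is this brute-force special case, not the paper's MFRD method (the matrices A and B are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

def disc(mu):
    # discriminant of the characteristic polynomial of A + mu*B
    M = A + mu * B
    return np.trace(M) ** 2 - 4 * np.linalg.det(M)

# for 2x2 matrices disc is a degree-2 polynomial in mu:
# recover its coefficients exactly from three samples, then take roots
mus = np.array([0.0, 1.0, 2.0])
coeffs = np.polyfit(mus, [disc(m) for m in mus], 2)

gaps = []
for mu in np.roots(coeffs):              # the two values of mu with a double eigenvalue
    lams = np.linalg.eigvals(A + mu * B)
    gaps.append(abs(lams[0] - lams[1]))  # both eigenvalues collapse to tr(A + mu*B)/2
```

    For general n×n matrices the discriminant is a high-degree polynomial and this approach is numerically hopeless, which is the gap the two-parameter eigenvalue formulation in the paper addresses.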

    On the Inductive Bias of Neural Tangent Kernels

    State-of-the-art neural networks are heavily over-parameterized, making the optimization algorithm a crucial ingredient for learning predictive models with good generalization properties. A recent line of work has shown that in a certain over-parameterized regime, the learning dynamics of gradient descent are governed by a certain kernel obtained at initialization, called the neural tangent kernel. We study the inductive bias of learning in such a regime by analyzing this kernel and the corresponding function space (RKHS). In particular, we study smoothness, approximation, and stability properties of functions with finite norm, including stability to image deformations in the case of convolutional networks, and compare to other known kernels for similar architectures. Comment: NeurIPS 2019
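
    The kernel in question is the Gram matrix of parameter gradients of the network at initialization. A minimal sketch (illustrative only; the two-layer ReLU architecture, width, and data are assumptions, and the paper studies the infinite-width limit rather than this finite-width object):

```python
import numpy as np

# Empirical tangent kernel of f(x) = a . relu(W x) / sqrt(m): the Gram
# matrix K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.  At large
# width m this concentrates around the neural tangent kernel.
rng = np.random.default_rng(0)
m, d, n = 5000, 3, 4                       # width, input dim, num samples
W = rng.standard_normal((m, d))
a = rng.standard_normal(m)
X = rng.standard_normal((n, d))

def param_grad(x):
    pre = W @ x                            # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)             # relu
    g_a = act / np.sqrt(m)                 # gradient w.r.t. a
    g_W = ((a * (pre > 0)) / np.sqrt(m))[:, None] * x[None, :]  # w.r.t. W
    return np.concatenate([g_a, g_W.ravel()])

G = np.stack([param_grad(x) for x in X])   # one gradient row per sample
K = G @ G.T                                # empirical tangent kernel, (n, n)
```

    Being a Gram matrix, K is symmetric positive semidefinite, so it defines a valid kernel and an associated RKHS, the object whose smoothness and stability properties the paper analyzes.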

    IST Austria Thesis

    The eigenvalue density of many large random matrices is well approximated by a deterministic measure, the self-consistent density of states. In the present work, we show this behaviour for several classes of random matrices. In fact, we establish that, in each of these classes, the self-consistent density of states approximates the eigenvalue density of the random matrix on all scales slightly above the typical eigenvalue spacing. For large classes of random matrices, the self-consistent density of states exhibits several universal features. We prove that, under suitable assumptions, random Gram matrices and Hermitian random matrices with decaying correlations have a 1/3-Hölder continuous self-consistent density of states ρ on R, which is analytic where it is positive, and has either a square root edge or a cubic root cusp where it vanishes. We thus extend the validity of the corresponding result for Wigner-type matrices from [4, 5, 7]. We show that ρ is determined as the inverse Stieltjes transform of the normalized trace of the unique solution m(z) to the Dyson equation −m(z)^{-1} = z − a + S[m(z)] on C^{N×N} with the constraint Im m(z) ≥ 0. Here, z lies in the complex upper half-plane, a is a self-adjoint element of C^{N×N}, and S is a positivity-preserving operator on C^{N×N} encoding the first two moments of the random matrix. In order to analyze a possible limit of ρ for N → ∞ and address some applications in free probability theory, we also consider the Dyson equation on infinite-dimensional von Neumann algebras. We present two applications to random matrices. We first establish that, under certain assumptions, large random matrices with independent entries have a rotationally symmetric self-consistent density of states which is supported on a centered disk in C. Moreover, it is infinitely often differentiable apart from a jump on the boundary of this disk. Second, we show edge universality at all regular (not necessarily extreme) spectral edges for Hermitian random matrices with decaying correlations.
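
    The simplest instance of the Dyson equation is the scalar case a = 0, S[m] = m, whose solution with Im m > 0 is the Stieltjes transform of the semicircle law. The toy sketch below (an illustration, not the matrix-valued setting of the thesis) recovers the semicircle density ρ(x) = sqrt(4 − x²)/(2π) from the equation by a damped fixed-point iteration:

```python
import numpy as np

def solve_dyson(z, iters=500):
    """Solve -1/m = z + m (scalar Dyson equation, a = 0, S[m] = m)
    for the solution with Im m > 0, by damped fixed-point iteration."""
    m = 1j                               # start in the upper half-plane
    for _ in range(iters):
        m = 0.5 * (m - 1.0 / (z + m))    # average m with -1/(z + m)
    return m

# the density of states is recovered from the boundary values Im m(x + i*0)/pi
x, eta = 0.0, 1e-6
rho = solve_dyson(x + 1j * eta).imag / np.pi   # should match sqrt(4 - x**2)/(2*pi)
```

    The damping is needed because the undamped map m ↦ −1/(z + m) is only marginally stable near the real axis; averaging restores convergence while keeping the same fixed point.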

    Escape rates for Gibbs measures

    In this paper we study the asymptotic behaviour of the escape rate of a Gibbs measure supported on a conformal repeller through a small hole. There are additional applications to the convergence of the Hausdorff dimension of the survivor set.
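
    A Monte Carlo sketch of the quantity being studied (illustrative only; the paper is analytic, and the doubling map with Lebesgue measure and this particular hole are assumptions): the escape rate is the exponential rate at which mass leaks through the hole, estimated here as −log(survivor fraction)/n.

```python
import numpy as np

# Escape through the hole H = (0.3, 0.4) under the doubling map
# T(x) = 2x mod 1, starting from points sampled from Lebesgue measure.
rng = np.random.default_rng(1)
x = rng.random(200_000)
alive = np.ones(x.size, dtype=bool)
n_steps = 25
for _ in range(n_steps):
    alive &= ~((x > 0.3) & (x < 0.4))   # points currently in the hole escape
    x = (2.0 * x) % 1.0                 # apply the doubling map
survivors = alive.mean()                # measure of the n-step survivor set
escape_rate = -np.log(survivors) / n_steps
```

    For a small hole the escape rate is close to the measure of the hole (here 0.1), with corrections that depend on the dynamics; the asymptotics of exactly such corrections are the subject of the paper.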