1,210 research outputs found
Conditions for scale-based decomposition in singularly perturbed systems
Bibliography: p. 23-24. Supported in part by the Air Force Office of Scientific Research under grant AFOSR-82-0258, in part by the Army Research Office under grant DAAG-29-84-K-005, and by a Science Scholar Fellowship from the Mary Ingraham Bunting Institute of Radcliffe College, under a grant from the Office of Naval Research. Sheldon X.-C. Lou ... [et al.]
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
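The tensor train (TT) decomposition emphasized above can be computed by the standard TT-SVD procedure: sequential truncated SVDs of successive unfoldings of the tensor. A minimal NumPy sketch, assuming a uniform rank cap (the function names are ours, not from the monograph):

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a d-way array into tensor-train (TT) cores via sequential
    truncated SVDs (the TT-SVD scheme).  Returns a list of 3-way cores
    G[k] of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remainder forward and unfold along the next mode.
        mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]  # shape (1, n_0, r_1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

Storage drops from the product of the mode sizes to a sum of small core sizes, which is the sense in which the TT format alleviates the curse of dimensionality.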
A Quasi-Random Approach to Matrix Spectral Analysis
Inspired by the quantum computing algorithms for linear algebra problems
[HHL, TaShma], we study how a classical-computer simulation of this type of
"phase estimation algorithm" performs when applied to the eigenproblem of
Hermitian matrices. The result is a completely new, efficient and stable
parallel algorithm to compute an approximate spectral decomposition of any
Hermitian matrix. The algorithm can be implemented by Boolean circuits in
parallel time, at a total cost of Boolean operations matching the best known
rigorous parallel-time algorithms; unlike those algorithms, however, ours is
(logarithmically) stable, so further improvements may lead to practical
implementations.
All previous efficient and rigorous approaches to the eigenproblem use
randomization to avoid ill-conditioned instances, as do we. Our algorithm makes further
use of randomization in a completely new way, taking random powers of a unitary
matrix to randomize the phases of its eigenvalues. Proving that a tiny Gaussian
perturbation and a random polynomial power are sufficient to ensure almost
pairwise independence of the phases is the main technical
contribution of this work. This randomization enables us, given a Hermitian
matrix with well separated eigenvalues, to sample a random eigenvalue and
produce an approximate eigenvector in low parallel time and
Boolean complexity. We conjecture that further improvements of
our method can provide a stable solution to the full approximate spectral
decomposition problem with complexity similar to the complexity (up to a
logarithmic factor) of sampling a single eigenvector.
Comment: Replacing the previous version: the total complexity of the parallel
algorithm is corrected; the depth of the implementing circuit is hence
comparable to the fastest known eigendecomposition algorithms.
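The phase-randomization idea described above, a tiny Gaussian perturbation combined with a random power of the unitary U = exp(iH), can be illustrated in a few lines. This toy sketch uses an explicit eigendecomposition purely for illustration; it is not the paper's efficient algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian matrix, plus a tiny perturbation in the spirit of the
# paper's Gaussian perturbation (values here are illustrative).
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
H = H + 1e-9 * np.diag(rng.normal(size=n))

# Eigenphases of U = exp(iH) are the eigenvalues of H taken modulo 2*pi.
theta = np.linalg.eigvalsh(H)

# A random power p maps each phase theta_k to p * theta_k mod 2*pi,
# spreading out ("decorrelating") the phases of nearby eigenvalues.
p = int(rng.integers(1, 10**6))
randomized = np.angle(np.exp(1j * p * theta))
```

The paper's technical contribution is the proof that this kind of randomization makes the resulting phases almost pairwise independent; the sketch only shows the map theta -> p * theta mod 2*pi being applied.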
Conformal covariance and the split property
We show that for a conformal local net of observables on the circle, the
split property is automatic. Both full conformal covariance (i.e.
diffeomorphism covariance) and the circle-setting play essential roles in this
fact, while previously constructed examples had already shown that even
on the circle, Möbius covariance does not imply the split property.
On the other hand, here we also provide an example of a local conformal net
living on the two-dimensional Minkowski space, which - although being
diffeomorphism covariant - does not have the split property.
Comment: 34 pages, 3 TikZ figures
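For orientation, the split property in question is the standard one from algebraic quantum field theory: a suitable inclusion of local algebras factors through a type I factor. In the usual notation:

```latex
% Split property for a local net I \mapsto \mathcal{A}(I):
% whenever \overline{I_1} is contained in the interior of I_2,
% there exists a type I factor \mathcal{N} with
\mathcal{A}(I_1) \subset \mathcal{N} \subset \mathcal{A}(I_2).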
Singular perturbation of polynomial potentials in the complex domain with applications to PT-symmetric families
In the first part of the paper, we discuss eigenvalue problems of the form
-w''+Pw=Ew with a complex potential P and zero boundary conditions at infinity on
two rays in the complex plane. We give sufficient conditions for continuity of
the spectrum when the leading coefficient of P tends to 0. In the second part,
we apply these results to the study of topology and geometry of the real
spectral loci of PT-symmetric families with P of degree 3 and 4, and prove
several related results on the location of zeros of their eigenfunctions.
Comment: The main result on singular perturbation is substantially improved
and generalized, and the proof is simplified. 37 pages, 16 figures
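Eigenvalue problems of this type can be explored numerically with a crude finite-difference scheme. The sketch below discretizes -w'' + i x^3 w = E w (the classic PT-symmetric cubic) on a truncated real interval with Dirichlet conditions; this is a simplification of the paper's boundary conditions along rays in the complex plane, and the grid parameters are our own choices:

```python
import numpy as np

# Discretize -w'' + i x^3 w = E w on [-L, L] with Dirichlet boundary
# conditions.  The resulting matrix is complex symmetric (not Hermitian),
# yet for unbroken PT symmetry the low-lying eigenvalues come out real.
N, L = 800, 6.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

main = 2.0 / h**2 + 1j * x**3       # diagonal: kinetic part + potential
off = -np.ones(N - 1) / h**2        # off-diagonals: second-difference stencil
Hmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvals(Hmat)
lowest = min(E, key=abs)            # smallest-magnitude eigenvalue
```

At this resolution the smallest-magnitude eigenvalue approximates the known real ground-state eigenvalue of the i x^3 oscillator, illustrating the real spectral loci that the paper studies for the degree 3 and 4 families.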