Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
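The two-stage framework described in the abstract (randomly sample a subspace that captures most of the action of the matrix, then compress and factor deterministically) can be sketched as follows. This is a minimal illustration, not the paper's reference implementation; the function name, the oversampling parameter `p`, and the Gaussian test matrix are assumptions chosen for clarity.

```python
import numpy as np

def randomized_svd(A, k, p=10):
    """Sketch of a rank-k SVD via randomized range finding.

    Stage 1: sample the range of A with a Gaussian test matrix.
    Stage 2: compress A to that subspace and factor deterministically.
    The oversampling parameter p improves the accuracy of the sampled basis.
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    # Stage 1: random test matrix Omega; Y = A @ Omega captures
    # most of the action of A with high probability.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sample range
    # Stage 2: compress A to the subspace and run a small deterministic SVD.
    B = Q.T @ A                        # (k+p) x n reduced matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small                    # lift left factors back to R^m
    return U[:, :k], s[:k], Vt[:k, :]

# Usage: on an exactly rank-5 matrix the approximation is essentially exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
```

For a matrix with slowly decaying singular values, a few power iterations (multiplying by `A @ A.T` before the QR step) would sharpen the sampled basis, at the cost of additional passes over the data.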
Gauge Theory for Spectral Triples and the Unbounded Kasparov Product
We explore factorizations of noncommutative Riemannian spin geometries over
commutative base manifolds in unbounded KK-theory. After setting up the general
formalism of unbounded KK-theory and improving upon the construction of
internal products, we arrive at a natural bundle-theoretic formulation of gauge
theories arising from spectral triples. We find that the unitary group of a
given noncommutative spectral triple arises as the group of endomorphisms of a
certain Hilbert bundle; the inner fluctuations split in terms of connections
on, and endomorphisms of, this Hilbert bundle. Moreover, we introduce an
extended gauge group of unitary endomorphisms and a corresponding notion of
gauge fields. We work out several examples in full detail, to wit Yang--Mills
theory, the noncommutative torus and the θ-deformed Hopf fibration over
the two-sphere.
Comment: 50 pages. Accepted version. Section 2 has been rewritten. Results in sections 3-6 are unchanged.
Reproducing Kernel Krein Spaces of Analytic Functions and Inverse Scattering
The purpose of this thesis is to study certain reproducing kernel Krein spaces of analytic functions, the relationships between these spaces and an inverse scattering problem associated with matrix valued functions of bounded type, and an operator model.
Roughly speaking, these results correspond to a generalization of earlier investigations on the applications of de Branges' theory of reproducing kernel Hilbert spaces of analytic functions to the inverse scattering problem for a matrix valued function of the Schur class.
The present work considers first a generalization of a portion of de Branges' theory to Krein spaces. We then formulate a general inverse scattering problem which includes as a special case the more classical inverse scattering problem of finding linear fractional representations of a given matrix valued function of the Schur class, and use the theory alluded to above to obtain solutions to this problem.
Finally, we give a model for certain hermitian operators in Pontryagin spaces in terms of multiplication by the complex variable in a reproducing kernel Pontryagin space of analytic functions.
Sign-indefinite second order differential operators on finite metric graphs
The question of self-adjoint realizations of sign-indefinite second order
differential operators is discussed in terms of a model problem. Operators of
the type $-\frac{d}{dx}\,\operatorname{sgn}(x)\,\frac{d}{dx}$ are generalized to finite, not
necessarily compact, metric graphs. All self-adjoint realizations are
parametrized using methods from extension theory. The spectral and scattering
theory of the self-adjoint realizations is studied in detail.
Comment: 43 pages, 2 figures.