    A sparse decomposition of low rank symmetric positive semi-definite matrices

    Suppose that $A \in \mathbb{R}^{N \times N}$ is symmetric positive semidefinite with rank $K \le N$. Our goal is to decompose $A$ into $K$ rank-one matrices $\sum_{k=1}^K g_k g_k^T$, where the modes $\{g_k\}_{k=1}^K$ are required to be as sparse as possible. In contrast to eigendecomposition, these sparse modes are not required to be orthogonal. Such a problem arises in random field parametrization, where $A$ is the covariance function, and is intractable to solve in general. In this paper, we partition the indices from 1 to $N$ into several patches and propose to quantify the sparseness of a vector by the number of patches on which it is nonzero, which we call patch-wise sparseness. Our aim is to find the decomposition that minimizes the total patch-wise sparseness of the decomposed modes. We propose a domain-decomposition-type method, called intrinsic sparse mode decomposition (ISMD), which follows a "local-modes-construction + patching-up" procedure. The key step in the ISMD is to construct local pieces of the intrinsic sparse modes via a joint diagonalization problem. A pivoted Cholesky decomposition is then used to glue these local pieces together. Optimal sparse decomposition, consistency with different domain decompositions, and robustness to small perturbations are proved under the so-called regular-sparse assumption (see Definition 1.2). We provide simulation results to show the efficiency and robustness of the ISMD. We also compare the ISMD to other existing methods, e.g., eigendecomposition, pivoted Cholesky decomposition, and convex relaxation of sparse principal component analysis [25], [40].
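    The "patching-up" step mentioned above uses a pivoted Cholesky decomposition, which by itself already produces a rank-one decomposition $A = \sum_k g_k g_k^T$ of a low-rank PSD matrix. A minimal NumPy sketch of that building block (illustrative only; this is not the full ISMD, and the function and example matrix are my own):

```python
import numpy as np

def pivoted_cholesky(A, rank, tol=1e-10):
    """Greedy pivoted Cholesky of a PSD matrix A.

    Returns G whose columns g_k satisfy A ~= sum_k g_k g_k^T.
    A sketch of the 'patching-up' building block, not the full ISMD.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    G = np.zeros((n, rank))
    for k in range(rank):
        p = np.argmax(np.diag(A))          # pivot: largest remaining diagonal
        if A[p, p] < tol:                  # numerical rank reached early
            return G[:, :k]
        G[:, k] = A[:, p] / np.sqrt(A[p, p])
        A -= np.outer(G[:, k], G[:, k])    # deflate the rank-one piece
    return G

# Example: a rank-2 PSD matrix built from two sparse modes
g1 = np.array([1.0, 2.0, 0.0, 0.0])
g2 = np.array([0.0, 0.0, 3.0, 1.0])
A = np.outer(g1, g1) + np.outer(g2, g2)
G = pivoted_cholesky(A, rank=2)
print(np.allclose(G @ G.T, A))  # → True: the rank-one terms reproduce A
```

    Note that for this block-diagonal example the recovered columns are exactly the sparse modes (up to order and sign); in general, sparsity of the factors is precisely what ISMD adds on top of plain pivoted Cholesky.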

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
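    The tensor train (TT) format highlighted in this abstract can be computed by a sequence of truncated SVDs (the classic TT-SVD scheme). A small self-contained NumPy sketch, with my own function names and a synthetic rank-one test tensor:

```python
import numpy as np

def tt_svd(T, rank):
    """Tensor-train decomposition by sequential truncated SVDs (TT-SVD sketch).

    T: d-way numpy array; rank: maximal TT-rank. Returns a list of 3-way cores.
    """
    dims = T.shape
    d = len(dims)
    cores = []
    r_prev = 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # carry the remaining factor forward and re-fold for the next mode
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
# A TT-rank-1 tensor: outer product of three vectors
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T, rank=2)
print(np.allclose(tt_reconstruct(cores), T))  # → True: exact for low TT-rank
```

    The point of the format is storage: a $d$-way tensor with $n^d$ entries is replaced by $d$ cores with $O(d\,n\,r^2)$ entries, which is what makes the "super-compressed" representations in the monograph possible.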

    Frequency domain reduced order model of aligned-spin effective-one-body waveforms with generic mass-ratios and spins

    I provide a frequency-domain reduced order model (ROM) for the aligned-spin effective-one-body (EOB) model "SEOBNRv2" for data analysis with second- and third-generation ground-based gravitational wave (GW) detectors. SEOBNRv2 models the dominant mode of the GWs emitted by the coalescence of black hole (BH) binaries. The large physical parameter space (dimensionless spins $-1 \leq \chi_i \leq 0.99$ and symmetric mass-ratios $0.01 \leq \eta \leq 0.25$) requires sophisticated reduced order modeling techniques, including patching in the parameter space and in frequency. I find that the time window over which the inspiral-plunge and the merger-ringdown waveform in SEOBNRv2 are connected is discontinuous when the spin of the deformed Kerr BH $\chi = 0.8$ or the symmetric mass-ratio $\eta \sim 0.083$. This discontinuity increases resolution requirements for the ROM. The ROM can be used for compact binary systems with total masses of $2\,M_\odot$ or higher for the advanced LIGO (aLIGO) design sensitivity and a $10$ Hz lower cutoff frequency. The ROM has a worst mismatch against SEOBNRv2 of $\sim 1\%$, but in general mismatches are better than $\sim 0.1\%$. The ROM is crucial for key data analysis applications for compact binaries, such as GW searches and parameter estimation carried out within the LIGO Scientific Collaboration (LSC). Comment: 14 pages, 14 figures
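    The mismatch quoted in this abstract is one minus the overlap between two waveforms, maximized over relative time and phase shifts. A simplified sketch assuming a flat (white) noise PSD — real analyses weight the inner product by the detector PSD — with `mismatch` being my own hypothetical helper, not a SEOBNRv2/ROM function:

```python
import numpy as np

def mismatch(h1, h2):
    """1 - max overlap between two frequency-domain waveforms, maximized
    over time and phase shifts, assuming a flat (white) noise PSD.

    Time shifts are scanned via an inverse FFT of the complex overlap
    integrand; taking the modulus maximizes over a constant phase offset.
    """
    norm = np.sqrt(np.vdot(h1, h1).real * np.vdot(h2, h2).real)
    overlaps = np.abs(np.fft.ifft(h1 * np.conj(h2))) * len(h1)
    return 1.0 - overlaps.max() / norm

# Two identical chirp-like signals should have (numerically) zero mismatch
f = np.arange(1, 257, dtype=float)
h = f ** (-7.0 / 6.0) * np.exp(1j * 0.1 * f ** 2)
print(abs(mismatch(h, h)) < 1e-12)  # → True
```

    Quoted figures like "better than $\sim 0.1\%$" mean this quantity stays below $10^{-3}$ across the parameter space, which is the usual criterion for a model being faithful enough for searches and parameter estimation.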

    Computing the Jacobian in spatial models: an applied survey.

    Despite attempts to get around the Jacobian in fitting spatial econometric models by using GMM and other approximations, it remains a central problem for maximum likelihood estimation. In principle, and for smaller data sets, the use of the eigenvalues of the spatial weights matrix provides a very rapid and satisfactory resolution. For somewhat larger problems, including those induced in spatial panel and dyadic (network) problems, solving the eigenproblem is not as attractive, and a number of alternatives have been proposed. This paper surveys the chosen alternatives and comments on their relative usefulness.
    Keywords: Spatial autoregression; Maximum likelihood estimation; Jacobian computation; Econometric software.
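    The eigenvalue approach mentioned for smaller data sets works because the Jacobian term $\ln|I - \rho W|$ in the spatial autoregressive log-likelihood reduces to $\sum_i \ln(1 - \rho \lambda_i)$ once the eigenvalues $\lambda_i$ of the weights matrix $W$ are computed; re-evaluating it for each trial $\rho$ inside the optimizer is then trivial. A sketch with a synthetic row-standardized weights matrix (not taken from the survey):

```python
import numpy as np

def log_jacobian_eig(eigvals, rho):
    """log|I - rho*W| via precomputed eigenvalues of W."""
    return np.sum(np.log(1.0 - rho * eigvals)).real

rng = np.random.default_rng(1)
n = 50
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)      # row-standardize, as is conventional

eigvals = np.linalg.eigvals(W)          # one-off cost; reused for every rho
rho = 0.5
direct = np.linalg.slogdet(np.eye(n) - rho * W)[1]
print(np.isclose(log_jacobian_eig(eigvals, rho), direct))  # → True
```

    The trade-off the survey discusses is exactly this one-off eigenproblem: it is $O(n^3)$ and dense, so for large $n$ sparse alternatives (LU factorization, Chebyshev or Monte Carlo approximations of the log-determinant) become preferable.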

    Towards a Simplified Dynamic Wake Model using POD Analysis

    We apply the proper orthogonal decomposition (POD) to large eddy simulation data of a wind turbine wake in a turbulent atmospheric boundary layer. The turbine is modeled as an actuator disk. Our analysis mainly focuses on the question of whether POD could be a useful tool for developing a simplified dynamic wake model. The extracted POD modes are used to obtain approximate descriptions of the velocity field. To assess the quality of these POD reconstructions, we define simple measures believed to be relevant for a downstream turbine operating in the wake, such as the energy flux through a disk in the wake. It is shown that only a few modes are necessary to capture basic dynamical aspects of these measures, even though only a small part of the turbulent kinetic energy is restored. Furthermore, we show that the importance of the individual modes depends on the measure chosen. Therefore, the optimal choice of modes for a possible model could in principle depend on the application of interest. We additionally present a possible interpretation of the POD modes, relating them to specific properties of the wake. For example, the first mode is related to the horizontal large-scale movement. Besides yielding a deeper understanding, this also enables us to view our results in comparison to existing dynamic wake models.
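    POD as used here amounts to an SVD of the (mean-subtracted) snapshot matrix: the left singular vectors are the spatial modes, and a truncated reconstruction keeps only the first few. A self-contained sketch on synthetic data (the "flow field" below is invented for illustration; real inputs would be flattened LES velocity snapshots):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 64)       # spatial grid (stand-in for the field)
t = np.linspace(0, 10, 200)             # snapshot times
# two coherent structures plus small-scale noise; columns are snapshots
snapshots = (np.outer(np.sin(x), np.cos(t))
             + 0.5 * np.outer(np.cos(2 * x), np.sin(3 * t))
             + 0.01 * rng.standard_normal((64, 200)))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = s**2 / np.sum(s**2)            # energy captured per POD mode
r = 2                                   # keep only the first few modes
recon = mean + U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

print(energy[:2].sum() > 0.99)          # → True: two modes dominate
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(rel_err < 0.05)                   # → True: few-mode reconstruction
```

    The abstract's central observation maps directly onto this picture: a derived quantity (such as the energy flux through a downstream disk) may be well captured by a few modes even when `energy` shows that most turbulent kinetic energy lives in the discarded ones, and which modes matter depends on the quantity evaluated on `recon`.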