
    Least-Squares Covariance Matrix Adjustment

    We consider the problem of finding the smallest adjustment to a given symmetric n × n matrix, as measured by the Euclidean or Frobenius norm, so that it satisfies some given linear equalities and inequalities, and in addition is positive semidefinite. This least-squares covariance adjustment problem is a convex optimization problem, and can be efficiently solved using standard methods when the number of variables (i.e., entries in the matrix) is modest, say, under 1000. Since the number of variables is n(n+1)/2, this corresponds to a limit around n = 45. Malick [SIAM J. Matrix Anal. Appl., 26 (2005), pp. 272-284] studies a closely related problem and calls it the semidefinite least-squares problem. In this paper we formulate a dual problem that has no matrix inequality or matrix variables, and a number of (scalar) variables equal to the number of equality and inequality constraints in the original least-squares covariance adjustment problem. This dual problem allows us to solve far larger least-squares covariance adjustment problems than would be possible using standard methods. Assuming a modest number of constraints, problems with n = 1000 are readily solved by the dual method. The dual method coincides with the dual method proposed by Malick when there are no inequality constraints, and can be obtained as an extension of his dual method when there are inequality constraints. Using the dual problem, we show that in many cases the optimal solution is a low-rank update of the original matrix. When the original matrix has structure, such as sparsity, this observation allows us to solve very large least-squares covariance adjustment problems.
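    As a rough illustration of the primal problem described above (not the paper's dual method), the sketch below solves a tiny least-squares covariance adjustment instance with a generic convex solver. The CVXPY library, the unit-diagonal equality constraint, and the single bounded off-diagonal entry are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Toy least-squares covariance adjustment: find the PSD matrix X closest
# (in Frobenius norm) to a given symmetric A, subject to a few linear
# equality/inequality constraints. Uses a generic convex solver via CVXPY,
# not the paper's specialized dual method; the constraints below are
# made up for illustration.
n = 10
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # given symmetric (possibly indefinite) matrix

X = cp.Variable((n, n), symmetric=True)
constraints = [
    X >> 0,                            # positive semidefinite
    cp.diag(X) == 1,                   # example equality constraints
    X[0, 1] <= 0.2,                    # example inequality constraint
]
prob = cp.Problem(cp.Minimize(cp.norm(X - A, "fro")), constraints)
prob.solve()
print("size of adjustment:", prob.value)
```

    This generic approach only scales to a modest n, which is exactly the limitation the paper's dual formulation is designed to overcome.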

    Parallelization of an object-oriented FEM dynamics code: influence of the strategies on the Speedup

    This paper presents a C++ implementation of an explicit parallel finite element code dedicated to the simulation of impacts. We first give a brief overview of the kinematics and the explicit integration scheme, with details on some specific points. We then present the OpenMP toolkit used to parallelize the code, and focus on how the parallelization of the DynELA FEM code was carried out for a shared-memory system. Several examples demonstrate the efficiency and accuracy of the proposed implementation in terms of the speedup of the code. Finally, an impact simulation application is presented, and its results are compared with those obtained with the commercial Abaqus explicit FEM code.
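    The paper's implementation is C++ with OpenMP; as a loose Python analogue of the same idea, the sketch below parallelizes an element-level loop with Numba's prange, where each iteration writes only to its own slot so no synchronization is needed. The array names, shapes, and the stand-in element computation are invented for illustration and are not the DynELA formulation.

```python
import numpy as np
from numba import njit, prange

# Loose analogue of OpenMP-style shared-memory loop parallelism:
# each prange iteration handles one element and writes only to its own
# entry of `energy`, so no locks are needed.
@njit(parallel=True)
def element_energies(coords, conn, disp):
    n_elem = conn.shape[0]
    energy = np.zeros(n_elem)
    for e in prange(n_elem):              # iterations distributed over threads
        acc = 0.0
        for a in range(conn.shape[1]):    # loop over the element's nodes
            node = conn[e, a]
            for d in range(coords.shape[1]):
                # stand-in for the real element-level computation
                acc += 0.5 * disp[node, d] ** 2 + 1e-6 * coords[node, d] ** 2
        energy[e] = acc
    return energy

coords = np.random.rand(1000, 3)                    # nodal coordinates
conn = np.random.randint(0, 1000, size=(5000, 8))   # 8-node element connectivity
disp = np.random.rand(1000, 3)                      # nodal displacements
print(element_energies(coords, conn, disp).sum())
```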

    Exact Algorithm for Sampling the 2D Ising Spin Glass

    A sampling algorithm is presented that generates spin glass configurations of the 2D Edwards-Anderson Ising spin glass at finite temperature, with probabilities proportional to their Boltzmann weights. Such an algorithm overcomes the slow dynamics of direct simulation and can be used to study long-range correlation functions and coarse-grained dynamics. The algorithm uses a correspondence between spin configurations on a regular lattice and dimer (edge) coverings of a related graph: Wilson's algorithm [D. B. Wilson, Proc. 8th Symp. Discrete Algorithms, 258 (1997)] for sampling dimer coverings on a planar lattice is adapted to generate samplings for the dimer problem corresponding to both planar and toroidal spin glass samples. This algorithm is recursive: it computes probabilities for spins along a "separator" that divides the sample in half. Given the spins on the separator, sample configurations for the two separated halves are generated by further division and assignment. The algorithm is simplified by using Pfaffian elimination, rather than Gaussian elimination, for sampling dimer configurations. For n spins and given floating point precision, the algorithm has an asymptotic run-time of O(n^{3/2}); it is found that the required precision scales as inverse temperature and grows only slowly with system size. Sample applications and benchmarking results are presented for samples of size up to n = 128^2, with fixed and periodic boundary conditions. Comment: 18 pages, 10 figures, 1 table; minor clarification.
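    For intuition about what "sampling with probabilities proportional to Boltzmann weights" means, the sketch below draws exact samples of a tiny Edwards-Anderson spin glass by brute-force enumeration. This is a reference baseline only, not the paper's O(n^{3/2}) Pfaffian/dimer algorithm, and the lattice size, inverse temperature, and Gaussian couplings are arbitrary illustrative choices.

```python
import itertools
import numpy as np

# Brute-force exact Boltzmann sampler for a tiny 2D Edwards-Anderson spin glass.
# Enumerates all 2^n configurations, so it only works for very small lattices.
L = 3                      # 3x3 lattice, n = 9 spins
beta = 1.0                 # inverse temperature
rng = np.random.default_rng(1)
Jh = rng.standard_normal((L, L - 1))   # horizontal bonds (open boundaries)
Jv = rng.standard_normal((L - 1, L))   # vertical bonds

def energy(s):
    s = s.reshape(L, L)
    e = -np.sum(Jh * s[:, :-1] * s[:, 1:])   # horizontal bond energies
    e -= np.sum(Jv * s[:-1, :] * s[1:, :])   # vertical bond energies
    return e

configs = np.array(list(itertools.product([-1, 1], repeat=L * L)))
weights = np.exp(-beta * np.array([energy(c) for c in configs]))
probs = weights / weights.sum()            # exact Boltzmann distribution

sample = configs[rng.choice(len(configs), p=probs)].reshape(L, L)
print(sample)
```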

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
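    To make the tensor train (TT) format concrete, the sketch below implements a plain TT-SVD in NumPy: a d-way tensor is split into a chain of 3-way cores by successive truncated SVDs. The fixed rank cap and the synthetic low-rank test tensor are illustrative assumptions; error-controlled truncation and the HT format discussed in the monograph are not covered here.

```python
import numpy as np

# Minimal TT-SVD sketch: decompose a d-way tensor into a "train" of 3-way cores
# G[k] of shape (r_{k-1}, n_k, r_k) via successive truncated SVDs.
def tt_svd(tensor, max_rank):
    dims = tensor.shape
    d = len(dims)
    cores = []
    unfolding = tensor.reshape(dims[0], -1)
    r_prev = 1
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        unfolding = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    # contract the cores back into a full tensor
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze()

# low-rank test tensor: outer product of three vectors plus small noise
a, b, c = np.random.rand(4), np.random.rand(5), np.random.rand(6)
T = np.einsum("i,j,k->ijk", a, b, c) + 1e-8 * np.random.rand(4, 5, 6)
cores = tt_svd(T, max_rank=2)
print("relative error:", np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T))
```

    Each core stores only r_{k-1} * n_k * r_k numbers, so for small TT ranks the storage grows linearly rather than exponentially with the tensor order, which is the "curse of dimensionality" relief described above.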
