    Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank nonnegative matrices, yielding a parts-based, linear representation of nonnegative data. Recently, Graph-regularized NMF (GrNMF) was proposed to find a compact representation that uncovers the hidden semantics while simultaneously respecting the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea that engages a Multiple Kernel Learning approach to refine the graph structure so that it reflects both the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, and SVD.
    Comment: This paper has been withdrawn by the author due to the terrible writing.
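The factorization described above can be sketched with the classic multiplicative-update rules for plain NMF (a minimal illustration of the base method the abstract builds on, not the paper's graph-regularized or kernel-learned variant; data shapes, rank, and iteration count are illustrative assumptions):

```python
import numpy as np

# Minimal NMF sketch via Lee-Seung multiplicative updates: X ~= W @ H
# with W, H elementwise nonnegative. All sizes are illustrative.
rng = np.random.default_rng(0)
X = rng.random((20, 15))          # nonnegative data matrix (synthetic)
r = 4                             # target rank of the factorization
W = rng.random((20, r)) + 1e-3    # small offset avoids exact zeros
H = rng.random((r, 15)) + 1e-3

eps = 1e-10                       # guards against division by zero
for _ in range(200):
    # updates preserve nonnegativity and decrease ||X - W H||_F
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

# relative Frobenius reconstruction error
err = np.linalg.norm(X - W @ H, "fro") / np.linalg.norm(X, "fro")
```

GrNMF adds a graph-Laplacian penalty on H to this objective; the paper's contribution is learning the affinity graph itself via multiple kernels rather than fixing it from the raw data.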

    Optimization algorithms for the solution of the frictionless normal contact between rough surfaces

    This paper revisits the fundamental equations for the solution of the frictionless unilateral normal contact problem between a rough rigid surface and a linear elastic half-plane using the boundary element method (BEM). After recasting the resulting Linear Complementarity Problem (LCP) as a convex quadratic program (QP) with nonnegative constraints, different optimization algorithms are compared for its solution: (i) a Greedy method, based on different solvers for the unconstrained linear system (Conjugate Gradient CG, Gauss-Seidel, Cholesky factorization), (ii) a constrained CG algorithm, (iii) the Alternating Direction Method of Multipliers (ADMM), and (iv) the Non-Negative Least Squares (NNLS) algorithm, possibly warm-started by accelerated gradient projection steps or taking advantage of a loading history. The latter method is two orders of magnitude faster than the Greedy CG method and one order of magnitude faster than the constrained CG algorithm. Finally, we propose another type of warm start based on a refined criterion for the identification of the initial trial contact domain that can be used in conjunction with all the previous optimization algorithms. This method, called Cascade Multi-Resolution (CMR), takes advantage of physical considerations regarding the scaling of the contact predictions by changing the surface resolution. The method is very efficient and accurate when applied to real or numerically generated rough surfaces, provided that their power spectral density function is of power-law type, as in the case of self-similar fractal surfaces.
    Comment: 38 pages, 11 figures.
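The LCP-to-QP-to-NNLS route mentioned above can be sketched as follows: the contact QP min ½pᵀAp − gᵀp with p ≥ 0 (SPD influence matrix A, gap vector g) splits via a Cholesky factor A = LLᵀ into an equivalent nonnegative least-squares problem, which SciPy's NNLS solver handles directly. The matrix and right-hand side here are synthetic stand-ins, not BEM data from the paper:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.optimize import nnls

# Sketch: solve  min 0.5 p^T A p - g^T p,  p >= 0  via NNLS.
# With A = L L^T, this equals  min ||L^T p - c||^2  where c = L^{-1} g.
rng = np.random.default_rng(1)
n = 30
M = rng.random((n, n))
A = M @ M.T + n * np.eye(n)      # SPD "influence" matrix (synthetic)
g = rng.standard_normal(n)       # gap / right-hand side (synthetic)

L = cholesky(A, lower=True)                # A = L L^T
c = solve_triangular(L, g, lower=True)     # c = L^{-1} g
p, _ = nnls(L.T, c)                        # min ||L^T p - c||, p >= 0

# KKT / complementarity check for the contact problem:
# residual A p - g is >= 0 everywhere and zero where p > 0.
res = A @ p - g
```

The same nonnegative structure is what makes warm starting (gradient-projection steps, loading history, or the CMR multi-resolution guess) pay off: a good initial active set shortens the NNLS inner iterations.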

    Dimensional hyper-reduction of nonlinear finite element models via empirical cubature

    We present a general framework for the dimensional reduction, in terms of number of degrees of freedom as well as number of integration points ("hyper-reduction"), of nonlinear parameterized finite element (FE) models. The reduction process is divided into two sequential stages. The first stage consists in a common Galerkin projection onto a reduced-order space, as well as in the condensation of boundary conditions and external forces. For the second stage (reduction in number of integration points), we present a novel cubature scheme that efficiently determines optimal points and associated positive weights so that the error in integrating reduced internal forces is minimized. The distinguishing features of the proposed method are: (1) the minimization problem is posed in terms of orthogonal basis vectors (obtained via a partitioned Singular Value Decomposition) rather than in terms of snapshots of the integrand; (2) the volume of the domain is exactly integrated; (3) the selection algorithm need not solve a nonnegative least-squares problem in every iteration to enforce the positiveness of the weights. Furthermore, we show that the proposed method converges to the absolute minimum (zero integration error) when the number of selected points is equal to the number of internal force modes included in the objective function. We illustrate this model reduction methodology with two nonlinear structural examples (quasi-static bending and resonant vibration of elastoplastic composite plates). In both examples, the number of integration points is reduced by three orders of magnitude (with respect to FE analyses) without significantly sacrificing accuracy.
    Peer reviewed. Postprint (published version).
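The core of the empirical cubature stage can be illustrated in miniature: given a small set of integrand modes evaluated at many candidate points, find sparse nonnegative weights that reproduce the exact integrals of those modes. This sketch uses simple polynomial modes on [0, 1] as an illustrative stand-in for the paper's SVD-based internal-force modes, and a plain NNLS solve in place of the paper's greedy selection algorithm (NNLS yields at most m nonzero weights for m modes, mirroring the zero-error limit stated in the abstract):

```python
import numpy as np
from scipy.optimize import nnls

# Empirical-cubature toy: pick nonnegative weights w at a subset of
# candidate points so that m "modes" are integrated exactly.
M, m = 200, 5                            # candidate points, modes
x = np.linspace(0.0, 1.0, M)
full_w = np.full(M, 1.0 / M)             # reference quadrature weights

# Rows of J: modes 1, x, ..., x^{m-1} evaluated at all candidate points
# (polynomials here stand in for internal-force modes from an SVD).
J = np.vander(x, m, increasing=True).T
b = J @ full_w                           # exact integrals of each mode

w, resid = nnls(J, b)                    # sparse nonnegative weights
selected = np.flatnonzero(w > 0)         # the retained integration points
```

Because the first mode is the constant 1, an exact fit also integrates the domain volume exactly, which is feature (2) in the abstract.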