
    Bi-Objective Nonnegative Matrix Factorization: Linear Versus Kernel-Based Models

    Nonnegative matrix factorization (NMF) is a powerful class of feature extraction techniques that has been successfully applied in many fields, notably signal and image processing. Current NMF techniques have been limited to a single-objective problem, in either its linear or its nonlinear kernel-based formulation. In this paper, we propose to revisit NMF as a multi-objective problem, in particular a bi-objective one, where objective functions defined in both the input and the feature space are taken into account. By taking advantage of the weighted-sum method from the multi-objective optimization literature, the proposed bi-objective NMF determines a set of nondominated (Pareto-optimal) solutions instead of a single optimal decomposition. Moreover, the corresponding Pareto front is studied and approximated. Experimental results on unmixing real hyperspectral images confirm the efficiency of the proposed bi-objective NMF compared with state-of-the-art methods.
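    To illustrate the weighted-sum idea at the heart of this abstract, the sketch below scalarizes a linear (input-space) NMF objective and a toy kernel-style (feature-space) objective, then sweeps the weight to collect candidate Pareto points. The objective definitions, step size, and iteration budget are assumptions for illustration, not the paper's algorithm.

```python
# Minimal sketch of weighted-sum scalarization for a bi-objective NMF.
# The feature-space objective here is a toy surrogate, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((20, 30)))   # nonnegative data matrix
r, gamma, lr, iters = 4, 0.1, 1e-3, 2000    # rank, kernel width, step size

def objectives(W, H):
    D = X - W @ H
    f_lin = np.sum(D**2)                          # input-space (linear) fit
    f_ker = np.sum(1.0 - np.exp(-gamma * D**2))   # toy feature-space fit
    return f_lin, f_ker, D

solutions = []
for alpha in np.linspace(0.0, 1.0, 11):           # weighted-sum sweep
    W = np.abs(rng.standard_normal((20, r)))
    H = np.abs(rng.standard_normal((r, 30)))
    for _ in range(iters):
        _, _, D = objectives(W, H)
        # gradient of alpha*f_lin + (1-alpha)*f_ker with respect to D
        G = alpha * 2 * D + (1 - alpha) * 2 * gamma * D * np.exp(-gamma * D**2)
        W = np.maximum(W + lr * G @ H.T, 0.0)     # projected gradient step
        H = np.maximum(H + lr * W.T @ G, 0.0)
    f_lin, f_ker, _ = objectives(W, H)
    solutions.append((alpha, f_lin, f_ker))       # candidate Pareto point

for alpha, f1, f2 in solutions:
    print(f"alpha={alpha:.1f}  linear={f1:.2f}  kernel={f2:.2f}")
```

    Each weight yields one scalarized problem, and the resulting set of (linear, kernel) objective pairs approximates the Pareto front the abstract describes.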

    A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of this kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the last decade, kernel methods have played a dominant role in pairwise learning. They still deliver state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify existing kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form, efficient instantiations of Kronecker kernel ridge regression. We show that independent-task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights for assessing the advantages and limitations of existing pairwise learning methods.
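    The closed-form instantiation this abstract refers to rests on a well-known algebraic shortcut: with a Kronecker product pairwise kernel, the ridge system (K ⊗ G + λI) vec(A) = vec(Y) can be solved through two small eigendecompositions, without ever forming the Kronecker product. The sketch below is a minimal rendition of that trick; the function name and toy data are illustrative.

```python
# Kronecker kernel ridge regression via the eigendecomposition shortcut.
import numpy as np

def kronecker_krr(K, G, Y, lam):
    """Solve (K kron G + lam*I) vec(A) = vec(Y) without forming K kron G.

    K : (n, n) kernel matrix over row objects (symmetric PSD)
    G : (m, m) kernel matrix over column objects (symmetric PSD)
    Y : (n, m) label matrix; lam : ridge parameter
    """
    lk, Uk = np.linalg.eigh(K)        # K = Uk diag(lk) Uk^T
    lg, Ug = np.linalg.eigh(G)        # G = Ug diag(lg) Ug^T
    # Eigenvalues of K kron G are all pairwise products lk[i] * lg[j].
    denom = np.outer(lk, lg) + lam
    return Uk @ ((Uk.T @ Y @ Ug) / denom) @ Ug.T   # dual coefficients

# Toy check against the naive dense solve on the full Kronecker system.
rng = np.random.default_rng(0)
n, m, lam = 5, 4, 0.1
Xr, Xc = rng.standard_normal((n, 3)), rng.standard_normal((m, 3))
K, G = Xr @ Xr.T, Xc @ Xc.T
Y = rng.standard_normal((n, m))
A = kronecker_krr(K, G, Y, lam)
naive = np.linalg.solve(np.kron(K, G) + lam * np.eye(n * m), Y.ravel())
assert np.allclose(A.ravel(), naive)
```

    The naive solve costs O(n³m³), while the eigendecomposition route needs only the two small factorizations plus matrix products. One way to see two-step kernel ridge regression as a special case is that it replaces the shared denominator λ + lk·lg with the product (lk + λ₁)(lg + λ₂), i.e., a different spectral filter on the same eigenbasis.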

    Fredholm factorization of Wiener-Hopf scalar and matrix kernels

    A general theory for factorizing the Wiener-Hopf (W-H) kernel using Fredholm integral equations (FIE) of the second kind is presented. This technique, hereafter called Fredholm factorization, factorizes the W-H kernel using simple numerical quadrature. W-H kernels can be of scalar form or of matrix form with arbitrary dimensions. The kernel spectrum can be continuous (with branch points), discrete (with poles), or mixed (with both branch points and poles). To validate the proposed method, rational matrix kernels in particular are studied, since they admit exact closed-form factorizations. In the appendix, a new analytical method to factorize rational matrix kernels is also described. The Fredholm factorization is discussed in detail, and several numerical tests are supplied. Physical aspects are also illustrated in the framework of scattering problems, in particular diffraction problems. Mathematical proofs are reported in the paper.
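    The numerical engine named in the abstract, solving a Fredholm integral equation of the second kind by simple quadrature, can be sketched generically with the Nyström method. The kernel and right-hand side below are toy choices with a known solution, not the paper's W-H kernels; the function name is illustrative.

```python
# Nystrom-method sketch for a Fredholm equation of the second kind:
#     phi(x) - int_a^b K(x, t) phi(t) dt = f(x)
import numpy as np

def solve_fredholm2(K, f, a, b, n=40):
    """Nystrom discretization with Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)          # map nodes to [a, b]
    w = 0.5 * (b - a) * w                          # rescale weights
    A = np.eye(n) - K(t[:, None], t[None, :]) * w  # (I - K W) phi = f
    return t, np.linalg.solve(A, f(t))

# Toy problem with a known solution: K(x, t) = x*t on [0, 1] and f chosen
# so that phi(x) = x solves it:  x - int_0^1 x*t*t dt = x - x/3 = (2/3) x.
t, phi = solve_fredholm2(lambda x, s: x * s, lambda x: (2 / 3) * x, 0.0, 1.0)
assert np.allclose(phi, t)
```

    Replacing the toy kernel with the FIE representation of a W-H kernel is where the actual factorization work of the paper lies; the quadrature machinery itself stays this simple.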