
    A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization

    We present a general approach to collaborative filtering (CF) that uses spectral regularization to learn linear operators mapping "users" to the "objects" they rate. Recent low-rank matrix completion approaches to CF are shown to be special cases. Unlike existing regularization-based CF methods, however, our approach can also incorporate additional information, such as attributes of the users or the objects. We then provide novel representer theorems that we use to develop new estimation methods. We provide learning algorithms based on low-rank decompositions and test them on a standard CF dataset. The experiments indicate the advantages of generalizing existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can also be seen as special cases of our proposed approach.
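
    As a concrete illustration of the spectral regularization idea, the sketch below implements nuclear-norm-regularized matrix completion by iterative singular-value soft-thresholding, one of the low-rank special cases the abstract mentions. This is a minimal Python sketch; the function name, parameters, and fixed iteration count are illustrative assumptions, not the paper's estimation method.

        import numpy as np

        def soft_impute(X, mask, lam, n_iters=200):
            """Nuclear-norm matrix completion via singular-value soft-thresholding.

            X: user-by-object rating matrix (arbitrary values where unobserved).
            mask: boolean array, True where a rating was actually observed.
            lam: spectral regularization strength (threshold on singular values).
            """
            M = np.zeros_like(X, dtype=float)
            for _ in range(n_iters):
                # Impute missing entries with the current low-rank estimate.
                filled = np.where(mask, X, M)
                U, s, Vt = np.linalg.svd(filled, full_matrices=False)
                # Proximal step of the nuclear norm: shrink the spectrum.
                s = np.maximum(s - lam, 0.0)
                M = (U * s) @ Vt
            return M

    A larger lam drives more singular values to zero and hence a lower-rank estimate; the paper's generalization would further let the learned operator act on user/object attribute features rather than on raw indices alone.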

    The representer theorem for Hilbert spaces: a necessary and sufficient condition

    A family of regularization functionals is said to admit a linear representer theorem if every member of the family admits minimizers that lie in a fixed finite-dimensional subspace. A recent characterization states that a general class of regularization functionals with differentiable regularizer admits a linear representer theorem if and only if the regularization term is a non-decreasing function of the norm. In this report, we improve upon that result by replacing the differentiability assumption with lower semicontinuity and deriving a proof that is independent of the dimensionality of the space.
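
    For orientation, the result can be stated in standard Hilbert-space notation, which is assumed here and may differ in detail from the report's setting. Consider the family of functionals

        J(f) = c\bigl(L_1 f, \dots, L_m f\bigr) + \Omega\bigl(\|f\|_{\mathcal{H}}\bigr), \qquad f \in \mathcal{H},

    where the L_i are continuous linear functionals with Riesz representers w_i. A linear representer theorem asserts the existence of a minimizer of the form

        f^\star = \sum_{i=1}^{m} \alpha_i\, w_i,

    and, per the characterization discussed above, this holds for the whole family if and only if \Omega : [0, \infty) \to \mathbb{R} is non-decreasing.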

    A theoretical framework for supervised learning from regions

    Supervised learning is investigated in the setting where the data are represented not only by labeled points but also by labeled regions of the input space. In the limit case, such regions degenerate to single points and the proposed approach reduces to the classical learning context. The adopted framework entails the minimization of a functional obtained by introducing a loss function that involves such regions. An additive regularization term is expressed via differential operators that model the smoothness properties of the desired input/output relationship. Representer theorems are given, proving that the optimization problem associated with learning from labeled regions has a unique solution, which takes the form of a linear combination of kernel functions determined by the differential operators together with the regions themselves. As a relevant situation, the case of regions given by multi-dimensional intervals (i.e., "boxes") is investigated, which models prior knowledge expressed by logical propositions.
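
    A hedged sketch of the solution form such representer theorems typically produce, in assumed notation (regions \mathcal{R}_i, differential operator P modeling smoothness, g the Green's function of P^{*}P, loss V):

        \min_{f} \; \sum_{i=1}^{m} V\bigl(y_i, f|_{\mathcal{R}_i}\bigr) + \|P f\|_{L_2}^2,
        \qquad
        f^\star(x) = \sum_{i=1}^{m} \alpha_i \int_{\mathcal{R}_i} g(x, t)\, dt.

    When a region \mathcal{R}_i shrinks to a single point x_i, the integral collapses to g(x, x_i) and the classical kernel expansion is recovered, consistent with the limit case described above.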

    Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems

    This paper deals with the resolution of inverse problems in a periodic setting or, in other terms, the reconstruction of periodic continuous-domain signals from their noisy measurements. We focus on two reconstruction paradigms: variational and statistical. In the variational approach, the reconstructed signal is the solution to an optimization problem that establishes a tradeoff between fidelity to the data and smoothness conditions via a quadratic regularization associated with a linear operator. In the statistical approach, the signal is modeled as a stationary random process defined from a Gaussian white noise and a whitening operator; one then looks for the optimal estimator in the mean-square sense. We give a generic form of the reconstructed signals for both approaches, allowing for a rigorous comparison of the two. We fully characterize the conditions under which the two formulations yield the same solution, which is a periodic spline in the case of sampling measurements. We also show that this equivalence between the two approaches remains valid on simulations for a broad class of problems. This extends the practical range of applicability of the variational method.
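
    In assumed notation (measurement functionals \nu_i, regularization/whitening operator \mathrm{L}, tradeoff parameter \lambda), the two paradigms can be summarized as

        f_V = \arg\min_{f} \sum_{i=1}^{M} \bigl(y_i - \nu_i(f)\bigr)^2 + \lambda \|\mathrm{L} f\|_{L_2}^2 \quad \text{(variational)},

        f_S = \mathbb{E}\bigl[s \mid y_1, \dots, y_M\bigr], \quad \text{with } \mathrm{L} s \text{ a Gaussian white noise} \quad \text{(statistical)},

    and the paper characterizes exactly when f_V = f_S; for sampling measurements the common solution is a periodic spline.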

    Representer Theorems in Banach Spaces: Minimum Norm Interpolation, Regularized Learning and Semi-Discrete Inverse Problems

    Learning a function from a finite number of sampled data points (measurements) is a fundamental problem in science and engineering. This is often formulated as a minimum norm interpolation (MNI) problem, a regularized learning problem or, in general, a semi-discrete inverse problem (SDIP), in either Hilbert spaces or Banach spaces. The goal of this paper is to systematically study solutions of these problems in Banach spaces. We aim at obtaining explicit representer theorems for their solutions, on which convenient solution methods can then be developed. For the MNI problem, the explicit representer theorems enable us to express the infimum in terms of the norm of a linear combination of the interpolation functionals. For the purpose of developing efficient computational algorithms, we establish the fixed-point equation formulation of solutions of these problems. We reveal that, unlike in a Hilbert space, solutions of these problems in a Banach space may in general not reduce to truly finite-dimensional problems (certain infinite-dimensional components remain hidden). We demonstrate how this obstacle can be removed, reducing the original problem to a truly finite-dimensional one, in the special case when the Banach space is ℓ1(ℕ).
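
    For concreteness, the MNI problem and the duality the abstract alludes to can be written, in assumed notation (Banach space \mathcal{B} with dual \mathcal{B}^{*}, interpolation functionals \nu_i \in \mathcal{B}^{*}), as

        \inf\bigl\{\|f\|_{\mathcal{B}} : \nu_i(f) = y_i,\; i = 1, \dots, m\bigr\}
        = \sup\Bigl\{\sum_{i=1}^{m} c_i y_i : \bigl\|\sum_{i=1}^{m} c_i \nu_i\bigr\|_{\mathcal{B}^{*}} \le 1\Bigr\},

    so the infimum is expressed through the dual norm of a linear combination of the interpolation functionals, as stated above.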