
    Kernel methods in machine learning

    We review machine learning methods employing positive definite kernels. These methods formulate learning and estimation problems in a reproducing kernel Hilbert space (RKHS) of functions defined on the data domain, expanded in terms of a kernel. Working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions. The latter include nonlinear functions as well as functions defined on nonvectorial data. We cover a wide range of methods, ranging from binary classifiers to sophisticated methods for estimation with structured data.

    Comment: Published at http://dx.doi.org/10.1214/009053607000000677 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
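
    A minimal sketch of the RKHS machinery this abstract describes: a learned function takes the form of a kernel expansion f(x) = sum_i alpha_i k(x_i, x). Kernel ridge regression with a Gaussian kernel is used here purely as an illustration; the kernel choice, the regularization strength `lam`, and the toy data are assumptions, not details from the paper.

    # Illustrative sketch only: kernel ridge regression as one instance of
    # learning in an RKHS. The RBF kernel and all parameters are assumptions.
    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        """Positive definite Gaussian (RBF) kernel matrix K[i, j] = k(X[i], Z[j])."""
        sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-2):
        """Solve (K + lam*I) alpha = y; the minimizer lives in the span of k(x_i, .)."""
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(X_train, alpha, X_new, gamma=1.0):
        """Evaluate f(x) = sum_i alpha_i k(x_i, x) at the new points."""
        return rbf_kernel(X_new, X_train, gamma) @ alpha

    # Toy usage: fit a nonlinear function of one variable.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
    alpha = fit_kernel_ridge(X, y)
    print(predict(X, alpha, np.array([[0.5]])))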

    Making Indefinite Kernel Learning Practical

    In this paper we embed evolutionary computation into statistical learning theory. First, we outline the connection between large margin optimization and statistical learning and see why this paradigm is successful for many pattern recognition problems. We then embed evolutionary computation into the most prominent representative of this class of learning methods, namely Support Vector Machines (SVM). In contrast to former applications of evolutionary algorithms to SVM, we do not only optimize the method or kernel parameters; rather, we use evolution strategies to directly solve the posed constrained optimization problem. Transforming the problem into the Wolfe dual reduces the total runtime and allows the usage of kernel functions just as for traditional SVM. We show that evolutionary SVMs are at least as accurate as their quadratic programming counterparts on eight real-world benchmark data sets in terms of generalization performance. They always outperform traditional approaches in terms of the original optimization problem. Additionally, the proposed algorithm is more generic than existing traditional solutions since it also works for non-positive semidefinite or indefinite kernel functions. The evolutionary SVM variants frequently outperform their quadratic programming competitors in cases where such an indefinite kernel function is used.
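
    A minimal sketch of the idea: maximize the Wolfe dual of the SVM with an evolution strategy, so no positive semidefiniteness of the kernel is required. The (1+lambda) ES below, the penalty weight, and the tanh (sigmoid) kernel, which is indefinite for many parameter choices, are illustrative assumptions, not the authors' exact algorithm.

    # Sketch, not the paper's method: a (1+lambda) evolution strategy on the
    # SVM Wolfe dual with an indefinite tanh kernel. All parameters assumed.
    import numpy as np

    def tanh_kernel(X, a=1.0, b=-1.0):
        """Sigmoid kernel K_ij = tanh(a <x_i, x_j> + b); generally indefinite."""
        return np.tanh(a * (X @ X.T) + b)

    def dual_fitness(alpha, K, y, C, penalty=10.0):
        """Wolfe dual objective with a penalty for the equality constraint."""
        a = np.clip(alpha, 0.0, C)                    # enforce box constraints
        obj = a.sum() - 0.5 * (a * y) @ K @ (a * y)   # sum_i a_i - 1/2 a'YKYa
        return obj - penalty * abs(a @ y)             # penalize sum_i a_i y_i != 0

    def es_svm(X, y, C=1.0, lam=20, sigma=0.1, iters=500, seed=0):
        """(1+lambda) evolution strategy over the dual variables alpha."""
        rng = np.random.default_rng(seed)
        K = tanh_kernel(X)
        alpha = rng.uniform(0, C, size=len(y))
        best = dual_fitness(alpha, K, y, C)
        for _ in range(iters):
            offspring = alpha + sigma * rng.normal(size=(lam, len(y)))
            fits = [dual_fitness(o, K, y, C) for o in offspring]
            i = int(np.argmax(fits))
            if fits[i] > best:                        # plus-selection: keep the best
                alpha, best = offspring[i], fits[i]
        return np.clip(alpha, 0.0, C), best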

    Exploiting sparsity to build efficient kernel based collaborative filtering for top-N item recommendation

    The increasing availability of implicit feedback datasets has raised interest in developing effective collaborative filtering techniques able to deal asymmetrically with unambiguous positive feedback and ambiguous negative feedback. In this paper, we propose a principled kernel-based collaborative filtering method for top-N item recommendation with implicit feedback. We present an efficient implementation using the linear kernel, and we show how to generalize it to kernels of the dot-product family while preserving this efficiency. We also investigate the elements which influence the sparsity of a standard cosine kernel. This analysis shows that the sparsity of the kernel strongly depends on the properties of the dataset, in particular on its long-tail distribution. We compare our method with state-of-the-art algorithms, achieving good results in terms of both efficiency and effectiveness.
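
    A minimal sketch of one ingredient discussed above: an item-item cosine kernel computed on a sparse binary implicit-feedback matrix, used to score top-N recommendations. Staying in scipy.sparse is what keeps dot-product-family kernels efficient; the toy matrix and the scoring rule are illustrative assumptions, not the paper's exact method.

    # Sketch: sparse item-item cosine kernel for top-N scoring. Assumed setup.
    import numpy as np
    import scipy.sparse as sp

    def cosine_item_kernel(R):
        """K = D^{-1} (R^T R) D^{-1}: cosine similarity between item columns.

        R is a sparse (users x items) 0/1 matrix; K inherits the sparsity of
        the co-occurrence counts, which is why the long-tail shape matters.
        """
        G = (R.T @ R).tocsr()                    # item-item co-occurrence counts
        norms = np.sqrt(G.diagonal())
        norms[norms == 0] = 1.0                  # guard items nobody interacted with
        Dinv = sp.diags(1.0 / norms)
        return Dinv @ G @ Dinv

    def top_n(R, K, user, n=5):
        """Score items for one user as R[user] @ K, mask seen items, take top n."""
        scores = (R[user] @ K).toarray().ravel()
        scores[R[user].toarray().ravel() > 0] = -np.inf   # hide already-seen items
        return np.argsort(-scores)[:n]

    # Toy usage on a 4-user x 6-item implicit feedback matrix.
    R = sp.csr_matrix(np.array([[1, 1, 0, 0, 1, 0],
                                [1, 0, 1, 0, 0, 0],
                                [0, 1, 0, 1, 1, 0],
                                [0, 0, 1, 1, 0, 1]]))
    print(top_n(R, cosine_item_kernel(R), user=0))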

    Density-operator approaches to transport through interacting quantum dots: Simplifications in fourth-order perturbation theory

    Various theoretical methods address transport effects in quantum dots beyond single-electron tunneling while accounting for the strong interactions in such systems. In this paper we report a detailed comparison between three prominent approaches to quantum transport: the fourth-order Bloch-Redfield quantum master equation (BR), the real-time diagrammatic technique (RT), and the scattering rate approach based on the T-matrix (TM). Central to the BR and RT is the generalized master equation for the reduced density matrix. We demonstrate the exact equivalence of these two techniques. By accounting for coherences (nondiagonal elements of the density matrix) between nonsecular states, we show how contributions to the transport kernels can be grouped in a physically meaningful way. This not only significantly reduces the numerical cost of evaluating the kernels but also yields expressions similar to those obtained in the TM approach, allowing for a detailed comparison. However, in the TM approach an ad hoc regularization procedure is required to cure spurious divergences in the expressions for the transition rates in the stationary (zero-frequency) limit. We show that these problems derive from incomplete cancellation of reducible contributions and do not occur in the BR and RT techniques, resulting in well-behaved expressions in the latter two cases. Additionally, we show that a standard regularization procedure of the TM rates employed in the literature does not correctly reproduce the BR and RT expressions. All the results apply to general quantum dot models, and we present explicit rules for the simplified calculation of the zero-frequency kernels. Although we focus on fourth-order perturbation theory only, the results and implications generalize to higher orders. We illustrate our findings for the single impurity Anderson model with finite Coulomb interaction in a magnetic field.

    Comment: 29 pages, 12 figures; revised published version.
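
    A minimal sketch of the generalized-master-equation structure the paper builds on, reduced to the second-order (sequential tunneling) limit of the single-level Anderson model in a magnetic field; this is NOT the fourth-order BR/RT kernel calculation of the paper. The states are |0>, |up>, |dn>, |2>, and all parameter values below are illustrative assumptions.

    # Sketch only: stationary occupations and current from a second-order
    # (sequential tunneling) rate equation, not the paper's 4th-order kernels.
    import numpy as np

    def fermi(E, mu, T):
        return 1.0 / (np.exp((E - mu) / T) + 1.0)

    def stationary_current(eps=0.2, U=1.0, B=0.1, Gamma=0.01,
                           muL=0.5, muR=-0.5, T=0.1):
        E = {"up": eps - B / 2, "dn": eps + B / 2}
        # Addition energies for the transitions 0 -> sigma and sigma -> 2.
        add = {("0", s): E[s] for s in ("up", "dn")}
        add[("up", "2")] = E["dn"] + U
        add[("dn", "2")] = E["up"] + U
        idx = {"0": 0, "up": 1, "dn": 2, "2": 3}
        W = np.zeros((4, 4))
        I_ops = np.zeros((4, 4))                       # lead-L current weights
        for (a, b), dE in add.items():                 # b has one more electron
            for mu, is_left in ((muL, True), (muR, False)):
                f = fermi(dE, mu, T)
                W[idx[b], idx[a]] += Gamma * f         # a -> b (electron in)
                W[idx[a], idx[b]] += Gamma * (1 - f)   # b -> a (electron out)
                if is_left:                            # count only lead-L flow
                    I_ops[idx[b], idx[a]] += Gamma * f
                    I_ops[idx[a], idx[b]] -= Gamma * (1 - f)
        W -= np.diag(W.sum(axis=0))                    # columns sum to zero
        # Replace one equation by the normalization sum_n p_n = 1.
        A = W.copy()
        A[0, :] = 1.0
        p = np.linalg.solve(A, np.array([1.0, 0.0, 0.0, 0.0]))
        return (I_ops @ p).sum()                       # particle current, lead L

    print(stationary_current())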

    Support vector machine for functional data classification

    In many applications, input data are sampled functions taking their values in infinite-dimensional spaces rather than standard vectors. This fact has consequences for data analysis algorithms that motivate modifications to them. Indeed, most of the traditional data analysis tools for regression, classification and clustering have been adapted to functional inputs under the general name of Functional Data Analysis (FDA). In this paper, we investigate the use of Support Vector Machines (SVMs) for functional data analysis and focus on the problem of curve discrimination. SVMs are large margin classifiers based on implicit nonlinear mappings of the considered data into high dimensional spaces thanks to kernels. We show how to define simple kernels that take into account the functional nature of the data and lead to consistent classification. Experiments conducted on real-world data emphasize the benefit of taking into account some functional aspects of the problems.

    Comment: 13 pages.
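
    A minimal sketch of the kind of functional kernel discussed above: curves sampled on a common grid are compared through an approximate L2 distance (trapezoidal rule), plugged into a Gaussian kernel, and handed to a standard SVM as a precomputed Gram matrix. The grid, gamma, and the toy curves are illustrative assumptions, not the paper's exact construction.

    # Sketch: an L2-based Gaussian kernel on sampled curves. Assumed setup.
    import numpy as np
    from sklearn.svm import SVC

    def l2_gram(curves, grid, gamma=1.0):
        """K_ij = exp(-gamma * ||f_i - f_j||_{L2}^2), trapezoid-approximated."""
        n = len(curves)
        K = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                d2 = np.trapz((curves[i] - curves[j]) ** 2, grid)
                K[i, j] = np.exp(-gamma * d2)
        return K

    # Toy usage: discriminate noisy sine curves from noisy cosine curves.
    rng = np.random.default_rng(0)
    grid = np.linspace(0, 2 * np.pi, 100)
    sines = [np.sin(grid) + 0.3 * rng.normal(size=grid.size) for _ in range(20)]
    coses = [np.cos(grid) + 0.3 * rng.normal(size=grid.size) for _ in range(20)]
    curves = np.array(sines + coses)
    y = np.array([0] * 20 + [1] * 20)

    K = l2_gram(curves, grid)
    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))   # training accuracy on the toy curves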