
    A Unifying View of Multiple Kernel Learning

    Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying general optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically, using a Rademacher complexity bound on the generalization error, and empirically in a set of experiments.
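
    The abstract does not state the unifying criterion itself, but a common concrete instance of "combining kernels in regularized risk minimization" is a convex combination of base kernels, k_theta = sum_m theta_m * k_m with theta on the simplex, learned jointly with the predictor. The sketch below shows this pattern for kernel ridge regression; the RBF bandwidths and the alternating closed-form weight update are illustrative choices for exposition, not the paper's algorithm.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gram matrix for an RBF kernel with bandwidth parameter gamma.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def mkl_ridge(X, y, gammas, lam=1.0, n_iter=20):
    """Alternate between kernel ridge regression for fixed simplex
    weights theta and a closed-form update of theta based on each
    base kernel's contribution to the fitted function."""
    kernels = [rbf_kernel(X, g) for g in gammas]
    theta = np.full(len(kernels), 1.0 / len(kernels))
    n = len(y)
    for _ in range(n_iter):
        K = sum(t * Km for t, Km in zip(theta, kernels))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)  # ridge dual solution
        # theta_m^2 * alpha' K_m alpha is kernel m's share of the RKHS norm.
        contrib = np.array([t**2 * alpha @ Km @ alpha
                            for t, Km in zip(theta, kernels)])
        theta = np.sqrt(contrib)
        theta /= theta.sum()  # project back onto the simplex
    return theta, alpha

X = np.random.randn(50, 3)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(50)
theta, alpha = mkl_ridge(X, y, gammas=[0.1, 1.0, 10.0])
print("learned kernel weights:", theta)
```

    The weight update rescales each theta_m by that kernel's contribution to the RKHS norm of the current solution, a standard step in L1-norm MKL solvers; other regularization strategies mentioned in the abstract would change this update.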

    Semi-supervised Online Multiple Kernel Learning Algorithm for Big Data

    To improve the performance of machine learning on big data, online multiple kernel learning algorithms are proposed in this paper. First, a supervised online multiple kernel learning algorithm for big data (SOMK_bd) is proposed to reduce the computational workload of kernel modification. In SOMK_bd, the traditional kernel learning algorithm is improved so that kernel integration is carried out only within a constructed kernel subset. Next, an unsupervised online multiple kernel learning algorithm for big data (UOMK_bd) is proposed. In UOMK_bd, the traditional kernel learning algorithm is adapted to the online setting, and a data replacement strategy is used to modify the kernel function in an unsupervised manner. Then, a semi-supervised online multiple kernel learning algorithm for big data (SSOMK_bd) is proposed. Based on incremental learning, SSOMK_bd makes full use of the abundant information in large-scale, partially labeled data, and uses SOMK_bd and UOMK_bd to update the model on the data currently being read. Finally, experiments conducted on UCI data sets show that the proposed algorithms are effective.
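
    The abstract does not detail how the kernel subset is constructed or how the replacement strategy works, so the following is only a generic supervised online MKL step in the spirit of Hedge-weighted kernel perceptrons; the budget size, discount factor, and FIFO replacement rule are illustrative assumptions standing in for the mechanisms described above.

```python
import numpy as np

def gaussian(x, z, gamma):
    return np.exp(-gamma * np.sum((x - z) ** 2))

class OnlineMKL:
    """Minimal supervised online MKL sketch (Hedge + kernel perceptrons).
    Each base kernel keeps a budgeted support set; a kernel's weight is
    discounted multiplicatively whenever its predictor errs."""
    def __init__(self, gammas, beta=0.8, budget=100):
        self.gammas = gammas
        self.beta = beta              # weight discount on a mistake
        self.budget = budget          # max support vectors per kernel
        self.w = np.ones(len(gammas))
        self.support = [[] for _ in gammas]   # (point, label) pairs

    def _score(self, m, x):
        return sum(a * gaussian(x, z, self.gammas[m])
                   for z, a in self.support[m])

    def predict(self, x):
        scores = np.array([self._score(m, x) for m in range(len(self.gammas))])
        p = self.w / self.w.sum()
        return np.sign(p @ np.sign(scores))  # weighted vote of base predictors

    def update(self, x, y):
        for m in range(len(self.gammas)):
            if y * self._score(m, x) <= 0:    # kernel m's predictor erred
                self.w[m] *= self.beta        # Hedge-style discount
                self.support[m].append((x, y))
                if len(self.support[m]) > self.budget:
                    self.support[m].pop(0)    # simple FIFO replacement

clf = OnlineMKL(gammas=[0.5, 2.0], budget=50)
for x, y in [(np.array([1.0, 0.0]), 1), (np.array([-1.0, 0.0]), -1)]:
    clf.update(x, y)
print(clf.predict(np.array([0.9, 0.1])))
```

    The per-kernel budget is what keeps a single update cheap on large streams; a semi-supervised variant in the spirit of SSOMK_bd would run the supervised update on labeled examples and an unsupervised replacement step on the rest.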

    Neural Generalization of Multiple Kernel Learning

    Multiple kernel learning (MKL) is a conventional way to learn the kernel function in kernel-based methods, and MKL algorithms enhance the performance of kernel methods. However, these methods have lower complexity than deep learning models and are inferior to them in recognition accuracy. Deep learning models can learn complex functions by applying nonlinear transformations to data through several layers. In this paper, we show that a typical MKL algorithm can be interpreted as a one-layer neural network with linear activation functions. Based on this interpretation, we propose a Neural Generalization of Multiple Kernel Learning (NGMKL), which extends the conventional multiple kernel learning framework to a multi-layer neural network with nonlinear activation functions. Our experiments on several benchmarks show that the proposed method increases the model complexity of MKL algorithms and leads to higher recognition accuracy.
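
    Reading the stated interpretation literally: if the hidden layer and nonlinearity in the sketch below are removed, the model reduces to a weighted combination of base-kernel evaluations against reference points, i.e. ordinary MKL; keeping them gives the multi-layer generalization. The anchors, RBF bandwidths, and layer sizes here are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class NeuralMKL(nn.Module):
    """Sketch of MKL as a neural network over kernel features.
    With no hidden layer and a linear activation, the forward pass is
    a weighted sum of k_m(x, anchor_i) terms, i.e. classical MKL; the
    ReLU hidden layer is the nonlinear generalization."""
    def __init__(self, anchors, gammas, hidden=32):
        super().__init__()
        self.anchors = anchors            # (n_anchor, d) reference points
        self.gammas = gammas              # one RBF bandwidth per base kernel
        in_dim = len(gammas) * anchors.shape[0]
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),                    # drop this layer pair -> plain MKL
            nn.Linear(hidden, 1),
        )

    def kernel_features(self, x):
        # Concatenate k_m(x, anchor_i) over all base kernels m and anchors i.
        d2 = torch.cdist(x, self.anchors) ** 2
        feats = [torch.exp(-g * d2) for g in self.gammas]
        return torch.cat(feats, dim=1)

    def forward(self, x):
        return self.net(self.kernel_features(x))

anchors = torch.randn(20, 3)
model = NeuralMKL(anchors, gammas=[0.1, 1.0, 10.0])
out = model(torch.randn(5, 3))            # (5, 1) predictions
```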