
    Deep Embedding Kernel

    Kernel methods and deep learning are two major branches of machine learning that have achieved numerous successes in both analytics and artificial intelligence. While each has its own characteristics, both branches work by mapping data to a feature space that is presumed more favorable for the given task. This dissertation addresses the strengths and weaknesses of each mapping approach by combining them into a family of novel deep architectures centered around the Deep Embedding Kernel (DEK). In short, DEK is a realization of a kernel function through a new deep architecture: the mapping in DEK is both implicit (as in kernel methods) and learnable (as in deep learning). Prior to DEK, we proposed a simpler architecture called Deep Kernel for the tasks of classification and visualization. More recently, we integrated DEK with the novel Dual Deep Learning framework to model big unstructured data. Using DEK as a core component, we further propose two machine learning models: Deep Similarity-Enhanced K Nearest Neighbors (DSE-KNN) and Recurrent Embedding Kernel (REK). Both models have their mappings trained to optimize data instances' neighborhoods in the feature space; REK is specifically designed for time series data. Experimental studies throughout the dissertation show that the proposed models perform competitively with commonly used and state-of-the-art machine learning models on their respective tasks.
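
    The abstract above does not give implementation details, so the following is only a minimal PyTorch-style sketch of the core idea: a shared embedding network maps each instance into a feature space, and a small kernel network scores the pair, so the resulting kernel is both implicit and trained end to end. All names (DeepEmbeddingKernel, embed, kernel), layer sizes, and the symmetric pairing scheme are illustrative assumptions, not taken from the dissertation.

import torch
import torch.nn as nn

class DeepEmbeddingKernel(nn.Module):
    """Hypothetical sketch: a kernel k(x, y) realized by a deep network."""

    def __init__(self, in_dim: int, embed_dim: int = 32):
        super().__init__()
        # Embedding network, shared between both inputs (Siamese-style).
        self.embed = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        # Kernel network: maps a symmetric combination of the two
        # embeddings to a similarity score in (0, 1).
        self.kernel = nn.Sequential(
            nn.Linear(2 * embed_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        ex, ey = self.embed(x), self.embed(y)
        # Symmetric pairing so that k(x, y) == k(y, x).
        pair = torch.cat([ex + ey, (ex - ey).abs()], dim=-1)
        return self.kernel(pair).squeeze(-1)


# Toy training step: push same-class pairs toward 1, other pairs toward 0.
dek = DeepEmbeddingKernel(in_dim=10)
opt = torch.optim.Adam(dek.parameters(), lr=1e-3)
x, y = torch.randn(8, 10), torch.randn(8, 10)
target = torch.randint(0, 2, (8,)).float()   # 1 = same class, 0 = different
opt.zero_grad()
loss = nn.functional.binary_cross_entropy(dek(x, y), target)
loss.backward()
opt.step()

    In a DSE-KNN-like setting, such a learned similarity could stand in for the Euclidean metric of a standard k-nearest-neighbors classifier; that substitution is our reading of the abstract, not a detail it states.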

    End-to-End Kernel Learning with Supervised Convolutional Kernel Networks

    In this paper, we introduce a new image representation based on a multilayer kernel machine. Unlike traditional kernel methods, where data representation is decoupled from the prediction task, we learn how to shape the kernel with supervision. We proceed by first proposing improvements to the recently introduced convolutional kernel networks (CKNs) in the context of unsupervised learning; then, we derive backpropagation rules to take advantage of labeled training data. The resulting model is a new type of convolutional neural network in which optimizing the filters at each layer is equivalent to learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We show that our method achieves reasonably competitive performance for image classification on standard "deep learning" datasets such as CIFAR-10 and SVHN, and also for image super-resolution, demonstrating the applicability of our approach to a large variety of image-related tasks. Comment: to appear in Advances in Neural Information Processing Systems (NIPS).
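
    As a rough illustration of the claim that optimizing the filters at each layer amounts to learning a linear subspace in an RKHS, here is a heavily simplified, hypothetical sketch of one kernel-convolution layer. It l2-normalizes image patches, projects them onto learnable filters, and applies an exponential dot-product kernel; it deliberately omits the Gram-matrix correction and the pooling steps of the actual CKN construction, and all names and hyperparameters are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelConvLayer(nn.Module):
    """Hypothetical, simplified CKN-style layer (not the paper's exact model)."""

    def __init__(self, in_channels: int, n_filters: int,
                 patch_size: int = 3, alpha: float = 2.0):
        super().__init__()
        # Learnable filters playing the role of the subspace basis.
        self.filters = nn.Parameter(
            torch.randn(n_filters, in_channels, patch_size, patch_size))
        self.alpha = alpha
        self.patch_size = patch_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-patch l2 norms, via a box filter over squared values.
        ones = torch.ones(1, x.shape[1], self.patch_size, self.patch_size,
                          device=x.device)
        norms = torch.sqrt(F.conv2d(x * x, ones, padding=1).clamp_min(1e-6))
        # Unit-norm filters so the convolution gives cosine similarities.
        w = F.normalize(self.filters.flatten(1), dim=1).view_as(self.filters)
        cos = F.conv2d(x, w, padding=1) / norms
        # exp(alpha * (cos - 1)) is a dot-product kernel on the unit sphere;
        # rescaling by the patch norm restores homogeneity.
        return norms * torch.exp(self.alpha * (cos - 1.0))


# Usage on a dummy batch of RGB images.
layer = KernelConvLayer(in_channels=3, n_filters=16)
features = layer(torch.randn(2, 3, 32, 32))   # shape (2, 16, 32, 32)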

    A representer theorem for deep kernel learning

    In this paper, we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence in the finite-sample case, the corresponding infinite-dimensional minimization problems can be recast as (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
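
    For orientation, the finite-sample statement can be sketched informally as follows (the notation below is ours, not the paper's): given training data (x_i, y_i), i = 1, ..., N, layer-wise reproducing kernel Hilbert spaces H_1, ..., H_L with kernels k_1, ..., k_L, a loss ell, and norm regularization, each layer of a minimizing concatenation admits a finite kernel expansion over the transformed training points, which is what turns the infinite-dimensional problem into a finite-dimensional one.

\begin{align*}
  &\min_{f_1 \in \mathcal{H}_1, \dots, f_L \in \mathcal{H}_L}
    \sum_{i=1}^{N} \ell\bigl(y_i, (f_L \circ \dots \circ f_1)(x_i)\bigr)
    + \sum_{l=1}^{L} \lambda_l \, \lVert f_l \rVert_{\mathcal{H}_l}^{2} \\
  &\quad\text{admits minimizers with}\quad
    f_l(\cdot) = \sum_{i=1}^{N} k_l\bigl((f_{l-1} \circ \dots \circ f_1)(x_i), \,\cdot\,\bigr)\,\alpha_{l,i},
    \qquad l = 1, \dots, L,
\end{align*}

    where the inner composition is read as the identity for l = 1 and the coefficients alpha_{l,i} are finite-dimensional, so only finitely many parameters remain to be optimized.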