
    A study on multi-scale kernel optimisation via centered kernel-target alignment

    Kernel mapping is one of the most widespread approaches to intrinsically deriving nonlinear classifiers. With the aim of better suiting a given dataset, different kernels have been proposed, and different bounds and methodologies have been studied to optimise them. We focus on the optimisation of a multi-scale kernel, where a different width is chosen for each feature. This idea has been barely studied in the literature, although it has been shown to achieve better performance in the presence of heterogeneous attributes. The large number of parameters in multi-scale kernels makes it computationally unaffordable to optimise them by applying traditional cross-validation. Instead, an analytical measure known as centered kernel-target alignment (CKTA) can be used to align the kernel to the so-called ideal kernel matrix. This paper analyses and compares this and other alternatives, providing a review of the literature on kernel optimisation and some insights into the usefulness of multi-scale kernel optimisation via CKTA. When applied to the binary support vector machine (SVM) paradigm, the results using 24 datasets show that CKTA with a multi-scale kernel leads to the construction of a well-defined feature space and simpler SVM models, provides an implicit filtering of non-informative features and achieves robust performance comparable to other methods even when using random initialisations. Finally, we derive some considerations about when a multi-scale approach could be, in general, useful and propose a distance-based initialisation technique for the gradient-ascent method, which shows promising results.
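    The quantities described in this abstract can be made concrete with a short sketch. Assuming binary labels in {-1, +1}, the ideal kernel is the outer product yy^T, and CKTA is the cosine similarity between the double-centered kernel matrix and the double-centered ideal kernel. The function names (`multiscale_rbf`, `ckta`) and the per-feature width parametrisation below are illustrative, not the paper's implementation:

    ```python
    import numpy as np

    def multiscale_rbf(X, sigmas):
        """Multi-scale RBF kernel: a separate width sigma_d for each feature."""
        Xs = X / sigmas                           # scale each column by its own width
        sq = np.sum(Xs ** 2, axis=1)
        D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Xs @ Xs.T, 0.0)
        return np.exp(-0.5 * D2)

    def center(K):
        """Double-center a kernel matrix: H K H with H = I - (1/n) 11^T."""
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def ckta(K, y):
        """Centered kernel-target alignment between K and the ideal kernel yy^T."""
        Kc = center(K)
        Kyc = center(np.outer(y, y))              # binary labels assumed in {-1, +1}
        return np.sum(Kc * Kyc) / (np.linalg.norm(Kc) * np.linalg.norm(Kyc))
    ```

    Because CKTA is a cosine in the space of centered matrices, it lies in [-1, 1] and equals 1 when the kernel matrix is (up to centering and scale) the ideal kernel itself; this is what makes it usable as an analytical surrogate for cross-validation.
    
    
    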

    Multi-scale Support Vector Machine Optimization by Kernel Target-Alignment

    Abstract. The problem considered is the optimization of a multi-scale kernel, where a different width is chosen for each feature. This idea has been barely studied in the literature, and only through evolutionary or gradient-descent approaches, which explicitly train the learning machine and thereby incur a high computational cost. To cope with this limitation, the problem is explored by making use of an analytical methodology known as kernel-target alignment, where the kernel is optimized by aligning it to the so-called ideal kernel matrix. The results show that the proposal leads to better performance and simpler models at limited computational cost when applying the binary Support Vector Machine (SVM) paradigm.
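    The optimisation loop this abstract alludes to can be sketched as gradient ascent on the alignment over the per-feature log-widths. The sketch below uses finite-difference gradients for brevity (an analytical gradient of CKTA exists and would be cheaper); the names `ckta_of_widths` and `optimise_widths` and all hyperparameters are illustrative assumptions, not the authors' code:

    ```python
    import numpy as np

    def ckta_of_widths(X, y, sigmas):
        """CKTA of a multi-scale RBF kernel (one width per feature) with labels y."""
        Xs = X / sigmas
        sq = np.sum(Xs ** 2, axis=1)
        D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Xs @ Xs.T, 0.0)
        K = np.exp(-0.5 * D2)
        n = len(y)
        H = np.eye(n) - np.ones((n, n)) / n
        Kc, Kyc = H @ K @ H, H @ np.outer(y, y) @ H
        return np.sum(Kc * Kyc) / (np.linalg.norm(Kc) * np.linalg.norm(Kyc))

    def optimise_widths(X, y, n_steps=150, lr=0.3, eps=1e-4):
        """Gradient ascent on log-widths with finite-difference gradients.

        Keeps the best widths seen, since CKTA is non-concave in general.
        No SVM is trained inside the loop; alignment alone drives the search.
        """
        d = X.shape[1]
        log_s = np.zeros(d)                       # initialise all widths to 1
        best_val, best_s = -np.inf, np.exp(log_s)
        for _ in range(n_steps):
            val = ckta_of_widths(X, y, np.exp(log_s))
            if val > best_val:
                best_val, best_s = val, np.exp(log_s)
            grad = np.array([
                (ckta_of_widths(X, y, np.exp(log_s + eps * np.eye(d)[j])) - val) / eps
                for j in range(d)
            ])
            log_s += lr * grad
        return best_s
    ```

    On data where one feature carries the class signal and another is noise, the optimised width of the noise feature tends to grow, flattening that feature's contribution to the kernel; this is the "implicit filtering of non-informative features" the companion study reports.
    
    
    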