9 research outputs found

    Modified Frank–Wolfe algorithm for enhanced sparsity in support vector machine classifiers

    This work proposes a new algorithm for training a re-weighted ℓ2 Support Vector Machine (SVM), inspired by the re-weighted Lasso algorithm of Candès et al. and by the equivalence between Lasso and SVM recently shown by Jaggi. In particular, the margin required for each training vector is set independently, defining a new weighted SVM model. These weights are selected to be binary, and they are automatically adapted during the training of the model, resulting in a variation of the Frank–Wolfe optimization algorithm with essentially the same computational complexity as the original algorithm. As shown experimentally, this algorithm is computationally cheaper to apply, since it requires fewer iterations to converge, and it produces models with a sparser representation in terms of support vectors that are more stable with respect to the selection of the regularization hyper-parameter.

    The authors would like to thank the following organizations.
    • EU: The research leading to these results has received funding from the European Research Council under the European Union's ERC AdG A-DATADRIVE-B (290923). This paper reflects only the authors' views; the Union is not liable for any use that may be made of the contained information.
    • Research Council KUL: GOA/10/09 MaNet, CoE PFV/10/002 (OPTEC), BIL12/11.
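
    The paper's modified algorithm is not reproduced here, but the optimization template it builds on is easy to illustrate: the ℓ2-SVM dual is a quadratic over the probability simplex, which plain Frank–Wolfe solves with one linear minimization per iteration. Below is a minimal sketch, assuming a precomputed kernel-style matrix K; the binary re-weighting the paper adds on top is omitted.

```python
import numpy as np

def frank_wolfe_simplex(K, n_iter=200):
    """Plain Frank-Wolfe for min_{a in simplex} a^T K a, the
    quadratic-over-simplex template behind the L2-SVM dual (sketch only)."""
    n = K.shape[0]
    a = np.full(n, 1.0 / n)                  # feasible starting point
    for t in range(n_iter):
        grad = 2.0 * K @ a
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # linear minimization oracle: a simplex vertex
        gamma = 2.0 / (t + 2.0)              # standard diminishing step size
        a = (1.0 - gamma) * a + gamma * s    # convex update stays on the simplex
    return a                                 # nonzero entries correspond to support vectors
```

    Each iteration activates at most one new vertex, which is why sparsity in the support vectors is a natural byproduct of converging in fewer iterations.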

    Convex formulation for multi-task L1-, L2-, and LS-SVMs

    Quite often a machine learning problem lends itself to being split into several well-defined subproblems, or tasks. The goal of Multi-Task Learning (MTL) is to leverage their joint learning from two different perspectives: on the one hand, a single overall model, and on the other, task-specific models. In this way, the solution found by MTL may be better than those of either the common or the task-specific models. Starting with the work of Evgeniou et al., support vector machines (SVMs) have lent themselves naturally to this approach. This paper proposes a convex formulation of MTL for the L1-, L2- and LS-SVM models that results in dual problems quite similar to the single-task ones, but with multi-task kernels; in turn, this makes it possible to train the convex MTL models using standard solvers. As an alternative approach, the direct optimal combination of the already trained common and task-specific models can also be considered. In this paper, a procedure to compute the optimal combining parameter with respect to four different error functions is derived. As shown experimentally, the proposed convex MTL approach generally performs better than the alternative optimal convex combination, and both are better than the straight use of either the common or the task-specific models.

    With partial support from Spain's grant TIN2016-76406-P. Work also supported by the UAM–ADIC Chair for Data Science and Machine Learning.
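
    A minimal sketch of the multi-task-kernel idea, assuming the Evgeniou-style form K((x,t),(x',t')) = (λ + 1[t = t']) · k(x,x'); the RBF base kernel, the parameter lam, and the variable names are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def multitask_gram(X1, tasks1, X2, tasks2, lam=1.0):
    """Multi-task Gram matrix: (lam + [same task]) * base kernel.
    lam trades the shared model off against the task-specific parts."""
    same = (np.asarray(tasks1)[:, None] == np.asarray(tasks2)[None, :]).astype(float)
    return (lam + same) * rbf_kernel(X1, X2)

# A standard single-task solver then trains the multi-task model:
# clf = SVC(kernel="precomputed").fit(multitask_gram(X_tr, t_tr, X_tr, t_tr), y_tr)
# y_hat = clf.predict(multitask_gram(X_te, t_te, X_tr, t_tr))
```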

    ν-SVM solutions of constrained lasso and elastic net

    Many important linear sparse models have at their core the Lasso problem, for which the GLMNet algorithm is often considered the current state of the art. Recently, M. Jaggi observed that Constrained Lasso (CL) can be reduced to an SVM-like problem, for which the LIBSVM library provides very efficient algorithms. This suggests that LIBSVM could also be used advantageously to solve CL. In this work we refine Jaggi's arguments to reduce CL, as well as constrained Elastic Net, to a Nearest Point Problem, which in turn can be rewritten as an appropriate ν-SVM problem solvable by LIBSVM. We also show experimentally that the well-known LIBSVM library converges faster than GLMNet for small problems and also, if properly adapted, for larger ones. Screening is another ingredient used to speed up solving Lasso; shrinking can be seen as SVM's simpler alternative to screening, and we discuss how it may also, in some cases, reduce the cost of an SVM-based CL solution.

    With partial support from Spanish government grants TIN2013-42351-P, TIN2016-76406-P, TIN2015-70308-REDT and S2013/ICE-2845 CASI-CAM-CM; work also supported by project FACIL–Ayudas Fundación BBVA a Equipos de Investigación Científica 2016 and the UAM–ADIC Chair for Data Science and Machine Learning. The first author is also supported by the FPU–MEC grant AP-2012-5163. We gratefully acknowledge the use of the facilities of Centro de Computación Científica (CCC) at UAM and thank Red Eléctrica de España for kindly supplying wind energy data.
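
    The reduction the abstract refers to can be sketched as follows: splitting w into positive and negative parts turns the ℓ1 constraint into a probability simplex, so Constrained Lasso becomes a quadratic over the simplex, i.e. a nearest point problem. The paper solves the resulting ν-SVM problem with LIBSVM; this sketch uses plain Frank–Wolfe instead, purely for illustration.

```python
import numpy as np

def constrained_lasso_fw(X, y, t=1.0, n_iter=500):
    """min ||X w - y||^2 s.t. ||w||_1 <= t, via w = t * (a_plus - a_minus)
    with (a_plus, a_minus) on the probability simplex (illustrative solver)."""
    A = np.hstack([t * X, -t * X])        # columns: the l1 ball's vertices mapped by X
    m = A.shape[1]
    a = np.full(m, 1.0 / m)
    for k in range(n_iter):
        grad = 2.0 * A.T @ (A @ a - y)
        s = np.zeros(m)
        s[np.argmin(grad)] = 1.0          # best simplex vertex for the linearized objective
        gamma = 2.0 / (k + 2.0)
        a = (1.0 - gamma) * a + gamma * s
    d = X.shape[1]
    return t * (a[:d] - a[d:])            # recover the Lasso coefficients
```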

    Diffusion maps and local models for wind power prediction

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-33266-1_70. Proceedings of the 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012.

    In this work we apply Diffusion Maps (DM), a recent technique for dimensionality reduction and clustering, to build local models for wind energy forecasting. We compare ridge regression models for K-means clusters obtained over DM features against the models obtained for clusters constructed over the original meteorological data or its principal components, and also against a global model. We show that a combination of the DM model for the low wind power region and the global model elsewhere outperforms the other options.

    With partial support from grant TIN2010-21575-C02-01 of Spain's Ministerio de Economía y Competitividad and the UAM–ADIC Chair for Machine Learning in Modelling and Prediction. The first author is also supported by an FPI-UAM grant and kindly thanks the Applied Mathematics Department of Yale University for receiving her during a visit. The second author is supported by the FPU-MEC grant AP2008-00167. We also thank Red Eléctrica de España, Spain's TSO, for providing historic wind energy data.
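
    A minimal sketch of the pipeline described above: a diffusion-map embedding of the inputs, K-means clusters in the embedded space, and one ridge model per cluster. The kernel bandwidth, number of components and cluster count are assumptions of the sketch, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import rbf_kernel

def diffusion_map(X, n_components=3, eps=1.0):
    """Basic diffusion maps: Gaussian kernel, Markov normalization,
    leading non-trivial eigenvectors as coordinates (simplified)."""
    W = rbf_kernel(X, gamma=1.0 / eps)
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    coords = vecs.real[:, order] * vals.real[order]
    return coords[:, 1:n_components + 1]       # drop the trivial constant eigenvector

def local_ridge_models(X, y, n_clusters=4):
    """Cluster in diffusion coordinates and fit one ridge model per cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(diffusion_map(X))
    return labels, {c: Ridge().fit(X[labels == c], y[labels == c])
                    for c in np.unique(labels)}
```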

    Magnetic eigenmaps for community detection in directed networks

    Communities in directed networks have often been characterized as regions with a high density of links, or as sets of nodes with certain patterns of connection. Our approach to community detection combines the optimization of a quality function with a spectral clustering of a deformation of the combinatorial Laplacian, the so-called magnetic Laplacian. The eigenfunctions of the magnetic Laplacian, which we call magnetic eigenmaps, incorporate structural information. Hence, using the magnetic eigenmaps, dense communities including directed cycles can be revealed, as well as "role" communities in networks with a running flow, usually discovered thanks to mixture models. Furthermore, in the spirit of the Markov stability method, an approach for studying communities at different energy levels in the network is put forward, based on a quantum mechanical system at finite temperature.
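
    A sketch of the magnetic Laplacian for a directed graph with adjacency matrix A, following the common definition in which a charge parameter g turns edge direction into a complex phase; g = 1/4 is an illustrative choice. The eigenvectors of this Hermitian matrix are the magnetic eigenmaps used for clustering.

```python
import numpy as np

def magnetic_laplacian(A, g=0.25):
    """Magnetic Laplacian L_g = D - W_s * exp(i * 2*pi*g * (A - A^T)):
    a Hermitian deformation of the combinatorial Laplacian whose
    eigenvectors encode the directional structure of the graph."""
    Ws = (A + A.T) / 2.0                     # symmetrized edge weights
    theta = 2.0 * np.pi * g * (A - A.T)      # phase encodes edge direction
    H = Ws * np.exp(1j * theta)              # Hermitian 'magnetic' adjacency
    return np.diag(Ws.sum(axis=1)) - H

# vals, vecs = np.linalg.eigh(magnetic_laplacian(A))   # eigh handles Hermitian input
# clustering e.g. on np.angle(vecs[:, 0]) can reveal directed community structure
```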