
    Pareto-Path Multi-Task Multiple Kernel Learning

    A traditional and intuitively appealing Multi-Task Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with a (partially) shared kernel function, which allows information sharing amongst tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO) problem, which considers the concurrent optimization of all task objectives involved in the Multi-Task Learning (MTL) problem. Motivated by this observation, and arguing that the former approach is heuristic, we propose a novel Support Vector Machine (SVM) MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving better classification performance when compared to other similar MTL approaches. (Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems.)
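    To make the contrast concrete, the scalarization below is a minimal sketch in standard MOO notation (the symbols J_t, lambda_t, and theta are assumed for illustration and are not taken from the paper): a conic combination of the T task objectives, where the uniform weighting recovers the traditional averaging approach.

```latex
% Conic scalarization of T task objectives (illustrative notation):
% each nonnegative weight vector picks out one point on the Pareto Front.
\min_{\theta} \; \sum_{t=1}^{T} \lambda_t \, J_t(\theta),
\qquad \lambda_t \ge 0 \;\; \forall t .
% Setting \lambda_t = 1/T recovers the average-of-objectives formulation,
% i.e., a single point on the PF; varying \lambda traces a path along it.
```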

    Reduced-Rank Local Distance Metric Learning

    We propose a new method for local metric learning based on a conical combination of Mahalanobis metrics and pair-wise similarities between the data. Its formulation allows for controlling the rank of the metrics' weight matrices. We also offer a convergent algorithm for training the associated model. Experimental results on a collection of classification problems imply that the new method may offer notable performance advantages over alternative metric learning approaches that have recently appeared in the literature.
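    As a rough sketch of the distance model this abstract describes (the rank-r factorization and all names below are illustrative assumptions, not the paper's actual formulation), a conical combination of Mahalanobis metrics with reduced-rank weight matrices might look like:

```python
import numpy as np

def conic_mahalanobis_distance(x, y, Ls, alphas):
    """Distance from a conical combination of K Mahalanobis metrics.

    Each weight matrix is parameterized as M_k = L_k.T @ L_k with L_k of
    shape (r, d), so rank(M_k) <= r: the reduced-rank idea. The weights
    in `alphas` must be nonnegative to keep the combination conical.
    """
    diff = x - y
    return np.sqrt(sum(a * np.sum((L @ diff) ** 2)
                       for a, L in zip(alphas, Ls)))

# Toy usage: two rank-2 local metrics in a 5-dimensional space.
rng = np.random.default_rng(0)
Ls = [rng.standard_normal((2, 5)) for _ in range(2)]
x, y = rng.standard_normal(5), rng.standard_normal(5)
print(conic_mahalanobis_distance(x, y, Ls, alphas=[0.7, 0.3]))
```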

    Gap-based estimation: Choosing the smoothing parameters for Probabilistic and general regression neural networks

    Probabilistic neural networks (PNNs) and general regression neural networks (GRNNs) represent knowledge by simple but interpretable models that approximate the optimal classifier or predictor in the sense of the expected value of the accuracy. These models require the specification of an important smoothing parameter, which is usually chosen by cross-validation or clustering. In this letter, we demonstrate the problems with the cross-validation and clustering approaches to specifying the smoothing parameter, discuss the relationship between this parameter and some of the data statistics, and attempt to develop a fast approach to determining the optimal value of this parameter. Finally, through experimentation, we show that our approach, referred to as the gap-based estimation approach, is superior in speed to the compared approaches, including support vector machines, and yields good and stable accuracy.
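    For context on where the smoothing parameter enters, below is a minimal sketch of a standard GRNN predictor (the Nadaraya-Watson form underlying GRNNs; the gap-based estimator itself is not reproduced here, and the function name is an assumption):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """GRNN prediction: a Gaussian-kernel weighted average of targets.

    `sigma` is the smoothing parameter whose selection the letter studies:
    a small sigma overfits to nearby samples; a large sigma oversmooths.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return np.dot(w, y_train) / np.sum(w)     # weighted average of targets

# Toy usage: regression on a noisy sine curve.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0 * np.pi, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
print(grnn_predict(X, y, np.array([1.0]), sigma=0.3))
```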

    A Simulated Annealing Algorithm for the Unrelated Parallel Machine Scheduling Problem

    The problem addressed in this paper is scheduling jobs on unrelated parallel machines with sequence-dependent setup times to minimize the maximum completion time (i.e., the makespan). This problem is NP-hard even without setup times; adding sequence-dependent setup times adds another dimension of complexity, and obtaining optimal solutions becomes very difficult, especially for large problems. In this paper, a Simulated Annealing (SA) algorithm is applied to the problem at hand to reach near-optimal solutions. The effectiveness of the SA algorithm is measured by comparing the quality of its solutions to optimal solutions for small problems. The results show that the SA algorithm efficiently obtained optimal solutions for all test problems.
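    To make the approach concrete, here is a minimal SA sketch for this problem (the insertion move, geometric cooling schedule, and data layout are illustrative assumptions, not the paper's exact design):

```python
import math
import random

def makespan(assignment, proc, setup):
    """Makespan of per-machine job sequences, with unrelated processing
    times proc[m][j] and sequence-dependent setup times setup[m][i][j]."""
    finish = 0.0
    for m, seq in enumerate(assignment):
        t, prev = 0.0, None
        for j in seq:
            t += (setup[m][prev][j] if prev is not None else 0.0) + proc[m][j]
            prev = j
        finish = max(finish, t)
    return finish

def anneal(assignment, proc, setup, T0=100.0, alpha=0.95, iters=5000):
    """Simulated annealing: move a random job to a random position on a
    random machine; accept worse makespans with probability exp(-delta/T)."""
    cur = [list(s) for s in assignment]
    cost, T = makespan(cur, proc, setup), T0
    for _ in range(iters):
        cand = [list(s) for s in cur]
        src = random.choice([m for m in range(len(cand)) if cand[m]])
        job = cand[src].pop(random.randrange(len(cand[src])))
        dst = random.randrange(len(cand))
        cand[dst].insert(random.randrange(len(cand[dst]) + 1), job)
        cand_cost = makespan(cand, proc, setup)
        if cand_cost <= cost or random.random() < math.exp((cost - cand_cost) / T):
            cur, cost = cand, cand_cost
        T *= alpha  # geometric cooling
    return cur, cost

# Toy instance: 3 unrelated machines, 6 jobs, random times and setups.
random.seed(42)
M, J = 3, 6
proc = [[random.uniform(1, 10) for _ in range(J)] for _ in range(M)]
setup = [[[random.uniform(0, 3) for _ in range(J)] for _ in range(J)]
         for _ in range(M)]
print(anneal([list(range(J)), [], []], proc, setup)[1])
```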