
    Learning task-specific similarity

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. Includes bibliographical references (p. 139-147).
    The right measure of similarity between examples is important in many areas of computer science. In particular, it is a critical component of example-based learning methods. Similarity is commonly defined in terms of a conventional distance function, but such a definition does not necessarily capture the inherent meaning of similarity, which tends to depend on the underlying task. We develop an algorithmic approach to learning similarity from examples of which objects are deemed similar according to the task-specific notion of similarity at hand, along with optional negative examples. Our learning algorithm constructs, in a greedy fashion, an encoding of the data. This encoding can be seen as an embedding into a space where a weighted Hamming distance is correlated with the unknown similarity. This allows us to predict when two previously unseen examples are similar and, importantly, to efficiently search a very large database for examples similar to a query. The approach is tested on a set of standard machine learning benchmark problems. The model of similarity learned with our algorithm provides an improvement over standard example-based classification and regression. We also apply this framework to problems in computer vision: articulated pose estimation of humans from single images, articulated tracking in video, and matching image regions subject to generic visual similarity.
    by Gregory Shakhnarovich. Ph.D.
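    The central object above is an embedding under which a weighted Hamming distance between binary codes tracks the task-specific similarity. Below is a minimal sketch of that distance computation only, not the thesis's greedy construction; the per-bit thresholds and weights are hypothetical stand-ins for learned parameters.

        import numpy as np

        def encode(x, thresholds):
            """Map a real-valued example to a binary code by thresholding each
            coordinate (a toy stand-in for the learned encoding)."""
            return (x > thresholds).astype(int)

        def weighted_hamming(c1, c2, weights):
            """Weighted Hamming distance between two binary codes; a small value
            is taken to predict 'similar' under the task-specific notion."""
            return float(np.sum(weights * (c1 != c2)))

        # Hypothetical learned parameters: one threshold and one weight per bit.
        thresholds = np.array([0.0, 0.5, -0.2])
        weights = np.array([1.0, 0.3, 2.0])

        x = np.array([0.1, 0.9, -1.0])
        y = np.array([0.2, 0.4, -0.5])
        print(weighted_hamming(encode(x, thresholds), encode(y, thresholds), weights))  # 0.3

    Because the resulting codes are binary, a database of encoded examples can be indexed for fast retrieval, which is what makes this kind of embedding useful for large-scale similarity search as well as prediction.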

    Variational Quantum Kernels with Task-Specific Quantum Metric Learning

    Quantum kernel methods, i.e., kernel methods with quantum kernels, offer distinct advantages as a hybrid quantum-classical approach to quantum machine learning (QML), including applicability to Noisy Intermediate-Scale Quantum (NISQ) devices and usage for solving all types of machine learning problems. Kernel methods rely on the notion of similarity between points in a higher (possibly infinite) dimensional feature space. For machine learning, the notion of similarity assumes that points close in the feature space should be close in the machine learning task space. In this paper, we discuss the use of variational quantum kernels with task-specific quantum metric learning to generate optimal quantum embeddings (a.k.a. quantum feature encodings) that are specific to machine learning tasks. Such task-specific optimal quantum embeddings, which implicitly support feature selection, are valuable not only to quantum kernel methods, whose performance they improve, but also to non-kernel QML methods based on parameterized quantum circuits (PQCs), as pretrained embeddings and for transfer learning. This further demonstrates the quantum utility, and quantum advantage (with classically intractable quantum embeddings), of quantum kernel methods.
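    As a rough illustration of the kernel-from-embedding idea, the sketch below classically simulates a tiny fidelity-style kernel with NumPy: a hypothetical single-qubit parameterized embedding maps each input to a quantum state, and the kernel is the squared overlap of two embedded states. In task-specific quantum metric learning, the embedding parameters (theta here) would be trained so that this kernel aligns with the task's labels; the feature map, angles, and parameter values are all assumptions made for illustration, not the paper's circuit.

        import numpy as np

        def embed(x, theta):
            """Hypothetical parameterized embedding: encode a 2-feature input into a
            single-qubit state via angles that mix the data with trainable parameters."""
            a = theta[0] * x[0] + theta[1]
            b = theta[2] * x[1] + theta[3]
            # State Rz(b) Ry(a) |0>, written out as a 2-component state vector.
            return np.array([np.cos(a / 2), np.exp(1j * b) * np.sin(a / 2)])

        def quantum_kernel(x1, x2, theta):
            """Fidelity-style kernel: squared overlap of the two embedded states."""
            return np.abs(np.vdot(embed(x1, theta), embed(x2, theta))) ** 2

        theta = np.array([1.0, 0.1, 0.8, -0.2])  # would be optimized by metric learning
        print(quantum_kernel(np.array([0.3, 0.7]), np.array([0.4, 0.5]), theta))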

    Generalization Performance of Transfer Learning: Overparameterized and Underparameterized Regimes

    Transfer learning is a useful technique for achieving improved performance and reducing training costs by leveraging the knowledge gained from source tasks and applying it to target tasks. Assessing the effectiveness of transfer learning relies on understanding the similarity between the ground truth of the source and target tasks. In real-world applications, tasks often exhibit partial similarity, where certain aspects are similar while others are different or irrelevant. To investigate the impact of partial similarity on transfer learning performance, we focus on a linear regression model with two distinct sets of features: a common part shared across tasks and a task-specific part. Our study explores various types of transfer learning, encompassing two options for parameter transfer. By establishing a theoretical characterization of the error of the learned model, we compare these transfer learning options, particularly examining how generalization performance changes with the number of features/parameters in both underparameterized and overparameterized regimes. Furthermore, we provide practical guidelines for determining the number of features in the common and task-specific parts for improved generalization performance. For example, when the total number of features in the source task's learning model is fixed, we show that it is more advantageous to allocate a greater number of redundant features to the task-specific part rather than the common part. Moreover, in specific scenarios, particularly those characterized by high noise levels and small true parameters, sacrificing certain true features in the common part in favor of employing more redundant features in the task-specific part can yield notable benefits.
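    A toy NumPy sketch of the parameter-transfer setting described above, under assumed sizes and noise levels: both tasks share a common block of true coefficients and differ in a task-specific block; the source model is fit on plentiful data, its common-part estimate is transferred, and only the task-specific part is refit on a small target sample. The split sizes, sample counts, and noise level are arbitrary illustrative choices, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        d_common, d_specific = 5, 3
        n_source, n_target = 200, 20

        # Ground truth: a common part shared across tasks plus a task-specific part.
        beta_common = rng.normal(size=d_common)
        beta_source = rng.normal(size=d_specific)
        beta_target = rng.normal(size=d_specific)

        def make_task(beta_specific, n):
            X = rng.normal(size=(n, d_common + d_specific))
            y = X @ np.concatenate([beta_common, beta_specific]) + 0.1 * rng.normal(size=n)
            return X, y

        X_s, y_s = make_task(beta_source, n_source)
        X_t, y_t = make_task(beta_target, n_target)

        # Parameter transfer: estimate all coefficients on the source task, keep the
        # common part, then refit only the task-specific part on the target data.
        beta_hat_source, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
        common_hat = beta_hat_source[:d_common]
        residual = y_t - X_t[:, :d_common] @ common_hat
        specific_hat, *_ = np.linalg.lstsq(X_t[:, d_common:], residual, rcond=None)
        print(common_hat, specific_hat)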

    Multi-task Bias-Variance Trade-off Through Functional Constraints

    Multi-task learning aims to acquire a set of functions, either regressors or classifiers, that perform well for diverse tasks. At its core, the idea behind multi-task learning is to exploit the intrinsic similarity across data sources to aid the learning process for each individual domain. In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores dependencies on the other tasks -- to propose a bias-variance trade-off. To control the relationship between the variance (governed by the number of i.i.d. samples) and the bias (coming from the data of other tasks), we introduce a constrained learning formulation that enforces domain-specific solutions to be close to a central function. This problem is solved in the dual domain, for which we propose a stochastic primal-dual algorithm. Experimental results for a multi-domain classification problem with real data show that the proposed procedure outperforms both the task-specific classifiers and a single shared classifier.
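    The sketch below illustrates the constrained formulation with a toy stochastic primal-dual loop for linear regression tasks: each task keeps its own model, a central model ties them together, and a per-task dual variable enforces that each task's model stays within a ball around the central one. The linear models, step sizes, and constraint level are assumptions for illustration only; the paper's formulation is stated for general function classes and the experiments use classification.

        import numpy as np

        rng = np.random.default_rng(1)
        K, d, n = 3, 4, 100            # tasks, feature dimension, samples per task
        eps = 0.5                      # constraint level: 0.5 * ||w_k - w0||^2 <= eps
        eta_p, eta_d = 0.01, 0.01      # primal and dual step sizes

        # Synthetic related tasks: shared structure plus small task-specific deviations.
        w_shared = rng.normal(size=d)
        w_true = [w_shared + 0.3 * rng.normal(size=d) for _ in range(K)]
        Xs = [rng.normal(size=(n, d)) for _ in range(K)]
        ys = [X @ w + 0.1 * rng.normal(size=n) for X, w in zip(Xs, w_true)]

        w = [np.zeros(d) for _ in range(K)]   # task-specific (domain-specific) models
        w0 = np.zeros(d)                      # central function
        lam = np.zeros(K)                     # dual variables, one per constraint

        for t in range(5000):
            for k in range(K):
                # Stochastic primal step on task k's loss plus the constraint term.
                i = rng.integers(n)
                x, y = Xs[k][i], ys[k][i]
                grad_k = (w[k] @ x - y) * x + lam[k] * (w[k] - w0)
                w[k] -= eta_p * grad_k
            # The central function is pulled toward the (dual-weighted) task models.
            w0 -= eta_p * sum(lam[k] * (w0 - w[k]) for k in range(K))
            # Dual ascent: grow lam_k when task k drifts too far from the center.
            for k in range(K):
                lam[k] = max(0.0, lam[k] + eta_d * (0.5 * np.sum((w[k] - w0) ** 2) - eps))

        print(np.round(w0, 2), [np.round(wk, 2) for wk in w])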