Multi-task Regression using Minimal Penalties
In this paper we study the kernel multiple ridge regression framework, which
we refer to as multi-task regression, using penalization techniques. The
theoretical analysis of this problem shows that the key element for
an optimal calibration is the covariance matrix of the noise between the
different tasks. We present a new algorithm to estimate this covariance matrix,
based on the concept of minimal penalty, which was previously used in the
single-task regression framework to estimate the variance of the noise. We
show, in a non-asymptotic setting and under mild assumptions on the target
function, that this estimator converges to the covariance matrix. Plugging
this estimator into the corresponding ideal penalty then leads to an oracle
inequality. We illustrate the behavior of our algorithm on synthetic examples.
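To give intuition for the minimal-penalty idea, here is a minimal single-task sketch in the spirit of the slope heuristic: with a penalty shaped like C * df(lambda) / n, the selected model complexity jumps sharply as C crosses the noise variance, and the jump location estimates it. The kernel, the candidate grids, and the jump-detection rule below are illustrative choices, not the paper's multi-task algorithm.

```python
import numpy as np

# Minimal single-task sketch of the minimal-penalty ("slope") heuristic:
# penalizing the empirical risk by C * df(lambda) / n makes the selected
# degrees of freedom jump from large to small as C crosses the noise
# variance sigma^2; the jump location estimates sigma^2. Illustrative
# only; this is not the paper's multi-task covariance estimator.

def gaussian_kernel(X, Y, bandwidth=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
n, sigma = 200, 0.5
X = rng.uniform(-1, 1, (n, 1))
y = np.sin(3 * X[:, 0]) + sigma * rng.normal(size=n)

K = gaussian_kernel(X, X)
lambdas = np.logspace(-6, 2, 60)

# Degrees of freedom and empirical risk of kernel ridge for each lambda.
dfs, risks = [], []
for lam in lambdas:
    A = np.linalg.solve(K + n * lam * np.eye(n), K)  # smoother matrix
    dfs.append(np.trace(A))
    risks.append(np.mean((y - A @ y) ** 2))
dfs, risks = np.array(dfs), np.array(risks)

# Scan penalty constants C and record the df selected by risk + C * df / n.
Cs = np.linspace(0.01, 1.0, 200)
selected_df = np.array([dfs[np.argmin(risks + C * dfs / n)] for C in Cs])

# Estimate sigma^2 at the largest drop in selected df (the "jump"),
# then plug it into the (near-)optimal penalty 2 * sigma^2 * df / n.
jump = np.argmax(selected_df[:-1] - selected_df[1:])
sigma2_hat = Cs[jump]
best_lam = lambdas[np.argmin(risks + 2 * sigma2_hat * dfs / n)]
print(f"true sigma^2 = {sigma**2:.3f}, estimated = {sigma2_hat:.3f}")
```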
Online Multitask Learning with Long-Term Memory
We introduce a novel online multitask setting. In this setting each task is
partitioned into a sequence of segments that is unknown to the learner.
Associated with each segment is a hypothesis from some hypothesis class. We
give algorithms that are designed to exploit the scenario where there are many
such segments but significantly fewer associated hypotheses. We prove regret
bounds that hold for any segmentation of the tasks and any association of
hypotheses to the segments. In the single-task setting this is equivalent to
switching with long-term memory in the sense of [Bousquet and Warmuth, 2003].
We provide an algorithm that predicts on each trial in time linear in the
number of hypotheses when the hypothesis class is finite. We also consider
infinite hypothesis classes from reproducing kernel Hilbert spaces for which we
give an algorithm whose per-trial time complexity is cubic in the cumulative
number of trials. In the single-task special case, this is the first example of
an efficient regret-bounded switching algorithm with long-term memory for a
non-parametric hypothesis class.
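For a finite class, the long-term-memory effect can be sketched with the mixing-past-posteriors scheme of [Bousquet and Warmuth, 2003]: after the usual exponential-weights loss update, a small fraction of the running average of past posteriors is mixed back in, so returning to a previously good hypothesis is cheap. The learning rate eta, the mixing weight alpha, and the 0/1 loss below are our own illustrative choices; the RKHS variant from the abstract is not shown.

```python
import numpy as np

# Hedged sketch of the mixing-past-posteriors scheme for a finite
# hypothesis class: after the exponential-weights loss update, mix a
# small fraction alpha of the running average of past posteriors back
# in, so that returning to a previously good hypothesis is cheap
# (the long-term-memory effect). eta, alpha, and the 0/1 loss are
# illustrative choices.

def mix_past_posteriors(hypotheses, stream, eta=1.0, alpha=0.01):
    K = len(hypotheses)
    w = np.full(K, 1.0 / K)       # current posterior over hypotheses
    past_avg = w.copy()           # running average of past posteriors
    for t, (x, y) in enumerate(stream, start=1):
        preds = np.array([h(x) for h in hypotheses])  # each in {-1, +1}
        yield np.sign(w @ preds)                      # weighted-majority prediction
        losses = (preds != y).astype(float)           # 0/1 loss per hypothesis
        v = w * np.exp(-eta * losses)                 # exponential-weights update
        v /= v.sum()
        w = (1 - alpha) * v + alpha * past_avg        # mix in past posteriors
        past_avg += (w - past_avg) / (t + 1)          # update the running average
```

Each trial costs O(K) time, in line with the linear-per-trial claim for finite classes.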
Online Matrix Completion with Side Information
This thesis considers the problem of binary matrix completion with side information in the online setting and its applications. The side information provides additional information about the rows and columns and can yield improved results compared to when such information is unavailable. We present efficient and general algorithms in both transductive and inductive models. The performance guarantees that we prove are with respect to the matrix complexity measures of the max-norm and the margin complexity. We apply our bounds to the hypothesis class of biclustered matrices. Such matrices can be permuted along their rows and columns into homogeneous latent blocks. This class is a natural choice for our problem, since the margin complexity and max-norm of these matrices have an upper bound that is easy to interpret in terms of the latent dimensions. We also apply our algorithms to a novel online multitask setting with RKHS hypothesis classes. In this setting, each task is partitioned into a sequence of segments, and a hypothesis is associated with each segment. Our algorithms are designed to exploit the scenario where the number of associated hypotheses is much smaller than the number of segments. We prove performance guarantees that hold for any segmentation of the tasks and any association of hypotheses to the segments. In the single-task setting, this is analogous to switching with long-term memory in the sense of [Bousquet and Warmuth, 2003].
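For intuition about the biclustered class, the sketch below constructs a hypothetical (k, l)-biclustered sign matrix as B = U F V^T, where U and V are row- and column-cluster indicator matrices and F is a k x l block sign pattern; the sizes and cluster assignments are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of a (k, l)-biclustered sign matrix B = U F V^T:
# U and V assign rows and columns to latent clusters and F is a k x l
# block sign pattern, so permuting rows and columns by cluster reveals
# homogeneous blocks. Sizes and assignments here are illustrative.

rng = np.random.default_rng(1)
m, n, k, l = 8, 10, 2, 3              # matrix size and latent dimensions

r = rng.integers(0, k, size=m)        # row-cluster assignments
c = rng.integers(0, l, size=n)        # column-cluster assignments
F = rng.choice([-1, 1], size=(k, l))  # sign of each latent block

U = np.eye(k)[r]                      # m x k row indicator matrix
V = np.eye(l)[c]                      # n x l column indicator matrix
B = U @ F @ V.T                       # biclustered matrix, entries in {-1, +1}

# Sorting rows and columns by cluster makes the homogeneous blocks visible.
print(B[np.argsort(r)][:, np.argsort(c)])
```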
How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition
A major goal of artificial intelligence (AI) is to create an agent capable of
acquiring a general understanding of the world. Such an agent would require the
ability to continually accumulate and build upon its knowledge as it encounters
new experiences. Lifelong or continual learning addresses this setting, whereby
an agent faces a continual stream of problems and must strive to capture the
knowledge necessary for solving each new task it encounters. If the agent is
capable of accumulating knowledge in some form of compositional representation,
it could then selectively reuse and combine relevant pieces of knowledge to
construct novel solutions. Despite the intuitive appeal of this simple idea,
the literatures on lifelong learning and compositional learning have proceeded
largely separately. In an effort to promote developments that bridge between
the two fields, this article surveys their respective research landscapes and
discusses existing and future connections between them.
Doctor of Philosophy
Machine learning is the science of building predictive models from data that automatically improve with past experience. To learn these models, traditional learning algorithms require labeled data. They also require that the entire dataset fit in the memory of a single machine. Labeled data are available, or can be acquired, for small and moderately sized datasets, but curating large datasets can be prohibitively expensive. Similarly, massive datasets are usually too large to fit into the memory of a single machine. An alternative is to distribute the dataset over multiple machines. Distributed learning, however, poses new challenges, as most existing machine learning techniques are inherently sequential. Additionally, these distributed approaches have to be designed with the various resource limitations of real-world settings in mind, prime among them inter-machine communication. With the advent of big datasets, machine learning algorithms face new challenges: their design is no longer limited to minimizing some loss function but must additionally consider other resources that become critical when learning at scale. In this thesis, we explore different models and measures for learning with limited, budgeted resources. What budgetary constraints are posed by modern datasets? Can we reuse or combine existing machine learning paradigms to address these challenges at scale? How do the cost metrics change when we shift to distributed models for learning? These are some of the questions investigated in this thesis; their answers hold the key to addressing some of the challenges faced when learning on massive datasets. In the first part of this thesis, we present three budgeted scenarios that deal with scarcity of labeled data and limited computational resources. The goal is to leverage transfer of information from related domains to learn under budgetary constraints. Our proposed techniques comprise semi-supervised transfer, online transfer, and active transfer. In the second part of this thesis, we study distributed learning with limited communication. We present initial sampling-based results and propose communication protocols for learning distributed linear classifiers.
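As an illustration of the communication constraint in the second part, the sketch below shows a generic one-shot averaging protocol for distributed linear classifiers: each machine fits a logistic-regression model on its local shard and communicates only its weight vector. This is a common baseline under our own assumptions, not necessarily the protocol proposed in the thesis.

```python
import numpy as np

# Generic one-shot averaging protocol for distributed linear classifiers
# (an illustrative baseline, not necessarily the thesis's protocol):
# each machine fits logistic regression on its local shard and
# communicates only its weight vector; the coordinator averages them.

def local_logistic(X, y, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)  # gradient step on the log loss
    return w

def one_shot_average(shards):
    # Communication cost: one d-dimensional vector per machine.
    return np.mean([local_logistic(X, y) for X, y in shards], axis=0)

rng = np.random.default_rng(2)
w_true = rng.normal(size=5)
shards = []
for _ in range(4):                        # four machines, disjoint shards
    X = rng.normal(size=(500, 5))
    y = (X @ w_true + 0.1 * rng.normal(size=500) > 0).astype(float)
    shards.append((X, y))
w_avg = one_shot_average(shards)          # a single round of communication
```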
Learning Representations of Social Media Users
User representations are routinely used in recommendation systems by platform
developers, targeted advertisements by marketers, and by public policy
researchers to gauge public opinion across demographic groups. Computer
scientists consider the problem of inferring user representations more
abstractly: how does one extract a stable user representation - effective for
many downstream tasks - from a medium as noisy and complicated as social media?
The quality of a user representation is ultimately task-dependent (e.g., does
it improve classifier performance or recommendation accuracy?), but there
are proxies that are less sensitive to the
specific task. Is the representation predictive of latent properties such as a
person's demographic features, socioeconomic class, or mental health state? Is
it predictive of the user's future behavior?
In this thesis, we begin by showing how user representations can be learned
from multiple types of user behavior on social media. We apply several
extensions of generalized canonical correlation analysis to learn these
representations and evaluate them at three tasks: predicting future hashtag
mentions, friending behavior, and demographic features. We then show how user
features can be employed as distant supervision to improve topic model fit.
Finally, we show how user features can be integrated into and improve existing
classifiers in the multitask learning framework. We treat the prediction of
user attributes - ground-truth gender and mental health features - as
auxiliary tasks to improve mental health state prediction. We also use distributed user
representations learned in the first chapter to improve tweet-level stance
classifiers, showing that distant user information can inform classification
tasks at the granularity of a single message. Comment: PhD thesis.
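As a concrete illustration of the representation-learning step, the sketch below implements a simplified MAXVAR-style generalized CCA over several behavior "views" of the same users. The regularization, dimensions, and synthetic data are illustrative assumptions; the thesis applies several extensions beyond this basic formulation.

```python
import numpy as np

# Hedged sketch of MAXVAR-style generalized CCA: the shared user
# representation G is given by the top eigenvectors of the sum of each
# view's projection matrix. Simplified illustration only.

def gcca(views, dim, reg=1e-6):
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        Xc = X - X.mean(0)                        # center each view
        C = Xc.T @ Xc + reg * np.eye(X.shape[1])  # regularized Gram matrix
        M += Xc @ np.linalg.solve(C, Xc.T)        # projection onto the view's columns
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -dim:]                         # n x dim shared representation G

# Synthetic example: three noisy "views" (e.g. text, friends, hashtags)
# generated from a common 4-dimensional user factor.
rng = np.random.default_rng(3)
n = 300
shared = rng.normal(size=(n, 4))
views = [shared @ rng.normal(size=(4, d)) + 0.1 * rng.normal(size=(n, d))
         for d in (20, 30, 15)]
G = gcca(views, dim=4)
```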