409 research outputs found
Learning Representations of Social Media Users
User representations are routinely used by platform developers in
recommendation systems, by marketers for targeted advertising, and by public
policy researchers to gauge public opinion across demographic groups. Computer
scientists consider the problem of inferring user representations more
abstractly: how does one extract a stable user representation - effective for
many downstream tasks - from a medium as noisy and complicated as social media?
The quality of a user representation is ultimately task-dependent (e.g., does
it improve classifier performance or yield more accurate recommendations?),
but there are proxies that are less sensitive to the specific task. Is the
representation predictive of latent properties such as a person's demographic
features, socioeconomic class, or mental health state? Is it predictive of the
user's future behavior?
In this thesis, we begin by showing how user representations can be learned
from multiple types of user behavior on social media. We apply several
extensions of generalized canonical correlation analysis to learn these
representations and evaluate them at three tasks: predicting future hashtag
mentions, friending behavior, and demographic features. We then show how user
features can be employed as distant supervision to improve topic model fit.
Finally, we show how user features can be integrated into and improve existing
classifiers in the multitask learning framework. We treat user representations
- ground truth gender and mental health features - as auxiliary tasks to
improve mental health state prediction. We also use distributed user
representations learned in the first chapter to improve tweet-level stance
classifiers, showing that distant user information can inform classification
tasks at the granularity of a single message.
Comment: PhD thesis
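As a rough illustration of the multiview idea in this abstract, the sketch
below implements a basic MAX-VAR-style generalized CCA in NumPy: each behavior
type (e.g., text, hashtags, friends) becomes a users-by-features view, and the
shared user embedding is read off the top eigenvectors of the summed per-view
projection matrices. This is a generic textbook formulation under assumed toy
dimensions, not the thesis's exact method or data.

```python
# Minimal MAX-VAR GCCA sketch (NumPy only). Each "view" is a users-by-features
# matrix built from one behavior type; names and sizes are illustrative.
import numpy as np

def gcca_embed(views, k=10, reg=1e-3):
    """Return shared user embeddings G (n_users x k) and per-view maps U_j."""
    n = views[0].shape[0]
    # Sum of (regularized) projection matrices onto each view's column space.
    M = np.zeros((n, n))
    for X in views:
        C = X.T @ X + reg * np.eye(X.shape[1])   # regularized covariance
        M += X @ np.linalg.solve(C, X.T)         # projection onto span(X)
    # Shared representation: top-k eigenvectors of the summed projections.
    vals, vecs = np.linalg.eigh(M)
    G = vecs[:, -k:][:, ::-1]                    # largest eigenvectors first
    # Per-view maps that project new view data into the shared space.
    U = [np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ G)
         for X in views]
    return G, U

# Toy usage: three behavior views for 200 users.
rng = np.random.default_rng(0)
views = [rng.normal(size=(200, d)) for d in (50, 30, 80)]
G, U = gcca_embed(views, k=10)
print(G.shape)  # (200, 10): one 10-dimensional representation per user
```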
Heterformer: Transformer-based Deep Node Representation Learning on Heterogeneous Text-Rich Networks
Representation learning on networks aims to derive a meaningful vector
representation for each node, thereby facilitating downstream tasks such as
link prediction, node classification, and node clustering. In heterogeneous
text-rich networks, this task is more challenging due to (1) presence or
absence of text: Some nodes are associated with rich textual information, while
others are not; (2) diversity of types: Nodes and edges of multiple types form
a heterogeneous network structure. As pretrained language models (PLMs) have
demonstrated their effectiveness in obtaining widely generalizable text
representations, a substantial amount of effort has been made to incorporate
PLMs into representation learning on text-rich networks. However, few of them
can jointly consider heterogeneous structure (network) information as well as
rich textual semantic information of each node effectively. In this paper, we
propose Heterformer, a Heterogeneous Network-Empowered Transformer that
performs contextualized text encoding and heterogeneous structure encoding in a
unified model. Specifically, we inject heterogeneous structure information into
each Transformer layer when encoding node texts. Meanwhile, Heterformer is
capable of characterizing node/edge type heterogeneity and encoding nodes with
or without texts. We conduct comprehensive experiments on three tasks (i.e.,
link prediction, node classification, and node clustering) on three large-scale
datasets from different domains, where Heterformer outperforms competitive
baselines significantly and consistently.
Comment: KDD 2023. (Code: https://github.com/PeterGriffinJin/Heterformer)
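To make the "inject structure into each Transformer layer" idea concrete, here
is a hedged PyTorch sketch in which neighbor embeddings are aggregated into a
single network token that is prepended to the text tokens before
self-attention. This is a generic illustration of structure-aware text
encoding, not the actual Heterformer architecture; the class name, aggregation
step, and dimensions are assumptions.

```python
# Sketch of one structure-aware encoder layer: a pooled "network token" joins
# the text tokens so attention can mix textual and structural signals.
import torch
import torch.nn as nn

class StructureAwareLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.neighbor_proj = nn.Linear(d_model, d_model)

    def forward(self, token_states, neighbor_embs):
        # Aggregate neighbor embeddings into a single network token.
        net_token = self.neighbor_proj(neighbor_embs.mean(dim=1, keepdim=True))
        h = self.layer(torch.cat([net_token, token_states], dim=1))
        return h[:, 1:, :]  # drop the network token before the next layer

# Toy usage: batch of 2 nodes, 16 text tokens each, 5 neighbors each.
tokens = torch.randn(2, 16, 256)
neighbors = torch.randn(2, 5, 256)
out = StructureAwareLayer()(tokens, neighbors)
print(out.shape)  # torch.Size([2, 16, 256])
```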
A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models
Word representation has always been an important research area in the history
of natural language processing (NLP). Understanding such complex text data is
imperative, given that it is rich in information and can be used widely across
various applications. In this survey, we explore different word representation
models and their expressive power, from classical approaches to modern-day
state-of-the-art word representation language models (LMs). We describe the
variety of text representation methods and model designs that have blossomed in
the context of NLP, including SOTA LMs. These models can transform large
volumes of text into effective vector representations that capture its
semantic information. Such representations can in turn be used by various
machine learning (ML) algorithms for a variety of NLP-related tasks. Finally,
this survey briefly discusses commonly used ML- and DL-based classifiers,
evaluation metrics, and the applications of these word embeddings in different
NLP tasks.
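The pipeline this survey describes (words mapped to vectors, then fed to a
standard ML classifier) can be illustrated in a few lines of Python. The
embedding table below is a random stand-in; in practice it would come from a
trained model such as word2vec, GloVe, or a contextual LM, and the tiny
sentiment dataset is invented for the example.

```python
# Words -> vector representations -> a standard ML classifier (toy example).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = {"good": 0, "great": 1, "bad": 2, "awful": 3, "movie": 4}
emb = rng.normal(size=(len(vocab), 16))          # stand-in embedding table

def embed(sentence):
    # Represent a sentence as the mean of its word vectors (a common baseline).
    idx = [vocab[w] for w in sentence.split() if w in vocab]
    return emb[idx].mean(axis=0) if idx else np.zeros(emb.shape[1])

texts = ["good movie", "great movie", "bad movie", "awful movie"]
labels = [1, 1, 0, 0]                            # toy sentiment labels
X = np.stack([embed(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([embed("great great movie")]))
```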
Knowledge Modelling and Learning through Cognitive Networks
One of the most promising developments in modelling knowledge is cognitive network science, which aims to investigate cognitive phenomena driven by the networked, associative organization of knowledge. For example, investigating the structure of semantic memory via semantic networks has illuminated how memory recall patterns influence phenomena such as creativity, memory search, learning, and, more generally, knowledge acquisition, exploration, and exploitation. In parallel, neural network models for artificial intelligence (AI) are also becoming more widespread as inferential models for understanding which features drive language-related phenomena such as meaning reconstruction, stance detection, and emotional profiling. Whereas cognitive networks map explicitly which entities engage in associative relationships, neural networks map correlations in cognitive data implicitly, as weights obtained after training over labelled data, whose interpretation is not immediately evident to the experimenter. This book aims to bring together quantitative, innovative research that focuses on modelling knowledge through cognitive and neural networks to gain insight into the mechanisms driving cognitive processes related to knowledge structuring, exploration, and learning. The book comprises a variety of publication types, including reviews and theoretical papers, empirical research, computational modelling, and big data analysis. All papers here share a commonality: they demonstrate how the application of network science and AI can extend and broaden cognitive science in ways that traditional approaches cannot.
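As a toy illustration of the cognitive-network view described above, the
snippet below builds a small free-association network with networkx and
computes two structural measures of the kind used in this literature; the word
list and edges are invented for the example, not drawn from any dataset in the
book.

```python
# Concepts as nodes, free-association links as edges (illustrative toy data).
import networkx as nx

edges = [("dog", "cat"), ("dog", "bone"), ("cat", "mouse"),
         ("mouse", "cheese"), ("bone", "skeleton"), ("cat", "milk")]
G = nx.Graph(edges)

# Local structure: how clustered each concept's neighborhood is.
print(nx.clustering(G))
# Global structure: average shortest path between concepts (graph is connected).
print(nx.average_shortest_path_length(G))
```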