When is a Network a Network? Multi-Order Graphical Model Selection in Pathways and Temporal Networks
We introduce a framework for the modeling of sequential data capturing
pathways of varying lengths observed in a network. Such data are important,
e.g., when studying click streams in information networks, travel patterns in
transportation systems, information cascades in social networks, biological
pathways or time-stamped social interactions. While it is common to apply graph
analytics and network analysis to such data, recent works have shown that
temporal correlations can invalidate the results of such methods. This raises a
fundamental question: when is a network abstraction of sequential data
justified? Addressing this open question, we propose a framework which combines
Markov chains of multiple, higher orders into a multi-layer graphical model
that captures temporal correlations in pathways at multiple length scales
simultaneously. We develop a model selection technique to infer the optimal
number of layers of such a model and show that it outperforms previously used
Markov order detection techniques. An application to eight real-world data sets
on pathways and temporal networks shows that it allows us to infer graphical
models which capture both the topological and temporal characteristics of such
data. Our work highlights fallacies of network abstractions and provides a
principled answer to the open question of when they are justified. Generalizing
network representations to multi-order graphical models, it opens perspectives
for new data mining and knowledge discovery algorithms.

Comment: 10 pages, 4 figures, 1 table; companion Python package pathpy available on GitHub
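The core model-selection question — does a first-order network abstraction suffice, or do pathways carry higher-order memory? — can be illustrated with a minimal sketch. This is not the paper's multi-order likelihood-ratio framework, just the underlying idea: fit maximum-likelihood Markov chains of increasing order to the observed pathways and compare their log-likelihoods, keeping in mind that higher orders always fit at least as well and must be penalized for their extra parameters before one is selected.

```python
from collections import Counter
import math

def log_likelihood(paths, order):
    """Log-likelihood of a maximum-likelihood Markov chain of the given order."""
    trans, ctx = Counter(), Counter()
    for path in paths:
        for i in range(len(path) - order):
            c = tuple(path[i:i + order])
            trans[(c, path[i + order])] += 1
            ctx[c] += 1
    return sum(n * math.log(n / ctx[c]) for (c, _), n in trans.items())

# Toy pathways: the step taken after 'b' depends on where the walk came from,
# so a first-order (network) abstraction loses information.
paths = [list("abd"), list("abd"), list("cbe"), list("cbe")]
ll1 = log_likelihood(paths, 1)  # first order cannot separate a->b->d from c->b->e
ll2 = log_likelihood(paths, 2)  # second order fits these pathways perfectly (LL = 0)
assert ll2 > ll1
```

In practice the gain in likelihood must be weighed against the larger number of transition probabilities the higher-order model estimates, which is exactly the trade-off the proposed model selection technique formalizes across layers.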
Salience and Market-aware Skill Extraction for Job Targeting
At LinkedIn, we want to create economic opportunity for everyone in the
global workforce. To make this happen, LinkedIn offers a reactive Job Search
system, and a proactive Jobs You May Be Interested In (JYMBII) system to match
the best candidates with their dream jobs. One of the most challenging tasks
for developing these systems is to properly extract important skill entities
from job postings and then target members with matched attributes. In this
work, we show that the commonly used text-based \emph{salience and
market-agnostic} skill extraction approach is sub-optimal because it only
considers skill mentions and ignores both the salience level of a skill and its
market dynamics, i.e., the influence of market supply and demand on the
importance of skills. To address these drawbacks, we present \model, our
deployed \emph{salience and market-aware} skill extraction system. The proposed
\model~shows promising results in improving the online performance of job
recommendation (JYMBII) ( job apply) and skill suggestions for job
posters ( suggestion rejection rate). Lastly, we present case studies that
contrast traditional skill recognition methods with the proposed \model~at the
occupation, industry, country, and individual skill levels. Based on these
promising results, we deployed \model~online to extract job-targeting skills
for all M job postings served at LinkedIn.

Comment: 9 pages, to appear in KDD202
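The deployed model's architecture is not described in this abstract. As a toy illustration of why a market-agnostic approach is sub-optimal, the sketch below scores a skill by its mention salience scaled by a demand/supply ratio; the scoring function, feature names, and numbers are all made up for illustration, not LinkedIn's learned system.

```python
def market_aware_score(salience, demand, supply):
    """Hypothetical market-aware importance: salience of the mention scaled by
    how under-supplied the skill is (demand/supply ratio). Illustrative only."""
    return salience * demand / max(supply, 1)

# A skill mentioned prominently but over-supplied in the market can rank below
# a less salient skill that employers are short of.
skills = {
    "ms_office": market_aware_score(salience=0.9, demand=100, supply=1000),
    "spark":     market_aware_score(salience=0.5, demand=400, supply=100),
}
ranked = sorted(skills, key=skills.get, reverse=True)
assert ranked[0] == "spark"
```

A purely text-based extractor would rank "ms_office" first on salience alone; folding in market dynamics reverses the order, which is the qualitative effect the abstract describes.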
Automated construction and analysis of political networks via open government and media sources
We present a tool to generate real-world political networks from user-provided lists of politicians and news sites. Additional output includes visualizations, interactive tools, and maps that allow a user to better understand the politicians and their surrounding environments as portrayed by the media. As a case study, we construct a comprehensive list of current Texas politicians, select news sites that convey a spectrum of political viewpoints covering Texas politics, and examine the results. We propose a "Combined" co-occurrence distance metric to better reflect the relationship between two entities. A topic modeling technique is also proposed as a novel, automated way of labeling communities that exist within a politician's "extended" network.
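The "Combined" metric itself is not defined in this abstract; a common baseline co-occurrence distance that such metrics refine is the Jaccard distance over the sets of articles mentioning each entity. A minimal sketch (entity matching by plain substring here, purely for illustration):

```python
def cooccurrence_distance(articles, a, b):
    """Jaccard-style co-occurrence distance between two entities:
    1 - |articles mentioning both| / |articles mentioning either|."""
    with_a = {i for i, text in enumerate(articles) if a in text}
    with_b = {i for i, text in enumerate(articles) if b in text}
    union = with_a | with_b
    return 1.0 if not union else 1.0 - len(with_a & with_b) / len(union)

articles = [
    "Senator X and Representative Y co-sponsored the bill",
    "Senator X held a town hall",
    "Representative Y visited the border",
]
d = cooccurrence_distance(articles, "Senator X", "Representative Y")  # 1 - 1/3
```

Entities that are frequently covered together end up close in the resulting network, which is what community detection and the proposed topic-model labeling then operate on.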
Fake News Detection in Social Networks via Crowd Signals
Our work considers leveraging crowd signals for detecting fake news and is
motivated by tools recently introduced by Facebook that enable users to flag
fake news. By aggregating users' flags, our goal is to select a small subset of
news every day, send them to an expert (e.g., via a third-party fact-checking
organization), and stop the spread of news identified as fake by an expert. The
main objective of our work is to minimize the spread of misinformation by
stopping the propagation of fake news in the network. It is especially
challenging to achieve this objective, as it requires detecting fake news with
high confidence as quickly as possible. We show that in order to leverage
users' flags efficiently, it is crucial to learn about users' flagging
accuracy. We develop a novel algorithm, DETECTIVE, that performs Bayesian
inference for detecting fake news and jointly learns about users' flagging
accuracy over time. Our algorithm employs posterior sampling to actively trade
off exploitation (selecting news that maximize the objective value at a given
epoch) and exploration (selecting news that maximize the value of information
towards learning about users' flagging accuracy). We demonstrate the
effectiveness of our approach via extensive experiments and show the power of
leveraging community signals for fake news detection.
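DETECTIVE's exact Bayesian model is not reproduced here, but the exploration/exploitation mechanism the abstract describes — posterior sampling over users' flagging accuracy — can be sketched with per-user Beta posteriors and Thompson sampling. Class and variable names, the Beta(1, 1) prior, and the toy scenario are all illustrative assumptions:

```python
import random

class CrowdFlagSelector:
    """Sketch of posterior (Thompson) sampling over users' flagging accuracy."""

    def __init__(self, n_users, seed=0):
        self.alpha = [1.0] * n_users   # Beta(1, 1) prior per user
        self.beta = [1.0] * n_users
        self.rng = random.Random(seed)

    def select(self, flags):
        """flags: {news_id: [user ids who flagged it]} -> item to send to the expert."""
        acc = [self.rng.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(flags, key=lambda n: sum(acc[u] for u in flags[n]))

    def update(self, flaggers, expert_says_fake):
        """Expert verdict: flagging an item confirmed fake counts as a correct flag."""
        for u in flaggers:
            if expert_says_fake:
                self.alpha[u] += 1
            else:
                self.beta[u] += 1

sel = CrowdFlagSelector(n_users=3)
for _ in range(20):   # user 0 keeps flagging items the expert confirms as fake
    sel.update([0], expert_says_fake=True)
for _ in range(20):   # user 1 keeps flagging items the expert clears
    sel.update([1], expert_says_fake=False)
choice = sel.select({"n1": [0], "n2": [1]})  # n1 is backed by the reliable flagger
```

Sampling from the posterior (rather than using its mean) is what drives exploration: a user with few observed flags has a wide posterior, so their flagged items occasionally get selected, which is exactly the value-of-information behavior the abstract attributes to the algorithm.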
The Child is Father of the Man: Foresee the Success at the Early Stage
Understanding the dynamic mechanisms that drive high-impact scientific
work (e.g., research papers, patents) is a long-debated research topic and has
many important implications, ranging from personal career development and
recruitment search, to the jurisdiction of research resources. Recent advances
in characterizing and modeling scientific success have made it possible to
forecast the long-term impact of scientific work, where data mining techniques,
supervised learning in particular, play an essential role. Despite much
progress, several key algorithmic challenges in relation to predicting
long-term scientific impact have largely remained open. In this paper, we
propose a joint predictive model to forecast the long-term scientific impact at
the early stage, which simultaneously addresses a number of these open
challenges, including scholarly feature design, non-linearity, domain
heterogeneity, and dynamics. In particular, we formulate it as a
regularized optimization problem and propose effective and scalable algorithms
to solve it. We perform extensive empirical evaluations on large, real
scholarly data sets to validate the effectiveness and the efficiency of our
method.

Comment: Corrects some typos in our KDD paper
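The paper's joint model is not specified in this abstract; the general shape of such a regularized optimization — fit long-term impact from early-stage features while penalizing model complexity — can be sketched as plain ridge regression solved by gradient descent. The feature, the data, and the hyperparameters below are made up, and this omits the non-linearity and domain-heterogeneity terms the paper additionally handles:

```python
def ridge_gd(X, y, lam=0.01, lr=0.01, steps=2000):
    """Minimize sum_i (x_i . w - y_i)^2 + lam * ||w||^2 by gradient descent."""
    d = len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [2.0 * lam * wj for wj in w]          # gradient of the penalty
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += 2.0 * err * xi[j]         # gradient of the squared loss
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# Hypothetical feature: citations in the first year; target: 10-year citations.
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
w = ridge_gd(X, y)   # close to 2.0, slightly shrunk toward 0 by the penalty
```

The regularizer is what keeps such early-stage predictors from overfitting the few observations available at publication time, which is the setting the abstract targets.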