Next Level: A Course Recommender System Based on Career Interests
Skills-based hiring is a talent management approach that empowers employers to align recruitment around business results rather than around credentials and titles. It starts with employers identifying the particular skills required for a role, then screening and evaluating candidates’ competencies against those requirements. With the recent rise in employers adopting skills-based hiring practices, it has become integral for students to take courses that improve their marketability and support their long-term career success. A 2017 survey of over 32,000 students at 43 randomly selected institutions found that only 34% of students believe they will graduate with the skills and knowledge required to be successful in the job market. Furthermore, the study found that while 96% of chief academic officers believe their institutions are very or somewhat effective at preparing students for the workforce, only 11% of business leaders strongly agree [11]. An implication of this misalignment is that college graduates lack the skills that companies need and value. Fortunately, the rise of skills-based hiring provides an opportunity for universities and students to establish and follow clearer classroom-to-career pathways. To this end, this paper presents a course recommender system that aims to improve students’ career readiness by suggesting relevant skills and courses based on their unique career interests.
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
Recent advancements in deep neural networks for graph-structured data have
led to state-of-the-art performance on recommender system benchmarks. However,
making these methods practical and scalable to web-scale recommendation tasks
with billions of items and hundreds of millions of users remains a challenge.
Here we describe a large-scale deep recommendation engine that we developed and
deployed at Pinterest. We develop a data-efficient Graph Convolutional Network
(GCN) algorithm PinSage, which combines efficient random walks and graph
convolutions to generate embeddings of nodes (i.e., items) that incorporate
both graph structure and node feature information. Compared to prior GCN
approaches, we develop a novel method based on highly efficient random walks to
structure the convolutions and design a novel training strategy that relies on
harder-and-harder training examples to improve robustness and convergence of
the model. We also develop an efficient MapReduce model inference algorithm to
generate embeddings using a trained model. We deploy PinSage at Pinterest and
train it on 7.5 billion examples on a graph with 3 billion nodes representing
pins and boards, and 18 billion edges. According to offline metrics, user
studies and A/B tests, PinSage generates higher-quality recommendations than
comparable deep learning and graph-based alternatives. To our knowledge, this
is the largest application of deep graph embeddings to date and paves the way
for a new generation of web-scale recommender systems based on graph
convolutional architectures.
Comment: KDD 2018
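The neighborhood-sampling idea described above can be sketched in plain Python: short random walks from a node estimate which neighbors are visited most often, and one "convolution" pools those neighbors' features weighted by visit frequency. This is a minimal illustration of importance pooling only, not PinSage's learned aggregator; the function names and the toy graph below are invented for the example.

```python
import random
from collections import Counter

def random_walk_neighborhood(adj, node, num_walks=200, walk_len=2, top_k=3, seed=0):
    """Estimate importance-weighted neighbors of `node` via short random
    walks: nodes visited most often form the pooled neighborhood."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(num_walks):
        cur = node
        for _ in range(walk_len):
            nbrs = adj.get(cur, [])
            if not nbrs:
                break
            cur = rng.choice(nbrs)
            if cur != node:
                visits[cur] += 1
    top = visits.most_common(top_k)
    total = sum(c for _, c in top) or 1
    return [(n, c / total) for n, c in top]  # (neighbor, normalized weight)

def convolve(features, adj, node, **kw):
    """One importance-pooling step: average the selected neighbors'
    feature vectors by visit weight, then concatenate with the node's own."""
    neigh = random_walk_neighborhood(adj, node, **kw)
    dim = len(features[node])
    pooled = [0.0] * dim
    for n, w in neigh:
        for i, v in enumerate(features[n]):
            pooled[i] += w * v
    return features[node] + pooled  # concatenation, as in GCN-style aggregators
```

In PinSage this pooling is wrapped in learned dense layers and stacked; the sketch only shows how random walks replace "all graph neighbors" with a small importance-ranked set, which is what makes the approach tractable at billion-node scale.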
News Session-Based Recommendations using Deep Neural Networks
News recommender systems aim to personalize users' experiences and help
them discover relevant articles in a large and dynamic search space. The
news domain is therefore a challenging scenario for recommendation, due to
sparse user profiles, a fast-growing number of items, accelerated decay of
item value, and dynamic shifts in user preferences. Promising results have
recently been achieved by applying deep learning techniques to recommender
systems, especially for item feature extraction and for session-based
recommendation with Recurrent Neural Networks. In this paper, we propose an
instantiation of CHAMELEON -- a Deep Learning Meta-Architecture for News
Recommender Systems. The architecture is composed of two modules: the first
learns representations of news articles from their text and metadata, and
the second provides session-based recommendations using Recurrent Neural
Networks. The recommendation task addressed in this work is next-item
prediction for user sessions: "what is the next most likely article a user
might read in a session?" The architecture leverages users' session context
to provide additional information in the extreme cold-start scenario of
news recommendation, merging users' behavior and item features in a hybrid
recommendation approach. As a complementary contribution, a temporal
offline evaluation method is proposed for a more realistic evaluation of
this task, considering dynamic factors that affect global readership
interests, such as popularity, recency, and seasonality. Experiments with an
extensive number of session-based recommendation methods show that the
proposed instantiation of the CHAMELEON meta-architecture obtains a
significant relative improvement in top-n accuracy and ranking metrics (10%
on Hit Rate and 13% on MRR) over the best benchmark methods.
Comment: Accepted for the Third Workshop on Deep Learning for Recommender
Systems - DLRS 2018, October 02-07, 2018, Vancouver, Canada.
https://recsys.acm.org/recsys18/dlrs
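The next-item prediction task and its ranking metrics can be made concrete with a small sketch. The baseline below ranks candidates by first-order transition counts from the last clicked item (a deliberately simple stand-in, not CHAMELEON's RNN), and the Hit Rate and MRR helpers match the metrics reported in the abstract; all names are invented for the example.

```python
from collections import Counter

def rank_items(session_prefix, transitions, all_items):
    """Rank candidate next items by how often each followed the last
    clicked item in training sessions (toy first-order baseline)."""
    last = session_prefix[-1]
    scores = Counter({item: transitions.get((last, item), 0)
                      for item in all_items})
    return [item for item, _ in scores.most_common()]

def hit_rate_at_k(ranked, target, k=10):
    """1.0 if the true next item appears in the top-k, else 0.0."""
    return 1.0 if target in ranked[:k] else 0.0

def mrr_at_k(ranked, target, k=10):
    """Reciprocal rank of the true next item within the top-k (0.0 if absent)."""
    top = ranked[:k]
    return 1.0 / (top.index(target) + 1) if target in top else 0.0
```

Evaluating any session-based recommender then reduces to producing `ranked` for each test session prefix and averaging these per-event scores, which is the shape of the comparison the paper runs against its benchmark methods.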
Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
This paper proposes a new neural architecture for collaborative ranking with
implicit feedback. Our model, LRML (\textit{Latent Relational Metric
Learning}), is a novel metric learning approach for recommendation. More
specifically, instead of simple push-pull mechanisms between user and item
pairs, we propose to learn latent relations that describe each user-item
interaction. This helps to alleviate the potential geometric inflexibility
of existing metric learning approaches, enabling not only better performance
but also greater modeling capability, allowing our model to scale to a
larger number of interactions. To do so, we employ an augmented memory
module and learn to attend over its memory blocks to construct latent
relations. The memory-based attention module is controlled by the user-item
interaction, making the learned relation vector specific to each user-item
pair. Hence, this can be interpreted as learning an exclusive and optimal
relational translation for each user-item interaction. The proposed
architecture demonstrates state-of-the-art performance across multiple
recommendation benchmarks. LRML outperforms other metric learning models in
terms of Hits@10 and nDCG@10 on large datasets such as Netflix and
MovieLens20M. Moreover, qualitative studies demonstrate evidence that our
proposed model is able to infer and encode explicit sentiment, temporal, and
attribute information despite being trained only on implicit feedback. As
such, this ascertains the ability of LRML to uncover hidden relational
structure within implicit datasets.
Comment: WWW 2018
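The core scoring mechanism — a latent relation vector built by attending over a shared memory, plugged into a translation-style distance — can be sketched as follows. This is a simplified, unlearned illustration of the idea described above; the memory contents, dimensions, and function names are assumptions made for the example, not the trained model.

```python
import math

def attention_relation(p, q, memory_keys, memory_slots):
    """Build a latent relation vector r for a user-item pair: the joint
    embedding s = p * q (element-wise) attends over shared memory keys,
    and r is the softmax-weighted sum of the memory slots."""
    s = [pi * qi for pi, qi in zip(p, q)]
    logits = [sum(si * ki for si, ki in zip(s, key)) for key in memory_keys]
    m = max(logits)                       # stable softmax
    exp = [math.exp(l - m) for l in logits]
    z = sum(exp)
    att = [e / z for e in exp]
    dim = len(memory_slots[0])
    return [sum(att[j] * memory_slots[j][i] for j in range(len(memory_slots)))
            for i in range(dim)]

def lrml_score(p, q, memory_keys, memory_slots):
    """Translation-style score: negative squared distance ||p + r - q||^2,
    so a relation vector that carries p onto q scores highest."""
    r = attention_relation(p, q, memory_keys, memory_slots)
    return -sum((pi + ri - qi) ** 2 for pi, ri, qi in zip(p, r, q))
```

Because r is recomputed from each (p, q) pair, every interaction gets its own relational translation, which is what distinguishes this from a single global push-pull metric.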
Salience and Market-aware Skill Extraction for Job Targeting
At LinkedIn, we want to create economic opportunity for everyone in the
global workforce. To make this happen, LinkedIn offers a reactive Job Search
system, and a proactive Jobs You May Be Interested In (JYMBII) system to match
the best candidates with their dream jobs. One of the most challenging tasks
for developing these systems is to properly extract important skill entities
from job postings and then target members with matched attributes. In this
work, we show that the commonly used text-based \emph{salience and
market-agnostic} skill extraction approach is sub-optimal because it only
considers skill mentions, ignoring both the salience level of a skill and its
market dynamics, i.e., the influence of market supply and demand on the
importance of skills. To address these drawbacks, we present \model, our
deployed \emph{salience and market-aware} skill extraction system. The
proposed \model~shows promising results in improving the online performance
of job recommendation (JYMBII) ( job apply) and skill suggestions for job
posters ( suggestion rejection rate). Lastly, we present case studies
showing interesting insights that contrast the traditional skill recognition
method with the proposed \model~at the occupation, industry, country, and
individual skill levels. Based on the above promising results, we deployed
\model~online to extract job targeting skills for all M job postings served
at LinkedIn.
Comment: 9 pages, to appear in KDD 2020
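A toy sketch of what "salience and market-aware" scoring could look like: a skill's final weight combines extraction confidence, in-posting salience, and a demand/supply market ratio. The deployed system learns these signals jointly; the multiplicative rule and every name below are illustrative assumptions, not the paper's model.

```python
def market_aware_score(mention_conf, salience, demand, supply, eps=1e-6):
    """Illustrative scoring rule: weight a mentioned skill by how salient
    it is in the posting and by how scarce it is in the market."""
    market = demand / (supply + eps)  # >1 when demand outstrips supply
    return mention_conf * salience * market

def rank_skills(candidates, market_stats):
    """candidates: {skill: (mention_conf, salience)};
    market_stats: {skill: (demand, supply)}, defaulting to a neutral 1:1."""
    scored = {
        skill: market_aware_score(conf, sal, *market_stats.get(skill, (1.0, 1.0)))
        for skill, (conf, sal) in candidates.items()
    }
    return sorted(scored, key=scored.get, reverse=True)
```

The point of the sketch is the contrast the abstract draws: a market-agnostic extractor would rank two equally salient skills identically, while the market term separates an in-demand skill from an obsolete one.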
InTune: Reinforcement Learning-based Data Pipeline Optimization for Deep Recommendation Models
Deep learning-based recommender models (DLRMs) have become an essential
component of many modern recommender systems. Several companies are now
building large compute clusters reserved only for DLRM training, driving new
interest in cost- and time-saving optimizations. The systems challenges faced
in this setting are unique; while typical deep learning training jobs are
dominated by model execution, the most important factor in DLRM training
performance is often online data ingestion.
In this paper, we explore the unique characteristics of this data ingestion
problem and provide insights into DLRM training pipeline bottlenecks and
challenges. We study real-world DLRM data processing pipelines taken from our
compute cluster at Netflix to observe the performance impacts of online
ingestion and to identify shortfalls in existing pipeline optimizers. We find
that current tooling either yields sub-optimal performance, frequent crashes,
or else requires impractical cluster re-organization to adopt. Our studies lead
us to design and build a new solution for data pipeline optimization, InTune.
InTune employs a reinforcement learning (RL) agent to learn how to distribute
the CPU resources of a trainer machine across a DLRM data pipeline to more
effectively parallelize data loading and improve throughput. Our experiments
show that InTune can build an optimized data pipeline configuration within only
a few minutes, and can easily be integrated into existing training workflows.
By exploiting the responsiveness and adaptability of RL, InTune achieves higher
online data ingestion rates than existing optimizers, thus reducing idle times
in model execution and increasing efficiency. We apply InTune to our real-world
cluster, and find that it increases data ingestion throughput by as much as
2.29X versus state-of-the-art data pipeline optimizers while also improving
both CPU & GPU utilization.
Comment: Accepted at RecSys 2023. 11 pages, 2 pages of references, 8 figures,
2 tables
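The resource-allocation problem InTune tackles can be illustrated with a toy tuner: pipeline throughput is bounded by the slowest stage, and an agent shifts CPU cores between stages, keeping only moves that raise measured throughput. This random-search hill-climber is a stand-in for illustration, not InTune's actual RL agent; the per-stage rates below are made up.

```python
import random

def throughput(cores, per_core_rate):
    """A pipeline runs at the rate of its slowest stage."""
    return min(c * r for c, r in zip(cores, per_core_rate))

def tune_allocation(per_core_rate, total_cores, steps=500, seed=0):
    """Toy tuner: start from an even split, then repeatedly propose moving
    one core between stages and keep the move if throughput improves."""
    rng = random.Random(seed)
    n = len(per_core_rate)
    cores = [total_cores // n] * n
    cores[0] += total_cores - sum(cores)  # absorb any remainder
    best = throughput(cores, per_core_rate)
    for _ in range(steps):
        src, dst = rng.randrange(n), rng.randrange(n)
        if src == dst or cores[src] <= 1:  # keep every stage alive
            continue
        cand = list(cores)
        cand[src] -= 1
        cand[dst] += 1
        t = throughput(cand, per_core_rate)
        if t > best:
            cores, best = cand, t
    return cores, best
```

For two stages where the second processes 4x more per core, the tuner converges on the lopsided split that equalizes stage rates, which mirrors the intuition behind shifting trainer-machine CPUs toward the ingestion bottleneck; the real system must additionally cope with noisy measurements and workloads that change over time, which is where the RL formulation earns its keep.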