Entity Personalized Talent Search Models with Tree Interaction Features
Talent Search systems aim to recommend potential candidates who are a good
match to the hiring needs of a recruiter expressed in terms of the recruiter's
search query or job posting. Past work in this domain has focused on linear and
nonlinear models that lack user-level preference personalization because they
are trained only on globally collected recruiter activity data. In this
paper, we propose an entity-personalized Talent Search model which utilizes a
combination of generalized linear mixed (GLMix) models and gradient boosted
decision tree (GBDT) models, and provides personalized talent recommendations
using nonlinear tree interaction features generated by the GBDT. We also
present the offline and online system architecture for the productionization of
this hybrid model approach in our Talent Search systems. Finally, we provide
offline and online experiment results benchmarking our entity-personalized
model with tree interaction features, which demonstrate significant
improvements in our precision metrics compared to globally trained
non-personalized models.
Comment: This paper has been accepted for publication at ACM WWW 2019.
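The core feature-generation step here is the familiar "GBDT leaves as features" construction. Below is a minimal scikit-learn sketch on synthetic data; the paper's GLMix per-entity random effects are replaced by a single global logistic regression for brevity, and all data and parameters are illustrative, not the authors' production setup.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import OneHotEncoder

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))           # toy query/candidate features
    y = (X[:, 0] * X[:, 1] > 0).astype(int)   # label driven by an interaction

    gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)

    # Map every example to the leaf it reaches in each tree; one-hot
    # encoding the leaf indices yields sparse "tree interaction" features
    # that capture nonlinear feature combinations found by the GBDT.
    leaves = gbdt.apply(X)[:, :, 0]           # shape (n_samples, n_trees)
    encoder = OneHotEncoder(handle_unknown="ignore").fit(leaves)
    X_interact = encoder.transform(leaves)

    linear = LogisticRegression(max_iter=1000).fit(X_interact, y)

At serving time, the same encoder maps a fresh example's leaf indices into this sparse space before the linear scorer (in the paper, a per-entity GLMix model) is applied.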
Runtime Optimizations for Prediction with Tree-Based Models
Tree-based models have proven to be an effective solution for web ranking as
well as other problems in diverse domains. This paper focuses on optimizing the
runtime performance of applying such models to make predictions, given an
already-trained model. Although conceptually simple, most implementations of
tree-based models do not efficiently utilize modern
superscalar processor architectures. By laying out data structures in memory in
a more cache-conscious fashion, removing branches from the execution flow using
a technique called predication, and micro-batching predictions using a
technique called vectorization, we are able to better exploit modern processor
architectures and significantly improve the speed of tree-based models over
hard-coded if-else blocks. Our work contributes to the exploration of
architecture-conscious runtime implementations of machine learning algorithms.
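Both optimizations can be illustrated in a few lines. The following NumPy sketch uses illustrative arrays, not the paper's actual data structures: the tree lives in flat arrays (a cache-conscious layout), traversal replaces if-else branches with an arithmetic index update (predication), and a whole micro-batch of examples advances one level at a time (vectorization).

    import numpy as np

    # A complete binary tree of depth 3 in flat arrays: nodes 0..6 are
    # internal, nodes 7..14 are leaves. All values are illustrative.
    depth = 3
    n_internal = 2 ** depth - 1
    feature = np.array([0, 1, 2, 0, 1, 2, 0])   # split feature per node
    threshold = np.zeros(n_internal)            # split threshold per node
    leaf_value = np.arange(2.0 ** depth)        # prediction per leaf

    X = np.random.default_rng(0).normal(size=(1024, 3))  # micro-batch

    # Predication + vectorization: no data-dependent branches. Each step
    # computes the next node index arithmetically for the whole batch.
    idx = np.zeros(len(X), dtype=np.int64)
    for _ in range(depth):
        go_right = X[np.arange(len(X)), feature[idx]] > threshold[idx]
        idx = 2 * idx + 1 + go_right       # left child 2i+1, right 2i+2
    preds = leaf_value[idx - n_internal]   # map leaf node id to its value

Because the index update is pure arithmetic on the comparison result, the hardware never has to predict a data-dependent branch, which is the source of the speedup over hard-coded if-else blocks.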
COMET: A Recipe for Learning and Using Large Ensembles on Massive Data
COMET is a single-pass MapReduce algorithm for learning on large-scale data.
It builds multiple random forest ensembles on distributed blocks of data and
merges them into a mega-ensemble. This approach is appropriate when learning
from massive-scale data that is too large to fit on a single machine. To get
the best accuracy, IVoting should be used instead of bagging to generate the
training subset for each decision tree in the random forest. Experiments with
two large datasets (5GB and 50GB compressed) show that COMET compares favorably
(in both accuracy and training time) to learning on a subsample of data using a
serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble
evaluation which dynamically decides how many ensemble members to evaluate per
data point; this can reduce evaluation cost by 100X or more.
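The lazy-evaluation idea can be sketched as an online stopping rule: evaluate ensemble members one at a time and stop once a Gaussian confidence interval around the running vote mean says further members are unlikely to flip the decision. The sketch below assumes binary trees voting in {-1, +1} and a generic tree.predict interface; the constants and names are illustrative, not COMET's implementation.

    import math

    def lazy_predict(trees, x, z=2.58, min_trees=10):
        # Evaluate ensemble members one at a time; stop once a normal
        # confidence interval around the running vote mean excludes 0,
        # i.e. further members are unlikely to flip the decision.
        # Assumes each tree.predict(x) returns -1.0 or +1.0 (assumed
        # interface); z=2.58 is roughly a 99% two-sided interval.
        n, mean, m2 = 0, 0.0, 0.0
        for tree in trees:
            v = tree.predict(x)
            n += 1
            delta = v - mean                # Welford's online update
            mean += delta / n
            m2 += delta * (v - mean)
            if n >= min_trees:
                stderr = math.sqrt(m2 / (n - 1) / n)
                if abs(mean) > z * stderr:  # decision is stable: stop
                    break
        return (1 if mean >= 0 else -1), n

Easy points trigger the stopping rule after a handful of trees, which is where the large reduction in evaluation cost comes from.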
Robust Decision Trees Against Adversarial Examples
Although adversarial examples and model robustness have been extensively
studied in the context of linear models and neural networks, research on this
issue for tree-based models, and on how to make them robust against
adversarial examples, is still limited. In this paper, we show that tree-based
models are also vulnerable to adversarial examples and develop a novel
algorithm to learn robust trees. At its core, our method aims to optimize the
performance under the worst-case perturbation of input features, which leads to
a max-min saddle point problem. Incorporating this saddle point objective into
the decision tree building procedure is non-trivial due to the discrete nature
of trees: a naive approach to finding the best split according to this
saddle point objective will take exponential time. To make our approach
practical and scalable, we propose efficient tree building algorithms by
approximating the inner minimizer in this saddle point problem, and present
efficient implementations for classical information gain based trees as well as
state-of-the-art tree boosting models such as XGBoost. Experimental results on
real world datasets demonstrate that the proposed algorithms can substantially
improve the robustness of tree-based models against adversarial examples.
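To make the inner minimization concrete, consider one numeric split under a perturbation budget eps: any point within eps of the threshold can be pushed to either side by the adversary. The sketch below scores a candidate split with a crude pessimistic bound (every ambiguous point is charged as an error); the paper's algorithms use tighter approximations, so treat this only as an illustration of the worst-case objective.

    import numpy as np

    def robust_split_error(x, y, t, eps):
        # Worst-case 0/1 error of splitting numeric feature x at
        # threshold t when the adversary may shift each value by up to
        # eps. This sketch charges every ambiguous point as an error,
        # a crude upper bound on the true inner minimum.
        left = x <= t - eps                 # certainly left of the split
        right = x > t + eps                 # certainly right of the split
        ambiguous = ~(left | right)
        errs = int(np.sum(ambiguous))       # adversary wins these points
        for side in (left, right):
            if side.any():
                # Each leaf predicts its majority label (labels in {0,1});
                # the minority points in that leaf are errors.
                errs += int(min(np.sum(y[side] == 0), np.sum(y[side] == 1)))
        return errs

    # A robust split then scans candidates for the smallest worst case:
    # t_best = min(thresholds, key=lambda t: robust_split_error(x, y, t, eps))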
Gradient Boosting With Piece-Wise Linear Regression Trees
Gradient Boosted Decision Trees (GBDT) is a very successful ensemble learning
algorithm widely used across a variety of applications. Recently, several
variants of GBDT training algorithms and implementations have been designed and
heavily optimized in some very popular open-source toolkits including XGBoost,
LightGBM and CatBoost. In this paper, we show that both the accuracy and
efficiency of GBDT can be further enhanced by using more complex base learners.
Specifically, we extend gradient boosting to use piecewise linear regression
trees (PL Trees), instead of piecewise constant regression trees, as base
learners. We show that PL Trees can accelerate the convergence of GBDT and
improve its accuracy. We also propose optimization tricks that substantially
reduce the training time of PL Trees with little sacrifice in accuracy. Moreover,
we propose several implementation techniques to speed up our algorithm on modern
computer architectures with powerful Single Instruction Multiple Data (SIMD)
parallelism. The experimental results show that GBDT with PL Trees can provide
very competitive test accuracy with comparable or shorter training time.
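The change to the base learner can be sketched in one boosting round for squared loss, where the negative gradients are ordinary residuals: a constant-leaf tree finds the partition, then each leaf is assigned its own linear fit. This is an illustrative scikit-learn sketch and omits the paper's optimization tricks and SIMD-level implementation.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.linear_model import LinearRegression

    def boost_round_pl(X, y, pred, lr=0.1, max_depth=3):
        # One boosting round with a piecewise-linear base learner.
        # Squared loss, so the negative gradients are plain residuals.
        residual = y - pred
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        leaf = tree.apply(X)                 # leaf id for every example
        update = np.zeros_like(pred)
        for node in np.unique(leaf):
            mask = leaf == node
            # Each leaf gets its own linear model instead of a constant.
            update[mask] = LinearRegression().fit(
                X[mask], residual[mask]).predict(X[mask])
        return pred + lr * update

    # Usage: start from the mean prediction and stack rounds.
    # pred = np.full(len(y), y.mean()); pred = boost_round_pl(X, y, pred)

Because each leaf fits a line rather than a constant, a single tree can remove more of the residual per round, which is why convergence accelerates.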