Deep Time-Stream Framework for Click-Through Rate Prediction by Tracking Interest Evolution
Click-through rate (CTR) prediction is an essential task in industrial
applications such as video recommendation. Recently, deep learning models have
been proposed to learn the representation of users' overall interests, while
ignoring the fact that interests may dynamically change over time. We argue
that it is necessary to consider the continuous-time information in CTR models
to track user interest trends from rich historical behaviors. In this paper, we
propose a novel Deep Time-Stream framework (DTS), which introduces time
information via ordinary differential equations (ODEs). DTS continuously
models the evolution of interests using a neural network, and thus is able to
tackle the challenge of dynamically representing users' interests based on
their historical behaviors. In addition, our framework can be seamlessly
applied to any existing deep CTR models by leveraging the additional
Time-Stream Module, while no changes are made to the original CTR models.
Experiments on a public dataset as well as a real industry dataset with billions
of samples demonstrate the effectiveness of the proposed approach, which
achieves superior performance compared with existing methods.
Comment: 8 pages. arXiv admin note: text overlap with arXiv:1809.03672 by
another author
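The continuous-time idea above can be illustrated with a minimal sketch: parameterize the derivative of the interest state with a small network and integrate it between irregularly spaced behavior timestamps. The toy two-layer dynamics, Euler integration, and all dimensions below are assumptions for illustration, not the actual DTS architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dynamics f(h, t) giving dh/dt (illustrative weights only).
W1 = rng.normal(scale=0.1, size=(9, 16))   # input: 8-d interest state + time
W2 = rng.normal(scale=0.1, size=(16, 8))

def f(h, t):
    x = np.concatenate([h, [t]])
    return np.tanh(x @ W1) @ W2

def evolve(h, t0, t1, steps=20):
    """Euler-integrate the interest state from time t0 to t1."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
    return h

# Behaviors arrive at irregular timestamps; evolve the interest between them.
h = np.zeros(8)
for t0, t1 in [(0.0, 0.5), (0.5, 2.0), (2.0, 2.1)]:
    h = evolve(h, t0, t1)
```

Because the ODE is defined in continuous time, the gap between two behaviors directly shapes how far the state drifts, which is the property a discrete sequence model lacks.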
Digging Deeper into Egocentric Gaze Prediction
This paper digs deeper into factors that influence egocentric gaze. Instead
of training deep models for this purpose in a blind manner, we propose to
inspect factors that contribute to gaze guidance during daily tasks. Bottom-up
saliency and optical flow are assessed versus strong spatial prior baselines.
Task-specific cues such as vanishing point, manipulation point, and hand
regions are analyzed as representatives of top-down information. We also look
into the contribution of these factors by investigating a simple recurrent
neural model for egocentric gaze prediction. First, deep features are
extracted for all input video frames. Then, a gated recurrent unit is employed
to integrate information over time and to predict the next fixation. We also
propose an integrated model that combines the recurrent model with several
top-down and bottom-up cues. Extensive experiments over multiple datasets
reveal that (1) spatial biases are strong in egocentric videos, (2) bottom-up
saliency models perform poorly in predicting gaze and underperform spatial
biases, (3) deep features perform better compared to traditional features, (4)
as opposed to hand regions, the manipulation point is a strongly influential
cue for gaze prediction, (5) combining the proposed recurrent model with
bottom-up cues, vanishing points, and, in particular, the manipulation point
results in the
best gaze prediction accuracy over egocentric videos, (6) the knowledge
transfer works best for cases where the tasks or sequences are similar, and (7)
task and activity recognition can benefit from gaze prediction. Our findings
suggest that (1) there should be more emphasis on hand-object interaction and
(2) the egocentric vision community should consider larger datasets including
diverse stimuli and more subjects.
Comment: presented at WACV 201
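The recurrent model described above (per-frame deep features, a gated recurrent unit integrating them over time, and a readout of the next fixation) can be sketched with untrained toy weights. The dimensions, initialization, and linear fixation head below are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 32, 16                          # toy feature and hidden dimensions

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialized GRU weights (illustrative; not trained).
Wz, Uz = rng.normal(size=(D, H)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wr, Ur = rng.normal(size=(D, H)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wh, Uh = rng.normal(size=(D, H)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wout = rng.normal(size=(H, 2)) * 0.1   # head: hidden state -> (x, y) fixation

def gru_step(h, x):
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde

# Integrate deep features over consecutive frames, then predict the next fixation.
frames = rng.normal(size=(10, D))      # stand-in for per-frame deep features
h = np.zeros(H)
for x in frames:
    h = gru_step(h, x)
fixation = h @ Wout
```

In the integrated model, top-down cues such as the manipulation point would be concatenated to the frame features before the recurrent update.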
Always Strengthen Your Strengths: A Drift-Aware Incremental Learning Framework for CTR Prediction
Click-through rate (CTR) prediction is of great importance in recommendation
systems and online advertising platforms. When served in industrial scenarios,
the user-generated data observed by the CTR model typically arrives as a
stream. Streaming data has the characteristic that the underlying distribution
drifts over time and may recur. This can lead to catastrophic forgetting if the
model simply adapts to the new data distribution all the time. It is also
inefficient to relearn distributions that have already occurred. Due to memory
constraints and the diversity of data distributions in large-scale industrial
applications, conventional strategies for catastrophic forgetting such as
replay, parameter isolation, and knowledge distillation are difficult to
deploy. In this work, we design a novel drift-aware incremental learning
framework based on ensemble learning to address catastrophic forgetting in CTR
prediction. With explicit error-based drift detection on streaming data, the
framework further strengthens well-adapted ensembles and freezes ensembles that
do not match the input distribution, avoiding catastrophic interference.
Evaluations on both offline experiments and an online A/B test show that our
method outperforms all baselines considered.
Comment: This work has been accepted by SIGIR2
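The strengthen-and-freeze mechanism can be sketched roughly as follows: track each member's recent error on the stream, keep updating the best-matching member, and freeze members whose running error signals a distribution mismatch. The logistic base models, thresholds, and error-based voting weights are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

class DriftAwareEnsemble:
    """Toy sketch: strengthen the best-adapted member, freeze mismatched ones."""

    def __init__(self, members, alpha=0.1, drift_threshold=0.35):
        self.members = members              # each member: dict with weights 'w'
        self.err = [0.25] * len(members)    # running absolute-error estimate
        self.alpha = alpha
        self.threshold = drift_threshold

    def predict(self, x):
        # Weight members inversely to their recent error on the stream.
        w = np.exp(-np.array(self.err) * 5)
        w /= w.sum()
        preds = [1 / (1 + np.exp(-(x @ m['w']))) for m in self.members]
        return float(np.dot(w, preds))

    def update(self, x, y):
        # Refresh each member's running error with the new sample.
        for i, m in enumerate(self.members):
            p = 1 / (1 + np.exp(-(x @ m['w'])))
            self.err[i] = (1 - self.alpha) * self.err[i] + self.alpha * abs(y - p)
        best = int(np.argmin(self.err))
        # Strengthen the best member; freeze members whose error drifted high.
        for i, m in enumerate(self.members):
            if i == best or self.err[i] <= self.threshold:
                p = 1 / (1 + np.exp(-(x @ m['w'])))
                m['w'] = m['w'] + 0.1 * (y - p) * x

rng = np.random.default_rng(2)
ens = DriftAwareEnsemble([{'w': np.zeros(4)}, {'w': np.zeros(4)}])
for _ in range(200):
    x = rng.normal(size=4)
    ens.update(x, int(x[0] > 0))        # stream where feature 0 drives the label
p = ens.predict(np.array([2.0, 0.0, 0.0, 0.0]))
```

Freezing avoids overwriting knowledge of a recurring distribution, while the error-weighted vote lets whichever member matches the current regime dominate the prediction.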
ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop
Industrial recommender systems face the challenge of operating in
non-stationary environments, where data distribution shifts arise from evolving
user behaviors over time. To tackle this challenge, a common approach is to
periodically re-train or incrementally update deployed deep models with newly
observed data, resulting in a continual training process. However, the
conventional learning paradigm of neural networks relies on iterative
gradient-based updates with a small learning rate, making it slow for large
recommendation models to adapt. In this paper, we introduce ReLoop2, a
self-correcting learning loop that facilitates fast model adaptation in online
recommender systems through responsive error compensation. Inspired by the
slow-fast complementary learning system observed in human brains, we propose an
error memory module that directly stores error samples from incoming data
streams. These stored samples are subsequently leveraged to compensate for
model prediction errors during testing, particularly under distribution shifts.
The error memory module is designed with fast access capabilities and undergoes
continual refreshing with newly observed data samples during the model serving
phase to support fast model adaptation. We evaluate the effectiveness of
ReLoop2 on three open benchmark datasets as well as a real-world production
dataset. The results demonstrate the potential of ReLoop2 in enhancing the
responsiveness and adaptiveness of recommender systems operating in
non-stationary environments.
Comment: Accepted by KDD 2023. See the project page at
https://xpai.github.io/ReLoo
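The error memory idea can be sketched as follows: store (features, residual) pairs from the incoming stream in a bounded, continually refreshed buffer, and at serving time correct the base model's output with the residuals of the nearest stored samples. The FIFO buffer, Euclidean lookup, and k-nearest averaging below are assumptions for illustration, not the ReLoop2 implementation.

```python
import numpy as np

class ErrorMemory:
    """Toy error-compensation memory over a data stream."""

    def __init__(self, capacity=1000):
        self.keys, self.residuals = [], []
        self.capacity = capacity

    def add(self, x, residual):
        """Store an observed error sample (residual = label - base prediction)."""
        self.keys.append(x)
        self.residuals.append(residual)
        if len(self.keys) > self.capacity:   # continual FIFO refresh while serving
            self.keys.pop(0)
            self.residuals.pop(0)

    def compensate(self, x, base_pred, k=3):
        """Correct the base prediction with nearby stored residuals."""
        if not self.keys:
            return base_pred
        d = np.linalg.norm(np.array(self.keys) - x, axis=1)
        nearest = np.argsort(d)[:k]
        correction = float(np.mean([self.residuals[i] for i in nearest]))
        return base_pred + correction

# After a distribution shift, suppose the base model underestimates by 0.2.
mem = ErrorMemory(capacity=100)
for i in range(10):
    mem.add(np.array([float(i), 0.0]), 0.2)
corrected = mem.compensate(np.array([4.5, 0.0]), base_pred=0.5)
```

Because the memory is updated by direct writes rather than gradient steps, the compensation reacts to a shift immediately, while the slow gradient-trained model catches up in the background.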
New Data Protection Abstractions for Emerging Mobile and Big Data Workloads
Two recent shifts in computing are challenging the effectiveness of traditional approaches to data protection. First, emerging machine learning workloads have complex access patterns and unique leakage characteristics that are not well supported by existing protection approaches. Second, mobile operating systems do not provide sufficient support for fine-grained data protection tools, forcing users to rely on individual applications to correctly manage and protect data. My thesis is that these emerging workloads have unique characteristics that we can leverage to build new, more effective data protection abstractions.
This dissertation presents two new data protection systems for machine learning workloads and a new system for fine-grained data management and protection on mobile devices. First is Sage, a differentially private machine learning platform addressing the two primary challenges of differential privacy: running out of budget and the privacy-utility tradeoff. The second system, Pyramid, is the first selective data system. Pyramid leverages count featurization to reduce the amount of data exposed while training classification models by two orders of magnitude. The final system, Pebbles, provides users with logical data objects as a new fine-grained data management and protection primitive, allowing data management at a higher level of abstraction. Pebbles leverages high-level storage abstractions in mobile operating systems to discover user-recognizable, application-level data objects in unmodified mobile applications.
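Count featurization, the technique Pyramid builds on, can be sketched in a few lines: replace each raw categorical value with label-count statistics accumulated so far, so the model trains on aggregates rather than raw records. The Laplace-style smoothing and the exact feature encoding below are assumptions for illustration, not Pyramid's implementation.

```python
from collections import defaultdict

def count_featurize(rows, labels):
    """Replace each categorical value with a smoothed positive rate
    computed from per-label counts seen so far (toy sketch)."""
    counts = defaultdict(lambda: [0, 0])      # value -> [negative, positive] counts
    feats = []
    for value, y in zip(rows, labels):
        neg, pos = counts[value]
        total = neg + pos
        # The model sees only this aggregate, never the raw value itself.
        feats.append((pos + 1) / (total + 2))
        counts[value][y] += 1
    return feats

feats = count_featurize(["a", "a", "b", "a", "b"], [1, 1, 0, 1, 0])
```

Because training consumes only the count tables, the raw per-user records can be discarded or kept under much tighter protection, which is the data-exposure reduction the abstract refers to.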
Online Tool Condition Monitoring Based on Parsimonious Ensemble+
Accurate diagnosis of tool wear in the metal turning process remains an open
challenge for both scientists and industrial practitioners because of
inhomogeneities in workpiece material, nonstationary machining settings to suit
production requirements, and nonlinear relations between measured variables and
tool wear. Common methodologies for tool condition monitoring still rely on
batch approaches which cannot cope with the fast sampling rate of the metal
cutting process. Furthermore, they require a retraining process to be completed
from scratch when dealing with a new set of machining parameters. This paper
presents an online tool condition monitoring approach based on Parsimonious
Ensemble+, pENsemble+. The unique feature of pENsemble+ lies in its highly
flexible principle where both ensemble structure and base-classifier structure
can automatically grow and shrink on the fly based on the characteristics of
data streams. Moreover, an online feature selection scenario is integrated to
actively sample relevant input attributes. The paper presents an advancement of
the newly developed ensemble learning algorithm pENsemble+, where an online
active learning scenario is incorporated to reduce operator labelling effort.
An ensemble merging scenario is proposed, which allows a reduction of ensemble
complexity while retaining its diversity. Experimental studies utilising
real-world manufacturing data streams and comparisons with well-known
algorithms were carried out. Furthermore, the efficacy of pENsemble+ was
examined using benchmark concept drift data streams. It has been found that
pENsemble+ incurs low structural complexity and results in a significant
reduction of operator labelling effort.
Comment: this paper has been published by IEEE Transactions on Cybernetic
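The grow-and-shrink principle can be sketched with a toy ensemble: add a fresh base classifier when drift is detected, decay the voting weight of members that err, and prune members whose weight falls below a floor. The linear base classifiers, weight-decay rule, and thresholds below are assumptions for illustration; pENsemble+'s actual growing, shrinking, and merging rules are more involved.

```python
import numpy as np

class GrowShrinkEnsemble:
    """Toy ensemble whose structure grows on drift and shrinks by pruning."""

    def __init__(self, dim, min_weight=0.05):
        self.dim = dim
        self.members = [np.zeros(dim)]       # linear base classifiers
        self.weights = [1.0]
        self.min_weight = min_weight

    def _proba(self, w, x):
        return 1 / (1 + np.exp(-(x @ w)))

    def predict(self, x):
        w = np.array(self.weights) / sum(self.weights)
        return float(sum(wi * self._proba(m, x) for wi, m in zip(w, self.members)))

    def update(self, x, y, drift=False):
        if drift:                            # grow: add a fresh member on drift
            self.members.append(np.zeros(self.dim))
            self.weights.append(1.0)
        for i, m in enumerate(self.members):
            p = self._proba(m, x)
            if round(p) != y:
                self.weights[i] *= 0.9       # decay the voting weight on a mistake
            m += 0.1 * (y - p) * x           # small per-member SGD step
        s = sum(self.weights)
        self.weights = [wi / s for wi in self.weights]
        # Shrink: prune members whose relative weight fell below the floor.
        keep = [i for i, wi in enumerate(self.weights) if wi >= self.min_weight]
        if keep:
            self.members = [self.members[i] for i in keep]
            self.weights = [self.weights[i] for i in keep]

rng = np.random.default_rng(3)
ens = GrowShrinkEnsemble(dim=3)
for t in range(100):
    x = rng.normal(size=3)
    ens.update(x, int(x[1] > 0), drift=(t == 50))   # pretend drift at t=50
p = ens.predict(rng.normal(size=3))
```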
Ranking to Learn and Learning to Rank: On the Role of Ranking in Pattern Recognition Applications
The last decade has seen a revolution in the theory and application of
machine learning and pattern recognition. Through these advancements, variable
ranking has emerged as an active and growing research area and it is now
beginning to be applied to many new problems. The rationale behind this fact is
that many pattern recognition problems are by nature ranking problems. The main
objective of a ranking algorithm is to sort objects according to some criteria
so that the most relevant items appear early in the produced result list.
Ranking methods can be analyzed from two different methodological perspectives:
ranking to learn and learning to rank. The former aims at studying methods and
techniques to sort objects for improving the accuracy of a machine learning
model. Enhancing a model's performance can be challenging at times. For example,
in pattern classification tasks, different data representations can complicate
and hide the different explanatory factors of variation behind the data. In
particular, hand-crafted features contain many cues that are either redundant
or irrelevant, which turn out to reduce the overall accuracy of the classifier.
In such cases, feature selection is used: by producing ranked lists of
features, it helps to filter out the unwanted information. Moreover, in real-time
systems (e.g., visual trackers) ranking approaches are used as optimization
procedures which improve the robustness of the system that deals with the high
variability of image streams that change over time. Conversely,
learning to rank is necessary in the construction of ranking models for
information retrieval, biometric authentication, re-identification, and
recommender systems. In this context, the ranking model's purpose is to sort
objects according to their degrees of relevance, importance, or preference as
defined in the specific application.
Comment: European PhD Thesis. arXiv admin note: text overlap with
arXiv:1601.06615, arXiv:1505.06821, arXiv:1704.02665 by other authors
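A minimal instance of "ranking to learn" is a filter-style feature selector: score each feature against the labels, rank, and keep the top of the list. The correlation criterion below is one simple choice used for illustration, not a specific method from the thesis.

```python
import numpy as np

def rank_features(X, y, k):
    """Rank features by absolute Pearson correlation with the labels
    and return the indices of the top-k (toy filter-style selector)."""
    Xc = X - X.mean(axis=0)                 # center features column-wise
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    corr = np.abs(Xc.T @ yc) / denom        # |correlation| per feature
    return list(np.argsort(-corr)[:k])      # indices, most relevant first

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
y = X[:, 3] + 0.1 * rng.normal(size=200)    # feature 3 drives the target
top = rank_features(X, y, k=3)
```

A downstream classifier would then be trained on `X[:, top]` only, discarding the redundant and irrelevant cues the abstract mentions.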