306 research outputs found
Dynamics, robustness and fragility of trust
Trust is often conveyed through delegation, or through recommendation. This
makes the trust authorities, who process and publish trust recommendations,
into an attractive target for attacks and spoofing. In some recent empirical
studies, this was shown to lead to a remarkable phenomenon of *adverse
selection*: a greater percentage of unreliable or malicious web merchants were
found among those with certain types of trust certificates than among those
without. While such findings can be attributed to a lack of diligence in trust
authorities, or even to conflicts of interest, our analysis of trust dynamics
suggests that public trust networks would probably remain vulnerable even if
trust authorities were perfectly diligent. The reason is that the process of
trust building, if trust is not breached too often, naturally leads to
power-law distributions: the rich get richer, the trusted attract more trust.
Evolutionary processes with such distributions, ubiquitous in nature, are
known to be robust with respect to random failures, but vulnerable to adaptive
attacks. We recommend some ways to decrease the vulnerability of trust
building, and suggest some ideas for exploration.
Comment: 17 pages; simplified the statement and the proof of the main theorem; FAST 200
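A minimal sketch of the rich-get-richer mechanism the abstract describes, assuming a simple preferential-attachment model of endorsements; the arrival rate, seed values, and endorsement rule are illustrative assumptions, not the paper's construction:

```python
import random

random.seed(0)

# Each endorsement goes to an existing authority with probability
# proportional to the trust it already holds ("the trusted attract
# more trust"); occasionally a fresh authority with unit trust appears.
trust = [1, 1]
for _ in range(10_000):
    winner = random.choices(range(len(trust)), weights=trust, k=1)[0]
    trust[winner] += 1
    if random.random() < 0.05:
        trust.append(1)

# The resulting distribution is heavy-tailed: a few authorities hold
# most of the trust, so random failures are tolerable but an adaptive
# attack on the top authorities is disproportionately damaging.
top = sorted(trust, reverse=True)[:5]
total = sum(trust)
print("top-5 trust shares:", [round(t / total, 3) for t in top])
```

Plotting the full tail of such a simulation on log-log axes yields the roughly straight line characteristic of a power law.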
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders
While sequential recommender systems achieve significant improvements in
capturing user dynamics, we argue that sequential recommenders are vulnerable
to substitution-based profile pollution attacks. To demonstrate our
hypothesis, we propose a substitution-based adversarial attack algorithm, which
modifies the input sequence by selecting certain vulnerable elements and
substituting them with adversarial items. In both untargeted and targeted
attack scenarios, we observe significant performance deterioration using the
proposed profile pollution algorithm. Motivated by such observations, we design
an efficient adversarial defense method called Dirichlet neighborhood sampling.
Specifically, we sample item embeddings from a convex hull constructed by
multi-hop neighbors to replace the original items in input sequences. During
sampling, a Dirichlet distribution is used to approximate the probability
distribution in the neighborhood such that the recommender learns to combat
local perturbations. Additionally, we design an adversarial training method
tailored for sequential recommender systems. In particular, we represent
selected items with one-hot encodings and perform gradient ascent on the
encodings to search for the worst-case linear combination of item embeddings in
training. As such, the embedding function learns robust item representations
and the trained recommender is resistant to test-time adversarial examples.
Extensive experiments show the effectiveness of both our attack and defense
methods, which consistently outperform baselines by a significant margin across
model architectures and datasets.
Comment: Accepted to RecSys 202
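A minimal sketch of the two defenses under stated assumptions: the embedding table, the multi-hop neighbor map, the concentration parameter alpha, and the surrogate loss in the gradient-ascent step are all illustrative placeholders, not the paper's artifacts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an item-embedding table and, for each item, a set of
# multi-hop neighbors (e.g., items reachable within a few hops on an
# item-item graph); both are placeholders standing in for the real model.
num_items, dim = 1_000, 64
item_emb = rng.normal(size=(num_items, dim)).astype(np.float32)
neighbors = {i: rng.choice(num_items, size=5, replace=False)
             for i in range(num_items)}

def dirichlet_neighborhood_sample(item_id, alpha=1.0):
    """Dirichlet neighborhood sampling: replace an item's embedding with a
    random point inside the convex hull spanned by the item and its
    multi-hop neighbors. Dirichlet(alpha) weights are nonnegative and sum
    to one, so the result is always a convex combination approximating
    the local neighborhood distribution."""
    ids = np.concatenate(([item_id], neighbors[item_id]))
    weights = rng.dirichlet(alpha * np.ones(len(ids))).astype(np.float32)
    return weights @ item_emb[ids]

def worst_case_embedding(item_id, steps=5, lr=0.1):
    """Adversarial-training step: start from a (slightly smoothed) one-hot
    encoding over items, run gradient ascent on a surrogate loss with
    respect to that encoding (here, squared distance from the clean
    embedding, standing in for the recommender's training loss), and
    re-project onto the simplex so the result stays a convex combination
    of item embeddings."""
    p = np.full(num_items, 1e-3, dtype=np.float32)
    p[item_id] = 1.0
    p /= p.sum()
    clean = item_emb[item_id]
    for _ in range(steps):
        emb = p @ item_emb                    # current combination
        grad_emb = 2.0 * (emb - clean)        # d/d_emb ||emb - clean||^2
        p = p + lr * (grad_emb @ item_emb.T)  # chain rule: d_emb/d_p = item_emb
        p = np.clip(p, 0.0, None)
        p /= p.sum()                          # back onto the simplex
    return p @ item_emb

# Training-time usage: perturb every item of an input sequence so the
# recommender learns to tolerate substitution-like local perturbations.
sequence = [3, 17, 42]
perturbed = np.stack([dirichlet_neighborhood_sample(i) for i in sequence])
hard = worst_case_embedding(17)
```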
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recommended
items matching the customers' preferences. Merchants on e-commerce platforms
would like their items to appear as high as possible in the top-N of these
ranked lists. In this paper, we demonstrate how unscrupulous merchants can
create item images that artificially promote their products, improving their
rankings. Recommender systems that use images to address the cold start problem
are vulnerable to this security risk. We describe a new type of attack,
Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N
recommenders: the ranking mechanism itself. Existing work on adversarial images
in recommender systems investigates the implications of conventional attacks,
which target deep learning classifiers. In contrast, our AIP attacks are
embedding attacks that seek to push feature representations in a way that
fools the ranker (not a classifier) and directly lead to item promotion. We
introduce three AIP attacks: insider attack, expert attack, and semantic attack,
which are defined with respect to three successively more realistic attack
models. Our experiments evaluate the danger of these attacks when mounted
against three representative visually-aware recommender algorithms in a
framework that uses images to address cold start. We also evaluate two common
defenses against adversarial images in the classification scenario and show
that these simple defenses do not eliminate the danger of AIP attacks. In sum,
we show that using images to address cold start opens recommender systems to
potential threats with clear practical implications. To facilitate future
research, we release an implementation of our attacks and defenses, which
allows reproduction and extension.
Comment: Our code is available at https://github.com/liuzrcc/AI
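A minimal sketch of the embedding-attack idea, assuming a simple visually-aware ranker that scores an item by the dot product between a user embedding and the item's image-derived features. The real AIP attacks perturb the image itself and push gradients through the image encoder; perturbing the feature vector directly, as below, is a simplification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ranker: score(u, i) = u . f(i). Cold-start items have no
# interaction history, so their ranking rests entirely on the
# image-derived features f(i) -- the surface the attack targets.
num_users, dim = 500, 64
user_emb = rng.normal(size=(num_users, dim))
item_feat = rng.normal(size=dim)

def promote(feat, steps=10, lr=0.1, eps=2.0):
    """Gradient ascent on the item's features to raise its mean score
    across users, constrained to an L2 ball of radius eps around the
    original features (a crude stand-in for an imperceptible image edit)."""
    orig = feat.copy()
    grad = user_emb.mean(axis=0)      # d(mean score)/d(feat) is constant here
    for _ in range(steps):
        feat = feat + lr * grad
        delta = feat - orig
        norm = np.linalg.norm(delta)
        if norm > eps:                # project back into the budget
            feat = orig + delta * (eps / norm)
    return feat

adv = promote(item_feat)
print("mean score before:", (user_emb @ item_feat).mean())
print("mean score after: ", (user_emb @ adv).mean())
```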
Assessing the Quality and Stability of Recommender Systems
Recommender systems help users to find products they may like when lacking personal experience or facing an overwhelmingly large set of items. However, assessing the quality and stability of recommender systems can present challenges for developers. First, traditional accuracy metrics for validating the quality of recommendations, such as precision and recall, offer only a coarse, one-dimensional view of system performance. Second, assessing the stability of a recommender system requires generating new data and retraining the system, which is expensive. In this work, we present two new approaches for assessing the quality and stability of recommender systems to address these challenges. We first present a general and extensible approach for assessing the quality of the behavior of a recommender system using logical property templates. The approach is general in that it defines recommendation systems in terms of sets of rankings, ratings, users, and items on which property templates are defined. It is extensible in that these property templates define a space of properties that can be instantiated and parameterized to characterize a recommendation system. We study the application of the approach to several recommendation systems. Our findings demonstrate the potential of these properties, illustrating the insights they can provide about the different algorithms and evolving datasets. We also present an approach for influence-guided fuzz testing of recommender system stability. We infer influence models for aspects of a dataset, such as users or items, from the recommendations produced by a recommender system and its training data. We define dataset fuzzing heuristics that use these influence models to generate modifications to an original dataset, and we present a test oracle based on a threshold of acceptable instability. We implement our approach and evaluate it on several recommender algorithms using the MovieLens dataset, and we find that influence-guided fuzzing can effectively find small sets of modifications that cause significantly more instability than random approaches.
Adviser: Sebastian Elbau
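A minimal sketch of the test-oracle idea from this abstract: compare per-user top-N lists before and after a dataset modification and flag instability when overlap drops below an acceptability threshold. The Jaccard measure and the threshold value are illustrative assumptions; the work defines its own instability measure.

```python
def topn_jaccard(before, after):
    """Set overlap between a user's top-N lists before and after fuzzing."""
    b, a = set(before), set(after)
    return len(b & a) / len(b | a)

def stability_oracle(recs_before, recs_after, threshold=0.8):
    """Pass when the average per-user top-N overlap stays at or above the
    acceptable-instability threshold, fail (unstable) otherwise."""
    scores = [topn_jaccard(recs_before[u], recs_after[u]) for u in recs_before]
    return sum(scores) / len(scores) >= threshold

# Toy usage: two users' top-3 lists before/after a single fuzzed rating.
before = {"u1": [1, 2, 3], "u2": [4, 5, 6]}
after  = {"u1": [1, 2, 9], "u2": [4, 5, 6]}
print("stable:", stability_oracle(before, after))  # avg 0.75 < 0.8 -> False
```

An influence-guided heuristic would pick which rating to fuzz using the inferred influence models, rather than at random as in this toy.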