Upper and Lower Bounds for Weak Backdoor Set Detection
We obtain upper and lower bounds on the running times of exponential-time
algorithms for detecting weak backdoor sets of 3CNF formulas with respect to
various base classes. These results include (omitting polynomial factors):
(i) a 4.54^k algorithm to detect whether there is a weak backdoor set of at
most k variables into the class of Horn formulas; and (ii) a 2.27^k algorithm
to detect whether there is a weak backdoor set of at most k variables into the
class of Krom formulas. Both bounds improve on the previously known bound of
6^k. We also prove a 2^k lower bound for these problems, subject to the Strong
Exponential Time Hypothesis.

Comment: A short version will appear in the proceedings of the 16th
International Conference on Theory and Applications of Satisfiability Testing.
The "backdoor pathway" of androgen synthesis in human male sexual development.
Mammalian sex determination (male versus female) is largely controlled by genes, whereas sex differentiation (development of reproductive structures) is largely controlled by hormones. Work in the 20th century indicated that female external anatomy was a "default" pathway of development not requiring steroids, whereas male genital development required testicular testosterone plus dihydrotestosterone (DHT) made in genital skin according to a "classic" pathway. Recent work added the description of an alternative "backdoor" pathway of androgen synthesis discovered in marsupials. Unique "backdoor steroids" are found in human hyperandrogenic disorders, and genetic disruption of the pathway causes disordered male sexual development, suggesting that it plays an essential role. O'Shaughnessy and colleagues now show that the principal human backdoor androgen is androsterone and provide strong evidence that it derives from placental progesterone that is metabolized to androsterone in nontesticular tissues. These studies are essential to understanding human sexual development and its disorders.
Spectral Signatures in Backdoor Attacks
A recent line of work has uncovered a new form of data poisoning: so-called
\emph{backdoor} attacks. These attacks are particularly dangerous because they
do not affect a network's behavior on typical, benign data. Rather, the network
only deviates from its expected output when triggered by a perturbation planted
by an adversary.
In this paper, we identify a new property of all known backdoor attacks,
which we call \emph{spectral signatures}. This property allows us to utilize
tools from robust statistics to thwart the attacks. We demonstrate the efficacy
of these signatures in detecting and removing poisoned examples on real image
sets and state-of-the-art neural network architectures. We believe that
understanding spectral signatures is a crucial first step towards designing ML
systems secure against such backdoor attacks.

Comment: 16 pages, accepted to NIPS 2018.
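The core statistical step behind a spectral signature can be sketched as follows: score each example by its squared projection onto the top singular direction of the centered feature matrix, and drop the highest-scoring fraction. This is an illustrative reconstruction of the idea, not the paper's exact procedure; the function names, the removal fraction, and the toy data are all hypothetical.

```python
import numpy as np

def spectral_outlier_scores(features):
    """Score each example by its squared projection onto the top singular
    vector of the centered feature matrix. Under a spectral signature,
    poisoned examples tend to receive the largest scores."""
    centered = features - features.mean(axis=0)
    # Top right singular vector of the centered representations.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_direction = vt[0]
    return (centered @ top_direction) ** 2

def remove_suspected_poison(features, frac=0.05):
    """Return indices to keep after dropping the top `frac` of examples
    by outlier score (the fraction is an illustrative choice)."""
    scores = spectral_outlier_scores(features)
    return scores.argsort()[: int(len(scores) * (1 - frac))]

# Toy example: 200 "clean" points plus 20 "poisoned" points shifted in a
# common direction, mimicking a shared trigger artifact in feature space.
rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 32))
poison = rng.normal(size=(20, 32)) + 3.0
feats = np.vstack([clean, poison])
kept = remove_suspected_poison(feats, frac=0.1)
```

Because the 20 shifted points dominate the variance along one direction, they receive the largest scores and fall outside the kept set.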
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Deep neural networks (DNNs) have been widely deployed in various applications.
However, many studies have shown that DNNs are vulnerable to backdoor attacks:
an attacker can plant a hidden backdoor in a target DNN model and trigger
malicious behavior by submitting a specific backdoor instance. Almost all
existing backdoor work, however, focuses on the digital domain; few studies
have investigated backdoor attacks in the real physical world. Constrained by
a variety of physical factors, the performance of backdoor attacks in the real
physical world degrades severely. In this paper, we propose a robust physical
backdoor attack method, PTB (physical transformations for backdoors), to mount
backdoor attacks against deep learning models in the real physical world.
Specifically, during the training phase we apply a series of physical
transformations to the injected backdoor instances at each round of model
training, so as to simulate the transformations a backdoor may undergo in the
real world, thereby improving its physical robustness. Experimental results on
a state-of-the-art face recognition model show that, compared with backdoor
methods without PTB, the proposed method significantly improves the performance
of backdoor attacks in the real physical world. Under various complex physical
conditions, by injecting only a very small ratio (0.5%) of backdoor instances,
the attack success rate of physical backdoor attacks with PTB on VGGFace is
82%, whereas without PTB it is below 11%. Meanwhile, the normal performance of
the target DNN model is not affected.
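The training-time transformation step can be sketched as randomly perturbing each injected backdoor instance every round so the trigger is seen under varied conditions. This is a minimal sketch of the idea only; the specific transformations, parameter ranges, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_physical_variation(image, rng):
    """Apply a random brightness change, a small translation, and sensor
    noise to a grayscale image (H x W, values in [0, 1]), mimicking
    lighting, camera position, and noise a physical trigger may face.
    All ranges are illustrative assumptions."""
    out = image * rng.uniform(0.7, 1.3)                 # lighting change
    dy, dx = rng.integers(-3, 4, size=2)                # small camera shift
    out = np.roll(out, (int(dy), int(dx)), axis=(0, 1))
    out = out + rng.normal(0.0, 0.02, size=out.shape)   # sensor noise
    return np.clip(out, 0.0, 1.0)

# At each training round, re-transform the injected backdoor instances so
# the model sees the trigger under varied "physical" conditions.
rng = np.random.default_rng(0)
poisoned = np.full((32, 32), 0.5)
batch = [simulate_physical_variation(poisoned, rng) for _ in range(4)]
```

Re-sampling the transformation each round (rather than fixing one perturbed copy) is what encourages robustness to the range of conditions, not to a single one.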
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE
Language models are at risk of diverse backdoor attacks, especially data
poisoning, so it is important to investigate defenses against them. Existing
backdoor defense methods mainly target attacks with explicit triggers, leaving
a universal defense against backdoor attacks with diverse triggers largely
unexplored. In this paper, we propose an end-to-end ensemble-based backdoor
defense framework, DPoE (Denoised Product-of-Experts), inspired by the
shortcut nature of backdoor attacks, to defend against a variety of backdoor
attacks. DPoE consists of two models: a shallow model that captures the
backdoor shortcuts and a main model that is prevented from learning those
shortcuts. To address the label flipping caused by backdoor attackers, DPoE
incorporates a denoising design. Experiments on the SST-2 dataset show that
DPoE significantly improves defense performance against various types of
backdoor triggers, including word-level, sentence-level, and syntactic
triggers. Furthermore, DPoE remains effective in a more challenging but
practical setting that mixes multiple types of triggers.

Comment: Work in Progress.
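The product-of-experts combination underlying this family of defenses can be sketched as follows: the two experts' distributions are multiplied (logits added) and the main model is trained against the combined distribution, so a shortcut the shallow expert already predicts confidently contributes little gradient to the main model. This is a generic PoE sketch under stated assumptions; the denoising component is omitted, and the function names and toy logits are illustrative, not DPoE's actual code.

```python
import numpy as np

def poe_log_probs(main_logits, shallow_logits):
    """Product of experts: multiply the two predictive distributions,
    i.e. add logits, then renormalize (log-sum-exp for stability)."""
    combined = main_logits + shallow_logits
    combined = combined - combined.max(axis=-1, keepdims=True)
    log_z = np.log(np.exp(combined).sum(axis=-1, keepdims=True))
    return combined - log_z

def poe_loss(main_logits, shallow_logits, labels):
    """Cross-entropy of the combined distribution against the labels;
    training backpropagates through the main model only."""
    logp = poe_log_probs(main_logits, shallow_logits)
    return -logp[np.arange(len(labels)), labels].mean()

# Toy check: when the shallow expert is already confident in a
# (possibly shortcut-driven) label, the combined loss on that label is
# small, so the main model receives a weaker training signal for it.
main = np.array([[0.0, 0.0]])                 # main model: uninformative
shallow_confident = np.array([[5.0, 0.0]])    # shallow expert: confident
loss_with_shortcut = poe_loss(main, shallow_confident, np.array([0]))
loss_alone = poe_loss(main, np.zeros_like(main), np.array([0]))
```

Here `loss_with_shortcut` is much smaller than `loss_alone`, which is the mechanism by which the shallow expert "absorbs" the shortcut and spares the main model from learning it.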