11,052 research outputs found
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
Neural networks are vulnerable to adversarial examples, which poses a threat
to their application in security-sensitive systems. We propose the high-level
representation guided denoiser (HGD) as a defense for image classification. A
standard denoiser suffers from the error amplification effect, in which small
residual adversarial noise is progressively amplified and leads to wrong
classifications. HGD overcomes this problem by using a loss function defined as
the difference between the target model's outputs activated by the clean image
and by the denoised image. Compared with ensemble adversarial training, the
state-of-the-art defense on large images, HGD has three advantages. First,
with HGD as a defense, the target model is more robust to both white-box and
black-box adversarial attacks. Second, HGD can be trained on a small subset of
the images and generalizes well to other images and unseen classes. Third, HGD
can be transferred to defend models other than the one guiding it. In the NIPS
competition on defense against adversarial attacks, our HGD solution won first
place and outperformed other models by a large margin.
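The loss described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's networks: `target_features` is a stand-in for the frozen classifier's high-level activations, and the images are short lists of numbers; all names are hypothetical.

```python
# Sketch of the HGD training loss: the denoiser is trained so that the
# *target model's* high-level activations on the denoised image match those
# on the clean image, rather than matching the images pixel by pixel.

def target_features(image):
    # Hypothetical stand-in for the frozen classifier's high-level
    # representation: two fixed nonlinear features over the "pixels".
    return [sum(image), sum(x * x for x in image)]

def hgd_loss(clean_image, denoised_image):
    # L1 distance between the target model's activations on the clean and
    # denoised inputs -- the "high-level representation guided" loss.
    f_clean = target_features(clean_image)
    f_denoised = target_features(denoised_image)
    return sum(abs(a - b) for a, b in zip(f_clean, f_denoised))

clean = [0.2, 0.5, 0.9]
adversarial = [0.25, 0.45, 0.95]   # clean image plus small adversarial noise
denoised = [0.21, 0.49, 0.91]      # hypothetical denoiser output

# A denoised image that restores the classifier's representation incurs a
# smaller loss than the adversarial input, even if some pixel noise remains.
print(hgd_loss(clean, denoised) < hgd_loss(clean, adversarial))
```

Because the loss is measured in the target model's feature space, residual noise is penalized only insofar as it changes the classification-relevant representation, which is what counters the error amplification effect mentioned above.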
Diaquadibromidobis[3-dimethylamino-1-(4-pyridyl-κN)prop-2-en-1-one]cadmium(II)
In the title compound, [CdBr2(C10H12N2O)2(H2O)2], the Cd(II) ion is located on an inversion center and is six-coordinated by two N atoms [Cd—N = 2.377 (3) Å] from two different 3-dimethylamino-1-(4-pyridyl)prop-2-en-1-one ligands, two O atoms [Cd—O = 2.355 (2) Å] from two coordinated water molecules and two bromide anions [Cd—Br = 2.6855 (5) Å]. Intermolecular O—H⋯O hydrogen bonds link the molecules into layers parallel to the bc plane.