79 research outputs found
Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes
Unpaired image-to-image translation methods aim at learning a mapping of
images from a source domain to a target domain. Recently, these methods proved
to be very useful in biological applications to display subtle phenotypic cell
variations otherwise invisible to the human eye. However, current models
require a large number of images to be trained, while most microscopy
experiments remain limited in the number of images they can produce. In this
work, we present an improved CycleGAN architecture that employs self-supervised
discriminators to alleviate the need for numerous images. We demonstrate
quantitatively and qualitatively that the proposed approach outperforms the
CycleGAN baseline, including when it is combined with differentiable
augmentations. We also provide results obtained with small biological datasets
on obvious and non-obvious cell phenotype variations, demonstrating a
straightforward application of this method.
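A minimal sketch of the kind of self-supervised discriminator described above (the rotation-prediction auxiliary task, layer sizes, and names are illustrative assumptions, not necessarily the paper's exact design): the discriminator keeps its usual real/fake output but also predicts which of four rotations was applied to its input, giving it a learning signal even when few training images are available.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSupervisedDiscriminator(nn.Module):
    """PatchGAN-style discriminator with an auxiliary rotation-prediction head.

    Illustrative only: the shared trunk scores real vs. fake patches, while the
    auxiliary head classifies which of 4 rotations (0/90/180/270 degrees) was
    applied, a self-supervised signal useful when real images are scarce."""

    def __init__(self, in_channels: int = 3, base: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.adv_head = nn.Conv2d(base * 2, 1, 4, padding=1)   # real/fake patch map
        self.rot_head = nn.Sequential(                          # 4-way rotation classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 2, 4)
        )

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.rot_head(h)

def rotation_ssl_loss(disc: SelfSupervisedDiscriminator, images: torch.Tensor) -> torch.Tensor:
    """Rotate each image by a random multiple of 90 degrees and train the
    auxiliary head to recover the rotation label."""
    k = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(images, k)])
    _, rot_logits = disc(rotated)
    return F.cross_entropy(rot_logits, k)
```

In a CycleGAN-style setup this auxiliary loss would simply be added, with a small weight, to each discriminator's adversarial loss.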
A Review of Adversarial Attacks in Computer Vision
Deep neural networks have been widely used in various downstream tasks,
especially in safety-critical scenarios such as autonomous driving, but deep
networks are often threatened by adversarial samples. Such adversarial attacks
can be invisible to human eyes yet lead to DNN misclassification, and they
often exhibit transferability between deep learning and machine learning
models, as well as achievability in the real world. Adversarial attacks can be
divided into white-box attacks, in which the attacker knows the parameters and
gradients of the model, and black-box attacks, in which the attacker can only
observe the model's inputs and outputs. In terms of the attacker's purpose,
attacks can be divided into targeted attacks, where the attacker wants the
model to misclassify the original sample as a specified class (the more
practical goal), and non-targeted attacks, which only need to make the model
misclassify the sample. The black-box setting is a scenario we will encounter
in practice.
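As a concrete illustration of the white-box, targeted setting described above, here is a minimal sketch of a single targeted fast-gradient-sign (FGSM) step; it is one classical attack, not a method proposed by this review, and the model, labels, and epsilon value are placeholders.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, eps=8 / 255):
    """One-step targeted FGSM: move the input *down* the loss gradient with
    respect to the attacker-chosen target class, within an L-infinity ball of
    radius eps. White-box: requires access to the model's gradients.

    x            -- batch of images in [0, 1], shape (N, C, H, W)
    target_class -- tensor of class indices the attacker wants predicted"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target_class)
    loss.backward()
    # Step toward the target class (note the minus sign), then clip to valid pixels.
    x_adv = x - eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A non-targeted variant would instead add `eps * x.grad.sign()` computed against the true label, only requiring the prediction to change.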
Visually Adversarial Attacks and Defenses in the Physical World: A Survey
Although Deep Neural Networks (DNNs) have been widely applied in various
real-world scenarios, they are vulnerable to adversarial examples. The current
adversarial attacks in computer vision can be divided into digital attacks and
physical attacks according to their different attack forms. Compared with
digital attacks, which generate perturbations in the digital pixels, physical
attacks are more practical in the real world. Owing to the serious security
problem caused by physically adversarial examples, many works have been
proposed to evaluate the physically adversarial robustness of DNNs in the past
years. In this paper, we present a survey of current physically adversarial
attacks and physically adversarial defenses in computer vision. To establish a
taxonomy, we organize the current physical attacks by attack task, attack form,
and attack method, so that readers can gain a systematic understanding of this
topic from different aspects. For the physical defenses, we establish the
taxonomy along pre-processing, in-processing, and post-processing of the DNN
models to achieve full coverage of the adversarial defenses. Based on the above
survey, we finally discuss the challenges of this research field and offer an
outlook on future directions.
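A toy illustration of the pre-processing branch of that defense taxonomy (purely illustrative, not a method from the survey; the quality setting and interface are assumptions): the input is transformed before it reaches the model, here by JPEG re-encoding, which destroys much of a high-frequency perturbation while keeping the semantic content.

```python
import io
import torch
from PIL import Image
from torchvision.transforms import functional as TF

def jpeg_preprocess_defense(x: torch.Tensor, quality: int = 75) -> torch.Tensor:
    """Re-encode each image as JPEG before classification.

    A common pre-processing defense idea: lossy compression removes part of the
    adversarial perturbation. x is a batch of images in [0, 1], shape (N, C, H, W)."""
    defended = []
    for img in x:
        buf = io.BytesIO()
        TF.to_pil_image(img).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        defended.append(TF.to_tensor(Image.open(buf).convert("RGB")))
    return torch.stack(defended)
```

In-processing defenses would instead change training itself (e.g. adversarial training), and post-processing defenses act on the model's outputs.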
Toward Robust Sensing for Autonomous Vehicles: An Adversarial Perspective
Autonomous Vehicles rely on accurate and robust sensor observations for
safety critical decision-making in a variety of conditions. Fundamental
building blocks of such systems are sensors and classifiers that process
ultrasound, RADAR, GPS, LiDAR and camera signals [Khan2018]. It is of
primary importance that the resulting decisions are robust to perturbations,
which can take the form of different types of nuisances and data
transformations, and can even be adversarial perturbations (APs). Adversarial
perturbations are purposefully crafted alterations of the environment or of the
sensory measurements, with the objective of attacking and defeating the
autonomous systems. A careful evaluation of the vulnerabilities of their
sensing system(s) is necessary in order to build and deploy safer systems in
the fast-evolving domain of AVs. To this end, we survey the emerging field of
sensing in adversarial settings: after reviewing adversarial attacks on sensing
modalities for autonomous systems, we discuss countermeasures and present
future research directions.
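The notion of an adversarial perturbation used above can be made precise with the usual norm-bounded formulation (a standard definition, not one specific to this survey): for a classifier f and a clean measurement x with label y, the attacker seeks a small perturbation delta that changes the decision.

```latex
% Minimal-norm view: the smallest perturbation that flips the decision of classifier f
\delta^{\star} \;=\; \arg\min_{\delta}\; \lVert \delta \rVert_{p}
\quad \text{s.t.} \quad f(x+\delta) \neq f(x)

% Budget view used by most attacks: maximize the task loss within a norm ball of radius \epsilon
\max_{\lVert \delta \rVert_{p}\,\le\,\epsilon}\; \mathcal{L}\bigl(f(x+\delta),\, y\bigr)
```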
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
Given the ability to directly manipulate image pixels in the digital input
space, an adversary can easily generate imperceptible perturbations to fool a
Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In
this work, we propose ShapeShifter, an attack that tackles the more challenging
problem of crafting physical adversarial perturbations to fool image-based
object detectors like Faster R-CNN. Attacking an object detector is more
difficult than attacking an image classifier, as it needs to mislead the
classification results in multiple bounding boxes with different scales.
Extending the digital attack to the physical world adds another layer of
difficulty, because it requires the perturbation to be robust enough to survive
real-world distortions due to different viewing distances and angles, lighting
conditions, and camera limitations. We show that the Expectation over
Transformation technique, which was originally proposed to enhance the
robustness of adversarial perturbations in image classification, can be
successfully adapted to the object detection setting. ShapeShifter can generate
adversarially perturbed stop signs that are consistently mis-detected by Faster
R-CNN as other objects, posing a potential threat to autonomous vehicles and
other safety-critical computer vision systems.
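A minimal sketch of the Expectation over Transformation idea referenced above (the transformation sampler, loss, mask, and detector interface are placeholders, not ShapeShifter's actual pipeline): the perturbation is optimized against the expected loss under randomly sampled scalings, rotations, and lighting changes, so that it survives physical-world variation.

```python
import torch

def eot_attack_step(delta, mask, image, detector_loss, random_transform, optimizer, n_samples=8):
    """One Expectation-over-Transformation update of an additive perturbation.

    delta            -- trainable perturbation tensor (requires_grad=True), same shape as image
    mask             -- 0/1 tensor selecting the region being perturbed (e.g. the stop sign)
    detector_loss    -- callable: transformed image -> attack loss to minimize
                        (e.g. the detector's confidence that the sign is a stop sign)
    random_transform -- callable applying a freshly sampled, *differentiable*
                        scale / rotation / brightness change on every call"""
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(n_samples):
        adv_image = image + mask * delta              # perturb only the masked region
        loss = loss + detector_loss(random_transform(adv_image))
    loss = loss / n_samples                           # Monte-Carlo estimate of the expected loss
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-0.5, 0.5)                       # keep the perturbation within a printable range
    return float(loss)
```

The transformations must be built from differentiable operations so that gradients flow back through them to the perturbation; averaging over several sampled transformations per step is what makes the resulting perturbation robust to viewing distance, angle, and lighting.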
- …