Double Targeted Universal Adversarial Perturbations
Despite their impressive performance, deep neural networks (DNNs) are widely
known to be vulnerable to adversarial attacks, which makes it challenging for
them to be deployed in security-sensitive applications, such as autonomous
driving. Image-dependent perturbations can fool a network for one specific
image, while universal adversarial perturbations are capable of fooling a
network for samples from all classes indiscriminately. We introduce double
targeted universal adversarial perturbations (DT-UAPs) to bridge the gap
between instance-discriminative image-dependent perturbations and generic
universal perturbations. Such a universal perturbation attacks one targeted
source class, shifting its samples to a chosen sink class, while having only a
limited adversarial effect on the other non-targeted source classes so as to
avoid raising suspicion. Since it targets the source and sink classes
simultaneously, we term it a double targeted attack (DTA). This provides an
attacker with the freedom to perform precise
attacks on a DNN model while raising little suspicion. We show the
effectiveness of the proposed DTA algorithm on a wide range of datasets and
also demonstrate its potential as a physical attack. Comment: Accepted at ACCV 2020
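The abstract does not spell out the optimization, but the core idea of DTA, learning one perturbation that pushes a chosen source class to a sink class while leaving the other classes' predictions intact, can be sketched as follows. This is a minimal illustration, not the authors' implementation; `model`, `loader`, the input shape, and the weighting term `lambda_nontarget` are all assumptions.

```python
import torch
import torch.nn.functional as F

def train_dt_uap(model, loader, source_cls, sink_cls,
                 eps=10 / 255, lambda_nontarget=1.0, epochs=5, lr=0.01):
    """Sketch: learn one universal perturbation `delta` such that
    source-class inputs are classified as the sink class, while
    non-targeted inputs keep their true labels."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # assumed input shape
    opt = torch.optim.Adam([delta], lr=lr)
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            logits = model(torch.clamp(x + delta, 0.0, 1.0))
            is_src = y == source_cls
            loss = 0.0
            if is_src.any():
                # Push source-class samples toward the sink class.
                sink = torch.full_like(y[is_src], sink_cls)
                loss = loss + F.cross_entropy(logits[is_src], sink)
            if (~is_src).any():
                # Suppress the adversarial effect on non-targeted classes.
                loss = loss + lambda_nontarget * F.cross_entropy(
                    logits[~is_src], y[~is_src])
            if not torch.is_tensor(loss):
                continue  # degenerate batch with neither case
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation quasi-imperceptible
    return delta.detach()
```

The first loss term implements the source-to-sink targeting; the second limits collateral misclassification on non-targeted classes; the clamp keeps the single, image-agnostic perturbation within an L-infinity budget.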
Adversarial Learning in the Cyber Security Domain
In recent years, machine learning algorithms, and more specifically, deep
learning algorithms, have been widely used in many fields, including cyber
security. However, machine learning systems are vulnerable to adversarial
attacks, and this limits the application of machine learning, especially in
non-stationary, adversarial environments, such as the cyber security domain,
where actual adversaries (e.g., malware developers) exist. This paper
comprehensively summarizes the latest research on adversarial attacks against
security solutions that are based on machine learning techniques and presents
the risks they pose to cyber security solutions. First, we discuss the unique
challenges of implementing end-to-end adversarial attacks in the cyber security
domain. Following that, we define a unified taxonomy, where the adversarial
attack methods are characterized based on their stage of occurrence and the
attacker's goals and capabilities. Then, we categorize the applications of
adversarial attack techniques in the cyber security domain. Finally, we use our
taxonomy to shed light on gaps in the cyber security domain that have already
been addressed in other adversarial learning domains and discuss their impact
on future adversarial learning trends in the cyber security domain.
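One concrete way to picture the "end-to-end" challenge this survey highlights: unlike images, cyber artifacts such as malware must remain functional, so an evasion attack can typically only add features, never remove them. The sketch below illustrates that constraint on a differentiable feature-space classifier; `clf`, the binary feature encoding, and the step schedule are assumptions for illustration, not a method from the surveyed papers.

```python
import torch

def additive_evasion(clf, x, steps=50, lr=0.1):
    """Illustrative feature-space evasion for a malware classifier:
    gradient descent on the maliciousness score, constrained so that
    original features are never removed (functionality-preserving)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        score = clf(x_adv).sum()   # assumed: higher score = more malicious
        score.backward()
        with torch.no_grad():
            x_adv = x_adv - lr * x_adv.grad  # lower the malicious score
            x_adv = torch.maximum(x_adv, x)  # only feature *additions* allowed
            x_adv = x_adv.clamp(0.0, 1.0)    # stay in the feature range
    return x_adv.detach()
```

The `torch.maximum(x_adv, x)` projection is what distinguishes this from an unconstrained image attack: features present in the original sample are preserved so the artifact keeps working.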
Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Cyber Physical Systems (CPS) are characterized by their ability to integrate
the physical and information or cyber worlds. Their deployment in critical
infrastructure has demonstrated a potential to transform the world. However,
harnessing this potential is limited by their critical nature and the
far-reaching effects of cyber attacks on humans, infrastructure, and the
environment. Cyber security concerns in CPS arise from the transmission of
information from sensors to actuators over wireless communication media,
which widens the attack surface. Traditionally, CPS security has been
investigated from the perspective of preventing intruders from gaining access
to the system using cryptography and other access control techniques. Most
research has therefore focused on the detection of attacks in CPS. However,
with adversaries growing in number and sophistication, it is becoming more
difficult to fully shield CPS from adversarial attacks, hence the need to
focus on making
CPS resilient. Resilient CPS are designed to withstand disruptions and remain
functional despite the presence of adversaries. One of the dominant
methodologies explored for building resilient CPS relies on machine
learning (ML) algorithms. However, drawing on recent research in adversarial
ML, we posit that ML algorithms for securing CPS must themselves be resilient.
This paper is therefore aimed at comprehensively surveying the interactions
between resilient CPS using ML and resilient ML when applied in CPS. The paper
concludes with a number of research trends and promising future research
directions. Furthermore, this paper gives readers a thorough understanding of
recent advances in ML-based security and in securing ML for CPS, their
countermeasures, and research trends in this active research area.
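The survey's claim that ML algorithms for securing CPS must themselves be resilient is commonly addressed in the literature by adversarial training. A minimal sketch of one such training step follows; it is a generic hardening technique, not a method proposed by this survey, and `model`, `opt`, and the FGSM budget `eps` are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, opt, x, y, eps=0.05):
    """One adversarial-training step: craft a one-step (FGSM) adversarial
    batch, then update the model on it so it learns to resist the attack."""
    # Craft adversarial examples against the current model.
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).detach()
    # Train on the adversarial batch (gradients from crafting are discarded).
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```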
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning
Deep Reinforcement Learning (DRL) has numerous applications in the real world
thanks to its outstanding ability to adapt quickly to its surrounding
environment. Despite its great advantages, DRL is susceptible to adversarial
attacks, which precludes its use in real-life critical systems and applications
(e.g., smart grids, traffic controls, and autonomous vehicles) unless its
vulnerabilities are addressed and mitigated. Thus, this paper provides a
comprehensive survey that discusses emerging attacks in DRL-based systems and
the potential countermeasures to defend against these attacks. We first cover
some fundamental backgrounds about DRL and present emerging adversarial attacks
on machine learning techniques. We then investigate in more detail the
vulnerabilities that an adversary can exploit to attack DRL, along with the
state-of-the-art countermeasures to prevent such attacks. Finally, we highlight
open issues and research challenges for developing solutions to deal with
attacks on DRL-based intelligent systems.
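As a concrete instance of the observation-space attacks such surveys discuss, the sketch below applies a one-step FGSM-style perturbation to a DRL agent's input state so that its preferred action becomes less likely. `policy` (a network returning action logits), the batch shape, and `eps` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_observation_attack(policy, obs, eps=0.01):
    """Illustrative attack on a DRL agent: perturb the observation within
    an L-infinity budget so the agent is pushed away from the action it
    would otherwise choose."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)               # action logits, shape (batch, n_actions)
    preferred = logits.argmax(dim=-1)  # action the clean agent would take
    # Raising the loss on the preferred action degrades the learned policy.
    loss = F.cross_entropy(logits, preferred)
    loss.backward()
    adv_obs = obs + eps * obs.grad.sign()
    return adv_obs.detach()
```

Because the perturbation is applied per state at decision time, even a small `eps` accumulated over an episode can sharply reduce an agent's return, which is why the surveyed countermeasures focus on robust training and perturbation detection.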