Regional Image Perturbation Reduces Norms of Adversarial Examples While Maintaining Model-to-model Transferability
Regional adversarial attacks often rely on complicated methods for generating
adversarial perturbations, making it hard to compare their efficacy against
well-known attacks. In this study, we show that effective regional
perturbations can be generated without resorting to complex methods. We develop
a very simple regional adversarial perturbation attack method based on the
sign of the cross-entropy loss, one of the most commonly used losses in adversarial machine
learning. Our experiments on ImageNet with multiple models reveal that, on
average, of the generated adversarial examples maintain model-to-model
transferability when the perturbation is applied to local image regions.
Depending on the selected region, these localized adversarial examples require
significantly less norm distortion
compared to their non-local counterparts. These localized attacks therefore
have the potential to undermine defenses that claim robustness under the
aforementioned norms.
Comment: Accepted at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning (UDL).
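The sign-based construction can be made concrete. Below is a minimal NumPy sketch of the idea, applying the sign of the cross-entropy gradient only inside a chosen region mask; the toy logistic model and all values here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def regional_sign_attack(x, w, y, mask, eps=0.1):
    """Perturb x by eps * sign(cross-entropy gradient) inside `mask` only.

    Toy logistic model: p = sigmoid(w . x); the cross-entropy gradient
    w.r.t. x is (p - y) * w. Shapes and names are hypothetical.
    """
    p = 1.0 / (1.0 + np.exp(-x @ w))
    grad = (p - y) * w                     # d(cross-entropy)/dx for logistic loss
    return x + eps * np.sign(grad) * mask  # perturb only the selected region

# Toy example: perturb only the first half of the "image".
x = np.array([0.5, -0.2, 0.3, 0.1])
w = np.array([1.0, -1.0, 2.0, 0.5])
mask = np.array([1.0, 1.0, 0.0, 0.0])
x_adv = regional_sign_attack(x, w, y=1.0, mask=mask, eps=0.1)
```

Outside the mask the example is unchanged, which is why localized perturbations need less norm distortion than full-image ones.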
Adversarial Attacks and Defense Mechanisms to Improve Robustness of Deep Temporal Point Processes
Indiana University-Purdue University Indianapolis (IUPUI)
Temporal point processes (TPPs) are mathematical approaches for modeling asynchronous
event sequences by considering the temporal dependency of each event on past events and its
instantaneous rate. Temporal point processes can model various problems, from earthquake
aftershocks, trade orders, gang violence, and reported crime patterns, to network analysis,
infectious disease transmissions, and virus spread forecasting. In each of these cases, the
entity’s behavior with the corresponding information is noted over time as an asynchronous
event sequence, and the analysis is done using temporal point processes, which provide a
means to define the generative mechanism of the sequence of events and ultimately predict
events and investigate causality.
Among point processes, the Hawkes process, a stochastic point process, can model
a wide range of contagious and self-exciting patterns. One of the Hawkes process's well-known
applications is predicting the evolution of viral processes on networks, which is an important
problem in biology, the social sciences, and the study of the Internet. In existing works,
mean-field analysis based upon degree distribution is used to predict viral spreading across
networks of different types. However, it has been shown that degree distribution alone
fails to predict the behavior of viruses on some real-world networks. Recent attempts have
been made to use assortativity to address this shortcoming. This thesis illustrates how the
evolution of such a viral process is sensitive to the underlying network’s structure.
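The self-exciting mechanism described above can be illustrated concretely. The following sketch simulates a univariate Hawkes process with intensity λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)) using Ogata's thinning algorithm; the parameter values are illustrative, not taken from the thesis:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        # With an exponential kernel the intensity decays between events,
        # so the current intensity bounds lambda until the next event.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept with prob lambda(t)/bound
            events.append(t)
    return np.array(events)

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
```

Each accepted event raises the intensity, so the simulated sequence shows the bursty, contagious pattern that makes the Hawkes process suitable for viral-spread modeling.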
In Chapter 3, we show that adding assortativity does not fully explain the variance in
the spread of viruses for a number of real-world networks. We propose using the graphlet
frequency distribution combined with assortativity to explain variations in the evolution
of viral processes across networks with identical degree distribution. Using a data-driven
approach, by coupling predictive modeling with viral process simulation on real-world networks,
we show that simple regression models based on graphlet frequency distribution can
explain over 95% of the variance in virality on networks with the same degree distribution
but different network topologies. Our results highlight the importance of graphlets and identify
a small collection of graphlets that may have the most significant influence over the viral
processes on a network.
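The regression step can be sketched minimally, assuming graphlet frequencies as features and ordinary least squares as the model; the data below are entirely synthetic stand-ins, not the thesis's networks or simulations:

```python
import numpy as np

# Hypothetical setup: rows are networks, columns are graphlet frequencies,
# y is a simulated virality score. Purely synthetic illustration.
rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 5))            # 40 networks, 5 graphlet features
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=40)

# Ordinary least squares: w_hat = argmin ||X w - y||^2
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((X @ w_hat - y) ** 2) / np.sum((y - y.mean()) ** 2)
```

An R² above 0.95 on such a fit is the kind of evidence behind the "over 95% of the variance" claim; with near-noiseless synthetic data the fit here is essentially perfect by construction.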
Due to the flexibility and expressiveness of deep learning techniques, several neural
network-based approaches have recently shown promise for modeling point process intensities.
However, there is a lack of research on possible adversarial attacks against
such models and on their robustness to such attacks and to natural shocks to systems.
Furthermore, while neural point processes may outperform simpler parametric models on
in-sample tests, how these models perform when encountering adversarial examples or sharp
non-stationary trends remains unknown.
In Chapter 4, we propose several white-box and black-box adversarial attacks against
deep temporal point processes. Additionally, we investigate the transferability of white-box
adversarial attacks against point processes modeled by deep neural networks; such transferable
attacks are considered an elevated risk. Extensive experiments confirm that neural point processes
are vulnerable to adversarial attacks. Such a vulnerability is illustrated both in terms of
predictive metrics and the effect of attacks on the underlying point process’s parameters.
Specifically, adversarial attacks successfully shift the temporal Hawkes process regime
from sub-critical to super-critical and manipulate the modeled parameters, which is
a particular risk for parametric modeling approaches. Additionally, we evaluate the
vulnerability and performance of these models in the presence of non-stationary abrupt
changes, using the crime and COVID-19 pandemic datasets as examples.
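The sub-critical/super-critical distinction has a simple quantitative form for exponential-kernel Hawkes processes: the branching ratio n = α/β is the expected number of offspring events per event, and the process is stable only when n < 1. A small sketch of the check such an attack would flip, with illustrative values standing in for the clean and attacked models:

```python
def hawkes_regime(alpha, beta):
    """Classify an exponential-kernel Hawkes process by its branching
    ratio n = alpha / beta, the expected offspring per event."""
    n = alpha / beta
    if n < 1.0:
        return "sub-critical"    # bursts die out; stationary process
    return "super-critical"      # event cascades grow without bound

clean = hawkes_regime(alpha=0.6, beta=1.0)     # n = 0.6, hypothetical fit
attacked = hawkes_regime(alpha=1.3, beta=1.0)  # n = 1.3 after a parameter shift
```

An attack that nudges the fitted α above β thus changes the model's qualitative prediction from decaying to explosive cascades.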
Despite the success of deep learning techniques in modeling temporal point processes,
deep-learning models, including deep temporal point processes, remain vulnerable to
adversarial attacks, so it is essential to ensure the robustness of the deployed
algorithms.
In Chapter 5, we study the robustness of deep temporal point processes against several
proposed adversarial attacks from the adversarial defense viewpoint. Specifically, we
investigate the effectiveness of adversarial training using universal adversarial samples in
improving the robustness of the deep point processes. Additionally, we propose a general
point process domain-adapted (GPDA) regularization, which is strictly applicable to temporal
point processes, to reduce the effect of adversarial attacks and acquire an empirically
robust model. In this approach, unlike other computationally expensive approaches, there
is no need for additional back-propagation in the training step, and no further network is
required. Ultimately, we propose an adversarial detection framework that is trained
in a Generative Adversarial Network (GAN) manner, solely on clean training data.
Finally, in Chapter 6, we discuss the implications of this research and future research directions.
The Cloud-to-Thing Continuum
The Internet of Things offers massive societal and economic opportunities while at the same time posing significant challenges, not least the delivery and management of the technical infrastructure underpinning it, the deluge of data generated by it, ensuring privacy and security, and capturing value from it. This Open Access Pivot explores these challenges, presenting the state of the art and future directions for research, as well as frameworks for making sense of this complex area. This book provides a variety of perspectives on how technology innovations such as fog, edge and dew computing, 5G networks, and distributed intelligence are making us rethink conventional cloud computing to support the Internet of Things. Much of this book focuses on technical aspects of the Internet of Things; however, clear methodologies for mapping the business value of the Internet of Things are still missing. We provide a value mapping framework for the Internet of Things to address this gap. While there is much hype about the Internet of Things, we have yet to reach the tipping point. As such, this book provides a timely entrée for higher education educators, researchers and students, industry, and policy makers on the technologies that promise to reshape how society interacts and operates.
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC.
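The idea of searching beyond local optimality can be illustrated with the simplest metaheuristic, simulated annealing, which accepts occasional uphill moves so the search can leave a local optimum. This toy sketch is a generic illustration, not drawn from any paper in the series:

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, seed=0):
    """Minimize f over the integers. Uphill moves are accepted with
    probability exp(-delta / T), letting the search escape local optima."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9   # linear cooling schedule
        cand = x + rng.choice((-1, 1))      # neighbor move
        delta = f(cand) - f(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
    return best

# Toy landscape: local minimum at x = 0, global minimum at x = 10,
# separated by a low plateau the annealer can wander across.
f = lambda x: 1.0 if x == 0 else (0.0 if x == 10 else 1.1)
best = simulated_annealing(f, x0=0)
```

Started at the local minimum x = 0, a purely greedy search would stay put; with enough steps the annealer typically wanders over the plateau and records the global minimum at x = 10.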
Object Recognition and Parsing with Weak Supervision
Object recognition is a fundamental problem in computer vision and has attracted a lot of research attention, while object parsing is equally important for many computer vision tasks but has been less studied. With the recent development of deep neural networks, computer vision research has been dominated by deep learning approaches, which require large amounts of training data for a specific task in a specific domain. The cost of collecting rare samples and making "hard" labels is prohibitively high and has limited the development of many important vision studies, including object parsing. This dissertation will focus on object recognition and parsing with weak supervision, which tackles the problem when only a limited amount of data or labels is available for training deep neural networks in the target domain. The goal is to design more advanced computer vision models with enhanced data efficiency during training and increased robustness to out-of-distribution samples at test time. To achieve this goal, I will introduce several strategies, including unsupervised learning of compositional components in deep neural networks, zero/few-shot learning by preserving useful knowledge acquired in pre-training, weakly supervised learning combined with spatial-temporal information in video data, and learning from 3D computer graphics models and synthetic data. Furthermore, I will discuss new findings in our cognitive science projects and explain how part-based representations benefit the development of visual analogical reasoning models. I believe this series of works alleviates the data-hungry problem of deep neural networks and improves computer vision models to behave closer to human intelligence.
Generalized averaged Gaussian quadrature and applications
A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
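The intended use case, estimating the error of a Gaussian rule by comparing it with a more accurate reference rule, can be illustrated with a simplified stand-in: here an (n+1)-point Gauss-Legendre rule plays the role that the averaged formulas play in the paper.

```python
import numpy as np

def gauss_legendre(f, n):
    """Approximate the integral of f over [-1, 1] with the n-point
    Gauss-Legendre rule (exact for polynomials of degree 2n - 1)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(np.dot(weights, f(nodes)))

# Error of the 5-point rule for exp, estimated by comparison with a
# higher-order rule standing in for an averaged formula.
g5 = gauss_legendre(np.exp, 5)
g6 = gauss_legendre(np.exp, 6)
err_estimate = abs(g6 - g5)
true_err = abs(g5 - (np.e - 1.0 / np.e))   # exact integral is e - 1/e
```

Because the reference rule is far more accurate, |g6 − g5| tracks the true error of the 5-point rule closely; the paper's averaged formulas serve this purpose in cases where real positive Gauss-Kronrod extensions do not exist.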