Strategic Network Formation with Attack and Immunization
Strategic network formation arises where agents receive benefit from
connections to other agents, but also incur costs for forming links. We
consider a new network formation game that incorporates an adversarial attack,
as well as immunization against attack. An agent's benefit is the expected size
of her connected component post-attack, and agents may also choose to immunize
themselves from attack at some additional cost. Our framework is a stylized
model of settings where reachability rather than centrality is the primary
concern and vertices vulnerable to attacks may reduce risk via costly measures.
In the reachability benefit model without attack or immunization, the
equilibrium networks are precisely the empty graph and trees. The introduction of attack and
immunization changes the game dramatically; new equilibrium topologies emerge,
some more sparse and some more dense than trees. We show that, under a mild
assumption on the adversary, every equilibrium network with n agents contains
at most 2n-4 edges for n >= 4. So despite permitting topologies denser
than trees, the amount of overbuilding is limited. We also show that attack and
immunization don't significantly erode social welfare: every non-trivial
equilibrium with respect to several adversaries has welfare at least as high as that of
any equilibrium in the attack-free model.
We complement our theory with simulations demonstrating fast convergence of a
new bounded rationality dynamic which generalizes linkstable best response but
is considerably more powerful in our game. The simulations further elucidate
the wide variety of asymmetric equilibria and demonstrate topological
consequences of the dynamics, e.g. heavy-tailed degree distributions. Finally,
we report on a behavioral experiment on our game with over 100 participants,
where despite the complexity of the game, the resulting network was
surprisingly close to equilibrium.
Comment: The short version of this paper appears in the proceedings of WINE-1
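To make the payoff structure concrete, the following is a minimal Python sketch (not code from the paper): it computes one agent's expected utility on a small graph, assuming a hypothetical adversary that deletes a largest connected region of non-immunized vertices (ties broken uniformly at random); the edge cost c_edge, immunization cost c_imm, and the toy instance are illustrative assumptions.

```python
from collections import defaultdict

def components(nodes, adj):
    """Connected components of the graph restricted to `nodes`."""
    nodes, seen, comps = set(nodes), set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.add(u)
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def utility(agent, nodes, edges, bought, immunized, c_edge, c_imm):
    """Expected post-attack payoff of `agent` under a hypothetical adversary
    that deletes one largest non-immunized region (ties broken uniformly)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    vulnerable = [v for v in nodes if v not in immunized]
    regions = components(vulnerable, adj)
    targets = ([r for r in regions if len(r) == max(map(len, regions))]
               if regions else [set()])
    benefit = 0.0
    for target in targets:                      # expectation over tie-breaking
        survivors = set(nodes) - target
        if agent in survivors:
            comp = next(c for c in components(survivors, adj) if agent in c)
            benefit += len(comp) / len(targets)
    return benefit - c_edge * len(bought) - c_imm * (agent in immunized)

# toy instance: a 4-cycle in which agent 0 is immunized and buys two edges
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(utility(0, nodes, edges, bought={(0, 1), (3, 0)},
              immunized={0}, c_edge=0.5, c_imm=1.0))
```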
Quantitative information flow under generic leakage functions and adaptive adversaries
We put forward a model of action-based randomization mechanisms to analyse
quantitative information flow (QIF) under generic leakage functions, and under
possibly adaptive adversaries. This model subsumes many of the QIF models
proposed so far. Our main contributions include the following: (1) we identify
mild general conditions on the leakage function under which it is possible to
derive general and significant results on adaptive QIF; (2) we contrast the
efficiency of adaptive and non-adaptive strategies, showing that the latter are
as efficient as the former in terms of length up to an expansion factor bounded
by the number of available actions; (3) we show that the maximum information
leakage over strategies, given a finite time horizon, can be expressed in terms
of a Bellman equation. This can be used to compute an optimal finite strategy
recursively, by resorting to standard methods like backward induction.
Comment: Revised and extended version of the conference paper with the same title that appeared in Proc. of FORTE 2014, LNC
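Contribution (3) lends itself to a standard dynamic-programming computation. The sketch below is a generic finite-horizon backward induction over a Bellman recursion V_t(s) = max_a [gain(s,a) + sum_{s'} P(s'|s,a) V_{t+1}(s')]; the states, actions, gain function and transition kernel are toy placeholders, not the paper's QIF model.

```python
# Hypothetical finite-horizon backward induction:
#   V_T(s) = 0,   V_t(s) = max_a [ gain(s, a) + sum_{s'} P(s'|s, a) * V_{t+1}(s') ]
# States, actions, gain, and transitions below are illustrative placeholders.

states = ["s0", "s1"]
actions = ["a0", "a1"]

def gain(s, a):
    # stand-in for the one-step information gain of taking action a in state s
    return {"s0": {"a0": 0.3, "a1": 0.1}, "s1": {"a0": 0.0, "a1": 0.5}}[s][a]

def trans(s, a):
    # stand-in transition kernel P(s' | s, a)
    return {"s0": 0.5, "s1": 0.5} if a == "a0" else {"s0": 0.2, "s1": 0.8}

def backward_induction(horizon):
    V = {s: 0.0 for s in states}          # V_T = 0
    policy = []
    for t in reversed(range(horizon)):
        Vt, pi_t = {}, {}
        for s in states:
            q = {a: gain(s, a) + sum(p * V[sp] for sp, p in trans(s, a).items())
                 for a in actions}
            pi_t[s] = max(q, key=q.get)   # optimal action at time t in state s
            Vt[s] = q[pi_t[s]]
        V, policy = Vt, [pi_t] + policy
    return V, policy

values, strategy = backward_induction(horizon=3)
print(values)     # maximum cumulative gain from each initial state
print(strategy)   # one optimal action per (time step, state)
```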
Security Evaluation of Support Vector Machines in Adversarial Environments
Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector Machine Applications
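As a rough illustration of the evasion setting (the chapter's own reproducible code lives in the authors' public repository), the sketch below pushes a sample against the weight vector of a linear SVM until it crosses the decision boundary; the toy data, step size, and L2 perturbation budget are made-up assumptions, not the chapter's experimental setup.

```python
# Minimal evasion sketch against a linear SVM (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# toy 2-class data: class 1 ("malicious") sits above class 0 ("benign")
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def evade(x, budget, step=0.05):
    """Move x against the weight vector until it is classified as benign
    or the L2 perturbation budget is exhausted."""
    x_adv = x.copy()
    direction = -w / np.linalg.norm(w)            # steepest descent of w.x + b
    while np.dot(w, x_adv) + b > 0 and np.linalg.norm(x_adv - x) < budget:
        x_adv = x_adv + step * direction
    return x_adv

x0 = X[150]                                       # a "malicious" sample
x_adv = evade(x0, budget=5.0)
print("original score:", clf.decision_function([x0])[0])
print("evasion score: ", clf.decision_function([x_adv])[0])
```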
Building Confidential and Efficient Query Services in the Cloud with RASP Data Perturbation
With the wide deployment of public cloud computing infrastructures, using
clouds to host data query services has become an appealing solution owing to its
advantages in scalability and cost savings. However, some data might be so
sensitive that the data owner does not want to move it to the cloud unless
data confidentiality and query privacy are guaranteed. On the other hand, a
secure query service should still provide efficient query processing and
significantly reduce the in-house workload to fully realize the benefits of
cloud computing. We propose the RASP data perturbation method to provide secure
and efficient range query and kNN query services for protected data in the
cloud. The RASP data perturbation method combines order preserving encryption,
dimensionality expansion, random noise injection, and random projection, to
provide strong resilience to attacks on the perturbed data and queries. It also
preserves multidimensional ranges, which allows existing indexing techniques to
be applied to speed up range query processing. The kNN-R algorithm is designed
to work with the RASP range query algorithm to process the kNN queries. We have
carefully analyzed the attacks on data and queries under a precisely defined
threat model and realistic security assumptions. Extensive experiments have
been conducted to show the advantages of this approach on efficiency and
security.
Comment: 18 pages, to appear in IEEE TKDE, accepted in December 201
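To give a feel for the dimensionality-expansion and random-projection ingredients, here is a heavily simplified sketch; it deliberately omits the order preserving encryption step and the actual RASP query transformation, and the secret key matrix A and the noise model are illustrative assumptions only, not the scheme from the paper.

```python
# Simplified sketch of the dimensionality-expansion + random-projection idea.
# This is NOT the RASP scheme itself; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(42)

def make_key(d):
    """Secret key: a random invertible (d+2) x (d+2) matrix."""
    while True:
        A = rng.normal(size=(d + 2, d + 2))
        if abs(np.linalg.det(A)) > 1e-6:
            return A

def perturb(X, A):
    """Expand each d-dim record with a constant 1 and a fresh random noise
    dimension, then project with the secret matrix A."""
    n, d = X.shape
    noise = rng.uniform(1.0, 2.0, size=(n, 1))        # positive random dimension
    ones = np.ones((n, 1))
    return (A @ np.hstack([X, ones, noise]).T).T      # perturbed records

def recover(P, A, d):
    """The data owner can invert the projection and drop the extra dimensions."""
    return (np.linalg.inv(A) @ P.T).T[:, :d]

X = rng.uniform(0, 100, size=(5, 3))                  # 5 records, 3 attributes
A = make_key(d=3)
P = perturb(X, A)
print(np.allclose(recover(P, A, d=3), X))             # True: owner can recover
```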
Towards a science of security games
Security is a critical concern around the world. In many domains from counter-terrorism to sustainability, limited security resources prevent complete security coverage at all times. Instead, these limited resources must be scheduled (or allocated or deployed), while simultaneously taking into account the importance of different targets, the responses of the adversaries to the security posture, and the potential uncertainties in adversary payoffs and observations, etc. Computational game theory can help generate such security schedules. Indeed, casting the problem as a Stackelberg game, we have developed new algorithms that are now deployed over multiple years in multiple applications for scheduling of security resources. These applications are leading to real-world use-inspired research in the emerging research area of “security games”. The research challenges posed by these applications include scaling up security games to real-world sized problems, handling multiple types of uncertainty, and dealing with bounded rationality of human adversaries.
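A small worked example of the Stackelberg view: the sketch below uses the standard "multiple LPs" formulation, solving one linear program per candidate attacker target and keeping the best defender outcome; the three-target payoffs and single-resource budget are invented for illustration and are not taken from any deployed application.

```python
# Toy "multiple LPs" computation of Stackelberg security-game coverage.
# Payoff numbers and the resource budget are made up for illustration.
import numpy as np
from scipy.optimize import linprog

# per-target payoffs: defender/attacker utility when the attacked target is
# covered (cov) or uncovered (unc)
Ud_cov = np.array([ 0.0,  0.0,  0.0])
Ud_unc = np.array([-5.0, -3.0, -8.0])
Ua_cov = np.array([-2.0, -1.0, -3.0])
Ua_unc = np.array([ 4.0,  2.0,  6.0])
n, resources = 3, 1

best = (-np.inf, None, None)
for t_star in range(n):            # one LP per candidate attacker best response
    c = np.zeros(n)
    c[t_star] = -(Ud_cov[t_star] - Ud_unc[t_star])     # maximize defender payoff
    A_ub, b_ub = [np.ones(n)], [resources]             # coverage budget
    for t in range(n):
        if t == t_star:
            continue
        row = np.zeros(n)
        row[t] = Ua_cov[t] - Ua_unc[t]
        row[t_star] = -(Ua_cov[t_star] - Ua_unc[t_star])
        A_ub.append(row)
        b_ub.append(Ua_unc[t_star] - Ua_unc[t])         # t_star stays attacker-optimal
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    if res.success:
        x = res.x
        u_def = x[t_star] * Ud_cov[t_star] + (1 - x[t_star]) * Ud_unc[t_star]
        if u_def > best[0]:
            best = (u_def, t_star, x)

print("defender utility:", best[0])
print("attacked target: ", best[1])
print("coverage:        ", np.round(best[2], 3))
```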
Fair Leader Election for Rational Agents in Asynchronous Rings and Networks
We study a game-theoretic model in which a coalition of processors might collude
to bias the outcome of the protocol, under the assumption that processors always
prefer any legitimate outcome over a non-legitimate one. We show that the
problems of Fair Leader Election and Fair Coin Toss are equivalent, and focus
on Fair Leader Election.
Our main focus is on a directed asynchronous ring of n processors, where we
investigate the protocol proposed by Abraham et al.
\cite{abraham2013distributed} and studied in Afek et al.
\cite{afek2014distributed}. We show that in general the protocol is resilient
only to sub-linear size coalitions. Specifically, we show that sub-linearly many
randomly located processors, or sub-linearly many adversarially located
processors, can force any outcome. We complement this by showing that the
protocol is nevertheless resilient to any adversarial coalition of sufficiently
small size.
We propose a modification to the protocol, and show the exact coalition size up
to which it is resilient, by exhibiting both a matching attack and a resilience
result. For every k, we define a family of graphs G_k that can be simulated by
trees in which each node of the tree simulates at most k processors. We show
that for every graph in G_k, there is no fair leader election protocol that is
resilient to coalitions of size k. Our result generalizes a previous result
of Abraham et al. \cite{abraham2013distributed} which gives, for every graph,
a coalition size against which no fair leader election protocol can be resilient.
Comment: 48 pages, PODC 201
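For intuition about the fairness notion, here is a simplified, synchronous commit-and-reveal sketch of the classic sum-modulo-n idea behind fair leader election; it only illustrates why no single processor can bias the outcome on its own, and it is not the asynchronous-ring protocol analysed in the paper.

```python
# Simplified, synchronous commit-and-reveal sketch of fair leader election:
# leader = (sum of everyone's random value) mod n.  Illustration only; this is
# not the asynchronous ring protocol studied in the paper.
import hashlib
import secrets

def commit(value: int, nonce: bytes) -> str:
    return hashlib.sha256(nonce + value.to_bytes(8, "big")).hexdigest()

def elect_leader(n: int) -> int:
    # phase 1: every processor draws a value and publishes only a commitment
    values = [secrets.randbelow(n) for _ in range(n)]
    nonces = [secrets.token_bytes(16) for _ in range(n)]
    commitments = [commit(v, r) for v, r in zip(values, nonces)]

    # phase 2: values are revealed and checked against the commitments
    for v, r, c in zip(values, nonces, commitments):
        assert commit(v, r) == c, "a processor deviated from its commitment"

    # as long as at least one value was chosen uniformly and independently,
    # the sum modulo n is uniform over the processors
    return sum(values) % n

print("elected leader:", elect_leader(n=7))
```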