Selfish Response to Epidemic Propagation
An epidemic spreading in a network calls for a decision on the part of the
network members: each must decide whether or not to protect itself. This
decision depends on the trade-off between the perceived risk of being
infected and the cost of protection. The network members can revise their
decisions repeatedly, based on information they receive about the changing
infection level in the network.
We study the equilibrium states reached by a network whose members increase
(resp. decrease) their security deployment when learning that the network
infection is widespread (resp. limited). Our main finding is that the
equilibrium level of infection increases as the learning rate of the members
increases. We confirm this result in three scenarios for the behavior of the
members: strictly rational cost minimizers, not strictly rational, and strictly
rational but split into two response classes. In the first two cases, we
completely characterize the stability and the domains of attraction of the
equilibrium points, even though the first case leads to a differential
inclusion. We validate our conclusions with simulations on human mobility
traces.
Comment: 19 pages, 5 figures, submitted to the IEEE Transactions on Automatic Control
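The adaptive dynamics described above can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual model: the SIS-like spread term, the threshold response, and all parameter values (`beta`, `delta`, `threshold`) are illustrative assumptions.

```python
# Toy discrete-time sketch (NOT the paper's model): infected fraction x
# and protected fraction p co-evolve; members raise their protection when
# the reported infection level is high, at a learning rate lam.
def simulate(lam, beta=0.6, delta=0.3, threshold=0.2, steps=20000, dt=0.01):
    x, p = 0.05, 0.0
    for _ in range(steps):
        # SIS-like spread among unprotected members only
        dx = beta * x * (1.0 - p - x) - delta * x
        # members move protection up or down based on observed infection
        target = 1.0 if x > threshold else 0.0
        dp = lam * (target - p)
        x = min(max(x + dt * dx, 0.0), 1.0)
        p = min(max(p + dt * dp, 0.0), 1.0)
    return x, p

x_slow, p_slow = simulate(lam=0.1)
x_fast, p_fast = simulate(lam=2.0)
print(f"slow learners: infection {x_slow:.3f}, protection {p_slow:.3f}")
print(f"fast learners: infection {x_fast:.3f}, protection {p_fast:.3f}")
```

Varying `lam` here only changes how quickly members react; reproducing the paper's equilibrium comparison would require its actual response model.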
Evolutionary Poisson Games for Controlling Large Population Behaviors
Emerging applications in engineering such as crowd-sourcing and
(mis)information propagation involve a large population of heterogeneous users
or agents in a complex network who strategically make dynamic decisions. In
this work, we establish an evolutionary Poisson game framework to capture the
random, dynamic and heterogeneous interactions of agents in a holistic fashion,
and design mechanisms to control their behaviors to achieve a system-wide
objective. We use the antivirus protection challenge in cyber security to
motivate the framework, where each user in the network can choose whether or
not to adopt the software. We introduce the notion of evolutionary Poisson
stable equilibrium for the game, and show its existence and uniqueness. Online
algorithms are developed using the techniques of stochastic approximation
coupled with the population dynamics, and they are shown to converge to the
optimal solution of the controller problem. Numerical examples are used to
illustrate and corroborate our results.
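The online algorithms mentioned above rest on stochastic approximation. The following is a hedged sketch of a generic Robbins-Monro loop in that spirit, not the paper's algorithm: the sigmoid population response, the noise level, and the target adoption fraction are all assumptions made for illustration.

```python
import math
import random

# Hedged sketch of a Robbins-Monro stochastic-approximation loop: a
# controller tunes an incentive theta so that the noisy observed adoption
# fraction approaches a target. The sigmoid response is a hypothetical
# stand-in for the population dynamics.
random.seed(0)

def observed_adoption(theta):
    # hypothetical population response to the incentive, plus sampling noise
    return 1.0 / (1.0 + math.exp(-theta)) + random.gauss(0.0, 0.05)

target = 0.8
theta = 0.0
for n in range(1, 5001):
    step = 1.0 / n  # diminishing step sizes satisfy the Robbins-Monro conditions
    theta += step * (target - observed_adoption(theta))

mean_adoption = sum(observed_adoption(theta) for _ in range(2000)) / 2000
print(f"incentive after 5000 steps: {theta:.2f}")
print(f"mean adoption at that incentive: {mean_adoption:.2f}")
```

The diminishing step size averages out the observation noise while still allowing the iterate to drift toward the fixed point where adoption matches the target.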
Coordination in Network Security Games: a Monotone Comparative Statics Approach
Malicious software, or malware for short, has become a major security
threat. While originating in criminal behavior, its impact is also
influenced by the decisions of legitimate end users. Getting agents in the
Internet, and in networks in general, to invest in and deploy security features
and protocols is a challenge, in particular because of economic reasons arising
from the presence of network externalities.
In this paper, we focus on the question of incentive alignment for agents of
a large network towards better security. We start with an economic model for
a single agent, that determines the optimal amount to invest in protection. The
model takes into account the vulnerability of the agent to a security breach
and the potential loss if a security breach occurs. We derive conditions on the
quality of the protection to ensure that the optimal amount spent on security
is an increasing function of the agent's vulnerability and potential loss. We
also show that for a large class of risks, only a small fraction of the
expected loss should be invested.
Building on these results, we study a network of interconnected agents
subject to epidemic risks. We derive conditions ensuring that the incentives
of all agents are aligned towards better security. When agents are strategic,
we show that security investments are always socially inefficient due to the
network externalities. Moreover, alignment of incentives typically implies a
coordination problem, leading to an equilibrium with a very high price of
anarchy.
Comment: 10 pages, to appear in IEEE JSAC
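The single-agent result above — that only a small fraction of the expected loss should be invested — can be illustrated numerically. This sketch is not the paper's model: the breach-probability form p(z) = v / (1 + z), in the Gordon-Loeb tradition, and the parameter values are assumptions chosen for illustration.

```python
# Hedged numerical sketch: a single agent chooses an investment z to
# minimize z + p(z) * L, where p(z) = v / (1 + a*z) is a hypothetical
# breach-probability curve, v the vulnerability and L the potential loss.
def optimal_investment(v, L, a=1.0, grid=100000, z_max=50.0):
    best_z, best_cost = 0.0, v * L  # z = 0 baseline: bear the expected loss
    for i in range(1, grid + 1):
        z = z_max * i / grid
        cost = z + (v / (1.0 + a * z)) * L
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z

z_star = optimal_investment(v=0.5, L=100.0)
expected_loss = 0.5 * 100.0
print(f"optimal investment: {z_star:.2f}")
print(f"as a fraction of the expected loss: {z_star / expected_loss:.2f}")
```

For these illustrative numbers the first-order condition gives z* = sqrt(vL) - 1 ≈ 6.07, roughly 12% of the expected loss of 50 — consistent in spirit with the "small fraction" claim, though under an assumed risk model.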
Decentralized Protection Strategies against SIS Epidemics in Networks
Defining an optimal protection strategy against viruses, spam propagation, or
any other kind of contamination process is an important step in designing
new networks and architectures. In this work, we consider decentralized optimal
protection strategies when a virus is propagating over a network through a SIS
epidemic process. We assume that each node in the network can fully protect
itself from infection at a constant cost or, once infected, can use recovery
software.
We model our system using a game-theoretic framework and find pure and mixed
equilibria, as well as the Price of Anarchy (PoA), in several network topologies.
Further, we propose both a decentralized algorithm and an iterative procedure
to compute a pure equilibrium in the general case of a multiple communities
network. Finally, we evaluate the algorithms and give numerical illustrations
of all our results.
Comment: accepted for publication in IEEE Transactions on Control of Network Systems
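The protect-or-recover trade-off above lends itself to a best-response sketch. This is not the paper's algorithm: the infection-risk proxy (risk grows with the number of unprotected neighbours), the ring topology, and the costs are all hypothetical choices for illustration.

```python
# Hedged best-response sketch: each node either protects at cost c or
# faces an infection cost approximated as loss * (1 - 0.5**k), where k is
# its number of unprotected neighbours -- a crude stand-in for SIS risk.
# Iterating best responses until no node wants to switch yields a pure
# equilibrium when the dynamics settle.
def best_response_equilibrium(adj, c, loss, rounds=100):
    n = len(adj)
    protect = [False] * n
    for _ in range(rounds):
        changed = False
        for i in range(n):
            k = sum(1 for j in adj[i] if not protect[j])
            risk = loss * (1.0 - 0.5 ** k)  # risk if node i stays unprotected
            want = risk > c
            if want != protect[i]:
                protect[i], changed = want, True
        if not changed:
            break
    return protect

ring = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]  # ring of 8 nodes
eq = best_response_equilibrium(ring, c=0.6, loss=1.0)
print("protected nodes:", [i for i, p in enumerate(eq) if p])
```

On this ring the dynamics settle into an alternating pattern (nodes 0, 2, 4, 6 protect): each protected node faces two unprotected neighbours, while each unprotected node free-rides on two protected ones — the network externality at work.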
Behavioural verification: preventing report fraud in decentralized advert distribution systems
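The timing intuition behind the honesty classification — bursts of reports suggest fraud, spaced-out viewing suggests a real consumer — can be sketched as a simple sliding-window check. The paper combines richer behavioural signals (purchases, mobility, social interactions); the window length and threshold here are hypothetical.

```python
# Hedged illustration of the timing signal only: flag a user as
# suspicious when too many ad-reports fall inside a short sliding window.
def looks_dishonest(timestamps, window=60.0, max_reports=3):
    """timestamps: sorted report times in seconds."""
    for i in range(len(timestamps)):
        j = i
        while j < len(timestamps) and timestamps[j] - timestamps[i] <= window:
            j += 1
        if j - i > max_reports:
            return True
    return False

honest = [0, 400, 950, 1800, 2600]   # adverts viewed at a balanced pace
fraudulent = [0, 5, 11, 18, 25, 31]  # burst of reports within seconds
print(looks_dishonest(honest))       # -> False
print(looks_dishonest(fraudulent))   # -> True
```

A real deployment would fuse this with the other behavioural patterns the abstract mentions, since a patient attacker can pace fabricated reports to defeat any single timing rule.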
Service commissions, which are claimed by Ad-Networks and Publishers, are susceptible to forgery as non-human operators are able to artificially create fictitious traffic on digital platforms for the purpose of committing financial fraud. This places a significant strain on Advertisers who have no effective means of differentiating fabricated Ad-Reports from those which correspond to real consumer activity. To address this problem, we contribute an advert reporting system which utilizes opportunistic networking and a blockchain-inspired construction in order to identify authentic Ad-Reports by determining whether they were composed by honest or dishonest users. What constitutes a user's honesty for our system is the manner in which they access adverts on their mobile device. Dishonest users submit multiple reports over a short period of time while honest users behave as consumers who view adverts at a balanced pace while engaging in typical social activities such as purchasing goods online, moving through space and interacting with other users. We argue that it is hard for dishonest users to fake honest behaviour and we exploit the behavioural patterns of users in order to classify Ad-Reports as real or fabricated. By determining the honesty of the user who submitted a particular report, our system offers a more secure reward-claiming model which protects against fraud while still preserving the user's anonymity