21 research outputs found

    Allocating Limited Resources to Protect a Massive Number of Targets using a Game Theoretic Model

    Full text link
    Resource allocation is the process of optimally assigning scarce resources. In the area of security, allocating limited resources to protect a massive number of targets is especially challenging. This paper addresses this resource allocation issue by constructing a game-theoretic model. A defender and an attacker are the players, and their interaction is formulated as a trade-off between protecting targets and consuming resources. The action cost, an essential component of resource consumption, is incorporated into the proposed model. Additionally, a boundedly rational behavior model (Quantal Response, QR), which simulates an adversarial human attacker, is introduced to improve the proposed model. To validate the proposed model, we compare different utility functions and resource allocation strategies. The comparison results suggest that the proposed resource allocation strategy outperforms the alternatives in terms of utility and resource effectiveness. Comment: 14 pages, 12 figures, 41 references
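    As a concrete illustration of the QR ingredient mentioned in this abstract, the minimal sketch below computes attack probabilities from attacker utilities and the resulting expected defender loss. The target values, coverage vector, action cost, and rationality parameter lam are illustrative assumptions, not values from the paper.

        # Minimal sketch of a quantal-response (QR) attacker facing a defender
        # coverage vector; all numbers below are illustrative assumptions.
        import numpy as np

        def qr_attack_probs(attacker_utils, lam=0.8):
            # QR model: attack probability on target t is proportional to exp(lam * U_a(t)).
            z = np.exp(lam * (attacker_utils - attacker_utils.max()))  # stabilized softmax
            return z / z.sum()

        # Attacker utility: target value discounted by coverage, minus a fixed action cost.
        values = np.array([10.0, 6.0, 3.0])    # value of each target to the attacker
        coverage = np.array([0.6, 0.3, 0.1])   # defender coverage probability per target
        action_cost = 1.0
        attacker_utils = values * (1 - coverage) - action_cost

        p = qr_attack_probs(attacker_utils)
        defender_loss = float(np.dot(p, values * (1 - coverage)))  # expected damage to the defender
        print(p, defender_loss)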

    Human-Agent Decision-making: Combining Theory and Practice

    Full text link
    Extensive work has been conducted in both game theory and logic to model strategic interaction. An important question is whether these theories can be used to design agents for interacting with people. On the one hand, they provide a formal design specification for agent strategies. On the other hand, people do not necessarily adhere to playing in accordance with these strategies, and their behavior is affected by a multitude of social and psychological factors. In this paper we consider the question of whether strategies implied by theories of strategic behavior can be used by automated agents that interact proficiently with people. We focus on automated agents that we built that need to interact with people in two negotiation settings: bargaining and deliberation. For bargaining we study game-theory-based equilibrium agents, and for argumentation we discuss logic-based argumentation theory. We also consider security games and persuasion games and discuss the benefits of using equilibrium-based agents. Comment: In Proceedings TARK 2015, arXiv:1606.0729
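    To make the notion of an equilibrium prescription concrete, the sketch below computes the subgame-perfect split of the classic Rubinstein alternating-offers bargaining game. This textbook example is an assumption chosen purely for illustration, not one of the agents described in the abstract.

        def rubinstein_split(delta_proposer, delta_responder):
            # Subgame-perfect equilibrium share kept by the first proposer when each
            # player discounts every round of delay by their respective delta.
            share = (1 - delta_responder) / (1 - delta_proposer * delta_responder)
            return share, 1 - share

        # Example: both players discount each round of delay by 10%.
        proposer_share, responder_share = rubinstein_split(0.9, 0.9)
        print(round(proposer_share, 3), round(responder_share, 3))  # roughly 0.526 / 0.474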

    Towards a science of security games

    Get PDF
    Abstract. Security is a critical concern around the world. In many domains, from counter-terrorism to sustainability, limited security resources prevent complete security coverage at all times. Instead, these limited resources must be scheduled (or allocated, or deployed) while simultaneously taking into account the importance of different targets, the responses of the adversaries to the security posture, and the potential uncertainties in adversary payoffs and observations. Computational game theory can help generate such security schedules. Indeed, casting the problem as a Stackelberg game, we have developed new algorithms that are now deployed over multiple years in multiple applications for scheduling of security resources. These applications are leading to real-world, use-inspired research in the emerging research area of “security games”. The research challenges posed by these applications include scaling up security games to real-world-sized problems, handling multiple types of uncertainty, and dealing with bounded rationality of human adversaries.
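    The Stackelberg structure referred to here can be made concrete with a small sketch: the defender commits to a coverage vector, and the attacker observes it and best-responds. The targets, payoffs, and brute-force grid search below are illustrative assumptions, not the deployed algorithms the abstract describes.

        import itertools
        import numpy as np

        # Per-target payoffs: (defender if covered, defender if uncovered,
        #                      attacker if covered, attacker if uncovered)
        payoffs = {
            "airport_gate": (2.0, -10.0, -3.0, 8.0),
            "cargo_area":   (1.0,  -6.0, -2.0, 5.0),
            "perimeter":    (0.5,  -3.0, -1.0, 2.0),
        }
        targets = list(payoffs)
        resources = 1.0  # total (divisible) coverage probability to spread over targets

        def expected_utils(coverage):
            # Return (defender EU per target, attacker EU per target) for a coverage vector.
            d, a = [], []
            for t, c in zip(targets, coverage):
                dc, du, ac, au = payoffs[t]
                d.append(c * dc + (1 - c) * du)
                a.append(c * ac + (1 - c) * au)
            return np.array(d), np.array(a)

        best = None
        grid = np.linspace(0, 1, 21)
        for cov in itertools.product(grid, repeat=len(targets)):
            if sum(cov) > resources + 1e-9:
                continue
            d_eu, a_eu = expected_utils(cov)
            attacked = int(np.argmax(a_eu))  # follower (attacker) best response
            if best is None or d_eu[attacked] > best[0]:
                best = (d_eu[attacked], cov, targets[attacked])

        print("defender EU %.2f with coverage %s; attacker hits %s" % best)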

    Bounded Risk-Sensitive Markov Games: Forward Policy Design and Inverse Reward Learning with Iterative Reasoning and Cumulative Prospect Theory

    Full text link
    Classical game-theoretic approaches for multi-agent systems, in both the forward policy design problem and the inverse reward learning problem, often make strong rationality assumptions: agents perfectly maximize expected utilities under uncertainty. Such assumptions, however, conflict substantially with observed human behaviors such as satisficing with sub-optimal decisions, risk-seeking, and loss aversion. In this paper, we investigate bounded risk-sensitive Markov games (BRSMGs) and the associated inverse reward learning problem for modeling realistic human behaviors and learning human behavioral models. Drawing on iterative reasoning models and cumulative prospect theory, we assume that humans have bounded intelligence and maximize risk-sensitive utilities in BRSMGs. Convergence analyses for both the forward policy design and the inverse reward learning problems are established under the BRSMG framework. We validate the proposed forward policy design and inverse reward learning algorithms in a navigation scenario. The results show that the behaviors of the agents demonstrate both risk-averse and risk-seeking characteristics. Moreover, in the inverse reward learning task, the proposed bounded risk-sensitive inverse learning algorithm outperforms a baseline risk-neutral inverse learning algorithm by recovering not only more accurate reward values but also the intelligence levels and risk-measure parameters, given demonstrations of the agents' interactive behaviors. Comment: Accepted by the 2021 AAAI Conference on Artificial Intelligence
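    For readers unfamiliar with the cumulative prospect theory ingredient, the sketch below shows the two standard pieces it contributes, an S-shaped value function and an inverse-S probability weighting function, evaluated on a simple one-gain/one-loss gamble. The parameter values are common textbook choices, not those learned in the paper.

        import numpy as np

        def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
            # S-shaped value function: concave for gains, convex and steeper for losses.
            x = np.asarray(x, dtype=float)
            gains = np.clip(x, 0, None) ** alpha
            losses = -lam * np.clip(-x, 0, None) ** beta
            return np.where(x >= 0, gains, losses)

        def cpt_weight(p, gamma=0.61):
            # Inverse-S probability weighting: overweights small p, underweights large p.
            p = np.asarray(p, dtype=float)
            return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

        # One-gain/one-loss gamble, where the separable form below matches CPT
        # (a single weighting function is shared by gains and losses, a simplification).
        outcomes = np.array([40.0, -20.0])
        probs = np.array([0.3, 0.7])
        expected_value = float(np.dot(probs, outcomes))
        cpt_evaluation = float(np.dot(cpt_weight(probs), cpt_value(outcomes)))
        print(expected_value, cpt_evaluation)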

    Incentive Mechanisms for Participatory Sensing: Survey and Research Challenges

    Full text link
    Participatory sensing is a powerful paradigm which takes advantage of smartphones to collect and analyze data beyond the scale of what was previously possible. Given that participatory sensing systems rely completely on the users' willingness to submit up-to-date and accurate information, it is paramount to effectively incentivize users' active and reliable participation. In this paper, we survey existing literature on incentive mechanisms for participatory sensing systems. In particular, we present a taxonomy of existing incentive mechanisms for participatory sensing systems, which are subsequently discussed in depth by comparing and contrasting different approaches. Finally, we discuss an agenda of open research challenges in incentivizing users in participatory sensing. Comment: Updated version, 4/25/201

    Is Behavioral Economics Doomed?

    Get PDF
    It is fashionable to criticize economic theory for focusing too much on rationality and ignoring the imperfect and emotional way in which real economic decisions are reached. All of us facing the global economic crisis wonder just how rational economic men and women can be. Behavioral economics, an effort to incorporate psychological ideas into economics, has become all the rage. This book by well-known economist David K. Levine questions the idea that behavioral economics is the answer to economic problems. It explores the successes and failures of contemporary economics both inside and outside the laboratory. It then asks whether popular behavioral theories of psychological biases are solutions to the failures. It not only provides an overview of popular behavioral theories and their history, but also gives the reader the tools for scrutinizing them. Levine’s book is essential reading for students and teachers of economic theory and anyone interested in the psychology of economics.

    Adversarial Decision Making in Counterterrorism Applications

    Get PDF
    Our main objective is to improve decision making in counterterrorism applications by implementing expected utility for prescriptive decision making and prospect theory for descriptive modeling. The areas we aim to improve are the behavioral modeling of adversaries with multiple objectives in counterterrorism applications and the incorporation of decision makers' risk attitudes into risk matrices when assessing risk within an adversarial counterterrorism framework. Traditionally, counterterrorism applications have been approached on a single-attribute basis. We utilize a multi-attribute prospect theory approach to more realistically model the attacker's behavior, while using expected utility theory to prescribe the appropriate actions to the defender. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option of implementing a screening process to hinder the attacker. Next, we consider the use of risk matrices (a method widely used for assessing risk given a consequence-probability pairing for a potential threat) in an adversarial framework, modeling the attacker and defender risk matrices using utility theory and linking the matrices with the Luce model. A shortcoming of modeling the attacker and defender risk matrices with utility theory is its failure to account for decision makers' deviations from rational behavior, as observed in the experimental literature. We therefore consider an adversarial risk matrix framework that models the attacker risk matrix using prospect theory to overcome this shortcoming, while using expected utility theory to prescribe actions to the defender.
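    The Luce model used to link the attacker and defender matrices can be illustrated with a short sketch: positive scores for candidate threats are normalized into choice probabilities. The risk-matrix cells and scores below are hypothetical, chosen only to show the rule, and are not taken from the work described here.

        def luce_probabilities(scores):
            # Luce choice rule: P(option i) = u_i / sum_j u_j, with strictly positive scores u_i.
            if any(s <= 0 for s in scores.values()):
                raise ValueError("Luce scores must be strictly positive")
            total = sum(scores.values())
            return {option: s / total for option, s in scores.items()}

        # Hypothetical attacker scores for risk-matrix cells (consequence, likelihood).
        attacker_scores = {
            ("high consequence", "low likelihood"): 4.0,
            ("medium consequence", "medium likelihood"): 2.5,
            ("low consequence", "high likelihood"): 1.5,
        }
        print(luce_probabilities(attacker_scores))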