Applications of Repeated Games in Wireless Networks: A Survey
A repeated game is an effective tool for modeling interactions and conflicts among players pursuing their objectives on a long-term basis. Unlike static noncooperative games, which model an interaction among players in only one period, repeated games let interactions repeat over multiple periods; players thus become aware of other players' past behavior and of their own future benefits, and adapt their behavior accordingly. In wireless networks, conflicts among wireless nodes can lead to selfish behaviors, resulting in poor network performance and detrimental individual payoffs. In this paper, we survey the applications of repeated games in different wireless networks. The main goal is to demonstrate the use of repeated games to encourage wireless nodes to cooperate, thereby improving network performance and avoiding network disruption due to selfish behaviors. Furthermore, various problems in wireless networks and variations of repeated game models, together with the corresponding solutions, are discussed in this survey. Finally, we outline some open issues and future research directions.
Comment: 32 pages, 15 figures, 5 tables, 168 references
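The cooperation mechanism the survey describes can be illustrated with a toy repeated prisoner's dilemma between two wireless nodes deciding whether to forward each other's packets. This is a generic sketch, not a model taken from the survey: the payoff values, the grim-trigger strategy, and the discount factor are standard textbook choices.

```python
# Two wireless nodes repeatedly decide whether to forward ("C") or
# drop ("D") each other's packets. Payoffs are illustrative
# prisoner's-dilemma values, not figures from the survey.

PAYOFF = {  # (my action, other's action) -> my stage payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def discounted_payoff(my_moves, other_moves, delta):
    """Sum of stage payoffs weighted by discount factor delta."""
    return sum((delta ** t) * PAYOFF[(m, o)]
               for t, (m, o) in enumerate(zip(my_moves, other_moves)))

def grim_trigger(history_of_other):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in history_of_other else "C"

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b

always_defect = lambda hist: "D"

# Mutual grim trigger sustains cooperation in every round;
# a defector earns a one-shot gain, then mutual punishment.
a, b = play(grim_trigger, grim_trigger, rounds=50)
a2, b2 = play(always_defect, grim_trigger, rounds=50)

delta = 0.9  # patient players value future payoffs highly
coop = discounted_payoff(a, b, delta)
defect = discounted_payoff(a2, b2, delta)
print(coop > defect)  # with delta = 0.9, sustained cooperation pays more
```

This is the basic logic behind the repeated-game incentive schemes the survey covers: when nodes are sufficiently patient, the long-run value of cooperation outweighs the one-shot gain from selfish deviation.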
Game-Theoretic Safety Assurance for Human-Centered Robotic Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must have the ability to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like our homes, cities, and roads makes it unviable to rely on common design-time assumptions, since these may be violated once the system is deployed. Instead, the next generation of robotic technologies will need to reason about safety online, constructing high-confidence assurances informed by ongoing observations of the environment and other agents, even though its models of them are necessarily fallible. This dissertation aims to lay down the necessary foundations to enable autonomous systems to ensure their own safety in complex, changing, and uncertain environments by explicitly reasoning about the gap between their models and the real world. It first introduces a suite of novel robust optimal control formulations and algorithmic tools that permit tractable safety analysis in time-varying, multi-agent systems, as well as safe real-time robotic navigation in partially unknown environments; these approaches are demonstrated on large-scale unmanned air traffic simulations and physical quadrotor platforms. After this, it draws on Bayesian machine learning methods to translate model-based guarantees into high-confidence assurances, monitoring the reliability of predictive models in light of changing evidence about the physical system and surrounding agents. This principle is first applied to a general safety framework allowing the use of learning-based control (e.g., reinforcement learning) for safety-critical robotic systems such as drones, and then combined with insights from cognitive science and dynamic game theory to enable safe human-centered navigation and interaction; these techniques are showcased on physical quadrotors, flying in unmodeled wind and among human pedestrians, and in simulated highway driving. The dissertation ends with a discussion of challenges and opportunities ahead, including the bridging of safety analysis and reinforcement learning and the need to "close the loop" around learning and adaptation in order to deploy increasingly advanced autonomous systems with confidence.
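The idea of monitoring a predictive model's reliability in light of changing evidence can be caricatured as a Bayesian belief over whether the model is still valid, with a fallback to a conservative controller when the belief drops. Everything below, the Gaussian error likelihoods, the thresholds, and the numbers, is an illustrative assumption, not the dissertation's actual algorithm.

```python
# Hedged sketch: keep a Bayesian belief that a predictive model is
# "valid", update it from observed prediction errors, and switch to a
# conservative safety controller when the belief drops. All names and
# parameters are illustrative placeholders.

import math

def gaussian_likelihood(error, sigma):
    return math.exp(-0.5 * (error / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_belief(belief, error, sigma_good=0.1, sigma_bad=1.0):
    """One Bayes step: P(valid | error) from P(error | valid / invalid)."""
    l_good = gaussian_likelihood(error, sigma_good)  # errors if model valid
    l_bad = gaussian_likelihood(error, sigma_bad)    # errors if model failing
    num = l_good * belief
    return num / (num + l_bad * (1.0 - belief))

belief = 0.95
for error in [0.05, 0.02, 0.08]:        # small errors: model looks fine
    belief = update_belief(belief, error)
confident = belief

for error in [0.9, 1.1, 0.8]:           # large errors: model is failing
    belief = update_belief(belief, error)

use_learned_controller = belief > 0.5   # else fall back to the safe policy
print(confident > 0.9, use_learned_controller)
```

Small prediction errors reinforce confidence in the learned model; a run of large errors collapses the belief and triggers the conservative fallback, which is the qualitative behavior the dissertation's monitoring principle aims for.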
Robust Inflation-Targeting Rules and the Gains from International Policy Coordination
This paper empirically assesses the performance of interest-rate monetary rules for interdependent economies characterized by model uncertainty. We set out a two-bloc dynamic stochastic general equilibrium model with habit persistence (which generates output persistence), Calvo pricing and wage-setting with indexing of non-optimized prices and wages (generating inflation persistence), incomplete financial markets, and incomplete pass-through of exchange rate changes. We estimate a linearized form of the model by Bayesian maximum-likelihood methods using US and Euro-zone data. From the estimates of the posterior distributions we then examine monetary policy conducted both independently and cooperatively by the Fed and the ECB in the form of robust inflation-targeting interest-rate rules. Comparing the utility outcome in a closed-loop Nash equilibrium with the outcome from a coordinated design of policy rules, we find a new result: the gains from monetary policy coordination rise significantly when CPI inflation-targeting interest-rate rules are designed to account for model uncertainty.
Keywords: monetary policy coordination, robustness, inflation-targeting interest-rate rules.
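A minimal sketch of the general family of inflation-targeting interest-rate rules the paper studies is a Taylor-type rule; the functional form and coefficients below are textbook placeholders, not the paper's estimated rule.

```python
# Illustrative Taylor-type rule: the nominal policy rate responds to the
# inflation deviation from target and to the output gap. Coefficients
# (phi_pi, phi_y) are textbook values, not estimates from the paper.

def policy_rate(r_neutral, pi, pi_target, output_gap,
                phi_pi=1.5, phi_y=0.5):
    """Nominal rate = neutral real rate + inflation + feedback terms."""
    return r_neutral + pi + phi_pi * (pi - pi_target) + phi_y * output_gap

# Inflation one point above target, small positive output gap:
rate = policy_rate(r_neutral=2.0, pi=3.0, pi_target=2.0, output_gap=0.5)
print(rate)  # 2.0 + 3.0 + 1.5*1.0 + 0.5*0.5 = 6.75
```

The paper's robust coordinated rules are designed over a whole posterior distribution of models rather than a single calibration, but each candidate rule has this broad feedback shape.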
Approaches to Vulnerability Analysis for Discovering the Critical Routes in Roadway Networks
All modes of transportation are vulnerable to disruptions caused by natural disasters and/or man-made events (e.g., accidents), which may have temporary or permanent consequences. Identifying crucial links whose failure could have significant effects is an important component of transportation network vulnerability assessments, and the risk of such occurrences must not be underestimated. The ability to recognize critical segments in a transportation network is essential for designing resilient networks and improving traffic conditions in scenarios like link failures, which can result in partial or full capacity reductions in the system. This study proposes two approaches for identifying critical links for both single and multiple link disruptions. New hybrid link ranking measures are proposed, and their accuracy is compared with that of existing traffic-based measures. These new ranking measures integrate aspects of traffic equilibrium and network topology. The numerical study revealed that three of the proposed measures generate valid findings while consuming much less computational power and time than full-scan analysis measures. To cover disruption possibilities beyond single link failure, an optimization model based on a game-theoretic framework, together with a heuristic algorithm to solve the mathematical formulation, is described in the second part of this research. The proposed methodology is able to identify critical sets of links under different disruption scenarios, including major and minor interruptions, non-intelligent and intelligent attackers, and the presence of a defender.
Results were evaluated with both full-scan analysis techniques and the hybrid ranking measures, and the comparison demonstrated that the proposed model and algorithm are reliable at identifying critical sets of links for random and specifically targeted attacks based on the adversary's link selection in both partial and complete link closure scenarios, while significantly reducing computational complexity. The findings indicate that identifying critical sets of links is highly dependent on the adversary's intelligence, the presence of defenders, and the disruption scenario. Furthermore, this research indicates that in disruptions of multiple links there is a complex correlation between critical links, and simply combining the most critical single links significantly underestimates the network's vulnerability.
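The full-scan baseline the study compares against can be sketched as: remove each link in turn and measure the resulting increase in total shortest-path travel cost. The toy network, travel times, and disconnection penalty below are illustrative assumptions, not data from the study.

```python
# Full-scan link criticality sketch: rank each link by the increase in
# total all-pairs travel cost when that link alone is removed. The toy
# graph and the unreachability penalty are illustrative choices.

import heapq

def dijkstra(adj, src):
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def total_cost(adj, penalty=100.0):
    # Sum of all-pairs travel times; unreachable pairs pay a fixed penalty.
    return sum(min(d, penalty)
               for u in adj for d in dijkstra(adj, u).values())

def without_edge(adj, edge):
    u0, v0 = edge
    return {u: [(v, w) for v, w in nbrs if {u, v} != {u0, v0}]
            for u, nbrs in adj.items()}

# Toy undirected road network: edge list with travel times.
edges = [("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 3.0), ("C", "D", 1.0)]
adj = {v: [] for e in edges for v in e[:2]}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

base = total_cost(adj)
ranking = sorted(
    (((u, v), total_cost(without_edge(adj, (u, v))) - base)
     for u, v, _ in edges),
    key=lambda kv: -kv[1])
most_critical = ranking[0][0]
print(most_critical)
```

This full scan solves one shortest-path problem per node per removed link, which is exactly the cost the study's cheaper hybrid ranking measures are designed to avoid.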
Provider and peer selection in the evolving Internet ecosystem
The Internet consists of thousands of autonomous networks connected together to provide end-to-end reachability. Networks of different sizes, and with different functions and business objectives, interact and co-exist in the evolving "Internet ecosystem". The Internet ecosystem is highly dynamic, experiencing growth (the birth of new networks), rewiring (changes in the connectivity of existing networks), as well as deaths (of existing networks). The dynamics of the Internet ecosystem are determined both by external "environmental" factors (such as the state of the global economy or the popularity of new Internet applications) and by the complex incentives and objectives of each network. These dynamics have major implications for what the future Internet will look like. How does the Internet evolve? What is the Internet heading towards, in terms of topological, performance, and economic organization? How do given optimization strategies affect the profitability of different networks? How do these strategies affect the Internet in terms of topology, economics, and performance?
In this thesis, we take some steps towards answering the above questions using a combination of measurement and modeling approaches. We first study the evolution of the Autonomous System (AS) topology over the last decade. In particular, we classify ASes and inter-AS links according to their business function, and study separately their evolution over the last 10 years. Next, we focus on enterprise customers and content providers at the edge of the Internet, and propose algorithms for a stub network to choose its upstream providers to maximize its utility (in terms of monetary cost, reliability, or performance). Third, we develop a model of interdomain network formation, incorporating the effects of economics, geography, and the provider/peer selection strategies of different types of networks. We use this model to examine the "outcome" of these strategies, in terms of the topology, economics, and performance of the resulting internetwork. We also investigate the effect of external factors, such as the nature of the interdomain traffic matrix, customer preferences in provider selection, and pricing/cost structures. Finally, we focus on a recent trend due to the increasing amount of traffic flowing from content providers (who generate content) to access providers (who serve end users). This has led to a tussle between content providers and access providers, who have threatened to prioritize certain types of traffic, or to charge content providers directly -- strategies that are viewed as violations of "network neutrality". In our work, we evaluate various pricing and connection strategies that access providers can use to remain profitable without violating network neutrality.
Ph.D. Committee Chair: Dovrolis, Constantine; Committee Member: Ammar, Mostafa; Committee Member: Feamster, Nick; Committee Member: Willinger, Walter; Committee Member: Zegura, Elle
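The stub-network provider-selection problem can be sketched as choosing a subset of upstream providers to maximize a utility that trades off monetary cost against reliability. The provider names, prices, failure probabilities, and utility form below are invented for illustration and are not from the thesis.

```python
# Hypothetical sketch: a stub network picks 2 of 3 upstream providers
# to maximize utility = -(cost) - expected outage penalty. Multihoming
# fails only if all chosen providers fail simultaneously. All figures
# are illustrative placeholders.

from itertools import combinations

providers = {          # name: (monthly price, independent failure prob.)
    "ISP-1": (100.0, 0.010),
    "ISP-2": (80.0, 0.050),
    "ISP-3": (120.0, 0.005),
}

def utility(chosen, outage_penalty=50000.0):
    cost = sum(providers[p][0] for p in chosen)
    p_all_down = 1.0
    for p in chosen:
        p_all_down *= providers[p][1]  # all chosen links must fail at once
    return -cost - outage_penalty * p_all_down

# Brute force over all 2-provider choices (fine at this toy scale).
best = max(combinations(providers, 2), key=utility)
print(sorted(best))
```

The interesting property this toy exposes is that the cheapest reliable pair is not necessarily the one containing the most reliable provider: once multihoming makes simultaneous failure very unlikely, price dominates the utility.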
Dynamic State Estimation in Distributed Aircraft Electric Control Systems via Adaptive Submodularity
We consider the problem of estimating the discrete state of an aircraft electric system under a distributed control architecture through active sensing. The main idea is to use a set of controllable switches to reconfigure the system in order to gather more information about the unknown state. By adaptively making a sequence of reconfiguration decisions with uncertain outcomes, then correlating measurements and prior information to make the next decision, we aim to reduce the uncertainty. A greedy strategy is developed that maximizes the one-step expected uncertainty reduction. By exploiting recent results on adaptive submodularity, we give theoretical guarantees on the worst-case performance of the greedy strategy. We apply the proposed method in a fault detection scenario where the discrete state captures possible faults in various circuit components. In addition, simple abstraction rules are proposed to alleviate state-space explosion and to scale up the strategy. Finally, the efficiency of the proposed method is demonstrated empirically on different circuits.
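The greedy strategy of maximizing one-step expected uncertainty reduction can be sketched with a toy discrete-state example: pick the switch reconfiguration whose measurement outcome yields the largest expected entropy drop over the fault hypotheses. The states, tests, and outcome table below are placeholders, not the paper's circuit model.

```python
# Greedy active-sensing sketch: choose the reconfiguration (test) with
# the largest expected reduction in entropy over the discrete fault
# state. Fault hypotheses, tests, and the deterministic outcome table
# are toy placeholders.

import math

def entropy(belief):
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

# Which measurement each test produces under each hypothesized state.
OUTCOME = {
    "close_sw1": {"ok": "volt", "fault_bus": "no_volt", "fault_gen": "no_volt"},
    "close_sw2": {"ok": "volt", "fault_bus": "volt",    "fault_gen": "no_volt"},
}

def posterior(belief, test, outcome):
    """Condition the belief on observing `outcome` after `test`."""
    post = {s: p for s, p in belief.items() if OUTCOME[test][s] == outcome}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def expected_gain(belief, test):
    """Prior entropy minus expected posterior entropy (information gain)."""
    gain = entropy(belief)
    for outcome in set(OUTCOME[test].values()):
        p_o = sum(p for s, p in belief.items() if OUTCOME[test][s] == outcome)
        if p_o > 0:
            gain -= p_o * entropy(posterior(belief, test, outcome))
    return gain

belief = {"ok": 0.5, "fault_bus": 0.25, "fault_gen": 0.25}
best_test = max(OUTCOME, key=lambda t: expected_gain(belief, t))
print(best_test)
```

Repeating this choose-observe-update loop is the greedy policy; the paper's adaptive-submodularity results are what bound how far such a one-step-greedy sequence can fall short of the optimal sensing plan.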