RISK ASSESSMENT OF MALICIOUS ATTACKS AGAINST POWER SYSTEMS
The new scenarios of malicious attack call for deeper consideration, especially when critical systems are at stake. In this framework, infrastructural systems, including power systems, represent a possible target due to the huge impact their disruption can have on society. Malicious attacks differ in nature from other, more traditional threats to power systems, since they embed a strategic interaction between the attacker and the defender (a characteristic that cannot be found in natural events or systemic failures). This difference has not been systematically analyzed in the existing literature, so new approaches and tools are needed. This paper presents a mixed-strategy game-theory model able to capture the strategic interactions between malicious agents that may be willing to attack power systems and the system operators, with their related bodies, that are in charge of defending them. At the game equilibrium, the strategies of the two players, in terms of attacking/protecting the critical elements of the system, can be obtained. The information about the attack probability of each element can be used to assess the risk associated with it, and the efficiency of defense-resource allocation is evaluated in terms of the corresponding risk. Reference defense plans for online defense actions and for defense actions with a time delay can be obtained according to their respective time constraints. Moreover, the sensitivity of risk to variations in defense/attack resources is also analyzed. The model is applied to the standard IEEE RTS-96 test system for illustrative purposes and, on the basis of that system, some peculiar aspects of malicious attacks are pointed out.
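The attacker/defender interaction described in this abstract can be illustrated with a minimal sketch (not the paper's actual model): a zero-sum game over two hypothetical grid elements, in which the attacker picks an element to strike and the defender picks one to protect, protection reducing damage to a residual fraction. The closed-form mixed-strategy equilibrium of the resulting 2x2 game yields the kind of attack probabilities the abstract describes as risk inputs. All damage values and the residual factor below are invented for illustration.

```python
def solve_2x2(A):
    """Closed-form mixed-strategy equilibrium of a 2x2 zero-sum game.

    A[i][j] is the payoff to the row player (attacker) when the row
    player plays i and the column player (defender) plays j.  Assumes
    the game has no saddle point, so both players fully mix.
    """
    a, b = A[0]
    c, d = A[1]
    den = a - b - c + d
    p = (d - c) / den          # attacker's probability of action 0
    q = (d - b) / den          # defender's probability of action 0
    v = (a * d - b * c) / den  # game value (expected damage)
    return p, q, v

# Hypothetical two-element system: unprotected damages d1, d2;
# protection cuts damage to 20% (residual factor r).
d1, d2, r = 10.0, 6.0, 0.2
A = [[r * d1, d1],   # attacker hits element 1: low damage if defended
     [d2, r * d2]]   # attacker hits element 2: low damage if defended
p, q, v = solve_2x2(A)
# p: equilibrium probability the attacker targets element 1 (risk input)
# q: probability the defender protects element 1; v: expected damage
# -> p = 0.375, q = 0.6875, v = 4.5
```

In a realistic setting the game has one strategy per critical element and is solved as a linear program rather than in closed form, but the interpretation is the same: the equilibrium attack distribution gives a per-element risk measure, and the defender's distribution is a reference protection plan.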
Artificial Intelligence, International Competition, and the Balance of Power (May 2018)
World leaders, CEOs, and academics have suggested that a revolution in artificial intelligence is upon us. Are they right, and what will advances in artificial intelligence mean for international competition and the balance of power? This article evaluates how developments in artificial intelligence (AI), advanced, narrow applications in particular, are poised to influence military power and international politics. It describes how AI more closely resembles "enabling" technologies such as the combustion engine or electricity than a specific weapon. AI's still-emerging developments make it harder to assess than many technological changes, especially since many of the organizational decisions about the adoption and uses of new technology that generally shape that technology's impact are still in their infancy. The article then explores the possibility that key drivers of AI development in the private sector could cause the rapid diffusion of military applications of AI, limiting first-mover advantages for innovators. Alternatively, given uncertainty about the technological trajectory of AI, it is also possible that military uses of AI will be harder to develop from private-sector AI technologies than many expect, generating more potential first-mover advantages for existing powers such as China and the United States, as well as larger consequences for relative power if a country fails to adapt. Finally, the article discusses the extent to which U.S. military rhetoric about the importance of AI matches the reality of U.S. investments.
LBJ School of Public Affairs
Global Solutions vs. Local Solutions for the AI Safety Problem
There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or "box" a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or are part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of "AI Nanny" (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, or does so ethically and safely. The choice of the best local solution should include an understanding of the ways in which it will be scaled up. Human-AI teams, or a superintelligent AI Service as suggested by Drexler, may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables such as the speed of AI progress.
Computable Rationality, NUTS, and the Nuclear Leviathan
This paper explores how the Leviathan that projects power through nuclear arms exercises a unique nuclearized sovereignty. In the case of nuclear superpowers, this sovereignty extends to wielding the power to destroy human civilization as we know it across the globe. Nuclearized sovereignty depends on a hybrid form of power encompassing human decision-makers in a hierarchical chain of command, and all of the technical and computerized functions necessary to maintain command and control at every moment of the sovereign's existence: this sovereign power cannot sleep. This article analyzes how the form of rationality that informs this hybrid exercise of power historically developed to be computable. By definition, computable rationality must be able to function without any intelligible grasp of the context or the comprehensive significance of decision-making outcomes. Thus, maintaining nuclearized sovereignty necessarily must be able to execute momentous life and death decisions without the type of sentience we usually associate with ethical individual and collective decisions.
Morality and Conflicts
In recent debates, morality or social norms have been proposed as instruments to reduce conflict behavior. As the argument goes, moral people will not engage in socially non-tolerated behavior, or will do so less than amoral people. Analyzing this question within the framework of contest theory, we find that if morality can discriminate between appropriation and defense, it is an effective instrument for lowering socially unwanted behavior and supporting the enforcement of property rights. If it cannot discriminate between these different conflict efforts, strategic effects due to a one-sided increase in morality may actually increase total conflict effort in the economy.
Keywords: contests, property-right enforcement, morality, education
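The contest-theory setting referenced in this abstract can be sketched in a minimal form (not the paper's actual model): a two-player Tullock contest with linear effort costs, where one player's morality is represented as a higher marginal cost of conflict effort. The first-order conditions give a closed-form Nash equilibrium, which makes the strategic effect of a one-sided cost increase easy to see. All cost and prize values below are hypothetical.

```python
def tullock_equilibrium(V, c1, c2):
    """Equilibrium efforts in a two-player Tullock contest: player i
    wins the prize V with probability x_i / (x1 + x2) and pays a linear
    effort cost c_i * x_i.  Closed-form Nash equilibrium derived from
    the two first-order conditions."""
    s = V / (c1 + c2)               # total equilibrium conflict effort
    x1 = c2 * V / (c1 + c2) ** 2    # player 1's effort
    x2 = c1 * V / (c1 + c2) ** 2    # player 2's effort
    return x1, x2, s

# Player 1 is the "moral" side facing opponent cost c2 = 2.
x1, x2, s = tullock_equilibrium(1.0, 1.0, 2.0)    # x2 = 1/9, s = 1/3
# A one-sided rise in player 1's moral cost (1.0 -> 1.5) lowers
# player 1's effort but RAISES the opponent's effort: the strategic
# effect the abstract points to.
y1, y2, t = tullock_equilibrium(1.0, 1.5, 2.0)    # y2 = 1.5/12.25 > x2
```

In this linear sketch total effort still falls when one cost rises; the abstract's stronger result, that total conflict effort can increase, relies on a richer setup that distinguishes appropriation from defense efforts.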
- …