
    Applying modern portfolio theory to the analysis of terrorism: computing the set of attack method combinations from which the rational terrorist group will choose in order to maximise injuries and fatalities

    In this paper, terrorism is analysed using the tools of modern portfolio theory. This approach permits analysis of the returns that a terrorist group can expect from its activities as well as the risk it faces. The analysis sheds new light on the nature of the terrorist group's (attack method) choice set and the efficiency properties of that set. If terrorist groups are, on average, risk averse, the economist can expect them to exhibit a bias towards bombing and armed attack. In addition, even the riskiest (from the terrorist group's point of view) combinations of attack methods have maximum expected returns of fewer than 70 injuries and fatalities per attack per year.
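
    To make the underlying computation concrete, the Python sketch below traces the kind of mean-variance efficient set that portfolio theory entails: each "portfolio" is a weighted combination of methods, with an expected return and a risk. The three-method universe, returns vector, and covariance matrix are illustrative assumptions, not the paper's data or model.

        import numpy as np

        # Hypothetical expected per-attack "returns" and covariance for
        # three generic methods; illustrative numbers, not the paper's data.
        mu = np.array([1.2, 0.8, 2.5])
        cov = np.array([[0.30, 0.05, 0.10],
                        [0.05, 0.20, 0.02],
                        [0.10, 0.02, 0.90]])

        def portfolio_stats(w):
            """Expected return and risk (std. dev.) of a weighted combination."""
            return mu @ w, np.sqrt(w @ cov @ w)

        # Sample the feasible set (weights summing to one), then keep the
        # efficient frontier: combinations that no sampled alternative
        # beats on return without taking on more risk.
        rng = np.random.default_rng(0)
        points = sorted((portfolio_stats(rng.dirichlet(np.ones(3)))
                         for _ in range(5000)), key=lambda p: p[1])
        frontier, best = [], float("-inf")
        for ret, risk in points:
            if ret > best:   # strictly better return than all less-risky points
                frontier.append((ret, risk))
                best = ret

        print(f"{len(frontier)} efficient combinations; max expected "
              f"return {frontier[-1][0]:.2f} at risk {frontier[-1][1]:.2f}")

    The bounded upper end of such a frontier is what the abstract's closing claim refers to: even the riskiest combinations cap out at a finite expected return.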

    Trust in social machines: the challenges

    The World Wide Web has ushered in a new generation of applications constructively linking people and computers to create what have been called 'social machines.' The 'components' of these machines are people and technologies. It has long been recognised that for people to participate in social machines, they have to trust the processes. However, the notions of trust often used tend to be imported from agent-based computing, and may be too formal, objective and selective to describe human trust accurately. This paper applies a theory of human trust to social machines research, and sets out some of the challenges this poses for system designers.

    Does bounded rationality lead to individual heterogeneity? The impact of the experimentation process and of memory constraints

    In this paper we explore the effect of bounded rationality on the convergence of individual behavior toward equilibrium. In the context of a Cournot game with a unique and symmetric Nash equilibrium, firms are modeled as adaptive economic agents through a genetic algorithm. Computational experiments show that (1) there is remarkable heterogeneity across identical but boundedly rational agents; (2) such individual heterogeneity is not simply a consequence of the random elements contained in the genetic algorithm; (3) the more rational agents are in terms of memory abilities and pre-play evaluation of strategies, the less heterogeneous they are in their actions. In the limiting case of full rationality, the outcome converges to the standard result of uniform individual behavior.
    Keywords: bounded rationality; genetic algorithms; individual heterogeneity
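
    As a concrete illustration of the setup (not the paper's implementation), the Python sketch below lets two Cournot firms adapt their outputs through a toy genetic algorithm. All parameter values are assumed, and the crossover operators, memory constraints, and pre-play evaluation the paper studies are omitted.

        import random

        # Inverse demand P = a - b*(q1 + q2), constant marginal cost c.
        # The symmetric Nash quantity for two firms is (a - c) / (3b) = 30.
        a, b, c = 100.0, 1.0, 10.0
        POP, GENS, SIGMA = 20, 300, 1.0
        nash_q = (a - c) / (3 * b)

        def profit(q_own, q_rival):
            price = max(a - b * (q_own + q_rival), 0.0)
            return (price - c) * q_own

        # Each firm carries a pool of candidate quantities (its "chromosomes").
        firms = [[random.uniform(0, a / b) for _ in range(POP)] for _ in range(2)]
        played = [random.choice(pool) for pool in firms]

        for _ in range(GENS):
            for i in range(2):
                rival_q = played[1 - i]
                new_pool = []
                for _ in range(POP):
                    # Selection: binary tournament on profit against the
                    # rival's last output; mutation: Gaussian perturbation.
                    x, y = random.sample(firms[i], 2)
                    winner = x if profit(x, rival_q) >= profit(y, rival_q) else y
                    new_pool.append(max(winner + random.gauss(0, SIGMA), 0.0))
                firms[i] = new_pool
                played[i] = max(new_pool, key=lambda q: profit(q, rival_q))

        print(f"Nash quantity: {nash_q:.1f}; quantities played: "
              f"{played[0]:.1f}, {played[1]:.1f}")

    Even in this stripped-down version, repeated runs with different seeds settle near but not exactly at the Nash quantity, which is the kind of residual heterogeneity the paper examines.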

    Artificial consciousness and the consciousness-attention dissociation

    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also to reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems: these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

    On Automating the Doctrine of Double Effect

    The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant by applying a DDE layer to an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we present initial work on how one can apply our DDE layer to the STRIPS-style planning model and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful to other researchers incorporating DDE in their own frameworks.
    Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonomy
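
    To give a flavor of the verification mode, here is a minimal Python sketch of a DDE-style check over an action model that exposes its effects. The data layout, field names, and utility scoring are hypothetical illustrations; the paper itself works in a first-order modal logic, not in code like this.

        from dataclasses import dataclass, field

        @dataclass
        class Action:
            name: str
            good_effects: dict   # effect -> utility (> 0)
            bad_effects: dict    # effect -> utility (< 0)
            intended: set = field(default_factory=set)  # effects the agent intends
            means: set = field(default_factory=set)     # effects used as the means

        def dde_compliant(act: Action) -> bool:
            """Check three classic DDE conditions on an exposed action model."""
            # 1. No bad effect may be intended (it may only be foreseen).
            if act.intended & act.bad_effects.keys():
                return False
            # 2. No bad effect may be the means by which the good effect occurs.
            if act.means & act.bad_effects.keys():
                return False
            # 3. Proportionality: the total good must outweigh the total bad.
            return sum(act.good_effects.values()) + sum(act.bad_effects.values()) > 0

        # Trolley-style test cases: diverting a harm is permitted, but
        # using the harm itself as the means to the good effect is not.
        divert = Action("divert", {"five_saved": 5.0}, {"one_dies": -1.0},
                        intended={"five_saved"}, means={"five_saved"})
        push = Action("push", {"five_saved": 5.0}, {"one_dies": -1.0},
                      intended={"five_saved"}, means={"one_dies"})
        print(dde_compliant(divert), dde_compliant(push))  # True False

    A verifier in this spirit only needs the wrapped system to expose its intended effects, side effects, and their magnitudes, which is the "few parameters" requirement the abstract mentions.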