A canonical theory of dynamic decision-making
Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However, the conceptualization of what decision-making is and the methods for studying it vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory, in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering.
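The decision cycle described in this abstract (framing, option formulation, preference establishment, commitment) can be pictured as a chain of composable functions. The sketch below is purely illustrative; every name, data shape, and scoring rule is an assumption of this sketch, not part of the theory:

```python
# Hypothetical sketch of the canonical decision cycle: frame -> options ->
# preferences -> commitment. All names and data shapes are illustrative.

def frame(goals, beliefs):
    """Framing: restrict attention to goals believed to be achievable."""
    return [g for g in goals if beliefs.get(g, 0.0) > 0.0]

def formulate_options(framed_goals, actions):
    """Option formulation: keep actions that address at least one framed goal."""
    return [a for a in actions if set(a["achieves"]) & set(framed_goals)]

def establish_preferences(options):
    """Preferences: rank options, here simply by a utility score."""
    return sorted(options, key=lambda a: a["utility"], reverse=True)

def commit(ranked_options):
    """Commitment: adopt the most preferred option, if any exists."""
    return ranked_options[0]["name"] if ranked_options else None

# One pass through the cycle on toy data.
goals = ["treat_infection", "win_lottery"]
beliefs = {"treat_infection": 0.9}  # goal -> believed achievability
actions = [
    {"name": "prescribe_antibiotic", "achieves": ["treat_infection"], "utility": 0.8},
    {"name": "wait_and_see", "achieves": ["treat_infection"], "utility": 0.3},
]
framed = frame(goals, beliefs)
choice = commit(establish_preferences(formulate_options(framed, actions)))
# choice == "prescribe_antibiotic"
```

A fuller instantiation would also feed commitments back into new framings and allow earlier steps to be revisited, as the abstract describes.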
An Argumentation-Based Reasoner to Assist Digital Investigation and Attribution of Cyber-Attacks
We expect the frequency and severity of cyber-attacks to increase, and with
them the need for efficient security countermeasures. Attributing a
cyber-attack helps to construct efficient and targeted mitigating and
preventive security measures. In this work, we propose an
argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a
forensics analyst during the analysis of forensic evidence and the attribution
process. Given the evidence collected from a cyber-attack, our reasoner can
assist the analyst during the investigation, helping them analyze the
evidence and identify who performed the attack. Furthermore, it suggests
where to focus further analysis by hinting at missing evidence or new
investigation paths to follow. ABR is the first automatic reasoner that can
combine both technical and social evidence in the analysis of a cyber-attack,
and that can also cope with incomplete and conflicting information. To
illustrate how ABR can assist in the analysis and attribution of
cyber-attacks, we use examples of cyber-attacks and their analyses as
reported in publicly available reports and online literature. We do not mean
to agree or disagree with the analyses presented therein, nor to reach
attribution conclusions.
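As a rough illustration of how an argumentation-based reasoner can cope with conflicting evidence, the sketch below computes the grounded extension of a Dung-style attack graph: an argument is accepted once all of its attackers are defeated. The evidence labels are invented for illustration and the code is not ABR itself:

```python
def grounded_extension(args, attacks):
    """Grounded extension: iteratively accept arguments all of whose
    attackers have already been defeated by accepted arguments."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            attackers = {b for (b, c) in attacks if c == a}
            if attackers <= defeated:  # every attacker is already out
                accepted.add(a)
                # everything a newly accepted argument attacks is defeated
                defeated |= {c for (b, c) in attacks if b == a}
                changed = True
    return accepted

# Invented conflicting evidence: "X did it" is attacked by a spoofed-IP
# counter-argument, which is itself attacked by insider testimony.
args = {"x_did_it", "spoofed_ip", "insider_testimony"}
attacks = {("spoofed_ip", "x_did_it"),
           ("insider_testimony", "spoofed_ip")}
grounded_extension(args, attacks)  # -> {"insider_testimony", "x_did_it"}
```

Despite the conflict, the reasoner still reaches a justified conclusion, because the counter-argument is itself defeated.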
The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation
An autonomous system is constructed by a manufacturer, operates in a society
subject to norms and laws, and interacts with end users. All of these
actors are stakeholders affected by the behavior of the autonomous system. We
address the challenge of how the ethical views of such stakeholders can be
integrated in the behavior of the autonomous system. We propose an ethical
recommendation component, which we call Jiminy, that uses techniques from
normative systems and formal argumentation to reach moral agreements among
stakeholders. Jiminy represents the ethical views of each stakeholder by using
normative systems, and has three ways of resolving moral dilemmas involving the
opinions of the stakeholders. First, Jiminy considers how the arguments of the
stakeholders relate to one another, which may already resolve the dilemma.
Secondly, Jiminy combines the normative systems of the stakeholders such that
the combined expertise of the stakeholders may resolve the dilemma. Thirdly,
and only if these two methods have failed, Jiminy uses context-sensitive
rules to decide which of the stakeholders takes precedence. At the abstract
level, these three methods are characterized by the addition of arguments, the
addition of attacks among arguments, and the removal of attacks among
arguments. We show how Jiminy can be used not only for ethical reasoning and
collaborative decision making, but also for providing explanations about
ethical behavior.
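The three abstract-level operations named in this abstract (adding arguments, adding attacks, removing attacks) can each be seen to resolve a dilemma in a toy Dung-style framework. The sketch below is an assumption-laden illustration, not Jiminy's actual machinery; all argument names are invented:

```python
def grounded(args, attacks):
    """Grounded extension of an abstract argumentation framework."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            if {b for (b, c) in attacks if c == a} <= defeated:
                accepted.add(a)
                defeated |= {c for (b, c) in attacks if b == a}
                changed = True
    return accepted

# A moral dilemma: two stakeholder arguments attack each other, so neither
# is accepted; only the unrelated argument survives.
args = {"user_privacy", "operator_safety", "legal_norm"}
attacks = {("user_privacy", "operator_safety"),
           ("operator_safety", "user_privacy")}
base = grounded(args, attacks)  # -> {"legal_norm"}: dilemma unresolved

# 1) Adding an argument (e.g. from a stakeholder's wider normative system).
ext_add_arg = grounded(args | {"data_protection_law"},
                       attacks | {("data_protection_law", "operator_safety")})

# 2) Adding an attack from an existing undisputed argument.
ext_add_att = grounded(args, attacks | {("legal_norm", "user_privacy")})

# 3) Removing an attack (a context-sensitive rule overriding one side).
ext_rm_att = grounded(args, attacks - {("user_privacy", "operator_safety")})
```

In each case the modified framework yields a non-trivial verdict on the previously deadlocked pair, mirroring the three resolution methods at the abstract level.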
Improving argumentation-based recommender systems through context-adaptable selection criteria
Recommender systems based on argumentation are an important proposal in which the recommendation is supported by qualitative information. In these systems, the role of the comparison criterion used to decide between competing arguments is paramount, and the possibility of using the most appropriate one for a given domain becomes a central issue; an argumentative recommender system that offers an interchangeable argument comparison criterion therefore provides a significant ability that can be exploited by the user. However, in most current recommender systems, the argument comparison criterion is either fixed or codified within the arguments. In this work we propose a formalization of context-adaptable selection criteria that enhances the argumentative reasoning mechanism. Thus, we do not propose a new type of recommender system; instead, we present a mechanism that expands the capabilities of existing argumentation-based recommender systems. More precisely, our proposal provides a way of specifying how to select and use the most appropriate argument comparison criterion, making the selection depend on the user's preferences and giving the possibility of programming, by the use of conditional expressions, which argument preference criterion has to be used in each particular situation.
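The idea of conditional expressions selecting a comparison criterion can be pictured with a small dispatcher that maps the current context to a criterion. All names, criteria, and the context shape below are assumptions of this sketch, not taken from the paper:

```python
# Hypothetical argument comparison criteria: each returns True when the
# first argument is preferred over the second.
def more_specific(a, b):
    return a["specificity"] > b["specificity"]

def more_trusted(a, b):
    return a["trust"] > b["trust"]

# Ordered (guard, criterion) pairs playing the role of the paper's
# conditional expressions: the first matching guard selects the criterion.
RULES = [
    (lambda ctx: ctx.get("domain") == "medical", more_trusted),
    (lambda ctx: True, more_specific),  # default criterion
]

def select_criterion(ctx):
    """Pick the first criterion whose guard matches the current context."""
    for guard, criterion in RULES:
        if guard(ctx):
            return criterion

a = {"specificity": 0.9, "trust": 0.2}
b = {"specificity": 0.4, "trust": 0.8}
prefer = select_criterion({"domain": "medical"})
# In a medical context, trust decides, so b is preferred over a.
```

Because the criterion is looked up rather than hard-coded into the arguments, swapping or extending RULES changes the preference behavior without touching the argumentative machinery itself, which is the separation the abstract argues for.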
SAsSy – Scrutable Autonomous Systems
Abstract. An autonomous system consists of physical or virtual systems that can perform tasks without continuous human guidance. Autonomous systems are becoming increasingly ubiquitous, ranging from unmanned vehicles, to robotic surgery devices, to virtual agents which collate and process information on the internet. Existing autonomous systems are opaque, limiting their usefulness in many situations. In order to realise their promise, techniques for making such autonomous systems scrutable are therefore required. We believe that the creation of such scrutable autonomous systems rests on four foundations, namely an appropriate planning representation; the use of a human understandable reasoning mechanism, such as argumentation theory; appropriate natural language generation tools to translate logical statements into natural ones; and information presentation techniques to enable the user to cope with the deluge of information that autonomous systems can provide. Each of these foundations has its own unique challenges, as does the integration of all of these into a single system.
- …