
    OperA/ALIVE/OperettA

    Comprehensive models for organizations must, on the one hand, be able to specify global goals and requirements but, on the other hand, cannot assume that particular actors will always act according to the needs and expectations of the system design. Concepts such as organizational rules (Zambonelli 2002), norms and institutions (Dignum and Dignum 2001; Esteva et al. 2002), and social structures (Parunak and Odell 2002) arise from the idea that the effective engineering of organizations needs high-level, actor-independent concepts and abstractions that explicitly define the organization in which agents live (Zambonelli 2002).

    Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts

    There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims: 1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor. 2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which in humans gives rise to non-trivial surprises. This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

    Adaptive logic characterizations of input/output logic

    We translate unconstrained and constrained input/output logics, as introduced by Makinson and van der Torre, to modal logics, using adaptive logics for the constrained case. The resulting reformulation has some additional benefits. First, we obtain a proof-theoretic (dynamic) characterization of input/output logics. Second, we demonstrate that our framework naturally gives rise to useful variants and allows us to express important notions that go beyond the expressive means of input/output logics, such as violations and sanctions.
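
    For orientation, the unconstrained setting the abstract refers to is usually defined along the following lines (the "simple-minded" output operator of Makinson and van der Torre; this is background on their framework, not a detail taken from the abstract itself):

        % G is a set of norms (a, x), A a set of inputs, Cn classical consequence.
        \[
          \mathrm{out}_1(G, A) = \mathrm{Cn}\bigl(G(\mathrm{Cn}(A))\bigr),
          \qquad
          G(X) = \{\, x \mid (a, x) \in G \ \text{for some}\ a \in X \,\}.
        \]

    A modal reformulation then has to mirror this detachment behaviour, with the adaptive-logic machinery reserved, as the abstract says, for the constrained (conflict-sensitive) case.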

    Norm Monitoring under Partial Action Observability

    In the context of using norms for controlling multi-agent systems, a vitally important question that has not yet been addressed in the literature is the development of mechanisms for monitoring norm compliance under partial action observability. This paper proposes the reconstruction of unobserved actions to tackle this problem. In particular, we formalise the problem of reconstructing unobserved actions, and propose an information model and algorithms for monitoring norms under partial action observability using two different processes for reconstructing unobserved actions. Our evaluation shows that reconstructing unobserved actions significantly increases the number of norm violations and fulfilments detected. (Accepted at IEEE Transactions on Cybernetics.)
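
    As a rough illustration of the general idea only (this is not the paper's information model or algorithms, and every name below is hypothetical), a monitor that observes states but not actions can enumerate which known action sequences would explain the gap between two observed states, and then evaluate its norms against the reconstructed traces:

        # Hypothetical sketch, not the paper's algorithm: infer which action
        # sequences could explain an observation gap, then check norms on them.
        from itertools import permutations

        # Each action is described by preconditions and effects over a state,
        # where a state is a frozenset of ground facts.
        ACTIONS = {
            "enter": {"pre": set(),      "add": {"inside"}, "del": set()},
            "pay":   {"pre": {"inside"}, "add": {"paid"},   "del": set()},
            "leave": {"pre": {"inside"}, "add": set(),      "del": {"inside"}},
        }

        def apply_action(state, name):
            a = ACTIONS[name]
            if not a["pre"] <= state:
                return None  # preconditions not met
            return frozenset((state - a["del"]) | a["add"])

        def reconstruct(before, after, max_len=2):
            """Return action sequences that could explain the gap between two
            observed states (the actions themselves were not observed)."""
            explanations = []
            for length in range(1, max_len + 1):
                for seq in permutations(ACTIONS, length):
                    state = before
                    for name in seq:
                        state = apply_action(state, name)
                        if state is None:
                            break
                    if state == after:
                        explanations.append(seq)
            return explanations

        def violates_pay_norm(states):
            """Norm: an agent must not go from 'inside' to outside without 'paid'."""
            return any("inside" in s and "inside" not in t and "paid" not in t
                       for s, t in zip(states, states[1:]))

        if __name__ == "__main__":
            before, after = frozenset({"inside"}), frozenset()
            for seq in reconstruct(before, after):
                states = [before]
                for name in seq:
                    states.append(apply_action(states[-1], name))
                print(seq, "violation" if violates_pay_norm(states) else "compliant")

    In this toy run, both reconstructed explanations of the observation gap expose a violation of the pay-before-leaving norm, which is the kind of additional detection the abstract reports.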

    Limitations on applying Peircean semeiotic. Biosemiotics as applied objective ethics and esthetics rather than semeiotic.

    This paper explores the critical conditions of the semiotic realism that is commonly presumed in the so-called Copenhagen interpretation of biosemiotics. The central task is to make basic biosemiotic concepts as clear as possible by applying C.S. Peirce’s pragmaticist methodology to his own concepts, especially to those that have had a strong influence on Copenhagian biosemiotics. It appears essential to study what kinds of observation the basic semiotic concepts are derived from. Peirce had two different derivations of the concept of sign, both having a strong logical character. Therefore, it is discussed at length what Peirce’s conception of logic consists of and how logical concepts relate to the concepts of other sciences. It is shown that Peirce had two different perspectives on the sign, the ‘transcendental’ one and the objective one, and only the latter is executable in biosemiotic applications. Although Peirce’s theory of signs seems to appear as twofold (if not even manifold), it is concluded that the core conception has been stable; the apparent differences are presumably due to the different perspectives of consideration. From this analysis follow severe limitations on the application of Peirce’s semiotic concepts, which should be taken into account in biosemiotics relying on its Copenhagen interpretation. The first concerns the ‘interpreter’ of a suggested biosemiotic sign: whether it is ‘we’ (as a ‘meta-agent’) or some genuine biosemiotic ‘object-agent’. Only if the latter is determinable may some real biosemiotic sign-action occur. The second concerns the application of the concept of the object of a sign: its use is limited so that a sign has an object if and only if it seeks a true conception of it. This conclusion has drastic further consequences. Most genuinely biosemiotic sign-processes do not tend toward truth about anything but toward various practical ends. Therefore, the logical concept of sign, e.g. that of Peirce’s semeiotic, is an insufficient concept for biosemiotics. In order to establish a sufficient one, Peircean theoretical ethics and esthetics are introduced. It is concluded that they involve a simpler and more general but still normative concept of sign: the concept of anticipative or constructive representation, which does not represent any object at all. Instead, it is a completely future-oriented representation that guides action. Objective ethics provides the suitable concept of representation, but it appeals to objective esthetics, which provides a theory of (local) natural self-normativity. The concepts of objective logic form a special species of objective ethics. The conclusion is that biosemiotics should be based on applied objective ethics and esthetics rather than on (Peircean semeiotic) logic and its metaphysical application. Finally, the physiosemiotic over-generalization of the concept of sign is briefly discussed. It is suggested that it would be more appropriate to rename such controversial generalizations than to adhere to semiotic terminology. Here, again, Peirce appears as a healthy role model with his ‘ethics of terminology’.

    A canonical theory of dynamic decision-making

    Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However, the conceptualization of what decision-making is, and the methods for studying it, vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory, in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering.
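
    Read purely as an illustration of that cycle's structure (the stage names and data below are hypothetical and are not the authors' canonical functions), the framing, option formulation, preference, and commitment stages might be sketched as:

        # Hypothetical sketch of a framing -> options -> preferences -> commitment
        # cycle. None of these names or structures come from the paper itself.
        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class Decision:
            goal: str
            options: list = field(default_factory=list)
            ranked: list = field(default_factory=list)
            commitment: Optional[str] = None

        def frame(goal, beliefs):
            # Beliefs would shape the framing; ignored in this minimal sketch.
            return Decision(goal=goal)

        def formulate_options(decision, knowledge):
            # Formulate candidate options from background knowledge.
            decision.options = knowledge.get(decision.goal, [])
            return decision

        def establish_preferences(decision, utility):
            # Rank options by a domain-specific utility estimate.
            decision.ranked = sorted(decision.options, key=utility, reverse=True)
            return decision

        def commit(decision):
            # Commit to the top-ranked option, if any.
            decision.commitment = decision.ranked[0] if decision.ranked else None
            return decision

        def decide(goal, beliefs, knowledge, utility):
            d = frame(goal, beliefs)
            d = formulate_options(d, knowledge)
            d = establish_preferences(d, utility)
            return commit(d)

        if __name__ == "__main__":
            knowledge = {"treat infection": ["antibiotic A", "antibiotic B", "watchful waiting"]}
            utility = {"antibiotic A": 0.7, "antibiotic B": 0.9, "watchful waiting": 0.2}.get
            print(decide("treat infection", {}, knowledge, utility).commitment)

    A commitment made here could in turn frame a new decision, and any stage could be re-entered when a previous commitment is revised or abandoned, which is the looping behaviour the abstract describes.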

    Building machines that learn and think about morality

    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.