
    Three dimensional norm-based knowledge management for knowledge intensive business service organizations: an organizational semiotics perspective

    The utilization of knowledge enables knowledge intensive business service (KIBS) organizations, such as law firms, to perform and deliver value to their customers. Organizational semiotics views norms as knowledge developed through the practical experience of human agents in organizations. Building on organizational semiotics and knowledge management, this paper proposes a three-dimensional norm-based knowledge management (3DNKM) framework for the legal sector in the UK. Abductive reasoning is adopted to guide the research process. The three identified contextual dimensions of knowledge are customer, practice area and lawyer. For each dimension, there are informal, formal and technical norms establishing context-based knowledge. The proposed framework provides a way for KIBS organizations to manage the intertwined norms across the three dimensions and various levels.
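    The dimension-by-level structure described in the abstract can be sketched as a small data model. This is a hypothetical illustration, not the paper's implementation; all class names and example norms are invented.

    ```python
    from dataclasses import dataclass, field

    # The 3DNKM grid: three contextual dimensions x three norm levels.
    DIMENSIONS = ("customer", "practice_area", "lawyer")
    NORM_LEVELS = ("informal", "formal", "technical")

    @dataclass
    class Norm:
        dimension: str
        level: str
        description: str

        def __post_init__(self):
            if self.dimension not in DIMENSIONS:
                raise ValueError(f"unknown dimension: {self.dimension}")
            if self.level not in NORM_LEVELS:
                raise ValueError(f"unknown norm level: {self.level}")

    @dataclass
    class NormRepository:
        norms: list = field(default_factory=list)

        def add(self, norm: Norm):
            self.norms.append(norm)

        def by_context(self, dimension: str, level: str):
            """Retrieve the norms that establish knowledge for one cell
            of the dimension-by-level grid."""
            return [n for n in self.norms
                    if n.dimension == dimension and n.level == level]

    repo = NormRepository()
    repo.add(Norm("customer", "informal", "Partners contact key clients regularly"))
    repo.add(Norm("practice_area", "formal", "Case files follow the practice-area checklist"))
    repo.add(Norm("lawyer", "technical", "Time entries are recorded in the case system"))

    print(len(repo.by_context("customer", "informal")))
    ```

    Indexing norms by (dimension, level) pairs is one simple way to keep the intertwined norms separable while still queryable per context.
    
    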

    Some characterizations of T-power based implications

    Recently, the so-called family of T-power based implications was introduced. These operators involve Zadeh's quantifiers based on powers of t-norms in their definition. Because Zadeh's quantifiers constitute the usual method for modifying fuzzy propositions, this family of fuzzy implication functions satisfies an important property in approximate reasoning: the invariance of the truth value of the fuzzy conditional when both the antecedent and the consequent are modified using the same quantifier. In this paper, an in-depth analysis of this property is performed by characterizing all binary functions satisfying it. From this general result, a full characterization of the family of T-power based implications is presented. Furthermore, a second characterization is also proved in which, surprisingly, the invariance property is not explicitly used.
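    The invariance property can be checked numerically for the product t-norm, where T-powers reduce to ordinary real powers. The sketch below assumes the sup-based construction I(x, y) = sup{r in [0,1] : y**r >= x}; this is an illustration of the invariance idea and may differ in detail from the paper's exact definition.

    ```python
    import math

    def power_implication(x, y):
        """Power-based implication for the product t-norm (illustrative):
        I(x, y) = sup{ r in [0, 1] : y**r >= x }.
        For x, y in (0, 1) this has the closed form min(1, log(x)/log(y)).
        """
        if x == 0.0 or y == 1.0:
            return 1.0          # y**r >= x holds for every r in [0, 1]
        if y == 0.0:
            return 0.0          # only r = 0 satisfies y**r >= x when x > 0
        return min(1.0, math.log(x) / math.log(y))

    # Invariance: modifying antecedent and consequent with the SAME
    # Zadeh power quantifier leaves the truth value unchanged, since
    # log(x**r) / log(y**r) = (r*log x) / (r*log y) = log(x) / log(y).
    x, y, r = 0.7, 0.4, 0.35
    assert math.isclose(power_implication(x**r, y**r), power_implication(x, y))

    # Boundary conditions of a fuzzy implication function.
    assert power_implication(0.0, 0.0) == 1.0
    assert power_implication(1.0, 1.0) == 1.0
    assert power_implication(1.0, 0.0) == 0.0
    ```

    The cancellation of the common exponent r in the closed form is exactly why this family is invariant under powering quantifiers, whereas, say, the Goguen implication min(1, y/x) is not.
    
    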

    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated into the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Secondly, Jiminy combines the normative systems of the stakeholders such that the combined expertise of the stakeholders may resolve the dilemma. Thirdly, and only if these two other methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations about ethical behavior.
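    The abstract-level mechanisms (adding arguments, adding attacks, removing attacks) operate on Dung-style abstract argumentation frameworks. The following minimal sketch, with hypothetical argument names not taken from the paper, shows how removing an attack under a context-sensitive preference can resolve a dilemma under grounded semantics.

    ```python
    def grounded_extension(arguments, attacks):
        """Compute the grounded extension of a finite Dung framework by
        iterating the characteristic function from the empty set: an
        argument is accepted once every one of its attackers is itself
        attacked by an already-accepted argument."""
        accepted = set()
        while True:
            new = {a for a in arguments
                   if all(any((c, b) in attacks for c in accepted)
                          for b in arguments if (b, a) in attacks)}
            if new == accepted:
                return accepted
            accepted = new

    args = {"brake", "swerve"}

    # A mutual attack models a moral dilemma between two stakeholders'
    # recommendations: neither argument is accepted.
    dilemma = {("brake", "swerve"), ("swerve", "brake")}
    assert grounded_extension(args, dilemma) == set()

    # A context-sensitive preference removes one attack, resolving the
    # dilemma in favour of "brake".
    resolved = {("brake", "swerve")}
    assert grounded_extension(args, resolved) == {"brake"}
    ```

    The same function also illustrates the first two mechanisms: adding a new argument that attacks "swerve" (more arguments), or adding an attack between existing arguments (combined normative systems), changes which extension is grounded.
    
    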

    Severity-sensitive norm-governed multi-agent planning

    This research was funded by Selex ES. The software developed during this research, including the norm analysis and planning algorithms, the simulator and the harbour protection scenario used during evaluation, is freely available from doi:10.5258/SOTON/D0139.

    Responsible Autonomy

    As intelligent systems are increasingly making decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence).

    Pushing the bounds of rationality: Argumentation and extended cognition

    One of the central tasks of a theory of argumentation is to supply a theory of appraisal: a set of standards and norms according to which argumentation, and the reasoning involved in it, is properly evaluated. In their most general form, these can be understood as rational norms, where the core idea of rationality is that we rightly respond to reasons by according the credence we attach to our doxastic and conversational commitments with the probative strength of the reasons we have for them. Certain kinds of rational failings count as failings because they are manifestly illogical: for example, maintaining overtly contradictory commitments, violating deductive closure by refusing to accept the logical consequences of one's present commitments, or failing to track basing relations by not updating one's commitments in view of new, defeating information. Yet, according to the internal and empirical critiques, logic and probability theory fail to supply a fit set of norms for human reasoning and argument. In particular, theories of bounded rationality have put pressure on argumentation theory to lower the normative standards of rationality for reasoners and arguers, on the grounds that we are bounded, finite, and fallible agents incapable of meeting idealized standards. This paper explores the idea that argumentation, as a set of practices, together with the procedures and technologies of argumentation theory, is able to extend cognition such that we are better able to meet these idealized logical standards, thereby extending our responsibilities to adhere to idealized rational norms.