    A Labelling Framework for Probabilistic Argumentation

    The combination of argumentation and probability paves the way to new accounts of qualitative and quantitative uncertainty, thereby offering new theoretical and applicative opportunities. Due to a variety of interests, probabilistic argumentation is approached in the literature with different frameworks, pertaining to structured and abstract argumentation, and with respect to diverse types of uncertainty, in particular the uncertainty on the credibility of the premises, the uncertainty about which arguments to consider, and the uncertainty on the acceptance status of arguments or statements. Towards a general framework for probabilistic argumentation, we investigate a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-) abstract account, along with diverse types of uncertainty. Our framework provides a systematic treatment of various kinds of uncertainty and of their relationships and allows us to back or question assertions from the literature.

    Practical Challenges in Explicit Ethical Machine Reasoning

    We examine implemented systems for ethical machine reasoning with a view to identifying the practical challenges (as opposed to philosophical challenges) posed by the area. We identify a need for complex ethical machine reasoning not only to be multi-objective, proactive, and scrutable, but also to draw on heterogeneous evidential reasoning. We also argue that, in many cases, it needs to operate in real time and be verifiable. We propose a general architecture involving a declarative ethical arbiter which draws upon multiple evidential reasoners, each responsible for a particular ethical feature of the system's environment. We claim that this architecture enables some separation of concerns among the practical challenges that ethical machine reasoning poses.

    The belief noisy-or model applied to network reliability analysis

    One difficulty faced in knowledge engineering for Bayesian Networks (BNs) is the quantification step, where the Conditional Probability Tables (CPTs) are determined. The number of parameters included in a CPT increases exponentially with the number of parent variables. The most common solution is the application of so-called canonical gates. The Noisy-OR (NOR) gate, which takes advantage of the independence of causal interactions, provides a logarithmic reduction of the number of parameters required to specify a CPT. In this paper, an extension of the NOR model based on the theory of belief functions, named Belief Noisy-OR (BNOR), is proposed. BNOR is capable of dealing with both the aleatory and the epistemic uncertainty of the network. Compared with NOR, richer information, which is of great value for decision making, can be obtained when the available knowledge is uncertain. In particular, when there is no epistemic uncertainty, BNOR reduces to NOR. Additionally, different structures of BNOR are presented in this paper in order to meet the various needs of engineers. The application of the BNOR model to the reliability evaluation of networked systems demonstrates its effectiveness.
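    The parameter saving that the classical Noisy-OR gate provides can be sketched as follows; this is a minimal illustration of the standard gate, not of the paper's BNOR extension, and the function name and numbers are illustrative.

    ```python
    def noisy_or(active_probs, leak=0.0):
        """Noisy-OR gate: probability that the effect Y is true.

        active_probs: causal strengths p_i of the currently active parents,
        where p_i is the probability that parent i alone triggers Y.
        leak: probability that Y occurs even with no active parent.
        """
        prob_all_fail = 1.0 - leak
        for p in active_probs:
            prob_all_fail *= 1.0 - p  # each active cause fails independently
        return 1.0 - prob_all_fail

    # A full CPT over n binary parents needs 2**n entries; the Noisy-OR
    # gate stores only the n strengths p_i (plus an optional leak term).
    print(noisy_or([0.8, 0.6]))  # 1 - (1 - 0.8)*(1 - 0.6) = 0.92
    ```

    The independence-of-causal-interactions assumption is what makes each factor (1 - p_i) multiply separately; without it the full exponential-size CPT would be needed.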

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.

    Applying CBR to manage argumentation in MAS

    The application of argumentation theories and techniques in multi-agent systems has become a prolific area of research. Argumentation allows agents to harmonise two types of disagreement situations: internal, when the acquisition of new information (e.g., about the environment or about other agents) produces incoherences in the agents' mental state; and external, when agents that have different positions about a topic engage in a discussion. The focus of this paper is on the latter type of disagreement situation. In those settings, agents must be able to generate, select and send arguments to other agents, which will evaluate them in their turn. An efficient way for agents to manage these argumentation abilities is by using case-based reasoning, which has been successfully applied to argumentation from its earliest beginnings. This reasoning methodology also allows agents to learn from their experiences and, therefore, to improve their argumentation skills. This paper analyses the advantages of applying case-based reasoning to manage arguments in multi-agent system dialogues, identifies open issues and proposes new ideas to tackle them. This work was partially supported by CONSOLIDER-INGENIO 2010 under grant CSD2007-00022 and by the Spanish government and FEDER funds under the CICYT TIN2005-03395 and TIN2006-14630-C0301 projects. Heras Barberá, SM.; Julian Inglada, VJ.; Botti Navarro, VJ. (2010). Applying CBR to manage argumentation in MAS. International Journal of Reasoning-based Intelligent Systems, 2(2), 110-117. https://doi.org/10.1504/IJRIS.2010.034906

    Reasons for Reliabilism

    One leading approach to justification comes from the reliabilist tradition, which maintains that a belief is justified provided that it is reliably formed. Another comes from the ‘Reasons First’ tradition, which claims that a belief is justified provided that it is based on reasons that support it. These two approaches are typically developed in isolation from each other; this essay motivates and defends a synthesis. On the view proposed here, justification is understood in terms of an agent’s reasons for belief, which are in turn analyzed along reliabilist lines: an agent's reasons for belief are the states that serve as inputs to their reliable processes. I show that this synthesis allows each tradition to profit from the other's explanatory resources. In particular, it enables reliabilists to explain epistemic defeat without abandoning their naturalistic ambitions. I go on to compare my proposed synthesis with other hybrid versions of reliabilism that have been proposed in the literature.

    On the emergent Semantic Web and overlooked issues

    The emergent Semantic Web, despite being in its infancy, has already received a lot of attention from academia and industry. This has resulted in an abundance of prototype systems and discussion, most of which is centred around the underlying infrastructure. However, when we critically review the work done to date, we realise that there is little discussion with respect to the vision of the Semantic Web. In particular, there is an observed dearth of discussion on how to deliver knowledge sharing in an environment such as the Semantic Web in an effective and efficient manner. There are many overlooked issues, ranging from agents and trust to hidden assumptions made with respect to knowledge representation and robust reasoning in a distributed environment. These issues could potentially hinder further development if not considered at the early stages of designing Semantic Web systems. In this perspectives paper, we aim to help engineers and practitioners of the Semantic Web by raising awareness of these issues.