74 research outputs found

    Interpretability of Gradual Semantics in Abstract Argumentation

    Argumentation, in the field of Artificial Intelligence, is a formalism for reasoning with contradictory information as well as for modelling an exchange of arguments between one or several agents. For this purpose, many semantics have been defined; amongst them, gradual semantics aim to assign an acceptability degree to each argument. Although the number of these semantics continues to increase, there is currently no method for explaining the results they return. In this paper, we study the interpretability of these semantics by measuring, for each argument, the impact of the other arguments on its acceptability degree. We define a new property and show that the score of an argument returned by a gradual semantics which satisfies this property can also be computed by aggregating the impact of the other arguments on it. This result allows us to provide, for each argument in an argumentation framework, a ranking of the other arguments from the most to the least impacting w.r.t. a given gradual semantics.
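    The idea of an acceptability degree, and of ranking arguments by their impact on it, can be illustrated with a small sketch. The code below implements one well-known gradual semantics, the h-categorizer, by fixed-point iteration, together with a naive removal-based notion of impact; both the choice of semantics and the removal-based impact measure are illustrative assumptions, not the definitions used in the paper.

```python
# A minimal sketch, assuming the h-categorizer semantics and a naive
# removal-based impact measure; the paper's own property and impact
# definition may differ.

def h_categorizer(arguments, attacks, iters=100):
    """Fixed-point iteration of Deg(a) = 1 / (1 + sum of attackers' degrees)."""
    attackers = {a: [b for (b, c) in attacks if c == a] for a in arguments}
    deg = {a: 1.0 for a in arguments}
    for _ in range(iters):
        deg = {a: 1.0 / (1.0 + sum(deg[b] for b in attackers[a]))
               for a in arguments}
    return deg

def impact(arguments, attacks, x, y):
    """Naive impact of argument y on x: how x's degree changes when y is removed."""
    full = h_categorizer(arguments, attacks)
    reduced = {a for a in arguments if a != y}
    return full[x] - h_categorizer(
        reduced, {(b, c) for (b, c) in attacks if y not in (b, c)})[x]

args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}   # b attacks a, c attacks b
print(h_categorizer(args, atts))  # c: 1.0, b: 0.5, a: ~0.667
# Rank the other arguments by the magnitude of their impact on "a":
print(sorted(args - {"a"}, key=lambda y: -abs(impact(args, atts, "a", y))))
```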

    Labeled bipolar argumentation frameworks

    An essential part of argumentation-based reasoning is to identify arguments in favor of and against a statement or query, select the acceptable ones, and then determine whether or not the original statement should be accepted. We present here an abstract framework that considers two independent forms of argument interaction, support and conflict, and is able to represent distinctive information associated with these arguments. This information can enable additional actions such as: (i) a more in-depth analysis of the relations between the arguments; (ii) a representation of the user's posture to help in focusing the argumentative process, optimizing the values of attributes associated with certain arguments; and (iii) an enhancement of the semantics taking advantage of the availability of richer information about argument acceptability. Thus, the classical semantic definitions are enhanced by analyzing a set of postulates they satisfy. Finally, a polynomial-time algorithm to perform the labeling process is introduced, in which the argument interactions are considered.
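    A minimal sketch of how such a framework might be represented is given below, under assumed names and a drastically simplified label structure. The labeling function is the standard grounded-style fixed point over the attack relation alone, which runs in polynomial time; it ignores supports and the richer label algebra the paper actually develops.

```python
# Minimal sketch of a bipolar framework with per-argument labels; the names
# and label structure here are assumptions, not the paper's definitions.
from dataclasses import dataclass, field

@dataclass
class LabeledBAF:
    arguments: set
    attacks: set    # pairs (a, b): a attacks b
    supports: set   # pairs (a, b): a supports b
    labels: dict = field(default_factory=dict)  # argument -> extra information

    def grounded_labeling(self):
        """Naive fixed point on the attack relation alone: an argument is IN
        when all its attackers are OUT, OUT when some attacker is IN."""
        lab = {a: "UNDEC" for a in self.arguments}
        changed = True
        while changed:
            changed = False
            for a in self.arguments:
                atts = [b for (b, c) in self.attacks if c == a]
                if lab[a] == "UNDEC":
                    if all(lab[b] == "OUT" for b in atts):
                        lab[a], changed = "IN", True
                    elif any(lab[b] == "IN" for b in atts):
                        lab[a], changed = "OUT", True
        return lab

baf = LabeledBAF({"a", "b", "c"}, {("b", "a"), ("c", "b")}, {("c", "a")},
                 {"a": {"source": "user"}})
print(baf.grounded_labeling())  # a: IN, b: OUT, c: IN (supports unused here)
```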

    Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning


    Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions

    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders

    On the Design, Implementation and Application of Novel Multi-disciplinary Techniques for explaining Artificial Intelligence Models

    Artificial Intelligence is a non-stopping field of research that has experienced incredible growth over the last decades. Some of the reasons for this apparently exponential growth are the improvements in computational power, sensing capabilities and data storage, which result in a huge increment in data availability. However, this growth has been mostly led by a performance-based mindset that has pushed models towards a black-box nature. The performance prowess of these methods, along with the rising demand for their implementation, has triggered the birth of a new research field: Explainable Artificial Intelligence (XAI). As any new field, XAI falls short in cohesiveness; added to the consequences of dealing with concepts that are not from the natural sciences (explanations), the tumultuous scene is palpable. This thesis contributes to the field from two different perspectives, a theoretical one and a practical one. The former is based on a profound literature review that resulted in two main contributions: 1) the proposition of a new definition for Explainable Artificial Intelligence and 2) the creation of a new taxonomy for the field. The latter is composed of two XAI frameworks that address some of the gaps found in the field, namely: 1) an XAI framework for Echo State Networks and 2) an XAI framework for the generation of counterfactuals. The first accounts for the gap concerning randomized neural networks, since they have never been considered within the field of XAI. Unfortunately, choosing the right parameters to initialize these reservoirs falls more on the side of luck and the past experience of the scientist than on that of sound reasoning. The current approach for assessing whether a reservoir is suited for a particular task is to observe if it yields accurate results, either by handcrafting the values of the reservoir parameters or by automating their configuration via an external optimizer. All in all, this poses tough questions to address when developing an ESN for a certain application, since knowing whether the created structure is optimal for the problem at hand is not possible without actually training it. Moreover, one of the main concerns holding back their application is the mistrust generated by their black-box nature. The second framework presents a new paradigm for counterfactual generation. Among the alternatives for reaching a universal understanding of model explanations, counterfactual examples are arguably the one that best conforms to human understanding principles when faced with unknown phenomena: discerning what would happen should the initial conditions differ in a plausible fashion is a mechanism often adopted by humans when attempting to understand any unknown. The search for counterfactuals proposed in this thesis is governed by three different objectives. Opposed to the classical approach, in which counterfactuals are just generated following a minimum-distance approach of some type, this framework allows for an in-depth analysis of a target model by means of counterfactuals responding to: Adversarial Power, Plausibility and Change Intensity.
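    To make the counterfactual side concrete, here is a toy sketch of multi-objective counterfactual search over a black-box classifier. The three score terms only loosely mirror the thesis's objectives (Adversarial Power, Plausibility and Change Intensity); their concrete definitions, the random-search strategy and all names are illustrative assumptions, not the thesis's formulations.

```python
# Toy sketch of multi-objective counterfactual search; the three surrogate
# scores and the random-search strategy are assumptions for illustration.
import numpy as np

def score(model, x, cf, data, w=(1.0, 1.0, 1.0)):
    flips = float(model(cf) != model(x))                # "adversarial power": label flip
    plaus = -np.min(np.linalg.norm(data - cf, axis=1))  # "plausibility": near observed data
    intensity = -np.linalg.norm(cf - x)                 # "change intensity": small edit
    return w[0] * flips + w[1] * plaus + w[2] * intensity

def random_search(model, x, data, n=5000, sigma=0.5, seed=0):
    """Sample perturbations of x and keep the best-scoring candidate."""
    rng = np.random.default_rng(seed)
    candidates = x + sigma * rng.standard_normal((n, x.size))
    return max(candidates, key=lambda cf: score(model, x, cf, data))

# Tiny demo: a linear "black box" and a handful of reference points.
model = lambda z: int(z.sum() > 1.0)
data = np.array([[0.2, 0.1], [0.9, 0.8], [1.2, 0.4]])
x = np.array([0.1, 0.2])         # classified as 0
cf = random_search(model, x, data)
print(cf, model(cf))             # a nearby, data-like point pushed across the boundary
```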

    Pertanika Journal of Science & Technology

