4 research outputs found

    Towards an argument-based music recommender system

    Get PDF
    The significance of recommender systems has steadily grown in recent years, as they help users access relevant items from the vast universe of available possibilities. However, most research on recommenders is based purely on quantitative aspects, i.e., measures of similarity between items or users. In this paper we introduce a novel hybrid approach that refines recommendations obtained by quantitative methods with a qualitative approach based on argumentation, where suggestions are given after considering several arguments for or against the recommendations. To accomplish this, we use Defeasible Logic Programming (DeLP) as the underlying formalism for obtaining recommendations. This approach has a number of advantages over other existing recommendation techniques. In particular, recommendations can be refined at any time by adding new, polished rules, and explanations can be provided to support each recommendation, by means of the computed arguments, in a way that the user can easily understand.
    Authors: Cristian Emanuel Briguez, Maximiliano Celmo David Budan, Cristhian Ariel David Deagustini, Ana Gabriela Maguitman, Marcela Capobianco, Guillermo Ricardo Simari (Universidad Nacional del Sur, Departamento de Ciencia e Ingeniería de la Computación, Laboratorio de Investigación y Desarrollo en Inteligencia Artificial; Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Bahía Blanca; Argentina)
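    The hybrid idea above can be illustrated with a minimal sketch. This is not the authors' DeLP machinery; the rule dictionaries and the naive "pro and no con" resolution stand in for DeLP's dialectical analysis, and all names (`refine`, `stance`, `applies`) are illustrative.

    ```python
    # Hypothetical sketch: quantitatively ranked items are kept only when a
    # qualitative rule argues for them and no rule argues against them.
    # A surviving recommendation carries its supporting argument as an
    # explanation, mirroring the paper's explainability claim.
    def refine(recommendations, rules, user):
        refined = []
        for item, score in recommendations:
            pro = [r for r in rules if r["stance"] == "for" and r["applies"](user, item)]
            con = [r for r in rules if r["stance"] == "against" and r["applies"](user, item)]
            if pro and not con:  # crude stand-in for DeLP's argument warrant
                refined.append((item, score, pro[0]["label"]))
        return refined

    recs = [("song_a", 0.9), ("song_b", 0.8)]
    rules = [
        {"stance": "for", "label": "user liked this artist before",
         "applies": lambda user, item: item in user["liked"]},
        {"stance": "against", "label": "explicit lyrics",
         "applies": lambda user, item: item in user["blocked"]},
    ]
    user = {"liked": {"song_a"}, "blocked": {"song_b"}}
    result = refine(recs, rules, user)
    ```

    Here `song_a` survives with a human-readable reason attached, while `song_b` is dropped for lack of a supporting argument; in the actual framework these decisions follow from DeLP's defeasible derivations rather than a simple pro/con check.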

    Desarrollo de API de consulta a fuentes de información en la web para sistemas de argumentación rebatible

    Get PDF
    It is increasingly necessary to automate the processes that integrate information available in different formats: decentralized, of heterogeneous origin, and under different administrations. One of the great challenges when integrating heterogeneous data sources is handling the informational inconsistency and the possible incompleteness that may carry over from the local sources. Argumentation formalisms are well suited to handling this kind of problem. In particular, defeasible argumentation formalisms (such as DeLP) are a sound option for defining and automating the integration of heterogeneous, mutually inconsistent, and incomplete data sources. This line of research proposes developing interfaces to link DeLP with data sources in web formats, in particular web sites described in HTML and Web Services. The information obtained from these sources can be used as facts or presumptions in the knowledge base of the argumentation system. Track: Agents and Intelligent Systems. Red de Universidades con Carreras en Informática (RedUNCI)
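    The interface layer proposed above can be sketched as follows. This is a hedged illustration, not the project's API: `extract_facts` and the `<li>`-based scraping are assumptions standing in for a real HTML/Web-Service connector that feeds a DeLP knowledge base.

    ```python
    # Hypothetical sketch: values scraped from an HTML source become
    # DeLP-style ground facts for the argumentation system's knowledge base.
    import re

    def extract_facts(html, predicate):
        """Turn the <li> items of a page fragment into DeLP-style facts."""
        items = re.findall(r"<li>(.*?)</li>", html)
        return [f"{predicate}({item.strip().lower()})." for item in items]

    html = "<ul><li>Rock</li><li>Jazz</li></ul>"
    facts = extract_facts(html, "genre")
    ```

    A real connector would also have to decide, per source, whether extracted information enters the knowledge base as a fact (certain) or as a presumption (defeasible), which is where the inconsistency handling of DeLP becomes useful.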

    Context-aware feature attribution through argumentation

    Full text link
    Feature attribution is a fundamental task in both machine learning and data analysis, which involves determining the contribution of individual features or variables to a model's output. This process helps identify the most important features for predicting an outcome. The history of feature attribution methods can be traced back to General Additive Models (GAMs), which extend linear regression models by incorporating non-linear relationships between dependent and independent variables. In recent years, gradient-based methods and surrogate models have been applied to unravel complex Artificial Intelligence (AI) systems, but these methods have limitations. GAMs tend to achieve lower accuracy, gradient-based methods can be difficult to interpret, and surrogate models often suffer from stability and fidelity issues. Furthermore, most existing methods do not consider users' contexts, which can significantly influence their preferences. To address these limitations and advance the current state of the art, we define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack or neutralize a prediction. Additionally, CA-FATA formulates feature attribution as an argumentation procedure, and each computation has explicit semantics, which makes it inherently interpretable. CA-FATA also easily integrates side information, such as users' contexts, resulting in more accurate predictions.
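    The support/attack/neutralize view of features can be illustrated with a small sketch. This is not the CA-FATA algorithm: the linear scoring and the `context` weight table are assumptions chosen only to show how context can flip or scale a feature's argumentative role.

    ```python
    # Hypothetical sketch: each feature casts an argument whose polarity
    # (supports / attacks / neutral) and strength depend on the user's
    # context; the attribution of a feature is its signed strength.
    def attribute(features, context):
        attributions = {}
        for name, value in features.items():
            weight = context.get(name, 1.0)  # context modulates the argument
            signed = value * weight
            if signed > 0:
                stance = "supports"
            elif signed < 0:
                stance = "attacks"
            else:
                stance = "neutral"
            attributions[name] = (stance, signed)
        score = sum(s for _, s in attributions.values())
        return attributions, score

    features = {"tempo": 0.5, "loudness": -0.2, "genre_match": 0.0}
    context = {"loudness": 2.0}  # this user is twice as sensitive to loudness
    attributions, score = attribute(features, context)
    ```

    Because every attribution is the strength of an explicit argument, the decomposition is directly readable, which is the interpretability property the framework claims, even though the real procedure uses argumentation semantics rather than a weighted sum.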

    Explaining Reputation Assessments

    Get PDF
    Reputation is crucial to enabling human or software agents to select among alternative providers. Although several effective reputation assessment methods exist, they typically distil reputation into a numerical representation, with no accompanying explanation of the rationale behind the assessment. Such explanations would allow users or clients to make a richer assessment of providers, and to tailor selection according to their preferences and current context. In this paper, we propose an approach to explain the rationale behind assessments from quantitative reputation models, by generating arguments that are combined to form explanations. Our approach adapts, extends and combines existing approaches for explaining decisions made using multi-attribute decision models in the context of reputation. We present example argument templates, and describe how to select their parameters using explanation algorithms. Our proposal was evaluated by means of a user study, which followed an existing protocol. Our results give evidence that, although explanations present a subset of the information in trust scores, they are sufficient for evaluating recommended providers as effectively as the trust scores themselves. Moreover, when explanation arguments reveal implicit model information, they are less persuasive than scores.
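    The template-based explanation idea can be sketched as follows. This is a hedged illustration, not the paper's method: the attribute names, weights, and template wording are invented, and the real approach selects template parameters with dedicated explanation algorithms rather than a top-k cut.

    ```python
    # Hypothetical sketch: a multi-attribute reputation score is decomposed
    # into per-attribute contributions, and the largest contributions are
    # rendered as explanation arguments through simple text templates.
    def explain(ratings, weights, top=2):
        score = sum(ratings[a] * weights[a] for a in ratings)
        ranked = sorted(ratings, key=lambda a: abs(ratings[a] * weights[a]),
                        reverse=True)
        arguments = []
        for attr in ranked[:top]:
            stance = "in favor" if ratings[attr] * weights[attr] >= 0 else "against"
            arguments.append(f"Argument {stance}: provider rated "
                             f"{ratings[attr]:+.1f} on {attr}")
        return score, arguments

    ratings = {"timeliness": 0.8, "quality": 0.9, "cost": -0.3}
    weights = {"timeliness": 0.5, "quality": 0.3, "cost": 0.2}
    score, arguments = explain(ratings, weights)
    ```

    The arguments deliberately expose only a subset of the score's information, matching the study's finding that such partial explanations can still support provider evaluation as well as the raw score.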