293 research outputs found

    Recommender systems based on link prediction techniques for online judges

    The range of products and experiences available on the Internet today is immense and difficult for users to evaluate when searching for a product that fits their needs. Recommender systems emerged to address this problem, helping users find products of interest and easing their search. Recommender systems are deployed on a great many consumer platforms, but not on other kinds of platforms where their use would also be interesting and necessary. One such platform is the online judge, where recommender systems could help users select the problems they would find most interesting to solve. This Master's thesis proposes several recommendation methods, designed for deployment on online judges, that are based on interaction graphs and use link prediction techniques to generate recommendations. The proposed methods were evaluated through experiments on the Acepta el Reto online judge, with the goal of determining which methods are most promising.
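    A minimal sketch of the idea behind such recommenders: on a bipartite user-problem interaction graph, an unseen problem is scored by how strongly it is "linked" to the user through users with overlapping solve histories (a common-neighbours link prediction heuristic). The data, user names, and function are illustrative assumptions, not the thesis's actual methods.

    ```python
    # Hypothetical interaction graph for an online judge:
    # each user maps to the set of problems they have solved.
    solved = {
        "ana":   {"p1", "p2", "p3"},
        "bruno": {"p2", "p3", "p4"},
        "carla": {"p1", "p4"},
    }

    def recommend(user, solved, top_k=3):
        """Score problems the user has not solved by summing, over all
        other users, the size of the solve-history overlap with `user`
        (common-neighbours link prediction on the bipartite graph)."""
        mine = solved[user]
        scores = {}
        for other, theirs in solved.items():
            if other == user:
                continue
            overlap = len(mine & theirs)   # shared solved problems
            if overlap == 0:
                continue
            for p in theirs - mine:        # candidate new links
                scores[p] = scores.get(p, 0) + overlap
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    print(recommend("ana", solved))  # ['p4'] (linked via bruno and carla)
    ```

    More sophisticated link predictors (Adamic-Adar, preferential attachment, random walks) plug into the same scoring loop by changing how `overlap` weights each shared neighbour.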

    Recommender systems and explanations based on interaction graphs

    In today's society, with new technologies present in every area of our lives, consumption on the Internet has advanced by leaps and bounds. Users can find countless products to consume in any domain, especially through e-commerce, social networks, and streaming entertainment. Determining which products best match their needs and preferences can be a difficult task given the breadth of products on offer on these platforms. Recommender systems emerged to solve this problem, easing the search and decision-making process. However, when users do not trust the system, recommender systems are not as useful as one might expect.

    A practical exploration of the convergence of case-based reasoning and explainable artificial intelligence.

    As Artificial Intelligence (AI) systems become increasingly complex, ensuring their decisions are transparent and understandable to users has become paramount. This paper explores the integration of Case-Based Reasoning (CBR) with Explainable Artificial Intelligence (XAI) through a real-world example, presenting an innovative CBR-driven XAI platform. This study investigates how CBR, a method that solves new problems based on the solutions of similar past problems, can be harnessed to enhance the explainability of AI systems. Although the literature contains few works on the synergy between CBR and XAI, the principles for developing a CBR-driven XAI platform need to be explored. This exploration outlines the key features and functionalities, examines the alignment of CBR principles with XAI goals to make AI reasoning more transparent to users, and discusses methodological strategies for integrating CBR into XAI frameworks. Through a case study of our CBR-driven XAI platform, iSee: Intelligent Sharing of Explanation Experience, we demonstrate the practical application of these principles, highlighting the enhancement of system transparency and user trust. The platform elucidates the decision-making processes of AI models and adapts to provide explanations tailored to diverse user needs. Our findings emphasize the importance of interdisciplinary approaches in AI research and the significant role CBR can play in advancing the goals of XAI.
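    The core CBR cycle the abstract describes, solving a new problem by retrieving and reusing the solution of the most similar past case, can be sketched in a few lines. The case features, similarity metric, and explainer names below are illustrative assumptions, not the iSee platform's actual case representation.

    ```python
    def similarity(features_a, features_b):
        """Jaccard similarity over feature sets (an illustrative metric)."""
        a, b = set(features_a), set(features_b)
        return len(a & b) / len(a | b)

    # Hypothetical case base: past explainability problems paired with
    # the explanation approach that worked for them.
    case_base = [
        ({"image", "classification", "medical"}, "saliency-map explainer"),
        ({"tabular", "regression", "finance"},   "feature-importance explainer"),
    ]

    def retrieve_and_reuse(new_problem):
        """Retrieve the most similar past case and reuse its solution."""
        best = max(case_base, key=lambda case: similarity(new_problem, case[0]))
        return best[1]

    print(retrieve_and_reuse({"image", "detection", "medical"}))
    # -> saliency-map explainer
    ```

    A full CBR system would follow retrieval and reuse with revise and retain steps, adapting the solution and storing the new experience back into the case base.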

    RACMA, or how to bring a blank map to life at the Museo de América

    Augmented Reality is a technology that augments the real world we perceive with interactive virtual elements. In this article we describe the use of this technology at the Museo de América in Madrid, on a blank map of the American continent where, thanks to Augmented Reality, we create characters that bring the map to life and provide information about the cultures represented in the museum.

    Conceptual modelling of explanation experiences through the iSeeonto ontology.

    Explainable Artificial Intelligence is a large research field, needed in many situations where we must understand the behaviour of Artificial Intelligence. However, each explanation need is unique, which makes it difficult to apply already-implemented explanation techniques and solutions to a new problem. Implementing an explanation system can therefore be very challenging, because we must take into account the AI model, the user's needs and goals, the available data, suitable explainers, and so on. In this work, we propose a formal model to define and orchestrate all the elements involved in an explanation system, and make a novel contribution by formalising this model as the iSeeOnto ontology. This ontology not only enables the conceptualisation of a wide range of explanation systems, but also supports the application of Case-Based Reasoning as a knowledge transfer approach that reuses previous explanation experiences from unrelated domains. To demonstrate the suitability of the proposed model, we present an exhaustive validation by classifying reference explanation systems from the literature into the iSeeOnto ontology.

    Children's and adolescents' participation councils: possibilities and opacities in terms of political subjectivity and the promotion of a culture of peace

    Master's in Education and Human Development, Faculty of Social and Human Sciences. Without much effort it is easy to perceive that the contemporary world moves between two currents in permanent confrontation. On one side are those who are committed to human dignity and, consequently, recognise the need to establish and strengthen relationships oriented toward equity, justice, responsibility, solidarity, participation and respect for diversity. In contrast are those who create and/or justify relational structures "that provoke tremendous inequalities and unjust commercial dealings, that impose through the mass media a single model of consumer society, and in which the ideals of respect and equity are curtailed" (Van Dijk, 2007: 44). It is precisely children and adolescents who are most affected by this confrontation, because they not only suffer it, but it also constitutes the everyday context of interaction in which they learn to relate to others and to inhabit the world with them.

    CBR driven interactive explainable AI.

    Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requires employing a combination of these explainers. We refer to such combinations as explanation strategies. This paper introduces iSee - Intelligent Sharing of Explanation Experience, an interactive platform that facilitates the reuse of explanation strategies and promotes best practices in XAI by employing the Case-based Reasoning (CBR) paradigm. iSee uses an ontology-guided approach to effectively capture explanation requirements, while a behaviour tree-driven conversational chatbot captures user experiences of interacting with the explanations and provides feedback. In a case study, we illustrate the iSee CBR system capabilities by formalising a real-world radiograph fracture detection system and demonstrating how each interactive tool facilitates the CBR processes.

    Cortistatin as a Novel Multimodal Therapy for the Treatment of Parkinson’s Disease

    Parkinson’s disease (PD) is a complex disorder characterized by the impairment of the dopaminergic nigrostriatal system. The global burden of PD has doubled in the last few years, making it the leading neurological disability worldwide. Therefore, there is an urgent need to develop innovative approaches that target multifactorial underlying causes to potentially prevent or limit disease progression. Accumulating evidence suggests that neuroinflammatory responses may play a pivotal role in the neurodegenerative processes that occur during the development of PD. Cortistatin is a neuropeptide that has shown potent anti-inflammatory and immunoregulatory effects in preclinical models of autoimmune and neuroinflammatory disorders. The goal of this study was to explore the therapeutic potential of cortistatin in a well-established preclinical mouse model of PD induced by acute exposure to the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). We observed that treatment with cortistatin mitigated the MPTP-induced loss of dopaminergic neurons in the substantia nigra and their connections to the striatum. Consequently, cortistatin administration improved the locomotor activity of animals intoxicated with MPTP. In addition, cortistatin diminished the presence and activation of glial cells in the affected brain regions of MPTP-treated mice, reduced the production of immune mediators, and promoted the expression of neurotrophic factors in the striatum. In an in vitro model of PD, treatment with cortistatin also reduced the cell death of dopaminergic neurons exposed to the neurotoxin. Taken together, these findings suggest that cortistatin could emerge as a promising new therapeutic agent that combines anti-inflammatory and neuroprotective properties to regulate the progression of PD at multiple levels.

    iSee: a case-based reasoning platform for the design of explanation experiences.

    Explainable Artificial Intelligence (XAI) is an emerging field within Artificial Intelligence (AI) that has provided many methods enabling humans to understand and interpret the outcomes of AI systems. However, deciding on the best explanation approach for a given AI problem is currently a challenging decision-making task. This paper presents the iSee project, which aims to address some of the XAI challenges by providing a unifying platform where personalized explanation experiences are generated using Case-Based Reasoning. An explanation experience includes the proposed solution to a particular explainability problem and its corresponding evaluation, provided by the end user. The ultimate goal is to provide an open catalog of explanation experiences that can be transferred to other scenarios where trustworthy AI is required.

    iSee: intelligent sharing of explanation experiences.

    The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of explanations. There is a growing armoury of XAI methods for interpreting ML models and explaining their predictions, recommendations and diagnoses. We refer to these collectively as "explanation strategies". As these explanation strategies mature, practitioners gain experience in understanding which strategies to deploy in different circumstances. What is lacking, and what the iSee project will address, is the science and technology for capturing, sharing and re-using explanation strategies based on similar user experiences, along with a much-needed route to explainable AI (XAI) compliance. Our vision is to improve every user's experience of AI by harnessing experiences of best practice in XAI, providing an interactive environment where personalised explanation experiences are accessible to everyone. Video Link: https://youtu.be/81O6-q_yx0