
    XAI.it 2022 - Preface to the Third Italian Workshop on Explainable Artificial Intelligence

    Artificial Intelligence systems are playing an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is no coincidence that the recent General Data Protection Regulation (GDPR) emphasized users’ right to explanation when facing artificial intelligence-based technologies. Unfortunately, current research tends to move in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control over the behavior of intelligent systems. XAI.it, the Italian workshop on Explainable AI, addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges and innovative approaches in the various sub-fields of XAI

    XAI.it 2021 - Preface to the Second Italian Workshop on Explainable Artificial Intelligence

    Artificial Intelligence systems are playing an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is no coincidence that the recent General Data Protection Regulation (GDPR) emphasized users' right to explanation when facing artificial intelligence-based technologies. Unfortunately, current research tends to move in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control over the behavior of intelligent systems. XAI.it, the Italian workshop on Explainable AI, addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges and innovative approaches in the various sub-fields of XAI

    Evaluation of MU-MIMO Digital Beamforming Algorithms in B5G/6G LEO Satellite Systems

    Satellite Communication (SatCom) systems will be a key component of 5G and 6G networks in achieving the goal of providing unlimited and ubiquitous communications and deploying smart and sustainable networks. To meet the ever-increasing demand for higher throughput in 5G and beyond, aggressive frequency reuse schemes (i.e., full frequency reuse), combined with digital beamforming techniques to cope with the massive co-channel interference, are recognized as a key solution. Aiming to (i) avoid the joint optimization problem over the beamforming vectors of all users, (ii) split it into distinct per-user problems, and (iii) find a closed-form solution, we propose a beamforming algorithm based on maximizing the Signal-to-Leakage-and-Noise Ratio (SLNR) of the users served by a Low Earth Orbit (LEO) satellite. We investigate and assess the performance of several beamforming algorithms, including both those based on Channel State Information (CSI) at the transmitter, i.e., Minimum Mean Square Error (MMSE) and Zero-Forcing (ZF), and those requiring only the users' locations, i.e., Switchable Multi-Beam (MB). Through a detailed numerical analysis, we provide a thorough comparison of the per-user achievable spectral efficiency of the aforementioned beamforming schemes, and we show that the proposed SLNR beamforming technique outperforms both the MMSE and ZF schemes in the presented SatCom scenario
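    The per-user, closed-form character of the SLNR criterion is what allows the joint design to be split: each user's beamformer maximizes a Rayleigh quotient whose numerator (the useful signal) is rank one and whose denominator collects noise plus the leakage towards all other users, so the optimum is a scaled inverse of the leakage-plus-noise matrix applied to the user's channel. As a minimal illustration, the sketch below applies this idea to a generic narrowband multi-user MISO downlink with i.i.d. Rayleigh channels; it is not the paper's LEO satellite channel model, the MMSE/ZF/MB baselines are omitted, and the function and variable names are our own assumptions.

```python
import numpy as np

def slnr_beamformers(H, noise_var):
    """Closed-form SLNR-maximizing beamformers, one per user.

    H         : (K, N) complex array; row k holds user k's channel h_k.
    noise_var : receiver noise variance sigma^2.
    Returns W : (K, N) array; row k is the unit-norm beamformer w_k.
    """
    K, N = H.shape
    W = np.zeros((K, N), dtype=complex)
    for k in range(K):
        # Leakage-plus-noise matrix: noise plus the channels of all other users.
        others = np.delete(H, k, axis=0)
        R = noise_var * np.eye(N) + others.T @ others.conj()
        # Rank-one numerator h_k h_k^H  =>  optimal w_k is proportional to R^{-1} h_k.
        w = np.linalg.solve(R, H[k])
        W[k] = w / np.linalg.norm(w)
    return W

# Toy usage: 4 single-antenna users, 16-element array, i.i.d. Rayleigh channels.
rng = np.random.default_rng(0)
K, N, sigma2 = 4, 16, 0.1
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W = slnr_beamformers(H, sigma2)
G = np.abs(H.conj() @ W.T) ** 2                      # G[k, j] = |h_k^H w_j|^2
sinr = np.diag(G) / (sigma2 + G.sum(axis=1) - np.diag(G))
print(np.log2(1 + sinr))                             # per-user spectral efficiency [bit/s/Hz]
```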

    Evaluation of multi-user multiple-input multiple-output digital beamforming algorithms in B5G/6G low Earth orbit satellite systems

    Satellite communication systems will be a key component of 5G and 6G networks in achieving the goal of providing unlimited and ubiquitous communications and deploying smart and sustainable networks. To meet the ever-increasing demand for higher throughput in 5G and beyond, aggressive frequency reuse schemes (i.e., full frequency reuse), combined with digital beamforming techniques to cope with the massive co-channel interference, are recognized as a key solution. Aiming to (i) avoid the joint optimization problem over the beamforming vectors of all users, (ii) split it into distinct per-user problems, and (iii) find a closed-form solution, we propose a beamforming algorithm based on maximizing the signal-to-leakage-and-noise ratio of the users served by a low Earth orbit satellite. We investigate and assess the performance of several beamforming algorithms, including both those based on channel state information at the transmitter, that is, minimum mean square error and zero forcing, and those requiring only the users' locations, that is, switchable multi-beam. Through a detailed numerical analysis, we provide a thorough comparison of the per-user achievable spectral efficiency of the aforementioned beamforming schemes, and we show that the proposed signal-to-leakage-and-noise ratio beamforming technique outperforms both the minimum mean square error and multi-beam schemes in the presented satellite communication scenario

    Reconstructive nature of temporal memory for movie scenes

    Remembering when events took place is a key component of episodic memory. Using a sensitive behavioral measure, the present study investigates whether spontaneous event segmentation and script-based prior knowledge affect memory for the time of movie scenes. In three experiments, different groups of participants were asked to indicate when short video clips extracted from a previously encoded movie occurred on a horizontal timeline that represented the video duration. When participants encoded the entire movie, they were more precise at judging the temporal occurrence of clips extracted from the beginning and the end of the film compared to its middle part, but also at judging clips that were closer to event boundaries. Removing the final part of the movie from the encoding session resulted in a systematic bias in memory for time. Specifically, participants increasingly underestimated the time of occurrence of the video clips as a function of their proximity to the missing part of the movie. An additional experiment indicated that such an underestimation effect generalizes to different audio-visual material and does not necessarily reflect poor temporal memory. By showing that memories are moved in time to make room for missing information, the present study demonstrates that narrative time can be adapted to fit a standard template regardless of what has been effectively encoded, in line with reconstructive theories of memory

    Opening the black box: a primer for anti-discrimination

    The pervasive adoption of Artificial Intelligence (AI) models in the modern information society requires counterbalancing the growing decision power delegated to AI models with risk assessment methodologies. In this paper, we consider the risk of discriminatory decisions and review approaches for discovering discrimination and for designing fair AI models. We highlight the tight relations between discrimination discovery and explainable AI, with the latter being a more general approach for understanding the behavior of black boxes

    GLocalX - From Local to Global Explanations of Black Box AI Models

    Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models are often “black boxes” that we are not able to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivational factor in trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black box models, built by aggregating “local” explanations. We present GLOCALX, a “local-first” model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLOCALX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models to emulate the given black box and, if possible, replace it entirely. We validate GLOCALX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLOCALX is able to accurately emulate several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show that it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for a trustworthy AI, necessary for adoption in high-stakes decision-making applications.
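    As a heavily simplified picture of what "hierarchically aggregating local decision rules" can look like, the sketch below greedily merges pairs of same-class rules, taking the interval hull of their shared premises, and accepts a merge only if fidelity to the black box on a reference sample does not drop. This is an illustrative toy under our own assumptions (rule representation, merge operator, acceptance criterion), not the published GLOCALX procedure, which relies on more refined join/refinement operators and batch-wise quality checks.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    bounds: dict   # feature index -> (low, high) interval premise
    label: int     # class predicted by the rule

    def covers(self, x):
        return all(lo <= x[f] <= hi for f, (lo, hi) in self.bounds.items())

def merge(a, b):
    """Least general generalization of two same-class rules: interval hull of shared premises."""
    shared = set(a.bounds) & set(b.bounds)
    return Rule({f: (min(a.bounds[f][0], b.bounds[f][0]),
                     max(a.bounds[f][1], b.bounds[f][1])) for f in shared}, a.label)

def fidelity(rules, X, y_bb):
    """Fraction of black-box labels y_bb reproduced by majority vote of covering rules."""
    hits = 0
    for x, y in zip(X, y_bb):
        votes = [r.label for r in rules if r.covers(x)]
        hits += bool(votes) and max(set(votes), key=votes.count) == y
    return hits / len(X)

def aggregate(rules, X, y_bb):
    """Bottom-up aggregation: keep merging same-class rule pairs while fidelity does not drop."""
    rules = list(rules)
    merged = True
    while merged and len(rules) > 1:
        merged = False
        base = fidelity(rules, X, y_bb)
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                if rules[i].label != rules[j].label:
                    continue
                candidate = [r for k, r in enumerate(rules) if k not in (i, j)]
                candidate.append(merge(rules[i], rules[j]))
                if fidelity(candidate, X, y_bb) >= base:
                    rules, merged = candidate, True
                    break
            if merged:
                break
    return rules
```

    Each accepted merge removes one rule, so the loop trades rule-set size against fidelity, mirroring the accuracy/comprehensibility trade-off discussed in the abstract above.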

    Aclees cf. sp. foveatus (Coleoptera Curculionidae), an exotic pest of Ficus carica in Italy: a sustainable approach to defence based on aluminosilicate minerals as host plant masking solids

    The exceptionally frequent arrival of alien pests is a major source of concern for farmers, who have to protect their crops from unfamiliar insects that often lack natural enemies in the newly invaded areas. A new pest belonging to the subfamily Molytinae (Coleoptera: Curculionidae), tribe Hylobiini, reported as Aclees sp. cf. foveatus Voss, was recently introduced into Italy. The species is responsible for severe damage in many Italian fig nurseries and orchards, particularly in the central-northern regions, i.e. Tuscany, Liguria and Latium. Currently, no active ingredients are registered against this insect on fig crops. An innovative and eco-friendly approach for controlling this exotic weevil infestation was investigated, using montmorillonite-based clays, either in their native state or containing copper(II) species, and clinoptilolite zeolites, in order to assess the adult weevils' perception of the different solid materials and, subsequently, to evaluate the capability of these innovative products to act as masking agents for the host plant and/or as contact repellents. The formulations containing copper(II)-exchanged clay and clinoptilolite zeolite showed promising preliminary results in terms of efficacy and environmental sustainability