1,111 research outputs found

    Il Concilio Vaticano II attraverso lo studio degli Archivi dei Padri conciliari [The Second Vatican Council through the Study of the Archives of the Council Fathers]


    XAI.it 2022 - Preface to the Third Italian Workshop on Explainable Artificial Intelligence

    Artificial Intelligence systems play an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users’ right to explanation when people face artificial intelligence-based technologies. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on final users, studying the role of explanation strategies, and investigating how to provide users with more control over the behavior of intelligent systems. XAI.it, the Italian workshop on Explainable AI, addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the various sub-fields of XAI.

    Mathematical Models for Minimizing Total Tardiness on Parallel Additive Manufacturing Machines

    In this research we tackle the scheduling problem in additive manufacturing on unrelated parallel machines. Both the nesting and the scheduling aspects are considered. Parts have several alternative build orientations. The goal is to minimize the total tardiness of the parts. We propose a mixed-integer linear programming model which treats the nesting subproblem as a 2D bin-packing problem, as well as a model which simplifies the nesting subproblem to a 1D bin-packing problem. The computational efficiency and properties of the proposed models are investigated through numerical experiments. The results show that the total tardiness objective significantly increases the complexity of the problem: only the simple instances are solved to optimality, whereas the makespan variant is able to solve all test instances. Using the 1D bin-packing simplification allows more instances to be solved to optimality, but with a risk of nesting infeasibility. We also observed a trade-off between the total tardiness and makespan objectives, which originates from the dilemma of “packing more parts to benefit from the common machine setup/recoating time” versus “packing fewer parts to maintain the flexibility for handling distributed due dates”.
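    As a rough illustration of the simplified variant, the sketch below builds a toy mixed-integer model with the open-source PuLP library and the CBC solver: parts are assigned to sequential batches on two machines, the nesting subproblem is reduced to a 1D build-plate-area constraint, and total tardiness is minimized through big-M linking constraints. The part and machine data, the batch time model, and the fixed number of batch positions are invented assumptions, and build orientations and machine speed differences are omitted; this is a minimal sketch, not the authors' formulation.

```python
# Minimal illustrative MILP sketch (assumed data, not the paper's exact model):
# parts -> batches on parallel machines, 1D area nesting, total tardiness objective.
import pulp

parts = {  # hypothetical parts: build-plate area (cm^2), processing time (h), due date (h)
    "p1": dict(area=40, time=6, due=20),
    "p2": dict(area=55, time=8, due=24),
    "p3": dict(area=30, time=5, due=18),
    "p4": dict(area=70, time=9, due=40),
}
machines = {"m1": dict(cap=100, setup=2), "m2": dict(cap=80, setup=2)}
B = [0, 1]        # batch positions per machine, processed in this order
BIG = 10_000      # big-M constant for linking constraints

prob = pulp.LpProblem("tardiness_1d_nesting", pulp.LpMinimize)

# x[j][m][b] = 1 if part j is nested into batch b of machine m
x = pulp.LpVariable.dicts("x", (parts, machines, B), cat="Binary")
# completion time of each batch and tardiness of each part
C = pulp.LpVariable.dicts("C", (machines, B), lowBound=0)
T = pulp.LpVariable.dicts("T", parts, lowBound=0)

prob += pulp.lpSum(T[j] for j in parts)  # objective: total tardiness

for j in parts:  # every part goes into exactly one batch
    prob += pulp.lpSum(x[j][m][b] for m in machines for b in B) == 1

for m in machines:
    for b in B:
        # 1D nesting: total part area must fit on the build plate
        prob += pulp.lpSum(parts[j]["area"] * x[j][m][b] for j in parts) <= machines[m]["cap"]
        # simplified batch duration: common setup/recoating time + part processing times
        dur = machines[m]["setup"] + pulp.lpSum(parts[j]["time"] * x[j][m][b] for j in parts)
        prev = C[m][b - 1] if b > 0 else 0
        prob += C[m][b] >= prev + dur

for j in parts:  # tardiness linked to the completion time of the chosen batch
    for m in machines:
        for b in B:
            prob += T[j] >= C[m][b] - parts[j]["due"] - BIG * (1 - x[j][m][b])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for j in parts:
    print(j, "tardiness =", pulp.value(T[j]))
```

    The trade-off mentioned in the abstract is visible even in this toy model: merging parts into one batch shares the setup/recoating time but ties all of them to the same, later completion time, which can increase the tardiness of parts with early due dates.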

    XAI.it 2020 - Preface to the First Italian Workshop on Explainable Artificial Intelligence

    Artificial Intelligence systems play an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users’ right to explanation when people face artificial intelligence-based technologies. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on final users, studying the role of explanation strategies, and investigating how to provide users with more control over the behavior of intelligent systems. XAI.it, the first Italian workshop on Explainable AI, addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the various sub-fields of XAI.

    Ocular Refraction at Birth and Its Development During the First Year of Life in a Large Cohort of Babies in a Single Center in Northern Italy

    The purpose of this study was to investigate refraction at birth and during the first year of life in a large cohort of babies born in a single center in Northern Italy. We also aimed to analyze refractive errors in relation to gestational age at birth. An observational ophthalmological assessment was performed within 24 h of birth on 12,427 newborns. Refraction was examined using streak retinoscopy after the administration of tropicamide (1%). Values in the range +0.50 ≤ D ≤ +4.00 were defined as physiological refraction at birth. Newborns with refraction values outside the physiological range were followed up during the first year of life. Comparative analyses were conducted in a subgroup of babies with known gestational ages. The following distribution of refraction at birth was recorded: 88.03% of the babies had physiological refraction, 5.03% had moderate hyperopia, 2.14% had severe hyperopia, 3.4% had emmetropia, 0.45% had myopia, 0.94% had astigmatism, and 0.01% had anisometropia. By the end of the first year of life, we observed reductions in hyperopia and astigmatism, and stabilization of myopia. Preterm babies had a four-fold higher risk of congenital myopia and a three-fold higher risk of congenital emmetropia compared to term babies. Refraction profiles obtained at birth changed during the first year of life, leading to a normalization of the refraction values. Gestational age at birth affected the incidence of refractive errors and amblyopia.

    XAI.it 2021 - Preface to the Second Italian Workshop on Explainable Artificial Intelligence

    Artificial Intelligence systems play an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users' right to explanation when people face artificial intelligence-based technologies. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on final users, studying the role of explanation strategies, and investigating how to provide users with more control over the behavior of intelligent systems. XAI.it, the Italian workshop on Explainable AI, addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the various sub-fields of XAI.

    Exploring the effects of natural language justifications in food recommender systems

    Users of food recommender systems typically prefer popular recipes, which tend to be unhealthy. To encourage users to select healthier recommendations by making more informed food decisions, we introduce a methodology to generate and present a natural language justification that emphasizes the nutritional content, or the health risks and benefits, of recommended recipes. We designed a framework that takes a user and two food recommendations as input and produces an automatically generated natural language justification as output, based on the user's characteristics and the recipes' features. In doing so, we implemented and evaluated eight different justification strategies through two different justification styles (e.g., comparing each recipe's food features) in an online user study (N = 503). We compared user food choices for two personalized recommendation approaches, popularity-based vs. our health-aware algorithm, and evaluated the impact of presenting natural language justifications. We showed that comparative justification styles are effective in supporting choices for our health-aware recommendations, confirming the impact of our methodology on food choices.
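    As a rough illustration of the comparative style, the sketch below shows one hypothetical justification strategy: given a user and two recommended recipes, it picks a nutritional feature based on a user characteristic, compares the recipes on that feature, and phrases the result in natural language. The data classes, feature names, thresholds, and wording are invented assumptions and do not reproduce the authors' eight strategies or their templates.

```python
# Hypothetical sketch of a comparative natural-language justification strategy.
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    calories: int         # kcal per serving (illustrative feature)
    saturated_fat: float  # grams per serving (illustrative feature)

@dataclass
class User:
    name: str
    watches_calories: bool  # simplified stand-in for a "user characteristic"

def comparative_justification(user: User, a: Recipe, b: Recipe) -> str:
    """Compare the two recommended recipes on one feature and justify the healthier one."""
    if user.watches_calories:
        healthier = a if a.calories <= b.calories else b
        other = b if healthier is a else a
        return (f"{healthier.name} has {other.calories - healthier.calories} fewer kcal "
                f"per serving than {other.name}, so it better fits your calorie goals.")
    healthier = a if a.saturated_fat <= b.saturated_fat else b
    other = b if healthier is a else a
    return (f"{healthier.name} contains less saturated fat than {other.name} "
            f"({healthier.saturated_fat} g vs {other.saturated_fat} g per serving).")

print(comparative_justification(
    User("demo", watches_calories=True),
    Recipe("Vegetable soup", calories=220, saturated_fat=1.5),
    Recipe("Carbonara", calories=650, saturated_fat=12.0),
))
```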
