61,955 research outputs found

    Trust Management for Artificial Intelligence: A Standardization Perspective

    With the continuing increase in the development and use of artificial intelligence systems and applications, problems caused by unexpected behavior and errors of these systems have emerged. The importance of trust analysis and management technology for artificial intelligence systems is therefore growing, so that users who wish to adopt such systems can anticipate their behavior and use their services safely. This study proposes trust management requirements for artificial intelligence and a trust management framework based on them. Furthermore, we present standardization challenges that must be addressed before trust management technology can be applied to, and spread across, real artificial intelligence systems. We aim to stimulate related standardization activities towards a globally acceptable methodology for trust management in artificial intelligence, while emphasizing the challenges that remain to be addressed from a standardization perspective.

    Towards European Anticipatory Governance for Artificial Intelligence

    This report presents the findings of the Interdisciplinary Research Group “Responsibility: Machine Learning and Artificial Intelligence” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Technology and Global Affairs research area of DGAP. In September 2019, they brought leading experts from research and academia together with policy makers and representatives of standardization authorities and technology organizations to set framework conditions for a European anticipatory governance regime for artificial intelligence (AI).

    IE WP 20/03 An Evolutionary Approach to the Process of Technology Diffusion and Standardization

    The study described here aims to make a threefold contribution to the analysis of technology diffusion. First, it offers a new approach to the study of the dynamics of innovation diffusion, considering not the traditional perspective of the rate at which a single new technology is fully adopted, but the extent of the diffusion of several technologies and the related phenomenon of standardization. Second, it presents a broadened, evolutionary view of the process of technology standardization that avoids the habitual determinism of conventional models of technology diffusion and lock-in. Finally, it identifies and evaluates the relationships between the main characteristics of industries and the attributes of the technology standardization processes within them. To achieve these goals we have developed an agent-based model (ABM), using distributed artificial intelligence (DAI) concepts drawn from the general methodology of social simulation.
    Keywords: technology diffusion; standardization; lock-in; evolutionary models; agent-based models
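    The working paper above builds on an agent-based model in which increasing returns to adoption drive diffusion towards a de facto standard. As a rough illustration of that general mechanism (not the authors' actual model), the sketch below assumes two intrinsically equivalent technologies and agents whose utility grows with the installed base, so early random fluctuations can tip the market into lock-in; all names and parameter values are illustrative assumptions.

```python
# Minimal agent-based sketch of technology diffusion with increasing returns.
# Illustrative only: agents sequentially adopt whichever technology offers the
# higher (noisy) utility, where utility rises with the current installed base,
# so runs often lock in to one technology despite equal intrinsic quality.
import random

N_AGENTS = 500
TECHNOLOGIES = ["A", "B"]
BASE_UTILITY = {"A": 1.0, "B": 1.0}   # intrinsically equivalent technologies
NETWORK_WEIGHT = 0.01                 # strength of increasing returns to adoption

def adopt(installed_base):
    """Pick the technology with the highest noisy utility given prior adopters."""
    scores = {
        t: BASE_UTILITY[t] + NETWORK_WEIGHT * installed_base[t] + random.gauss(0, 0.5)
        for t in TECHNOLOGIES
    }
    return max(scores, key=scores.get)

def run(seed=0):
    random.seed(seed)
    installed_base = {t: 0 for t in TECHNOLOGIES}
    for _ in range(N_AGENTS):
        installed_base[adopt(installed_base)] += 1
    return installed_base

if __name__ == "__main__":
    for s in range(3):
        print(f"run {s}: {run(seed=s)}")  # different seeds often standardize on different technologies
```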

    Social Choice Optimization

    Social choice is the theory of collective decision-making towards social welfare, starting from individual opinions, preferences, interests or welfare. The field of Computational Social Welfare is relatively recent and is gaining impact in the Artificial Intelligence community. Classical literature makes the assumption of single-peaked preferences, i.e. there exists an order over the preferences and a global maximum in that order. This year, theoretical results were published on Two-stage Approval Voting Systems (TAVs), Multi-winner Selection Rules (MWSR), and Incomplete (IPs) and Circular Preferences (CPs). The purpose of this paper is three-fold: firstly, I introduce Social Choice Optimisation as a generalisation of TAVs in which there is a max stage and a min stage, thus implementing Minimax, a well-known Artificial Intelligence decision-making rule for minimizing hindrance towards a (social) goal. Secondly, I introduce, following my Open Standardization and Open Integration Theory (currently under refinement) put into practice in my dissertation, the Open Standardization of Social Inclusion as a global social goal of Social Choice Optimization.
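    The max/min structure described above can be made concrete with a small sketch. The paper's exact TAV and Minimax formulations are not reproduced here, so the following is only an assumed illustration: an approval (max) stage shortlists the most-approved candidates, and a minimax (min) stage then picks the shortlisted candidate whose worst-off voter ranks it least badly. The ballot formats, dissatisfaction measure, and tie-breaking are all illustrative assumptions.

```python
# Illustrative two-stage selection: a max (approval) stage followed by a
# min (minimax) stage. Not the paper's formal definition of TAVs.
def two_stage_select(approvals, rankings, shortlist_size=2):
    # Stage 1 (max): shortlist the candidates with the most approvals.
    candidates = {c for ballot in approvals for c in ballot}
    counts = {c: sum(c in ballot for ballot in approvals) for c in candidates}
    shortlist = sorted(candidates, key=lambda c: (-counts[c], c))[:shortlist_size]

    # Stage 2 (min): minimize the maximum dissatisfaction, measured here as the
    # worst rank position any voter assigns to the candidate (0 = top choice).
    def worst_rank(candidate):
        return max(r.index(candidate) if candidate in r else len(r) for r in rankings)

    return min(shortlist, key=worst_rank)

if __name__ == "__main__":
    approvals = [{"x", "y"}, {"y"}, {"x", "z"}, {"y", "z"}]
    rankings = [["x", "y", "z"], ["y", "z", "x"], ["x", "z", "y"], ["y", "z", "x"]]
    print(two_stage_select(approvals, rankings))  # the minimax winner among the shortlist
```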

    Technology and the environment: an evolutionary approach to sustainable technological change

    (WP 02/04) The results of our model show that it would be advisable to undertake policies expressly aimed at the process of sustainable technological change, in a way that is complementary to the conventional equilibrium-oriented environmental policies. In short, the main objectives of this paper are to understand more fully the dynamics of the process of technological change and its role in sustainable development, and to assess the implications of this dynamic approach for techno-environmental policy. To achieve these goals we have developed an agent-based model, using distributed artificial intelligence concepts drawn from the general methodology of social simulation.
    Keywords: agent-based models; evolutionary models; lock-in; standardization; technology diffusion; sustainability

    Artificial intelligence as a coming revolution in medicine

    Introduction: The development of medicine and information technology in recent decades has undoubtedly contributed to improving public health. Artificial intelligence is a technology with great potential to revolutionize the functioning of health care around the world. Appropriate use of this technology can revolutionize many areas of modern medicine; however, it should not be forgotten that it must be subject to appropriate standardization and legal regulation. Objective: The purpose of this study is to review the available scientific literature in order to systematize current knowledge on the use of artificial intelligence in diagnosis and treatment. Ethical aspects related to the implementation of AI in health care are also analyzed. Results: Artificial intelligence relies on deep machine learning algorithms. The technology has been known for a long time, but the prospects for its widespread use have recently increased significantly, although scientists still do not fully understand the operation of AI algorithms. Attempts are currently being made to use the technology in many medical fields, such as cardiology, diagnostic imaging, gastroenterology, pathomorphology and ultrasound. Artificial intelligence can also be used to improve patient services in health care. Summary: The development of artificial intelligence algorithms creates a huge opportunity to improve the quality of diagnostic and treatment processes. The current rapid development of the technology is revolutionizing many branches of medicine and improving treatment outcomes. However, its further development requires the creation of appropriate law governing AI in medicine.

    Multimodality Imaging in Sarcomeric Hypertrophic Cardiomyopathy: Get It Right…on Time

    Hypertrophic cardiomyopathy (HCM) follows highly variable paradigms and disease-specific patterns of progression towards heart failure, arrhythmias and sudden cardiac death. Therefore, a generalized standard approach, shared with other cardiomyopathies, can be misleading in this setting. A multimodality imaging approach facilitates differential diagnosis of phenocopies and improves clinical and therapeutic management of the disease. However, only a profound knowledge of the progression patterns, including clinical features and imaging data, enables appropriate use of all these resources in clinical practice. Combinations of various imaging tools and novel artificial intelligence techniques have a potentially relevant role in diagnosis, clinical management and definition of prognosis. Nonetheless, several barriers persist, such as the unclear appropriate timing of imaging and the lack of universal standardization of measures and normal reference limits. This review provides an overview of the current knowledge on multimodality imaging and the potential of novel tools, including artificial intelligence, in the management of patients with sarcomeric HCM, highlighting the importance of specific "red alerts" for understanding the phenotype-genotype linkage.

    Argument mining: A machine learning perspective

    Argument mining has recently become a hot topic, attracting interest from several diverse research communities, ranging from artificial intelligence and computational linguistics to natural language processing and the social and philosophical sciences. In this paper, we describe the problems and challenges of argument mining from a machine learning angle. In particular, we argue that machine learning techniques have so far been under-exploited, and that a more proper standardization of the problem, also with regard to the underlying argument model, could be a crucial element in developing better systems.
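    As one concrete (and deliberately simplified) reading of the machine learning angle above, argument mining subtasks such as argument component detection are often cast as supervised text classification. The sketch below assumes that framing with a toy corpus; the sentences, labels, and pipeline choices are illustrative and not drawn from the paper.

```python
# Toy sketch: argument component detection framed as sentence classification.
# The sentences and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "We should ban single-use plastics.",           # claim
    "Plastic waste harms marine ecosystems.",       # premise
    "The meeting starts at nine.",                  # non-argumentative
    "Remote work should remain an option.",         # claim
    "Studies report higher productivity at home.",  # premise
    "The office is on the third floor.",            # non-argumentative
]
labels = ["claim", "premise", "none", "claim", "premise", "none"]

# TF-IDF features with a linear classifier: a common, simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(sentences, labels)
print(model.predict(["Cities should expand bike lanes.",
                     "Cycling reduces traffic congestion."]))
```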

    Towards Experimental Standardization for AI governance in the EU

    The EU has adopted a hybrid governance approach to address the challenges posed by Artificial Intelligence (AI), emphasizing the role of harmonized European standards (HES). Despite advantages in expertise and flexibility, HES processes face legitimacy problems and struggle with epistemic gaps in the context of AI. This article addresses the problems that characterize HES processes by outlining the conceptual need, theoretical basis, and practical application of experimental standardization, which is defined as an ex-ante evaluation method that can be used to test standards for their effects and effectiveness. Experimental standardization is based on theoretical and practical developments in experimental governance, legislation, and innovation. Aligned with ideas and frameworks like Science for Policy and evidence-based policymaking, it enables co-creation between science and policymaking. We apply the proposed concept in the context of HES processes, where we submit that experimental standardization contributes to increasing throughput and output legitimacy, addressing epistemic gaps, and generating new regulatory knowledge.

    Deep Reinforcement Learning that Matters

    In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
    Comment: Accepted to the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018
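    The reporting problem described above can be illustrated with a short sketch: aggregate the final returns of several independent runs (one per random seed) for each method, report the mean and standard deviation, and apply a significance test before claiming an improvement. The return values below are invented for illustration, and Welch's t-test is one reasonable choice rather than necessarily the procedure the paper recommends.

```python
# Illustrative seed-level comparison of two deep RL methods.
# The return values are made up; only the reporting pattern matters here.
import numpy as np
from scipy import stats

# Final average returns from hypothetical independent training runs, one per seed.
method_a = np.array([3120.0, 2895.0, 3410.0, 2760.0, 3055.0])
method_b = np.array([3310.0, 2980.0, 3505.0, 3150.0, 3275.0])

for name, returns in [("A", method_a), ("B", method_b)]:
    print(f"method {name}: mean={returns.mean():.1f}, std={returns.std(ddof=1):.1f}, n={len(returns)}")

# Welch's t-test: does not assume equal variance between the two methods.
t_stat, p_value = stats.ttest_ind(method_a, method_b, equal_var=False)
print(f"Welch's t-test: t={t_stat:.2f}, p={p_value:.3f}")
```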