
CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies (CEPS) launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design, composed of academics, industry players from various sectors, policymakers and civil society. It is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share threat information and to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI presented by the High-Level Expert Group on AI (HLEG) on 8 April 2019. In particular, it analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list aimed at helping the public and private sectors operationalise Trustworthy AI. The list comprises 131 items intended to guide AI designers and developers throughout the design, development and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is undergoing a revision to be finalised in early 2020. This report contributes to that revision by addressing, in particular, the interplay between AI and cybersecurity. The evaluation was made against specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles not backed by law; whether they recognise that attacks on AI are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough to be clearly and easily measured and implemented by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group of more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, with more than 450 stakeholders signed up and contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
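    These evaluation criteria lend themselves to a simple structured encoding. The sketch below is purely illustrative (the field names and the flagging rule are assumptions, not part of the CEPS report); it shows how each of the 131 assessment items could be scored against the Task Force's criteria.

```python
from dataclasses import dataclass

@dataclass
class AssessmentItemEvaluation:
    """One Trustworthy AI Assessment List item, scored against the
    Task Force's evaluation criteria (field names are illustrative)."""
    item_text: str
    cites_existing_law: bool        # refers to existing legislation (e.g. GDPR, EU Charter)
    cites_moral_principles: bool    # refers to moral principles not backed by law
    distinguishes_ai_attacks: bool  # treats attacks on AI as distinct from traditional cyberattacks
    risk_level_compatible: bool     # compatible with different risk levels
    easily_measurable: bool         # clear/easy to measure and implement, incl. by SMEs
    industry_obstacle: bool         # overall: likely to create obstacles for industry

def flag_for_revision(item: AssessmentItemEvaluation) -> bool:
    """Hypothetical rule: flag items that are burdensome or hard to operationalise."""
    return item.industry_obstacle or not (
        item.risk_level_compatible and item.easily_measurable
    )
```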

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS: A Collection of Technical Notes, Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (TIGARS) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance (overview and issues), Resilience and Safety Requirements, the Open Systems Perspective, and Formal Verification and Static Analysis of ML Systems. Part 2 covers: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.

    Towards Specifying and Evaluating the Trustworthiness of an AI-Enabled System

    Applied AI has shown promise in processing data for key industries and government agencies to extract actionable information used to make important strategic decisions. One of the core features of AI-enabled systems is their trustworthiness, which has important implications for the robustness and full acceptance of these systems. In this paper, we explain what trustworthiness in AI-enabled systems means and the key technical challenges of specifying and verifying trustworthiness. Toward solving these technical challenges, we propose a method to specify and evaluate the trustworthiness of AI-based systems using quality-attribute scenarios and design tactics. Using our trustworthiness scenarios and design tactics, we can analyze the architectural design of AI-enabled systems to ensure that trustworthiness has been properly expressed and achieved. The contributions of this work include (i) the identification of the trustworthiness sub-attributes that affect the trustworthiness of AI systems, (ii) the proposal of trustworthiness scenarios to specify trustworthiness in an AI system, (iii) a design checklist to support the analysis of the trustworthiness of AI systems, and (iv) the identification of design tactics that can be used to achieve trustworthiness in an AI system.
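    Quality-attribute scenarios are conventionally written as six-part templates (source, stimulus, artifact, environment, response, response measure). Assuming the paper follows that standard template, a trustworthiness scenario might be encoded as in the following minimal sketch; all concrete values here are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrustworthinessScenario:
    """A quality-attribute scenario specialised to a trustworthiness
    sub-attribute, following the standard six-part scenario template."""
    sub_attribute: str     # e.g. "robustness" or "explainability"
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition the system must respond to
    artifact: str          # the part of the system being stimulated
    environment: str       # operating conditions when the stimulus arrives
    response: str          # the required system behaviour
    response_measure: str  # how achievement of the response is verified

# Hypothetical robustness scenario for an image classifier.
example = TrustworthinessScenario(
    sub_attribute="robustness",
    source="adversarial user",
    stimulus="submits a deliberately perturbed input image",
    artifact="model inference service",
    environment="normal production operation",
    response="low-confidence prediction is flagged and routed to human review",
    response_measure="at least 95% of perturbed inputs in a benchmark suite are flagged",
)
```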

    Trust and Transparency in Artificial Intelligence. Ethics & Society Opinion. European Commission

    The Ethics and Society Subproject has developed this Opinion to clarify the lessons the Human Brain Project (HBP) can draw from the current discussion of artificial intelligence, in particular its social and ethical aspects, and to outline areas where the HBP could usefully contribute. The EU and numerous other bodies are promoting and implementing a wide range of policies aimed at ensuring that AI is beneficial, that is, that it serves society. The HBP, as a leading project bringing together neuroscience and ICT, is in an excellent position both to contribute to and to benefit from these discussions. This Opinion therefore highlights some key aspects of the discussion, shows their relevance to the HBP, and develops a list of six recommendations.

    Chasing the Chatbots: Directions for Interaction and Design Research

    Big tech players have been successful in pushing chatbots forward. Investments in the technology are growing fast, as are the number of users and the range of applications available. Yet rather than guiding these investments towards a successful diffusion of the technology, user-centred studies are currently chasing the popularity of chatbots. A literature analysis shows how recent this research topic is and how strongly it is dominated by technical challenges rather than by understanding users' perceptions, expectations and contexts of use. Looking for answers to interaction and design questions raised in 2007, when the presence of clever computers in everyday life was predicted for the year 2020, this paper presents a panorama of the recent literature, revealing gaps and pointing out directions for further user-centred research.

    A multilevel framework for AI governance

    To realize the potential benefits of AI and mitigate its potential risks, it is necessary to develop a framework of governance that conforms to ethics and fundamental human values. Although several organizations have issued guidelines and ethical frameworks for trustworthy AI, without a mediating governance structure these ethical principles will not translate into practice. In this paper, we propose a multilevel governance approach that involves three groups of interdependent stakeholders: governments, corporations, and citizens. We examine their interrelationships through dimensions of trust, such as competence, integrity, and benevolence. The levels of governance, combined with the dimensions of trust in AI, provide practical insights that can be used to further enhance user experiences and inform public policy related to AI.
    Comment: This paper has been accepted for publication and is forthcoming in The Global and Digital Governance Handbook. Cite as: Choung, H., David, P., & Seberger, J. S. (2023). A multilevel framework for AI governance. In The Global and Digital Governance Handbook. Routledge, Taylor & Francis Group.

    The Power of Trust: Designing Trustworthy Machine Learning Systems in Healthcare

    Machine Learning (ML) systems have enormous potential to improve medical care, but skepticism about their use persists. Their inscrutability is a major concern, which can lead to negative attitudes that reduce end users' trust and result in rejection. Consequently, many ML systems in healthcare suffer from a lack of user-centricity. To overcome these challenges, we designed a user-centered, trustworthy ML system by applying design science research. The design includes meta-requirements and design principles instantiated by mockups, and is grounded in our kernel theory, the Trustworthy Artificial Intelligence principles. In three design cycles, we refined the design through focus group discussions (N1=8), an evaluation of existing applications, and an online survey (N2=40). Finally, an effectiveness test was conducted with end users (N3=80) to assess the perceived trustworthiness of our design. The results demonstrated that end users did indeed perceive our design as more trustworthy.