
    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives, complemented by a targeted review of the existing literature on AI and national security. The research found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure that the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report aims to assess the High-Level Expert Group (HLEG) on AI's Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and private sectors operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance to ensure compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity.
    This evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear, easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    AI-Generated Fashion Designs: Who or What Owns the Goods?

    As artificial intelligence (“AI”) becomes an increasingly prevalent tool across a plethora of industries in today’s society, analyzing the potential legal implications attached to AI-generated works is becoming more popular. One of the industries impacted by AI is fashion. AI tools and devices are currently being used in the fashion industry to create fashion models, fabric designs, and clothing. An AI device’s ability to generate fashion designs raises the question of who will own the copyrights in those designs. Will it be the fashion designer who hires or contracts with the AI device programmer? Will it be the programmer? Or will it be the AI device itself? Designers invest a great deal of talent, time, and money into designing and creating each article of clothing and each accessory they release to the public; yet, under current copyright standards, designers will not likely be considered the authors of their creations. Ultimately, this Note makes policy proposals for future copyright legislation within the United States, particularly recommending that AI-generated and AI-assisted designs be copyrightable and owned by the designers who purchase the AI device.

    AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI

    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders on a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.

    The Making of Trustworthy and Competitive Artificial Intelligence: A Critical Analysis of the Problem Representations of AI in the European Commission’s AI Policy

    Artificial intelligence (AI), as a constantly developing technology that is difficult to define, strains a society not prepared for its impact. On the other hand, AI represents the future and comes with many opportunities. The European Commission has taken both views into account in its policy for AI, the European approach to AI. The European Commission’s AI policy, which introduces a regulation-based approach to AI as the first policy initiative of its kind in the world, offers a timely and intriguing topic of study. This thesis critically examines how AI is represented as a problem in the European Commission’s policy over a four-year time frame, from 2018 to 2021. It uses a combined set of methods: qualitative content analysis together with Carol Bacchi’s WPR approach to inspect five selected European Commission policy documents. Four of these policy documents are communication papers, supplemented by a white paper. With the help of qualitative content analysis, the main recurring themes of AI challenges and opportunities are teased out. The WPR approach is used to examine the progression of the AI policy and analyze the problem representations found in it. The research questions are the following: how has the European Commission’s policy on AI come about, and how has AI been represented as a policy problem by the European Commission? The thesis presents the formation of the AI policy by going through the policy documents over the period of four years. Additionally, the thesis demonstrates how the Commission’s AI policy is one piece of the puzzle that is EU digital politics aiming for technological sovereignty. From the Commission’s problem representation of AI, the challenges and opportunities, it is possible to analyze the implicit representations of AI in policy. Although the policy highlights trustworthiness and competitiveness through its regulatory actions, other aspects are present as well.
    AI has been represented in policy through eight perspectives: safety and security, ethical, legal, competitiveness, AI leadership, socioeconomic, ecological, and educational. All perspectives rationalize ways for AI to be embraced inside the European Union's borders and participate in shaping how AI is to be approached. The analysis of each category shows that issues related to safety and security, ethics, law, competitiveness, and AI leadership stand out, whereas socioeconomic, ecological, and educational matters are not as strongly stressed. Overall, this thesis has demonstrated how AI has been represented as a problem in the European Commission’s policy.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve, and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how this plays out with current technologies and, finally, what policy consequences may be drawn.