
    AI Watch: Methodology to Monitor the Evolution of AI Technologies

    In this report we present a methodology to assess the evolution of AI technologies in the context of the AI WATCH initiative. The methodology is centred on building the AIcollaboratory, a data-driven framework to collect and explore data about AI results, progress and, ultimately, capabilities. From the AIcollaboratory framework we then extract qualitative information on the state of the art, challenges and trends in AI research and development. The report first describes the administrative context of the study, followed by the proposed methodology to build the AIcollaboratory framework and exploit it for qualitative assessment. We also present some preliminary results of this monitoring process, together with conclusions and suggestions for future work. This document is an internal report of the AI WATCH initiative, to be agreed as the basis for future work on Task 2 of the administrative arrangement between the Joint Research Centre and DG CNECT.
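    The abstract describes the AIcollaboratory as a catalogue of AI results and progress. Below is a minimal sketch of what a single record in such a catalogue might look like; the field names and example values are hypothetical and only illustrate the kind of result-level data the framework is meant to collect and explore.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResultRecord:
    """One benchmark result, as it might be stored in an AIcollaboratory-style catalogue."""
    system: str                        # model or agent that produced the result
    benchmark: str                     # task, benchmark or competition
    metric: str                        # metric reported by the benchmark
    score: float                       # reported value of that metric
    date: str                          # publication or submission date (ISO format)
    capability: Optional[str] = None   # higher-level ability the task is mapped to
    source: Optional[str] = None       # paper, leaderboard or report URL

# Hypothetical example entry.
records = [
    ResultRecord("system-A", "ImageNet", "top-1 accuracy", 0.81,
                 "2019-06-01", capability="visual object recognition"),
]
print(records[0])
```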

    AI Watch: Assessing Technology Readiness Levels for Artificial Intelligence

    Artificial Intelligence (AI) offers the potential to transform our lives in radical ways. However, the main unanswered questions about this foreseen transformation are when and how it is going to happen. Not only do we lack the tools to determine what achievements will be attained in the near future, but we even underestimate what various technologies in AI are capable of today. Many so-called breakthroughs in AI are simply associated with highly cited research papers or good performance on some particular benchmarks. Certainly, the translation from papers and benchmark performance to products is faster in AI than in other, non-digital sectors. However, it is still the case that research breakthroughs do not directly translate into technology that is ready to use in real-world environments. This document describes an exemplar-based methodology to categorise and assess several AI research and development technologies by mapping them into Technology Readiness Levels (TRL), i.e., maturity and availability levels. We first interpret the nine TRLs in the context of AI and identify different categories in AI to which they can be assigned. We then introduce new bidimensional plots, called readiness-vs-generality charts, where we see that higher TRLs are achievable for low-generality technologies focusing on narrow or specific abilities, while low TRLs are still out of reach for more general capabilities. We include numerous examples of AI technologies in a variety of fields and show their readiness-vs-generality charts, serving as exemplars. Finally, we use the dynamics of several AI technology exemplars at different generality layers and moments in time to forecast some short-term and mid-term trends for AI.
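    A minimal sketch of how a readiness-vs-generality chart could be drawn, assuming a handful of hypothetical exemplars with invented generality and TRL values: each AI technology is plotted as a point, with generality on one axis and the estimated TRL on the other.

```python
import matplotlib.pyplot as plt

# Hypothetical exemplars: (generality level, estimated TRL). Values are illustrative only.
exemplars = {
    "speech transcription (single domain)": (1, 9),
    "machine translation (news text)": (2, 8),
    "open-domain dialogue": (4, 5),
    "general-purpose household robot": (6, 2),
}

generality = [g for g, _ in exemplars.values()]
trl = [t for _, t in exemplars.values()]

fig, ax = plt.subplots()
ax.scatter(generality, trl)
for name, (g, t) in exemplars.items():
    ax.annotate(name, (g, t), fontsize=8)

ax.set_xlabel("Generality (narrow to general)")
ax.set_ylabel("Technology Readiness Level (1-9)")
ax.set_ylim(0, 10)
ax.set_title("Readiness-vs-generality chart (illustrative)")
plt.show()
```

    Consistent with the abstract, points in such a chart tend to show high TRLs only at low generality, while more general capabilities sit at low readiness levels.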

    AI Watch 2019 Activity Report

    This report provides an overview of AI Watch activities in 2019. AI Watch is the European Commission knowledge service to monitor the development, uptake and impact of Artificial Intelligence (AI) for Europe. As part of the European strategy on AI, the European Commission and the Member States published in December 2018 a “Coordinated Plan on Artificial Intelligence” on the development of AI in the EU. The Coordinated Plan mentions the role of AI Watch in monitoring its implementation. AI Watch was launched in December 2018. It aims to monitor the European Union’s industrial, technological and research capacity in AI; AI national strategies and policy initiatives in the EU Member States; uptake and technical developments of AI; and AI use and impact in public services. AI Watch will also provide analyses of education and skills for AI; AI key technological enablers; data ecosystems; and the social perspective on AI. AI Watch has a European focus within the global landscape and works in coordination with Member States. In its first year AI Watch has developed and proposed methodologies for data collection and analysis in a wide range of AI-impacted domains, and has presented new results that can already support policy making on AI in the EU. In the coming months AI Watch will continue collecting and analysing new information. All AI Watch results and analyses are published on the AI Watch public web portal (https://ec.europa.eu/knowledge4policy/ai-watch_en). AI Watch welcomes feedback. This report will be updated annually.

    Tracking AI: The Capability Is (Not) Near

    AI is an area of strategic importance, with the potential to be a key driver of economic development and with a wide range of potential social implications. In order to assess present and future impact, there is a need to analyse what AI can (and will) achieve. But what is AI capable of? This question is as crucial as it is elusive, since AI is progressing in ways that are open-ended about the techniques and resources it can operate with. The truth is that whenever a task is solved, researchers find it increasingly challenging to extrapolate whether the result can be reproduced, even when only a few things change: the data, the domain knowledge, the level of uncertainty, the (hyper)parameters, the techniques, the team, the compute, etc. In the end, we would like to infer whether a good result (or a breakthrough) in task A transfers to a similarly good result in task B. This extrapolation is precisely what the notion of capability, borrowed from psychology, tries to answer. However, we lack the tools, and the data, to do the same in AI. Benchmarks, competitions and challenges are behind much of the recent progress in AI, especially in machine learning (ML) [10], but the dynamics of rushing breakthroughs at the expense of massive data, compute, specialisation, etc., have led to a more complex AI landscape, in terms of what can be achieved and how. As a result, policy makers and other stakeholders have no way of assessing what AI systems can do today and in the future. This does not mean that we must disregard or understate the valuable information provided by a plethora of benchmarks. On the contrary, the analysis of the progress of AI must be based on data-grounded evidence, relying on finding and testing hypotheses through the computational analysis of large amounts of shared data [6], using open data science tools [11]. But this analysis must be abstracted from tasks to capabilities, for the purposes of integration and evaluation [8]. In this paper, we identify a series of problems in tracking and understanding what AI is capable of, surveying some previous initiatives. We present the AIcollaboratory, a data-driven framework to collect and explore data about AI results, progress and ultimately capabilities, being developed in the context of AI WATCH, the European Commission (EC) knowledge service to monitor the development, uptake and impact of AI in Europe. We close the paper with some challenges for the community emerging around the collaboratory.
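    As a rough illustration of abstracting "from tasks to capabilities", the sketch below groups task-level benchmark results under a hypothetical capability label and summarises them per system. The data and the task-to-capability mapping are invented for illustration and are not taken from the AIcollaboratory.

```python
import pandas as pd

# Hypothetical task-level results; the task-to-capability mapping is an assumption.
results = pd.DataFrame([
    {"task": "ImageNet", "capability": "visual recognition", "system": "A", "score": 0.81},
    {"task": "COCO", "capability": "visual recognition", "system": "A", "score": 0.55},
    {"task": "SQuAD", "capability": "reading comprehension", "system": "B", "score": 0.88},
    {"task": "HotpotQA", "capability": "reading comprehension", "system": "B", "score": 0.61},
])

# Abstract from individual tasks to capability-level summaries per system.
summary = (results
           .groupby(["capability", "system"])["score"]
           .agg(["mean", "min", "count"])
           .rename(columns={"count": "n_tasks"}))
print(summary)
```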

    AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues

    AI safety often analyses a risk or safety issue, such as interruptibility, under a particular AI paradigm, such as reinforcement learning. But what is an AI paradigm, and how does it affect the understanding and implications of the safety issue? Is AI safety research covering the most representative paradigms and the right combinations of paradigms with safety issues? Will current research directions in AI safety be able to anticipate more capable and powerful systems yet to come? In this paper we analyse these questions, introducing a distinction between two types of paradigms in AI: artefacts and techniques. We then use data on research and media documents from AI Topics, an official publication of the AAAI, to examine how safety research is distributed across artefacts and techniques. We observe that AI safety research is not sufficiently anticipatory and is heavily weighted towards certain research paradigms. We identify a need for AI safety research to be more explicit about the artefacts and techniques to which a particular issue applies, in order to identify gaps and cover a broader range of issues.
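    A minimal sketch of the kind of coverage analysis described here: cross-tabulating a corpus of annotated documents by artefact and technique to reveal which combinations are under-represented. The labels and counts below are assumptions for illustration, not the paper's actual coding of the AI Topics data.

```python
import pandas as pd

# Hypothetical document annotations; the paper derives such labels from AI Topics documents.
docs = pd.DataFrame([
    {"artefact": "agent", "technique": "reinforcement learning", "issue": "interruptibility"},
    {"artefact": "agent", "technique": "reinforcement learning", "issue": "reward hacking"},
    {"artefact": "classifier", "technique": "deep learning", "issue": "robustness"},
    {"artefact": "classifier", "technique": "deep learning", "issue": "robustness"},
    {"artefact": "agent", "technique": "evolutionary computation", "issue": "interruptibility"},
])

# Distribution of safety documents across artefact/technique combinations.
coverage = pd.crosstab(docs["artefact"], docs["technique"])
print(coverage)

# Zero (or near-zero) cells point to paradigm combinations that safety research may be neglecting.
```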

    AI Watch. Defining Artificial Intelligence. Towards an operational definition and taxonomy of artificial intelligence

    This report proposes an operational definition of artificial intelligence to be adopted in the context of AI Watch, the Commission knowledge service to monitor the development, uptake and impact of artificial intelligence for Europe. The definition, which will be used as a basis for the AI Watch monitoring activity, is established by means of a flexible scientific methodology that allows regular revision. The operational definition consists of a concise taxonomy and a list of keywords that characterise the core domains of the AI research field, together with transversal topics such as applications of the former or ethical and philosophical considerations, in line with the wider monitoring objective of AI Watch. The AI taxonomy is designed to inform the AI landscape analysis and is expected to also detect AI applications in neighbouring technological domains such as robotics (in a broader sense), neuroscience or the Internet of Things. The starting point for developing the operational definition is the definition of AI adopted by the High-Level Expert Group on artificial intelligence. To derive this operational definition we have followed a mixed methodology. On the one hand, we apply natural language processing methods to a large corpus of AI literature. On the other hand, we carry out a qualitative analysis of 55 key documents that include artificial intelligence definitions from three complementary perspectives: policy, research and industry. A valuable contribution of this work is the collection of definitions developed between 1955 and 2019, and the summarisation of the main features of the concept of artificial intelligence as reflected in the relevant literature.
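    A minimal sketch of the keyword side of such an operational definition: tagging a document with taxonomy domains whenever keywords associated with a domain appear in its text. The domains and keywords shown are illustrative placeholders, not the report's actual taxonomy or keyword list.

```python
import re

# Illustrative fragment of a domain-to-keywords mapping (not the report's actual list).
taxonomy_keywords = {
    "machine learning": ["machine learning", "neural network", "deep learning"],
    "robotics": ["robot", "autonomous vehicle", "manipulation"],
    "natural language processing": ["natural language", "speech recognition", "machine translation"],
}

def tag_document(text: str) -> list:
    """Return the taxonomy domains whose keywords appear in the text."""
    text = text.lower()
    return [domain for domain, keywords in taxonomy_keywords.items()
            if any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords)]

print(tag_document("We train a deep learning model for machine translation of patent text."))
# Expected output: ['machine learning', 'natural language processing']
```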

    Family and Prejudice: A Behavioural Taxonomy of Machine Learning Techniques

    One classical way of characterising the rich range of machine learning techniques is to define ‘families’ according to their formulation and learning strategy (e.g., neural networks, Bayesian methods, etc.). However, this taxonomy of learning techniques does not consider the extent to which models built with techniques from the same or different families agree on their outputs, especially when their predictions have to extrapolate into sparse zones where insufficient training data was available. In this paper we present a new taxonomy of machine learning techniques for classification, in which families are clustered according to their degree of (dis)agreement in behaviour, considering both dense and sparse zones, using Cohen’s kappa statistic. To this end, we use a representative collection of datasets and learning techniques. Finally, we validate the taxonomy by performing a number of technique-selection experiments. We show that ranking techniques purely by prejudice (the reputation they have earned on other problems) is worse than selecting techniques based on family diversity.
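    A minimal sketch of the agreement computation behind such a behavioural taxonomy: train one representative technique per family on placeholder data, compute pairwise Cohen’s kappa between their predictions, and cluster families using 1 - kappa as a behavioural distance. The families, models and data below are stand-ins, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from scipy.cluster.hierarchy import linkage, dendrogram

# Placeholder data and one representative technique per 'family'.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

families = {
    "trees": DecisionTreeClassifier(random_state=0),
    "bayesian": GaussianNB(),
    "neighbours": KNeighborsClassifier(),
}
predictions = {name: model.fit(X_train, y_train).predict(X_test)
               for name, model in families.items()}

# Pairwise Cohen's kappa between model outputs; 1 - kappa acts as a behavioural distance.
names = list(predictions)
distance = np.zeros((len(names), len(names)))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i != j:
            distance[i, j] = 1 - cohen_kappa_score(predictions[a], predictions[b])

# Hierarchically cluster families by behavioural (dis)agreement.
condensed = distance[np.triu_indices(len(names), k=1)]
tree = linkage(condensed, method="average")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])
```

    In the paper this is done over many datasets and many techniques per family, distinguishing dense and sparse zones; the sketch only shows the core kappa-based distance idea.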
