Disruptive Innovations and Disruptive Assurance: Assuring Machine Learning and Autonomy
Autonomous and machine learning-based systems are disruptive innovations and thus require a correspondingly disruptive assurance strategy. We offer an overview of a framework based on claims, arguments, and evidence aimed at addressing these systems, and use it to identify specific gaps, challenges, and potential solutions.
Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1
This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance (overview and issues); Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.
CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of these activities, this report aims to assess the High-Level Expert Group (HLEG) on AI Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list intended to help the public and private sectors operationalise Trustworthy AI. The list comprises 131 items meant to guide AI designers and developers throughout the design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision to be finalised in early 2020.
This report seeks to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation was made against specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they recognise that attacks on AI are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear, easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
Compositional Verification for Autonomous Systems with Deep Learning Components
As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems. The problem is difficult, as these are large, complex systems which operate in uncertain environments, requiring data-driven machine-learning components. However, learning techniques such as Deep Neural Networks, widely used today, are inherently unpredictable and lack the theoretical foundations to provide strong assurance guarantees. We present a compositional approach for the scalable, formal verification of autonomous systems that contain Deep Neural Network components. The approach uses assume-guarantee reasoning whereby contracts, encoding the input-output behavior of individual components, allow the designer to model and incorporate the behavior of the learning-enabled components working side-by-side with the other components. We illustrate the approach on an example taken from the autonomous vehicles domain.
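The assume-guarantee idea in this abstract can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's formalism: the `Contract` class, the perception and controller bounds, and the sampling-based `compatible` check are all hypothetical, and a real compositional verification would discharge the implication between contracts by proof rather than by sampling.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class Contract:
    """Assume-guarantee contract: if `assume` holds on the input,
    the component promises that `guarantee` holds on its output."""
    assume: Callable[[float], bool]
    guarantee: Callable[[float, float], bool]

# Hypothetical contract for a learning-enabled perception component:
# for in-range true distances, the estimate's error stays within 0.5 m.
perception = Contract(
    assume=lambda d: 0.0 <= d <= 100.0,
    guarantee=lambda d, est: abs(est - d) <= 0.5,
)

# Hypothetical contract for a conventional controller: it assumes the
# perception error it receives is at most 0.5 m.
controller = Contract(
    assume=lambda err: err <= 0.5,
    guarantee=lambda err, gap: gap >= 2.0,
)

def compatible(upstream: Contract, downstream: Contract,
               samples: Iterable[Tuple[float, float]]) -> bool:
    """Compositional check (by sampling, not proof): every output the
    upstream contract permits must satisfy the downstream assumption."""
    for d, est in samples:
        if upstream.assume(d) and upstream.guarantee(d, est):
            if not downstream.assume(abs(est - d)):
                return False
    return True

samples = [(10.0, 10.3), (50.0, 49.6), (99.0, 99.5)]
print(compatible(perception, controller, samples))  # True for these samples
```

The point of the sketch is the division of labour: the unpredictable learned component is summarised by its contract, and only the contracts need to be checked against each other, which is what makes the verification scale.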
Advanced avionics concepts: Autonomous spacecraft control
A large increase in space operations activities is expected because of Space Station Freedom (SSF), long-range lunar base missions, and Mars exploration. Space operations will also increase as a result of space commercialization (especially the growth of satellite networks). The level of satellite servicing operations is anticipated to grow tenfold from the current level within the next 20 years. This growth can be sustained only if the cost effectiveness of space operations is improved, where cost effectiveness means operational efficiency combined with proper effectiveness. A concept of advanced avionics enabling autonomous spacecraft control is presented that would support the desired growth while maintaining cost effectiveness (operational efficiency) in satellite servicing operations. The concept and each of its components are briefly described, along with some of the benefits of autonomous operations. A technology utilization breakdown is provided in terms of applications.
Practical applications of multi-agent systems in electric power systems
The transformation of energy networks from passive to active systems requires the embedding of intelligence within the network. One suitable approach to integrating distributed intelligent systems is multi-agent systems technology, where components of functionality run as autonomous agents capable of interaction through messaging. This provides loose coupling between components, which can benefit the complex systems envisioned for the smart grid. This paper reviews the key milestones of demonstrated agent systems in the power industry and considers which aspects of agent design must still be addressed for widespread application of agent technology to occur.
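The loose coupling through messaging described in this abstract can be illustrated with a toy message bus. This is a minimal sketch under invented assumptions: the agent names, topics, and bus are hypothetical, and a deployed power-industry system would typically use a standards-based agent platform (e.g. a FIPA-compliant framework) rather than an in-process queue.

```python
import queue

class Agent:
    """Minimal autonomous agent: communicates only via the shared bus."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus

    def send(self, to, topic, payload):
        self.bus.put({"from": self.name, "to": to,
                      "topic": topic, "payload": payload})

class SwitchAgent(Agent):
    def handle(self, msg):
        # On a fault report, isolate the affected section and confirm.
        if msg["topic"] == "fault":
            self.send(msg["from"], "isolated", msg["payload"])

def run(bus, agents):
    """Deliver queued messages until the bus drains; return the log."""
    log = []
    while not bus.empty():
        msg = bus.get()
        log.append(msg)
        agent = agents.get(msg["to"])
        if agent is not None and hasattr(agent, "handle"):
            agent.handle(msg)
    return log

bus = queue.Queue()
monitor = Agent("monitor", bus)
switch = SwitchAgent("switch", bus)
monitor.send("switch", "fault", {"section": "F12"})
log = run(bus, {"monitor": monitor, "switch": switch})
print([m["topic"] for m in log])  # ['fault', 'isolated']
```

Neither agent holds a reference to the other; each knows only the bus and the message schema, which is the loose coupling the abstract argues suits complex smart-grid systems.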