A Graphical Adversarial Risk Analysis Model for Oil and Gas Drilling Cybersecurity
Oil and gas drilling is based, increasingly, on operational technology, whose
cybersecurity is complicated by several challenges. We propose a graphical
model for cybersecurity risk assessment based on Adversarial Risk Analysis to
face those challenges. We also provide an example of the model in the context
of an offshore drilling rig. The proposed model provides a more formal and
comprehensive analysis of risks, still using the standard business language
based on decisions, risks, and value.
Comment: In Proceedings GraMSec 2014, arXiv:1404.163
The Insurance Data Security Model Law: Strengthening Cybersecurity Insurer-Policyholder Relationships and Protecting Consumers
How does intellectual capital align with cyber security?
Purpose – To position the preservation and protection of intellectual capital as a cyber security concern. We outline the security requirements of intellectual capital to help boards of directors (BoDs) and executive management teams to understand their responsibilities and accountabilities in this respect.
Design/Methodology/Approach – The research methodology is desk research. In other words, we gathered facts and existing research publications that helped us to define key terms, to formulate arguments to convince BoDs of the need to secure their intellectual capital, and to outline actions to be taken by BoDs to do so.
Findings – Intellectual capital, as a valuable business resource, is related to information, knowledge and cyber security. Hence, its preservation is also related to cyber security governance, and merits attention from BoDs.
Implications – This paper clarifies BoDs' intellectual capital governance responsibilities, which encompass information, knowledge and cyber security governance.
Social Implications – If BoDs know how to embrace their intellectual capital governance responsibilities, this will help to ensure that such intellectual capital is preserved and secured.
Practical Implications – We hope that BoDs will benefit from our clarifications, and especially from the positioning of intellectual capital in cyber space.
Originality/Value – This paper extends a previous paper published by Von Solms and Von Solms (2018), which clarified the key terms of information and cyber security, and the governance thereof. The originality and value lie in the focus on securing intellectual capital, a topic that has not yet received a great deal of attention from cyber security researchers.
CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and
Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market,
technical, ethical and governance challenges posed by the intersection of AI and cybersecurity,
focusing on AI for cybersecurity as well as on cybersecurity for AI. The Task Force is multi-stakeholder
by design and composed of academics, industry players from various sectors, policymakers and civil
society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI
in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics
between cyber attackers and defenders; the increasing need for sharing information on threats and
how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and
possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of these activities, this report aims to assess the Ethics Guidelines for Trustworthy AI
presented by the High-Level Expert Group (HLEG) on AI on April 8, 2019. In particular, this report analyses and
makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed
at helping the public and the private sector in operationalising Trustworthy AI. The list is composed
of 131 items that are supposed to guide AI designers and developers throughout the process of
design, development, and deployment of AI, although not intended as guidance to ensure
compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a
revision that will be finalised in early 2020.
This report aims to contribute to this revision by addressing, in particular, the interplay between
AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how
the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental
Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks
are fundamentally different from traditional cyberattacks; whether they are compatible with
different risk levels; whether they are flexible enough in terms of clear/easy measurement,
implementation by AI developers and SMEs; and overall, whether they are likely to create obstacles
for the industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such
as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of
producing a simple checklist for a complex issue. The public engagement exercise appears successful
overall: more than 450 stakeholders have signed up and are contributing to the process.
The next sections of this report present the items listed by the HLEG followed by the analysis and
suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
Trusted CI Experiences in Cybersecurity and Service to Open Science
This article describes experiences and lessons learned from the Trusted CI
project, funded by the US National Science Foundation to serve the community as
the NSF Cybersecurity Center of Excellence. Trusted CI is an effort to address
cybersecurity for the open science community through a single organization that
provides leadership, training, consulting, and knowledge to that community. The
article describes the experiences and lessons learned of Trusted CI regarding
both cybersecurity for open science and managing the process of providing
centralized services to a broad and diverse community.
Comment: 8 pages, PEARC '19: Practice and Experience in Advanced Research Computing, July 28-August 1, 2019, Chicago, IL, US
