    Methodology for Designing Decision Support Systems for Visualising and Mitigating Supply Chain Cyber Risk from IoT Technologies

    This paper proposes a methodology for designing decision support systems for visualising and mitigating Internet of Things (IoT) cyber risks. Digital technologies introduce new cyber risks into the supply chain that are often not visible to the companies participating in it. This study investigates how IoT cyber risks can be visualised and mitigated in the process of designing business and supply chain strategies. The emerging DSS methodology presents new findings on how digital technologies affect business and supply chain systems. Through epistemological analysis, the article derives a decision support system for visualising supply chain cyber risk arising from IoT digital technologies. No such methods exist at present, and this represents the first attempt to devise a decision support system that would enable practitioners to develop a step-by-step process for visualising, assessing and mitigating the emerging cyber risk from IoT technologies on shared infrastructure in legacy supply chain systems.
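
    The abstract stays at the level of methodology, so the following minimal Python sketch is offered purely as an illustration of the kind of artefact such a DSS might manipulate: a qualitative IoT supply-chain risk register that is scored and ranked for visualisation. The class name, the three-level scale and the likelihood-times-impact rule are assumptions made for this sketch and are not the methodology proposed in the paper.

# Illustrative sketch only: a minimal IoT supply-chain cyber-risk register.
# The class, qualitative scale and scoring rule are assumptions, not the
# DSS methodology described in the abstract above.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}  # assumed qualitative scale

@dataclass
class IoTRisk:
    asset: str       # IoT asset on shared supply-chain infrastructure
    tier: str        # supply-chain tier where the asset sits
    likelihood: str  # qualitative likelihood (low/medium/high)
    impact: str      # qualitative impact on the supply chain

    def score(self) -> int:
        # Simple likelihood-times-impact product, as in common risk matrices.
        return LEVELS[self.likelihood] * LEVELS[self.impact]

def rank_for_visualisation(risks):
    # Surface the highest-scoring risks first so they are visible to planners.
    return sorted(risks, key=lambda r: r.score(), reverse=True)

register = [
    IoTRisk("legacy SCADA gateway", "tier-2 supplier", "medium", "high"),
    IoTRisk("fleet telematics unit", "logistics partner", "high", "medium"),
    IoTRisk("smart warehouse sensor", "own facility", "low", "medium"),
]
for r in rank_for_visualisation(register):
    print(f"{r.asset} ({r.tier}): score {r.score()}")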

    Architecture-based Qualitative Risk Analysis for Availability of IT Infrastructures

    An IT risk assessment must deliver the best possible quality of results in a time-effective way. Organisations typically customise general-purpose standard risk assessment methods so that they satisfy their own requirements. In this paper we present the QualTD Model and method, which is meant to be employed together with standard risk assessment methods for the qualitative assessment of availability risks of IT architectures, or parts of them. The QualTD Model is based on our previous quantitative model, but is geared to industrial practice since it does not require quantitative data, which is often too costly to acquire. We validate the model and method in a real-world case by performing a risk assessment on the authentication and authorisation system of a large multinational company and by evaluating the results with respect to the goals of the system's stakeholders. We also review the most popular standard risk assessment methods and analyse which of them can actually be integrated with our QualTD Model.
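
    As a purely illustrative companion to the abstract, the short Python sketch below propagates qualitative availability risk through a small IT-architecture dependency graph. The example graph, the three-level scale and the "a node is at least as exposed as its riskiest dependency" rule are assumptions made for this sketch; they are not the QualTD Model itself.

# Illustrative sketch only: qualitative availability-risk propagation over an
# assumed dependency graph; not the QualTD Model described in the abstract.
LEVELS = ["low", "medium", "high"]  # assumed qualitative scale, ordered

# service/component -> components it depends on (assumed example architecture)
DEPENDS_ON = {
    "web-portal": ["auth-service", "app-server"],
    "auth-service": ["ldap-directory"],
    "app-server": ["database"],
    "ldap-directory": [],
    "database": [],
}

# Availability risk assessed directly on individual components (assumed values).
DIRECT_RISK = {"ldap-directory": "high", "database": "medium"}

def availability_risk(node):
    # A node is at least as exposed as its own risk and its riskiest dependency.
    own = DIRECT_RISK.get(node, "low")
    upstream = [availability_risk(dep) for dep in DEPENDS_ON.get(node, [])]
    return max([own, *upstream], key=LEVELS.index)

for service in ("web-portal", "auth-service", "app-server"):
    print(service, "->", availability_risk(service))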

    Conceptualizing human resilience in the face of the global epidemiology of cyber attacks

    Computer security is a complex global phenomenon in which different populations interact and the infection of one person creates risk for another. Given the dynamics and scope of cyber campaigns, studies of local resilience without reference to global populations are inadequate. In this paper we describe a set of minimal requirements for implementing a global epidemiological infrastructure to understand and respond to large-scale computer security outbreaks. We enumerate the relevant dimensions and the applicable measurement tools, and define a systematic approach to evaluating cyber security resilience. Drawing on our experience in conceptualizing and designing a cross-national coordinated phishing resilience evaluation, we describe the cultural, logistic, and regulatory challenges to this proposed public health approach to global cyber attack resilience. We conclude that mechanisms for systematic evaluations of global attacks, and of resilience against those attacks, already exist. Coordinated global science is needed to address organised global e-crime.
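
    As a purely illustrative companion to the abstract, the Python sketch below computes per-population and pooled phishing-resilience rates in the spirit of an epidemiological evaluation. The population names, the counts and the definition of resilience as one minus the click rate are assumptions made for this sketch, not the evaluation design described in the paper.

# Illustrative sketch only: a cross-population phishing-resilience metric with
# assumed populations and counts; not the paper's evaluation design.
def resilience_rate(clicked, exposed):
    # Share of exposed users who did NOT act on the simulated phishing lure.
    return 1.0 - clicked / exposed

populations = {  # assumed cross-national sample
    "population-A": {"exposed": 1200, "clicked": 180},
    "population-B": {"exposed": 900, "clicked": 210},
}

pooled_exposed = sum(p["exposed"] for p in populations.values())
pooled_clicked = sum(p["clicked"] for p in populations.values())

for name, p in populations.items():
    print(f"{name}: resilience {resilience_rate(p['clicked'], p['exposed']):.2%}")
print(f"pooled: resilience {resilience_rate(pooled_clicked, pooled_exposed):.2%}")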

    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and for dealing with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on AI (HLEG), presented on April 8, 2019. In particular, the report analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Measuring internet activity: a (selective) review of methods and metrics

    Two decades after the birth of the World Wide Web, more than two billion people around the world are Internet users. The digital landscape is littered with hints that the affordances of digital communications are being leveraged to transform life in profound and important ways. The reach and influence of digitally mediated activity grow by the day and touch upon all aspects of life, from health, education, and commerce to religion and governance. This trend demands that we seek answers to the biggest questions about how digitally mediated communication changes society and about the role of different policies in helping or hindering the beneficial aspects of these changes. Yet despite the profusion of data the digital age has brought upon us (we now have access to a flood of information about the movements, relationships, purchasing decisions, interests, and intimate thoughts of people around the world), the distance between the great questions of the digital age and our understanding of the impact of digital communications on society remains large. A number of ongoing policy questions have emerged that call for better empirical data and analyses upon which to base wider and more insightful perspectives on the mechanics of social, economic, and political life online. This paper describes the conceptual and practical impediments to measuring and understanding digital activity and highlights a sample of the many efforts to fill the gap between our incomplete understanding of digital life and the formidable policy questions related to developing a vibrant and healthy Internet that serves the public interest and contributes to human wellbeing. Our primary focus is on efforts to measure Internet activity, as we believe that obtaining robust, accurate data is a necessary and valuable first step that will lead us closer to answering the vitally important questions of the digital realm. Even this step is challenging: the Internet is difficult to measure and monitor, and there is no simple aggregate measure of Internet activity comparable to GDP or the HDI. In the following section we present a framework for assessing efforts to document digital activity. The next three sections offer a summary and description of many of the ongoing projects that document digital activity, with two final sections devoted to discussion and conclusions.