    An Assurance Framework for Independent Co-assurance of Safety and Security

    Integrated safety and security assurance for complex systems is difficult for many technical and socio-technical reasons, such as mismatched processes, inadequate information, and differing use of language and philosophies. Many co-assurance techniques disregard some of these challenges in order to present a unified methodology. Even with this simplification, no methodology has been widely adopted, primarily because the unified approach is unrealistic when met with the complexity of real-world system development. This paper presents an alternative approach: a Safety-Security Assurance Framework (SSAF) based on a core set of assurance principles. SSAF allows safety and security to be co-assured independently, as opposed to unified co-assurance, which has been shown to have significant drawbacks, and it accommodates separate processes and expertise from practitioners in each domain. With this structure, the focus shifts from simplified unification to integration, achieved by exchanging the correct information at the right time through synchronisation activities.
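
    As a rough illustration only (the paper defines no code, and all names below are hypothetical), the following Python sketch models the core idea: two independent assurance processes, each producing artefacts with its own methods, that exchange an agreed subset of information at a synchronisation activity.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AssuranceProcess:
        domain: str                       # e.g. "safety" or "security"
        artefacts: dict = field(default_factory=dict)

        def produce(self, name: str, content: str) -> None:
            # Each domain produces artefacts using its own process/expertise.
            self.artefacts[name] = content

    def synchronise(a: AssuranceProcess, b: AssuranceProcess, shared: list) -> None:
        # A synchronisation activity: exchange only the named artefacts,
        # leaving each domain's internal process untouched.
        for name in shared:
            if name in a.artefacts:
                b.artefacts[f"{a.domain}:{name}"] = a.artefacts[name]
            if name in b.artefacts:
                a.artefacts[f"{b.domain}:{name}"] = b.artefacts[name]

    safety = AssuranceProcess("safety")
    security = AssuranceProcess("security")
    safety.produce("hazard_log", "loss of braking on sensor failure")
    security.produce("threat_model", "spoofed sensor messages on CAN bus")

    # Exchange the right information at the right time.
    synchronise(safety, security, ["hazard_log", "threat_model"])
    print(sorted(safety.artefacts))  # ['hazard_log', 'security:threat_model']
    ```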

    Online Personal Data Processing and EU Data Protection Reform. CEPS Task Force Report, April 2013

    This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today’s online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such.

    Balancing Security and Democracy: The Politics of Biometric Identification in the European Union

    What are the relations between security policies and democratic debate, oversight and rights? And what is the role of expertise in shaping such policies and informing the democratic process? The inquiry that follows tries to answer these questions in the context of the European Union, taking the case of biometric identification, an area where security considerations and possible impacts on fundamental rights and the rule of law are at stake, and where expertise is crucial. Several hypotheses are explored through the case study: that 'securitisation' and 'democratisation' are in tension but hybrid strategies can emerge, that the plurality of 'authoritative actors' influences policy frames and outcomes, and that knowledge is a key asset in defining these authoritative actors. A counter-intuitive conclusion is presented, namely that biometrics, which seems prima facie an excellent candidate for technocratic decision making sheltered from democratic debate and accountability, is instead characterised by intense debate among a plurality of actors. Such pluralism is limited to those actors who have the resources, including knowledge, that allow for inclusion in policy making at EU level, but it is nevertheless significant in shaping policy. Tragic events were pivotal in pushing for action on grounds of security, but the chosen instruments were already in store, and specific actors were capable of proposing them as a solution to security problems; in particular, the strong role of executives is a key factor in the vigorous pursuit of biometric identification. However, this is not the whole story: limited pluralism, including a plurality of expertise, explains specific features of the development of biometrics in the EU, namely the central role of the metaphor of 'balancing' security and democracy, and the 'competitive cooperation' between new and more consolidated policy areas. The EU faces another difficult challenge in its attempt to establish itself as a new security actor and as a supranational democratic polity: important choices are involved in assuring that citizens' security is pursued on the basis of the rule of law, respect for fundamental rights and democratic accountability.
    Keywords: democracy; pluralism; security/internal

    Model the System from Adversary Viewpoint: Threats Identification and Modeling

    Security attacks are hard to understand: they are often described with unfriendly, limited details, making it difficult for security experts and analysts to create intelligible security specifications, for instance to explain why (the attack objective), what (system assets, goals, etc.) and how (the attack method) an adversary achieved their goals. In this paper we introduce a security attack meta-model for our SysML-Sec framework, developed to improve threat identification and modeling through the explicit representation of security concerns with knowledge representation techniques. The proposed meta-model enables these concerns to be specified through ontological concepts that define the semantics of the security artifacts introduced using SysML-Sec diagrams, and it also represents the relationships that tie several such concepts together. This representation is then used for reasoning about the knowledge introduced by system designers as well as security experts through the graphical environment of the SysML-Sec framework.
    Comment: In Proceedings AIDP 2014, arXiv:1410.322
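
    As a loose sketch of the why/what/how decomposition described above (class names and attributes are illustrative assumptions, not the paper's actual SysML-Sec ontology), a minimal attack meta-model might look like this in Python:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str            # What: a system asset the adversary targets

    @dataclass
    class AttackObjective:
        description: str     # Why: the goal the adversary pursues

    @dataclass
    class AttackMethod:
        technique: str       # How: the method used to reach the objective

    @dataclass
    class Attack:
        objective: AttackObjective
        targets: list        # list[Asset]
        method: AttackMethod

        def summary(self) -> str:
            # Relate the three concepts into one intelligible specification.
            assets = ", ".join(a.name for a in self.targets)
            return (f"Why: {self.objective.description} | "
                    f"What: {assets} | How: {self.method.technique}")

    attack = Attack(
        objective=AttackObjective("exfiltrate user credentials"),
        targets=[Asset("authentication database")],
        method=AttackMethod("SQL injection via login form"),
    )
    print(attack.summary())
    ```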

    Interpretable Machine Learning for Privacy-Preserving Pervasive Systems

    Our everyday interactions with pervasive systems generate traces that capture various aspects of human behavior and enable machine learning algorithms to extract latent information about users. In this paper, we propose a machine learning interpretability framework that enables users to understand how these generated traces violate their privacy.
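
    The abstract gives no implementation details, but the underlying idea can be sketched as follows: train a model that infers a sensitive attribute from behavioural traces, then use a standard interpretability signal to show which trace features leak that attribute. Everything below (the feature names, the synthetic data, and the choice of random-forest feature importances) is an assumption for illustration, not the paper's framework.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 1000
    features = ["night_activity", "avg_location_radius", "app_switches"]

    # Synthetic behavioural traces.
    X = rng.normal(size=(n, len(features)))
    # Synthetic sensitive attribute, correlated with night-time activity.
    y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Report which trace features most expose the sensitive attribute.
    for name, score in sorted(zip(features, model.feature_importances_),
                              key=lambda p: -p[1]):
        print(f"{name}: {score:.2f}")
    ```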

    CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group (HLEG) on AI, presented on April 8, 2019. In particular, the report analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list intended to help the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items that are supposed to guide AI designers and developers throughout the process of design, development and deployment of AI, although it is not intended as guidance to ensure compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing, in particular, the interplay between AI and cybersecurity. The evaluation was made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that attacks on AI are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group of more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).