
    Incorporating contextual integrity into privacy decision making: a risk based approach.

    This work sought to create a privacy assessment framework encompassing legal, policy and contextual considerations, providing a practical decision support tool, or prototype, for determining privacy risks and thereby integrating the privacy decision-making function into organisational decision-making by default. This was achieved by way of a meta-model from which two separate privacy assessment frameworks were derived, each represented as a stand-alone prototype spreadsheet tool for privacy assessment, before being amalgamated into the main contribution of this work, the PACT (PrivACy Throughout) framework, also presented as a prototype spreadsheet. Thus, this work makes four contributions. First, a meta-model of Contextual Integrity (CI) (Nissenbaum 2010) is presented, in which CI is broken down into its component parts to provide an easy-to-interpret visual representation of CI. Second, a practical privacy decision support framework for assessing data suitability for publication as open data, the ContextuaL Integrity For Open Data (CLIFOD) questionnaire, is presented. Third, the scope of the framework is expanded to include other industry sectors and domains. To this end, a data protection impact assessment (DPIA), the DPIA Data Wheel, is presented that integrates the provisions introduced by the General Data Protection Regulation (GDPR) with CI and a revised version of CLIFOD. This framework is applied and evaluated in the charity sector to demonstrate the applicability of the concepts derived in CLIFOD to any domain where data is processed or shared. Finally, this work culminates in its main contribution, one overarching framework, PrivACy Throughout (PACT). PACT is a privacy decision framework for assessing privacy risks throughout the data lifecycle. It is derived from and underpinned by existing theory through the amalgamation of CLIFOD and the DPIA Data Wheel, and extended to include a privacy lifecycle plan (PLAN) for managing data throughout its lifecycle. PACT incorporates context (using CI) with contemporary legislation, in particular the General Data Protection Regulation (GDPR), to facilitate consistent and repeatable privacy risk assessment from the perspectives of both the data subject and the organisation, thereby supporting organisational decision making around privacy risk for existing and new projects, systems, data and processes.
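    To make CI's component parts concrete, the sketch below models Nissenbaum's five information-flow parameters (sender, subject, recipient, information type, transmission principle) as a simple data structure and flags flows that deviate from entrenched contextual norms. The names (InformationFlow, norm_violations) and example values are illustrative assumptions, not taken from the CLIFOD or PACT prototypes, which are spreadsheet tools.

        # A minimal sketch of Contextual Integrity's information-flow parameters
        # as a data structure. Names and example values are illustrative only.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class InformationFlow:
            sender: str                  # who transmits the data
            subject: str                 # whom the data is about
            recipient: str               # who receives the data
            information_type: str        # e.g. "health record"
            transmission_principle: str  # e.g. "confidentially", "with consent"

        def norm_violations(flow: InformationFlow,
                            norms: set[InformationFlow]) -> bool:
            """True if the flow matches no entrenched norm for the context;
            CI flags a privacy risk when any of the five parameters deviates."""
            return flow not in norms

        norms = {InformationFlow("gp", "patient", "specialist",
                                 "health record", "confidentially")}
        proposed = InformationFlow("gp", "patient", "insurer",
                                   "health record", "with consent")
        print(norm_violations(proposed, norms))  # True: recipient deviates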

    Moving beyond Consent for Citizen Science in Big Data Health Research

    Consent has been the cornerstone of the personal data privacy regime. This notion is premised on the liberal tenets of individual autonomy, freedom of choice and rationality. More importantly, consent is only meaningful if data subjects are fully informed and parties are of equal bargaining power. Under the orthodox framework, it is believed that privacy can be waived by consent. These concerns are particularly pertinent to citizen science in health and medical research, where the research is often data intensive, with serious implications for individuals' privacy and other interests. Although there is no standard definition of citizen science, it generally includes the gathering and volunteering of data by non-professionals, the participation of non-experts in analysis and scientific experimentation, and public input into research and projects. Citizens become experimenters, stakeholders, purveyors of data, research participants or even partners. Consent from citizen scientists is indispensable, as it is a constitutive element of self-determination and self-empowerment for participants. Furthermore, consent from data subjects determines the responsibility and accountability of data users. Yet with the advancement of data mining and big data technologies, the risks and harms of subsequent data use may not be known at the time of data collection. Research often progresses beyond the existing data: researchers in the original team, or even third parties, can match data sets to re-identify individuals. Furthermore, big data technology use and the transfer of data for other unforeseen purposes may be outside the control of the original research team. In other words, consent becomes problematic for citizen science in the big data era. The model in which one can fully specify the terms in notice and consent has become an illusion. Is consent still valid? Should it remain a critical criterion in citizen science health research, which is collaborative and contributory by nature? With a focus on the issue of consent and privacy protection, this study analyses not only the traditional informed consent model but also the alternative models of "open consent", "portable consent", "dynamic consent" and "meta consent". Facing the challenges that big data and citizen science pose to personal data protection and privacy, this paper explores the legal, social and ethical concerns behind the concept of consent. It argues that we need to move beyond the consent paradigm and take into account a much broader context of harm and risk assessment. Ultimately, what lies behind consent are the underlying values of autonomy, fairness and propriety in the name of research.

    Service Level Agreement-based GDPR Compliance and Security assurance in (multi)Cloud-based systems

    Compliance with the new European General Data Protection Regulation (Regulation (EU) 2016/679) and security assurance are currently two major challenges for Cloud-based systems. GDPR compliance implies the definition, enforcement and control of both privacy and security mechanisms, including evidence collection. This paper presents a novel DevOps framework aimed at supporting Cloud consumers in designing, deploying and operating (multi)Cloud systems that include the necessary privacy and security controls for ensuring transparency to end-users, third parties in service provision (if any) and law enforcement authorities. The framework relies on the risk-driven specification at design time of privacy and security level objectives in the system Service Level Agreement (SLA), and on their continuous monitoring and enforcement at runtime. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 644429 and No 780351 (the MUSA and ENACT projects, respectively). We would also like to acknowledge all the members of the MUSA and ENACT Consortia for their valuable help.
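    As an illustration of the approach, the sketch below shows how privacy and security service level objectives (SLOs) might be declared at design time and checked continuously against runtime measurements. It is a minimal sketch under assumed names (SLO, check_sla) and example metrics; it is not the MUSA/ENACT framework's actual API.

        # Hypothetical sketch of SLA-driven runtime monitoring: declare SLOs at
        # design time, then check measured metrics against them each cycle.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class SLO:
            metric: str                         # monitored metric name
            satisfied: Callable[[float], bool]  # predicate over measured value

        sla = [
            SLO("pct_traffic_tls", lambda v: v >= 99.9),          # security
            SLO("data_retention_days", lambda v: v <= 30),        # privacy
            SLO("breach_notification_hours", lambda v: v <= 72),  # GDPR Art. 33
        ]

        def check_sla(measurements: dict[str, float]) -> list[str]:
            """Return the metrics whose measured values violate their SLO."""
            return [s.metric for s in sla
                    if s.metric in measurements
                    and not s.satisfied(measurements[s.metric])]

        # One monitoring cycle: metrics would come from runtime probes.
        violations = check_sla({"pct_traffic_tls": 99.95,
                                "data_retention_days": 45,
                                "breach_notification_hours": 12})
        print(violations)  # ['data_retention_days'] -> trigger enforcement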

    Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications

    Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. Few prior works have explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify the risks corresponding to security, privacy and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overheads of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving and secure VRLE system. Comment: To appear in the CCNC 2019 Conference
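    The abstract does not give the scoring formula, so the sketch below assumes a common attack-tree convention: each leaf's risk is derived from its threat rate and exposure duration (capped at 1), OR gates combine child risks as 1 - prod(1 - p_i), and AND gates as prod(p_i). All node names and numbers are illustrative, not from the paper.

        # Hedged sketch of attack-tree risk scoring under the assumptions above.
        from dataclasses import dataclass, field
        from math import prod

        @dataclass
        class Node:
            name: str
            gate: str = "LEAF"            # "LEAF", "OR" or "AND"
            rate: float = 0.0             # threat occurrences per hour (leaves)
            duration: float = 0.0         # exposure window in hours (leaves)
            children: list["Node"] = field(default_factory=list)

        def risk(node: Node) -> float:
            if node.gate == "LEAF":
                return min(node.rate * node.duration, 1.0)
            child = [risk(c) for c in node.children]
            if node.gate == "OR":         # any child attack suffices
                return 1.0 - prod(1.0 - p for p in child)
            return prod(child)            # AND: all child attacks required

        # Toy VRLE tree: disrupt the session via a network attack OR by
        # chaining credential theft AND content tampering.
        tree = Node("disrupt VRLE", "OR", children=[
            Node("network DoS", rate=0.02, duration=8),
            Node("chained attack", "AND", children=[
                Node("credential theft", rate=0.01, duration=8),
                Node("content tampering", rate=0.05, duration=8),
            ]),
        ])
        print(f"{risk(tree):.3f}")  # 0.187: overall risk score in [0, 1]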

    CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share information on threats and to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI presented by the High-Level Expert Group (HLEG) on AI on April 8, 2019. In particular, the report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and private sectors operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development and deployment of AI, although it is not intended as guidance for ensuring compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Barriers and Facilitators of Suicide Risk Assessment in Emergency Departments: A Qualitative Study of Provider Perspectives

    Objective: To understand emergency department (ED) providers' perspectives on the barriers to and facilitators of suicide risk assessment, and to use these perspectives to inform recommendations for best practices in ED suicide risk assessment. Methods: Ninety-two ED providers from two hospital systems in a Midwestern state responded to open-ended questions via an online survey that assessed their perspectives on the barriers and facilitators to assessing suicide risk, as well as their preferred assessment methods. Responses were analyzed using an inductive thematic analysis approach. Results: Qualitative analysis yielded six themes that impact suicide risk assessment. Time, privacy, collaboration and consultation with other professionals, and integration of a standard screening protocol into routine care exemplified environmental and systemic themes. Patient engagement/participation in assessment and providers' approach to communicating with patients and other providers also impacted the effectiveness of suicide risk assessment efforts. Conclusions: The findings inform feasible suicide risk assessment practices in EDs. Appropriately utilizing a collaborative, multidisciplinary approach to assessing suicide-related concerns appears promising for ameliorating the burden placed on ED providers and facilitating optimal patient care. Recommendations for clinical care, education, quality improvement and research are offered.