Information Accountability Framework for a Trusted Health Care System
Trusted health care outcomes are patient centric. Ensuring both the quality and the sharing of patients’ health records is key to better clinical decision making. In the context of maintaining quality health care, the sharing of data and information between professionals and patients is paramount. This information sharing is challenging and costly if patients’ trust and institutional accountability are not established. This paper proposes an Information Accountability Framework (IAF) to address these challenges. The IAF requirements are: transparent responsibilities, relevance of the information being used, and the establishment and evidence of accountability, all of which lead to the desired outcome of a Trusted Health Care System. Implementing the IAF will build trust between the public and professionals. Preservation of the confidentiality and integrity of patients’ information will lead to trusted health care outcomes.
The Freedom of Information Act (FOIA): Background, Legislation, and Policy Issues
[Excerpt] The Freedom of Information Act (FOIA; 5 U.S.C. §552) allows any person—individual or corporate, citizen or not—to request and obtain, without explanation or justification, existing, identifiable, and unpublished agency records on any topic. Pursuant to FOIA, the public has presumptive access to agency records unless the material falls within any of FOIA’s nine categories of exception. Disputes over the release of records requested pursuant to FOIA can be appealed administratively, resolved through mediation, or heard in court.
This report provides background on FOIA, discusses the categories of records FOIA exempts from public release, and analyzes statistics on FOIA administration. The report also provides background on several legal and policy issues related to FOIA, including the release of controversial records, the growth in use of certain FOIA exemptions, and the adoption of new technologies to improve FOIA administration. The report concludes with an examination of potential FOIA-related policy options for the 113th Congress.
A Room With an Overview: Towards Meaningful Transparency for the Consumer Internet of Things
As our physical environments become ever-more connected, instrumented and
automated, it can be increasingly difficult for users to understand what is
happening within them and why. This warrants attention; with the pervasive and
physical nature of the IoT comes risks of data misuse, privacy, surveillance,
and even physical harm. Such concerns come amid increasing calls for more
transparency surrounding technologies (in general), as a means for supporting
scrutiny and accountability.
This paper explores the practical dimensions to transparency mechanisms
within the consumer IoT. That is, we consider how smart homes might be made
more meaningfully transparent, so as to support users in gaining greater
understanding, oversight, and control. Through a series of three user-centric
studies, we (i) survey prospective smart home users to gain a general
understanding of what meaningful transparency within smart homes might entail;
(ii) identify categories of user-derived requirements and design elements
(design features for supporting smart home transparency) that have been created
through two co-design workshops; and (iii) validate these through an evaluation
with an altogether new set of participants. In all, these categories of
requirements and interface design elements provide a foundation for
understanding how meaningful transparency might be achieved within smart homes,
and introduce several wider considerations for doing so.
Comment: To appear: C. Norval and J. Singh, "A Room With an Overview: Towards Meaningful Transparency for the Consumer Internet of Things," in IEEE Internet of Things Journal. DOI: 10.1109/JIOT.2023.331836
Decentralized Inverse Transparency With Blockchain
Employee data can be used to facilitate work, but its misuse may pose
risks for individuals. Inverse transparency therefore aims to track all uses
of personal data, allowing individuals to monitor them and ensure accountability
for potential misuse. This necessitates a trusted log to establish an
agreed-upon and non-repudiable timeline of events. The unique properties of
blockchain facilitate this by providing immutability and availability. For
power asymmetric environments such as the workplace, permissionless blockchain
is especially beneficial as no trusted third party is required. Yet, two issues
remain: (1) In a decentralized environment, no arbiter can facilitate and
attest to data exchanges. Simple peer-to-peer sharing of data, conversely,
lacks the required non-repudiation. (2) With data governed by privacy
legislation such as the GDPR, the core advantage of immutability becomes a
liability. After a rightful request, an individual's personal data need to be
rectified or deleted, which is impossible in an immutable blockchain.
To solve these issues, we present Kovacs, a decentralized data exchange and
usage logging system for inverse transparency built on blockchain. Its
new-usage protocol ensures non-repudiation, and therefore accountability, for
inverse transparency. Its one-time pseudonym generation algorithm guarantees
unlinkability and enables proof of ownership, which allows data subjects to
exercise their legal rights regarding their personal data. With our
implementation, we show the viability of our solution. The decentralized
communication impacts performance and scalability, but exchange duration and
storage size are still reasonable. More importantly, the provided information
security meets high requirements. We conclude that Kovacs realizes
decentralized inverse transparency through secure and GDPR-compliant use of
permissionless blockchain.
Comment: Peer-reviewed version accepted for publication in ACM Distributed Ledger Technologies: Research and Practice (DLT). arXiv admin note: substantial text overlap with arXiv:2104.0997
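The abstract describes Kovacs's one-time pseudonym generation only at a high level. A minimal sketch of how unlinkable pseudonyms with proof of ownership could work is given below; the HMAC-based construction and all names are illustrative assumptions, not the Kovacs implementation:

```python
import hashlib
import hmac
import secrets

def new_pseudonym(master_secret: bytes) -> tuple[bytes, bytes]:
    """Derive a fresh pseudonym from a random one-time nonce.
    Without the nonce, two pseudonyms cannot be linked to each
    other or to their owner."""
    nonce = secrets.token_bytes(32)
    pseudonym = hmac.new(master_secret, nonce, hashlib.sha256).digest()
    return pseudonym, nonce

def prove_ownership(master_secret: bytes, pseudonym: bytes, nonce: bytes) -> bool:
    """Ownership is demonstrated by re-deriving the pseudonym from the
    disclosed nonce using the holder's secret."""
    expected = hmac.new(master_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, pseudonym)

secret = secrets.token_bytes(32)
p1, n1 = new_pseudonym(secret)   # pseudonym for one data exchange
p2, n2 = new_pseudonym(secret)   # a second, unlinkable pseudonym
```

Because each pseudonym depends on a fresh nonce, entries logged under different pseudonyms cannot be correlated, yet the data subject can later prove which entries are theirs, e.g. to exercise GDPR rights.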
CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and
Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market,
technical, ethical and governance challenges posed by the intersection of AI and cybersecurity,
focusing both on AI for cybersecurity but also cybersecurity for AI. The Task Force is multi-stakeholder
by design and composed of academics, industry players from various sectors, policymakers and civil
society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI
in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics
between cyber attackers and defenders; the increasing need for sharing information on threats and
how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and
possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of such activities, this report aims at assessing the High-Level Expert Group (HLEG) on AI Ethics
Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and
makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed
at helping the public and the private sector in operationalising Trustworthy AI. The list is composed
of 131 items that are supposed to guide AI designers and developers throughout the process of
design, development, and deployment of AI, although not intended as guidance to ensure
compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a
revision that will be finalised in early 2020.
This report would like to contribute to this revision by addressing in particular the interplay between
AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how
the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental
Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks
are fundamentally different from traditional cyberattacks; whether they are compatible with
different risk levels; whether they are flexible enough in terms of clear/easy measurement,
implementation by AI developers and SMEs; and overall, whether they are likely to create obstacles
for the industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such
as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of
producing a simple checklist for a complex issue. The public engagement exercise looks successful
overall in that more than 450 stakeholders have signed up and are contributing to the process.
The next sections of this report present the items listed by the HLEG followed by the analysis and
suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
KBD-Share: Key Aggregation, Blockchain, and Differential Privacy based Secured Data Sharing for Multi-User Cloud Computing
In today's era of widespread cloud computing and data sharing, the demand for secure and privacy-preserving techniques to facilitate multi-user data sharing is rapidly increasing. However, traditional approaches struggle to effectively address the twin objectives of ensuring privacy protection while preserving the utility of shared data. This predicament holds immense significance due to the pivotal role data sharing plays in diverse domains and applications. However, it also brings about significant privacy vulnerabilities. Consequently, innovative approaches are imperative to achieve a harmonious equilibrium between the utility of shared data and the protection of privacy in scenarios involving multiple users. This paper presents KBD-Share, an innovative framework that addresses the intricacies of ensuring data security and privacy in the context of sharing data among multiple users in cloud computing environments. By seamlessly integrating key aggregation, blockchain technology, and differential privacy techniques, KBD-Share offers an efficient and robust solution to protect sensitive data while facilitating seamless sharing and utilization. Extensive experimental evaluations convincingly establish the superiority of KBD-Share in terms of data privacy preservation and utility, outperforming existing approaches. The approach achieves the highest R² value of 0.9969, exhibiting the best data utility, which is essential for multi-user data sharing in diverse cloud computing applications.
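The differential privacy component that frameworks like KBD-Share integrate amounts, in its simplest form, to adding noise calibrated to a query's sensitivity. Below is a minimal sketch of the standard Laplace mechanism for a counting query; it is illustrative only and not the KBD-Share implementation:

```python
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace(1/epsilon)
    noise. A counting query has sensitivity 1, so scale = 1/epsilon gives
    epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(0, 1/epsilon) sampled as the difference of two iid
    # exponentials with rate epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 45, 31, 67, 52, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the reported R² reflects how much of the data's utility survives such perturbation.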
Framework for Security Transparency in Cloud Computing
The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third-party. In particular, the partial surrender of sensitive data and application to a cloud environment creates numerous concerns that are related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures being put in place to protect assets and meet the expectations of customers. It establishes trust in service relationship between cloud service providers and customers, and without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Also, insufficient security transparency is considered as an added level of risk and increases the difficulty of demonstrating conformance to customer requirements and ensuring that the cloud service providers adequately implement security obligations.
The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, research focusing on security transparency remains scarce. The relatively few existing studies mostly approach the issue of security transparency from cloud providers’ perspective, while other works have contributed feasible techniques for comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from cloud users’ point of view. In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation from organizational and technical perspectives; and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers’ conformity to specific customer requirements. The significant growth in moving business to the cloud – due to its scalability and perceived effectiveness – underlines the dire need for research in this area.
This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing by proposing the following. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework which integrates various concepts from requirement engineering domain and an accompanying process that could be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, which are based on the principles of industry standards and best practices. Thirdly, for ensuring continuous transparency, the thesis proposes an essential tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied.
The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback is collected from stakeholders and analysed using essential criteria such as ease of use, relevance, and usability. The results of the analysis illustrate the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
Using the blockchain to enable transparent and auditable processing of personal data in cloud- based services: Lessons from the Privacy-Aware Cloud Ecosystems (PACE) project
The architecture of cloud-based services is typically opaque and intricate. As a result, data subjects cannot exercise adequate control over their personal data, and overwhelmed data protection authorities must spend their limited resources in costly forensic efforts to ascertain instances of non-compliance. To address these data protection challenges, a group of computer scientists and socio-legal scholars joined forces in the Privacy-Aware Cloud Ecosystems (PACE) project to design a blockchain-based privacy-enhancing technology (PET). This article presents the fruits of this collaboration, highlighting the capabilities and limits of our PET, as well as the challenges we encountered during our interdisciplinary endeavour. In particular, we explore the barriers to interdisciplinary collaboration between law and computer science that we faced, and how these two fields’ different expectations as to what technology can do for data protection law compliance had an impact on the project's development and outcome. We also explore the overstated promises of techno-regulation, and the practical and legal challenges that militate against the implementation of our PET: most industry players have no incentive to deploy it, the transaction costs of running it make it prohibitively expensive, and there are significant clashes between the blockchain's decentralised architecture and the GDPR's requirements that hinder its deployability. We share the insights and lessons we learned from our efforts to overcome these challenges, hoping to inform other interdisciplinary projects that are increasingly important to shape a data ecosystem that promotes the protection of our personal data.
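The core idea behind such transparent, auditable processing logs — an append-only record where each entry cryptographically commits to its predecessor, so tampering is detectable — can be sketched without a full blockchain. The class below is an illustrative hash-chained log; all names are hypothetical and it is not the PACE implementation:

```python
import hashlib
import json

class UsageLog:
    """Append-only, hash-chained log of personal-data processing events.
    Each entry includes the previous entry's hash, so any retroactive
    modification breaks verification of the chain."""

    def __init__(self):
        self.entries = []

    def append(self, controller: str, purpose: str, data_ref: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"controller": controller, "purpose": purpose,
                "data_ref": data_ref, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("controller", "purpose",
                                      "data_ref", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

In a blockchain deployment the chain is additionally replicated across untrusting parties; as the article notes, that replication is precisely where immutability collides with GDPR rectification and erasure rights.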