1,042 research outputs found

    An Agile Approach to Validate a Formal Representation of the GDPR

    Modeling the articles of the GDPR in a knowledge base of logic formulæ enables semi-automatic reasoning over the Regulation. To be legally substantiated, this requires that the formulæ validly express the legal meaning of the Regulation’s articles. But legal experts are usually not familiar with logic, which calls for an interdisciplinary validation methodology that bridges the communication gap between formal modelers and legal evaluators. We devise such a validation methodology and exemplify it over a knowledge base of articles of the GDPR translated into Reified I/O (RIO) logic and encoded in LegalRuleML. A pivotal element of the methodology is a human-readable intermediate representation of the logic formulæ that preserves their meaning while rendering it readable to non-experts. By applying the methodology to a use case, we show that it is possible to obtain feedback from legal experts on the formal representation of Art. 5.1a and Art. 7.1. What emerges is an agile process for building logic knowledge bases of legal texts and supporting public trust in them, which we intend to use for a logic model of the GDPR, the DAPRECO knowledge base.
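    A minimal sketch of the kind of human-readable intermediate representation the abstract describes, assuming a toy obligation structure rather than the actual RIO/LegalRuleML encoding; the field names, rendering template, and Art. 5.1a paraphrase below are illustrative only.

```python
from dataclasses import dataclass

# Toy stand-in for a logic formula; not the DAPRECO/RIO encoding.
@dataclass
class Obligation:
    bearer: str      # addressee of the norm
    action: str      # prescribed behaviour
    condition: str   # triggering circumstance
    source: str      # GDPR article being modeled

def render(ob: Obligation) -> str:
    """Render the structure as reviewer-facing text for legal experts."""
    return (f"[{ob.source}] If {ob.condition}, then the {ob.bearer} "
            f"is obliged to {ob.action}.")

art_5_1a = Obligation(
    bearer="data controller",
    action="process personal data lawfully, fairly and transparently",
    condition="personal data are processed",
    source="Art. 5.1a",
)
print(render(art_5_1a))
```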

    TIRA: An OpenAPI Extension and Toolbox for GDPR Transparency in RESTful Architectures

    Transparency - the provision of information about what personal data is collected, for which purposes, how long it is stored, or to which parties it is transferred - is one of the core privacy principles underlying regulations such as the GDPR. Technical approaches for implementing transparency in practice are, however, only rarely considered. In this paper, we present a novel approach for doing so in current, RESTful application architectures and in line with prevailing agile and DevOps-driven practices. For this purpose, we introduce 1) a transparency-focused extension of OpenAPI specifications that allows individual service descriptions to be enriched with transparency-related annotations in a bottom-up fashion and 2) a set of higher-order tools for aggregating the respective information across multiple, interdependent services and for coherently integrating our approach into automated CI/CD pipelines. Together, these building blocks pave the way for providing transparency information that is more specific and, at the same time, better reflects the actual implementation givens within complex service architectures than current, overly broad privacy statements. Comment: Accepted for publication at the 2021 International Workshop on Privacy Engineering (IWPE'21). This is a preprint manuscript (authors' own version before final copy-editing).
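    As an illustration of the bottom-up annotation and aggregation idea, the sketch below enriches per-service OpenAPI fragments with hypothetical "x-transparency" keys and collects them across services; the field names and layout are assumptions, not the actual TIRA extension.

```python
from typing import Dict, List

# Hypothetical per-service OpenAPI fragments carrying transparency annotations
# under an assumed "x-transparency" extension key (illustrative only).
service_specs: Dict[str, dict] = {
    "order-service": {
        "paths": {
            "/orders": {
                "post": {
                    "x-transparency": {
                        "data": ["name", "shipping address"],
                        "purposes": ["order fulfilment"],
                        "storage": "P2Y",   # ISO 8601 duration
                        "third_parties": ["parcel carrier"],
                    }
                }
            }
        }
    },
    "billing-service": {
        "paths": {
            "/invoices": {
                "post": {
                    "x-transparency": {
                        "data": ["name", "billing address"],
                        "purposes": ["invoicing"],
                        "storage": "P10Y",
                        "third_parties": [],
                    }
                }
            }
        }
    },
}

def aggregate(specs: Dict[str, dict]) -> List[dict]:
    """Collect transparency annotations across interdependent services,
    e.g. as a step in a CI/CD pipeline."""
    collected = []
    for service, spec in specs.items():
        for path, operations in spec.get("paths", {}).items():
            for method, operation in operations.items():
                info = operation.get("x-transparency")
                if info:
                    collected.append(
                        {"service": service, "endpoint": f"{method.upper()} {path}", **info}
                    )
    return collected

for entry in aggregate(service_specs):
    print(entry)
```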

    Towards Transparent Legal Formalization

    A key challenge in producing a transparent formalization of a legal text is the dependency on two domain experts: a legal expert is needed to interpret the legal text, while a logician or a programmer is needed to encode it into a program or a formula. Various existing methods try to address this challenge by improving or automating the communication between the two experts. In this paper, we follow a different direction and attempt to eliminate the dependency on the target-domain expert. This is achieved by inverting the translation back into the original text: by skipping over the logical translation, a legal expert can both interpret and evaluate a translation.
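    A toy illustration of the round-trip idea, assuming a single hard-wired sentence pattern in place of a real legal formalization: the sentence is parsed into a structured formula and rendered back into text, so a legal expert can compare the regenerated wording with the original without reading the formal encoding.

```python
import re

# Assumed toy pattern standing in for the legal-text-to-logic translation.
PATTERN = re.compile(r"If (?P<condition>.+), the (?P<bearer>.+) must (?P<action>.+)\.")

def encode(sentence: str) -> dict:
    """'Translate' a sentence into a structured formula (toy version)."""
    return PATTERN.fullmatch(sentence).groupdict()

def decode(formula: dict) -> str:
    """Invert the translation back into natural language."""
    return f"If {formula['condition']}, the {formula['bearer']} must {formula['action']}."

original = ("If processing is based on consent, the controller must be able to "
            "demonstrate that the data subject has consented to the processing.")
round_trip = decode(encode(original))
print(round_trip == original)  # the legal expert checks that meaning is preserved
```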

    Applying security features to GA4GH Phenopackets

    The Global Alliance for Genomics and Health (GA4GH) has developed a standard file format, the Phenopacket, to improve the exchange of phenotypic information over the network. However, this standard does not implement any security mechanism, so an attacker who gets hold of a Phenopacket can obtain sensitive information. This project aims to provide security features within the Phenopacket schema to ensure a secure exchange. To achieve this objective, it is necessary to understand the structure of the schema in order to classify which fields need to be protected. Once the schema has been designed, an investigation is conducted into which technologies are currently the most secure, leading to the implementation of three security mechanisms: digital signature, encryption, and hashing. To conclude, several verification tests are performed to ensure that both the creation of the Phenopacket and the security measures applied have been correctly implemented, confirming that data exchange is possible without revealing any sensitive data.
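    A minimal sketch of the three mechanisms the abstract names (hashing, encryption, digital signature), applied here to a JSON stand-in for a Phenopacket rather than the actual protobuf schema, using the Python "cryptography" package; it illustrates the general approach, not the project's concrete schema changes.

```python
import hashlib
import json

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Toy payload standing in for a real GA4GH Phenopacket (a protobuf message).
phenopacket = {"id": "example-1", "subject": {"id": "patient-1", "sex": "FEMALE"}}
payload = json.dumps(phenopacket, sort_keys=True).encode()

# 1) Integrity: hash of the canonical serialization.
digest = hashlib.sha256(payload).hexdigest()

# 2) Confidentiality: symmetric encryption of the sensitive payload.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(payload)

# 3) Authenticity: detached Ed25519 signature over the payload.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# Receiver side: decrypt, then verify signature and hash.
plaintext = Fernet(key).decrypt(ciphertext)
private_key.public_key().verify(signature, plaintext)   # raises if tampered
assert hashlib.sha256(plaintext).hexdigest() == digest
print("phenopacket verified and decrypted")
```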

    Blockchain for requirements traceability: A qualitative approach

    Blockchain technology has emerged as a “disruptive innovation” that has received significant attention in academic and organizational settings. However, most of the existing research focuses on technical issues of blockchain systems, overlooking the organizational perspective. This study adopted a grounded theory approach to unveil the blockchain implementation process in organizations from the lens of blockchain experts. The results revealed three main categories: key activities, success factors, and challenges related to blockchain implementation in organizations, with the latter identified as the core category, along with 17 other concepts. The findings suggested that the majority of blockchain projects stop at the pilot stage and outlined organizational resistance to change as the core challenge. According to the experts, the following factors contribute to organizational resistance to change: the innovation–production gap, conservative management, and a centralized mentality. The study aims to contribute to the existing blockchain literature by providing a holistic and domain-agnostic view of the blockchain implementation process in organizational settings. This can potentially encourage the development and implementation of blockchain solutions and guide practitioners who are interested in leveraging the inherent benefits of this technology. In addition, the results are used to improve a blockchain-enabled requirements traceability framework proposed in our previous paper.

    A Taxonomy for Mining and Classifying Privacy Requirements in Issue Reports

    Digital and physical footprints are a trail of user activities collected over the use of software applications and systems. As software becomes ubiquitous, protecting user privacy has become challenging. With increasing user privacy awareness and the advent of privacy regulations and policies, there is an emerging need to implement software systems that enhance the protection of personal data processing. However, existing privacy regulations and policies only provide high-level principles, which makes it difficult for software engineers to design and implement privacy-aware systems. In this paper, we develop a taxonomy that provides a comprehensive set of privacy requirements based on two well-established and widely adopted privacy regulations and frameworks, the General Data Protection Regulation (GDPR) and ISO/IEC 29100. These requirements are refined to a level that is implementable and easy for software engineers to understand, thus supporting them in complying with existing regulations and standards. We have also performed a study on how two large open-source software projects (Google Chrome and Moodle) address the privacy requirements in our taxonomy by mining their issue reports. The paper discusses how the collected issues were classified and presents the findings and insights generated from our study. Comment: Submitted to IEEE Transactions on Software Engineering on 23 December 202
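    A minimal sketch of the classification step, assuming a simple keyword lookup and a handful of illustrative categories inspired by GDPR / ISO/IEC 29100 principles; the paper's actual taxonomy and classification procedure are not reproduced here.

```python
from typing import List

# Illustrative categories and keywords; not the paper's taxonomy.
TAXONOMY = {
    "consent": ["consent", "opt-in", "opt out"],
    "data minimisation": ["minimisation", "only necessary data", "collect less"],
    "erasure": ["delete my data", "erasure", "right to be forgotten"],
    "transparency": ["privacy policy", "inform the user", "disclosure"],
}

def classify(issue_text: str) -> List[str]:
    """Return the taxonomy categories whose keywords appear in an issue report."""
    text = issue_text.lower()
    return [category for category, keywords in TAXONOMY.items()
            if any(keyword in text for keyword in keywords)]

print(classify("Users should be able to opt out and request erasure of their account data"))
# -> ['consent', 'erasure']
```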