
    Information Security and Knowledge Management: Solutions Through Analogies?

    Information Security Management and Knowledge Management show a couple of intriguing similarities. This paper identifies some of these similarities and highlights abstract problems arising from them in both areas. These analogies motivate the search for ways to transfer solutions from one area to the other.

    TILT: A GDPR-Aligned Transparency Information Language and Toolkit for Practical Privacy Engineering

    In this paper, we present TILT, a transparency information language and toolkit explicitly designed to represent and process transparency information in line with the requirements of the GDPR, allowing for a more automated and adaptive use of such information than established, legalese data protection policies do. We provide a detailed analysis of transparency obligations from the GDPR to identify the expressiveness required for a formal transparency language intended to meet the respective legal requirements. In addition, we identify a set of further, non-functional requirements that need to be met to foster practical adoption in real-world (web) information systems engineering. On this basis, we specify our formal language and present a fully implemented toolkit around it. We then evaluate the practical applicability of our language and toolkit and demonstrate the additional prospects they unlock through two different use cases: a) the inter-organizational analysis of personal data-related practices, allowing, for instance, to uncover data-sharing networks based on explicitly announced transparency information, and b) the presentation of formally represented transparency information to users through novel, more comprehensible, and potentially adaptive user interfaces, heightening data subjects' actual informedness about data-related practices and, thus, their sovereignty. Altogether, our transparency information language and toolkit, unlike previous work, make it possible to express transparency information in line with actual legal requirements and the practices of modern (web) information systems engineering, and thereby pave the way for a multitude of novel possibilities to heighten transparency and user sovereignty in practice.
    Comment: Accepted for publication at the ACM Conference on Fairness, Accountability, and Transparency 2021 (ACM FAccT'21). This is a preprint manuscript (authors' own version before final copy-editing).
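
    What such machine-readable transparency information might look like is sketched below in TypeScript. All interface and field names here are illustrative assumptions chosen for this example; they do not reproduce the actual TILT schema.

```typescript
// Illustrative sketch only: field names are assumptions, not the real TILT schema.

interface DataDisclosed {
  category: string;      // e.g. "email address"
  purposes: string[];    // purpose specifications (GDPR Art. 13(1)(c))
  legalBases: string[];  // e.g. "GDPR-6-1-b"
  storagePeriod: string; // e.g. "P2Y" (ISO 8601 duration)
  recipients: string[];  // parties the data is transferred to
}

interface TransparencyDocument {
  controller: { name: string; country: string };
  dataDisclosed: DataDisclosed[];
}

const example: TransparencyDocument = {
  controller: { name: "Example Web Shop GmbH", country: "DE" },
  dataDisclosed: [
    {
      category: "email address",
      purposes: ["order confirmation", "newsletter"],
      legalBases: ["GDPR-6-1-b", "GDPR-6-1-a"],
      storagePeriod: "P2Y",
      recipients: ["Example Mail Provider Inc."],
    },
  ],
};

// Documents of this kind can be validated, aggregated across providers, and
// rendered into user-facing interfaces instead of legalese policies.
console.log(JSON.stringify(example, null, 2));
```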

    Non-Disclosing Credential On-chaining for Blockchain-based Decentralized Applications

    Many service systems rely on verifiable identity-related information of their users. At the same time, however, manipulation and unwanted exposure of this privacy-relevant information must be prevented. Peer-to-peer blockchain-based decentralization with a smart contract-based execution model and verifiable off-chain computations leveraging zero-knowledge proofs promise to provide the basis for next-generation, non-disclosing credential management solutions. In this paper, we propose a novel credential on-chaining system that ensures blockchain-based transparency while preserving pseudonymity. We present a general model compliant with the W3C verifiable credential recommendation and demonstrate how it can be applied to solve existing problems that require computational verification of identity-related attributes. Our zkSNARKs-based reference implementation and evaluation show that, compared to related approaches based on, e.g., CL-signatures, our approach provides significant performance advantages and more flexible proof mechanisms, underpinning our vision of increasingly decentralized, transparent, and trustworthy service systems.
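
    For orientation, the sketch below shows the general shape of a credential following the W3C verifiable credential data model as a TypeScript object. The credential type, DIDs, and proof values are placeholders; the closing comment indicates where a zkSNARK-based approach, as pursued in the paper, would come in.

```typescript
// General shape of a W3C verifiable credential; all values are placeholders.
const credential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgeCredential"], // "AgeCredential" is made up
  issuer: "did:example:issuer",
  issuanceDate: "2022-01-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:holder-123",
    birthDate: "1990-05-17", // attribute the holder does not want to reveal
  },
  proof: {
    type: "Ed25519Signature2020",
    created: "2022-01-01T00:00:00Z",
    proofPurpose: "assertionMethod",
    verificationMethod: "did:example:issuer#key-1",
    proofValue: "z3Fx...placeholder",
  },
};

// With zero-knowledge proofs, the holder would publish on-chain only a proof
// of a predicate over the signed attribute (e.g. "age derived from birthDate
// is at least 18"), never the birth date itself.
console.log(credential.credentialSubject.id);
```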

    Towards Cross-Provider Analysis of Transparency Information for Data Protection

    Transparency and accountability are indispensable principles for modern data protection, from both legal and technical viewpoints. Regulations such as the GDPR therefore require specific transparency information to be provided, including, e.g., purpose specifications, storage periods, or legal bases for personal data processing. However, it has repeatedly been shown that all too often, this information is practically hidden in legalese privacy policies, hindering data subjects from exercising their rights. This paper presents a novel approach to enable large-scale transparency information analysis across service providers, leveraging machine-readable formats and graph data science methods. More specifically, we propose a general approach for building a transparency analysis platform (TAP) that is used to identify data transfers empirically, provide evidence-based analyses of sharing clusters of more than 70 real-world data controllers, or even to simulate network dynamics using synthetic transparency information for large-scale data-sharing scenarios. We provide the general approach for advanced transparency information analysis, an open-source architecture and implementation in the form of a queryable analysis platform, and versatile analysis examples. These contributions pave the way for more transparent data processing for data subjects and evidence-based enforcement processes for data protection authorities. Future work can build upon our contributions to gain more insights into so-far hidden data-sharing practices.
    Comment: technical report
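
    To make the notion of sharing clusters concrete, here is a minimal, self-contained sketch of the underlying graph idea: controllers become nodes, declared data transfers become edges, and connected components form the clusters. This is not the paper's TAP implementation, which relies on dedicated graph data science tooling.

```typescript
// Minimal sketch: cluster controllers that are connected by declared transfers.

type Transfer = { from: string; to: string }; // controller `from` shares data with `to`

function sharingClusters(transfers: Transfer[]): string[][] {
  // Build an undirected adjacency structure from the declared transfers.
  const adj = new Map<string, Set<string>>();
  const link = (a: string, b: string) => {
    if (!adj.has(a)) adj.set(a, new Set());
    adj.get(a)!.add(b);
  };
  for (const { from, to } of transfers) {
    link(from, to);
    link(to, from);
  }
  // Extract connected components via depth-first search.
  const seen = new Set<string>();
  const clusters: string[][] = [];
  for (const start of adj.keys()) {
    if (seen.has(start)) continue;
    const cluster: string[] = [];
    const stack = [start];
    while (stack.length > 0) {
      const node = stack.pop()!;
      if (seen.has(node)) continue;
      seen.add(node);
      cluster.push(node);
      for (const next of adj.get(node) ?? []) stack.push(next);
    }
    clusters.push(cluster);
  }
  return clusters;
}

console.log(sharingClusters([
  { from: "ShopA", to: "AdNetworkX" },
  { from: "ShopB", to: "AdNetworkX" },
  { from: "ClinicC", to: "LabD" },
]));
// => [ ["ShopA", "AdNetworkX", "ShopB"], ["ClinicC", "LabD"] ]
```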

    Configurable Per-Query Data Minimization for Privacy-Compliant Web APIs

    The purpose of regulatory data minimization obligations is to limit personal data to the absolute minimum necessary for a given context. Beyond the initial data collection, storage, and processing, data minimization is also required for subsequent data releases, as is the case when data are provided through query-capable Web APIs. Data-providing Web APIs, however, typically lack sophisticated data minimization features, leaving the task to manual, and all too often missing, implementations. In this paper, we address the problem of data minimization for data-providing, query-capable Web APIs. Based on a careful analysis of functional and non-functional requirements, we introduce Janus, an easy-to-use, highly configurable solution for implementing legally compliant data minimization in GraphQL Web APIs. Janus provides a rich set of information reduction functionalities that can be configured for different client roles accessing the API. We present a technical proof-of-concept along with experimental measurements that indicate reasonable overheads. Janus is thus a practical solution for implementing GraphQL APIs in line with the regulatory principle of data minimization.
    Comment: Preprint version (2022-03-18). This version of the contribution has been accepted for publication at the 22nd International Conference on Web Engineering (ICWE 2022), Bari, Italy.
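
    The following sketch illustrates the general idea of role-dependent information reduction in a GraphQL field resolver. The role names and reduction rules are assumptions made for this example, not Janus's actual configuration or API.

```typescript
// Illustrative per-role minimization of a field value before it leaves the API.
// Role names and reduction rules are invented for this example.

type Role = "admin" | "analyst" | "public";

// One reduction function per role, to be applied inside a field resolver.
const reduceBirthDate: Record<Role, (iso: string) => string | null> = {
  admin: (iso) => iso,               // full precision
  analyst: (iso) => iso.slice(0, 4), // generalized to the year only
  public: () => null,                // suppressed entirely
};

function resolveBirthDate(storedValue: string, role: Role): string | null {
  return reduceBirthDate[role](storedValue);
}

console.log(resolveBirthDate("1990-05-17", "admin"));   // "1990-05-17"
console.log(resolveBirthDate("1990-05-17", "analyst")); // "1990"
console.log(resolveBirthDate("1990-05-17", "public"));  // null
```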

    TIRA: An OpenAPI Extension and Toolbox for GDPR Transparency in RESTful Architectures

    Transparency - the provision of information about what personal data is collected for which purposes, how long it is stored, or to which parties it is transferred - is one of the core privacy principles underlying regulations such as the GDPR. Technical approaches for implementing transparency in practice are, however, only rarely considered. In this paper, we present a novel approach for doing so in current, RESTful application architectures and in line with prevailing agile and DevOps-driven practices. For this purpose, we introduce 1) a transparency-focused extension of OpenAPI specifications that allows individual service descriptions to be enriched with transparency-related annotations in a bottom-up fashion and 2) a set of higher-order tools for aggregating the respective information across multiple, interdependent services and for coherently integrating our approach into automated CI/CD pipelines. Together, these building blocks pave the way for providing transparency information that is more specific and, at the same time, better reflects the actual implementation givens within complex service architectures than current, overly broad privacy statements.
    Comment: Accepted for publication at the 2021 International Workshop on Privacy Engineering (IWPE'21). This is a preprint manuscript (authors' own version before final copy-editing).
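
    To give an impression of the approach, the sketch below attaches transparency annotations to a single OpenAPI operation, written as a TypeScript object for consistency with the other examples. The "x-transparency" key and its fields are hypothetical stand-ins, not the actual TIRA extension schema.

```typescript
// Hypothetical transparency annotation on one OpenAPI operation.
// The "x-transparency" key and its fields are illustrative, not TIRA's real schema.

const openApiFragment = {
  paths: {
    "/orders": {
      post: {
        summary: "Create a new order",
        "x-transparency": {
          personalData: ["email address", "shipping address"],
          purposes: ["order fulfillment"],
          storagePeriod: "P10Y", // e.g. retention required by tax law
          recipients: ["payment provider", "parcel service"],
        },
      },
    },
  },
};

// Higher-order tooling could walk all operations of all services in a
// CI/CD pipeline and aggregate such annotations into one system-wide
// transparency statement.
console.log(openApiFragment.paths["/orders"].post["x-transparency"].purposes);
```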

    Forking, Scratching und Re-Merging: Ein informatischer Blick auf die Rechtsinformatik

    This contribution traces the development of legal informatics (Rechtsinformatik) since 1970. Drawing on methods and insights from modern software engineering, one particular strand of this development is examined more closely: the "forking", that is, the early split of a branch of legal informatics in 1974. From this strand, an independent Berlin theory of regulation has since emerged. The authors give this approach the working label "Neue Rechtsinformatik" (NRI). Part of this work is likely to lead back, via a detour through the USA, into the core of legal scholarship; this is an example of a successful re-merging. Whether the strand of regulation theory that remained within computer science will continue to be fruitful cannot, of course, be foreseen. The contribution presents evidence (using IT security and data protection as examples) that without incorporating "New Institutional Economics" (NIE), no conception of "Rechtsinformatik", however designed, would be viable. What is new about the NRI is the recognition of code as an independent modality of regulation. The three parts of the contribution, each written by different authors, reflect those authors' different ages, qualifications, and life situations: Part 1 covers the period from the beginnings until about 1995, Part 2 the present with the new entity of the Internet, and Part 3 outlines a possible future. In summary, it is evident that the school of legal informatics founded by Steinmüller was successful; the 1974 forking of legal informatics contributed substantially to this success.

    Participatory sensing and wearable technologies as tools to support citizen and open science: Technical and organizational challenges and possible solutions

    If citizens actively participate in the process of collecting empirical data, as a key element of empirically oriented scientific projects, this can be seen as a contribution to an open and citizen-oriented science. Such participation can be supported considerably by providing technical tools. The paper therefore presents participatory sensing, i.e., the provision of affordable sensors for measuring environmental parameters, as well as wearable technologies for recording quantified vital data and physiological states. Conceptually, the provision of data collected with these tools can be understood as a commons, with all the opportunities and risks associated with such goods. After describing examples of participatory sensing and wearable technologies, the authors identify expected challenges and outline technical and organizational approaches to solving them.

    Information logistics and fog computing: The DITAS approach

    Data-intensive applications are usually developed on Cloud resources, whose service delivery model supports building reliable and scalable solutions. However, especially in the context of Internet of Things-based applications, Cloud Computing comes with limitations, as data generated at the edge of the network are processed at its core, producing security, privacy, and latency issues. Fog Computing, on the other hand, is emerging as an extension of Cloud Computing in which resources located at the edge of the network are used in combination with cloud services. The goal of this paper is to present the approach adopted in the recently started DITAS project: a Cloud platform designed to optimize the development of data-intensive applications by providing information logistics tools able to deliver information and computation resources at the right time, in the right place, and with the right quality. Applications developed with DITAS tools live in a Fog Computing environment, where data move from the cloud to the edge and vice versa to provide secure, reliable, and scalable solutions with excellent performance.
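
    As a toy illustration of the information logistics idea of delivering data and computation at the right time, place, and quality, the sketch below decides between edge and cloud placement based on simple latency and privacy requirements. The thresholds and field names are invented for this example and are not taken from DITAS.

```typescript
// Toy placement decision in the spirit of information logistics:
// keep privacy-sensitive or latency-critical processing at the edge,
// scale everything else out on cloud resources. Thresholds are invented.

interface RequestProfile {
  maxLatencyMs: number;      // the application's latency requirement
  privacySensitive: boolean; // raw data should not leave its source region
}

function placeComputation(req: RequestProfile): "edge" | "cloud" {
  if (req.privacySensitive) return "edge"; // keep raw data near its source
  if (req.maxLatencyMs < 50) return "edge"; // a cloud round-trip is too slow
  return "cloud";
}

console.log(placeComputation({ maxLatencyMs: 20, privacySensitive: false }));  // "edge"
console.log(placeComputation({ maxLatencyMs: 500, privacySensitive: false })); // "cloud"
```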