
    Machine-Readable Privacy Certificates for Services

    Privacy-aware processing of personal data on the web of services requires managing a number of issues arising from both the technical and the legal domain. Several approaches have been proposed to match privacy requirements (on the client side) with privacy guarantees (on the service provider side). Still, the assurance of effective data protection (when possible) relies on substantial human effort and exposes organizations to significant (non-)compliance risks. In this paper we put forward the idea that a privacy certification scheme producing and managing machine-readable artifacts in the form of privacy certificates can play an important role in solving this problem. Digital privacy certificates represent the reasons why a privacy property holds for a service and describe the privacy measures supporting it. Privacy certificates can also be used to automatically select services whose certificates match the client's policies (privacy requirements). Our proposal relies on an evolution of the conceptual model developed in the Assert4Soa project and on a certificate format specifically tailored to representing privacy properties. To validate our approach, we present a worked-out instance showing how the privacy property Retention-based unlinkability can be certified for a banking financial service. (20 pages, 6 figures)
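
    As an illustration of the kind of matching the abstract describes, the following is a minimal Python sketch of checking a machine-readable privacy certificate against a client policy. The data structures and field names (certified_property, retention_days, measures) are assumptions made for illustration, not the Assert4Soa-derived certificate format.

        import dataclasses

        @dataclasses.dataclass
        class PrivacyCertificate:
            # Hypothetical fields; the actual certificate format is richer.
            service_id: str
            certified_property: str      # e.g. "retention-based-unlinkability"
            retention_days: int          # maximum retention claimed by the provider
            measures: frozenset          # privacy measures supporting the property

        @dataclasses.dataclass
        class ClientPolicy:
            required_property: str
            max_retention_days: int
            required_measures: frozenset

        def certificate_satisfies(cert: PrivacyCertificate, policy: ClientPolicy) -> bool:
            """True if the certificate's guarantees cover the client's requirements."""
            return (cert.certified_property == policy.required_property
                    and cert.retention_days <= policy.max_retention_days
                    and policy.required_measures <= cert.measures)

        def select_services(certs, policy):
            """Automatic service selection: keep only services whose certificates match."""
            return [c.service_id for c in certs if certificate_satisfies(c, policy)]

        # Example: a banking service certified for retention-based unlinkability.
        bank_cert = PrivacyCertificate("bank-payments", "retention-based-unlinkability",
                                       retention_days=30,
                                       measures=frozenset({"pseudonymisation", "log-purging"}))
        policy = ClientPolicy("retention-based-unlinkability", max_retention_days=90,
                              required_measures=frozenset({"pseudonymisation"}))
        print(select_services([bank_cert], policy))   # ['bank-payments']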

    A Blockchain-based Approach for Data Accountability and Provenance Tracking

    The recent approval of the General Data Protection Regulation (GDPR) imposes new data protection requirements on data controllers and processors with respect to the processing of European Union (EU) residents' data. These requirements consist of a single set of rules that have binding legal status and should be enforced in all EU member states. In light of these requirements, we propose in this paper a blockchain-based approach to support data accountability and provenance tracking. Our approach relies on publicly auditable contracts deployed in a blockchain that increase transparency with respect to the access and usage of data. We identify and discuss three different models for our approach, with different granularity and scalability requirements, in which contracts encode data usage policies and provenance tracking information in a privacy-friendly way. Of these three models, we designed, implemented, and evaluated a model where contracts are deployed by data subjects for each data controller, and a model where subjects join contracts deployed by data controllers if they accept the data handling conditions. Our implementations show in practice the feasibility and limitations of contracts for the purposes identified in this paper.
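
    The following Python sketch illustrates the general idea behind the per-controller model: a contract deployed by a data subject that encodes a data usage policy and keeps a provenance log. It is a simplified in-memory stand-in under assumed names, not the paper's blockchain implementation or contract API.

        import hashlib
        import time

        class DataUsageContract:
            """Simplified stand-in for a publicly auditable contract deployed by a
            data subject for one data controller; names are illustrative only."""

            def __init__(self, subject, controller, allowed_purposes):
                self.subject = subject
                self.controller = controller
                self.allowed_purposes = set(allowed_purposes)   # encoded data usage policy
                self.provenance_log = []                        # append-only usage records

            def record_access(self, data_ref, purpose):
                """The controller declares an access; the event is logged whether or not
                it is compliant, so the subject or an auditor can detect violations."""
                entry = {
                    "timestamp": time.time(),
                    # Only a hash of the data reference is stored, keeping the
                    # public log privacy-friendly.
                    "data_ref_hash": hashlib.sha256(data_ref.encode()).hexdigest(),
                    "purpose": purpose,
                    "compliant": purpose in self.allowed_purposes,
                }
                self.provenance_log.append(entry)
                return entry["compliant"]

        contract = DataUsageContract("alice", "acme-analytics", {"billing", "fraud-detection"})
        print(contract.record_access("alice/address", "billing"))     # True, allowed purpose
        print(contract.record_access("alice/address", "marketing"))   # False, flagged in the log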

    Mandatory Enforcement of Privacy Policies using Trusted Computing Principles

    Modern communication systems and information technology create significant new threats to information privacy. In this paper, we discuss the need for proper privacy protection in cooperative intelligent transportation systems (cITS), one instance of such systems. We outline general principles for data protection and their legal basis, and argue why purely legal protection is insufficient. Strong privacy-enhancing technologies need to be deployed in cITS to protect user data while it is generated and processed. Since data minimization cannot always prevent the need to disclose relevant personal information, we introduce the new concept of mandatory enforcement of privacy policies. This concept empowers users and data subjects to tightly couple their data with privacy policies and to rely on the system to impose these policies on any data processor. We also describe the PRECIOSA Privacy-enforcing Runtime Architecture, which exemplifies our approach. Moreover, we show how an application can utilize this architecture by applying it to a pay-as-you-drive (PAYD) car insurance scenario.
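
    The following is a minimal Python sketch of the "data tightly coupled with its policy" idea, with a reference-monitor-style check on every access. It is an illustration under assumed names and policy fields, not the PRECIOSA runtime architecture itself.

        class PolicyViolation(Exception):
            pass

        class StickyData:
            """A data item coupled with its privacy policy; the enforcement layer,
            not the application, decides whether data is released."""

            def __init__(self, value, policy):
                self._value = value
                self.policy = policy   # hypothetical fields: allowed_processors, allowed_ops

            def access(self, processor, operation):
                if processor not in self.policy["allowed_processors"]:
                    raise PolicyViolation(f"{processor} may not process this data")
                if operation not in self.policy["allowed_ops"]:
                    raise PolicyViolation(f"operation '{operation}' not permitted")
                return self._value

        # PAYD-style example: the insurer may aggregate trip data but not read raw positions.
        trip = StickyData({"km": 12.4, "gps_trace": [...]},
                          {"allowed_processors": {"insurer-billing"},
                           "allowed_ops": {"aggregate"}})
        print(trip.access("insurer-billing", "aggregate"))
        # trip.access("insurer-billing", "read_raw")  -> raises PolicyViolation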

    A flexible architecture for privacy-aware trust management

    In service-oriented systems, a constellation of services cooperates, sharing potentially sensitive information and responsibilities. Cooperation is only possible if the different participants trust each other. Because trust may depend on many different factors, a flexible framework for Trust Management (TM) must compute trust by combining different types of information. In this paper we describe the TAS3 TM framework, which integrates independent TM systems into a single trust decision point. The TM framework supports intricate combinations whilst still remaining easily extensible. It also provides a unified trust evaluation interface to the (authorization framework of the) services. We demonstrate the flexibility of the approach by integrating three distinct TM paradigms: reputation-based TM, credential-based TM, and Key Performance Indicator TM. Finally, we discuss privacy concerns in TM systems and the directions to be taken towards a privacy-friendly TM architecture.
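
    The following Python sketch illustrates the idea of a single trust decision point combining independent TM paradigms behind a common interface. The class names, the combiner, and the threshold are hypothetical and do not reflect the TAS3 APIs.

        class TrustModule:
            """Common interface every TM paradigm plugs into (hypothetical)."""
            def score(self, subject):
                raise NotImplementedError   # return a trust value in [0, 1]

        class ReputationTM(TrustModule):
            def __init__(self, ratings):
                self.ratings = ratings      # subject -> list of past ratings
            def score(self, subject):
                r = self.ratings.get(subject, [])
                return sum(r) / len(r) if r else 0.0

        class CredentialTM(TrustModule):
            def __init__(self, trusted):
                self.trusted = trusted      # subjects with acceptable credentials
            def score(self, subject):
                return 1.0 if subject in self.trusted else 0.0

        class KpiTM(TrustModule):
            def __init__(self, kpi):
                self.kpi = kpi              # e.g. fraction of SLAs met
            def score(self, subject):
                return self.kpi.get(subject, 0.0)

        def trust_decision(subject, modules, combiner=min, threshold=0.6):
            """Single decision point: combine the modules' scores with a pluggable
            combiner and compare against a policy threshold."""
            return combiner(m.score(subject) for m in modules) >= threshold

        modules = [ReputationTM({"svc-a": [0.9, 0.8]}),
                   CredentialTM({"svc-a"}),
                   KpiTM({"svc-a": 0.75})]
        print(trust_decision("svc-a", modules))   # True: all three scores clear the threshold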

    A Human-centric Perspective on Digital Consenting: The Case of GAFAM

    According to different legal frameworks such as the European General Data Protection Regulation (GDPR), an end-user's consent constitutes one of the well-known legal bases for personal data processing. However, research has indicated that the majority of end-users have difficulty understanding what they are consenting to in the digital world. Moreover, it has been demonstrated that marginalized people face even more difficulties when dealing with their own digital privacy. In this research, we use an enactivist perspective from cognitive science to develop a basic human-centric framework for digital consenting. We argue that the action of consenting is a sociocognitive action that includes cognitive, collective, and contextual aspects. Based on the developed theoretical framework, we present a qualitative evaluation of the consent-obtaining mechanisms implemented and used by the five big tech companies, i.e. Google, Amazon, Facebook, Apple, and Microsoft (GAFAM). The evaluation shows that these companies have failed in their efforts to empower end-users by considering the human-centric aspects of the action of consenting. We use this approach to argue that their consent-obtaining mechanisms violate principles of fairness, accountability, and transparency. We then suggest that our approach may raise doubts about the lawfulness of the obtained consent, particularly considering the basic requirements of lawful consent within the legal framework of the GDPR.