
    A Generic Information and Consent Framework for the IoT

    The Internet of Things (IoT) raises specific issues in terms of information and consent, which makes the implementation of the General Data Protection Regulation (GDPR) challenging in this context. In this report, we propose a generic framework for information and consent in the IoT which is protective both for data subjects and for data controllers. We present a high-level description of the framework, illustrate its generality through several technical solutions and case studies, and sketch a prototype implementation.

    In Defense of the Long Privacy Statement


    Health research, consent and the GDPR exemption

    This article analyses the balance the GDPR strikes between two important social values, protecting personal health data and facilitating health research, through the lens of the consent requirement and the research exemption. The article shows that the normative weight of the consent requirement differs depending on the context of the health research in question. This more substantive approach to consent is reflected in the research exemption, which allows for a more nuanced balancing of interests. However, because the GDPR articulates the exemption at an abstract and principled level, in practice the balance is struck at Member State level. Thus, the GDPR increases difficulties for EU cross-border health projects and impedes the policy goal of creating a harmonised regulatory framework for health research. The article argues that in order to address this problem, the European Data Protection Board should provide specific guidance on the operation of consent in health research.

    Change of Purpose - The effects of the Purpose Limitation Principle in the General Data Protection Regulation on Big Data Profiling

    Over the past few years, many companies have started to adopt Big Data technologies. Big Data is a method and technology that allows the collection and analysis of huge amounts of all kinds of data, mainly in digital form. Big Data can be used, for example, to create profiles of online shopping users to target ads. I call this Big Data Profiling. Facebook and Google, for example, are able to estimate attributes, such as gender, age and interests, from data provided by their users. This can be worrisome for users who feel that their privacy is infringed when Big Data Profiling companies, for example, send them advertisements that are scarily relevant. Big Data Profiling relies on a vast amount of collected data. Often, at the time of collection, it is not clear how exactly this data will be used and analyzed. The new possibilities of Big Data Profiling have led companies to collect as much data as possible, and to figure out only later how to extract value from this data. This model can be described as “collect before select”, since the data is first collected, and then “mined” for correlations that can be used to profile users. In this thesis I analyze whether this form of collection and usage of Personal Data is legal under the General Data Protection Regulation (GDPR), which becomes applicable in the European Union on 25 May 2018. While many of the provisions of the GDPR have existed in the Data Protection Directive (DPD) since 1995, they have been reinforced and extended in the GDPR. One of the main principles of the GDPR is that of Purpose Limitation. While the principle already exists under the DPD in a very similar fashion, it is likely to be enforced more under the GDPR, since the GDPR is directly applicable in the Member States rather than requiring national implementation. The enforcement mechanisms, such as sanctions, have also been significantly strengthened.
The Purpose Limitation Principle requires the data controller (such as companies processing Personal Data, like Facebook and Google) to have a specified purpose for and during the collection of Personal Data. Further, the Personal Data cannot be processed beyond this purpose after it has been collected. This seems to run contrary to Big Data Profiling, which regularly looks for purposes only after the Personal Data has been collected. However, I have identified three potential ways the “collect before select” model could still be possible under the GDPR. The first possibility is the anonymization of Personal Data. If data can be efficiently anonymized, it will fall outside of the scope of the GDPR because it will not contain Personal Data after the anonymization. The controller is then free to analyze the data for any purpose, including creating models that could be used to profile other users. However, I found that Big Data methods can often reidentify Personal Data that has been previously anonymized. In such cases even purportedly anonymized data may still fall under the scope of the GDPR. If on the other hand enough Personal Data is removed to make reidentification impossible, the value of the data for large parts of the business world is likely destroyed. The second possibility is collecting Personal Data for a specified purpose that is defined so widely that it covers all potential future use cases. If a controller can collect Personal Data for a vague purpose, such as “marketing”, the controller will have a lot of flexibility in using the data while still being covered by the initial purpose. I found that the GDPR requires data controllers (such as companies) to have a purpose for the data collection that is specific enough so that the data subject is able to determine exactly which kinds of processing the controller will undertake. Having a non-existent or too vague purpose is not sufficient under the GDPR. 
Companies that collect data with no, or an only vaguely defined, purpose and then try to find a specific purpose for the collected data later will therefore have to stop this practice. The third possibility applies if the controller wants to re-use Personal Data for further purposes, after having initially collected it in compliance with the GDPR for a specified purpose. In this case, the GDPR offers certain possibilities of further processing this data outside of the initial purpose. The GDPR allows this, for example, if the data subject has consented to the new purpose. However, I found that Big Data Profiling companies often come up with new purposes later by “letting the data speak”, which means analyzing the data itself to find new purposes. Often, before performing an analysis, the company might not even know how the data will be processed later. In that case, it is impossible to request the data subject’s specific consent, which is required under the GDPR. Even without the data subject’s consent, there are, however, other possibilities of further processing data under the GDPR, such as determining whether the new processing is compatible with the initial purpose. My thesis examines some of those possibilities for a change of purpose under Big Data Profiling. My conclusion is that the GDPR will likely drastically limit Big Data Profiling as we know it. Personal Data cannot be collected without a purpose or with a vague purpose. Even Personal Data that was collected for a specific purpose cannot be re-used for another purpose except in very few circumstances. Time will tell how the courts interpret the GDPR and decide different situations, how companies will adapt, and whether the legislator will react to this reality.

    Assessing the Solid Protocol in Relation to Security and Privacy Obligations

    The Solid specification aims to empower data subjects by giving them direct access control over their data across multiple applications. As governments are manifesting their interest in this framework for citizen empowerment and e-government services, security and privacy represent pivotal issues to be addressed. By analysing the relevant legislation, with an emphasis on the GDPR and officially approved documents such as codes of conduct and relevant security ISO standards, we formulate the primary security and privacy requirements for such a framework. The legislation places some obligations on pod providers, much like cloud services. More interestingly, however, Solid has the potential to support GDPR compliance of Solid apps and data users that connect, via the protocol, to Solid pods containing personal data. We illustrate a Solid-based healthcare use case in which identifying the controllers responsible for apps and data users is essential for the system to be deployed. Furthermore, we survey the current Solid protocol specifications regarding how they cover the highlighted requirements, and draw attention to potential gaps between the specifications and requirements. We also point out the contribution of recent academic work presenting novel approaches to increase the degree of security and privacy provided by the Solid project. This paper makes a twofold contribution: it improves user awareness of how Solid can help protect their data, and it presents possible future research lines on Solid security and privacy enhancements.

    TIRA: An OpenAPI Extension and Toolbox for GDPR Transparency in RESTful Architectures

    Transparency - the provision of information about what personal data is collected for which purposes, how long it is stored, or to which parties it is transferred - is one of the core privacy principles underlying regulations such as the GDPR. Technical approaches for implementing transparency in practice are, however, only rarely considered. In this paper, we present a novel approach for doing so in current, RESTful application architectures and in line with prevailing agile and DevOps-driven practices. For this purpose, we introduce 1) a transparency-focused extension of OpenAPI specifications that allows individual service descriptions to be enriched with transparency-related annotations in a bottom-up fashion and 2) a set of higher-order tools for aggregating respective information across multiple, interdependent services and for coherently integrating our approach into automated CI/CD-pipelines. Together, these building blocks pave the way for providing transparency information that is more specific and at the same time better reflects the actual implementation givens within complex service architectures than current, overly broad privacy statements.
    Comment: Accepted for publication at the 2021 International Workshop on Privacy Engineering (IWPE'21). This is a preprint manuscript (authors' own version before final copy-editing).
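    The abstract does not reproduce TIRA's actual annotation vocabulary. Purely as an illustration of the general idea, the sketch below shows a service description carrying transparency-related annotations under an OpenAPI specification extension, together with a small aggregation pass of the kind the "higher-order tools" might perform. The `x-transparency` field and all of its keys are hypothetical placeholders, not TIRA's real schema.

    ```python
    # Hypothetical sketch: an OpenAPI operation enriched with transparency
    # annotations, in the spirit of the approach described above. The
    # "x-transparency" extension and its keys are invented for illustration.
    spec = {
        "openapi": "3.0.3",
        "info": {"title": "order-service", "version": "1.0.0"},
        "paths": {
            "/orders": {
                "post": {
                    "summary": "Create an order",
                    "x-transparency": {  # hypothetical extension field
                        "personalData": ["name", "shippingAddress"],
                        "purpose": "order fulfilment",
                        "storagePeriodDays": 90,
                        "thirdPartyRecipients": ["payment-provider"],
                    },
                }
            }
        },
    }

    def collect_transparency(spec):
        """Aggregate per-operation transparency annotations across a spec,
        as a higher-order tool might do before a CI/CD pipeline consumes
        them to build service-wide transparency information."""
        found = []
        for path, operations in spec.get("paths", {}).items():
            for method, operation in operations.items():
                annotation = operation.get("x-transparency")
                if annotation:
                    found.append((method.upper(), path, annotation))
        return found

    for method, path, annotation in collect_transparency(spec):
        print(method, path, "->", annotation["purpose"])
    ```

    OpenAPI reserves `x-`-prefixed fields for exactly this kind of vendor extension, which is what makes a bottom-up, per-service annotation scheme possible without breaking existing tooling.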