14,626 research outputs found

    A Blockchain-based Approach for Data Accountability and Provenance Tracking

    The recent approval of the General Data Protection Regulation (GDPR) imposes new data protection requirements on data controllers and processors with respect to the processing of European Union (EU) residents' data. These requirements consist of a single set of rules that have binding legal status and should be enforced in all EU member states. In light of these requirements, we propose in this paper the use of a blockchain-based approach to support data accountability and provenance tracking. Our approach relies on the use of publicly auditable contracts deployed in a blockchain that increase the transparency with respect to the access and usage of data. We identify and discuss three different models for our approach with different granularity and scalability requirements, where contracts can be used to encode data usage policies and provenance tracking information in a privacy-friendly way. From these three models we designed, implemented, and evaluated a model where contracts are deployed by data subjects for each data controller, and a model where subjects join contracts deployed by data controllers in case they accept the data handling conditions. Our implementations show in practice the feasibility and limitations of contracts for the purposes identified in this paper.
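    The two deployment models the abstract contrasts can be sketched as follows. This is a hypothetical in-memory simulation, not the authors' implementation; the class and account names are illustrative assumptions, and a real deployment would use smart contracts on a blockchain.

```python
# Illustrative sketch (assumed names, not the paper's code): a minimal
# simulation of the two contract models. Model 1: a data subject deploys
# one policy contract per data controller. Model 2: a controller deploys
# a single contract that subjects join if they accept its conditions.

from dataclasses import dataclass, field

@dataclass
class UsagePolicyContract:
    owner: str                                  # account that deployed the contract
    policy: str                                 # encoded data-usage policy
    parties: set = field(default_factory=set)   # accounts bound by the contract
    log: list = field(default_factory=list)     # publicly auditable access log

    def join(self, subject: str) -> None:
        """Model 2: a subject accepts the controller's data-handling conditions."""
        self.parties.add(subject)

    def record_access(self, accessor: str, purpose: str) -> bool:
        """Append-only provenance record; only known parties are allowed access."""
        allowed = accessor == self.owner or accessor in self.parties
        self.log.append((accessor, purpose, allowed))
        return allowed

# Model 1: subject "alice" deploys one contract per controller she deals with.
c1 = UsagePolicyContract(owner="alice", policy="research-only")
c1.parties.add("controller-A")

# Model 2: the controller deploys; subjects join if they accept the policy.
c2 = UsagePolicyContract(owner="controller-B", policy="marketing")
c2.join("alice")
```

    Even in this toy form, the trade-off the paper evaluates is visible: Model 1 scales with the number of subject-controller pairs, while Model 2 needs only one contract per controller.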

    Algorithms that Remember: Model Inversion Attacks and Data Protection Law

    Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature on 'model inversion' and 'membership inference' attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
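    A minimal sketch of the attack family the abstract refers to, assuming a confidence-threshold variant of membership inference: the attacker guesses a record was in the training set when the model is unusually confident on it. The toy model and all names below are illustrative, not the paper's experiments.

```python
# Hedged sketch of confidence-threshold membership inference.
# Overfit models tend to be more confident on training members than on
# unseen records; the attacker exploits that gap.

def membership_inference(model, record, threshold=0.9):
    """Guess membership from the model's top-class confidence on the record."""
    return model(record) >= threshold

# Toy target model: overconfident on its (simulated) training members.
train_members = {1, 2, 3}

def toy_model(record):
    return 0.99 if record in train_members else 0.55

guesses = {r: membership_inference(toy_model, r) for r in range(6)}
```

    That the training set leaks back out of the model in this way is the technical basis for the paper's argument that some models may themselves qualify as personal data.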

    Big Data Ethics in Research

    This paper examines the main problems faced by scientists working with Big Data sets, highlighting the main ethical issues and taking into account the legislation of the European Union. After a brief introduction to Big Data, the Technology section presents specific research applications. The Philosophical Aspects section addresses the main philosophical issues, and Legal Aspects covers specific ethical issues in the EU Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, repealing Directive 95/46/EC (the General Data Protection Regulation, "GDPR"). The Ethical Issues section details the specific aspects of Big Data. After a brief section on Big Data research, I conclude with observations on research ethics in working with Big Data. CONTENTS: Abstract; 1. Introduction (1.1 Definitions; 1.2 Big Data dimensions); 2. Technology (2.1 Applications; 2.1.1 In research); 3. Philosophical aspects; 4. Legal aspects (4.1 GDPR: stages of processing of personal data, principles of data processing, privacy policy and transparency, purposes of data processing, design and implicit confidentiality, the (legal) paradox of Big Data); 5. Ethical issues (ethics in research, awareness, consent, control, transparency, trust, ownership, surveillance and security, digital identity, tailored reality, de-identification, digital inequality, privacy); 6. Big Data research; Conclusions; Bibliography. DOI: 10.13140/RG.2.2.11054.4640

    Streamlining governmental processes by putting citizens in control of their personal data

    Governments typically store large amounts of personal information on their citizens, such as a home address, marital status, and occupation, to offer public services. Because governments consist of various governmental agencies, multiple copies of this data often exist. This raises concerns regarding data consistency, privacy, and access control, especially under recent legal frameworks such as the GDPR. To solve these problems, and to give citizens true control over their data, we explore an approach using the decentralised Solid ecosystem, which enables citizens to maintain their data in personal data pods. We have applied this approach to two high-impact use cases, where citizen information is stored in personal data pods, and both public and private organisations are selectively granted access. Our findings indicate that Solid allows reshaping the relationship between citizens, their personal data, and the applications they use in the public and private sector. We strongly believe that the insights from this Flemish Solid Pilot can speed up the process for public administrations and private organisations that want to put users in control of their data.
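    The pod-centric access model described above can be sketched as a toy simulation. This is an illustrative assumption, not the pilot's code: real Solid pods store RDF resources and enforce access through Web Access Control, but the inversion of control is the same, as the citizen's pod, not each agency's copy, decides who may read which field.

```python
# Toy model (assumed names) of a personal data pod with selective,
# citizen-controlled read grants, in the spirit of the Solid ecosystem.

class PersonalDataPod:
    def __init__(self, owner):
        self.owner = owner
        self.data = {}       # fields such as "home_address", "marital_status"
        self.grants = {}     # field -> set of agents with read access

    def write(self, field, value):
        self.data[field] = value

    def grant_read(self, field, agent):
        """The citizen selectively grants an organisation access to one field."""
        self.grants.setdefault(field, set()).add(agent)

    def read(self, field, agent):
        """Access is decided by the pod, not by per-agency copies of the data."""
        if agent == self.owner or agent in self.grants.get(field, set()):
            return self.data[field]
        raise PermissionError(f"{agent} may not read {field}")

pod = PersonalDataPod("citizen:alice")
pod.write("home_address", "Main Street 1, Ghent")
pod.grant_read("home_address", "gov:population-register")
```

    Because there is a single authoritative copy in the pod, the data-consistency problem of duplicated agency records disappears by construction.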

    Market differentiation potential of country-of-origin, quality and traceability labeling

    Product labeling has gained considerable attention recently, as a means to both provide product-specific information and reduce quality uncertainty faced by consumers, as well as from a regulatory point of view. This article focuses on whether and to what extent origin, quality and traceability labeling is an appropriate way to differentiate food products. The focus is on fresh meat and fresh fish, two mainly generic food product categories with a high degree of credence character. Insights into the potential for market differentiation through origin, quality and traceability labeling are provided and discussed using primary data collected during the period 2000-2005 by means of four consumer surveys. In general, direct indications of quality, including mandatory information cues such as best-before dates and species names, but also including quality marks, are found to be more appealing to consumers than origin labeling, and the latter more than traceability. The different studies yield the conclusion that the market differentiation potential of origin and quality labeling pertains mainly to a product’s healthiness appeal, and this potential seems stronger for meat than for fish. The differentiation potential of traceability per se is rather limited. Instead, traceability is needed as the regulatory and logistic backbone for providing guarantees related to origin and quality.

    Real deal or no deal? A comparative analysis of raw milk cheese regulation in Australia and France

    Australia’s regulatory framework has resulted in the standardisation of cheese production based on pasteurisation. Up until early 2015, regulations effectively prohibited raw milk cheese-making in Australia and thus stifled artisanal on-farm production. Although the introduction of Food Standards Australia New Zealand Standard 4.2.4 has allowed the production of certain hard, low-moisture raw milk cheeses, the new standard is rigid and does not encourage new entrants into the emerging raw milk cheese consumer market. This article compares the Australian system with the French raw milk cheese regulation and production system, and argues that the French approach of encouraging and supporting small farmhouse artisanal production of traditional raw milk cheese is beneficial to both producer and consumer, and has not resulted in any significant health risks. The Australian approach amounts to a missed opportunity to encourage the emergence of a value-added industry with local and export potential, and is at odds with important movements in food policy, such as recognition of the value of localism and terroir.

    Safeguarding the Evidential Value of Forensic Cryptocurrency Investigations

    Analyzing cryptocurrency payment flows has become a key forensic method in law enforcement and is nowadays used to investigate a wide spectrum of criminal activities. However, despite its widespread adoption, the evidential value of obtained findings in court is still largely unclear. In this paper, we focus on the key ingredients of modern cryptocurrency analytics techniques, which are clustering heuristics and attribution tags. We identify internationally accepted standards and rules for substantiating suspicions and providing evidence in court and project them onto current cryptocurrency forensics practices. By providing an empirical analysis of CoinJoin transactions, we illustrate possible sources of misinterpretation in algorithmic clustering heuristics. Finally, we derive a set of legal key requirements and translate them into a technical data sharing framework that fosters compliance with existing legal and technical standards in the realm of cryptocurrency forensics. Integrating the proposed framework in modern cryptocurrency analytics tools could allow more efficient and effective investigations, while safeguarding the evidential value of the analysis and the fundamental rights of affected persons.
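    The kind of misinterpretation the abstract empirically analyses can be sketched in a few lines. Below is a hedged illustration (addresses and transactions are invented, and this is not the paper's code) of the common-input-ownership clustering heuristic, implemented with union-find, and of how a CoinJoin transaction breaks its core assumption.

```python
# Sketch of the multi-input ("common-input-ownership") clustering heuristic:
# all input addresses of one transaction are assumed to share an owner.
# A CoinJoin deliberately combines inputs of different users in a single
# transaction, so the heuristic then merges unrelated users' clusters.

parent = {}

def find(addr):
    """Union-find root lookup with path halving."""
    parent.setdefault(addr, addr)
    while parent[addr] != addr:
        parent[addr] = parent[parent[addr]]
        addr = parent[addr]
    return addr

def union(a, b):
    parent[find(a)] = find(b)

def apply_heuristic(transactions):
    # Merge all input addresses of each transaction into one cluster.
    for inputs in transactions:
        for addr in inputs[1:]:
            union(inputs[0], addr)

# Ordinary spends: A1 and A2 belong to Alice, B1 to Bob.
apply_heuristic([["A1", "A2"], ["B1"]])

# A CoinJoin with inputs from both users: the heuristic now (incorrectly)
# places Alice and Bob in the same cluster -- the misinterpretation risk
# that matters for the evidential value of such analyses in court.
apply_heuristic([["A1", "B1"]])
```

    After the CoinJoin, `find("A1") == find("B1")` holds even though Alice and Bob never shared keys, which is exactly why such clusters need corroboration before being presented as evidence.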