
    Advanced Probabilistic Couplings for Differential Privacy

    Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, and 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt, and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
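    As context for the running example above, the sketch below shows the classic AboveThreshold formulation of the Sparse Vector technique with the usual Laplace noise scales. It is a minimal illustration of the standard algorithm, not of the paper's new variants, and the function and parameter names are illustrative.

        import numpy as np

        def above_threshold(queries, database, threshold, epsilon, rng=None):
            """Standard AboveThreshold / Sparse Vector sketch for sensitivity-1 queries.

            Reports above/below for each query and halts after the first "above"
            answer, consuming a total privacy budget of epsilon.
            """
            rng = rng or np.random.default_rng()
            # Perturb the threshold once with Laplace noise of scale 2/epsilon.
            noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
            answers = []
            for q in queries:
                # Fresh per-query noise of scale 4/epsilon (one above-threshold answer allowed).
                noisy_answer = q(database) + rng.laplace(scale=4.0 / epsilon)
                if noisy_answer >= noisy_threshold:
                    answers.append(True)
                    break  # halt after the first above-threshold report
                answers.append(False)
            return answers

    With these noise scales the mechanism satisfies epsilon-differential privacy on neighbouring databases, which is exactly the kind of per-algorithm guarantee the paper's logic is designed to verify.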

    PriCL: Creating a Precedent, a Framework for Reasoning about Privacy Case Law

    We introduce PriCL: the first framework for expressing and automatically reasoning about privacy case law by means of precedent. PriCL is parametric in an underlying logic for expressing world properties, and provides support for court decisions, their justification, the circumstances in which the justification applies, as well as court hierarchies. Moreover, the framework offers a tight connection between privacy case law and the notion of norms that underlies existing rule-based privacy research. In terms of automation, we identify the major reasoning tasks for privacy cases, such as deducing legal permissions or extracting norms. For solving these tasks, we provide generic algorithms that have particularly efficient realizations within an expressive underlying logic. Finally, we derive a definition of deducibility based on legal concepts and subsequently propose an equivalent characterization in terms of logic satisfiability.
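    The satisfiability-based characterization of deducibility can be illustrated in the usual propositional way: a statement follows from a knowledge base exactly when the knowledge base together with the statement's negation is unsatisfiable. The toy sketch below uses plain propositional CNF and a hypothetical precedent; PriCL itself is parametric in a richer underlying logic, so this shows only the generic reduction, not the framework's actual algorithms.

        from itertools import product

        def satisfiable(clauses, variables):
            """Brute-force propositional SAT: clauses are sets of (variable, polarity) literals."""
            for assignment in product([False, True], repeat=len(variables)):
                model = dict(zip(variables, assignment))
                if all(any(model[v] == pol for v, pol in clause) for clause in clauses):
                    return True
            return False

        def deducible(kb_clauses, query, variables):
            """A literal is deducible from the KB iff the KB plus its negation is unsatisfiable."""
            v, pol = query
            return not satisfiable(kb_clauses + [{(v, not pol)}], variables)

        # Hypothetical precedent: health data disclosed without consent is forbidden.
        kb = [
            {("health", False), ("consent", True), ("forbidden", True)},  # health & ~consent -> forbidden
            {("health", True)},    # case fact: the data is health data
            {("consent", False)},  # case fact: no consent was given
        ]
        print(deducible(kb, ("forbidden", True), ["health", "consent", "forbidden"]))  # True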

    Data Privacy and Dignitary Privacy: Google Spain, the Right To Be Forgotten, and the Construction of the Public Sphere

    The 2014 decision of the European Court of Justice in Google Spain controversially held that the fair information practices set forth in European Union (EU) Directive 95/46/EC (Directive) require that Google remove from search results links to websites that contain true information. Google Spain held that the Directive gives persons a “right to be forgotten.” At stake in Google Spain are values that involve both privacy and freedom of expression. Google Spain badly analyzes both. With regard to the latter, Google Spain fails to recognize that the circulation of texts of common interest among strangers makes possible the emergence of a “public” capable of forming the “public opinion” that is essential for democratic self-governance. As the rise of American newspapers in the nineteenth and twentieth centuries demonstrates, the press underwrites the public sphere by creating a structure of communication both responsive to public curiosity and independent of the content of any particular news story. Google, even though it is not itself an author, sustains the contemporary virtual public sphere by creating an analogous structure of communication. With regard to privacy values, EU law, like the laws of many nations, recognizes two distinct forms of privacy. The first is data privacy, which is protected by the fair information practices contained in the Directive. These practices regulate the processing of personal information to ensure (among other things) that such information is used only for the specified purposes for which it has been legally gathered. Data privacy operates according to an instrumental logic, and it seeks to endow persons with “control” over their personal data. Data subjects need not demonstrate harm in order to establish violations of data privacy. The second form of privacy recognized by EU law is dignitary privacy. Article 7 of the Charter of Fundamental Rights of the European Union protects the dignity of persons by regulating inappropriate communications that threaten to degrade, humiliate, or mortify them. Dignitary privacy follows a normative logic designed to prevent harm to personality caused by the violation of civility rules. These are the same privacy values as those safeguarded by the American tort of public disclosure of private facts. Throughout the world, courts protect dignitary privacy by balancing the harm that a communication may cause to personality against legitimate public interests in the communication. The instrumental logic of data privacy is inapplicable to public discourse, which is why the Directive contains derogations for journalistic activities. The communicative action characteristic of the public sphere is made up of intersubjective dialogue, which is antithetical both to the instrumental rationality of data privacy and to its aspiration to ensure individual control of personal information. Because the Google search engine underwrites the public sphere in which public discourse takes place, Google Spain should not have applied fair information practices to Google searches. But the Google Spain opinion also invokes Article 7, and in the end the decision creates doctrinal rules that are roughly approximate to those used to protect dignitary privacy. The Google Spain opinion is thus deeply confused about the kind of privacy it wishes to protect. It is impossible to ascertain whether the decision seeks to protect data privacy or dignitary privacy. Google Spain is ultimately pushed in the direction of dignitary privacy because data privacy is incompatible with public discourse, whereas dignitary privacy may be reconciled with the requirements of public discourse. Insofar as freedom of expression is valued because it fosters democratic self-government, public discourse cannot serve as an effective instrument of self-determination without a modicum of civility. Yet the Google Spain decision recognizes dignitary privacy only in a rudimentary and unsatisfactory way. If it had more clearly focused on the requirements of dignitary privacy, Google Spain would not so sharply have distinguished Google links from the underlying websites to which they refer. Google Spain would not have blithely outsourced the enforcement of the right to be forgotten to a private corporation like Google.

    Privacy in an Ambient World

    Privacy is a prime concern in today's information society. To protect the privacy of individuals, enterprises must follow certain privacy practices while collecting or processing personal data. In this chapter we look at the setting where an enterprise collects private data on its website, processes it inside the enterprise and shares it with partner enterprises. In particular, we analyse three different privacy systems that can be used in the different stages of this lifecycle. One of them is the Audit Logic, recently introduced, which can be used to keep data private when it travels across enterprise boundaries. We conclude with an analysis of the features and shortcomings of these systems.

    A Logical Method for Policy Enforcement over Evolving Audit Logs

    We present an iterative algorithm for enforcing policies represented in a first-order logic, which can, in particular, express all transmission-related clauses in the HIPAA Privacy Rule. The logic has three features that raise challenges for enforcement: uninterpreted predicates (used to model subjective concepts in privacy policies), real-time temporal properties, and quantification over infinite domains (such as the set of messages containing personal information). The algorithm operates over audit logs that are inherently incomplete and evolve over time. In each iteration, the algorithm provably checks as much of the policy as possible over the current log and outputs a residual policy that can only be checked when the log is extended with additional information. We prove correctness and termination properties of the algorithm. While these results are developed in a general form, accounting for many different sources of incompleteness in audit logs, we also prove that for the special case of logs that maintain a complete record of all relevant actions, the algorithm effectively enforces all safety and co-safety properties. The algorithm can significantly help automate enforcement of policies derived from the HIPAA Privacy Rule.
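    A rough sketch of the iterate-and-residuate idea, under simplifying assumptions (a policy modeled as a set of three-valued checks over a growing log; names are illustrative and this is not the paper's first-order algorithm): each pass checks what the current log already determines and carries the rest forward as a residual policy.

        # Each atomic obligation inspects the log and returns True (satisfied),
        # False (violated), or None (not yet decidable from the current log).
        def reduce_policy(policy, log):
            """Check what can be checked now; return (violations, residual policy)."""
            violations, residual = [], []
            for atom in policy:
                verdict = atom(log)
                if verdict is False:
                    violations.append(atom)
                elif verdict is None:
                    residual.append(atom)  # re-check once the log is extended
                # verdict is True: obligation discharged, drop it
            return violations, residual

        # Toy obligation: a disclosure must eventually be followed by a notification.
        def notified_after_disclosure(log):
            disclosures = [i for i, e in enumerate(log) if e == "disclose"]
            if not disclosures:
                return True
            return True if "notify" in log[disclosures[-1]:] else None

        policy = [notified_after_disclosure]
        log = ["disclose"]
        _, policy = reduce_policy(policy, log)   # obligation stays in the residual policy
        log.append("notify")
        _, policy = reduce_policy(policy, log)   # obligation is discharged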