
    Truthful Linear Regression

    We consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to truthfully report their data to the analyst constrains our design to mechanisms that provide a privacy guarantee to the participants; we use differential privacy to model individuals' privacy losses. This immediately poses a problem, as differentially private computation of a linear model necessarily produces a biased estimate, and existing approaches to designing mechanisms that elicit data from privacy-sensitive individuals do not generalize well to biased estimators. We overcome this challenge through an appropriate design of the computation and payment scheme. Comment: To appear in Proceedings of the 28th Annual Conference on Learning Theory (COLT 2015).
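    As an illustration of why differentially private estimation of a linear model perturbs the result, the sketch below fits a ridge-regularized least-squares model and adds Laplace noise to the solution (generic output perturbation). The clipping bound, regularization weight, and sensitivity constant are assumptions made for the example; this is not the paper's mechanism, which additionally couples the computation with a payment scheme.

```python
# Illustrative only: a generic output-perturbation approach to a
# differentially private linear fit. Not the paper's mechanism.
import numpy as np

def dp_linear_regression(X, y, epsilon, clip=1.0, reg=1.0, rng=None):
    """Ridge regression with Laplace noise added to the solution (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Clip rows and responses so each individual's influence is bounded.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xc = X / np.maximum(1.0, norms / clip)
    yc = np.clip(y, -clip, clip)
    # Regularized least squares: (X^T X + reg*I)^{-1} X^T y.
    # Clipping and regularization already bias the estimate relative to OLS.
    theta = np.linalg.solve(Xc.T @ Xc + reg * np.eye(d), Xc.T @ yc)
    # Hypothetical sensitivity bound for the regularized solution; the exact
    # constant depends on the privacy analysis and is assumed here.
    sensitivity = 2.0 * clip**2 / reg
    noise = rng.laplace(scale=sensitivity / epsilon, size=d)
    return theta + noise
```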

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.

    Data privacy by design: digital infrastructures for clinical collaborations

    The clinical sciences have arguably the most stringent security demands on the adoption and roll-out of collaborative e-Infrastructure solutions such as those based upon Grid-based middleware. Experiences from the Medical Research Council (MRC) funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project and numerous other real-world, security-driven projects at the UK National e-Science Centre (NeSC – www.nesc.ac.uk) have shown that whilst advanced Grid security and middleware solutions now offer capabilities to address many of the distributed data and security challenges in the clinical domain, the real clinical world, as typified by organisations such as the National Health Service (NHS) in the UK, is extremely wary of adopting such technologies: firewalls, ethics, information governance, software validation, and the actual realities of existing infrastructures need to be considered from the outset. Based on these experiences we present a novel data linkage and anonymisation infrastructure that has been developed in close co-operation with the various stakeholders in the clinical domain (including the NHS) and that addresses their concerns and satisfies the needs of the academic clinical research community. We demonstrate the implementation of this infrastructure through a representative clinical study on chronic diseases in Scotland.
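    The abstract does not specify the linkage mechanism, but a common building block for this kind of anonymised record linkage is keyed-hash pseudonymisation, sketched below. The field names, the shared linkage key, and the notion of a trusted linkage agent are assumptions for illustration, not details of the VOTES/NeSC design.

```python
# Illustrative sketch of keyed-hash pseudonymisation for record linkage.
# A generic pattern, not the VOTES/NeSC infrastructure itself.
import hmac
import hashlib

LINKAGE_KEY = b"held-by-a-trusted-linkage-agent"  # hypothetical key

def pseudonym(nhs_number: str) -> str:
    """Derive a stable pseudonym from an identifier without exposing it."""
    return hmac.new(LINKAGE_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

def anonymise(record: dict) -> dict:
    """Replace the direct identifier with a pseudonym; drop other identifiers."""
    out = {k: v for k, v in record.items()
           if k not in {"nhs_number", "name", "address"}}
    out["pseudonym"] = pseudonym(record["nhs_number"])
    return out

# Records from two sources can then be joined on the pseudonym field alone.
gp_record = {"nhs_number": "9434765919", "name": "A. Patient", "diagnosis": "T2DM"}
lab_record = {"nhs_number": "9434765919", "address": "...", "hba1c": 58}
print(anonymise(gp_record)["pseudonym"] == anonymise(lab_record)["pseudonym"])  # True
```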

    Privacy Games: Optimal User-Centric Data Obfuscation

    In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing a user's privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as through the posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. Their utility cost is also no larger than what either the differential or the distortion mechanism imposes. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
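    To make the optimization concrete, the sketch below solves a simplified version of the problem as a linear program: choose an obfuscation distribution p(o|s) that minimizes expected distortion subject to an ε-indistinguishability constraint. It omits the distortion-privacy (inference-error) constraint and the leader-follower structure of the paper's full game, and the prior, distortion function, and ε in the example are hypothetical.

```python
# Minimal sketch: utility-optimal obfuscation under an epsilon-
# indistinguishability constraint, written as a linear program.
import numpy as np
from scipy.optimize import linprog

def optimal_obfuscation(prior, dist, eps):
    """Return p(o|s) minimizing expected distortion s.t. eps-indistinguishability."""
    S = len(prior)  # secrets and outputs share one alphabet for simplicity
    c = (np.asarray(prior)[:, None] * np.asarray(dist)).ravel()  # E[distortion]
    # Row-stochastic constraints: sum_o p(o|s) = 1 for every secret s.
    A_eq = np.zeros((S, S * S))
    for s in range(S):
        A_eq[s, s * S:(s + 1) * S] = 1.0
    b_eq = np.ones(S)
    # Indistinguishability: p(o|s) <= exp(eps) * p(o|s') for all o, s, s'.
    rows = []
    for o in range(S):
        for s in range(S):
            for s2 in range(S):
                if s == s2:
                    continue
                r = np.zeros(S * S)
                r[s * S + o] = 1.0
                r[s2 * S + o] = -np.exp(eps)
                rows.append(r)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return res.x.reshape(S, S)

# Example: three locations, distortion = |i - j|, epsilon = 0.5 (all assumed).
mechanism = optimal_obfuscation([0.5, 0.3, 0.2],
                                [[abs(i - j) for j in range(3)] for i in range(3)],
                                0.5)
print(mechanism)
```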