
    VirtualIdentity: privacy-preserving user profiling

    User profiling from user-generated content (UGC) is a common practice that supports the business models of many social media companies. Existing systems require that the UGC is fully exposed to the module that constructs the user profiles. In this paper we show that it is possible to build user profiles without ever accessing the user's original data, and without exposing the trained machine learning models for user profiling, which are the intellectual property of the company, to the users of the social media site. We present VirtualIdentity, an application that uses secure multi-party cryptographic protocols to detect the age, gender, and personality traits of users by classifying their user-generated text and personal pictures with trained support vector machine models in a privacy-preserving manner.
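The core idea, classifying with a trained SVM while neither party reveals its inputs, can be illustrated with a toy two-party computation. The sketch below is an illustrative reconstruction, not the paper's actual protocol: the `shared_mul` helper, the trusted Beaver-triple dealer, and integer-valued features are all assumptions made for the demo. It evaluates a linear SVM score w·x + b where the user's features and the company's model weights exist only as additive secret shares, and only the final sign is reconstructed.

```python
import random

P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative choice)

def share(x):
    """Split x mod P into two additive shares, one per party."""
    s0 = random.randrange(P)
    return s0, (x - s0) % P

def reconstruct(s0, s1):
    """Recombine two additive shares into the underlying value mod P."""
    return (s0 + s1) % P

def beaver_triple():
    """Trusted dealer: shares of random a, b and of their product a*b."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def shared_mul(x_sh, y_sh):
    """Multiply secret-shared x and y with one Beaver triple.
    Only the masked values d = x - a and e = y - b are opened, and since
    a, b are uniformly random they reveal nothing about x or y."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    d = (reconstruct(*x_sh) - reconstruct(a0, a1)) % P  # opened publicly
    e = (reconstruct(*y_sh) - reconstruct(b0, b1)) % P  # opened publicly
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0 adds the public d*e term
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1  # z0 + z1 == x * y (mod P)

def shared_linear_svm(w, b, x):
    """Score a linear SVM on shares: the company never sees the user's
    features x, the user never sees the model (w, b); only the final
    decision is reconstructed."""
    w_sh = [share(wi % P) for wi in w]   # company shares its weights
    x_sh = [share(xi % P) for xi in x]   # user shares their features
    acc = share(b % P)
    for ws, xs in zip(w_sh, x_sh):
        z = shared_mul(ws, xs)
        acc = ((acc[0] + z[0]) % P, (acc[1] + z[1]) % P)
    score = reconstruct(*acc)
    score = score if score <= P // 2 else score - P  # map back to a signed int
    return 1 if score > 0 else -1  # SVM decision: sign of w.x + b
```

With integer inputs the decision sign(w·x + b) comes out exactly; a real deployment would fixed-point-encode real-valued features and replace the simulated dealer with cryptographic protocols between the actual parties.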

    The Serums Tool-Chain: Ensuring Security and Privacy of Medical Data in Smart Patient-Centric Healthcare Systems

    Digital technology is permeating all aspects of human society and life, leaving humans highly dependent on digital devices, including on digital assistance, intelligence, and decision-making. A major concern of this digital dependence is the lack of human oversight or intervention in many of the ways humans use this technology. This dependence and reliance on digital technology raises concerns about how humans trust such systems and how to ensure digital technology behaves appropriately. This work considers recent developments and projects that combine digital technology and artificial intelligence with human society. The focus is on critical scenarios where failure of digital technology can lead to significant harm or even death. We explore how to build trust for users of digital technology in such scenarios while considering many different challenges for digital technology. The approaches applied and proposed here address user trust along many dimensions and aim to build collaborative and empowering use of digital technologies in critical aspects of human society.

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine (the use of big data and sophisticated machine learning techniques for health-care applications) could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.