
    Libertas: Privacy-Preserving Computation for Decentralised Personal Data Stores

    Data-driven decision-making and AI applications present exciting new opportunities that deliver widespread benefits. The rapid adoption of such applications triggers legitimate concerns about loss of privacy and misuse of personal data. This leads to a growing and pervasive tension between harvesting ubiquitous data on the Web and the need to protect individuals. Decentralised personal data stores (PDS) such as Solid are frameworks designed to give individuals ultimate control over their personal data. But current PDS approaches have limited support for ensuring privacy when computations combine data spread across users. Secure Multi-Party Computation (MPC) is a well-known subfield of cryptography that enables multiple autonomous parties to collaboratively compute a function while ensuring the secrecy of inputs (input privacy). These two technologies complement each other, but existing practices fall short in addressing the requirements and challenges of introducing MPC in a PDS environment. For the first time, we propose a modular design for integrating MPC with Solid while respecting the requirements of decentralisation in this context. Our architecture, Libertas, requires no protocol-level changes in the underlying design of Solid, and can be adapted to other PDS. We further show how this can be combined with existing differential privacy techniques to also ensure output privacy. We use empirical benchmarks to inform and evaluate our implementation and design choices. We show the technical feasibility and scalability pattern of the proposed system in two novel scenarios: 1) empowering gig workers with aggregate computations on their earnings data; and 2) generating high-quality differentially private synthetic data without requiring a trusted centre. With this, we demonstrate the linear scalability of Libertas and gain insights into compute optimisations under such an architecture.
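    To make the gig-worker aggregation scenario concrete, the sketch below illustrates the general MPC idea of additive secret sharing for a joint sum, with optional Laplace noise added to the published result for output privacy. It is a minimal illustration rather than the Libertas implementation: the party names, the field modulus, the sensitivity bound, and the privacy budget `epsilon` are assumptions, and in a real deployment the noise would be incorporated inside the protocol rather than after reconstruction.

```python
import secrets
import random

PRIME = 2**61 - 1  # field modulus for the shares (illustrative choice)

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the original value."""
    return sum(shares) % PRIME

# Each worker's earnings stay in their own data store; only shares leave it.
earnings = {"worker_a": 1200, "worker_b": 950, "worker_c": 1430}  # hypothetical data
n = len(earnings)

# Every worker sends one share to each computing party.
shares_per_party = [[] for _ in range(n)]
for value in earnings.values():
    for party, s in enumerate(share(value, n)):
        shares_per_party[party].append(s)

# Each party sums the shares it holds, learning nothing about individual inputs.
partial_sums = [sum(col) % PRIME for col in shares_per_party]
total = reconstruct(partial_sums)

# Optional output privacy: Laplace noise scaled to an assumed maximum
# individual contribution (sensitivity) and privacy budget epsilon.
epsilon, sensitivity = 1.0, 2000
noise = random.choice([-1.0, 1.0]) * random.expovariate(epsilon / sensitivity)
print(total, round(total + noise))
```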

    BallotShare: an exploration of the design space for digital voting in the workplace

    Digital voting is used to support group decision-making in a variety of contexts ranging from politics to mundane everyday collaboration, and the rise in popularity of digital voting has provided an opportunity to re-envision voting as a social tool that better serves democracy. A key design goal for any group decision-making system is the promotion of participation, yet there is little research that explores how the features of digital voting systems themselves can be shaped to configure participation appropriately. In this paper, we propose a framework that explores the design space of digital voting from the perspective of participation. We ground our discussion in the design of a social media polling tool called BallotShare, a first instantiation of our proposed framework designed to facilitate the study of decision-making practices in a workplace environment. Across five weeks, participants created and took part in non-standard polls relating to events and other spontaneous group decisions. Following interviews with participants, we identified significant drivers and limitations of individual and collective participation in the voting process: social visibility, social inclusion, commitment and delegation, accountability, influence and privacy.

    Medical Autonomy in Crisis: The Destruction of the Right to Privacy

    This thesis is presented for a Master of Arts and Science Degree at the University of Tennessee, Knoxville, from the College of Liberal Arts, Department of Philosophy, with a concentration in medical ethics. The thesis's argument assumes that rules of ethics or morality should be rules and guidelines of uniformity and therefore universal in nature and scope. In that assumption, the thesis adheres to the present problematic state of ethics, and in particular, medical ethics. In response to that state, it asks: without imprint from religious morality, upon what foundation can we design a system for medical ethics? In particular, does the rule of law and interpretation of our Federal Constitutional rights in the United States of America offer any guidance for a foundation of medical ethics? Still more specifically, the issues of concern here will be the following. First, this thesis will explore the connection between the medical term "autonomy" and the legal terms "privacy" and "liberty". Second, the thesis will focus on an examination of the nature of medical autonomy which the Supreme Court of the United States reached in the landmark Roe v. Wade decision. Is the interpretation of the Constitution of the United States of America, adopted in that decision, an interpretation that defines such rights as the Constitutional right of privacy, a method that can be depended upon for guidance or foundational support for a right of medical autonomy? If not, what conclusions can be reached by this analysis? As the thesis will conclude, while medical autonomy and what the courts call privacy and liberty when discussing medical cases appear to be the same thing, interpretation of the Constitution of the United States of America does not provide us with a relevant tool for guidance with respect to the ethics of medical autonomy. The standard interpretation of the Roe v. Wade decision as concerning a national, and therefore uniform, right of privacy in making medical decisions, akin to medical definitions of autonomy, was destroyed by the United States Supreme Court decision in the cases known as Washington/Vacco.

    A Systematic Security Evaluation of Android's Multi-User Framework

    Like many desktop operating systems in the 1990s, Android is now in the process of including support for multi-user scenarios. Because these scenarios introduce new threats to the system, we should have an understanding of how well the system design addresses them. Since the security implications of multi-user support are truly pervasive, we developed a systematic approach to studying the system and identifying problems. Unlike other approaches that focus on specific attacks or threat models, ours systematically identifies critical places where access controls are not present or do not properly identify the subject and object of a decision. Finding these places gives us insight into hypothetical attacks that could result, and allows us to design specific experiments to test our hypotheses. Following an overview of the new features and their implementation, we describe our methodology, present a partial list of our most interesting hypotheses, and describe the experiments we used to test them. Our findings indicate that the current system only partially addresses the new threats, leaving the door open to a number of significant vulnerabilities and privacy issues. Our findings span a spectrum of root causes, from simple oversights all the way to major system design problems. We conclude that there is still a long way to go before the system can be used in anything more than the most casual of sharing environments. Comment: In Proceedings of the Third Workshop on Mobile Security Technologies (MoST) 2014 (http://arxiv.org/abs/1410.6674).
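    The class of gap the methodology looks for can be sketched as follows. This is a toy Python model, not Android source code: it contrasts a hypothetical access-control check whose subject is only the calling app with one that also identifies the device user and can restrict owner-only resources. All identifiers and permission names below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Caller:
    app_uid: int   # identifies the app package
    user_id: int   # identifies the device user (0 = owner, 10+ = secondary)

# Hypothetical state: which (package, permission) pairs have been granted.
GRANTED = {("com.example.camera", "android.permission.CAMERA")}
PACKAGES = {10042: "com.example.camera"}

def check_single_user(caller: Caller, permission: str) -> bool:
    """Pre-multi-user style check: the subject is only the calling app."""
    return (PACKAGES[caller.app_uid], permission) in GRANTED

def check_multi_user(caller: Caller, permission: str, owner_only: bool = False) -> bool:
    """Multi-user aware check: the subject is (app, user), and some
    resources are restricted to the device owner (user 0)."""
    if owner_only and caller.user_id != 0:
        return False
    return check_single_user(caller, permission)

secondary = Caller(app_uid=10042, user_id=10)
print(check_single_user(secondary, "android.permission.CAMERA"))        # True: the user is ignored
print(check_multi_user(secondary, "android.permission.CAMERA", True))   # False: owner-only resource
```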

    Help4Mood: avatar-based support for treating people with major depression in the community

    BACKGROUND: The Help4Mood consortium, comprising partners from Scotland, Spain, Romania, and Italy, aims to develop a system for supporting the treatment of people with major depressive disorder in the community. The Help4Mood system consists of three parts: (1) a Personal Monitoring System that collects activity and sleep data; (2) a Virtual Agent (avatar) that interacts with the patient in short, structured sessions that involve mood checks, psychomotor tests, and brief therapy-related exercises; (3) a Decision Support System that controls the interaction between user and Virtual Agent, extracts relevant information from the monitoring data, and produces reports for clinicians. In this paper, we report the results of focus groups that were conducted to gather user requirements for Help4Mood. These involved two core stakeholder groups: patients with depression and clinicians. AIMS AND OBJECTIVES: We invited comments on all aspects of system design, focusing on the nature and intensity of monitoring; the interaction between the Virtual Agent and patients; and the support patients and clinicians would wish to receive from Help4Mood. METHODS: Ten focus groups were conducted in Scotland, Spain, and Romania, one each with patients and 2–3 each with psychiatrists, clinical psychologists, and psychiatric nurses. Following a presentation of a sample Help4Mood session, the discussion was structured using a set of prompts. Group sessions were transcribed; data were analysed using framework analysis. RESULTS: Regarding the overall Help4Mood system, participants raised three main issues: integration with treatment, configurability to support local best practice, and affordability for health services. Monitoring was discussed in terms of complexity, privacy in shared spaces, and crisis. Clinicians proposed physiological, mood, and neuropsychomotor variables that might be monitored, which would yield a complex picture of the patient’s state. Obtrusiveness (of monitors) and intrusiveness (to routines and environment) were raised as important barriers. Patients felt that Help4Mood should provide them with tailored resources in a crisis; clinicians were clear that Help4Mood should not be used to detect acute suicide risks. Three main design characteristics emerged for the Virtual Agent: ease of use, interaction style, and humanoid appearance. The agent should always react appropriately, look and behave like a good therapist, and show positive or neutral emotion. Core functions of the decision support system were characterized by the themes of adaptation, informing treatment, and supporting clinician-patient interaction. The decision support system should adapt the patient’s session with the Virtual Agent in accordance with the patient’s mood and stamina. Monitoring data should be presented as a one-page summary highlighting key trends to be discussed with the patient during a consultation. CONCLUSIONS: Consulting with patients and clinicians showed that Help4Mood should focus on tracking recovery in the community in a way that informs and supports ongoing treatment. The design of the Virtual Agent will be crucial for encouraging adherence to the Help4Mood protocol. To ensure uptake of the system, Help4Mood needs to be flexible enough to fit into different service delivery contexts.

    Integrated Framework for Data Quality and Security Evaluation on Mobile Devices

    Data quality (DQ) is an important concept that is used in the design and employment of information, data management, decision-making, and engineering systems, with multiple applications already available for solving specific problems. Unfortunately, conventional approaches to DQ evaluation commonly do not pay enough attention to, or even ignore, the security and privacy of the evaluated data. In this research, we develop a framework for the DQ evaluation of sensor-originated data acquired from smartphones that incorporates security and privacy aspects into the DQ evaluation pipeline. The framework provides support for selecting the DQ metrics and implementing their calculus by integrating diverse sensor data quality and security metrics. The framework employs a knowledge graph to facilitate its adaptation in new applications development and enables knowledge accumulation. The evaluation of privacy aspects is demonstrated through the detection of novel and sophisticated attacks on data privacy, using the recognition of a colluded-applications attack as an example. We develop multiple calculi for DQ and security evaluation, such as a hierarchical fuzzy rules expert system, neural networks, and an algebraic function. Case studies that demonstrate the framework's performance in solving real-life tasks are presented, and the achieved results are analyzed. These case studies confirm the framework's capability of performing comprehensive DQ evaluations. The framework development resulted in multiple products and tools, such as datasets and Android OS applications. The datasets include a knowledge base of sensors embedded in modern mobile devices and their quality analysis, as well as recordings of smartphones' technological signals during normal usage and during attacks on users' privacy. These datasets are made available for public use and can be used for future research in the field of data quality and security. We also released, under an open-source license, a set of Android OS tools that can be used for data quality and security evaluation.
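    As one way to picture the "algebraic function" style of calculus mentioned above, the sketch below aggregates normalised data-quality and privacy metrics into a single weighted score. It is purely illustrative: the metric names, weights, and normalisation are assumptions and do not reproduce the framework's actual calculi (which also include fuzzy rules and neural networks).

```python
def aggregate_score(metrics: dict, weights: dict) -> float:
    """Weighted average of normalised metrics in [0, 1]; higher is better."""
    total_w = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total_w

# Hypothetical sensor-data metrics for one smartphone data stream.
sample = {
    "completeness": 0.97,   # fraction of expected sensor readings present
    "accuracy": 0.91,       # agreement with a reference signal
    "timeliness": 0.88,     # readings delivered within the expected window
    "privacy_risk": 0.15,   # e.g. estimated exposure to a colluded-applications leak
}
weights = {"completeness": 0.3, "accuracy": 0.3, "timeliness": 0.2, "privacy_risk": 0.2}

# Invert the risk metric so that higher always means better before aggregating.
normalised = dict(sample, privacy_risk=1.0 - sample["privacy_risk"])
print(round(aggregate_score(normalised, weights), 3))
```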

    Towards Optimal Algorithms For Online Decision Making Under Practical Constraints

    Artificial Intelligence is increasingly being used in real-life applications such as driving with autonomous cars, deliveries with autonomous drones, customer support with chatbots, and personal assistance with smart speakers. An Artificial Intelligence (AI) agent can be trained to become expert at a task through a system of rewards and punishments, an approach known as Reinforcement Learning (RL). However, since the AI will deal with human beings, it also has to follow some moral rules to accomplish any task. For example, the AI should be fair to the other agents and not destroy the environment. Moreover, the AI should not compromise the privacy of the users' data it processes. Those rules represent significant challenges in designing AI that we tackle in this thesis through mathematically rigorous solutions. More precisely, we start by considering the basic RL problem modeled as a discrete Markov Decision Process. We propose three simple algorithms (UCRL-V, BUCRL and TSUCRL) using two different paradigms: Frequentist (UCRL-V) and Bayesian (BUCRL and TSUCRL). Through a unified theoretical analysis, we show that our three algorithms are near-optimal. Experiments performed confirm the superiority of our methods compared to existing techniques. Afterwards, we address the issue of fairness in the stateless version of reinforcement learning, also known as the multi-armed bandit. To concentrate our effort on the key challenges, we focus on the two-agent multi-armed bandit. We propose a novel objective that has been shown to be connected to fairness and justice. We derive an algorithm, UCRG, to solve this novel objective and show theoretically its near-optimality. Next, we tackle the issue of privacy by using the recently introduced notion of Differential Privacy. We design multi-armed bandit algorithms that preserve differential privacy. Theoretical analyses show that, for the same level of privacy, our newly developed algorithms achieve better performance than existing techniques.
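    To illustrate the differentially private bandit direction, the sketch below shows a simple UCB-style bandit that perturbs each arm's empirical reward sum with Laplace noise before choosing an arm. This is a generic, simplified construction under assumed parameters, not the thesis's UCRL-V, BUCRL, TSUCRL, or UCRG algorithms; a rigorous private bandit would use more careful noise accounting (e.g. tree-based aggregation) rather than fresh noise at every step.

```python
import math
import random

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) noise."""
    return random.choice([-1.0, 1.0]) * random.expovariate(1.0 / scale)

def dp_ucb(arms, horizon: int, epsilon: float = 1.0, seed: int = 0) -> float:
    """UCB-style Bernoulli bandit with Laplace-perturbed reward sums (illustrative only)."""
    random.seed(seed)
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                        # play each arm once to initialise
            arm = t - 1
        else:
            def index(i: int) -> float:
                # Noisy empirical mean plus the usual exploration bonus.
                noisy_mean = (sums[i] + laplace(1.0 / epsilon)) / counts[i]
                return noisy_mean + math.sqrt(2 * math.log(t) / counts[i])
            arm = max(range(k), key=index)
        reward = 1.0 if random.random() < arms[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward

# Two-arm example with assumed success probabilities 0.4 and 0.6.
print(dp_ucb([0.4, 0.6], horizon=5000, epsilon=1.0))
```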

    A Decentralized Approach Towards Responsible AI in Social Ecosystems

    For AI technology to fulfill its promise, we must design effective mechanisms into AI systems to support responsible AI behavior and curtail potential irresponsible use, e.g., in areas of privacy protection, human autonomy, robustness, and prevention of biases and discrimination in automated decision making. In this paper, we present a framework that provides computational facilities for parties in a social ecosystem to produce the desired responsible AI behaviors. To achieve this goal, we analyze AI systems at the architecture level and propose two decentralized cryptographic mechanisms for an AI system architecture: (1) using Autonomous Identity to empower human users, and (2) automating rules and adopting conventions within social institutions. We then propose a decentralized approach and outline the key concepts and mechanisms based on Decentralized Identifiers (DID) and Verifiable Credentials (VC) for a general-purpose computational infrastructure to realize these mechanisms. We argue that a decentralized approach is the most promising path towards Responsible AI from both the computer science and social science perspectives.
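    A highly simplified sketch of the issue-and-verify flow behind Verifiable Credentials is shown below. Real VCs are signed with asymmetric keys (e.g. Ed25519) resolved from the issuer's DID document; here an HMAC with a shared secret is used only to keep the example self-contained, the field names follow the W3C VC data model loosely, and all DIDs and claims are made up.

```python
import hashlib
import hmac
import json

ISSUER_SECRET = b"issuer-demo-key"  # hypothetical key material, stand-in for a signing key

def sign(payload: dict) -> str:
    """Deterministically serialise and MAC the credential body."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()

credential = {
    "@context": "https://www.w3.org/2018/credentials/v1",
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",             # made-up DID
    "credentialSubject": {
        "id": "did:example:user456",               # made-up DID
        "consent": "aggregate-statistics-only",    # the rule an AI system is asked to honour
    },
}
body = dict(credential)                            # everything except the proof gets signed
credential["proof"] = {"type": "DemoHmacSignature", "value": sign(body)}

def verify(vc: dict) -> bool:
    """Check that the credential body still matches its proof."""
    unsigned = {k: v for k, v in vc.items() if k != "proof"}
    return hmac.compare_digest(vc["proof"]["value"], sign(unsigned))

print(verify(credential))  # True while the credential is untampered
```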

    Ethical, legal and social issues for personal health records and applications

    The Robert Wood Johnson Foundation’s Project HealthDesign included funding of an ethical, legal and social issues (ELSI) team to serve in an advisory capacity to the nine design projects. In that capacity, the authors had the opportunity to analyze the personal health record (PHR) and personal health application (PHA) implementations for recurring themes. PHRs and PHAs invert the long-standing paradigm of health care institutions as the authoritative data-holders and data-processors in the system. With PHRs and PHAs, the individual is the center of his or her own health data universe, a position that brings new benefits but also entails new responsibilities for patients and other parties in the health information infrastructure. Implications for law, policy and practice follow from this shift. This article summarizes the issues raised by the first phase of Project HealthDesign projects, categorizing them into four topics: privacy and confidentiality, data security, decision support, and HIPAA and related legal-regulatory requirements. Discussion and resolution of these issues will be critical to successful PHR/PHA implementations in the years to come.