
    Redescribing Health Privacy: The Importance of Health Policy

    Current conversations about health information policy often rest on three broad assumptions. First, many perceive a tension between regulation and innovation: we often hear that privacy regulations keep researchers, companies, and providers from aggregating the data they need to promote innovation. Second, aggregation of fragmented data is seen as a threat to its proper regulation, creating the risk of breaches and other misuse. Third, a prime directive for technicians and policymakers is to give patients ever more granular methods of control over data. This article questions and complicates those assumptions, which I deem (respectively) the Privacy Threat to Research, the Aggregation Threat to Privacy, and the Control Solution. This article is also intended to enrich our concepts of “fragmentation” and “integration” in health care. There is a good deal of sloganeering around “firewalls” and “vertical integration” as idealized implementations of “fragmentation” and “integration” (respectively). The problem, though, is that terms like these (as well as “disruption”) are insufficiently normative to guide large-scale health system change. They describe, but they do not adequately prescribe. By examining instances where a) regulation promotes innovation and b) increasing (some kinds of) availability of data actually enhances security, confidentiality, and privacy protections, this article attempts to give a richer account of the ethics of fragmentation and integration in the U.S. health care system. But the account also has a darker side, highlighting the inevitable conflicts of values created in a “reputation society” driven by stigmatizing social sorting systems. Personal data control may exacerbate social inequalities. Data aggregation may increase both our powers of research and our vulnerability to breach. The health data policymaking landscape of the next decade will feature a series of intractable conflicts between these important social values.

    Observing and recommending from a social web with biases

    The research question this report addresses is: how, and to what extent, can those directly involved with the design, development and deployment of a specific black-box algorithm be certain that it is not unlawfully discriminating (directly and/or indirectly) against particular persons with protected characteristics (e.g. gender, race and ethnicity)?
    Comment: Technical Report, University of Southampton, March 201
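
    The report asks how such certainty could be established; one minimal, illustrative check for indirect discrimination (not a method taken from the report) is the "four-fifths" disparate-impact ratio. The sketch below uses made-up decisions, group labels, and the conventional 0.8 threshold purely for illustration.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions, groups):
    """Selection rate of each group divided by the highest group's rate.

    decisions: iterable of 0/1 outcomes from the black-box algorithm
    groups:    protected-attribute labels, aligned with decisions
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for d, g in zip(decisions, groups):
        counts[g][0] += d
        counts[g][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative data: a model that favours group "A"
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B", "A", "B"]
for g, r in sorted(disparate_impact_ratio(decisions, groups).items()):
    flag = "below 0.8 threshold" if r < 0.8 else "ok"
    print(f"group {g}: ratio {r:.2f} ({flag})")
```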

    Perceived fairness of direct-to-consumer genetic testing business models

    Although consumers and experts often express concerns regarding the questionable business practices of direct-to-consumer (DTC) genetic testing services (e.g., reselling of consumers’ genetic data), the DTC genetic testing market keeps expanding rapidly. We employ retail fairness as our theoretical lens to address this seeming paradox and conduct a discrete choice experiment with 16 attributes to better understand consumers’ fairness perceptions of DTC genetic testing business models. Our results suggest that, while consumers perceive privacy-preserving DTC genetic testing services as fairer, price is the main driver of fairness perceptions. We contribute to research on consumer perceptions of DTC genetic testing by investigating consumer preferences for DTC genetic testing business models and their respective attributes. Further, this research contributes to knowledge about disruptive business models in healthcare and about retail fairness by contextualizing the concept of retail fairness in the DTC genetic testing market. We also demonstrate how discrete choice experiments can be used to elicit perceived fairness.
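
    The abstract does not spell out the estimation step; as a hedged sketch, for two-alternative choice tasks a conditional logit reduces to a logistic regression on attribute differences. The attributes, part-worth utilities, and simulated responses below are invented for illustration and are not the paper's 16 attributes or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical profile attributes: price (in $100s), resells data (1/0),
# privacy-preserving analysis (1/0) -- assumptions, not the paper's design.
true_beta = np.array([-0.8, -1.5, 1.0])  # assumed part-worth utilities

n_tasks = 500
A = rng.integers(0, 2, size=(n_tasks, 3)).astype(float)
B = rng.integers(0, 2, size=(n_tasks, 3)).astype(float)
A[:, 0] = rng.uniform(0.5, 3.0, size=n_tasks)  # continuous price levels
B[:, 0] = rng.uniform(0.5, 3.0, size=n_tasks)

# Simulated respondents choose profile A with logit probability
# of its utility advantage over profile B
utility_diff = (A - B) @ true_beta
choose_A = rng.random(n_tasks) < 1 / (1 + np.exp(-utility_diff))

# Conditional logit for paired choices == logistic regression on
# attribute differences with no intercept; large C ~ no regularization
model = LogisticRegression(fit_intercept=False, C=1e6)
model.fit(A - B, choose_A)
print("estimated part-worths:", model.coef_.round(2))
```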

    A Human-centric Perspective on Digital Consenting: The Case of GAFAM

    According to different legal frameworks such as the European General Data Protection Regulation (GDPR), an end-user's consent constitutes one of the well-known legal bases for personal data processing. However, research has indicated that the majority of end-users have difficulty understanding what they are consenting to in the digital world. Moreover, it has been demonstrated that marginalized people are confronted with even more difficulties when dealing with their own digital privacy. In this research, we use an enactivist perspective from cognitive science to develop a basic human-centric framework for digital consenting. We argue that the action of consenting is a sociocognitive action and includes cognitive, collective, and contextual aspects. Based on the developed theoretical framework, we present our qualitative evaluation of the consent-obtaining mechanisms implemented and used by the five big tech companies, i.e. Google, Amazon, Facebook, Apple, and Microsoft (GAFAM). The evaluation shows that these companies have failed to empower end-users, because their consent-obtaining mechanisms neglect the human-centric aspects of the action of consenting. We use this approach to argue that their consent-obtaining mechanisms violate principles of fairness, accountability and transparency. We then suggest that our approach may raise doubts about the lawfulness of the obtained consent, particularly considering the basic requirements of lawful consent within the legal framework of the GDPR.

    Data Privacy: What Still Needs Consideration in Online Application Systems?

    This paper analyzes and explores the matters that still need to be considered in relation to data privacy in online application systems. This research is a preliminary study. We conducted the research using a systematic literature review (SLR) approach. Following the SLR stages, we synthesized 44 publications from the Scopus online database released between 2015 and 2019. Based on this study, we found six points to consider in data privacy: security and data protection, user awareness, risk management, control settings, ethics, and transparency.

    Vertical Federated Learning

    Vertical Federated Learning (VFL) is a federated learning setting where multiple parties with different features about the same set of users jointly train machine learning models without exposing their raw data or model parameters. Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy. We provide an exhaustive categorization of VFL settings and privacy-preserving protocols and comprehensively analyze the privacy attacks and defense strategies for each protocol. We then propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, and effectiveness constraints. Finally, we review the most recent advances in industrial applications, highlighting open challenges and future directions for VFL.
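
    As a minimal sketch of the core VFL setting (not any specific protocol from the survey), the snippet below trains a joint logistic model in which two parties hold disjoint feature blocks for the same users and exchange only per-user partial scores and residuals. All data and parameters are synthetic; production VFL systems would additionally encrypt or otherwise protect these intermediates, which is precisely the attack surface the survey analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two parties hold different feature columns for the SAME users -- the
# defining trait of vertical federated learning. Labels sit with party A.
n = 1000
XA = rng.normal(size=(n, 3))   # party A's private features
XB = rng.normal(size=(n, 2))   # party B's private features
w_true = rng.normal(size=5)
y = (np.concatenate([XA, XB], axis=1) @ w_true
     + 0.1 * rng.normal(size=n)) > 0

wA, wB, lr = np.zeros(3), np.zeros(2), 0.1
for step in range(200):
    # Each party computes a partial score on its own features; only these
    # per-user scalars cross the party boundary, never the raw columns.
    zA, zB = XA @ wA, XB @ wB
    p = 1 / (1 + np.exp(-(zA + zB)))  # combined prediction
    err = p - y                       # residual, shared with party B
    wA -= lr * XA.T @ err / n         # each party updates its weights locally
    wB -= lr * XB.T @ err / n

acc = ((1 / (1 + np.exp(-(XA @ wA + XB @ wB))) > 0.5) == y).mean()
print(f"joint-model accuracy: {acc:.2f}")
```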