
    Non-Consensual Disclosures

    In the course of biomedical research on humans — for example, flu, imaging, and genomic studies — researchers often uncover information about participants that is important to their health and wellbeing. In many cases, the information is not anticipated in advance, and participants did not consent to receiving it. This Article examines the law and policy governing human subjects research, focusing on the set of regulations known as the Common Rule. I argue that human subjects researchers will often have strong ethical reasons to disclose results even when participants did not consent to the disclosure in advance. I also show how the current regulatory scheme stands in the way of ethical disclosures, putting researchers in a difficult position where they might not be able to fulfill their ethical duties without transgressing legal ones. Although we need to contend with the autonomy and welfare risks associated with returning results, not to mention the financial and administrative costs, these downsides are similarly present in analogous scenarios where non-consensual warnings are legally permitted and sometimes even required. There does not appear to be any good reason to make a policy exception for biomedical researchers when it comes to issuing warnings in the form of information disclosure. To aid difficult determinations about which results warrant return, I suggest that policymakers take advantage of the interest and willingness of the bioethics community to develop consensus norms, and incorporate those norms into regulation so that researchers would at least be permitted to disclose results whenever consensus standards recommend disclosure. In this way, the law would make space for ethically optimal conduct without necessarily compelling it. At the same time, bioethicists and researchers should train their attention on non-ideal consent settings — the focus of this Article — rather than continuing to assume or hope that participants will have a chance to consent to the disclosure of results in advance.

    Virtual Clinical Trials: One Step Forward, Two Steps Back

    Virtual clinical trials have entered the medical research landscape. Today’s clinical trials recruit subjects online, obtain informed consent online, send treatments such as medications or devices to the subjects’ homes, and require subjects to record their responses online. Virtual clinical trials could be a way to democratize clinical research and circumvent geographical limitations by allowing access to clinical research for people who live far from traditional medical research centers. But virtual clinical trials also depart dramatically from traditional medical research studies in ways that can harm individuals and the public at large. This article addresses the issues virtual clinical trials present with regard to: (1) recruitment methods; (2) informed consent; (3) confidentiality; (4) potential risks to subjects; and (5) the safety and efficacy of the treatments that are ultimately approved.

    What Makes Health Data Privacy Calculus Unique? Separating Probability from Impact

    Patient health data is heavily regulated and sensitive. Patients sometimes falsify data to avoid embarrassment, resulting in misdiagnoses and even death. Existing research explaining this phenomenon is scarce, with little more than attitudes and intentions modeled. Similarly, health data disclosure research has only applied existing theories with additional constructs for the healthcare context. We argue that health data has a fundamentally different cost/benefit calculus than the non-health contexts of traditional privacy research. By separating the probability of disclosure risks and benefits from the impact of that disclosure, it becomes easier to understand and interpret health data disclosure. In a study of 1,590 patients disclosing health information electronically, we find that the benefits of disclosure are more difficult to conceptualize than the impact of the risk. We validate this using both a stated and an objective (mouse-tracking) measure of patient lying.

    Privacy and Markets: A Love Story

    After defining terms, Part I lays out the law and economics case against privacy, including its basis in economic thought more generally. Part II canvasses the literature responding to economic skepticism in the privacy law literature. Some scholars mount an insider critique, accepting the basic tenets of economics but suggesting that privacy actually increases efficiency in some contexts, or else noting that markets themselves will yield privacy under the right conditions. Others critique economic thinking from the outside: markets “unravel” privacy by penalizing it, degrade privacy by treating it as just another commodity, or otherwise interfere with the values or processes that privacy exists to preserve. Part III tells the love story from the Article’s title. I develop here a novel account of the relationship between privacy and markets, positioning the two concepts as sympathetic instead of antithetical. Neither insider nor outsider, the framework understands privacy as a crucial ingredient of the market mechanism, while simultaneously demonstrating how markets enable privacy to achieve its most important functions. It turns out opposites attract, just as Hollywood has been telling us all along. The final Part discusses what’s at stake. First, at the descriptive level, this Article sheds light on certain institutional puzzles, such as why the Federal Trade Commission (“FTC” or “the Commission”)—an agency dedicated to free markets and brimming with economists—would arise as the de facto privacy authority for the United States. The Article’s framework not only explains and perhaps justifies the FTC’s role in policing privacy, but also predicts that other agencies, such as the Consumer Financial Protection Bureau, will increasingly become involved in privacy enforcement. Second, at the level of discourse, the Article opens up new avenues of analytic inquiry previously obscured by mutual skepticism. In particular, the framework helps surface the role of privacy in avoiding market discrimination, for the simple reason that it hides many objects of potential bias. And third, normatively, this Article argues in support of laws and policies, such as conditioning access to political databases on non-commercial use, that try to keep personal information out of markets.

    After Over-Privileged Permissions: Using Technology and Design to Create Legal Compliance

    Consumers in the mobile ecosystem can putatively protect their privacy through application permissions. However, this requires mobile device owners to understand permissions and their privacy implications. Yet few consumers appreciate the nature of permissions within the mobile ecosystem, often failing to notice the permissions that change when an app is updated. Even more concerning is the lack of understanding of the wide use of third-party libraries, most of which are installed with automatic permissions, that is, permissions that must be granted for the application to function appropriately. Unsurprisingly, many of these third-party permissions violate consumers’ privacy expectations and thereby become “over-privileged” from the user’s perspective. Consequently, there is a disconnect between the privacy practices of the private sector and the expectations deemed appropriate by the public sector. Despite the growing attention given to privacy in the mobile ecosystem, legal literature has largely ignored the implications of mobile permissions. This article seeks to address this omission by analyzing the impacts of mobile permissions and the privacy harms experienced by consumers of mobile applications. The authors call for a review of industry self-regulation and the overreliance on simple notice and consent. Instead, the authors set out a plan for greater attention to socio-technical solutions, focusing on better privacy protections and technology embedded within the automatic permission-based application ecosystem.

    Privacy-Related Decision-Making in the Context of Wearable Use

    The widespread use of wearables for self-tracking activities despite potential privacy risks is an intriguing phenomenon. For firms, the data collected from individuals’ wearable use are highly valuable for generating in-depth customer insights, and firms accordingly have an increasing desire for these data. Despite the undisputed relevance of self-tracking activities in practice, information systems (IS) scholars know little about the perceived values of wearables that drive individuals’ use or the reasons these values prevail over the privacy risks. Against this background, our research set out to better understand why people use wearables despite privacy risks by investigating the perceived values that drive individuals’ use and disclosure of data, and the reasons these values prevail over the privacy risks of wearable use. Based on the concept of the privacy calculus and concepts from behavioural decision-making, we conducted in-depth interviews with 22 wearable users from Switzerland. As a result, we reveal eight values that individuals perceive through the use of wearable devices. Furthermore, we illustrate the low awareness of privacy risks and explain how reliance on prominent dimensions and heuristics influences individuals’ value-risk assessments.

    Unraveling Privacy: The Personal Prospectus and the Threat of a Full-Disclosure Future

