
    Protecting Patient Privacy: Strategies for Regulating Electronic Health Records Exchange

    The report offers policymakers 10 recommendations to protect patient privacy as New York State develops a centralized system for sharing electronic medical records. Those recommendations include:
    • Require that the electronic systems employed by HIEs have the capability to sort and segregate medical information in order to comply with the privacy protections guaranteed by New York and federal law. Presently, they do not.
    • Offer patients the right to opt out of the system altogether. Currently, people's records can be uploaded to the system without their consent.
    • Require that patient consent forms offer clear information-sharing options. The forms should give patients three options: to opt in and allow providers access to their electronic medical records, to opt out except in the event of a medical emergency, or to opt out altogether.
    • Prohibit and sanction the misuse of medical information. New York must protect patients from potential bad actors--that small minority of providers who may abuse information out of fear, prejudice, or malice.
    • Prohibit the health information-sharing networks from selling data. The State Legislature should pass legislation prohibiting the networks from selling patients' private health information.

    Supporting Regularized Logistic Regression Privately and Efficiently

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, the social sciences, information technology, and other fields. These domains often involve data about human subjects that fall under strict privacy regulations. Increasing concerns over data privacy make it more and more difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a machine learning model widely used across disciplines that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as the research consortia or networks widely seen in genetics, epidemiology, and the social sciences. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluation on several studies validated the privacy guarantees, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications across disciplines, including genetic and biomedical studies, smart grid, and network analysis.
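    For readers unfamiliar with the model being protected, the sketch below shows plain L2-regularized logistic regression fit by gradient descent in NumPy. It illustrates only the underlying model named in the abstract, not the paper's privacy-preserving, cryptography-based protocol; the data, hyperparameters, and function names are illustrative assumptions.

```python
# Minimal sketch: plain L2-regularized logistic regression via gradient descent.
# This is NOT the paper's privacy-preserving protocol, only the underlying model
# it protects. Data, hyperparameters, and names below are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_l2_logreg(X, y, lam=0.1, lr=0.1, n_iter=500):
    """Minimize mean log-loss + (lam/2)*||w||^2 (bias term left unpenalized)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)                  # predicted probabilities
        grad_w = X.T @ (p - y) / n + lam * w    # gradient of regularized loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
    y = (sigmoid(X @ true_w) > rng.uniform(size=200)).astype(float)
    w, b = fit_l2_logreg(X, y)
    print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

    In the multi-institution setting the paper targets, each site would hold its own (X, y) locally; the contribution of the work is computing such a fit collaboratively without exposing individual-level or summary data, which the plain version above does not attempt.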

    Handling Cases of Willful Exposure Through HIV Partner Counseling and Referral Services

    Cases of willful exposure reveal the existing and future risks to the public's health (especially to women) posed by individuals who willfully expose others to HIV through unsafe sexual or needle-sharing behaviors. In response to a documented case of willful exposure, a PCRS counselor or other public health official may, in his or her professional judgment, decide to act to avert a legitimate public health threat to known or unknown persons in the community. Yet handling such cases raises difficult issues in law, ethics, and public health practice. Public health authorities may be unable or ill-equipped to successfully control risks of this type for several reasons: (1) they may lack sufficient resources to properly investigate these cases; (2) they may lack knowledge of or jurisdiction over the individual who willfully exposes others to HIV once his behaviors extend into other communities; and (3) they are bound to maintain the confidentiality of sensitive information they obtain from PCRS. How do health care workers and public health departments balance the duty to maintain the privacy of public health information related to PCRS against their obligation to fulfill a partner's right to know of their exposure to infection? What are the legal powers and duties of public health departments to protect the health and safety of individuals as part of their mission to protect the public health? What is the role of the criminal law concerning persons who may intentionally or knowingly attempt to infect others with HIV or other communicable diseases?

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier

    As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of 'grey data' about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research data and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA and FERPA or definitions of personally identifiable information (PII). Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This paper explores the competing values inherent in data stewardship and makes recommendations for practice, drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

    Designing the Health-related Internet of Things: Ethical Principles and Guidelines

    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, it also raises a host of ethical concerns that must be addressed. The inherent sensitivity of the health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task in retaining control over their data due to the scale, scope, and complexity of the systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for the ethical design of H-IoT devices and data protocols.