
    Privacy and Health Information Technology

    The increased use of health information technology (health IT) is a common element of nearly every health reform proposal because it has the potential to decrease costs, improve health outcomes, coordinate care, and improve public health. However, it raises concerns about the security and privacy of medical information. This paper examines some of the “gaps” in privacy protections that arise out of the current federal health privacy standard, the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, the main federal law governing the use and disclosure of health information. Additionally, it puts forth a range of possible solutions, accompanied by arguments for and against each. The solutions provide some options for strengthening the current legal framework of privacy protections in order to build public trust in health IT and facilitate its use for health reform. The American Recovery and Reinvestment Act (ARRA), enacted in February 2009, includes a number of changes to HIPAA and its regulations, and those changes are clearly noted among the list of solutions (and ARRA is indicated in the Executive Summary and paper where the Act has a relevant provision).

    Legal Solutions in Health Reform: Privacy and Health Information Technology

    Identifies gaps in the federal health privacy standard and proposes options for strengthening the legal framework for privacy protections in order to build public trust in health information technology. Presents arguments for and against each option.

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy, since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
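The pedagogical route described above (learning a model's behaviour from outside rather than decomposing it) can be sketched as a local surrogate fit around a query point. Everything below, including the `black_box` function and the helper name, is an illustrative assumption, not code from the paper:

```python
import random

# Hypothetical opaque scoring function standing in for any black-box model.
def black_box(x):
    return x ** 2

def local_surrogate_slope(f, query, radius=0.5, n=200, seed=0):
    """Fit a one-feature linear surrogate to f around `query` by
    sampling perturbed inputs and running ordinary least squares.
    This is a pedagogical (model-agnostic) explanation: only the
    model's input-output behaviour is used, never its internals."""
    rng = random.Random(seed)
    xs = [query + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [f(x) for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var  # local sensitivity of the output to the input

slope = local_surrogate_slope(black_box, query=2.0)
```

Near the query x = 2 the fitted slope approximates the true local derivative (4 for this toy function), which is the kind of subject-centric, region-restricted answer the abstract contrasts with a global decomposition of the model.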

    Special Article: Ethics and Electronic Health Information Technology: Challenges for Evidence-Based Medicine and the Physician–Patient Relationship

    Objectives: The National Health Insurance Scheme (NHIS) and the National Identification Authority (NIA) pose ethical challenges to the physician-patient relationship due to interoperability. This paper explores (1) the national legislation on Electronic Health Information Technology (EHIT), (2) the ethics of information technology and public health, and (3) the effect on the physician-patient relationship. Method: This study consisted of a systematic literature and internet review of the legislation, information technology, the national health insurance program, and the physician-patient relationship. Result: The results show that (1) EHIT has eroded a large part of the confidentiality between physician and patient; (2) encroachment on privacy is an inevitable outcome of EHIT; (3) legislation on privacy and on the collection, storage, and uses of electronic health information is needed; and (4) there is a nexus between EHIT, the NHIS, the NIA, ethics, the physician-patient relationship, and privacy. Conclusion: The study highlights the lack of protection for the physician-patient relationship as medical practice transitions from the conventional to the modern, information-technology-driven domain. Keywords: Physician-patient Relationship, Legislation, Public Health, National Health Insurance Scheme, National Identification Authority, Electronic Health Information. Ghana Medical Journal, September 2011, Volume 45, Number

    The medical science DMZ: a network design pattern for data-intensive medical science

    Abstract: Objective We describe a detailed solution for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, or 100+ Gbps) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods High-end networking, packet-filter firewalls, network intrusion-detection systems. Results We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. Discussion The exponentially increasing amounts of “omics” data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research “Big Data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows.
Conclusion By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
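One building block of a Science DMZ design is replacing a stateful, deep-inspection firewall in the high-throughput path with a short, auditable list of stateless allow rules on the data transfer nodes. The sketch below illustrates that idea only; the subnets, ports, and names are assumptions for illustration, not taken from the paper's three implementations:

```python
import ipaddress

# Hypothetical ACL for a data transfer node (DTN) in a Medical Science
# DMZ. Each rule permits one (source subnet, destination port) pair;
# everything else is dropped by default.
ALLOW_RULES = [
    (ipaddress.ip_network("192.0.2.0/24"), 443),     # partner repository, TLS
    (ipaddress.ip_network("198.51.100.0/24"), 2811), # GridFTP control channel
]

def acl_decision(src_ip, dst_port):
    """Return 'accept' iff some allow rule matches; default-deny
    otherwise. Intrusion-detection monitoring of accepted flows, which
    the design also relies on, is out of scope for this sketch."""
    src = ipaddress.ip_address(src_ip)
    for subnet, port in ALLOW_RULES:
        if src in subnet and dst_port == port:
            return "accept"
    return "drop"
```

Because the rules are stateless and few, they can be evaluated at line rate on routers, which is what lets the architecture keep multi-gigabit flows out of the slower enterprise firewall while remaining auditable.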

    The Consent Myth: Improving Choice for Patients of the Future

    Consent has enjoyed a prominent position in the American privacy system since at least 1970, though historically, consent emerged from traditional notions of tort and contract. Largely because consent has an almost deferential power as a proxy for consumer choice, organizations increasingly use consent as a de facto standard for demonstrating privacy commitments. The Department of Health and Human Services and the Federal Trade Commission have integrated the concept of consent into health care, research, and general commercial activities. However, this de facto standard, while useful in some contexts, does not sufficiently promote individual patient interests within leading health technologies, including the Internet of Health Things and Artificial Intelligence. Despite consent’s prominence in United States law, this Article seeks to understand, more fully, consent’s role in modern health applications, then applies a philosophical-legal lens to clearly identify problems with consent in its current use. This Article identifies the principal issues with substituting consent for choice, the “consent myth,” a collection of five problems, then proposes principles for addressing these problems in contemporary health technologies.

    The Health of Patient Privacy: The Patient's Perspective on the HIPAA Protected Health Information

    Problem As healthcare entities continue to focus on HIPAA compliance, they must enforce policies that require patients to sign and express understanding of the organization’s privacy policies. It appears the patient’s perspective on healthcare privacy has not been considered within the HIPAA privacy ruling. Patients are healthcare consumers, yet little research has been done on assessing the individual consumer’s perspective on what Protected Health Information (PHI) is actually important to protect and from whom it is important to protect it. Method A quantitative survey was developed and distributed to the participants of the Carnegie group, an independent insurance firm in Chicago, Illinois. Inferential and descriptive statistics were used to analyze the differences and interactions among the participants based on 4 independent variables and 17 selected dependent variables. Results The analysis showed that of the 17 PHI indicators, only 5 of them were identified as being important to protect from healthcare providers. A One-Way Analysis of Variance was used to test for significant differences among the age and gender groups for each PHI indicator. Analysis of the data on age showed the desire for privacy each respondent gave, and the data showed significance for the age group 31-45. This group desired more privacy than any other group. The age group 18-30 scored the lowest on privacy concerns for each PHI. Gender differences showed males desire more privacy than females. The analysis on financial commitment given by the patient for each PHI showed no respondents placed a high dollar value on protecting the PHI indicators. Two-Way Analysis of Variance was used to determine the main effect and interaction effect of age and authority on access of health information. The findings showed that the more authority granted to a doctor, the more likely a participant was willing to give healthcare information. 
Conclusion Overall, patients put little value in protecting the PHI as defined by the HIPAA privacy ruling from healthcare providers and are not willing to pay for privacy protection. Patients practice transparency with healthcare providers for much of the PHI, and only 5 PHI indicators were considered important enough to limit access by healthcare providers.
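The One-Way Analysis of Variance used in the study reduces to comparing between-group and within-group mean squares. A minimal sketch with invented scores (not the survey's data) might look like this:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: ratio of the between-group
    mean square (df = k - 1) to the within-group mean square
    (df = n - k). Pure-Python sketch for illustration."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of values around their own group mean
    ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Illustrative privacy-concern scores for three hypothetical age bands
f_stat = one_way_anova_f([[1.0, 2.0], [2.0, 3.0], [7.0, 8.0]])
```

A large F (compared against the F distribution's critical value for the given degrees of freedom) indicates that at least one group mean differs significantly, which is how the study flagged the 31-45 age group's higher privacy desire.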

    Privacy Matters: An Introduction to Personal Information Protection

    https://egrove.olemiss.edu/aicpa_guides/1537/thumbnail.jp

    Big Data Ethics in Research

    This paper surveys the main problems faced by scientists working with Big Data sets, highlighting the main ethical issues and taking into account the legislation of the European Union. After a brief Introduction to Big Data, the Technology section presents specific research applications. The Philosophical Aspects section approaches the main philosophical issues, and the Legal Aspects section addresses specific ethical issues in the EU Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation, "GDPR"). The Ethics Issues section details the specific aspects of Big Data. After a brief section on Big Data research, the work closes with Conclusions on research ethics in working with Big Data. CONTENTS: Abstract; 1. Introduction (1.1 Definitions; 1.2 Big Data dimensions); 2. Technology (2.1 Applications; 2.1.1 In research); 3. Philosophical aspects; 4. Legal aspects (4.1 GDPR: stages of processing of personal data; principles of data processing; privacy policy and transparency; purposes of data processing; design and implicit confidentiality; the (legal) paradox of Big Data); 5. Ethical issues (ethics in research; awareness; consent; control; transparency; trust; ownership; surveillance and security; digital identity; tailored reality; de-identification; digital inequality; privacy); 6. Big Data research; Conclusions; Bibliography. DOI: 10.13140/RG.2.2.11054.4640
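De-identification, one of the ethics issues listed, is often implemented in practice as keyed pseudonymization. A minimal sketch, using a hypothetical identifier and key rather than anything from the paper:

```python
import hashlib
import hmac

def pseudonymize(identifier, secret_key):
    """Keyed-hash pseudonymization: the same identifier always maps to
    the same token under a given key, so records can still be linked,
    but recovering the identifier requires the secret key. A minimal
    sketch; real deployments also need key management and a
    re-identification risk assessment."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("patient-12345", b"hypothetical-secret")
```

Note that under the GDPR pseudonymized data generally remains personal data, because re-identification is possible for whoever holds the key; only properly anonymized data falls outside the Regulation's scope.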