4 research outputs found

    Clinical Data Reuse or Secondary Use: Current Status and Potential Future Progress

    Objective: To perform a review of recent research in clinical data reuse or secondary use, and to envision future advances in this field. Methods: The review is based on a large literature search in MEDLINE (through PubMed), conference proceedings, and the ACM Digital Library, focusing only on research published between 2005 and early 2016. Each selected publication was reviewed by the authors, and a structured analysis and summarization of its content was developed. Results: The initial search produced 359 publications, a set that was then reduced through manual examination of abstracts and full publications. The following aspects of clinical data reuse are discussed: motivations and challenges, privacy and ethical concerns, data integration and interoperability, data models and terminologies, unstructured data reuse, structured data mining, clinical practice and research integration, and examples of clinical data reuse (quality measurement and learning healthcare systems). Conclusion: Reuse of clinical data is a fast-growing field recognized as essential to realize the potential for high-quality healthcare, improved healthcare management, reduced healthcare costs, population health management, and effective clinical research.

    Anonymizing datasets with demographics and diagnosis codes in the presence of utility constraints

    Publishing data about patients that contain both demographics and diagnosis codes is essential to perform large-scale, low-cost medical studies. However, preserving the privacy and utility of such data is challenging, because it requires: (i) guarding against identity disclosure (re-identification) attacks based on both demographics and diagnosis codes, (ii) ensuring that the anonymized data remain useful in intended analysis tasks, and (iii) minimizing the information loss incurred by anonymization, so as to preserve the utility of general analysis tasks that are difficult to determine before data publishing. Existing anonymization approaches are not suitable for this setting, because they cannot satisfy all three requirements. Therefore, in this work, we propose a new approach to deal with this problem. We enforce requirement (i) by applying (k, k^m)-anonymity, a privacy principle that prevents re-identification by attackers who know the demographics of a patient and up to m of their diagnosis codes, where k and m are tunable parameters. To capture requirement (ii), we propose the concept of utility constraints for both demographics and diagnosis codes. Utility constraints limit the amount of generalization and are specified by data owners (e.g., the healthcare institution that performs anonymization). We also capture requirement (iii) by employing well-established information loss measures for demographics and for diagnosis codes. To realize our approach, we develop an algorithm that enforces (k, k^m)-anonymity on a dataset containing both demographics and diagnosis codes, in a way that satisfies the specified utility constraints and incurs minimal information loss according to these measures. Our experiments with a large dataset containing more than 200,000 electronic health records show the effectiveness and efficiency of our algorithm.
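    As a rough illustration of the privacy principle used above, the Python sketch below brute-force checks whether a small dataset satisfies (k, k^m)-anonymity, i.e., whether an attacker who knows a record's demographics and up to m of its diagnosis codes always matches at least k records. The record layout, function name, and toy data are assumptions made for illustration; the paper's algorithm additionally performs generalization under utility constraints rather than merely checking the property.

    from itertools import combinations

    def is_k_km_anonymous(records, k, m):
        """Records are (demographics_tuple, diagnosis_code_set) pairs, with
        demographics already generalized. Return True if every attacker who
        knows a record's demographics and up to m of its diagnosis codes
        still matches at least k records."""
        for demo, codes in records:
            for r in range(m + 1):
                for combo in combinations(sorted(codes), r):
                    matches = sum(1 for d, c in records
                                  if d == demo and set(combo) <= c)
                    if matches < k:
                        return False
        return True

    # Toy example with two demographic groups, k = 2, m = 1.
    records = [
        (("30-40", "F"), {"401.9", "250.00"}),
        (("30-40", "F"), {"401.9", "250.00"}),
        (("40-50", "M"), {"296.30"}),
        (("40-50", "M"), {"296.30"}),
    ]
    print(is_k_km_anonymous(records, k=2, m=1))  # True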

    A Computational Framework for Exploring and Mitigating Privacy Risks in Image-Based Emotion Recognition

    Ambulatory devices and image-based IoT devices have permeated our everyday lives. Such technologies allow the continuous monitoring of individuals' behavioral signals and expressions in everyday life, affording us new insights into their emotional states and transitions and paving the way to novel well-being and healthcare applications. Yet the use of such technologies is met with strong skepticism because of privacy concerns: they deal with highly sensitive behavioral data, which regularly involve speech signals and facial images, and current image-based emotion recognition systems relying on deep learning techniques tend to preserve substantial information related to the identity of the user, which can be extracted or leaked and used against the user. In this thesis, we examine the interplay between emotion-specific and user identity-specific information in image-based emotion recognition systems. We further propose a user anonymization approach that preserves emotion-specific information but eliminates user-dependent information from the convolutional kernels of convolutional neural networks (CNNs), thereby reducing user re-identification risks. We formulate an iterative adversarial learning problem, implemented with a multitask CNN, that minimizes the emotion classification loss and maximizes the user identification loss. The proposed system is evaluated on two datasets, achieving moderate to high emotion recognition accuracy and poor user identity recognition accuracy, outperforming existing baseline approaches. Implications from this study can inform the design of privacy-aware behavioral recognition systems that preserve facets of human behavior while concealing the identity of the user, and can be used in various IoT-empowered applications related to health, well-being, and education.
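    The adversarial multitask training described above can be sketched in PyTorch roughly as follows: a shared convolutional trunk feeds both an emotion head and a user-identity head, and each step alternates between (a) fitting the identity head on frozen features and (b) updating the trunk and emotion head to minimize the emotion loss while maximizing the identity loss. The architecture, layer sizes, class counts, and the adversarial weight lam are assumptions for illustration, not the thesis's actual design or hyperparameters.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Trunk(nn.Module):
        """Shared CNN feature extractor (illustrative layer sizes)."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )

        def forward(self, x):
            return self.conv(x).flatten(1)   # (batch, 32) feature vector

    trunk = Trunk()
    emotion_head = nn.Linear(32, 7)          # assumed 7 emotion classes
    identity_head = nn.Linear(32, 50)        # assumed 50 enrolled users
    opt_main = torch.optim.Adam(
        list(trunk.parameters()) + list(emotion_head.parameters()), lr=1e-3)
    opt_id = torch.optim.Adam(identity_head.parameters(), lr=1e-3)
    lam = 0.5                                # assumed adversarial weight

    def train_step(images, emotion_labels, user_labels):
        # (a) The identity head learns to re-identify users from frozen features.
        with torch.no_grad():
            feats = trunk(images)
        id_loss = F.cross_entropy(identity_head(feats), user_labels)
        opt_id.zero_grad(); id_loss.backward(); opt_id.step()

        # (b) The trunk and emotion head keep emotion information while
        #     removing identity cues, i.e., maximizing the identity loss.
        feats = trunk(images)
        emo_loss = F.cross_entropy(emotion_head(feats), emotion_labels)
        adv_loss = F.cross_entropy(identity_head(feats), user_labels)
        opt_main.zero_grad()
        (emo_loss - lam * adv_loss).backward()
        opt_main.step()
        return emo_loss.item(), adv_loss.item()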

    Utility-aware anonymization of diagnosis codes

    The growing need for performing large-scale and low-cost biomedical studies has led organizations to promote the reuse of patient data. For instance, the National Institutes of Health in the US requires patient-specific data collected and analyzed in the context of Genome-Wide Association Studies (GWAS) to be deposited into a biorepository and broadly disseminated. While essential to comply with regulations, disseminating such data risks privacy breaches, because patients' genomic sequences can be linked to their identities through diagnosis codes. This work proposes a novel approach that prevents this type of data linkage by modifying diagnosis codes to limit the probability of associating a patient's identity with their genomic sequence. Our approach employs an effective algorithm that uses generalization and suppression of diagnosis codes to preserve privacy and takes into account the intended uses of the disseminated data to guarantee utility. We also present extensive experiments using several datasets derived from the Electronic Medical Record (EMR) system of the Vanderbilt University Medical Center, as well as a large-scale case study using the EMRs of 79,000 patients, which are linked to DNA contained in the Vanderbilt University biobank. Our results verify that our approach generates anonymized data that permit accurate biomedical analysis in tasks including case count studies and GWAS.
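    To make the two primitives concrete, the Python sketch below generalizes each diagnosis code to a broader category in a small, assumed hierarchy and then suppresses any generalized code still supported by fewer than k records. This is only a minimal illustration of generalization and suppression; the paper's utility-aware algorithm chooses these operations so that the intended analyses (e.g., case count studies and GWAS) remain accurate.

    from collections import Counter

    # Assumed toy hierarchy: ICD-9-style code -> broader three-digit category.
    HIERARCHY = {
        "250.00": "250", "250.01": "250",   # diabetes mellitus
        "401.1": "401", "401.9": "401",     # essential hypertension
        "296.30": "296",                    # episodic mood disorders
    }

    def anonymize(records, k):
        """Generalize each code to its parent category, then suppress any
        generalized code that still appears in fewer than k records."""
        generalized = [{HIERARCHY.get(c, c) for c in rec} for rec in records]
        support = Counter(code for rec in generalized for code in rec)
        return [{c for c in rec if support[c] >= k} for rec in generalized]

    records = [{"250.00", "401.9"}, {"250.01", "401.1"}, {"296.30"}]
    print(anonymize(records, k=2))
    # e.g. [{'250', '401'}, {'250', '401'}, set()] -- the rare code is suppressed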