
    Preventing facial recognition when rendering MR images of the head in three dimensions

    In the United States, it is not permitted to make public any patient-specific information without the patient's consent. This ruling has led to difficulty for those interested in sharing three-dimensional (3D) images of the head and brain, since a patient's face might be recognized from a 3D rendering of the skin surface. Approaches employed to date have included brain stripping and total removal of the face anterior to a cut plane, each of which loses potentially important anatomical information about the skull surface, air sinuses, and orbits. This paper describes a new approach that involves a) definition of a plane anterior to which the face lies, and b) an adjustable level of deformation of the skin surface anterior to that plane. On the basis of a user performance study using forced choices, we conclude that approximately 30% of individuals are at risk of recognition from 3D renderings of unaltered images and that truncation of the face below the level of the nose does not preclude facial recognition. Removal of the face anterior to a cut plane may interfere with accurate registration and may delete important anatomical information. Our new method alters little of the underlying anatomy and does not prevent effective registration into a common coordinate system. Although the methods presented here were not fully effective (one subject was consistently recognized under the forced-choice study design even at the maximum deformation level employed), this paper may point the way toward a solution to a difficult problem that has received little attention in the literature.
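    The two-step scheme the abstract describes, defining a plane anterior to the face and applying an adjustable deformation only to voxels anterior to it, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the function name, the axis convention (larger y-index = more anterior), and the use of multiplicative random noise as the deformation are all assumptions for the sake of the example.

    ```python
    import numpy as np

    def deface_volume(volume, plane_y, level, rng=None):
        """Perturb voxels anterior to a coronal plane (illustrative sketch).

        volume  : 3D array indexed (x, y, z); larger y assumed more anterior.
        plane_y : y-index of the coronal plane; voxels with y >= plane_y are deformed.
        level   : adjustable deformation strength in [0, 1]; 0 leaves the volume unchanged.
        """
        rng = np.random.default_rng(rng)
        out = volume.astype(float).copy()
        anterior = out[:, plane_y:, :]
        # Multiplicative noise distorts the rendered skin surface anterior to
        # the plane while leaving everything posterior to it untouched, so
        # registration landmarks in the rest of the volume are preserved.
        noise = 1.0 + level * (rng.random(anterior.shape) - 0.5)
        out[:, plane_y:, :] = anterior * noise
        return out

    vol = np.ones((8, 8, 8))
    defaced = deface_volume(vol, plane_y=5, level=0.6, rng=0)
    ```

    Because only voxels anterior to `plane_y` are modified, the posterior anatomy stays bit-identical, which is consistent with the abstract's claim that the method alters little of the underlying anatomy and does not prevent registration into a common coordinate system.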

    Pseudonymization of neuroimages and data protection: Increasing access to data while retaining scientific utility

    For a number of years, facial feature removal techniques such as ‘defacing’, ‘skull stripping’, and ‘face masking/blurring’ were considered adequate privacy-preserving tools for openly sharing brain images. Scientifically, these measures were already a compromise between data protection requirements and the research impact of such data. Now, recent advances in machine learning and deep learning that indicate an increased possibility of re-identifiability from defaced neuroimages have heightened the tension between open science and data protection requirements. Researchers are left pondering how best to comply with the different jurisdictional requirements of anonymization, pseudonymization, or de-identification without compromising the scientific utility of neuroimages even further. In this paper, we present perspectives intended to clarify the meaning and scope of these concepts and highlight the privacy limitations of available pseudonymization and de-identification techniques. We also discuss possible technical and organizational measures and safeguards that can facilitate sharing of pseudonymized neuroimages without causing further reductions to the utility of the data.