37 research outputs found

    Comparison of clinical geneticist and computer visual attention in assessing genetic conditions

    Artificial intelligence (AI) for facial diagnostics is increasingly used in the genetics clinic to evaluate patients with potential genetic conditions. Current approaches focus on one type of AI called deep learning (DL). While DL-based facial diagnostic platforms have a high accuracy rate for many conditions, less is understood about how this technology assesses and classifies (categorizes) images, and how this compares to humans. To compare human and computer attention, we performed eye-tracking analyses of clinical geneticists (n = 22) and non-clinicians (n = 22) who viewed images of people with 10 different genetic conditions, as well as images of unaffected individuals. We calculated the Intersection-over-Union (IoU) and Kullback–Leibler divergence (KL) to compare the visual attention of the two participant groups, and then compared the clinician group against the saliency maps of our deep learning classifier. We found that human visual attention differs greatly from the DL model's saliency results. Averaged over all test images, the IoU and KL metrics for the successful (accurate) clinicians' visual attention versus the saliency maps were 0.15 and 11.15, respectively. Individuals also tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians (IoU and KL for clinicians versus non-clinicians were 0.47 and 2.73, respectively). This study shows that humans (at different levels of expertise) and a computer vision model examine images differently. Understanding these differences can improve the design and use of AI tools and lead to more meaningful interactions between clinicians and AI technologies.
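
    The comparison above rests on two overlap metrics. Below is a minimal sketch (not the authors' code) of how IoU and KL divergence might be computed for two attention maps on a shared pixel grid; the binarization threshold and the normalization scheme are assumptions, not the paper's exact procedure.

        import numpy as np

        def iou(map_a, map_b, threshold=0.5):
            """Intersection-over-Union of the binarized attention regions."""
            a = map_a >= threshold * map_a.max()
            b = map_b >= threshold * map_b.max()
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 0.0

        def kl_divergence(map_p, map_q, eps=1e-10):
            """D_KL(P || Q) after normalizing each map into a distribution."""
            p = map_p.ravel().astype(float) + eps   # eps avoids log(0)
            q = map_q.ravel().astype(float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)))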

    GestaltMatcher Database - A global reference for facial phenotypic variability in rare human diseases

    The most important factor complicating the work of dysmorphologists is the significant phenotypic variability of the human face. Next-Generation Phenotyping (NGP) tools that assist clinicians in recognizing characteristic syndromic patterns are particularly challenged when confronted with patients from populations different from their training data. To that end, we systematically analyzed the impact of genetic ancestry on facial dysmorphism. For that purpose, we established the GestaltMatcher Database (GMDB) as a reference dataset of medical images of patients with rare genetic disorders from around the world. We collected 10,980 frontal facial images - more than a quarter previously unpublished - from 8,346 patients, representing 581 rare disorders. Although the predominant ancestry is still European (67%), data from underrepresented populations has increased considerably through global collaborations (19% Asian and 7% African), including previously unpublished reports for more than 40% of the African patients. The NGP analysis on this diverse dataset revealed characteristic performance differences depending on the genetic relatedness of the training and test sets. For clinical use of NGP, incorporating non-European patients profoundly enhanced GestaltMatcher performance: the top-5 accuracy rate increased by 11.29%. Importantly, this improvement in delineating the correct disorder from a facial portrait was achieved without decreasing performance on European patients. By design, GMDB complies with the FAIR principles, rendering the curated medical data findable, accessible, interoperable, and reusable, so GMDB can also serve as data for training and benchmarking. In summary, our study of facial dysmorphism in a global sample revealed considerable cross-ancestral phenotypic variability that confounds NGP and should be counteracted by international efforts to increase data diversity. GMDB will serve as a vital reference database for clinicians and a transparent training set for advancing NGP technology.
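
    For context, the reported top-5 accuracy is the fraction of patients whose true disorder appears among the classifier's five highest-ranked suggestions. A minimal sketch, with illustrative names and data (not GMDB code):

        def top_k_accuracy(ranked_predictions, true_labels, k=5):
            """ranked_predictions: per-patient lists of disorder labels, best first."""
            hits = sum(truth in ranked[:k]
                       for ranked, truth in zip(ranked_predictions, true_labels))
            return hits / len(true_labels)

        # e.g. top_k_accuracy([["CdLS", "BWS", "PWS"]], ["BWS"], k=5) -> 1.0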

    S2 Fig -

    (a) Eye gaze heat map of the participants overlaid on a segmented version of the original BWS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) on the saliency maps so that the area covered is approximately the same as with the low and high noise-threshold, respectively. (TIF)
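
    The coverage-matching step in these captions (choosing a saliency coverage-threshold so the covered area matches the noise-thresholded heat map) can be sketched as follows; the arrays and threshold value are stand-ins, not data from the study.

        import numpy as np

        rng = np.random.default_rng(0)
        heat_map = rng.random((256, 256))   # stand-in for an eye-gaze heat map
        saliency = rng.random((256, 256))   # stand-in for a classifier saliency map
        noise_threshold = 0.8               # stand-in for the low or high noise-threshold

        coverage = (heat_map >= noise_threshold).mean()  # fraction of pixels the heat map keeps
        t = np.quantile(saliency, 1.0 - coverage)        # coverage-threshold with ~equal area
        matched_region = saliency >= t
        print(coverage, matched_region.mean())           # approximately equal by construction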

    S7 Fig -

    (a) Eye gaze heat map of the participants overlaid on a segmented version of the original PWS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) on the saliency maps so that the area covered is approximately the same as with the low and high noise-threshold, respectively. (TIF)

    S8 Fig -

    (a) Eye gaze heat map of the participants overlaid on a segmented version of the original RSTS1 image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) on the saliency maps so that the area covered is approximately the same as with the low and high noise-threshold, respectively. (TIF)

    S27 Fig -

    We take the average of all visual attention heat maps across all test images for the clinician (left) and non-clinician (right) groups. We observed that most of the visual interest aligns with the eye, nose, and mouth areas. To account for this typical human behavior when viewing a face, conditioned on group expertise (clinician or non-clinician), we subtracted these common average areas from each individual heat map used in our analyses. This corrects for the group's shared viewing pattern without causing us to ignore these areas of common facial attention. (TIF)
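
    A minimal sketch of this baseline-subtraction step, assuming the heat maps for one expertise group are stacked into a single array; clipping negative values to zero is our assumption about how the subtraction is applied, not a detail stated in the caption.

        import numpy as np

        def subtract_common_signal(heat_maps):
            """heat_maps: float array of shape (n_participants, height, width)."""
            baseline = heat_maps.mean(axis=0)     # group-average viewing pattern
            corrected = heat_maps - baseline      # remove attention common to the group
            return np.clip(corrected, 0.0, None)  # keep only above-baseline interest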

    S13 Fig -

    Eye gaze heat map of the participants overlaid on the unaffected images. For each set, the leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and a higher threshold that more clearly indicates facial regions with high visual attention (right). (TIF)

    Example image illustrating the preprocessing step of visual heat maps for subsequent analyses.

    (a) Original image of an individual with Down syndrome (DS). (b) For this DS image, conditioned on the group of successful clinicians, we average the default attention heat maps from the eye-tracking experiments. (c) We remove the common visual signals (see S27 Fig). (d) Finally, we smooth the image with cv2.boxFilter and increase the color intensity. Compared to (b), in (d) we can better observe the high visual interest in a larger orbital region (not just the center of the eyes), as well as attention to the ears, which, though barely visible in this image, can be notable features on physical examination for this condition. The image is freely available for reuse without restriction, courtesy: National Human Genome Research Institute (www.genome.gov).
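
    Step (d) can be sketched as below (step (c) is sketched after S27 Fig); the kernel size and intensity gain are illustrative choices, not values reported for the study.

        import cv2
        import numpy as np

        heat = np.random.default_rng(0).random((256, 256)).astype(np.float32)  # stand-in map

        smoothed = cv2.boxFilter(heat, -1, (15, 15))  # box-filter smoothing, same bit depth
        boosted = np.clip(smoothed * 2.0, 0.0, 1.0)   # increase color intensity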

    S3 Fig -

    (a) Eye gaze heat map of the participants overlaid on a segmented version of the original CdLS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) on the saliency maps so that the area covered is approximately the same as with the low and high noise-threshold, respectively. (TIF)

    S16 Fig -

    We defined the areas of interest (AOIs) specific to the HPO-annotated features for the CdLS image (left image). A segmented version of the original image is shown, and the AOIs drawn on the original image may not perfectly match the segmented version (see S1 Text for original image sources). Boxplots compare duration-of-fixation and time-to-first-whole-fixation. (TIF)
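
    A minimal sketch of the two AOI metrics behind the boxplots, assuming fixations arrive as (x, y, onset_ms, duration_ms) tuples and an AOI is an axis-aligned box; the names and the exact definitions (total dwell time, onset of the first fixation inside the AOI) are our assumptions.

        def aoi_metrics(fixations, aoi):
            """Return (duration-of-fixation, time-to-first-fixation) for one AOI."""
            x0, y0, x1, y1 = aoi
            inside = [f for f in fixations if x0 <= f[0] <= x1 and y0 <= f[1] <= y1]
            dwell = sum(f[3] for f in inside)                 # total fixation duration (ms)
            first = min((f[2] for f in inside), default=None) # onset of first fixation (ms)
            return dwell, first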