S7 Fig -
(a) Eye gaze heat map of the participants overlaid on a segmented version of the original PWS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and then a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) to the saliency maps so that the area covered is approximately the same as under the low and high noise-thresholds, respectively. (TIF)
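The low and high noise-thresholds used throughout these heat-map panels amount to zeroing out attention values below a chosen cutoff. A minimal NumPy sketch of quantile-based thresholding follows; the quantile values and the toy 4x4 map are illustrative assumptions, not the thresholds or data actually used in the study:

```python
import numpy as np

def threshold_heatmap(heatmap, quantile):
    """Zero out heat-map values below the given quantile.

    A low quantile removes possibly spurious visual interest;
    a high quantile keeps only regions of strong attention.
    """
    cutoff = np.quantile(heatmap, quantile)
    out = heatmap.copy()
    out[out < cutoff] = 0.0
    return out

# Toy example: a 4x4 attention map (values are illustrative only).
hm = np.array([[0.1, 0.2, 0.1, 0.0],
               [0.2, 0.9, 0.8, 0.1],
               [0.1, 0.8, 0.7, 0.1],
               [0.0, 0.1, 0.1, 0.0]])

low = threshold_heatmap(hm, 0.50)   # low noise-threshold
high = threshold_heatmap(hm, 0.90)  # high noise-threshold keeps fewer regions
```

The same quantile mechanism can serve as a coverage-threshold for saliency maps: pick the quantile so that the surviving area matches the area kept by the corresponding gaze-map threshold.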
S13 Fig -
Eye gaze heat map of the participants overlaid on the unaffected images. For each set, the leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and then a higher threshold that more clearly indicates facial regions with high visual attention (right). (TIF)
Example image illustrating the preprocessing step of visual heat maps for subsequent analyses.
(a) Original image of an individual with Down syndrome (DS). (b) For this DS image, conditioned on the group of successful clinicians, we average the default attention heat maps from the eye-tracking experiments. (c) We remove the common visual signals (see S27 Fig). (d) Finally, we smooth the image with cv2.boxFilter and increase the color intensity. Compared to (b), in (d) we can better observe the high visual interest in a larger orbital region (not just the center of the eyes), as well as attention to the ears, which, though barely visible in this image, can be notable features on physical examination for this condition. The image is freely available for reuse without restriction, courtesy: National Human Genome Research Institute (www.genome.gov).
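The preprocessing in panels (b)-(d) can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the authors' pipeline: a plain mean filter stands in for cv2.boxFilter, the common-signal map is assumed precomputed (as described in S27 Fig), and the gain factor is hypothetical:

```python
import numpy as np

def box_filter(img, k=3):
    """Mean (box) filter: a NumPy stand-in for cv2.boxFilter with a k x k kernel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def preprocess_heatmap(group_heatmaps, common_signal, gain=2.0):
    """Panels (b)-(d): average the group's heat maps, subtract the common
    visual signal, then smooth and boost intensity (clipped to [0, 1])."""
    avg = np.mean(group_heatmaps, axis=0)                 # (b) group average
    corrected = np.clip(avg - common_signal, 0.0, None)   # (c) remove common signal
    smoothed = box_filter(corrected, k=3)                 # (d) smoothing
    return np.clip(smoothed * gain, 0.0, 1.0)             # (d) intensity boost

# Toy example: three 8x8 heat maps and a flat common-signal map.
rng = np.random.default_rng(0)
maps = rng.uniform(0.0, 1.0, size=(3, 8, 8))
result = preprocess_heatmap(maps, common_signal=np.full((8, 8), 0.2))
```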
S2 Fig -
(a) Eye gaze heat map of the participants overlaid on a segmented version of the original BWS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and then a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) to the saliency maps so that the area covered is approximately the same as under the low and high noise-thresholds, respectively. (TIF)
S8 Fig -
(a) Eye gaze heat map of the participants overlaid on a segmented version of the original RSTS1 image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and then a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) to the saliency maps so that the area covered is approximately the same as under the low and high noise-thresholds, respectively. (TIF)
S27 Fig -
We take the average of all visual attention heat maps across all test images for the clinician (left) and non-clinician (right) groups. We observe that most of the visual interest aligns with the eye, nose, and mouth areas. To account for typical human behavior when viewing a face, conditioned on group expertise (clinician or non-clinician), we subtract this common average from each individual heat map used in our analyses. This correction removes baseline face-viewing behavior without causing us to ignore these areas of common facial attention. (TIF)
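The baseline construction and subtraction described here can be sketched as follows; this is a hypothetical NumPy fragment (the study's heat maps are images, represented here as small arrays), with negative values clipped to zero after removing the group baseline:

```python
import numpy as np

def group_baseline(all_heatmaps):
    """Average visual-attention heat map across all test images for one
    expertise group (clinicians or non-clinicians)."""
    return np.mean(all_heatmaps, axis=0)

def subtract_baseline(heatmap, baseline):
    """Remove common face-viewing behavior (eyes/nose/mouth bias) from an
    individual heat map; negative residuals are clipped to zero."""
    return np.clip(heatmap - baseline, 0.0, None)

# Toy example: five 6x6 heat maps for one group.
rng = np.random.default_rng(1)
group_maps = rng.uniform(0.0, 1.0, size=(5, 6, 6))
baseline = group_baseline(group_maps)
corrected = subtract_baseline(group_maps[0], baseline)
```

Conditioning the baseline on expertise group means each participant is compared only against viewers of the same type, so group differences are not erased by the correction.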
S18 Fig -
We define the AOIs specific to the HPO-annotated features for the KS image (left image). A segmented version of the original image is shown, and the AOIs drawn on the original image may not perfectly match the segmented version shown (see S1 Text for original image sources). Boxplots compare duration-of-fixation and time-to-first-whole-fixation. (TIF)
S9 Fig -
(a) Eye gaze heat map of the participants overlaid on a segmented version of the original WHS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and then a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) to the saliency maps so that the area covered is approximately the same as under the low and high noise-thresholds, respectively. (TIF)
S5 Fig -
(a) Eye gaze heat map of the participants overlaid on a segmented version of the original KS image (see S1 Text for original image sources). The leftmost image is the average heat map of the successful clinicians (top) and non-clinicians (bottom). We apply two different noise-thresholds: a low threshold to remove possibly spurious visual interest (middle), and then a high threshold that more clearly indicates facial regions with high visual attention (right). (b) Saliency maps of the image, with key regions that affect the classifier accuracy. We apply a low (left) and a high coverage-threshold (right) to the saliency maps so that the area covered is approximately the same as under the low and high noise-thresholds, respectively. (TIF)
S14 Fig -
We define the AOIs specific to the HPO-annotated features for the 22q11DS image (left image). A segmented version of the original image is shown, and the AOIs drawn on the original image may not perfectly match the segmented version shown (see S1 Text for original image sources). Boxplots compare duration-of-fixation and time-to-first-whole-fixation. (TIF)
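The two boxplot measures in these AOI figures, duration-of-fixation and time-to-first-fixation within an AOI, can be computed from fixation events roughly as below. The rectangular-AOI representation, field names, and toy values are illustrative assumptions, not the study's actual data format (the study's AOIs are drawn on faces and need not be rectangles):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # gaze coordinates in image pixels
    y: float
    start_ms: float   # onset time relative to stimulus presentation
    duration_ms: float

def aoi_metrics(fixations, aoi):
    """Two standard eye-tracking measures for a rectangular AOI
    (x0, y0, x1, y1): total duration-of-fixation inside the AOI, and
    time-to-first-fixation (None if the AOI was never fixated)."""
    x0, y0, x1, y1 = aoi
    inside = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    total = sum(f.duration_ms for f in inside)
    first = min((f.start_ms for f in inside), default=None)
    return total, first

# Toy fixation sequence (illustrative values only).
fx = [Fixation(50, 40, 0, 200),     # outside the AOI
      Fixation(120, 80, 250, 300),  # inside
      Fixation(130, 90, 600, 150)]  # inside
print(aoi_metrics(fx, (100, 60, 160, 110)))  # → (450, 250)
```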