Twin identification over viewpoint change: A deep convolutional neural network surpasses humans
Deep convolutional neural networks (DCNNs) have achieved human-level accuracy
in face identification (Phillips et al., 2018), though it is unclear how
accurately they discriminate highly similar faces. Here, humans and a DCNN
performed a challenging face-identity matching task that included identical
twins. Participants (N=87) viewed pairs of face images of three types:
same-identity pairs, general-imposter pairs (different identities from similar
demographic groups), and twin-imposter pairs (identical twin siblings). The
task was to determine whether the pairs showed the same person or different
people. Identity comparisons were tested in three viewpoint-disparity
conditions: frontal to frontal, frontal to 45-degree profile, and frontal to
90-degree profile. Accuracy for discriminating matched-identity pairs from
twin-imposter and general-imposter pairs was assessed in each viewpoint-disparity
condition. Humans were more accurate for general-imposter pairs than
twin-imposter pairs, and accuracy declined with increased viewpoint disparity
between the images in a pair. A DCNN trained for face identification (Ranjan et
al., 2018) was tested on the same image pairs presented to humans. Machine
performance mirrored the pattern of human accuracy, but with performance at or
above all humans in all but one condition. Human and machine similarity scores
were compared across all image-pair types. This item-level analysis showed that
human and machine similarity ratings correlated significantly in six of nine
image-pair types (r = 0.38 to 0.63), suggesting general accord between
the perception of face similarity by humans and the DCNN. These findings also
contribute to our understanding of DCNN performance for discriminating
high-resemblance faces, demonstrate that the DCNN performs at or above human
level, and suggest a degree of parity between the features used by humans and
the DCNN.