
    Comparing human and automatic face recognition performance

    Face recognition technologies have seen dramatic improvements in performance over the past decade, and such systems are now widely used in security and commercial applications. Since recognizing faces is a task at which humans are understood to excel, it is natural to compare automatic face recognition (AFR) and human face recognition (HFR) in terms of biometric performance. This paper addresses that comparison by: 1) conducting verification tests on volunteers (HFR) and on commercial AFR systems, and 2) developing statistical methods to support comparison of the performance of different biometric systems. HFR was tested by presenting face-image pairs and asking subjects to classify them on the scale "Same," "Probably Same," "Not sure," "Probably Different," and "Different"; the same image pairs were presented to AFR systems, and the biometric match score was measured. To evaluate these results, two new statistical evaluation techniques are developed. The first is a new way to normalize match-score distributions, in which a normalized match score t̂ is calculated as a function of the angle of the [false match rate, false nonmatch rate] point in polar coordinates about some center. Using this normalization, we develop a second methodology to calculate an average detection error tradeoff (DET) curve, and we show that this method is equivalent to directly averaging DET data along each angle from the center. This procedure is then applied to compare the performance of the best AFR algorithms available to us in the years 1999, 2001, 2003, 2005, and 2006 against human scores. Results show that algorithms have improved dramatically over that time. In comparison to the performance of the best AFR system of 2006, 29.2% of human subjects performed better, while 37.5% performed worse.
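    The radial averaging of DET curves described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the choice of center (here (1, 1) in [FMR, FNMR] space), the threshold sweep, and the nearest-angle matching are all assumptions made for the sketch.

    ```python
    import numpy as np

    def det_curve(genuine, impostor, thresholds):
        """DET data: (FMR, FNMR) pairs as the decision threshold sweeps."""
        fmr = np.array([(impostor >= t).mean() for t in thresholds])
        fnmr = np.array([(genuine < t).mean() for t in thresholds])
        return fmr, fnmr

    def average_det(curves, center=(1.0, 1.0), n_angles=90):
        """Average several DET curves along rays from `center`:
        for each angle, average the radial distance of the nearest
        point on each curve, then convert back to (FMR, FNMR)."""
        angles = np.linspace(1e-3, np.pi / 2 - 1e-3, n_angles)
        avg = []
        for theta in angles:
            radii = []
            for fmr, fnmr in curves:
                # each DET point in polar coordinates about the center
                dx, dy = center[0] - fmr, center[1] - fnmr
                r = np.hypot(dx, dy)
                phi = np.arctan2(dy, dx)
                # take the curve point whose angle is closest to theta
                radii.append(r[np.argmin(np.abs(phi - theta))])
            r_bar = np.mean(radii)
            avg.append((center[0] - r_bar * np.cos(theta),
                        center[1] - r_bar * np.sin(theta)))
        return np.array(avg)
    ```

    Averaging along rays rather than at fixed thresholds avoids comparing systems whose raw match scores live on incommensurable scales, which is the motivation the abstract gives for the polar normalization.
    
    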

    Curvewise DET confidence regions and pointwise EER confidence intervals using radial sweep methodology

    One methodology for evaluating the matching performance of biometric authentication systems is the detection error tradeoff (DET) curve. The DET curve graphically illustrates the relationship between false rejects and false accepts as a threshold is varied across the genuine and impostor match-score distributions. This paper makes two contributions to the literature on matching-performance evaluation of biometric identification or bioauthentication systems. First, we create curvewise DET confidence regions using radial sweep methods. Second, we use the same methodology to create pointwise confidence intervals for the equal error rate (EER), the rate at which the false accept rate and the false reject rate are identical. We use resampling (bootstrap) methods to estimate the variability in both the DET curve and the EER. Our radial sweep is based on converting the false reject and false accept errors to polar coordinates. We apply these methods to data from three different biometric modalities and discuss the results.
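    The bootstrap idea behind the EER confidence interval can be sketched as below. This is a simplified percentile-bootstrap illustration under assumed synthetic score distributions; the paper's radial-sweep construction of curvewise DET regions is more involved than what is shown here.

    ```python
    import numpy as np

    def eer(genuine, impostor, thresholds):
        """Equal error rate: where FMR and FNMR cross as the
        threshold sweeps (midpoint at the closest pair)."""
        fmr = np.array([(impostor >= t).mean() for t in thresholds])
        fnmr = np.array([(genuine < t).mean() for t in thresholds])
        i = np.argmin(np.abs(fmr - fnmr))
        return (fmr[i] + fnmr[i]) / 2.0

    def bootstrap_eer_ci(genuine, impostor, n_boot=1000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for the EER: resample both score
        sets with replacement and recompute the EER each time."""
        rng = np.random.default_rng(seed)
        th = np.linspace(min(genuine.min(), impostor.min()),
                         max(genuine.max(), impostor.max()), 200)
        boots = [
            eer(rng.choice(genuine, size=len(genuine), replace=True),
                rng.choice(impostor, size=len(impostor), replace=True),
                th)
            for _ in range(n_boot)
        ]
        lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
        return lo, hi
    ```

    Resampling genuine and impostor scores independently mirrors the abstract's point that the variability of both error rates must be estimated to bound the DET curve and the EER.
    
    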