
    Session 8: Statistical Discrimination Methods for Forensic Source Interpretation of Aluminum Powders in Explosives

    Aluminum (Al) powder is often used as a fuel in explosive devices; individuals attempting to make illegal improvised explosive devices therefore often obtain it from legitimate commercial products or make it themselves from readily available Al starting materials. Characterizing Al powder and differentiating among its sources has become increasingly important for investigative and intelligence purposes. Previous research modeled the distributions of micromorphometric features of Al powder particles within a subsample to support Al source discrimination. Since then, additional powder samples from a variety of source types have been obtained and analyzed, providing a more comprehensive dataset for applying two statistical methods for the interpretation and discrimination of source. Here, we compare two statistical techniques: one using linear discriminant analysis (LDA), and the other using a modification of the method described in ASTM E2927-16e1 and E2330-19. The LDA method yields an Al source classification for each questioned sample. Our modification of the ASTM method, by contrast, uses an interval-based match criterion to associate or exclude each of the known sources as the actual source of a trace. Although the outcomes of these two statistical methods are fundamentally different, their performance on the closed-set identification-of-source problem is compared. Additionally, the modified ASTM method will be adapted to provide a vector of scores in lieu of the binary decision, as a first step toward a score-based likelihood ratio for interpreting Al powder micromorphometric measurement data.
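
    A minimal sketch of the LDA classification step described above, assuming each subsample has already been reduced to a fixed-length vector of micromorphometric summary features; the feature dimensions, source counts, and data below are placeholders, not the study's measurements:

        # Minimal sketch: closed-set source classification of Al powder subsamples
        # with linear discriminant analysis (synthetic placeholder data).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Hypothetical training set: rows are subsamples, columns are summary
        # statistics of particle micromorphometry (e.g., mean/SD of size and shape).
        n_sources, n_per_source, n_features = 5, 20, 4
        X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_source, n_features))
                       for k in range(n_sources)])
        y = np.repeat(np.arange(n_sources), n_per_source)

        lda = LinearDiscriminantAnalysis()
        print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())

        # Classify a questioned subsample against the known sources (closed set).
        lda.fit(X, y)
        questioned = rng.normal(loc=2, scale=1.0, size=(1, n_features))
        print("Predicted source:", lda.predict(questioned)[0])
        print("Posterior probabilities:", lda.predict_proba(questioned).round(3))

    The posterior probabilities illustrate how a per-source score vector (rather than a single binary decision) could feed a score-based likelihood ratio, in the spirit of the adaptation mentioned above.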

    Development of Strategies for Estimating a Response Surface to Characterize a Black-box Algorithm in Terms of a White-box Algorithm

    In forensic identification-of-source problems, the complex black-box algorithms used to assign evidential value increasingly lack explainability. Generally speaking, black-box algorithms are designed with prediction in mind. Although the information fed into the algorithm and the features used to make the prediction are often known to the user, the complexity of the algorithm limits the end user's ability to understand how the input features are used. More transparent algorithms (sometimes referred to as "white-box"), on the other hand, are typically less accurate even though they provide direct information on how the input object is used to predict a class or outcome. In this work, we begin developing a response surface that characterizes the output of a black-box algorithm in terms of the output of a white-box algorithm. Using a set of handwriting samples, we apply a complex black-box algorithm across multiple features to produce one set of pairwise scores, and a simple, transparent algorithm that uses individual features to produce another set of pairwise scores. A generalized least squares method is used to test the null hypothesis that there is no relationship between the two types of scores. The outcomes of these significance tests help determine which of the individual feature scores influence the black-box scores.
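
    A minimal sketch of the significance test described above, regressing black-box pairwise scores on single-feature white-box scores with generalized least squares; the scores, number of features, and error covariance structure are placeholders, not the study's data or model:

        # Minimal sketch: GLS regression of black-box scores on white-box
        # feature scores (synthetic placeholder scores).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_pairs = 200

        # Hypothetical white-box scores: one column per individual handwriting feature.
        white_box = rng.uniform(0, 1, size=(n_pairs, 3))
        # Hypothetical black-box scores, here driven by the first feature only.
        black_box = 2.0 * white_box[:, 0] + rng.normal(0, 0.5, size=n_pairs)

        X = sm.add_constant(white_box)
        # The error covariance is unknown in practice; identity is used here
        # purely for illustration (which reduces GLS to ordinary least squares).
        sigma = np.eye(n_pairs)
        fit = sm.GLS(black_box, X, sigma=sigma).fit()

        # The coefficient t-tests address H0: no relationship between a given
        # white-box feature score and the black-box score.
        print(fit.summary())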

    Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners

    The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of a total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as "difficult" than for "easy" or "moderate" comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available sufficient to reach a conclusion; this variability was concentrated on specific image pairs, such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases.
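
    A minimal sketch of how repeatability of categorical decisions might be tabulated from paired test-retest responses, overall and by reported difficulty; the column names and rows are placeholders, not the study's data files:

        # Minimal sketch: percent of decisions repeated on retest
        # (placeholder data, not the study's).
        import pandas as pd

        df = pd.DataFrame({
            "decision_t1": ["individualization", "exclusion", "inconclusive", "exclusion"],
            "decision_t2": ["individualization", "inconclusive", "inconclusive", "exclusion"],
            "difficulty":  ["easy", "difficult", "moderate", "easy"],
        })

        df["repeated"] = df["decision_t1"] == df["decision_t2"]
        print("Overall repeatability:", df["repeated"].mean())
        print(df.groupby("difficulty")["repeated"].mean())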

    Data on the interexaminer variation of minutia markup on latent fingerprints

    The data in this article support the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. They describe the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data were collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent. Keywords: Biometrics, Latent fingerprint examination, Fingermark, ACE-V, Repeatability, Reproducibility.
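
    A minimal sketch of how markup data of this kind might be summarized to obtain per-latent examiner counts and per-examiner minutia counts; the long-format layout and column names are assumptions for illustration, not the published data schema:

        # Minimal sketch: summarize interexaminer minutia markup
        # (assumed long-format table, one row per marked minutia).
        import pandas as pd

        markup = pd.DataFrame({
            "examiner_id": [1, 1, 2, 2, 2, 3],
            "image_pair":  ["A", "A", "A", "B", "B", "B"],
            "minutia_x":   [10, 52, 11, 30, 31, 29],
            "minutia_y":   [40, 18, 42, 60, 61, 58],
        })

        minutiae_per_examiner = (markup
                                 .groupby(["image_pair", "examiner_id"])
                                 .size()
                                 .rename("n_minutiae"))
        examiners_per_latent = minutiae_per_examiner.groupby(level="image_pair").size()
        print(minutiae_per_examiner)
        print("Examiners per latent:")
        print(examiners_per_latent)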

    Characterizing missed identifications and errors in latent fingerprint comparisons using eye-tracking data.

    Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities, and used these metrics to characterize two potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than comparisons resulting in identifications. We also use the derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained by eye-gaze behavior, and the extent to which they depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.
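
    A minimal sketch of the kinds of proxy metrics named above (comparison time, regions visited, switches between the compared images) computed from a fixation log; the log format, region labels, and the switch-count proxy are assumptions for illustration:

        # Minimal sketch: derive simple gaze metrics per trial from a fixation log
        # (assumed columns; not the study's actual eye-tracking export).
        import pandas as pd

        fixations = pd.DataFrame({
            "trial":       [1, 1, 1, 1, 1],
            "image":       ["latent", "exemplar", "latent", "exemplar", "latent"],
            "region":      ["core", "core", "delta", "delta", "core"],
            "duration_ms": [220, 310, 180, 260, 200],
        })

        per_trial = fixations.groupby("trial").agg(
            comparison_time_ms=("duration_ms", "sum"),
            regions_visited=("region", "nunique"),
        )
        # A crude proxy for attempted correspondences: switches between images.
        per_trial["image_switches"] = fixations.groupby("trial")["image"].apply(
            lambda s: (s != s.shift()).sum() - 1)
        print(per_trial)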

    Gaze behavior and cognitive states during fingerprint target group localization

    Background: The comparison of fingerprints by expert latent print examiners generally involves repeating a process in which the examiner selects a small area of distinctive features in one print (a target group) and searches for it in the other print. To isolate this key element of fingerprint comparison, we use eye-tracking data to describe the behavior of latent fingerprint examiners on a narrowly defined "find the target" task. Participants were shown a fingerprint image with a target group indicated and asked to find the corresponding area of ridge detail in a second impression of the same finger and state when they found the target location. Target groups were presented on latent and plain exemplar fingerprint images, and as small areas cropped from the plain exemplars, to assess how image quality and the lack of surrounding visual context affected task performance and eye behavior. One hundred and seventeen participants completed a total of 675 trials. Results: The presence or absence of context notably affected the areas viewed and time spent in comparison; differences between latent and plain exemplar tasks were much smaller. In virtually all trials, examiners repeatedly looked back and forth between the images, suggesting constraints on the capacity of visual working memory. On most trials where context was provided, examiners looked immediately at the corresponding location: with context, median time to find the corresponding location was less than 0.3 s (second fixation); without context, median time was 1.9 s (five fixations). A few trials resulted in errors in which the examiner did not find the correct target location. Basic gaze measures of overt behaviors, such as speed, areas visited, and back-and-forth behavior, were used in conjunction with the known target area to infer the underlying cognitive state of the examiner. Conclusions: Visual context has a significant effect on the eye behavior of latent print examiners. Localization errors suggest how errors may occur in real comparisons: examiners sometimes compare an incorrect but similar target group and do not continue to search for a better candidate target group. The analytic methods and predictive models developed here can be used to describe the more complex behavior involved in actual fingerprint comparisons.
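
    A minimal sketch of computing time and number of fixations to the first fixation inside the target region for a find-the-target trial; the rectangular area-of-interest test and the fixation layout are assumptions for illustration:

        # Minimal sketch: fixations and time needed to reach the target area of
        # interest in one trial (placeholder fixation sequence and AOI).
        import pandas as pd

        fixations = pd.DataFrame({
            "onset_ms": [0, 180, 420, 700],
            "x":        [100, 340, 355, 360],
            "y":        [120, 200, 210, 205],
        })
        target_aoi = {"x_min": 330, "x_max": 380, "y_min": 190, "y_max": 220}

        in_target = (fixations["x"].between(target_aoi["x_min"], target_aoi["x_max"]) &
                     fixations["y"].between(target_aoi["y_min"], target_aoi["y_max"]))
        first_hit = in_target.idxmax() if in_target.any() else None
        if first_hit is not None:
            print("Fixations to target:", first_hit + 1)
            print("Time to target (ms):", fixations.loc[first_hit, "onset_ms"])
        else:
            print("Target never fixated (possible localization error)")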

    Measuring What Latent Fingerprint Examiners Consider Sufficient Information for Individualization Determinations

    Latent print examiners use their expertise to determine whether the information present in a comparison of two fingerprints (or palmprints) is sufficient to conclude that the prints were from the same source (individualization). When fingerprint evidence is presented in court, it is the examiner's determination—not an objective metric—that is presented. This study was designed to ascertain the factors that explain examiners' determinations of sufficiency for individualization. Volunteer latent print examiners (n = 170) were each assigned 22 pairs of latent and exemplar prints for examination, and annotated features, correspondence of features, and clarity. The 320 image pairs were selected specifically to control clarity and quantity of features. The predominant factor differentiating annotations associated with individualization and inconclusive determinations is the count of corresponding minutiae; other factors such as clarity provided minimal additional discriminative value. Examiners' counts of corresponding minutiae were strongly associated with their own determinations; however, due to substantial variation of both annotations and determinations among examiners, one examiner's annotation and determination on a given comparison is a relatively weak predictor of whether another examiner would individualize. The extensive variability in annotations also means that we must treat any individual examiner's minutia counts as interpretations of the (unknowable) information content of the prints: saying “the prints had N corresponding minutiae marked” is not the same as “the prints had N corresponding minutiae.” More consistency in annotations, which could be achieved through standardization and training, should lead to process improvements and provide greater transparency in casework.
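
    A minimal sketch of the kind of relationship implied above: a logistic regression relating an individualization-versus-inconclusive determination to the count of corresponding minutiae marked; the data-generating model and values are synthetic placeholders, not the study's annotations:

        # Minimal sketch: logistic regression of individualization determinations
        # on the count of corresponding minutiae (synthetic placeholder data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 300
        corresponding_minutiae = rng.integers(0, 20, size=n)
        # Hypothetical generating model: more corresponding minutiae -> more
        # likely to individualize rather than report inconclusive.
        p = 1 / (1 + np.exp(-(corresponding_minutiae - 8) * 0.6))
        individualized = rng.binomial(1, p)

        X = sm.add_constant(corresponding_minutiae.astype(float))
        fit = sm.Logit(individualized, X).fit(disp=False)
        print(fit.params)                  # intercept and slope on minutia count
        print(fit.predict([[1.0, 12.0]]))  # P(individualization) at 12 minutiae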