    Text-dependent Forensic Voice Comparison: Likelihood Ratio Estimation with the Hidden Markov Model (HMM) and Gaussian Mixture Model – Universal Background Model (GMM-UBM) Approaches

    Among the more typical forensic voice comparison (FVC) approaches, the acoustic-phonetic statistical approach is suitable for text-dependent FVC, but it does not fully exploit the available time-varying information of speech in its modelling. The automatic approach, on the other hand, essentially deals with text-independent cases, which means temporal information is not explicitly incorporated in the modelling. Text-dependent likelihood ratio (LR)-based FVC studies, in particular those that adopt the automatic approach, are few. This preliminary LR-based FVC study compares two statistical models, the Hidden Markov Model (HMM) and the Gaussian Mixture Model (GMM), for the calculation of forensic LRs using the same speech data. FVC experiments were carried out using Japanese short words of different lengths under a forensically realistic but challenging condition: only two speech tokens for model training and LR estimation. The log-likelihood-ratio cost (Cllr) was used as the assessment metric. The study demonstrates that the HMM system consistently outperforms the GMM system in terms of average Cllr values. However, words longer than three morae are needed if the advantage of the HMM is to become evident. With a seven-mora word, for example, the HMM outperformed the GMM by a Cllr value of 0.073.
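
    As a reference for the Cllr metric used throughout these abstracts, here is a minimal sketch of its standard definition (Brümmer and du Preez): Cllr = ½ [mean over same-speaker LRs of log2(1 + 1/LR) + mean over different-speaker LRs of log2(1 + LR)]. Function and variable names are ours:

        import numpy as np

        def cllr(lr_same, lr_diff):
            """Log-likelihood-ratio cost over two sets of trial LRs."""
            lr_same = np.asarray(lr_same, dtype=float)  # LRs from same-speaker trials
            lr_diff = np.asarray(lr_diff, dtype=float)  # LRs from different-speaker trials
            penalty_same = np.mean(np.log2(1.0 + 1.0 / lr_same))  # penalises small same-speaker LRs
            penalty_diff = np.mean(np.log2(1.0 + lr_diff))        # penalises large different-speaker LRs
            return 0.5 * (penalty_same + penalty_diff)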

    Forensic authorship classification by paragraph vectors of speech transcriptions

    In forensic comparison, document classification techniques are used mainly for authorship classification and author profiling. In the present study, we aim to introduce paragraph vector modelling (by Doc2Vec) into the likelihood-ratio framework of forensic evidence comparison. Transcriptions of spontaneous speech recordings are used as input for training the paragraph vector extraction model. Logistic regression models are trained on the cosine distances of paragraph vector pairs to predict the probability of same-author versus different-author origin. Results are evaluated across different speaking styles (transcriptions of the speech tasks available in the dataset). Cllr and equal error rate values (the lowest being 0.47 and 0.11, respectively) show that the method can be useful as a feature for forensic authorship comparison and may complement voice comparison methods for speaker verification.
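
    A minimal sketch of the pipeline described above, assuming gensim and scikit-learn; the inputs (transcripts, pairs, same_author_flags) and all hyperparameters are illustrative placeholders, not the authors' settings:

        import numpy as np
        from gensim.models.doc2vec import Doc2Vec, TaggedDocument
        from scipy.spatial.distance import cosine
        from sklearn.linear_model import LogisticRegression

        # Train a paragraph-vector model on tokenised transcriptions.
        docs = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(transcripts)]
        d2v = Doc2Vec(docs, vector_size=100, window=5, min_count=2, epochs=40)

        def pair_distance(text_a, text_b):
            # Cosine distance between the inferred vectors of two transcriptions.
            return cosine(d2v.infer_vector(text_a.split()), d2v.infer_vector(text_b.split()))

        # Calibrate distances to same-/different-author probabilities.
        X = np.array([[pair_distance(a, b)] for a, b in pairs])  # placeholder pair list
        y = np.array(same_author_flags)                          # 1 = same author
        p_same = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]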

    Effects of language mismatch in automatic forensic voice comparison using deep learning embeddings

    In forensic voice comparison, speaker embeddings have become widely popular over the last ten years. Most pre-trained speaker embeddings are trained on English corpora because such corpora are easily accessible. Language dependency can therefore be an important factor in automatic forensic voice comparison, especially when the target language is linguistically very different. Numerous commercial systems are available, but their models are mainly trained on a language (mostly English) different from the target language. In the case of a low-resource language, developing a corpus for forensic purposes that contains enough speakers to train deep learning models is costly. This study investigates whether a model pre-trained on an English corpus can be used on a low-resource target language (here, Hungarian) different from the one the model was trained on. Moreover, multiple samples are often unavailable from the offender (the unknown speaker); samples are therefore compared pairwise, both with and without speaker enrollment for the suspect (known) speakers. Two corpora developed specifically for forensic purposes are used, along with a third intended for traditional speaker verification. Two deep-learning-based speaker embedding extraction methods are applied: the x-vector and ECAPA-TDNN. Speaker verification was evaluated in the likelihood-ratio framework, and a comparison was made between language combinations (modelling, LR calibration, evaluation). The results were evaluated with the minCllr and EER metrics. The model pre-trained on a different language, but on a corpus with a large number of speakers, was found to perform well on samples with a language mismatch. The effects of sample duration and speaking style were also examined: the longer the duration of the questioned sample, the better the performance, while performance did not differ substantially across speaking styles.
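
    A hedged sketch of pairwise scoring with a pre-trained ECAPA-TDNN embedding, assuming the speechbrain and torchaudio packages (the model identifier follows SpeechBrain's published VoxCeleb recipe); file names are placeholders, and the raw cosine score would still need calibration into an LR:

        import torch
        import torchaudio
        from speechbrain.pretrained import EncoderClassifier

        # Pre-trained ECAPA-TDNN speaker embedding (trained on VoxCeleb, i.e. mostly English).
        encoder = EncoderClassifier.from_hparams(
            source="speechbrain/spkrec-ecapa-voxceleb", savedir="pretrained_ecapa")

        def embed(path):
            signal, sr = torchaudio.load(path)  # assumes 16 kHz mono audio
            return encoder.encode_batch(signal).squeeze()

        # Pairwise comparison of a questioned and a known sample; the score
        # would then be calibrated (e.g. by logistic regression) into an LR.
        score = torch.nn.functional.cosine_similarity(
            embed("questioned.wav"), embed("known.wav"), dim=0).item()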

    Information-theoretical assessment of the performance of likelihood ratio computation methods

    This is the accepted version of the following article: Ramos, D., Gonzalez-Rodriguez, J., Zadora, G. and Aitken, C. (2013), Information-Theoretical Assessment of the Performance of Likelihood Ratio Computation Methods. Journal of Forensic Sciences, 58: 1503–1518. doi: 10.1111/1556-4029.12233, published in final form at http://onlinelibrary.wiley.com/doi/10.1111/1556-4029.12233/

    Performance of likelihood ratio (LR) methods for evidence evaluation has in the past been represented using, for example, Tippett plots. We propose empirical cross-entropy (ECE) plots as a metric of accuracy based on the statistical theory of proper scoring rules, interpretable as the information given by the evidence according to information theory, which quantifies the calibration of LR values. We present results for a case example using a glass database from real casework, comparing performance with both Tippett and ECE plots. We conclude that ECE plots allow clearer comparisons of LR methods than previous metrics, providing a theoretical criterion to determine whether a given method should be used for evidence evaluation, which is an improvement over Tippett plots. A set of recommendations for the use of the proposed methodology by practitioners is also given.

    Supported by the Spanish Ministry of Science and Innovation under project TEC2009-14719-C02-01 and co-funded by the Universidad Autonoma de Madrid and the Comunidad Autonoma de Madrid under project CCG10-UAM/TIC-5792.
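
    A minimal sketch of an empirical cross-entropy curve under its standard definition (at a prior of 0.5 the ECE equals Cllr); variable names are ours:

        import numpy as np

        def ece(lr_same, lr_diff, prior):
            # Empirical cross-entropy at a given prior probability of same source.
            odds = prior / (1.0 - prior)  # prior odds
            term_same = prior * np.mean(np.log2(1.0 + 1.0 / (np.asarray(lr_same) * odds)))
            term_diff = (1.0 - prior) * np.mean(np.log2(1.0 + np.asarray(lr_diff) * odds))
            return term_same + term_diff

        # Sweep the prior log-odds axis to draw an ECE plot for one LR method.
        priors = 1.0 / (1.0 + 10.0 ** -np.linspace(-2.5, 2.5, 101))
        curve = [ece(lr_same, lr_diff, p) for p in priors]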

    From biometric scores to forensic likelihood ratios

    In this chapter, we describe the issue of interpreting forensic evidence from scores computed by a biometric system, one of the most important topics in the so-called area of forensic biometrics. We show the importance of the topic, introducing some of the key concepts of forensic science with respect to the interpretation of results prior to their presentation in court, which is increasingly addressed by the computation of likelihood ratios (LRs). We describe the LR methodology and illustrate it with an example of the evaluation of fingerprint evidence under forensic conditions, by means of a fingerprint biometric system.
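
    One simple way to map biometric scores to LRs, shown here as an illustrative sketch rather than the chapter's own method: fit a Gaussian to same-source scores and another to different-source scores, then take the ratio of densities (assumes numpy and scipy; scores_same and scores_diff are placeholder arrays):

        import numpy as np
        from scipy.stats import norm

        # Fit one Gaussian per score population.
        mu_s, sd_s = np.mean(scores_same), np.std(scores_same, ddof=1)
        mu_d, sd_d = np.mean(scores_diff), np.std(scores_diff, ddof=1)

        def score_to_lr(score):
            # LR = density of the score under the same-source model
            # divided by its density under the different-source model.
            return norm.pdf(score, mu_s, sd_s) / norm.pdf(score, mu_d, sd_d)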

    A Likelihood-Ratio Based Forensic Voice Comparison in Standard Thai

    This research uses a likelihood ratio (LR) framework to assess the discriminatory power of a range of acoustic parameters extracted from speech samples produced by male speakers of Standard Thai. The thesis aims to answer two main questions: 1) to what extent the tested linguistic-phonetic segments of Standard Thai perform in forensic voice comparison (FVC); and 2) how such linguistic-phonetic segments are profitably combined through logistic regression using the FoCal Toolkit (Brümmer, 2007). The segments focused on in this study are the four consonants /s, tɕʰ, n, m/ and the two diphthongs [ɔi, ai]. First, using the alveolar fricative /s/, two different feature sets were compared in terms of their FVC performance: the first comprised the spectrum-based distributional features of four spectral moments, namely mean, variance, skew and kurtosis; the second consisted of the coefficients of the Discrete Cosine Transform (DCT) applied to a spectrum. As the DCT coefficients were found to perform better, they were subsequently used to model the spectra of the remaining consonants. The consonant spectrum was extracted at the center point of the /s, tɕʰ, n, m/ consonants with a Hamming window of 31.25 msec. For the diphthongs [ɔi] - [nɔi L] and [ai] - [mai HL], cubic polynomials fitted to the F2 and F1-F3 formant trajectories were tested separately, as were quadratic polynomials fitted to their tonal F0 contours. Long-term F0 distribution (LTF0) was also trialed. The results show the promising discriminatory power of the Standard Thai acoustic features and segments tested in this thesis. The main findings are as follows:
    1. The fricative /s/ performed better with the DCT coefficients (Cllr = 0.70) than with the spectral moments (Cllr = 0.92).
    2. The nasals /n, m/ (Cllr = 0.47) performed better than the affricate /tɕʰ/ (Cllr = 0.54) and the fricative /s/ (Cllr = 0.70) when their DCT coefficients were parameterized.
    3. F1-F3 trajectories (Cllr = 0.42 and 0.49) outperformed the F2 trajectory (Cllr = 0.69 and 0.67) for both diphthongs [ɔi] and [ai].
    4. F1-F3 trajectories of the diphthong [ɔi] (Cllr = 0.42) outperformed those of [ai] (Cllr = 0.49).
    5. Tonal F0 (Cllr = 0.52) outperformed LTF0 (Cllr = 0.74).
    6. Overall, the best results were obtained when the DCTs of /n/ - [na: HL] and /n/ - [nɔi L] were fused (Cllr = 0.40, with the largest consistent-with-fact SSLog10LR = 2.53).
    In light of the findings, we can conclude that Standard Thai is generally amenable to FVC, especially when linguistic-phonetic segments are combined; it is recommended that this procedure be followed when dealing with forensically realistic casework.
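
    An illustrative sketch of the two feature types above, assuming numpy and scipy; the window length, number of retained coefficients and inputs (samples, t, f2_track) are placeholders, not the thesis settings:

        import numpy as np
        from scipy.fft import dct

        # Log-magnitude spectrum of a Hamming-windowed frame taken at the
        # consonant midpoint (31.25 ms long at the corpus sampling rate).
        frame = samples * np.hamming(len(samples))
        spectrum = np.log(np.abs(np.fft.rfft(frame)) + 1e-10)
        dct_coeffs = dct(spectrum, norm="ortho")[:10]  # keep the first few coefficients

        # Cubic polynomial fit to an F2 trajectory sampled at times t.
        poly_coeffs = np.polyfit(t, f2_track, deg=3)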

    Face comparison in forensics: A deep dive into deep learning and likelihood ratios

    This thesis explores the transformative potential of deep learning techniques in the field of forensic face recognition. It addresses the pivotal question of how deep learning can advance this traditionally manual field, focusing on three key areas: forensic face comparison, face image quality assessment, and likelihood ratio estimation. Using a comparative analysis of open-source automated systems and forensic experts, the study finds that automated systems excel at identifying non-matches in low-quality images but lag behind experts in high-quality settings. The thesis also investigates the role of calibration methods in estimating likelihood ratios, revealing that quality-score-based and feature-based calibrations are more effective than naive methods. To enhance face image quality assessment, a multi-task explainable quality network is proposed that not only gauges image quality but also identifies the contributing factors. Additionally, a novel images-to-video recognition method is introduced to improve the estimation of likelihood ratios in surveillance settings. The study employs multiple datasets and software systems in its evaluations, aiming for a comprehensive analysis that can serve as a cornerstone for future research in forensic face recognition.
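
    A hedged sketch of the quality-score-based calibration idea, assuming scikit-learn: the calibration model sees a quality measure alongside the comparison score, so the resulting log-LR is conditioned on quality (scores, qualities and labels are placeholder numpy arrays, not the thesis data):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Calibration inputs: comparison scores, a per-pair quality measure,
        # and ground-truth labels (1 = same identity, 0 = different).
        X = np.column_stack([scores, qualities])
        clf = LogisticRegression().fit(X, labels)

        # The model's log-posterior-odds minus the training prior log-odds
        # gives an (approximate) quality-conditioned log-LR per comparison.
        prior_logodds = np.log(labels.mean() / (1.0 - labels.mean()))
        log_lr = clf.decision_function(X) - prior_logodds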

    EVALUATION OF SCIENTIFIC EVIDENCE: A PROPOSAL ON ONTOLOGICAL AND EPISTEMOLOGICAL BASES, AND SOME STATISTICAL APPLICATIONS
