
    A protocol for sound localization testing with young and aging normal hearing subjects

    An important aspect of processing an auditory stimulus is the ability to localize the source of a sound within the environment. Localization has been defined as the ability to determine the direction of a sound (Tonning, 1975; Cranford, Boose, & Moore, 1990; Middlebrooks & Green, 1991; Cranford, Andres, Piatz, & Reissig, 1993; Lorenzi, Gatehouse, & Lever, 1999; Abel, Giguere, Consoli, & Papsin, 2000). Previous researchers have used a variety of test stimuli, test environments, loudspeaker arrays, and ages and numbers of subjects to measure the ability to localize sounds. Despite the obvious need for individuals to identify the specific location of a sound source, this variety of approaches suggests that there is no standard process for measuring localization abilities. Because there is no gold standard for localization assessment, it is difficult to compare previous studies of localization. A standardized protocol for measuring localization abilities would therefore reduce the need to vary testing procedures when determining an individual's ability to identify sound sources as part of a complete audiological assessment. Forty male and 40 female adults, ages 21 to 60 years with normal hearing sensitivity and normal temporal processing abilities, will serve as subjects in this study. A protocol for localization testing developed by the primary investigator will be used to measure the subjects' localization abilities. All testing will be completed in an IAC sound-treated room using eight sound field speakers arranged symmetrically on the wall, positioned in the horizontal plane at 45-degree intervals. The Central Institute for the Deaf Everyday Sentences (Alpiner & Schow, 2000; Healy & Montgomery, 2006) will be used as test stimuli. Five test conditions, one quiet and four noisy listening conditions, will be used to elicit localization. Of particular interest is the direction at which a subject can accurately localize a speech stimulus in the presence of background noise.
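
    As a purely illustrative sketch (not part of the published protocol), the eight-speaker geometry and one plausible exact-speaker scoring rule can be expressed as follows; the azimuth convention, function names, and percent-correct metric are assumptions of this sketch:

    ```python
    import numpy as np

    # Eight loudspeakers in the horizontal plane at 45-degree intervals,
    # as in the protocol above (the azimuth convention is an assumption).
    SPEAKER_AZIMUTHS = np.arange(0, 360, 45)

    def angular_error(true_az, response_az):
        """Smallest angular difference (degrees) between source and response."""
        diff = abs(true_az - response_az) % 360
        return min(diff, 360 - diff)

    def percent_correct(trials):
        """Score (true, response) azimuth pairs as exact-speaker hits."""
        hits = sum(1 for t, r in trials if angular_error(t, r) == 0)
        return 100.0 * hits / len(trials)

    # Example: three trials, two localized to the correct speaker.
    print(percent_correct([(45, 45), (90, 135), (315, 315)]))  # ~66.7
    ```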

    Frame-level features conveying phonetic information for language and speaker recognition

    This thesis, developed in the Software Technologies Working Group of the Department of Electricity and Electronics of the University of the Basque Country, focuses on the research field of spoken language and speaker recognition technologies. More specifically, the research carried out studies the design of a set of features conveying spectral acoustic and phonotactic information, searches for the optimal feature extraction parameters, and analyses the integration and usage of the features in language recognition systems, as well as the complementarity of these approaches with regard to state-of-the-art systems. The study reveals that systems trained on the proposed set of features, denoted Phone Log-Likelihood Ratios (PLLRs), are highly competitive, outperforming other state-of-the-art systems in several benchmarks. Moreover, PLLR-based systems also provide complementary information with regard to other phonotactic and acoustic approaches, which makes them suitable in fusions to improve the overall performance of spoken language recognition systems. The usage of these features is also studied in speaker recognition tasks. In this context, the results attained by the approaches based on PLLR features are not as remarkable as those of systems based on standard acoustic features, but they still provide complementary information that can be used to enhance the overall performance of speaker recognition systems.
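
    A minimal sketch of the PLLR transform as defined in the PLLR literature, a log-odds (logit) mapping of frame-level phone posteriors; the function name, epsilon clipping, and toy posteriors are assumptions, and the posteriors themselves would come from an external phone decoder:

    ```python
    import numpy as np

    def phone_log_likelihood_ratios(posteriors: np.ndarray,
                                    eps: float = 1e-10) -> np.ndarray:
        """Map frame-level phone posteriors (frames x phones, rows summing
        to 1) to per-phone log-likelihood ratios via the logit transform:
        PLLR_i = log(p_i / (1 - p_i))."""
        p = np.clip(posteriors, eps, 1.0 - eps)
        return np.log(p) - np.log1p(-p)

    # Example: posteriors from a phone decoder, 2 frames x 3 phone classes.
    post = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.8, 0.1]])
    print(phone_log_likelihood_ratios(post))
    ```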

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods of real-time, unobtrusive human ambulation, activity, and wellness monitoring and data analysis using various algorithmic techniques have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing, and analysis; and applications, limitations, and challenges. The aim is to present a systematic and comprehensive review of the literature in the area in order to identify research gaps and prioritize future research directions.

    Leveraging ASR Pretrained Conformers for Speaker Verification through Transfer Learning and Knowledge Distillation

    This paper explores the use of ASR-pretrained Conformers for speaker verification, leveraging their strengths in modeling speech signals. We introduce three strategies: (1) transfer learning to initialize the speaker embedding network, improving generalization and reducing overfitting; (2) knowledge distillation to train a more flexible speaker verification model, incorporating frame-level ASR loss as an auxiliary task; and (3) a lightweight speaker adaptor for efficient feature conversion without altering the original ASR Conformer, allowing ASR and speaker verification to run in parallel. Experiments on VoxCeleb show significant improvements: transfer learning yields a 0.48% EER, knowledge distillation results in a 0.43% EER, and the speaker adaptor approach, with just 4.92M parameters added to a 130.94M-parameter model, achieves a 0.57% EER. Overall, our methods effectively transfer ASR capabilities to speaker verification tasks.
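
    A minimal PyTorch sketch of the third strategy, assuming hypothetical dimensions, layer choices, and loss weighting (the paper's actual adaptor architecture is not specified here):

    ```python
    import torch
    import torch.nn as nn

    class SpeakerAdaptor(nn.Module):
        """Hypothetical lightweight adaptor: projects frame features from a
        frozen ASR Conformer into the input space of a speaker embedding
        network, so ASR and speaker verification can run in parallel."""

        def __init__(self, asr_dim: int = 512, spk_dim: int = 80):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(asr_dim, 256),
                nn.ReLU(),
                nn.Linear(256, spk_dim),
            )

        def forward(self, asr_frames: torch.Tensor) -> torch.Tensor:
            # asr_frames: (batch, time, asr_dim) from the frozen Conformer.
            return self.proj(asr_frames)

    # Sketch of the strategy-2 objective: speaker loss plus an auxiliary
    # frame-level ASR loss, with an assumed weighting factor lam.
    def multitask_loss(spk_loss: torch.Tensor, asr_loss: torch.Tensor,
                       lam: float = 0.1) -> torch.Tensor:
        return spk_loss + lam * asr_loss
    ```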

    The Effect Of Acoustic Variability On Automatic Speaker Recognition Systems

    This thesis examines the influence of acoustic variability on automatic speaker recognition systems (ASRs) with three aims: i. to measure ASR performance under five commonly encountered acoustic conditions; ii. to contribute towards ASR system development with the provision of new research data; iii. to assess ASR suitability for forensic speaker comparison (FSC) application and investigative/pre-forensic use. The thesis begins with a literature review and explanation of relevant technical terms. Five categories of research experiments then examine ASR performance, reflective of conditions influencing speech quantity (inhibitors) and speech quality (contaminants), acknowledging that quality often influences quantity. The experiments pertain to: net speech duration, signal-to-noise ratio (SNR), reverberation, frequency bandwidth, and transcoding (codecs). The ASR system is placed under scrutiny with examination of settings and optimum conditions (e.g. matched/unmatched test audio and speaker models). Output is examined in relation to baseline performance, and metrics help inform whether ASRs should be applied to suboptimal audio recordings. Results indicate that modern ASRs are relatively resilient to low and moderate levels of the acoustic contaminants and inhibitors examined, whilst remaining sensitive to higher levels. The thesis discusses issues such as the complexity and fragility of the speech signal path, speaker variability, the difficulty of measuring conditions, and mitigation (thresholds and settings). The application of ASRs to casework is discussed with recommendations, acknowledging the different modes of operation (e.g. investigative usage) and current UK limitations regarding presenting ASR output as evidence in criminal trials. In summary, and in the context of acoustic variability, the thesis recommends that ASRs could be applied to pre-forensic cases, accepting that extraneous issues endure which require governance, such as validation of method (ASR standardisation) and population data selection. However, ASRs remain unsuitable for broad forensic application, with many acoustic conditions causing irrecoverable speech data loss that contributes to high error rates.
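
    For illustration, one common way to realize the SNR condition is to scale a noise signal against clean speech before mixing; this generic sketch is not the thesis's exact pipeline, and the function name and mixing convention are assumptions:

    ```python
    import numpy as np

    def mix_at_snr(speech: np.ndarray, noise: np.ndarray,
                   snr_db: float) -> np.ndarray:
        """Scale noise so the speech-to-noise power ratio equals snr_db,
        then mix: SNR = 10 * log10(P_speech / P_noise)."""
        noise = noise[: len(speech)]
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
        return speech + scale * noise

    # Example: degrade a toy signal with white noise at 10 dB SNR.
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    noisy = mix_at_snr(clean, rng.standard_normal(16000), snr_db=10.0)
    ```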

    Secondary implementation of interactive engagement teaching techniques: Choices and challenges in a Gulf Arab context

    We report on a "Collaborative Workshop Physics" instructional strategy used to deliver the first interactive engagement (IE) calculus-based physics course at Khalifa University, UAE. To these authors' knowledge, this is the first such course on the Arabian Peninsula using PER-based instruction. A brief history of general university and STEM teaching in the UAE is given. We present this secondary implementation (SI) as a case study in a novel context and use it to determine whether PER-based instruction can be successfully implemented far from the cultural context of the primary developer and, if so, how such SIs might differ from SIs within the US. With these questions in view, a pre-reform baseline of MPEX, FCI, course exam, and English language proficiency data is used to design a hybrid implementation of Cooperative Group Problem Solving. We find that for students with high English proficiency, normalized gain on the FCI improves from ⟨g⟩ = 0.16 ± 0.10 pre-reform to ⟨g⟩ = 0.47 ± 0.08 post-reform, indicating successful SI. We also find that ⟨g⟩ is strongly modulated by language proficiency, and we discuss likely causes. Regardless of language skill, problem-solving skill is also improved, and course DFW rates drop from 50% to 24%. In particular, we find evidence in post-reform student interviews that prior classroom experiences, and not broader cultural expectations about education, are the more significant cause of expectations at odds with the classroom norms of well-functioning PER-based instruction. This result is evidence that PER-based innovations can be implemented across great changes in cultural context, provided that the method is thoughtfully adapted in anticipation of context- and culture-specific student expectations. This case study should be valuable for future reforms at other institutions, both in the Gulf Region and the developing world, facing similar challenges involving SI of PER-based instruction outside the US.
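
    For reference, the ⟨g⟩ values quoted above follow the conventional (Hake) class-average normalized gain computed from pre- and post-test percentage scores; the worked numbers below are hypothetical, not the study's data:

    ```latex
    % Hake's class-average normalized gain on a concept inventory:
    \[
      \langle g \rangle
        = \frac{\overline{\text{post}} - \overline{\text{pre}}}
               {100\% - \overline{\text{pre}}}
    \]
    % Hypothetical example: pre-test average 40%, post-test average 64%
    % gives <g> = (64 - 40) / (100 - 40) = 0.40.
    ```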

    Technology for the Future: In-Space Technology Experiments Program, part 2

    The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of the critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is the second of two parts; it contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.

    Speaker Recognition in Unconstrained Environments

    Speaker recognition is applied in smart home devices, interactive voice response systems, call centers, online banking and payment solutions, as well as in forensic scenarios. This dissertation is concerned with speaker recognition systems in unconstrained environments. Before this dissertation, research on making better decisions in unconstrained environments was insufficient. Aside from decision making, unconstrained environments imply two other subjects: security and privacy. Within the scope of this dissertation, these research subjects are regarded as security against short-term replay attacks and privacy preservation within state-of-the-art biometric voice comparators, in the light of a potential leak of biometric data. These research subjects are united in this dissertation to sustain good decision-making processes in the face of uncertainty from varying signal quality, and to strengthen security as well as preserve privacy. Conventionally, biometric comparators are trained to classify between mated and non-mated reference–probe pairs under idealistic conditions, but are expected to operate well in the real world. However, the more the voice signal quality degrades, the more erroneous decisions are made. The severity of their impact depends on the requirements of a biometric application. In this dissertation, quality estimates are proposed and employed for the purpose of making better decisions on average in a formalized way (quantitative method), while the specific decision requirements of a biometric application remain unknown. Using the Bayesian decision framework, the specification of application-dependent decision requirements is formalized, outlining the operating points: the decision thresholds. The assessed quality conditions combine ambient and biometric noise, both of which occur in commercial as well as forensic application scenarios. Dual-use (civil and governmental) technology is investigated. As it seems unfeasible to train systems for every possible signal degradation, a small number of quality conditions is used. After examining the impact of degrading signal quality on biometric feature extraction, the extraction is assumed ideal in order to conduct a fair benchmark. This dissertation proposes and investigates methods for propagating information about quality to decision making. By employing quality estimates, a biometric system's output (comparison scores) is normalized in order to ensure that each score encodes the least-favorable decision trade-off in its value. Application development is thus segregated from requirement specification. Furthermore, class discrimination and score calibration performance are improved over all decision requirements for real-world applications. In contrast to the ISO/IEC 19795-1:2006 standard on biometric performance (error rates), this dissertation is based on biometric inference for probabilistic decision making (subject to prior probabilities and cost terms). It elaborates on the paradigm shift from requirements by error rates to requirements by beliefs in priors and costs. Binary decision error trade-off plots are proposed, interrelating error rates with prior and cost beliefs, i.e., formalized decision requirements. Verbal tags are introduced to summarize categories of least-favorable decisions; the plot's canvas follows from Bayesian decision theory. Empirical error rates are plotted, encoding categories of decision trade-offs by line styles.
Performance is visualized in the latent decision subspace, allowing empirical performance to be evaluated against changes in prior- and cost-based decision requirements. Security against short-term audio replay attacks (collages of sound units such as phonemes and syllables) is strengthened. The unit-selection attack was posed by the ASVspoof 2015 challenge (English speech data), representing the voice presentation attack that was most difficult to detect in that challenge. In this dissertation, unit-selection attacks are created for German speech data, and support vector machine and Gaussian mixture model classifiers are trained to detect collage edges in speech representations based on wavelet and Fourier analyses. Competitive results are reached compared to the challenge submissions. Homomorphic encryption is proposed to preserve the privacy of biometric information in the case of database leakage. In this dissertation, log-likelihood ratio scores, representing biometric evidence objectively, are computed in the latent biometric subspace. Whereas conventional comparators rely on the feature extraction to ideally represent biometric information, latent subspace comparators are trained to find ideal representations of the biometric information in the voice reference and probe samples to be compared. Two protocols are proposed for the two-covariance comparison model, a special case of probabilistic linear discriminant analysis. Log-likelihood ratio scores are computed in the encrypted domain based on encrypted representations of the biometric reference and probe. As a consequence, the biometric information conveyed in voice samples is, in contrast to many existing protection schemes, stored protected and without information loss. The first protocol preserves the privacy of end-users, requiring one public/private key pair per biometric application. The second protocol preserves the privacy of both end-users and comparator vendors with two key pairs. Comparators estimate the biometric evidence in the latent subspace, so the subspace model requires data protection as well. In both protocols, log-likelihood-ratio-based decision making meets the requirements of the ISO/IEC 24745:2011 biometric information protection standard in terms of the unlinkability, irreversibility, and renewability properties of the protected voice data.
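
    As a sketch of the Bayesian decision framework invoked above, the standard Bayes-optimal threshold on log-likelihood-ratio scores follows from the prior and cost terms; the parameter names and example operating point are assumptions of this sketch:

    ```python
    import math

    def bayes_threshold(p_target: float, cost_miss: float,
                        cost_fa: float) -> float:
        """Bayes-optimal LLR decision threshold for given decision
        requirements: accept a trial as mated if LLR >= threshold."""
        return math.log((cost_fa * (1.0 - p_target)) / (cost_miss * p_target))

    def decide(llr: float, p_target: float, cost_miss: float,
               cost_fa: float) -> bool:
        return llr >= bayes_threshold(p_target, cost_miss, cost_fa)

    # Example operating point: rare mated trials, costly false accepts.
    print(bayes_threshold(p_target=0.01, cost_miss=1.0, cost_fa=10.0))  # ~6.90
    ```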

    Intelligent interface agents for biometric applications

    This thesis investigates the benefits of applying the intelligent agent paradigm to biometric identity verification systems. Multimodal biometric systems, despite their additional complexity, hold the promise of a higher degree of accuracy and robustness. Multimodal biometric systems are examined in this work, leading to the design and implementation of a novel distributed multimodal identity verification system based on an intelligent agent framework. User interface design issues are also important in the domain of biometric systems and present an exceptional opportunity for employing adaptive interface agents. Through the use of such interface agents, system performance may be improved, leading to an increase in recognition rates over a non-adaptive system while producing a more robust and agreeable user experience. The investigation of such adaptive systems has been a focus of the work reported in this thesis. The research presented in this thesis is divided into two main parts. First, the design, development, and testing of a novel distributed multimodal authentication system employing intelligent agents is presented. The second part details the design and implementation of an adaptive interface layer based on interface agent technology and demonstrates its integration with a commercial fingerprint recognition system. The performance of these systems is then evaluated using databases of biometric samples gathered during the research. The results obtained from the experimental evaluation of the multimodal system demonstrate a clear improvement in the accuracy of the system compared to a unimodal biometric approach. The adoption of the intelligent agent architecture at the interface level resulted in a system whose false reject rates were reduced compared to a system that did not employ an intelligent interface. The results obtained from both systems clearly demonstrate the benefits of combining an intelligent agent framework with a biometric system to provide a more robust and flexible application.
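
    The abstract does not spell out the multimodal combination rule; as a generic, hypothetical illustration of score-level fusion in a multimodal verifier (the weights, scores, and threshold are all invented for the example):

    ```python
    def fuse_scores(scores, weights):
        """Weighted score-level fusion: combine per-modality match scores
        (assumed already normalized to a common range) with weights summing
        to 1, then compare the fused score against a single threshold."""
        assert abs(sum(weights) - 1.0) < 1e-9
        return sum(w * s for w, s in zip(weights, scores))

    # Example: fingerprint and voice scores, fingerprint weighted more heavily.
    fused = fuse_scores([0.82, 0.64], [0.6, 0.4])
    accepted = fused >= 0.7
    print(fused, accepted)  # 0.748 True
    ```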