
    Maximum Likelihood Pitch Estimation Using Sinusoidal Modeling

    The aim of the work presented in this thesis is to automatically extract the fundamental frequency of a periodic signal from noisy observations, a task commonly referred to as pitch estimation. An algorithm for optimal pitch estimation using a maximum likelihood formulation is presented. The speech waveform is modeled using sinusoidal basis functions that are harmonically tied together to explicitly capture the periodic structure of voiced speech. Pitch estimation is cast as a model selection problem, and the Akaike Information Criterion is used to estimate the pitch. The algorithm is compared with several existing pitch detection algorithms (PDAs) on a reference pitch database, and the results indicate that it outperforms most of them. The application of parametric modeling to single-channel speech segregation and the use of mel-frequency cepstral coefficients for sequential grouping are analyzed on the speech separation challenge database.
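
    As a concrete illustration of this formulation, the following Python sketch (our own minimal example, not the author's implementation) fits a harmonically tied basis of sines and cosines at each candidate fundamental by least squares and selects the pitch with the lowest Akaike Information Criterion; the function name and the candidate grid are assumptions.

        import numpy as np

        def aic_pitch(x, fs, f0_grid, max_harmonics=10):
            # Sketch: choose the candidate f0 whose harmonic sinusoidal
            # least-squares fit minimizes the Akaike Information Criterion.
            n = len(x)
            t = np.arange(n) / fs
            best_f0, best_aic = None, np.inf
            for f0 in f0_grid:
                h = min(max_harmonics, int((fs / 2) // f0))  # stay below Nyquist
                k = np.arange(1, h + 1) * f0                 # harmonic frequencies
                phase = 2 * np.pi * np.outer(t, k)
                A = np.hstack([np.cos(phase), np.sin(phase)])  # harmonically tied basis
                coef, *_ = np.linalg.lstsq(A, x, rcond=None)
                rss = np.sum((x - A @ coef) ** 2)
                aic = n * np.log(rss / n) + 2 * (2 * h)      # fit vs. model size
                if aic < best_aic:
                    best_f0, best_aic = f0, aic
            return best_f0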

    Biometrics

    Biometrics comprises methods for the unique recognition of humans based on one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity and access management and access control, and to identify individuals in groups under surveillance. The book consists of 13 chapters, each focusing on a particular aspect of the problem, divided into three sections: physical biometrics, behavioral biometrics and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioural and other points of view, and to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by guest editors including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, who also made significant contributions to the book.

    Signature and Handwritten Graphics Verification: Discriminative Features and New Biometric Application Scenarios

    Unpublished doctoral thesis, defended at the Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: February 2015.

    The proliferation of handheld devices such as smartphones and tablets brings a new scenario for biometric authentication, and in particular for automatic signature verification. Research on signature verification has traditionally been carried out using signatures acquired on digitizing tablets or Tablet-PCs. This PhD Thesis addresses the problem of user authentication on handheld devices using handwritten signatures and graphical passwords based on free-form doodles, as well as the effects of biometric aging on signatures. The Thesis aims to analyze: (i) the effects of mobile conditions on signature and doodle verification; (ii) the most discriminative features in mobile conditions, extracted from the pen or fingertip trajectory; (iii) how different similarity computation (i.e. matching) algorithms behave with signatures and graphical passwords captured under mobile conditions; and (iv) the impact of aging on signature features and verification performance.

    Two novel datasets are presented in this Thesis. The first contains free-form graphical passwords drawn with the fingertip on a smartphone; to the extent of our knowledge, it is the first publicly available graphical password database. The second contains signatures from users captured over a period of 15 months, aimed at the study of biometric aging. State-of-the-art local and global matching algorithms are used, namely Hidden Markov Models (HMM), Gaussian Mixture Models (GMM), Dynamic Time Warping (DTW) and distance-based classifiers. A large proportion of the features presented in the research literature is considered in this Thesis.

    The experimental contribution of this Thesis is divided into three main topics: signature verification on handheld devices, the effects of aging on signature verification, and free-form graphical password-based authentication. First, regarding signature verification in mobile conditions, we use a database captured both on a handheld device and on a digitizing tablet in an office-like scenario. We analyze the discriminative power of both global and local features using discriminant analysis and feature selection techniques, and also study the effects of the lack of pen-up trajectories on handheld devices (when the stylus tip is not in contact with the screen). We then analyze the effects of biometric aging on the signature trait. Using three different matching algorithms, HMM, DTW and distance-based classifiers, the impact on verification performance is studied, as are the effects of aging on individual users and individual signature features. Template update techniques are analyzed as a way of mitigating the negative impact of aging. Regarding graphical passwords, the DooDB graphical password database is first presented. A statistical analysis compares the database samples (free-form doodles and simplified signatures) with handwritten signatures, and the sample variability (inter-user, intra-user and inter-session) and the learning curve for each kind of trait are analyzed. Benchmark results are reported using state-of-the-art classifiers. Graphical password verification is then studied using features and matching algorithms from the signature verification state of the art; feature selection is performed and the resulting feature sets are analyzed.

    The main contributions of this work can be summarized as follows. A thorough analysis of individual feature performance has been carried out, both for global and local features, on signatures acquired using pen tablets and handheld devices. We have identified which individual features are the most robust and which have very low discriminative potential (pen inclination and pressure, among others). Feature selection is found to increase verification performance dramatically, for example from EERs (Equal Error Rates) over 30% using all available local features, in the case of handheld devices and skilled forgeries, to rates below 20% after feature selection. We study the impact of the lack of trajectory information when the pen tip is not in contact with the acquisition surface (as happens when touchscreens are used for signature acquisition) and find that missing pen-up trajectories negatively affect verification performance: the EER of the local system increases from 9.3% to 12.1% against skilled forgeries when pen-up trajectories are not available. We study the effects of biometric aging on signature verification, together with a number of ways to compensate for the observed performance degradation. Aging does not affect all users in the database equally, and features related to signature dynamics degrade more than static features. Comparing performance using test signatures from the first months against the last months, a variable effect of aging on the EER against random forgeries is observed across the three evaluated systems: from 0.0% to 0.5% in the DTW system, from 1.0% to 5.0% in the distance-based system using global features, and from 3.2% to 27.8% in the HMM system. A new graphical password database has been acquired and made publicly available. Verification algorithms for finger-drawn graphical passwords and simplified signatures are compared and feature analysis is performed. Inter-session variability has a highly negative impact on verification performance, but this can be mitigated by performing feature selection and fusing different matchers. Some feature types are prevalent in the optimal feature vectors, and classifiers behave very differently against skilled and random forgeries. EERs of 3.4% and 22.1% against random and skilled forgeries, respectively, are obtained for free-form doodles, which is a promising performance.
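
    Of the matchers listed above, Dynamic Time Warping is the easiest to illustrate. The Python sketch below (a generic textbook formulation under our own assumptions, not the Thesis code) elastically aligns two trajectories given as arrays of (x, y) samples and returns a length-normalized distance that a verifier could threshold.

        import numpy as np

        def dtw_distance(a, b):
            # Sketch: elastic distance between trajectories a, b of shape (n, 2).
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean point cost
                    # Allow match, insertion and deletion steps.
                    D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
            # Normalize by path length so trajectories of different duration compare fairly.
            return D[n, m] / (n + m)

    In a verification setting, the resulting score would be compared against a user-specific threshold, tuned for example at the EER operating point.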

    Development and Evaluation of a Real-Time Framework for a Portable Assistive Hearing Device

    Testing and verification of digital hearing aid devices, and of their embedded software and algorithms, can prove a challenging task, especially taking time-to-market considerations into account. This thesis describes a PC-based, real-time, highly configurable framework for the evaluation of audio algorithms. Implementing audio processing algorithms on such a platform gives hearing aid designers and manufacturers the ability to test new and existing processing techniques and to collect data about their performance in real-life situations, without the need to develop a prototype device. The platform is based on the Eurotech Catalyst development kit and the Fedora Linux OS, and it utilizes the JACK audio engine to facilitate reliable real-time performance. Additionally, we demonstrate the capabilities of this platform by implementing an audio processing chain targeted at improving speech intelligibility for people suffering from auditory neuropathy. Evaluation is performed in both noisy and noise-free environments. Subjective evaluation of the results, using normal-hearing listeners and an auditory neuropathy simulator, demonstrates improvement in some conditions.
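
    The JACK engine mentioned above exposes a block-based callback model, which a framework of this kind builds on. As a rough illustration only (using the third-party Python jack package rather than the thesis's framework, and a hypothetical enhance stage standing in for the configurable chain), a pass-through client with one processing step could look like this:

        import jack
        import numpy as np

        def enhance(x):
            # Hypothetical placeholder stage: simple gain with clipping; a real
            # chain would run noise reduction, compression, etc. within the
            # block deadline imposed by the JACK server.
            return np.clip(1.5 * x, -1.0, 1.0)

        client = jack.Client("evaluation_framework")
        inport = client.inports.register("input_1")
        outport = client.outports.register("output_1")

        @client.set_process_callback
        def process(frames):
            # Called by JACK once per block: read, process, write in place.
            outport.get_array()[:] = enhance(inport.get_array())

        with client:
            input("Press Enter to stop...")  # keep the callback running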

    From truth conditions to processes: how to model the processing difficulty of quantified sentences based on semantic theory

    The present dissertation is concerned with the processing difficulty of quantified sentences and how it can be modeled on the basis of semantic theory. Processing difficulty is assessed using psycholinguistic methods such as systematically collected truth-value judgments or eye movements recorded during reading. Predictions are derived from semantic theory via parsimonious processing assumptions, taking into account automata theory, signal detection theory and computational complexity. Chapter 1 provides an introductory discussion and overview, and chapter 2 introduces the basic theoretical concepts used throughout the rest of the dissertation.

    In chapter 3, processing difficulty is approached on an abstract level. The difficulty of evaluating the truth of reciprocal sentences with generalized quantifiers as antecedents is classified using computational complexity theory, independently of the actual algorithms or procedures used to evaluate the sentences. One production experiment and one sentence-picture verification experiment are reported, testing whether cognitive capacities are limited to those functions that are computationally tractable. The results indicate that intractable interpretations do occur in language comprehension, but also that their verification rapidly exceeds cognitive capacities when the verification problem cannot be solved using simple heuristics.

    Chapter 4 discusses two common approaches to modeling the canonical verification procedures associated with quantificational sentences. The first is based on the semantic automata model, which conceives of quantifiers as decision problems and characterizes the computational resources needed to solve them. The second is based on the interface transparency thesis, which stipulates a transparent interface between semantic representations and the realization of verification procedures in the general cognitive architecture. Both approaches are evaluated against experimental data. Chapter 5 focuses on a test case that is challenging for both of these approaches: the increased processing difficulty of 'fewer than n' as compared to 'more than n'. A processing model is proposed that integrates insights from formal semantics with models from cognitive psychology and can be seen as an implementation and extension of the interface transparency thesis. The truth evaluation process is conceived of as a stochastic process, as described in sequential sampling models of decision making. The increased difficulty of 'fewer than n' is attributed to an extra processing step of scale reversal that precedes the actual decision process. Predictions of the integrated processing model are tested and confirmed in two sentence-picture verification experiments.

    Chapter 6 discusses whether and how the integrated processing model can be extended to other quantifiers. An extension to proportional comparative quantifiers like 'fewer than half' and 'more than half' is proposed and discussed in the light of existing experimental data, and it is shown that so-called empty-set effects can be derived naturally from the model. Chapter 7 presents data from two eye-tracking experiments showing that 'fewer than' leads to increased difficulty compared to 'more than' already during reading, an effect that is magnified when such quantifiers are combined with overt negation. Potential accounts of these findings are discussed. Conclusions are summarized in chapter 8.
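
    The integrated model of chapter 5 is straightforward to simulate. The Python sketch below is a deliberately simplified illustration under our own assumptions (all parameter values are invented): truth evaluation is a bounded random walk over noisy evidence samples, and 'fewer than n' pays a fixed extra cost for scale reversal before accumulation begins, so its simulated response times come out slower by roughly that cost.

        import random

        def verify(drift, threshold=5, reversal_cost=30, reversed_scale=False):
            # Sketch: one trial of a bounded random walk; returns time steps used.
            t = reversal_cost if reversed_scale else 0  # extra step for 'fewer than n'
            evidence = 0
            while abs(evidence) < threshold:
                evidence += 1 if random.random() < drift else -1  # noisy evidence sample
                t += 1
            return t

        trials = 1000
        more = sum(verify(0.6) for _ in range(trials)) / trials
        fewer = sum(verify(0.6, reversed_scale=True) for _ in range(trials)) / trials
        print(f"'more than n': {more:.1f} steps; 'fewer than n': {fewer:.1f} steps")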

    Automatic acoustic analysis of waveform perturbations


    Advanced user authentication for mobile devices

    Access to the full-text thesis is no longer available at the author's request, due to 3rd-party copyright restrictions. Access removed on 28.11.2016 by CS (TIS). Metadata merged with duplicate record (http://hdl.handle.net/10026.1/1101, now deleted) on 20.12.2016 by CS (TIS).

    Recent years have witnessed the widespread adoption of mobile devices. Whereas initial popularity was driven by voice telephony services, capabilities are now broadening to allow an increasing range of data-orientated services. Such services extend the range of sensitive data accessible through such devices and in turn increase the requirement for reliable user authentication. This thesis considers the authentication requirements of mobile devices and proposes novel mechanisms to improve upon the current state of the art.

    The investigation begins with an examination of existing authentication techniques, which illustrates a wide range of drawbacks. A survey of end-users reveals that current methods are frequently misused and considered inconvenient, and that enhanced methods of security are consequently required. To this end, biometric approaches are identified as a potential means of overcoming the perceived constraints, offering an opportunity for security to be maintained beyond point-of-entry, in a continuous and transparent fashion.

    The research considers the applicability of different biometric approaches to mobile device implementation and identifies keystroke analysis as a technique that offers significant potential within mobile telephony. Experimental evaluations reveal the potential of the technique when applied to a Personal Identification Number (PIN), telephone number and text message, with best-case equal error rates (EER) of 9%, 8% and 18% respectively. In spite of the success of keystroke analysis for many users, the results demonstrate that the technique is not uniformly successful across a given population. Further investigation suggests that the same will be true for other biometrics, and therefore that no single authentication technique can be relied upon to account for all users in all interaction scenarios.

    As such, a novel authentication architecture is specified, capable of utilising the particular hardware configurations and computational capabilities of devices to provide a robust, modular and composite authentication mechanism. The approach, known as IAMS (Intelligent Authentication Management System), utilises a broad range of biometric and secret-knowledge approaches to provide a continuous confidence measure in the identity of the user. With high confidence, users are given immediate access to sensitive services and information, whereas at lower levels of confidence, restrictions can be placed upon access to sensitive services until subsequent reassurance of the user's identity. The novel architecture is validated through a proof-of-concept prototype, with a series of test scenarios illustrating how IAMS would behave given authorised and impostor authentication attempts. The results support the use of a composite authentication approach to enable the non-intrusive authentication of users on mobile devices.

    Orange Personal Communication Services Ltd
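
    The continuous confidence measure at the heart of this architecture can be sketched compactly. The Python class below is a hypothetical illustration of the general idea, not the IAMS specification: all names, weights, decay rates and thresholds are our assumptions. Matcher scores nudge a running confidence level, the level decays between observations, and access to a service is gated by how sensitive it is.

        import time

        class ConfidenceMonitor:
            # Sketch of an IAMS-style composite confidence measure.

            def __init__(self, decay_per_second=0.01):
                self.confidence = 0.5          # start at a neutral level
                self.decay = decay_per_second
                self.last_update = time.time()

            def _apply_decay(self):
                # Confidence erodes while no fresh authentication evidence arrives.
                now = time.time()
                elapsed = now - self.last_update
                self.confidence = max(0.0, self.confidence - self.decay * elapsed)
                self.last_update = now

            def observe(self, score, weight):
                # Blend a matcher score in [0, 1] (e.g. from keystroke analysis)
                # into the running confidence, weighted by modality reliability.
                self._apply_decay()
                self.confidence = min(1.0, (1 - weight) * self.confidence + weight * score)

            def allowed(self, service_sensitivity):
                # Sensitive services demand higher confidence; a failed check
                # would trigger an intrusive re-authentication request.
                self._apply_decay()
                return self.confidence >= service_sensitivity

    For instance, a good keystroke sample might trigger monitor.observe(0.9, weight=0.4), and opening a sensitive service would first check monitor.allowed(0.8), falling back to an explicit challenge when the check fails.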

    Knowledge-based pitch detection

    Originally presented as the author's thesis (Ph.D., Massachusetts Institute of Technology), 1986. Bibliography: p. 209-213. Supported in part by the Advanced Research Projects Agency, monitored by ONR under contract no. N00014-81-K-0742, and in part by the National Science Foundation under grant ECS-8407285.
    Webster P. Dove