
    A Review of Audio Features and Statistical Models Exploited for Voice Pattern Design

    Audio fingerprinting, also known as audio hashing, is a well-established and powerful technique for audio identification and synchronization. It involves two major steps: fingerprint (voice pattern) design and matching search. While the first step concerns the derivation of a robust and compact audio signature, the second usually requires knowledge of the database and quick-search algorithms. Although the technique offers a wide range of real-world applications, to the best of the authors' knowledge, the last comprehensive survey of existing algorithms appeared more than eight years ago. In this paper, we therefore present a more up-to-date review and, to emphasize the audio signal processing aspect, focus our state-of-the-art survey on the fingerprint design step, for which various audio features and their tractable statistical models are discussed.
    Comment: http://www.iaria.org/conferences2015/PATTERNS15.html ; Seventh International Conferences on Pervasive Patterns and Applications (PATTERNS 2015), Mar 2015, Nice, France
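    To make the two steps concrete, the sketch below illustrates one common style of fingerprint design (per-frame sub-band energy-difference bits) followed by a brute-force matching search via Hamming distance. The function names, frame sizes and feature choice are assumptions for illustration, not a specific algorithm from the survey.

```python
# Minimal sketch of the two-step pipeline: fingerprint design, then matching search.
# Feature choice (sign of neighbouring band-energy differences) is an illustrative assumption.
import numpy as np

def design_fingerprint(signal, frame=2048, hop=1024, bands=32):
    """Derive a compact, robust signature: one bit string per analysis frame."""
    fp = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        # Collapse the spectrum into coarse bands, keep only the sign of
        # energy differences between neighbouring bands (compact and robust).
        energies = np.array([b.sum() for b in np.array_split(spectrum, bands)])
        fp.append((np.diff(energies) > 0).astype(np.uint8))
    return np.array(fp)

def match(query_fp, database):
    """Matching search: nearest reference fingerprint by normalized Hamming distance."""
    best_id, best_dist = None, np.inf
    for ref_id, ref_fp in database.items():
        n = min(len(query_fp), len(ref_fp))
        dist = np.count_nonzero(query_fp[:n] != ref_fp[:n]) / n
        if dist < best_dist:
            best_id, best_dist = ref_id, dist
    return best_id, best_dist
```

    In practice the linear scan in match() is replaced by inverted indexes or hash-table lookups, which is exactly the database and quick-search knowledge the abstract refers to.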

    Panako: a scalable acoustic fingerprinting system handling time-scale and pitch modification

    In this paper, a scalable granular acoustic fingerprinting system robust against time- and pitch-scale modification is presented. The aim of acoustic fingerprinting is to identify identical, or recognize similar, audio fragments in a large set using condensed representations of audio signals, i.e. fingerprints. A robust fingerprinting system generates similar fingerprints for perceptually similar audio signals. The new system presented here handles a variety of distortions well. It is designed to be robust against pitch shifting, time stretching and tempo changes, while remaining scalable. After a query, the system returns the start time in the reference audio and the amount of pitch shift and tempo change that has been applied. The design of the system that offers this unique combination of features is the main contribution of this research. The fingerprint itself consists of a combination of key points in a Constant-Q spectrogram. The system is evaluated on commodity hardware using a freely available reference database with fingerprints of over 30,000 songs. The results show that the system responds quickly and reliably to queries, while handling time- and pitch-scale modifications of up to ten percent.
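    The sketch below shows, under stated assumptions, how fingerprints can be built from key points in a Constant-Q spectrogram: local spectral peaks are combined into triples, and the hash stores relative quantities (a time-delta ratio and frequency-bin differences) that change little under moderate time stretching or pitch shifting. This is an illustration of the idea, not the actual Panako implementation.

```python
# Illustrative peak-triple fingerprints over a Constant-Q magnitude spectrogram.
# Ratios of time deltas and differences of CQT bins keep the hash approximately
# invariant under moderate time-scale and pitch-scale modification.
import numpy as np

def spectral_peaks(cqt_mag, neighbourhood=3, threshold=0.0):
    """Return (frame, bin) coordinates of local maxima in a CQT magnitude array."""
    peaks = []
    t_max, f_max = cqt_mag.shape
    for t in range(neighbourhood, t_max - neighbourhood):
        for f in range(neighbourhood, f_max - neighbourhood):
            patch = cqt_mag[t - neighbourhood:t + neighbourhood + 1,
                            f - neighbourhood:f + neighbourhood + 1]
            if cqt_mag[t, f] >= patch.max() and cqt_mag[t, f] > threshold:
                peaks.append((t, f))
    return peaks

def fingerprints(peaks, fan_out=3):
    """Combine triples of nearby peaks into hashes built from relative values."""
    prints = []
    for i, (t1, f1) in enumerate(peaks):
        targets = peaks[i + 1:i + 1 + fan_out]
        for (t2, f2), (t3, f3) in zip(targets, targets[1:]):
            dt_ratio = (t2 - t1) / max(t3 - t1, 1)   # tolerant to time stretching
            df1, df2 = f2 - f1, f3 - f2              # tolerant to pitch shifting (CQT bins)
            prints.append((round(dt_ratio, 2), df1, df2, t1))
    return prints
```

    Storing the absolute time t1 alongside each relative hash is what lets a lookup report the start time in the reference audio after a match.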

    A Fingerprint Matching Model using Unsupervised Learning Approach

    The increase in the number of information systems and networks interconnected to the Internet has led to an increase in security threats and violations such as unauthorised remote access. Existing network technologies and communication protocols are not well designed to deal with such problems. The recent explosive growth of the Internet has allowed unwelcome visitors to gain access to private information and to resources such as financial institutions, hospitals, airports, etc. Those resources comprise mission-critical systems and information which rely on certain techniques to achieve effective security. With the increasing use of IT technologies for managing information, there is a need for stronger authentication mechanisms such as biometrics, which are expected to replace many traditional authentication and identification solutions. Providing appropriate authentication and identification mechanisms such as biometrics not only ensures that the right users have access to resources with the right privileges, but also enables cybercrime forensics specialists to gather useful evidence whenever needed. In addition, mission-critical resources and applications require mechanisms to detect when legitimate users try to misuse their privileges; biometrics certainly helps to provide such services. This paper investigates the field of biometrics as one of the recently developed mechanisms for user authentication and evidence gathering, despite its limitations. A biometric-based solution model is proposed using statistical unsupervised learning approaches for fingerprint matching. The proposed matching algorithm is based on three similarity measures: the Cosine similarity measure, the Manhattan distance measure and the Chebyshev distance measure. In this paper, we introduce a model which uses those similarity measures to compute a fingerprint's matching factor. The calculated matching factor is compared against a threshold value which a forensic specialist can use to decide whether a suspicious user is actually the person they claim to be. A freely available fingerprint biometric SDK has been used to develop and implement the suggested algorithm. The major findings of the experiments show promising results in terms of the performance of all the proposed similarity measures.
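    A minimal sketch of the three similarity measures named above, applied to fingerprint feature vectors, is given below. The way the measures are combined into a single matching factor, the feature vectors, and the threshold value are assumptions for illustration; the paper's actual model may weight or combine them differently.

```python
# Cosine, Manhattan and Chebyshev measures over fingerprint feature vectors,
# combined into a single matching factor compared against a threshold.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def manhattan_distance(a, b):
    return float(np.sum(np.abs(a - b)))

def chebyshev_distance(a, b):
    return float(np.max(np.abs(a - b)))

def matching_factor(template, candidate):
    """Combine the three measures into a single score in [0, 1] (illustrative fusion)."""
    cos = cosine_similarity(template, candidate)
    # Convert the two distances into similarities so all terms agree in direction.
    man = 1.0 / (1.0 + manhattan_distance(template, candidate))
    che = 1.0 / (1.0 + chebyshev_distance(template, candidate))
    return (cos + man + che) / 3.0

# Usage: accept the identity claim only if the factor clears a tuned threshold.
template = np.random.rand(64)          # enrolled fingerprint feature vector (dummy data)
candidate = template + 0.05 * np.random.rand(64)
THRESHOLD = 0.8                        # assumed value; tuned experimentally in practice
print(matching_factor(template, candidate) >= THRESHOLD)
```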

    The study of probability model for compound similarity searching

    The main task of an Information Retrieval (IR) system is to retrieve relevant documents according to the user's query. One of the most popular IR retrieval models is the Vector Space Model. This model assumes relevance based on similarity, which is defined as the distance between query and document in the concept space. All currently existing chemical compound database systems have adapted the vector space model to calculate the similarity of a database entry to a query compound. However, this model assumes that the fragments represented by the bits are independent of one another, which is not necessarily true. Hence, the possibility of applying another IR model, the Probabilistic Model, to chemical compound searching is explored. This model estimates the probability that a chemical structure has the same bioactivity as a target compound. It is envisioned that by ranking chemical structures in decreasing order of their probability of relevance to the query structure, the effectiveness of a molecular similarity searching system can be increased. Both the fragment-dependence and fragment-independence assumptions are taken into consideration in improving the compound similarity searching system. After conducting a series of simulated similarity searches, it is concluded that the Probabilistic Model approaches perform better than the existing similarity searching, giving better results on all evaluation criteria. In terms of which probability model performs better, the BD model showed an improvement over the BIR model.
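    To contrast the two retrieval models over binary fragment fingerprints, the sketch below pairs a vector-space-style similarity (the Tanimoto coefficient commonly used for compound bit strings) with one standard instantiation of probabilistic ranking under the binary independence assumption (Robertson-Sparck Jones weights). The data handling and smoothing constants are assumptions for illustration, not the paper's exact BD or BIR formulation.

```python
# Vector-space vs. probabilistic ranking over binary molecular fingerprints.
import numpy as np

def tanimoto(query, candidate):
    """Vector-space style similarity for bit fingerprints (Tanimoto coefficient)."""
    q, c = np.asarray(query, bool), np.asarray(candidate, bool)
    union = np.logical_or(q, c).sum()
    return np.logical_and(q, c).sum() / union if union else 0.0

def bir_score(candidate, actives, database):
    """Binary-independence ranking: sum of log-odds weights of present fragments.

    `actives` are fingerprints of compounds known to share the target bioactivity
    and are assumed to be a subset of `database` (all reference fingerprints).
    """
    act = np.asarray(actives, float)
    db = np.asarray(database, float)
    n, N = act.shape[0], db.shape[0]
    r = act.sum(axis=0)                  # active compounds containing each fragment
    f = db.sum(axis=0)                   # all compounds containing each fragment
    # Robertson-Sparck Jones weight with 0.5 smoothing against zero counts.
    w = np.log(((r + 0.5) * (N - f - n + r + 0.5)) /
               ((n - r + 0.5) * (f - r + 0.5)))
    return float(np.dot(np.asarray(candidate, float), w))
```

    Ranking the database by bir_score in decreasing order implements the "decreasing order of probability of relevance" idea described above, while tanimoto gives the vector-space baseline it is compared against.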