
    Offline speaker segmentation using genetic algorithms and mutual information

    We present an evolutionary approach to speaker segmentation, a task that is especially important prior to speaker recognition and audio content analysis. Our approach consists of a genetic algorithm (GA), which encodes possible segmentations of an audio record, and a measure of mutual information between the audio data and a candidate segmentation, which serves as the GA's fitness function. We introduce a compact encoding of the problem that reduces the length of the GA individuals and improves the GA's convergence properties. The algorithm has been tested on the segmentation of real audio data, and its performance has been compared with several existing speaker segmentation algorithms, obtaining very good results on all test problems. This work was supported in part by the Universidad de Alcalá under Project UAH PI2005/078.
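    The fitness function described above can be sketched in a few lines: the mutual information between quantized frame features and the segment labels induced by a set of boundary positions, maximized by a simple GA. This is an illustrative toy, not the authors' implementation; the symbol data, single-boundary compact encoding, and GA settings are all invented for brevity:

```python
import math
import random
from collections import Counter

def segment_labels(n, boundaries):
    """Map each of n frames to the index of the segment it falls in."""
    labels, s, b = [], 0, sorted(boundaries)
    for i in range(n):
        while s < len(b) and i >= b[s]:
            s += 1
        labels.append(s)
    return labels

def mutual_information(symbols, labels):
    """I(symbol; segment) from empirical joint and marginal counts."""
    n = len(symbols)
    joint, px, py = Counter(zip(symbols, labels)), Counter(symbols), Counter(labels)
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

random.seed(0)
# toy "audio": two speakers emitting differently distributed quantized-feature symbols
frames = [random.choice("aab") for _ in range(50)] + [random.choice("bcc") for _ in range(50)]

# compact encoding: an individual is just the list of boundary positions
def fitness(ind):
    return mutual_information(frames, segment_labels(len(frames), ind))

pop = [[random.randrange(1, len(frames))] for _ in range(30)]
for _ in range(40):                      # truncation selection + mutation
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [[max(1, min(len(frames) - 1, p[0] + random.randint(-5, 5)))]
                   for p in elite for _ in range(2)]

best = max(pop, key=fitness)             # boundary should land near frame 50
```

    Because an individual stores only the change points rather than a label per frame, the search space shrinks dramatically, which is the intuition behind the compact encoding mentioned in the abstract.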

    Analytic Predictive of Hepatitis using The Regression Logic Algorithm

    Hepatitis is an inflammation of the liver and one of the diseases that affect the health of millions of people of all ages worldwide. Predicting the outcome of this disease is challenging; the main difficulty for public health care services is the limited scope of clinical diagnosis at an early stage. By applying machine learning techniques to existing data, namely by inferring diagnostic rules that reveal trends in hepatitis patient data and the factors affecting patients with hepatitis, the diagnosis process can be made more reliable and patient care improved. One approach for this prediction task is regression, which models the relationship between the independent variables and the dependent variable. Using the hepatitis dataset from the UCI Machine Learning Repository, this study applies a logistic regression model that achieves an accuracy of 83.33%.
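    A minimal version of the logistic regression fit described here can be sketched as follows. The UCI hepatitis data is not bundled, so this trains on synthetic stand-in features by plain batch gradient descent; the feature count, learning rate, and data are invented for illustration, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for clinical features (the study uses the UCI hepatitis set)
n = 200
X = rng.normal(size=(n, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true > 0).astype(float)        # labels from a hidden linear rule

# logistic regression fitted by gradient descent on the log-loss
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
```

    The fitted weights play the role of the diagnostic rule: the sign and magnitude of each coefficient indicate how that feature shifts the predicted probability of the outcome.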

    Unsupervised video indexing on audiovisual characterization of persons

    This thesis proposes a method for the unsupervised characterization of persons in audiovisual documents, exploiting data related to their physical appearance and their voice. In general, automatic recognition methods, whether for video or audio, require a large amount of a priori knowledge about the content. In this work, the goal is to study the two modalities in a correlated way and to exploit their respective properties collaboratively and robustly, in order to produce a reliable result that is as independent as possible of any a priori knowledge. More specifically, we studied the characteristics of the audio stream and proposed several methods for speaker segmentation and clustering, which we evaluated in a French evaluation campaign. We then carried out an in-depth study of visual descriptors (face, clothing) that led us to propose new approaches for detecting, tracking, and clustering the people appearing in a document. Finally, the work focused on audiovisual fusion, proposing an approach based on the computation of a cooccurrence matrix that allowed us to establish an association between the audio index and the video index and to correct them. This enables us to produce a dynamic audiovisual model of each speaker.
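    The cooccurrence-based association in the fusion step can be illustrated with a toy example. The per-frame labels below are invented; the real system would use the outputs of the audio and video clusterings:

```python
import numpy as np

# toy per-frame cluster labels from two independent indexings
audio = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])   # speaker clusters
video = np.array([1, 1, 1, 0, 0, 0, 2, 2, 2, 2])   # face/clothing clusters

# cooccurrence matrix: C[a, v] = frames labelled a by audio and v by video
C = np.zeros((audio.max() + 1, video.max() + 1), dtype=int)
for a, v in zip(audio, video):
    C[a, v] += 1

# associate each audio cluster with its most frequently co-occurring video cluster
mapping = C.argmax(axis=1)    # audio 0 -> video 1, audio 1 -> video 0, audio 2 -> video 2
```

    Once such a mapping exists, disagreements between the two indexes (frames where the audio label and the mapped video label conflict) can be flagged and corrected, which is the role of the correction step the abstract mentions.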

    Influence Distribution Training Data on Performance Supervised Machine Learning Algorithms

    Almost all areas of life require banknotes; some, such as banks, transportation companies, and casinos, require them in large quantities. Banknotes are therefore an essential component of everyday activities, especially those related to finance. Technological advances such as scanners and copy machines give anyone the opportunity to commit a crime such as counterfeiting banknotes. Many people still find it difficult to distinguish a genuine banknote from a counterfeit one, because counterfeit banknotes bear a high degree of resemblance to genuine ones. Against this background, the authors perform a classification process to distinguish genuine banknotes from counterfeit ones. The classification uses supervised learning methods and compares their accuracy under different splits of the training data. The supervised learning methods used are Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Naïve Bayes. Of the three methods, K-NN achieved the highest specificity, sensitivity, and accuracy at training splits of 30%, 50%, and 80%. With 30% and 50% training data it reached specificity 0.99, sensitivity 1.00, and accuracy 0.99; with 80% training data, specificity 1.00, sensitivity 1.00, and accuracy 1.00. This shows that the distribution of training data influences the performance of supervised machine learning algorithms: for K-NN, the more training data, the better the accuracy.
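    The effect of the training split can be reproduced in miniature. The sketch below runs a from-scratch K-NN (one of the three methods compared) at 30%, 50%, and 80% training fractions; the two-cluster data is synthetic, standing in for the banknote features rather than reproducing the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic two-class data standing in for genuine vs. counterfeit features
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
idx = rng.permutation(200)
X, y = X[idx], y[idx]

def knn_accuracy(train_frac, k=3):
    """Accuracy of k-NN when train_frac of the data is used for training."""
    n_tr = int(len(X) * train_frac)
    Xtr, ytr, Xte, yte = X[:n_tr], y[:n_tr], X[n_tr:], y[n_tr:]
    preds = []
    for x in Xte:
        nearest = ytr[np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]]
        preds.append(np.bincount(nearest).argmax())
    return (np.array(preds) == yte).mean()

for frac in (0.3, 0.5, 0.8):
    print(f"train={frac:.0%}  accuracy={knn_accuracy(frac):.2f}")
```

    On well-separated data like this, even a 30% split does well; the gap between splits grows as the classes overlap more, which is the sensitivity the paper measures.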

    Comparative Analysis of DDoS Detection Techniques Based on Machine Learning in OpenFlow Network

    Software Defined Networking (SDN) separates the control layer from data forwarding into two different layers. However, the centralized control system in SDN is vulnerable to attacks, notably distributed denial of service (DDoS). It is therefore necessary to develop a solution based on reactive applications that can identify, detect, and mitigate such attacks comprehensively. In this paper, an application has been built based on machine learning methods including Support Vector Machine (SVM) with linear and Radial Basis Function kernels, K-Nearest Neighbor (KNN), Decision Tree (DTC), Random Forest (RFC), Multi-Layer Perceptron (MLP), and Gaussian Naïve Bayes (GNB). The paper also proposes a new DDoS dataset scheme for SDN, gathered from static data in the form of switch port statistics. SVM proved the most effective method for identifying DDoS attacks, with accuracy, precision, and recall of approximately 100%, and can be considered the primary algorithm for detecting DDoS. In terms of promptness, KNN had the slowest rate for the whole process, while GNB was the fastest.
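    As a rough illustration of classifying port-statistics features, the sketch below fits a from-scratch Gaussian Naïve Bayes (the fastest of the methods compared) on invented traffic counters; the feature choices and magnitudes are assumptions, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(2)
# invented port-statistics features: [packets/s, bytes/s, flow count]
normal = rng.normal([100, 8e4, 20], [20, 1e4, 5], size=(300, 3))
ddos   = rng.normal([5000, 3e5, 400], [500, 5e4, 50], size=(300, 3))
X = np.vstack([normal, ddos])
y = np.array([0] * 300 + [1] * 300)       # 0 = benign, 1 = DDoS

# Gaussian Naive Bayes: per-class feature means/variances, equal priors
mu  = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
var = np.array([X[y == c].var(axis=0) for c in (0, 1)])

def predict(x):
    """Pick the class with the highest Gaussian log-likelihood."""
    ll = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var).sum(axis=1)
    return ll.argmax()

accuracy = np.mean([predict(x) == c for x, c in zip(X, y)])
```

    GNB's speed advantage noted in the abstract comes from this closed-form training: one pass to compute per-class means and variances, with no iterative optimization.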

    Towards an automatic speech recognition system for use by deaf students in lectures

    According to the Royal National Institute for Deaf People there are nearly 7.5 million hearing-impaired people in Great Britain. Human-operated machine transcription systems, such as Palantype, achieve low word error rates in real time. Their disadvantage is that they are very expensive to use because of the difficulty of training operators, making them impractical for everyday use in higher education. Existing automatic speech recognition systems also achieve low word error rates, but only for read speech in a restricted domain, and moving a system to a new domain requires a large amount of relevant data for training acoustic and language models. The adopted solution uses an existing continuous speech phoneme recognition system as a front end to a word recognition subsystem. The subsystem generates a lattice of word hypotheses using dynamic programming, with robust parameter estimation obtained using evolutionary programming. Sentence hypotheses are obtained by parsing the word lattice using a beam search and contributing knowledge consisting of anti-grammar rules, which check the syntactic incorrectness of word sequences, and word frequency information. On an unseen spontaneous lecture taken from the Lund Corpus, using a dictionary containing 2,637 words, the system achieved 81.5% words correct with 15% simulated phoneme error, and 73.1% words correct with 25% simulated phoneme error. The system was also evaluated on 113 Wall Street Journal sentences.
    The achievements of the work are: a domain-independent method, using the anti-grammar, to reduce the word lattice search space whilst allowing normal spontaneous English to be spoken; a system designed to allow integration with new sources of knowledge, such as semantics or prosody, providing a test bench for determining the impact of different knowledge sources on word lattice parsing without the need for the underlying speech recognition hardware; and the robustness of the word lattice generation, whose parameters withstand changes in vocabulary and domain.
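    The lattice-parsing step can be illustrated with a minimal beam search over a toy word lattice. The lattice, scores, and beam width here are invented; the real system additionally applies anti-grammar rules and word frequency knowledge when scoring hypotheses:

```python
import heapq

# toy word lattice: start position -> [(word, end position, log score)]
lattice = {
    0: [("the", 1, -0.1), ("a", 1, -0.9)],
    1: [("cat", 2, -0.3), ("cap", 2, -0.8)],
    2: [("sat", 3, -0.2)],
}
END, BEAM = 3, 2

# beam search: expand every live hypothesis, keep the BEAM best each round
beams = [(0.0, 0, [])]                    # (total log score, position, words)
while any(pos != END for _, pos, _ in beams):
    expanded = []
    for score, pos, words in beams:
        if pos == END:                    # finished hypotheses carry over
            expanded.append((score, pos, words))
            continue
        for word, end, s in lattice[pos]:
            expanded.append((score + s, end, words + [word]))
    beams = heapq.nlargest(BEAM, expanded, key=lambda t: t[0])

best_score, _, best_words = max(beams, key=lambda t: t[0])
```

    In the thesis's design, hypotheses violating an anti-grammar rule would be penalized or pruned at the expansion step, shrinking the search space without constraining spontaneous English to a fixed grammar.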

    Video surveillance systems-current status and future trends

    This survey attempts to document the present status of video surveillance systems. The main components of a surveillance system are presented and studied thoroughly. Algorithms for image enhancement, object detection, object tracking, object recognition and item re-identification are presented. The most common modalities utilized by surveillance systems are discussed, with emphasis on video, in terms of available resolutions and new imaging approaches such as High Dynamic Range video. The most important features and analytics are presented, along with the most common approaches for image and video quality enhancement. Distributed computational infrastructures (Cloud, Fog and Edge Computing) are discussed, describing the advantages and disadvantages of each approach. The most important deep learning algorithms are presented, along with the smart analytics they utilize. Finally, augmented reality and the role it can play in a surveillance system are reported, before discussing the challenges and future trends of surveillance.