
    Fractal based speech recognition and synthesis

    Transmitting a linguistic message is most often the primary purpose of speech communication, and it is the recognition of this message by machine that would be most useful. This research consists of two major parts. The first part presents a novel and promising approach for estimating the degree of recognition of speech phonemes and makes use of a new set of features based on fractals. The main methods of computing the fractal dimension of speech signals are reviewed, and a new speaker-independent speech recognition system developed at De Montfort University is described in detail. Finally, a least-squares method as well as a novel neural network algorithm is employed to derive the recognition performance of the speech data. The second part of this work studies the synthesis of speech words, based mainly on the fractal dimension, to create natural-sounding speech. The work shows that by careful use of the fractal dimension together with the phase of the speech signal, to ensure consistent intonation contours, natural-sounding speech synthesis is achievable at the word level. To extend the flexibility of this framework, we focused on the filtering and the compression of the phase to maintain and produce natural-sounding speech. A ‘naturalness level’ is achieved as a result of the fractal characteristic used in the synthesis process. Finally, a novel fractal-based speech synthesis system developed at De Montfort University is discussed. Throughout this research, simulation experiments were performed on continuous speech data from the Texas Instruments/Massachusetts Institute of Technology (TIMIT) database, which is designed to provide the speech research community with a standardised corpus for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems.
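
    As an illustration of the kind of feature the abstract refers to, the sketch below estimates the fractal dimension of a short speech frame with Higuchi's method, one common estimator. It is only a minimal example and not necessarily the estimator or parameter choices used in the thesis; the `frame` array in the usage comment is hypothetical.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of a 1-D signal using Higuchi's method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean_lengths = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)  # subsampled curve x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # normalised curve length for this offset m and delay k
            l_mk = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k)
            lengths.append(l_mk)
        mean_lengths.append(np.mean(lengths))
    k_vals = np.arange(1, k_max + 1)
    # L(k) ~ k^(-D), so the slope of log L(k) against log(1/k) estimates D
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
    return slope

# Example: fractal dimension of a 20 ms frame of speech samples (hypothetical array)
# fd = higuchi_fd(frame, k_max=8)
```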

    Identification of persons via voice imprint

    This work deals with text-dependent speaker recognition in systems where only a few training samples are available. For this purpose, a voice imprint based on different features (e.g. MFCC, PLP, ACW) is proposed. The work first describes how the speech signal is produced and notes some speech characteristics that are important for speaker recognition. The next part deals with analysis of the speech signal, covering preprocessing and feature extraction methods. The following part describes the speaker recognition process and the evaluation of the methods used: speaker identification and verification. The last theoretical part deals with classifiers suitable for text-dependent recognition; classifiers based on fractional distances, dynamic time warping, dispersion matching and vector quantization are discussed. The work concludes with the design and realization of a system that evaluates all of the described classifiers for voice imprints based on the different features.
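
    To make the feature-plus-classifier idea concrete, here is a minimal sketch of text-dependent matching with MFCC features and dynamic time warping, two of the methods named in the abstract. It assumes librosa is available for MFCC extraction; the file names and the `templates` dictionary in the usage comment are hypothetical, and the thesis's other feature sets (PLP, ACW) and classifiers are not reproduced.

```python
import numpy as np
import librosa  # assumed available for MFCC extraction

def mfcc_features(path, n_mfcc=13):
    """Load a recording and return its MFCC sequence (frames x coefficients)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# A test utterance is assigned to the enrolled speaker with the smallest distance:
# scores = {name: dtw_distance(mfcc_features("test.wav"), ref)
#           for name, ref in templates.items()}
```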

    Frequency specific deficits in schizophrenia

    Coherence estimation is one method for understanding functional connectivity deficits and frequency-specific deficits in schizophrenia. Coherence between different lobes of the brain, computed from task-related data at different frequency bands, was investigated in patients with schizophrenia (SP) and healthy normal volunteers (HNV). The task was designed to study the neural mechanisms underlying auditory and visual integration in patients with schizophrenia relative to healthy controls, which requires intact connectivity between the lobes of the brain in order to recombine the sensory information into a complete percept of the external world. Coherence was calculated from the processed magnetoencephalography (MEG) data for each pair of lobes (left temporal and parietal, left temporal and occipital, right temporal and parietal, right temporal and occipital, parietal and occipital) in the delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (30-100 Hz) frequency bands. Analysis of variance (ANOVA) was performed on the coherence data of 30 subjects, comprising 15 patients and 15 controls. There was a significant interaction between frequency and diagnosis with age. Significant differences were found between patients and controls in the delta frequency band, which was confirmed with Bonferroni-corrected t-tests at the delta frequency range in each pair of regions. Patients had higher coherence than controls in the delta frequency band, and this effect was significant across lobes, which suggests abnormal MEG coherence during evoked activity in patients with schizophrenia.
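
    For readers unfamiliar with the measure, the sketch below computes band-averaged magnitude-squared coherence between two sensor time series using SciPy. It only illustrates the quantity being compared across groups and does not reproduce the study's MEG preprocessing, lobe-level sensor grouping, or statistical analysis.

```python
import numpy as np
from scipy.signal import coherence

BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_coherence(x, y, fs, nperseg=1024):
    """Mean magnitude-squared coherence between two sensor signals per band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    return {name: float(np.mean(cxy[(f >= lo) & (f < hi)]))
            for name, (lo, hi) in BANDS.items()}

# Example with two hypothetical sensor traces sampled at 600 Hz:
# values = band_coherence(left_temporal, parietal, fs=600)
```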

    A Comprehensive Review on Audio based Musical Instrument Recognition: Human-Machine Interaction towards Industry 4.0

    Over the last two decades, the application of machine technology has shifted from industrial to residential use. Further, advances in the hardware and software sectors have led machine technology to its utmost application, human-machine interaction, a form of multimodal communication. Multimodal communication refers to the integration of various modalities of information such as speech, image, music, gesture, and facial expressions. Music is a non-verbal type of communication that humans often use to express their minds. Thus, Music Information Retrieval (MIR) has become a booming field of research and has gained a lot of interest from the academic community, the music industry, and the vast body of multimedia users. The central problem in MIR is accessing and retrieving a specific type of music, as demanded, from extensive music data, and the most inherent problem in MIR is music classification. The essential MIR tasks are artist identification, genre classification, mood classification, music annotation, and instrument recognition. Among these, instrument recognition is a vital sub-task in MIR for various reasons, including retrieval of music information, sound source separation, and automatic music transcription. In recent years, many researchers have reported different machine learning techniques for musical instrument recognition and proved some of them to be good ones. This article provides a systematic, comprehensive review of the advanced machine learning techniques used for musical instrument recognition. We focus on different audio feature descriptors and common choices of classifier learning used for musical instrument recognition. The review emphasizes recent developments in music classification techniques and discusses a few associated future research problems.
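
    A typical pipeline of the kind such reviews survey pairs hand-crafted audio descriptors with a conventional classifier. The sketch below, assuming librosa and scikit-learn are available, summarises each clip with mean MFCCs plus spectral centroid and bandwidth and feeds the vectors to an SVM; the training arrays and file name in the comments are hypothetical, and the review itself covers many other feature and classifier choices.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def clip_features(path):
    """Summarise a clip with mean MFCCs plus spectral centroid and bandwidth."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr).mean()
    return np.concatenate([mfcc, [centroid, bandwidth]])

# X: feature vectors for labelled training clips, y: instrument labels (hypothetical)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, y)
# prediction = clf.predict([clip_features("unknown_clip.wav")])
```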

    Temporal lobe white matter asymmetry and language laterality in epilepsy patients.

    Recent studies using diffusion tensor imaging (DTI) have advanced our knowledge of the organization of white matter subserving language function. It remains unclear, however, how DTI may be used to predict accurately a key feature of language organization: its asymmetric representation in one cerebral hemisphere. In this study of epilepsy patients with unambiguous lateralization on Wada testing (19 left- and 4 right-lateralized subjects; no bilateral subjects), the predictive value of DTI for classifying the dominant hemisphere for language was assessed relative to the existing standard, the intracarotid Amytal (Wada) procedure. Our specific hypothesis is that language laterality in both unilateral left- and right-hemisphere language-dominant subjects may be predicted by hemispheric asymmetry in the relative density of three white matter pathways terminating in the temporal lobe that are implicated in different aspects of language function: the arcuate (AF), uncinate (UF), and inferior longitudinal fasciculi (ILF). Laterality indices computed from asymmetry of high-anisotropy AF pathways, but not the other pathways, classified the majority (19 of 23) of patients using the Wada results as the standard. A logistic regression model incorporating information from DTI of the AF, fMRI activity in Broca's area, and handedness was able to classify 22 of 23 (95.6%) patients correctly according to their Wada score. We conclude that evaluation of highly anisotropic components of the AF alone has significant predictive power for determining language laterality, and that this markedly asymmetric distribution in the dominant hemisphere may reflect enhanced connectivity between frontal and temporal sites supporting fluent language processes. Given the small sample reported in this preliminary study, future research should assess this method in a larger group of patients, including subjects with bi-hemispheric dominance.
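
    For concreteness, a laterality index of the usual form LI = (L - R) / (L + R) and a logistic regression over the three predictors mentioned in the abstract might be sketched as follows. The variable names and data arrays in the comments are hypothetical, and this is not the study's exact model or preprocessing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def laterality_index(left_count, right_count):
    """Standard asymmetry index: +1 is fully left-lateralised, -1 fully right."""
    return (left_count - right_count) / (left_count + right_count)

# Hypothetical predictors per patient: AF streamline asymmetry, an fMRI
# laterality index for Broca's area, and handedness (+1 right, -1 left).
# X = np.column_stack([af_li, fmri_li, handedness])
# y = wada_dominance           # 1 = left-dominant, 0 = right-dominant (Wada)
# model = LogisticRegression().fit(X, y)
# predicted = model.predict(X)
```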

    Electrophysiological methods


    Development of a sensory substitution API

    Sensory substitution, the practice of mapping information from one sensory modality to another, has been shown to be a viable technique for non-invasive sensory replacement and augmentation. With the rise in popularity, ubiquity, and capability of mobile devices and wearable electronics, sensory substitution research has seen a resurgence in recent years. Because of the standard features of mobile and wearable electronics such as Bluetooth, multicore processing, and audio recording, these devices can be used to drive sensory substitution systems. There is therefore a need for a flexible, extensible software package capable of performing the required real-time data processing for sensory substitution on modern mobile devices. The primary contribution of this thesis is the development and release of an open-source Application Programming Interface (API) capable of managing an audio stream from the source of sound to a sensory stimulus interface on the body. The API (named Tactile Waves) is written in the Java programming language and packaged as both a Java library (JAR) and an Android library (AAR). The development and design of the library are presented, and its primary functions are explained. Implementation details for each primary function are discussed. Performance evaluation of all processing routines is performed to ensure real-time capability, and the results are summarized. Finally, future improvements to the library and additional applications of sensory substitution are proposed.
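
    As a rough illustration of the audio-to-stimulus mapping such a system performs, the sketch below converts one audio frame into per-channel stimulus intensities by summing energy in log-spaced frequency bands. It is a conceptual example only, in Python for brevity, and does not use or reflect the actual Tactile Waves API; the frame, sample rate, and channel count are hypothetical.

```python
import numpy as np

def frame_to_stimulus(frame, fs, n_channels=8):
    """Map one audio frame to per-channel stimulus intensities in [0, 1].

    Energy in log-spaced frequency bands drives one tactile channel each;
    this mirrors the audio-to-stimulus idea, not the Tactile Waves API.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    edges = np.logspace(np.log10(100), np.log10(fs / 2), n_channels + 1)
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])
    return energies / energies.max() if energies.max() > 0 else energies
```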