60 research outputs found

    An Overview of Indian Spoken Language Recognition from Machine Learning Perspective

    Automatic spoken language identification (LID) is an important research field in the era of multilingual voice-command-based human-computer interaction (HCI). A front-end LID module helps improve the performance of many speech-based applications in multilingual scenarios. India is a populous country with diverse cultures and languages, and the majority of the Indian population needs to use their respective native languages for verbal interaction with machines. The development of efficient Indian spoken language recognition systems is therefore useful for adopting smart technologies in every section of Indian society. The field of Indian LID has gained momentum in the last two decades, mainly due to the development of several standard multilingual speech corpora for Indian languages. Even though significant research progress has already been made in this field, to the best of our knowledge there have been few attempts to review it analytically and collectively. In this work, we conduct one of the first comprehensive reviews of the Indian spoken language recognition research field. An in-depth analysis is presented to emphasize the unique challenges of low resources and mutual influences among languages for developing LID systems in the Indian context. Several essential aspects of Indian LID research are discussed: detailed descriptions of the available speech corpora, the major research contributions (from earlier attempts based on statistical modeling to recent approaches based on different neural network architectures), and future research trends. This review will help any active researcher, or any research enthusiast from related fields, assess the state of present Indian LID research.

    Affect Recognition in Human Emotional Speech using Probabilistic Support Vector Machines

    The problem of automatically inferring human emotional state from speech has become one of the central problems in Man-Machine Interaction (MMI). Although Support Vector Machines (SVMs) have been used in several works on emotion recognition from speech, the potential of probabilistic SVMs for this task has not been explored. The emphasis of the current work is on how to use probabilistic SVMs for the efficient recognition of emotions from speech. Emotional speech corpora for two Dravidian languages, Telugu and Tamil, were constructed for assessing the recognition accuracy of probabilistic SVMs. The recognition accuracy of the proposed model is analyzed on both the Telugu and Tamil emotional speech corpora and compared with three existing works. Experimental results indicate that the proposed model performs significantly better than the existing methods.
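The probabilistic SVM idea described above is commonly realized via Platt scaling, which maps SVM decision values to class posteriors. A minimal sketch using scikit-learn follows; the synthetic feature clusters and the three emotion labels are placeholders for the real acoustic features and corpora the work describes, not the authors' actual data or pipeline.

```python
# Sketch: probabilistic SVM for emotion classification via Platt-scaled
# outputs in scikit-learn. The toy features below stand in for real
# acoustic features (e.g. per-utterance MFCC statistics); the labels
# and data are invented for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_feats = 40, 13          # e.g. 13 MFCC means per utterance
emotions = ["anger", "happiness", "sadness"]

# Synthetic Gaussian clusters, one per emotion class (placeholder data).
X = np.vstack([rng.normal(loc=i * 2.0, size=(n_per_class, n_feats))
               for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# probability=True enables Platt scaling: decision values are mapped to
# class posteriors through an internally cross-validated sigmoid fit.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)        # shape: (n_test, n_classes)
```

Each row of `probs` is a posterior distribution over the emotion classes, which is what distinguishes a probabilistic SVM from one that emits only hard labels.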

    The phonetics and phonology of retroflexes: Fonetiek en fonologie van retroflexen (met een samenvatting in het Nederlands)

    At the outset of this dissertation, one might ask why retroflex consonants should still be of interest for phonetics and for phonological theory, since ample work on this segmental class already exists. Bhat (1973) conducted a quite extensive study on retroflexion that treated the geographical spread of this class, some phonological processes its members can undergo, and the phonetic motivation for these processes. Furthermore, several phonological representations of retroflexes have been proposed in the framework of Feature Geometry, as in work by Sagey (1986), Pulleyblank (1989), Gnanadesikan (1993), and Clements (2001). Most recently, Steriade (1995, 2001) has discussed the perceptual cues of retroflexes and has argued that the distribution of these cues can account for the phonotactic restrictions on retroflexes and their assimilatory behaviour. Purely phonetically oriented studies such as Dixit (1990) and Simonsen, Moen & Cowen (2000) have shown the large articulatory variation that can be found for retroflexes and hint at the insufficiency of existing definitions.

    Is Attention always needed? A Case Study on Language Identification from Speech

    Language Identification (LID) is a crucial preliminary process in the field of Automatic Speech Recognition (ASR) that involves identifying the spoken language from audio samples. Contemporary systems that can process speech in multiple languages require users to expressly designate one or more languages prior to use. The LID task plays a significant role in scenarios where ASR systems cannot determine the spoken language in multilingual settings, leading to unsuccessful speech recognition. The present study introduces a convolutional recurrent neural network (CRNN) based LID system designed to operate on the Mel-frequency Cepstral Coefficient (MFCC) features of audio samples. Furthermore, we replicate certain state-of-the-art methodologies, specifically the Convolutional Neural Network (CNN) and the attention-based Convolutional Recurrent Neural Network (CRNN with attention), and conduct a comparative analysis against our CRNN-based approach. We conducted comprehensive evaluations on thirteen distinct Indian languages, and our model achieved over 98% classification accuracy. The LID model exhibits high performance, ranging from 97% to 100%, for languages that are linguistically similar. The proposed LID model extends readily to additional languages and demonstrates strong resistance to noise, achieving 91.2% accuracy in a noisy setting when applied to a European Language (EU) dataset. (Accepted for publication in Natural Language Engineering.)
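The MFCC features that the CRNN above consumes follow a standard pipeline: framing and windowing, a power spectrum, a mel-scale filterbank, a log, and a DCT. A from-scratch NumPy sketch of that pipeline follows; the parameter values (16 kHz audio, 25 ms frames, 26 filters, 13 coefficients) are common defaults, not the paper's settings, and the sine-wave input merely stands in for real speech.

```python
# Sketch: computing MFCC features from a raw waveform with NumPy only.
# Pipeline: framing -> windowing -> FFT power spectrum -> mel filterbank
# -> log -> DCT. Parameter values are common defaults, not the paper's.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13):
    # 1. Slice the signal into overlapping, Hamming-windowed frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Log mel-filterbank energies (small epsilon avoids log(0)).
    energies = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # 4. DCT-II decorrelates the log energies; keep the low-order coeffs.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs),
                                  (2 * n + 1) / (2.0 * n_filters)))
    return energies @ dct.T            # shape: (n_frames, n_coeffs)

# One second of a synthetic 440 Hz tone stands in for real speech.
t = np.arange(16000) / 16000.0
feats = mfcc(np.sin(2 * np.pi * 440.0 * t))
```

The resulting (frames × 13) matrix is the kind of two-dimensional feature map that a CRNN's convolutional front end consumes before the recurrent layers model its temporal structure; production systems typically use a tuned library implementation (e.g. librosa or torchaudio) rather than this sketch.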

    An exploration of the rhythm of Malay

    In recent years there has been a surge of interest in speech rhythm. However, we still lack a clear understanding of the nature of rhythm and of rhythmic differences across languages. Various metrics have been proposed for measuring rhythm at the phonetic level and making typological comparisons between languages (Ramus et al., 1999; Grabe & Low, 2002; Dellwo, 2006), but debate is ongoing about the extent to which these metrics capture the rhythmic basis of speech (Arvaniti, 2009; Fletcher, in press). Furthermore, cross-linguistic studies of rhythm have covered a relatively small number of languages, and research on previously unclassified languages is necessary to fully develop the typology of rhythm. This study examines the rhythmic features of Malay, for which, to date, relatively little work has been carried out on aspects of rhythm and timing. The material for the analysis comprised 10 sentences produced by 20 speakers of standard Malay (10 male and 10 female). The recordings were first analysed using the rhythm metrics proposed by Ramus et al. (1999) and Grabe & Low (2002). These metrics (ΔC, %V, rPVI, nPVI) are based on durational measurements of vocalic and consonantal intervals. The results indicated that Malay clusters with other so-called syllable-timed languages like French and Spanish on all metrics. However, underlying the overall findings there was a large degree of variability in values across speakers and sentences, with some speakers having values in the range typical of stress-timed languages like English.
Further analysis was carried out in light of Fletcher's (in press) argument that measurements based on duration do not wholly reflect speech rhythm, as many other factors can influence the values of consonantal and vocalic intervals, and Arvaniti's (2009) suggestion that other features of speech should also be considered in descriptions of rhythm to discover what contributes to listeners' perception of regularity. Spectrographic analysis of the Malay recordings brought to light two parameters that displayed consistency and regularity across all speakers and sentences: the duration of individual vowels and the duration of intervals between intensity minima. This poster presents the results of these investigations and points to connections between the features that appear to be consistently regulated in the timing of Malay connected speech and aspects of Malay phonology. The results are discussed in light of the current debate on descriptions of rhythm.
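The duration-based metrics named above have standard definitions: %V is the vocalic proportion of total duration, ΔC the standard deviation of consonantal interval durations, and the raw and normalized Pairwise Variability Indices (rPVI, nPVI) average the (optionally rate-normalized) differences between successive intervals. A minimal sketch follows; the interval durations are invented for illustration, not Malay measurements.

```python
# Sketch: the duration-based rhythm metrics %V, ΔC, rPVI and nPVI,
# computed from a hand-labelled sequence of vocalic ("V") and
# consonantal ("C") interval durations. Durations below are invented.
from statistics import pstdev

def percent_v(intervals):
    # %V: proportion of total utterance duration that is vocalic.
    total = sum(d for _, d in intervals)
    return 100.0 * sum(d for kind, d in intervals if kind == "V") / total

def delta_c(intervals):
    # ΔC: standard deviation of consonantal interval durations.
    return pstdev([d for kind, d in intervals if kind == "C"])

def rpvi(durations):
    # Raw Pairwise Variability Index: mean absolute difference
    # between successive interval durations.
    diffs = [abs(a - b) for a, b in zip(durations, durations[1:])]
    return sum(diffs) / len(diffs)

def npvi(durations):
    # Normalized PVI: each difference is scaled by the local mean,
    # which compensates for speech-rate differences between speakers.
    terms = [abs(a - b) / ((a + b) / 2.0)
             for a, b in zip(durations, durations[1:])]
    return 100.0 * sum(terms) / len(terms)

# ("V" or "C", duration in ms) for a short invented utterance.
utt = [("C", 80), ("V", 120), ("C", 60), ("V", 140), ("C", 90), ("V", 110)]
vowels = [d for k, d in utt if k == "V"]

print(round(percent_v(utt), 1))   # 61.7
print(round(npvi(vowels), 1))     # 19.7
```

High %V and low ΔC/nPVI values are the pattern associated with so-called syllable-timed languages, which is how the clustering of Malay with French and Spanish reported above would show up in these numbers.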

    Multimodal Based Audio-Visual Speech Recognition for Hard-of-Hearing: State of the Art Techniques and Challenges

    Multimodal Integration (MI) is the study of merging the knowledge acquired by the nervous system through sensory modalities such as speech, vision, touch, and gesture. Applications of MI span Audio-Visual Speech Recognition (AVSR), Sign Language Recognition (SLR), Emotion Recognition (ER), Biometric Applications (BMA), Affect Recognition (AR), Multimedia Retrieval (MR), and more. Fusions of modalities such as hand gestures with facial cues, or lip movements with hand position, are the combinations mainly used in developing multimodal systems for the hearing impaired. This paper provides an overview of the multimodal systems in the literature for hearing-impaired studies, and also discusses some studies related to hearing-impaired acoustic analysis. It is observed that far fewer algorithms have been developed for hearing-impaired AVSR than for normal hearing. The study of audio-visual speech recognition systems for the hearing impaired is thus in high demand for people trying to communicate in natively spoken languages. This paper also highlights the state-of-the-art techniques in AVSR and the challenges researchers face in developing AVSR systems.

    The phonetics and phonology of retroflexes

    This dissertation investigates the phonetic realization and phonological behaviour of the class of retroflexes, i.e. sounds articulated with the tongue tip or the underside of the tongue tip against the postalveolar or palatal region. On the basis of four articulatory properties, a new definition of retroflexes is proposed. These properties are apicality, posteriority, sublingual cavity, and retraction; the latter is shown to imply that retroflexes are incompatible with secondary palatalization. The phonetic section gives an overview of the factors responsible for the large articulatory variation of retroflexes and discusses putative counterexamples of palatalized retroflexes. In addition, it describes the acoustic realization of retroflexes and proposes a low third formant as their common characteristic. The phonological section discusses processes involving retroflexes from a large number of typologically diverse languages. These processes are shown to be grounded in the similar articulatory and acoustic properties of the retroflex class. Furthermore, this section gives a phonological analysis of the processes involving retroflexes in an Optimality Theoretic framework with underlying perceptual representations, based on Boersma's Functional Phonology. Evidence is presented for the non-universality of the retroflex class and for the non-necessity of innate phonological features. This study is of interest to phonologists and phoneticians, especially those working on the phonetics-phonology interface.

    Rhotics. New Data and Perspectives

    This book provides insight into the patterns of variation and change of rhotics in different languages and from a variety of perspectives. It sheds light on the phonetics, phonology, sociolinguistics, and acquisition of /r/-sounds in languages as diverse as Dutch, English, French, German, Greek, Hebrew, Italian, Kuikuro, Malayalam, Romanian, Slovak, Tyrolean, and Washili Shingazidja, thus contributing to the discussion on the unity and uniqueness of this group of sounds.