14 research outputs found

    Malay articulation system for early screening diagnostic using hidden Markov model and genetic algorithm

    Speech recognition is an important technology and can serve as a great aid for individuals with sight or hearing disabilities. There has been extensive research interest and development in this area over the past decades, yet its usage and exposure in Malaysia remain immature even though there is demand from the medical and healthcare sector. The aim of this research is to assess the quality and impact of using a computerized method for early screening of speech articulation disorders among Malaysians, such as omission, substitution, addition and distortion in speech. In this study, a statistical probabilistic approach using the Hidden Markov Model (HMM) has been adopted with a newly designed Malay corpus for articulation disorder cases, following the SAMPA and IPA guidelines. Front-end processing for feature vector selection is improved by applying a silence-region calibration algorithm for start and end point detection. The classifier was also modified significantly by incorporating Viterbi search with a Genetic Algorithm (GA) to obtain high recognition accuracy and to perform lexical unit classification. The results were evaluated following National Institute of Standards and Technology (NIST) benchmarking. The tests show that recognition accuracy improved by 30% to 40% with the Genetic Algorithm technique compared with the conventional technique. A new corpus was built during this study, with verification and justification from medical experts. In conclusion, a computerized method for early screening can ease the human effort involved in tackling speech disorders, and the proposed Genetic Algorithm technique has been shown to improve recognition performance in both search and classification tasks.
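
    The abstract does not give details of the silence-region calibration step, so the following is only a minimal sketch of one common energy-based approach to start/end point detection, assuming the recording opens with a short stretch of silence from which a noise floor can be estimated. The function name, frame sizes and threshold margin are illustrative choices, not taken from the paper.

```python
import numpy as np

def detect_endpoints(signal, sr, frame_ms=25, hop_ms=10, calib_frames=10, margin_db=6.0):
    """Hypothetical energy-based start/end point detection.

    The noise floor is calibrated from the first few frames, which are
    assumed to contain only silence; frames whose energy rises more than
    `margin_db` above that floor are treated as speech.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = max(0, 1 + (len(signal) - frame) // hop)
    energy_db = np.array([
        10 * np.log10(np.mean(signal[i * hop:i * hop + frame] ** 2) + 1e-12)
        for i in range(n_frames)
    ])
    noise_floor = energy_db[:calib_frames].mean()        # silence-region calibration
    speech = np.flatnonzero(energy_db > noise_floor + margin_db)
    if speech.size == 0:
        return None                                      # no speech detected
    start, end = speech[0] * hop, speech[-1] * hop + frame
    return start, end                                    # sample indices of the utterance
```

    Frames selected this way would then feed the feature extraction and HMM/Viterbi stages; trimming leading and trailing silence keeps the feature vectors focused on the utterance itself.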

    English spelling and the computer

    The first half of the book is about spelling, the second about computers. Chapter Two describes how English spelling came to be in the state that it’s in today. In Chapter Three I summarize the debate between those who propose radical change to the system and those who favour keeping it as it is, and I show how computerized correction can be seen as providing at least some of the benefits that have been claimed for spelling reform. Too much of the literature on computerized spellcheckers describes tests based on collections of artificially created errors; Chapter Four looks at the sorts of misspellings that people actually make, to see more clearly the problems that a spellchecker has to face. Chapter Five looks more closely at the errors that people make when they don’t know how to spell a word, and Chapter Six at the errors that people make when they know perfectly well how to spell a word but for some reason write or type something else. Chapter Seven begins the second part of the book with a description of the methods that have been devised over the last thirty years for getting computers to detect and correct spelling errors. Its conclusion is that spellcheckers have some way to go before they can do the job we would like them to do. Chapters Eight to Ten describe a spellchecker that I have designed which attempts to address some of the remaining problems, especially those presented by badly spelt text. In 1982, when I began this research, there were no spellcheckers that would do anything useful with a sentence such as, ‘You shud try to rember all ways to youz a lifejacket when yotting.’ That my spellchecker corrects this perfectly (which it does) is less impressive now, I have to admit, than it would have been then, simply because there are now a few spellcheckers on the market which do make a reasonable attempt at errors of that kind. My spellchecker does, however, handle some classes of errors that other spellcheckers do not perform well on, and Chapter Eleven concludes the book with the results of some comparative tests, a few reflections on my spellchecker’s shortcomings and some speculations on possible developments.
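
    As a generic illustration of how spellcheckers propose corrections for simple typing slips, the sketch below generates every string within one edit of the input and keeps those found in a lexicon (the lexicon here is a hypothetical toy set). It is not the spellchecker described in the book, which also handles phonetically motivated misspellings such as ‘shud’ for ‘should’ that lie more than one edit away.

```python
import string

def single_edit_candidates(word, lexicon):
    """Corrections within one edit (delete, transpose, replace, insert)
    of `word` that appear in the lexicon -- a toy version of the
    candidate-generation step used by many spellcheckers."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    replaces = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return (deletes | transposes | replaces | inserts) & set(lexicon)

# hypothetical toy lexicon
lexicon = {"should", "shod", "shroud", "remember", "always", "use", "yachting"}
print(single_edit_candidates("shud", lexicon))   # {'shod'}: 'should' needs two edits
```

    Real systems rank such candidates by word frequency and context, and add phonetic matching to reach errors like ‘youz’ and ‘yotting’.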

    Early American Phonology.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, an account that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).

    Segmentation and recognition of phonetic features in handwritten Pitman shorthand

    There is a desire to be able to enter text into mobile computing devices at the speed of speech, and only handwritten shorthand schemes can achieve this data recording rate. A new, overall solution to the segmentation and recognition of phonetic features in Pitman shorthand is proposed in this paper. Approaches to the recognition of consonant outlines, vowel and diphthong symbols and shortforms, the different components of Pitman shorthand, are presented. A new rule is introduced to handle smooth junctions in consonant outlines, which have typically been the bottleneck for recognition. Experiments with a set of 1127 consonant outlines, 2039 vowels and diphthongs and 841 shortforms from three shorthand writers demonstrate that the proposed solution is promising: the recognition accuracies for consonant outlines, vowels and diphthongs, and shortforms reached 75.33%, 96.86% and 91.86%, respectively. In an evaluation of 461 outlines with smooth junctions, the new rule had a strong positive effect, improving the recognition accuracy for smooth junctions from 37.53% to 93.41% at the cost of a 14.42% increase in writing time.
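
    The per-class accuracies quoted above (75.33%, 96.86%, 91.86%) are simply the fraction of correctly recognized items in each symbol class; a minimal sketch of that evaluation, with made-up labels rather than the paper's data, is shown below.

```python
from collections import Counter

def per_class_accuracy(reference, predicted):
    """Fraction of correctly recognized items per symbol class."""
    correct, total = Counter(), Counter()
    for ref, pred in zip(reference, predicted):
        total[ref] += 1
        correct[ref] += int(ref == pred)
    return {cls: correct[cls] / total[cls] for cls in total}

# toy labels: 'C' = consonant outline, 'V' = vowel/diphthong, 'S' = shortform
ref  = ["C", "C", "V", "V", "S", "C"]
pred = ["C", "V", "V", "V", "S", "C"]
print(per_class_accuracy(ref, pred))   # {'C': 0.666..., 'V': 1.0, 'S': 1.0}
```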

    German Ethnography in Australia

    The contribution of German ethnography to Australian anthropological scholarship on Aboriginal societies and cultures has been limited, primarily because few people working in the field read German. But it has also been neglected because its humanistic concerns with language, religion and mythology contrasted with the mainstream British social anthropological tradition that prevailed in Australia until the late 1960s. The advent of native title claims, which require drawing on the earliest ethnography for any area, together with an increase in research on rock art of the Kimberley region, has stimulated interest in this German ethnography, as have some recent book translations. Even so, several major bodies of ethnography, such as the 13 volumes on the cultures of northeastern South Australia and the seven volumes on the Aranda of the Alice Springs region, remain inaccessible, along with many ethnographically rich articles and reports in mission archives. In 18 chapters, this book introduces and reviews the significance of this neglected work, much of it by missionaries who first wrote on Australian Aboriginal cultures in the 1840s. Almost all of these German speakers, in particular the missionaries, learnt an Aboriginal language in order to be able to document religious beliefs, mythology and songs as a first step to conversion. As a result, they produced an enormously valuable body of work that will greatly enrich regional ethnographies.

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
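
    For reference, the classical proper CAR model places a Gaussian prior on the spatial effects with precision matrix τ(D − ρW), where W is the site adjacency matrix and D the diagonal matrix of neighbor counts; in this parameterization ρ controls spatial dependence but, unlike the DAGAR ρ highlighted above, it is not directly interpretable as the average neighbor pair correlation. The sketch below builds that precision matrix for a hypothetical four-site map (the adjacency and ρ value are illustrative, not the Belgian data).

```python
import numpy as np

def car_precision(W, rho, tau=1.0):
    """Precision matrix Q = tau * (D - rho * W) of a proper CAR model,
    where D holds each site's neighbor count; |rho| < 1 keeps Q positive
    definite as long as every site has at least one neighbor."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# illustrative map of 4 sites on a line: 1-2-3-4
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W, rho=0.9)
print(np.all(np.linalg.eigvalsh(Q) > 0))   # True: a valid Gaussian precision matrix
```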