290 research outputs found

    A serious game for children with speech disorders and hearing problems

    The printed copy of this thesis is held in the İstanbul Şehir University Library. Speech impediments affecting children with hearing difficulties and speech disorders require speech therapy and extensive practice to overcome. Speech therapy delivered through serious games gives children with speech disorders and hearing problems an opportunity to overcome these difficulties. Because children are naturally inclined to play games, we aim to teach them through entertainment in the form of a serious game. In this thesis, we design and implement a serious game that can be used both as a therapy and as a tool to measure the performance of children with speech impediments: children first learn to speak specific words they are expected to know before the age of 7, and are then taught how to form sentences. The game consists of three steps. The first step provides information that helps parents or therapists decide whether a child needs speech therapy. In the second step, the child starts to learn specific words while playing the game. The third step measures the child's performance and evaluates how much the child has learned by the end of the game. The game features an avatar that the child controls through speech, with the objective of moving the avatar around the environment to earn coins. The avatar is controlled both by voice commands, such as Jump, Ahead, Back, Left, and Right, and by the arrow keys of the keyboard. An on-screen arrow guides the child to the next goal during the game, instead of a therapist or teacher. This allows the child to practice for longer periods than clinical sessions supervised by a therapist, which are time-limited. Our preliminary performance measurements indicate an improvement of 40% for children who play our game at least 5 times over a specific period of time.
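    As an illustration of the control scheme described above, the following sketch shows how the five voice commands could drive an avatar and its coin count. It is an editor-added example, not the thesis's implementation: the Avatar class, the grid-based movement, and the game_step helper are all assumptions; a real system would feed game_step words from a speech recognizer.

```python
# Editor-added sketch of the voice-command control scheme; the Avatar
# class, grid movement, and coin logic are illustrative assumptions.

COMMANDS = {"jump", "ahead", "back", "left", "right"}

class Avatar:
    def __init__(self) -> None:
        self.x, self.y, self.coins = 0, 0, 0

    def apply(self, command: str) -> None:
        # Map the game's movement commands onto a simple 2D grid;
        # the arrow keys could feed the same dispatch table.
        moves = {"ahead": (0, 1), "back": (0, -1),
                 "left": (-1, 0), "right": (1, 0)}
        if command in moves:
            dx, dy = moves[command]
            self.x, self.y = self.x + dx, self.y + dy
        # "jump" is treated as animation-only in this sketch.

def game_step(avatar: Avatar, heard: str, coins: set) -> None:
    """Apply one recognized word (from a speech recognizer or a key
    press) and award a coin when the avatar reaches a coin position."""
    word = heard.strip().lower()
    if word in COMMANDS:
        avatar.apply(word)
    if (avatar.x, avatar.y) in coins:
        coins.discard((avatar.x, avatar.y))
        avatar.coins += 1
```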

    How can health literacy and client recall/memory of clinical information be maximised in the field of Speech-Language Pathology? : an exploratory study of clients and therapists in the Western Cape

    The aims of this study were to (a) explore the health literacy and information recall/memory of clients receiving Speech-Language Pathology treatment in Cape Town, and (b) explore ways of maximising these factors in clients with dysphagia, voice disorders (including laryngectomies), and cleft lip and/or palate.

    Visualising articulation: real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate

    Background: Ultrasound Tongue Imaging (UTI) is growing increasingly popular for assessing and treating Speech Sound Disorders (SSDs) and has more recently been used to qualitatively investigate compensatory articulations in speakers with cleft palate (CP). However, its therapeutic application for speakers with CP remains to be tested. A different set of developments, Visual Articulatory Models (VAMs), provide an offline dynamic model with context for lingual patterns; however, unlike UTI, they do not provide real-time biofeedback. Commercially available VAMs, such as Speech Trainer 3D, are available on iDevices, yet their clinical application also remains to be tested. Aims: This thesis aims to test the diagnostic use of ultrasound and to investigate the effectiveness of both UTI and VAMs for the treatment of SSDs associated with submucous cleft palate (SMCP). Method: Using a single-subject multiple-baseline design, two males with repaired SMCP, Andrew (aged 9;2) and Craig (aged 6;2), received six assessment sessions and two blocks of therapy following a motor-based therapy approach, using VAMs and UTI. Three methods were used to measure therapy outcomes. Firstly, percent-target-consonants-correct scores, derived from phonetic transcriptions, provide outcomes comparable to those used in typical practice. Secondly, a perceptual evaluation by multiple phonetically trained listeners, using a two-alternative forced-choice design to measure listener agreement, provides a more objective measure. Thirdly, articulatory analysis, using qualitative and quantitative measures, provides an additional perspective able to reveal covert errors. Results and Conclusions: There was overall improvement in the speech of both speakers, with a greater rate of change in therapy block one (VAMs), and listener agreement in the perceptual evaluation. Articulatory analysis supplemented phonetic transcriptions, detected covert articulations and covert contrast, and supported the improvements in auditory outcome scores. Both VAMs and UTI show promise as clinical tools for the treatment of SSDs associated with CP.
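    The first outcome measure above, a percent-target-consonants-correct score, can be illustrated with a short sketch. This is an editor-added example, not code from the thesis: it assumes the target and produced forms are already segment-aligned lists of phone symbols, which glosses over the alignment step a real transcription-based analysis would need.

```python
# Editor-added illustration of a percent-target-consonants-correct score.
# Assumes target and produced forms are already segment-aligned lists of
# phone symbols; real transcription scoring would align them first.

CONSONANTS = set("p b t d k g m n f v s z l r w j h".split())

def percent_target_consonants_correct(target, produced):
    pairs = [(t, p) for t, p in zip(target, produced) if t in CONSONANTS]
    if not pairs:
        return 0.0
    correct = sum(1 for t, p in pairs if t == p)
    return 100.0 * correct / len(pairs)

# /s k i p/ produced as [s t i p]: /k/ is wrong, /s/ and /p/ correct.
score = percent_target_consonants_correct(list("skip"), list("stip"))
print(f"{score:.1f}")  # -> 66.7
```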

    Silent Speech Interfaces for Speech Restoration: A Review

    This work was supported in part by the Agencia Estatal de Investigacion (AEI) under Grant PID2019-108040RB-C22/AEI/10.13039/501100011033. The work of Jose A. Gonzalez-Lopez was supported in part by the Spanish Ministry of Science, Innovation and Universities under a Juan de la Cierva-Incorporation Fellowship (IJCI-2017-32926). This review summarises the status of silent speech interface (SSI) research. SSIs rely on non-acoustic biosignals generated by the human body during speech production to enable communication whenever normal verbal communication is not possible or not desirable. In this review, we focus on the first case and present the latest SSI research aimed at providing new alternative and augmentative communication methods for persons with severe speech disorders. SSIs can employ a variety of biosignals to enable silent communication, such as electrophysiological recordings of neural activity, electromyographic (EMG) recordings of vocal tract movements, or the direct tracking of articulator movements using imaging techniques. Depending on the disorder, some sensing techniques may be better suited than others to capture speech-related information. For instance, EMG and imaging techniques are well suited for laryngectomised patients, whose vocal tract remains almost intact but who are unable to speak after the removal of the vocal folds, but they fail for severely paralysed individuals. From the biosignals, SSIs decode the intended message using automatic speech recognition or speech synthesis algorithms. Despite considerable advances in recent years, most present-day SSIs have only been validated in laboratory settings with healthy users. Thus, as discussed in this paper, a number of challenges remain to be addressed in future research before SSIs can be promoted to real-world applications. If these issues can be addressed successfully, future SSIs will improve the lives of persons with severe speech impairments by restoring their communication capabilities.
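    To make the decoding pipeline described in the review concrete, the following sketch frames a single EMG channel and classifies each frame from simple time-domain features. It is an editor-added illustration, not a system from the review: the window sizes, the two features, and the nearest-centroid classifier are all assumptions standing in for a real recognition stage.

```python
# Editor-added sketch of a frame-based silent-speech decoder over one
# EMG channel; window sizes, features, and the nearest-centroid
# classifier are illustrative assumptions, not a system from the review.
import numpy as np

def frames(signal: np.ndarray, win: int = 200, hop: int = 100) -> np.ndarray:
    """Slice a 1-D signal into overlapping analysis windows."""
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[i:i + win] for i in starts])

def features(framed: np.ndarray) -> np.ndarray:
    # Two classic time-domain EMG features per frame:
    # mean absolute value and root-mean-square energy.
    mav = np.abs(framed).mean(axis=1)
    rms = np.sqrt((framed ** 2).mean(axis=1))
    return np.column_stack([mav, rms])

class NearestCentroid:
    """Minimal stand-in for the recognition stage: label each feature
    vector with the class whose training centroid is closest."""
    def fit(self, X: np.ndarray, y) -> "NearestCentroid":
        y = np.asarray(y)
        self.centroids = {c: X[y == c].mean(axis=0) for c in set(y)}
        return self

    def predict(self, X: np.ndarray):
        return [min(self.centroids,
                    key=lambda c: np.linalg.norm(x - self.centroids[c]))
                for x in X]
```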

    Pharmacovigilance of pregnancy exposures to medicinal products focusing on the risk of orofacial clefts

    Background: It is important to obtain robust scientific information on possible safety concerns related to the use of drugs during pregnancy in post-approval settings. Since pregnant women are actively excluded from trials during the clinical development of most products, meaningful human data on a drug's effects during pregnancy are rarely available when it enters the market. There are approximately 5 million pregnancies in the EU each year, and about 1 in every 10 women of childbearing age is pregnant each year. Insufficient information for the management of maternal disease during pregnancy can have a teratogenic impact on the fetus. Aims and objectives: This research comprises three studies. In the first study, the goal was to evaluate the maternal use of medicines and the associated risks of cleft lip and/or palate in the fetus, and to link this to the accuracy and currency of the safety information available in prescribing information. The second area of research aimed to identify and explore social and digital media to understand patients' experiences of medicine use during pregnancy. Last, but not least, I contributed to the development of an enhanced pharmacovigilance programme for analysing drug exposure during pregnancy and outcomes in the neonate. Method: Firstly, I identified medication-induced risk factors for oral clefts using safety signal detection and safety signal evaluation techniques. I then assessed the completeness of the safety information for pregnancy exposures in the Summary of Product Characteristics and the Patient Information in the UK and the US. In the second study, the content of posts concerning pregnancy and the use of medicines in online pregnancy forums was analysed using artificial intelligence in the form of natural language processing and machine learning algorithms. Third, the PRIM (PRegnancy outcomes Intensive Monitoring) system was developed as an enhanced pharmacovigilance data collection method. It was used to improve the quality and content of prospective case reports through sets of targeted checklists, structured follow-up, a rigorous process of data entry and data quality control, and programmed aggregate analysis. Results: For the 12 antiepileptic drugs studied, there was statistical disproportionality in individual case safety reports indicative of an increased risk of cleft lip and/or palate. There are inconsistencies between the UK and US safety labels, despite the same evidence being available for assessment. The second study showed that in social media forums many pregnant women with MS shared profound uncertainties and specific concerns about taking medicines during the reproductive period. There was evidence that information was concealed from health care professionals yet shared with a peer group. The PRIM method of enhanced pharmacovigilance has yielded substantially more information on the safety of fingolimod exposure during pregnancy than has been achieved via the regulatory authority-mandated pregnancy registry. Conclusion: The use of medicines during pregnancy is an important topic for public health. There is a significant need to provide inclusive, unbiased, up-to-date information to prescribers and women of childbearing age concerning the use of medicines in pregnancy and postpartum during breastfeeding. Information must be provided in a timely manner by a trusted source, and patients should have access to health care professionals with the relevant expertise and knowledge. It is important that the full anonymised data set, along with evidence-based conclusions, is made publicly available to inform decision-making.
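    The "statistical disproportionality" in the results is commonly computed as a proportional reporting ratio (PRR) over a 2x2 table of case counts. The sketch below is an editor-added illustration with invented counts; the thesis does not state which statistic or thresholds it used, so the screening rule shown is the conventional one, not necessarily the author's.

```python
# Editor-added illustration of a proportional reporting ratio (PRR),
# one common disproportionality statistic; the counts below are invented
# and the thesis does not state which statistic or thresholds it used.

def prr(a: int, b: int, c: int, d: int) -> float:
    """a: reports of the drug with the event (e.g. oral cleft)
    b: reports of the drug without the event
    c: reports of all other drugs with the event
    d: reports of all other drugs without the event"""
    return (a / (a + b)) / (c / (c + d))

# A conventional screening rule (Evans et al.) flags PRR >= 2 with at
# least 3 cases (often alongside a chi-square criterion, omitted here).
a, b, c, d = 12, 488, 150, 49850
value = prr(a, b, c, d)
print(f"PRR = {value:.2f}, flagged = {value >= 2 and a >= 3}")  # PRR = 8.00
```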

    Continuing the Vision

    Table of Contents
    4 | Continuing the vision: Construction soon to begin on major new campus landmark
    10 | Cameroon odyssey: SIMS students participate in first community health fair sponsored by Batouri Adventist Hospital
    14 | Celebrated centenarian: 101-year-old Marge Jetton gives credence to the value of a healthy lifestyle and attitude
    18 | Just add love: Grover Wilcox found the missing ingredient to life through a devastating disease
    22 | The emergence of research: Historical insights from the emergence of research at Loma Linda University
    32 | Newscope: Compiled by Patricia K. Thio
    44 | Alumni notes: Compiled by Richard W. Weismeyer

    Multimodal silent speech interfaces for European Portuguese based on articulation

    Joint MAPi Doctorate in Informatics. The concept of silent speech, when applied to Human-Computer Interaction (HCI), describes a system which allows for speech communication in the absence of an acoustic signal. By analyzing data gathered during different parts of the human speech production process, Silent Speech Interfaces (SSI) allow users with speech impairments to communicate with a system. SSI can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality, or non-disturbance are important. Nonetheless, despite recent advances, the performance and usability of silent speech systems still have much room for improvement. Better performance would enable their application in relevant areas, such as Ambient Assisted Living. It is therefore necessary to extend our understanding of the capabilities and limitations of silent speech modalities and to enhance their joint exploration. Thus, in this thesis, we established several goals: (1) expand SSI language support to European Portuguese (EP); (2) overcome identified limitations of current SSI techniques in detecting EP nasality; (3) develop a multimodal HCI approach for SSI based on non-invasive modalities; and (4) explore more direct measures in the multimodal SSI for EP, acquired from more invasive/obtrusive modalities, to be used as ground truth for articulation processes, enhancing our comprehension of other modalities. To achieve these goals and to support our research in this area, we created a multimodal SSI framework that fosters the joint exploration of modalities and the combination of their information. The proposed framework goes beyond the data acquisition process itself, including methods for online and offline synchronization, multimodal data processing, feature extraction, feature selection, analysis, classification, and prototyping. Examples of applicability are provided for each stage of the framework. These include articulatory studies for HCI, the development of a multimodal SSI based on less invasive modalities, and the use of ground truth information coming from more invasive/obtrusive modalities to overcome the limitations of other modalities. In the work presented here, we also apply existing SSI methods to EP for the first time, noting that nasal sounds may cause inferior performance in some modalities. In this context, we propose a non-invasive solution for the detection of nasality based on a single surface electromyography sensor, which could conceivably be included in a multimodal SSI.
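    One practical step in such a framework is synchronizing streams recorded at different rates. The sketch below is an editor-added simplification of the offline synchronization stage: it aligns each sample of one modality to the nearest-timestamp sample of another; the stream names and rates are hypothetical.

```python
# Editor-added simplification of the framework's offline synchronization
# stage: align each sample of one modality to the nearest-timestamp
# sample of another. Stream names and rates are hypothetical.
import bisect

def align(ts_a: list, ts_b: list) -> list:
    """For each timestamp in ts_a, return the index of the closest
    timestamp in ts_b; both lists must be sorted ascending."""
    out = []
    for t in ts_a:
        i = bisect.bisect_left(ts_b, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_b)]
        out.append(min(candidates, key=lambda j: abs(ts_b[j] - t)))
    return out

# e.g. 1 kHz EMG timestamps against 100 Hz video-frame timestamps
emg_t = [0.000, 0.001, 0.002, 0.011]
vid_t = [0.000, 0.010, 0.020]
print(align(emg_t, vid_t))  # -> [0, 0, 0, 1]
```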

    Determining normal and abnormal lip shapes during movement for use as a surgical outcome measure

    Craniofacial assessment for diagnosis, treatment planning, and outcome has traditionally relied on imaging techniques that provide a static image of the facial structure. Objective measures of facial movement are, however, becoming increasingly important for clinical interventions in which surgical repositioning of facial structures can influence soft tissue mobility. These applications include the management of patients with cleft lip, facial nerve palsy, and orthognathic surgery. Although technological advances in medical imaging have enabled three-dimensional (3D) motion scanners to become commercially available, their clinical application to date has been limited. The aim of this study is therefore to use such a scanner to determine normal and abnormal lip shapes during movement, for use as a clinical outcome measure. Lip movements were captured from an average population using a 3D motion scanner. Consideration was given to the type of facial movement captured (i.e. verbal or non-verbal) and to the method of feature extraction (i.e. manual or semi-automatic landmarking). Statistical models of appearance (Active Shape Models) were used to convert the video motion sequences into linear data and to identify reproducible facial movements via pattern recognition. Average templates of lip movement were created from the most reproducible lip movements using Geometric Morphometrics (GMM), incorporating Generalised Procrustes Analysis (GPA) and Principal Component Analysis (PCA). Finally, lip movement data from a patient group undergoing orthognathic surgery were incorporated into the model, and Discriminant Analysis (DA) was employed in an attempt to statistically distinguish abnormal lip movement. The results showed that manual landmarking was the preferred method of feature extraction. Verbal facial gestures (i.e. words) were significantly more reproducible/repeatable over time than non-verbal gestures (i.e. facial expressions). It was possible to create average templates of lip movement from the control group, which acted as an outcome measure and from which abnormalities in movement could be discriminated pre-surgery. These abnormalities were found to normalise post-surgery. The concepts of this study form the basis of analysing facial movement in the clinical context, and the methods are transferable to other patient groups. Specifically, patients undergoing orthognathic surgery show differences in lip shape/movement compared with an average population, and correcting the position of the basal bones in this group of patients appears to normalise lip mobility.
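    The GPA-plus-PCA pipeline named above can be sketched compactly. The following is an editor-added, simplified one-pass version for 2D landmark shapes (full GPA iterates alignment against an evolving mean shape); it is not the study's code.

```python
# Editor-added sketch of the GPA + PCA pipeline for 2-D landmark shapes.
# This is a simplified one-pass alignment to the first shape; full GPA
# iterates against an evolving mean shape.
import numpy as np

def procrustes_align(shapes: np.ndarray) -> np.ndarray:
    """shapes: (n_shapes, n_landmarks, 2). Removes translation and
    scale, then rotates every shape onto the first one."""
    out, ref = [], None
    for s in shapes:
        s = s - s.mean(axis=0)        # remove translation
        s = s / np.linalg.norm(s)     # remove scale (unit centroid size)
        if ref is None:
            ref = s
        else:
            # Orthogonal Procrustes: optimal rotation R = U @ Vt,
            # where s.T @ ref = U @ diag(S) @ Vt.
            u, _, vt = np.linalg.svd(s.T @ ref)
            s = s @ (u @ vt)
        out.append(s)
    return np.stack(out)

def shape_pca(aligned: np.ndarray, n_modes: int = 2):
    """Flatten aligned shapes and return the first principal modes of
    shape variation plus the variance explained by each mode."""
    X = aligned.reshape(len(aligned), -1)
    X = X - X.mean(axis=0)
    _, svals, vt = np.linalg.svd(X, full_matrices=False)
    variances = (svals ** 2) / max(len(X) - 1, 1)
    return vt[:n_modes], variances[:n_modes]
```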

    Assessing Facial Symmetry and Attractiveness using Augmented Reality

    Facial symmetry is a key component in quantifying the perception of beauty. In this paper, we propose a set of facial features, computed from facial landmarks, which can be extracted at low computational cost. We quantitatively evaluated the proposed features for predicting perceived attractiveness from human portraits on four benchmark datasets (SCUT-FBP, SCUT-FBP5500, FACES, and the Chicago Face Database). Experimental results showed that the performance of our features is comparable to that of features extracted from a much denser set of facial landmarks. The computation of the facial features was also implemented as an Augmented Reality (AR) app developed for Android OS. The app overlays four types of measurements and guide lines over a live video stream, with the facial measurements computed from the tracked facial landmarks at run-time. The developed app can be used to assist plastic surgeons in assessing facial symmetry when planning reconstructive facial surgeries.
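    A minimal version of a landmark-based symmetry feature can be sketched as follows. This is an editor-added illustration, not the paper's feature set: the midline estimate, the landmark pairing scheme, and the normalisation are all assumptions.

```python
# Editor-added illustration of one landmark-based symmetry feature: the
# midline estimate, landmark pairing, and normalisation are assumptions;
# the paper's actual feature set is richer.
import numpy as np

def symmetry_score(landmarks: np.ndarray, pairs: list) -> float:
    """landmarks: (n, 2) array of (x, y) points; pairs: (left, right)
    index pairs of mirrored landmarks. Lower = more symmetric."""
    left = landmarks[[l for l, _ in pairs]]
    right = landmarks[[r for _, r in pairs]]
    midline_x = landmarks[:, 0].mean()           # crude vertical midline
    mirrored = left.copy()
    mirrored[:, 0] = 2 * midline_x - left[:, 0]  # reflect across midline
    # Normalise by the face's bounding-box diagonal for size invariance.
    scale = np.linalg.norm(landmarks.max(axis=0) - landmarks.min(axis=0))
    return float(np.linalg.norm(mirrored - right, axis=1).mean() / scale)
```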