9,301 research outputs found

    Construction of a corpus of elderly Japanese speech for analysis and recognition

    Tokushima University; Aichi Prefectural University; University of Yamanashi. LREC 2018 Special Speech Sessions "Speech Resources Collection in Real-World Situations"; Phoenix Seagaia Conference Center, Miyazaki; 2018-05-09.
    We have constructed a new speech corpus from the utterances of 100 elderly Japanese speakers, in order to improve the accuracy of automatic recognition of the speech of older people. Humanoid robots are being developed for use in elder-care nursing facilities, because interaction with such robots is expected to help clients maintain their cognitive abilities, as well as provide them with companionship. For these robots to interact with the elderly through spoken dialogue, a high-performance speech recognition system for elderly speech is needed. To develop such a system, we recorded speech uttered by 100 elderly Japanese speakers with an average age of 77.2, most of them living in nursing homes. Another corpus of elderly Japanese speech, S-JNAS (Seniors-Japanese Newspaper Article Sentences), was developed previously, but the average age of its participants was 67.6. Since the target age for nursing home care is around 75, much higher than that of most of the S-JNAS speakers, we felt a more representative corpus was needed. In this study we compare the performance of our new corpus with both the Japanese read speech corpus JNAS (Japanese Newspaper Article Speech), which consists of adult speech, and with S-JNAS, the senior version of JNAS, by conducting speech recognition experiments. Data from JNAS, S-JNAS and the CSJ (Corpus of Spontaneous Japanese) were each used as training data for the acoustic models. We then used our new corpus to adapt the acoustic models to elderly speech, but were unable to achieve sufficient performance when recognizing elderly speech.
    Based on our experimental results, we believe that development of a corpus of spontaneous elderly speech and/or special acoustic adaptation methods will likely be necessary to improve the recognition performance of dialogue systems for the elderly.

    A new speech corpus of super-elderly Japanese for acoustic modeling

    The development of accessible speech recognition technology will allow the elderly to more easily access electronically stored information. However, the necessary level of recognition accuracy for elderly speech has not yet been achieved by conventional speech recognition systems, due to the unique features of elderly speech. To address this problem, we have created a new speech corpus named EARS (Elderly Adults Read Speech), consisting of recorded read speech from 123 super-elderly Japanese people (average age: 83.1), as a resource for training automatic speech recognition models for the elderly. In this study, we investigated the acoustic features of super-elderly Japanese speech using our new corpus. In comparison to the speech of less elderly Japanese speakers, we observed a slower speech rate and extended vowel duration for both genders, a slight increase in fundamental frequency for males, and a slight decrease in fundamental frequency for females. To demonstrate the efficacy of our corpus, we also conducted speech recognition experiments using two different acoustic models (DNN-HMM and Transformer-based), trained on a combination of data from our corpus and speech data from three conventional Japanese speech corpora. When using the DNN-HMM trained with EARS and speech data from the existing corpora, the character error rate (CER) was reduced by 7.8% (to just over 9%), compared to a CER of 16.9% when using only the baseline training corpora. We also investigated the effect of training the models with various amounts of EARS data, using a simple data-expansion method, and of training the acoustic models for various numbers of epochs without any modifications. When using the Transformer-based end-to-end speech recognizer, the character error rate was reduced by 3.0% (to 11.4%) by using a doubled EARS corpus together with the baseline data for training, compared to a CER of 13.4% when only data from the baseline training corpora were used.
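    The character error rate (CER) reported above is the character-level Levenshtein (edit) distance between the recognizer's hypothesis and the reference transcript, divided by the reference length. A minimal sketch in Python; the function name and the romanized example strings are illustrative, not drawn from the corpus:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    r, h = list(reference), list(hypothesis)
    # Standard dynamic-programming Levenshtein distance, row by row.
    prev = list(range(len(h) + 1))
    for i, rc in enumerate(r, start=1):
        curr = [i]
        for j, hc in enumerate(h, start=1):
            cost = 0 if rc == hc else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(r)

# One deleted character out of a 16-character reference:
print(cer("kyou wa ii tenki", "kyou wa i tenki"))  # → 0.0625
```

    Japanese ASR is usually scored in characters rather than words because word boundaries are not written; the same function applies unchanged to kana/kanji strings.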

    The EEE corpus: socio-affective "glue" cues in elderly-robot interactions in a Smart Home with the EmOz platform

    International audience. The aim of this preliminary feasibility study is to offer a first look at interactions, in a Smart Home prototype, between elderly people and a companion robot whose only vector of communication is a set of socio-affective language primitives. The paper focuses in particular on the methodology and the scenario designed to collect a spontaneous corpus of human-robot interactions. Through a Wizard of Oz platform (EmOz), developed specifically for this purpose, a robot is introduced as an intermediary between the technological environment and elderly users, who give vocal commands to the robot to control the Smart Home. The robot's vocal productions increase progressively across prosodic levels: (1) no speech, (2) pure prosodic mouth noises assumed to act as "glue" tools, (3) lexical items with assumed "glue" prosody, and (4) imitations of the subject's commands with assumed "glue" prosody. The elderly subjects' speech behaviour confirms the hypothesis that the socio-affective "glue" effect increases across the prosodic levels, especially for socially isolated people. The corpus is still being recorded, with the aim of collecting data from socially isolated elderly people in real need.

    Proceedings of the LREC 2018 Special Speech Sessions

    LREC 2018 Special Speech Sessions "Speech Resources Collection in Real-World Situations"; Phoenix Seagaia Conference Center, Miyazaki; 2018-05-09.

    Spontaneous speech resources in Japan

    National Institute for Japanese Language and Linguistics; National Institute of Informatics. LREC 2018 Special Speech Sessions "Speech Resources Collection in Real-World Situations"; Phoenix Seagaia Conference Center, Miyazaki; 2018-05-09.
    In this paper, we introduce representative corpora of spontaneous Japanese speech that have been made publicly available in Japan. Large amounts of spontaneous speech data are required for research across speech studies, including speech analysis, speech recognition systems, and natural language processing. However, spontaneous speech data are difficult to collect, and few corpora of spontaneous speech are available; considering the diversity of speech in real-world situations, the existing data remain insufficient. We describe the characteristics of the spontaneous Japanese speech corpora gathered and distributed by two organizations, the Speech Resources Consortium at the National Institute of Informatics and the National Institute for Japanese Language and Linguistics, and then discuss prospects for the development of spontaneous speech resources.

    Speech-Based Analysis of Major Depressive Disorder: Focusing on Acoustic Changes in Continuous Utterances

    Doctoral dissertation (Ph.D.), Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Digital Information Convergence), February 2023. Advisor: Kyogu Lee.
    Major depressive disorder (commonly referred to as depression) is a common disorder that affects 3.8% of the world's population. Depression stems from various causes, such as genetics, aging, social factors, and abnormalities in the neurotransmitter system; thus, early detection and everyday monitoring are essential. The human voice is considered a representative biomarker for observing depression, and several studies have accordingly developed automatic speech-based depression diagnosis systems. However, constructing a depressed-speech corpus is challenging, most studies focus on adults under 60 years of age, and medical hypotheses grounded in psychiatrists' clinical findings are insufficient, which limits progress toward a medical diagnostic tool. Moreover, the effect of antipsychotic medication on speech characteristics during the treatment phase has been overlooked. This thesis therefore studies a speech-based automatic depression diagnosis system at the semantic (sentence) level. First, to analyze depression among the elderly, whose emotional changes are not adequately reflected in speech characteristics, it develops mood-inducing sentences to build an elderly depression speech corpus and designs an automatic depression diagnosis system for the elderly. Second, it constructs an extrapyramidal-symptom speech corpus to investigate extrapyramidal symptoms, a typical side effect of antipsychotic drug overdose, and finds a strong correlation between antipsychotic dose and speech characteristics. This work paves the way for a comprehensive examination of automatic diagnosis systems for depression.
    Contents:
        Chapter 1 Introduction
            1.1 Research Motivations
                1.1.1 Bridging the Gap Between Clinical View and Engineering
                1.1.2 Limitations of Conventional Depressed Speech Corpora
                1.1.3 Lack of Studies on Depression Among the Elderly
                1.1.4 Depression Analysis on Semantic Level
                1.1.5 How Antipsychotic Drug Affects the Human Voice?
            1.2 Thesis Objectives
            1.3 Outline of the Thesis
        Chapter 2 Theoretical Background
            2.1 Clinical View of Major Depressive Disorder
                2.1.1 Types of Depression
                2.1.2 Major Causes of Depression
                2.1.3 Symptoms of Depression
                2.1.4 Diagnosis of Depression
            2.2 Objective Diagnostic Markers of Depression
            2.3 Speech in Mental Disorder
            2.4 Speech Production and Depression
            2.5 Automatic Depression Diagnostic System
                2.5.1 Acoustic Feature Representation
                2.5.2 Classification / Prediction
        Chapter 3 Developing Sentences for New Depressed Speech Corpus
            3.1 Introduction
            3.2 Building Depressed Speech Corpus
                3.2.1 Elements of Speech Corpus Production
                3.2.2 Conventional Depressed Speech Corpora
                3.2.3 Factors Affecting Depressed Speech Characteristics
            3.3 Motivations
                3.3.1 Limitations of Conventional Depressed Speech Corpora
                3.3.2 Attitude of Subjects to Depression: Masked Depression
                3.3.3 Emotions in Reading
                3.3.4 Objectives of this Chapter
            3.4 Proposed Methods
                3.4.1 Selection of Words
                3.4.2 Structure of Sentence
            3.5 Results
                3.5.1 Mood-Inducing Sentences (MIS)
                3.5.2 Neutral Sentences for Extrapyramidal Symptom Analysis
            3.6 Summary
        Chapter 4 Screening Depression in the Elderly
            4.1 Introduction
            4.2 Korean Elderly Depressive Speech Corpus
                4.2.1 Participants
                4.2.2 Recording Procedure
                4.2.3 Recording Specification
            4.3 Proposed Methods
                4.3.1 Voice-Based Screening Algorithm for Depression
                4.3.2 Extraction of Acoustic Features
                4.3.3 Feature Selection System and Distance Computation
                4.3.4 Classification and Statistical Analyses
            4.4 Results
            4.5 Discussion
            4.6 Summary
        Chapter 5 Correlation Analysis of Antipsychotic Dose and Speech Characteristics
            5.1 Introduction
            5.2 Korean Extrapyramidal Symptoms Speech Corpus
                5.2.1 Participants
                5.2.2 Recording Process
                5.2.3 Extrapyramidal Symptoms Annotation and Equivalent Dose Calculations
            5.3 Proposed Methods
                5.3.1 Acoustic Feature Extraction
                5.3.2 Speech Characteristics Analysis According to Equivalent Dose
            5.4 Results
            5.5 Discussion
            5.6 Summary
        Chapter 6 Conclusions and Future Work
            6.1 Conclusions
            6.2 Future Work
        Bibliography
        Abstract (in Korean)
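    The dose analysis in Chapter 5 rests on correlating an equivalent antipsychotic dose with each acoustic feature. A minimal sketch of such a correlation in plain Python; the dose and speech-rate values below are invented for illustration and are not data from the thesis:

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical equivalent doses (mg) and speech rates (syllables/s):
doses = [100, 200, 300, 400, 500]
rates = [5.1, 4.8, 4.6, 4.2, 4.0]
print(pearson_r(doses, rates))  # strongly negative: speech slows as dose rises
```

    In practice one would also report a significance test alongside r before claiming a strong dose-feature correlation.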

    Acoustic Features of Different Types of Laughter in North Sami Conversational Speech

    Peer reviewed
    • โ€ฆ
    corecore