
    A List of Turkish Syllable Structures (トルコ語の音節構造一覧)

    The purpose of this research is to present a list of Turkish syllable structures. The material analyzed is the headwords of Türkçe Sözlük, 10. baskı (Turkish dictionary, 10th edition). The aggregated data comprise 62,722 headwords, 223,539 syllables, and 3,450 syllable patterns. The list is presented in alphabetical order, in frequency order, and by syllable structure. The percentages of the basic Turkish syllable structures are CV 48.0%, V 3.8%, CVC 42.2%, VC 3.7%, CVCC 1.5%, VCC 0.2%, and others 0.6%.
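    The pattern counting described above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline: the word list, the hyphen syllable marks, and the `cv_pattern` helper are all hypothetical.

```python
from collections import Counter

# Map each syllable of a (pre-syllabified) Turkish word to its
# consonant/vowel skeleton and tally the pattern frequencies,
# as in the abstract's CV/CVC/... counts.
VOWELS = set("aeıioöuü")  # the eight Turkish vowels

def cv_pattern(syllable):
    """Return the C/V skeleton of one syllable, e.g. 'tap' -> 'CVC'."""
    return "".join("V" if ch in VOWELS else "C" for ch in syllable.lower())

# Hypothetical pre-syllabified words, with '-' marking syllable boundaries.
words = ["ki-tap", "an-ka-ra", "türk-çe"]
counts = Counter(cv_pattern(syl) for w in words for syl in w.split("-"))
print(counts)  # e.g. Counter({'CV': 4, 'CVC': 1, 'VC': 1, 'CVCC': 1})
```

    A real reimplementation would first need an automatic syllabifier for Turkish; here the boundaries are given by hand.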

    A Proposal concerning the Braille Notation of the IPA (IPA ノ テンジカ ニ タイスル テイゲン)

    久部 (1999) proposed a concrete scheme for rendering the International Phonetic Alphabet (hereafter IPA) in braille. The transliteration of the IPA into braille in 久部 (ibid.) bears the traces of a great deal of painstaking work. ...

    Turkish: 109 of 2,000 Sentences (トルコ語109/2000文)

    This paper is an interim report on a Turkish corpus based on "Le livre des deux mille phrases" (Frei 1953). "Le livre des deux mille phrases" is a dictionary of sentences containing 2,000 lexical items as headings, with example sentences corresponding to those headings. It was published for the purpose of verifying differences among languages by translating these example sentences into other languages, in an attempt to capture not only differences among vocabularies but also differences based on context. In line with this purpose, it was translated into nine languages including British English, American English, and Chinese at a relatively early stage, while it has not been translated in other areas. Among SOV-type languages, it has previously been applied to Mongolian, Japanese, the Tokunoshima dialect, and the Osaka dialect. "Nihongo nisenbun", a Japanese translation of Frei (1953), was published in 1971. Taking the 2,000 example sentences of this Japanese version, given in standard Japanese, a Turkish speaker translated each into multiple candidate sentences in a face-to-face survey. The author and the informant then reviewed these candidates together to choose the best sentence, which the author recorded. Careful confirmation of each item in the face-to-face survey took longer than expected, so this uncompleted interim report covers only 109 example sentences. It was nevertheless judged that these example sentences could be utilized by other researchers.

    The Correlation between Syllable-Count Increase and Duration in Turkish and Turkmen (トルコ語とトルクメン語における音節数増加と時間長との相関性)

    The aim of this paper is to verify the correlation between the increase in the number of syllables and duration in Turkmen. Measurements of 2,185 Turkmen words show that the relationship between syllable count and duration is a direct, positive one. The result is similar to the findings of Fukumori (2015) for Arabic, Chinese, English, Filipino, French, German, Indonesian, Japanese (Tokyo and Osaka dialects), Khmer, Korean (Seoul dialect), Mongolian, Portuguese (European and Brazilian), Russian, Spanish, Thai, Vietnamese, and Turkish.
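    The syllable-count/duration relationship described above can be quantified with a Pearson correlation coefficient. The sketch below uses invented toy numbers, not the paper's 2,185-word measurements:

```python
import math

# Toy data: syllable counts and hypothetical word durations in milliseconds.
syllables = [1, 2, 3, 4, 5]
durations_ms = [210, 395, 600, 790, 1010]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(syllables, durations_ms)
print(f"r = {r:.3f}")  # close to 1.0 for a near-linear relationship
```

    A value of r near 1 corresponds to the "direct relationship" the abstract reports; the actual study would compute this over per-word acoustic measurements.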

    Is Expiratory Flow Affected by the Turkish Accent? (トルコ語のアクセントに呼気流量は左右されるのか?)

    Studies on expiratory flow show consistent results for vowels and consonants, but suprasegmentals have received little attention. In this study, the maximal expiratory flow of each syllable was measured with a phono-laryngograph to examine whether the pitch movement of the Turkish accent influences expiratory flow. The analysis materials comprise a word group whose word-final syllable is raised by a word tone, the regular accent pattern, and a word group raised by a falling tone, an exceptional accent pattern. As a result, none of the factors examined (word tone, falling accent, or heavy syllable) appears to be actively related to fluctuations in expiratory flow, and it was confirmed that the tone of an accent does not influence expiratory flow.

    Environmental sound synthesis from vocal imitations and sound event labels

    One way of expressing an environmental sound is with vocal imitations, which replicate or mimic the rhythm and pitch of a sound by voice. Vocal imitations can effectively convey features of environmental sounds, such as rhythm and pitch, that conventional inputs to an environmental sound synthesis model, such as sound event labels, images, or texts, cannot express. In this paper, we propose a framework for environmental sound synthesis from vocal imitations and sound event labels, built on a vector-quantized encoder and the Tacotron2 decoder. Vocal imitations are expected to control the pitch and rhythm of the synthesized sound, which sound event labels alone cannot control. Our objective and subjective experimental results show that vocal imitations effectively control the pitch and rhythm of synthesized sounds.
    Comment: Submitted to ICASSP202

    Onoma-to-wave: Environmental sound synthesis from onomatopoeic words

    In this paper, we propose a framework for environmental sound synthesis from onomatopoeic words. An onomatopoeic word, a character sequence that phonetically imitates a sound, is one way of expressing an environmental sound and is effective for describing diverse sound features. Using onomatopoeic words for environmental sound synthesis therefore enables us to generate diverse environmental sounds. To this end, we propose a method based on a sequence-to-sequence framework for synthesizing environmental sounds from onomatopoeic words. We also propose a method of environmental sound synthesis that uses onomatopoeic words together with sound event labels; adding sound event labels enables the model to capture the features of each sound event according to the input label. Our subjective experiments show that the proposed methods achieve higher diversity and naturalness than conventional methods using sound event labels.

    Shouted speech detection using hidden Markov model with rahmonic and mel-frequency cepstrum coefficients

    In recent years, crime-prevention systems have been developed to detect various hazardous situations. In general, such systems monitor situations using image information recorded by cameras, but situations in blind spots are difficult to detect. To address this problem, acoustic information produced in such situations must be utilized in addition to image information. Our previous study showed that two acoustic features, rahmonic and mel-frequency cepstrum coefficients (MFCCs), are effective for detecting shouted speech. The rahmonic is a subharmonic of the fundamental frequency in the cepstrum domain, and MFCCs are the coefficients that collectively make up the mel-frequency cepstrum. In the previous method, a shouted-speech model was constructed from these features using a Gaussian mixture model (GMM). However, a GMM has difficulty representing temporal changes in the speech features. In this study, we extend the previous method with a hidden Markov model (HMM), whose state transitions represent those temporal changes. In objective experiments, the proposed HMM-based method achieved higher shouted-speech detection performance than the conventional GMM-based method.
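    The motivation for moving from a GMM to an HMM, namely that only a model with state transitions can respond to the temporal order of the feature frames, can be illustrated with a toy forward-algorithm computation. All matrices below are hypothetical, not the paper's trained models:

```python
import numpy as np

# Per-frame likelihoods of 4 feature frames under 2 hypothetical hidden
# states (e.g. "onset" and "sustain" of a shout).
B = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.8],
              [0.1, 0.9]])
A = np.array([[0.9, 0.1],   # state-transition matrix: states tend to persist,
              [0.0, 1.0]])  # and state 1 never returns to state 0
pi = np.array([1.0, 0.0])   # always start in state 0

def forward_likelihood(obs):
    """Total sequence likelihood via the HMM forward algorithm."""
    alpha = pi * obs[0]
    for b in obs[1:]:
        alpha = (alpha @ A) * b
    return alpha.sum()

ordered = forward_likelihood(B)          # onset frames before sustain frames
shuffled = forward_likelihood(B[::-1])   # same frames in reversed order
print(ordered, shuffled)  # the HMM scores the consistent order higher

# By contrast, a frame-independent mixture model scores each frame alone,
# so the product of per-frame likelihoods is identical for any frame order:
mix = B @ np.array([0.5, 0.5])
assert np.isclose(np.prod(mix), np.prod(mix[::-1]))
```

    This is the property the abstract appeals to: a GMM is blind to frame order, while the HMM's transition matrix makes temporally consistent sequences more likely.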