16 research outputs found

    Memory characteristics and switching mechanisms of organic bistable devices containing metal nanoparticles

    No full text
    Thesis (Master's) -- Seoul National University Graduate School: Department of Materials Science and Engineering, 2007

    A study on the furnace-welding characteristics of cold-rolled steel sheets formed by deep drawing

    No full text
    Thesis (Master's) -- Ajou University Graduate School of Industry: Department of Mechanical Engineering, 2002

    Preliminary construction of interdisciplinary integrated unit based on KDB model

    No full text

    Speaker-adaptation-based speech emotion recognition using a multi-level data selection method

    No full text
    학위논문(석사) - 한국과학기술원 : 전산학과, 2011.2, [ vi, 43 p. ]Nowadays, devices are regarded as partners rather than simple machines as users are able to personalize the devices. This tendency is being consolidated since mobile devices such as smart phones and tablet personal computers provide more advanced features which can understand a user`s intention and emotional states by analyzing voice and facial expressions. Understanding the emotional states plays such an important role in Human-Computer Interaction (HCI) since it enables a user to feel more comfortable and friendly interaction and appropriate responses from the devices depending on the emotional states of a user. The emotional information can be obtained from speech, facial expressions, gestures, biological features and so forth. Among these indicators, speech is a relatively natural and intuitive interface for interaction with devices. For these reasons, Speech Emotion Recognition (SER) can be an effective technology required for HCI along with speech recognition. Many researchers have introduced various approaches for SER tasks, but unfortunately, they have failed to achieve satisfactory performance due to two critical factors. First, different speakers rarely express emotional states in the same way. Second, several pairs of emotions, such as sadness and boredom, have acoustically similar characteristics, and this ambiguity causes unreliable recognition results. This dissertation aims at increasing the SER performance by resolving the domain-oriented characteristics. To deal with the large inter-speaker variations, speaker adaptation techniques is applied to SER. In this approach, Speaker Independent (SI) models are adapted to a relatively small amount of data collected from a specific speaker, and then the adapted models represent the acoustic characteristics of a target speaker. 
This dissertation focuses on unsupervised adaptation which does not require pre-define emotion labels since manual labeling is unpractical and somehow unreliable. The proposed ...한국과학기술원 : 전산학과
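The adaptation step described in the abstract, nudging speaker-independent (SI) model parameters toward a small amount of target-speaker data, can be sketched as MAP-style mean adaptation of a diagonal-covariance Gaussian mixture model. This is a common realization of the general idea, not necessarily the exact method used in the thesis; the function name, the relevance factor `tau`, and the diagonal-covariance assumption are all illustrative.

```python
import numpy as np

def map_adapt_means(means, covars, weights, frames, tau=10.0):
    """MAP-adapt SI GMM means toward a target speaker's feature frames.

    means:   (K, D) component means of the speaker-independent model
    covars:  (K, D) diagonal covariances
    weights: (K,)   mixture weights
    frames:  (N, D) adaptation features from the target speaker
    tau:     relevance factor; larger values keep means closer to the SI prior
    """
    # Per-frame, per-component log-likelihoods for diagonal Gaussians.
    diff = frames[:, None, :] - means[None, :, :]                  # (N, K, D)
    log_prob = -0.5 * np.sum(diff**2 / covars
                             + np.log(2.0 * np.pi * covars), axis=2)
    log_prob += np.log(weights)

    # Soft posterior assignment of each frame to each component.
    log_prob -= log_prob.max(axis=1, keepdims=True)
    post = np.exp(log_prob)
    post /= post.sum(axis=1, keepdims=True)                        # (N, K)

    # Soft counts and per-component data means.
    n_k = post.sum(axis=0)                                         # (K,)
    x_bar = post.T @ frames / np.maximum(n_k[:, None], 1e-8)       # (K, D)

    # Interpolate between the data mean and the SI prior mean:
    # components that saw many frames move toward the speaker's data,
    # components with little evidence stay near the SI model.
    alpha = (n_k / (n_k + tau))[:, None]
    return alpha * x_bar + (1.0 - alpha) * means
```

Because the interpolation weight depends only on soft occupancy counts, no emotion labels are needed, which matches the unsupervised setting the abstract motivates: the same frames used for recognition can drive the adaptation.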