
    Prominence Detection in Presentations Using Accent Components (アクセント成分を用いた講演の強調検出)

    We propose a method for detecting prominence in Japanese presentations. Prominence is not defined clearly enough in Japanese to be detected quantitatively, because it is treated only qualitatively in phonetics and Japanese language education. In order to quantify prominence and propose features for detecting it, we reviewed the literature and clarified how words are emphasized in Japanese sentences. Based on this knowledge, we analyzed acoustic features (e.g., F0, energy, accent component, pause, and speech rate) in a set of utterances containing an emphasized word. As a result, we propose using the accent component and ∆ accent as features for detecting prominence. In an evaluation experiment on prominence detection, we used the intensity of the accent component and its delta features. The experimental results show a detection accuracy of 0.82, which is higher than that achieved in an experiment using features proposed for prominence detection in a stress-accent language. In an evaluation experiment using all features, ∆ accent was the most effective. This result is consistent with the knowledge that the Japanese accent is a pitch accent and that a word is emphasized by suppressing the pitch accent of the words before and after the prominence. Therefore, the proposed method is suggested to be an effective method for detecting prominence.
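    The key features named in the abstract are the accent-component intensity and its delta (∆ accent). The sketch below is an illustration only: it computes a standard regression-based delta over a hypothetical frame-level accent-intensity contour and flags frames where both the intensity and its delta are high. The window length, thresholds, and the contour itself are assumptions, not the paper's actual extraction pipeline.

```python
import numpy as np

def delta(contour: np.ndarray, window: int = 2) -> np.ndarray:
    """Regression-based delta of a frame-level contour (same length as the input)."""
    padded = np.pad(contour, window, mode="edge")
    denom = 2 * sum(n * n for n in range(1, window + 1))
    out = np.zeros_like(contour, dtype=float)
    for t in range(len(contour)):
        out[t] = sum(
            n * (padded[t + window + n] - padded[t + window - n])
            for n in range(1, window + 1)
        ) / denom
    return out

# Hypothetical accent-component intensity, one value per analysis frame.
accent_intensity = np.array([0.1, 0.1, 0.2, 0.6, 0.9, 0.8, 0.3, 0.1, 0.1])
d_accent = delta(accent_intensity)

# Naive illustration: candidate prominent frames have high intensity and a rising delta.
candidates = (accent_intensity > 0.5) & (d_accent > 0.05)
print(candidates)
```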

    Prosodic Representations of Prominence Classification Neural Networks and Autoencoders Using Bottleneck Features

    Prominence perception has been known to correlate with a complex interplay of the acoustic features of energy, fundamental frequency, spectral tilt, and duration. The contribution and importance of each of these features in distinguishing between prominent and non-prominent units in speech is not always easy to determine, and more so, the prosodic representations that humans and automatic classifiers learn have been difficult to interpret. This work focuses on examining the acoustic prosodic representations that binary prominence classification neural networks and autoencoders learn for prominence. We investigate the complex features learned at different layers of the network as well as the 10-dimensional bottleneck features (BNFs), for the standard acoustic prosodic correlates of prominence separately and in combination. We analyze and visualize the BNFs obtained from the prominence classification neural networks as well as their network activations. The experiments are conducted on a corpus of Dutch continuous speech with manually annotated prominence labels. Our results show that the prosodic representations obtained from the BNFs and higher-dimensional non-BNFs provide good separation of the two prominence categories, with, however, different partitioning of the BNF space for the distinct features, and the best overall separation obtained for F0.
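    As a minimal sketch of the kind of architecture the abstract describes, the code below defines a binary prominence classifier with a 10-dimensional bottleneck layer whose activations can be read out as BNFs. The input dimensionality, hidden size, and activation choices are illustrative assumptions; the paper's actual networks, autoencoders, and Dutch-corpus features may differ.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; not the paper's configuration.
INPUT_DIM = 40       # assumed size of a prosodic feature vector per unit
BOTTLENECK_DIM = 10  # 10-dimensional bottleneck features (BNFs), as in the abstract

class ProminenceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_DIM, 64), nn.ReLU(),
            nn.Linear(64, BOTTLENECK_DIM), nn.ReLU(),  # bottleneck layer
        )
        self.classifier = nn.Linear(BOTTLENECK_DIM, 2)  # prominent vs. non-prominent

    def forward(self, x):
        bnf = self.encoder(x)        # bottleneck activations, kept for later analysis
        logits = self.classifier(bnf)
        return logits, bnf

model = ProminenceClassifier()
x = torch.randn(8, INPUT_DIM)        # a batch of 8 synthetic feature vectors
logits, bnf = model(x)
print(bnf.shape)                     # torch.Size([8, 10])
```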

    Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations

    In this paper we introduce a new natural language processing dataset and benchmark for predicting prosodic prominence from written text. To our knowledge this will be the largest publicly available dataset with prosodic labels. We describe the dataset construction and the resulting benchmark dataset in detail and train a number of different models ranging from feature-based classifiers to neural network systems for the prediction of discretized prosodic prominence. We show that pre-trained contextualized word representations from BERT outperform the other models even with less than 10% of the training data. Finally we discuss the dataset in light of the results and point to future research and plans for further improving both the dataset and methods of predicting prosodic prominence from text. The dataset and the code for the models are publicly available.
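    A common way to frame this task is token-level classification on top of a pre-trained contextualized encoder. The sketch below shows that framing with the Hugging Face transformers library, assuming a generic bert-base-uncased checkpoint and a three-way discretized prominence label set; the checkpoint, labels, and example sentence are illustrative assumptions, and the classifier head is untrained, so this is not the paper's released model or code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed checkpoint and label set (0 = non-prominent, 1 and 2 = increasing prominence).
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=3)

text = "and tomorrow it will probably rain"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, num_wordpieces, 3)

predictions = logits.argmax(dim=-1).squeeze(0)
for token, label in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(f"{token}\t{label.item()}")
# Word-piece tokens still need to be mapped back to words, and the labels are
# meaningless until the head is fine-tuned on prominence-annotated data.
```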

    Cross-linguistic Influences on Sentence Accent Detection in Background Noise.

    This paper investigates whether sentence accent detection in a non-native language depends on the (relative) similarity of the prosodic cues to accent between the non-native and the native language, and whether cross-linguistic differences in the use of local and more widely distributed (i.e., non-local) cues to sentence accent lead to differential effects of background noise on sentence accent detection in a non-native language. We compared Dutch, Finnish, and French non-native listeners of English, whose cueing and use of prosodic prominence is progressively further removed from English, and compared their results on a phoneme monitoring task in different levels of noise and in a quiet condition to those of native listeners. Overall phoneme detection performance was high for both the native and the non-native listeners, but deteriorated to the same extent in the presence of background noise. Crucially, the relative similarity between the prosodic cues to sentence accent of one's native language and those of a non-native language does not determine the ability to perceive and use sentence accent for speech perception in that non-native language. Moreover, proficiency in the non-native language is not a straightforward predictor of sentence accent perception performance, although high proficiency in a non-native language can seemingly overcome certain differences at the prosodic level between the native and non-native language. Instead, performance is determined by the extent to which listeners rely on local cues (English and Dutch) versus cues that are more widely distributed (Finnish and French), as more distributed cues survive the presence of background noise better.