
    Effective semi-supervised learning strategies for automatic sentence segmentation

    The primary objective of the sentence segmentation process is to determine sentence boundaries in the stream of words output by an automatic speech recognizer. Statistical methods developed for sentence segmentation require a significant amount of labeled data, which is time-consuming, labor-intensive, and expensive to obtain. In this work, we propose new multi-view semi-supervised learning strategies for the sentence boundary classification problem using lexical, prosodic, and morphological information. The aim is to find effective semi-supervised machine learning strategies when only small sets of sentence-boundary-labeled data are available. We primarily investigate two semi-supervised learning approaches, called self-training and co-training. Different example selection strategies were also used for co-training, namely agreement, disagreement, and self-combined. Furthermore, we propose three-view and committee-based algorithms incorporating the agreement, disagreement, and self-combined strategies using three disjoint feature sets. We present comparative results of the different learning strategies on the sentence segmentation task. The experimental results show that sentence segmentation performance can be substantially improved using the multi-view learning strategies we propose, since the data sets can be represented by three redundantly sufficient and disjoint feature sets. We show that the proposed strategies improve the average baseline F-measure from 67.66% to 75.15% for Turkish and from 64.84% to 66.32% for English when only a small set of manually labeled data is available.

    This material is based upon work supported by the Scientific and Technological Research Council of Turkey (TUBITAK) (Project Numbers: 107E182 and 111E228), the Isik University Scientific Research Projects Fund (Project Numbers: 09A301 and 14A201), TUBITAK BIDEB, and the J. William Fulbright Post-Doctoral Research Fellowship, with research conducted at SRI International, Speech Technology and Research (STAR) Lab., Menlo Park, CA, USA, and the International Computer Science Institute (ICSI) Speech Group, University of California at Berkeley, CA, USA. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies. The authors thank Gokhan Tur, Dilek Hakkani-Tur, Benoit Favre, Sebastien Cuendet, Murat Saraclar, Siddika Parlak, Erinc Dikici, Izel D. Revidi, Cenk Demiroglu, Fatih Ozaydin, and the Bogazici University Signal and Image Processing (BUSIM) Group for many helpful discussions. The authors also thank the anonymous reviewers for their useful comments on an earlier version of this paper.
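    To make the co-training idea described in the abstract concrete, the sketch below illustrates two-view co-training with agreement-based example selection: classifiers trained on separate feature views label unlabeled examples, and only examples on which both views confidently agree are added to the labeled set. This is a minimal illustration, not the paper's implementation; the two views (e.g. lexical and prosodic feature matrices), the logistic-regression classifier, the pool size, and the confidence threshold are all illustrative assumptions.

    # A minimal co-training sketch with agreement-based example selection.
    # The feature views, classifier, pool size, and threshold are assumptions
    # for illustration, not the paper's exact experimental setup.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(X_views, y_labeled, labeled_idx, unlabeled_idx,
                 rounds=10, pool_size=200, threshold=0.9):
        """Two-view co-training for binary sentence-boundary classification.

        X_views: list of two feature matrices over the same examples,
                 e.g. a lexical view and a prosodic view.
        """
        labeled_idx = list(labeled_idx)
        unlabeled_idx = list(unlabeled_idx)
        y = dict(zip(labeled_idx, y_labeled))

        for _ in range(rounds):
            if not unlabeled_idx:
                break
            # Train one classifier per view on the current labeled set.
            idx = np.array(labeled_idx)
            labels = np.array([y[i] for i in labeled_idx])
            models = []
            for X in X_views:
                clf = LogisticRegression(max_iter=1000)
                clf.fit(X[idx], labels)
                models.append(clf)

            # Score a pool of unlabeled examples under each view.
            pool = unlabeled_idx[:pool_size]
            probs = [m.predict_proba(X[pool]) for m, X in zip(models, X_views)]

            # Agreement selection: keep examples where both views confidently
            # predict the same label, then move them to the labeled set.
            newly_labeled = []
            for j, i in enumerate(pool):
                p0, p1 = probs[0][j], probs[1][j]
                l0, l1 = int(np.argmax(p0)), int(np.argmax(p1))
                if l0 == l1 and min(p0[l0], p1[l1]) >= threshold:
                    y[i] = l0
                    newly_labeled.append(i)
            if not newly_labeled:
                break
            labeled_idx.extend(newly_labeled)
            added = set(newly_labeled)
            unlabeled_idx = [i for i in unlabeled_idx if i not in added]
        return models

    The paper's disagreement strategy would instead select examples on which the views differ (so each view teaches the other where it is weak), and the three-view, committee-based variants extend the same loop to three disjoint feature sets with majority voting.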