
    Neural Word Segmentation with Rich Pretraining

    Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model and pretraining the most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive with the best methods on six benchmarks. (Comment: Accepted by ACL 2017.)
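
    As a rough illustration of the raw-text pretraining mentioned above, the sketch below trains character embeddings with gensim's word2vec on an unsegmented corpus. The corpus file name and all hyperparameters are assumptions for illustration; the paper's own pretraining targets a richer submodule using punctuation, automatic segmentation and POS rather than this simple setup.

# Minimal sketch (assumed setup): pretrain character embeddings from raw,
# unsegmented text with gensim's word2vec. Corpus path and hyperparameters
# are illustrative, not the paper's configuration.
from gensim.models import Word2Vec

def char_sequences(path):
    """Yield each non-empty line of raw text as a list of characters."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield list(line)

sentences = list(char_sequences("raw_corpus.txt"))  # hypothetical corpus file
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1, epochs=5)

# The resulting vectors could initialize the character-embedding layer of a
# neural segmenter before supervised training on segmented data.
model.wv.save("char_embeddings.kv")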

    Efficient Multi-Template Learning for Structured Prediction

    Conditional random fields (CRFs) and structural support vector machines (structural SVMs) are two state-of-the-art methods for structured prediction that capture the interdependencies among output variables. The success of these methods is attributed to the fact that their discriminative models are able to account for overlapping features over the whole input observation. These features are usually generated by applying a given set of templates to labeled data, but improper templates may lead to degraded performance. To alleviate this issue, we propose a novel multiple-template learning paradigm that learns the structured predictor and the importance of each template simultaneously, so that hundreds of arbitrary templates can be added to the learning model without caution. This paradigm can be formulated as a special multiple kernel learning problem with an exponential number of constraints. We then introduce an efficient cutting plane algorithm that solves this problem in the primal and present its convergence analysis. We also evaluate the proposed learning paradigm on two widely studied structured prediction tasks, \emph{i.e.} sequence labeling and dependency parsing. Extensive experimental results show that the proposed method outperforms CRFs and structural SVMs by exploiting the importance of each template. Our complexity analysis and empirical results also show that the proposed method is more efficient than OnlineMKL on very sparse and high-dimensional data. We further extend this paradigm to structured prediction with generalized $p$-block norm regularization for $p > 1$, and experiments show competitive performance when $p \in [1, 2)$.
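
    To make the template idea concrete, below is a minimal sketch of feature templates generating overlapping features whose contributions are scaled by per-template importance weights. The template set, feature naming and uniform initial weights are illustrative assumptions; the paper itself learns the template weights through its multiple kernel learning formulation solved with the cutting plane algorithm.

# Minimal sketch (assumed setup): feature templates applied to a token
# sequence, with each template's features scaled by a learned importance
# weight. Templates and the uniform initialization are illustrative only.
from collections import defaultdict

TEMPLATES = {
    "w0":  lambda words, i: f"w0={words[i]}",                                             # current word
    "w-1": lambda words, i: f"w-1={words[i - 1]}" if i > 0 else "w-1=<s>",                # previous word
    "w+1": lambda words, i: f"w+1={words[i + 1]}" if i + 1 < len(words) else "w+1=</s>",  # next word
}

def extract_features(words, i, template_weights):
    """Sparse feature vector {feature: value} for position i; each template's
    feature is weighted by that template's importance."""
    feats = defaultdict(float)
    for name, template in TEMPLATES.items():
        feats[template(words, i)] += template_weights[name]
    return dict(feats)

weights = {name: 1.0 / len(TEMPLATES) for name in TEMPLATES}  # uniform starting weights
print(extract_features(["John", "loves", "Mary"], 1, weights))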

    Speaker Dependent Voice Recognition with Word-Tense Association and Part-of-Speech Tagging

    Extensive research has been conducted on speech recognition and speaker recognition over the past few decades. Speaker recognition deals with identifying the speaker from among multiple speakers and with filtering out the voice of an individual from the background for computational understanding. The more commonly researched method, speech recognition, deals only with computational linguistics. This thesis deals with speaker recognition and natural language processing. The most common speaker recognition systems are text-dependent and identify the speaker after a key word or phrase is uttered. This thesis presents a text-independent speaker recognition system that incorporates the collaborative effort and research of noise filtering, speech segmentation, feature extraction, speaker verification and, finally, partial language modelling. The filtering process was accomplished using fourth-order Butterworth band-pass filters to dampen ambient noise outside the normal speech frequencies of 300 Hz to 3000 Hz. Speech segmentation utilizes Hamming windows to segment the speech, after which speech detection occurs by calculating the short-time energy and zero-crossing rate over a particular time period and distinguishing voiced from unvoiced frames using a threshold. Audio data collected from different people is run consecutively through a speaker training and recognition algorithm, which uses neural networks to create a training group and a target group for the recognition process. The output of the segmentation module is then processed by the neural network to recognize the speaker. Though not implemented here due to database and computational requirements, the last module suggests a new model for the part-of-speech tagging process that combines Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) in a series configuration to achieve higher accuracy. This differs from existing research by diverging from the usual single-model approach or the creation of hybrid ANN and HMM models.
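
    The front end described above can be sketched roughly as follows, assuming NumPy and SciPy; the frame length, hop size and voicing thresholds are illustrative assumptions rather than the thesis's actual parameters, and the neural-network speaker model is omitted.

# Minimal sketch (assumed parameters): fourth-order Butterworth band-pass
# filtering (300-3000 Hz), Hamming-windowed framing, and voiced/unvoiced
# detection from short-time energy and zero-crossing rate.
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(signal, fs, low=300.0, high=3000.0, order=4):
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return lfilter(b, a, signal)

def frame_signal(signal, frame_len, hop):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array(frames) * np.hamming(frame_len)

def voiced_mask(frames, energy_thresh=1e-3, zcr_thresh=0.25):
    energy = np.mean(frames ** 2, axis=1)                                 # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)   # zero-crossing rate
    return (energy > energy_thresh) & (zcr < zcr_thresh)                  # voiced: high energy, low ZCR

fs = 16000
t = np.linspace(0, 1, fs, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)                                  # stand-in for a recorded utterance
frames = frame_signal(bandpass(audio, fs), frame_len=400, hop=160)   # 25 ms frames, 10 ms hop
print(voiced_mask(frames).sum(), "of", len(frames), "frames flagged as voiced")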