
    Unsupervised Dialogue Act Induction using Gaussian Mixtures

    This paper introduces a new unsupervised approach for dialogue act induction. Given a sequence of dialogue utterances, the task is to assign them labels representing their function in the dialogue. Utterances are represented as real-valued vectors encoding their meaning. We model the dialogue as a Hidden Markov Model with emission probabilities estimated by Gaussian mixtures, and we use Gibbs sampling for posterior inference. We present results on the standard Switchboard-DAMSL corpus. Our algorithm achieves promising results compared with strong supervised baselines and outperforms other unsupervised algorithms. Comment: Accepted to EACL 201
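    A rough illustration of the modeling setup described above (an HMM whose emissions are Gaussian mixtures over utterance vectors) is sketched below. It uses hmmlearn's GMMHMM, which is trained with EM rather than the Gibbs sampler used in the paper; the number of acts, the mixture size, and the random utterance embeddings are placeholder assumptions.

```python
# Sketch only: the paper infers the model with Gibbs sampling; hmmlearn's GMMHMM
# uses EM instead. The utterance vectors, K, and M are illustrative placeholders.
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)

# Placeholder data: each dialogue is a sequence of real-valued utterance embeddings.
dialogues = [rng.normal(size=(n, 50)) for n in (12, 8, 15)]
X = np.vstack(dialogues)               # (total_utterances, embedding_dim)
lengths = [len(d) for d in dialogues]  # sequence boundaries for the HMM

K = 10   # assumed number of dialogue-act clusters (hidden states)
M = 3    # assumed number of Gaussian mixture components per state

model = GMMHMM(n_components=K, n_mix=M, covariance_type="diag",
               n_iter=50, random_state=0)
model.fit(X, lengths)

# Each utterance receives an induced dialogue-act cluster label.
act_labels = model.predict(X, lengths)
print(act_labels[:20])
```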

    Modulator for 4-level Flying Capacitor Converter with Balancing Control in the Closed Loop

    This paper presents a modulator with active voltage balancing control for a three-phase, four-level flying capacitor (FLC) converter-based electric motor drive intended for applications supplied directly from a 6 kV AC grid. It describes a modulation algorithm for the FLC converter that uses phase-shifted PWM, with the flying-capacitor voltages balanced by P controllers in a closed loop. The proposed control was verified by experiments carried out on a down-scaled drive prototype with a rated power of 35 kVA.
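    A minimal sketch of the kind of scheme described above (phase-shifted carriers plus proportional correction of the flying-capacitor voltage errors) follows. The carrier count follows from the four-level topology, but the gains, reference values, and the simplified cell-to-capacitor coupling are illustrative assumptions, not the parameters of the paper's modulator.

```python
# Illustrative sketch only: phase-shifted PWM for one phase of a 4-level flying
# capacitor converter with simple proportional balancing of the two flying
# capacitors. Vdc, the gain, and the coupling model are placeholder assumptions.
import numpy as np

VDC = 6000.0     # assumed DC-link voltage [V]
F_SW = 1000.0    # assumed carrier (switching) frequency [Hz]
KP = 0.02        # assumed P-controller gain for balancing
N_CELLS = 3      # 4-level FLC -> 3 cells, 3 phase-shifted carriers

def carriers(t):
    """Three triangular carriers in [-1, 1], phase-shifted by one third of the period."""
    x = (t * F_SW + np.arange(N_CELLS) / N_CELLS) % 1.0
    return 2.0 * np.abs(2.0 * x - 1.0) - 1.0

def balanced_duties(d_ref, v_c, i_phase):
    """Per-cell duties: common reference plus small P corrections that steer
    charge into or out of the two flying capacitors (simplified coupling)."""
    v_ref = np.array([VDC / 3.0, 2.0 * VDC / 3.0])   # target capacitor voltages
    err = v_ref - np.asarray(v_c)                    # balancing errors
    s = np.sign(i_phase)                             # current direction decides charge flow
    corr = KP / VDC * s * np.array([err[0], err[1] - err[0], -err[1]])
    return np.clip(d_ref + corr, 0.0, 1.0)

def switch_states(t, d_ref, v_c, i_phase):
    """Compare the corrected duties against the phase-shifted carriers."""
    d = balanced_duties(d_ref, v_c, i_phase)
    return (2.0 * d - 1.0 >= carriers(t)).astype(int)

# Example: one sampling instant with slightly unbalanced capacitors.
print(switch_states(t=0.25e-3, d_ref=0.6, v_c=[1900.0, 4100.0], i_phase=35.0))
```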

    HPS: High precision stemmer

    Research into unsupervised stemming has resulted, in the past few years, in methods that are reliable and perform well. Our approach further shifts the boundaries of the state of the art by providing more accurate stemming results. The approach builds a stemmer in two stages. In the first stage, a clustering-based stemming algorithm, which exploits the lexical and semantic information of words, is used to prepare large-scale training data for the second-stage algorithm. The second-stage algorithm uses a maximum entropy classifier; stemming-specific features help the classifier decide when and how to stem a particular word. In our research, we have pursued the goal of creating a multi-purpose stemming tool. Its design opens up possibilities of solving non-traditional tasks such as approximating lemmas or improving language modeling, while we still aim at very good results in the traditional task of information retrieval. The conducted tests reveal exceptional performance in all of the above-mentioned tasks. Our stemming method is compared with three state-of-the-art statistical algorithms and one rule-based algorithm on corpora in Czech, Slovak, Polish, Hungarian, Spanish, and English. In the tests, our algorithm excels in stemming previously unseen words (words that are not present in the training set). Moreover, our approach demands very little text data for training when compared with competing unsupervised algorithms.
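    The two-stage design described above, where the first stage produces noisy supervision for a discriminative second stage, might be sketched roughly as follows. The crude prefix-grouping first stage, the feature set, and the use of scikit-learn's LogisticRegression in place of a maximum entropy classifier are all assumptions made for illustration.

```python
# Rough sketch of a two-stage stemmer: a crude first stage produces (word, stem)
# training pairs, and a second-stage classifier learns how many characters to
# strip. LogisticRegression stands in for the maximum entropy classifier; the
# first-stage heuristic and the features are illustrative assumptions only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def first_stage(words):
    """Placeholder for the clustering-based stemmer: words sharing a long common
    prefix are grouped and that prefix is taken as the stem."""
    pairs = []
    for w in words:
        group = [v for v in words if v != w and len(w) >= 5 and v[:5] == w[:5]]
        pairs.append((w, w[:5] if group else w))
    return pairs

def features(word):
    """Stemming-specific features: suffixes of several lengths plus word length."""
    feats = {f"suf{k}": word[-k:] for k in range(1, 4)}
    feats["len"] = len(word)
    return feats

def train(words):
    pairs = first_stage(words)
    X = [features(w) for w, _ in pairs]
    y = [len(w) - len(s) for w, s in pairs]     # how many characters to strip
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)
    return vec, clf

def stem(word, vec, clf):
    cut = int(clf.predict(vec.transform([features(word)]))[0])
    return word[:len(word) - cut] if cut else word

vec, clf = train(["working", "worked", "workers", "played", "playing", "plays"])
print(stem("playing", vec, clf))
```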

    Unsupervised joint PoS tagging and stemming for agglutinative languages

    This is an accepted manuscript of an article published by the Association for Computing Machinery (ACM) in ACM Transactions on Asian and Low-Resource Language Information Processing on 25/01/2019, available online at https://doi.org/10.1145/3292398; the accepted version may differ from the final published version.
    The number of possible word forms is theoretically infinite in agglutinative languages. This raises the out-of-vocabulary (OOV) issue for part-of-speech (PoS) tagging in agglutinative languages. Since inflectional morphology does not change the PoS tag of a word, we propose to learn stems and PoS tags simultaneously, aiming to overcome the sparsity problem by reducing word forms to their stems. We adopt a fully unsupervised Bayesian model: a Hidden Markov Model for PoS tagging in which stems are emitted by the hidden states. Several versions of the model are introduced in order to observe the effects of different dependencies throughout the corpus, such as the dependency between stems and PoS tags or between PoS tags and affixes. Additionally, we use neural word embeddings to estimate the semantic similarity between the word form and the stem, and we use this similarity as prior information to discover the actual stem of a word, since inflection does not change the meaning of a word. We compare our models with other unsupervised stemming and PoS tagging models on Turkish, Hungarian, Finnish, Basque, and English. The results show that a joint model for PoS tagging and stemming improves on an independent PoS tagger and stemmer in agglutinative languages. This research is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under project number EEEAG-115E464.
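    To make the factorization above concrete, the sketch below scores candidate (tag, stem + affix) analyses of a single word with an HMM-style product of tag-transition, stem-emission, and affix-emission probabilities, multiplied by an embedding-similarity prior. This is only an illustration of the scoring idea, not the paper's Gibbs sampler; the counts, smoothing, and toy embeddings are placeholder assumptions.

```python
# Illustrative sketch only (not the paper's sampler): score candidate
# (PoS tag, stem+affix split) pairs for one word as
# P(tag | prev_tag) * P(stem | tag) * P(affix | tag) * sim(word, stem).
# All counts, hyperparameters, and the tiny embedding table are placeholders.
import numpy as np

ALPHA = 0.1   # assumed symmetric Dirichlet smoothing

def prob(count, total, vocab):
    return (count + ALPHA) / (total + ALPHA * vocab)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def score_candidates(word, prev_tag, counts, emb, tags):
    """Return unnormalized scores for every (tag, stem, affix) candidate."""
    scores = {}
    for tag in tags:
        for cut in range(2, len(word) + 1):          # stems of length >= 2
            stem, affix = word[:cut], word[cut:]
            p = (prob(counts["trans"].get((prev_tag, tag), 0),
                      counts["tag"].get(prev_tag, 0), len(tags))
                 * prob(counts["stem"].get((tag, stem), 0),
                        counts["tag"].get(tag, 0), counts["n_stems"])
                 * prob(counts["affix"].get((tag, affix), 0),
                        counts["tag"].get(tag, 0), counts["n_affixes"]))
            sim = cosine(emb[word], emb.get(stem, emb[word]))  # semantic prior
            scores[(tag, stem, affix)] = p * max(sim, 1e-3)
    return scores

# Toy example with made-up counts and random embeddings.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=16) for w in ["evlerde", "ev", "evl", "evler"]}
counts = {"trans": {("Noun", "Noun"): 3}, "tag": {"Noun": 10, "Verb": 4},
          "stem": {("Noun", "ev"): 2}, "affix": {("Noun", "lerde"): 1},
          "n_stems": 50, "n_affixes": 20}
best = max(score_candidates("evlerde", "Noun", counts, emb, ["Noun", "Verb"]).items(),
           key=lambda kv: kv[1])
print(best)
```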

    A resource-light method for cross-lingual semantic textual similarity

    Recognizing semantically similar sentences or paragraphs across languages is beneficial for many tasks, ranging from cross-lingual information retrieval and plagiarism detection to machine translation. Recently proposed methods for predicting cross-lingual semantic similarity of short texts, however, make use of tools and resources (e.g., machine translation systems, syntactic parsers, or named entity recognition) that for many languages (or language pairs) do not exist. In contrast, we propose an unsupervised and very resource-light approach for measuring semantic similarity between texts in different languages. To operate in the bilingual (or multilingual) space, we project continuous word vectors (i.e., word embeddings) from one language to the vector space of the other language via a linear translation model. We then align words according to the similarity of their vectors in the bilingual embedding space and investigate different unsupervised measures of semantic similarity that exploit bilingual embeddings and word alignments. Requiring only a limited-size set of word translation pairs between the languages, the proposed approach is applicable to virtually any pair of languages for which there exists a sufficiently large corpus, required to learn monolingual word embeddings. Experimental results on three different datasets for measuring semantic textual similarity show that our simple resource-light approach reaches performance close to that of supervised and resource-intensive methods, displaying stability across different language pairs. Furthermore, we evaluate the proposed method on two extrinsic tasks, namely extraction of parallel sentences from comparable corpora and cross-lingual plagiarism detection, and show that it yields performance comparable to that of complex resource-intensive state-of-the-art models for the respective tasks. (C) 2017 Published by Elsevier B.V.
    Part of the work presented in this article was performed during the second author's research visit to the University of Mannheim, supported by a Contact Fellowship awarded by the DAAD scholarship program "STIBET Doktoranden". The research of the last author has been carried out in the framework of the SomEMBED project (TIN2015-71147-C2-1-P). Furthermore, this work was partially funded by the Junior-professor funding programme of the Ministry of Science, Research and the Arts of the state of Baden-Wurttemberg (project "Deep semantic models for high-end NLP application").
    Glavas, G.; Franco-Salvador, M.; Ponzetto, S. P.; Rosso, P. (2018). A resource-light method for cross-lingual semantic textual similarity. Knowledge-Based Systems, 143:1-9. https://doi.org/10.1016/j.knosys.2017.11.041
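    The core projection step described above (a linear translation model learned from a small set of word translation pairs, followed by word alignment in the shared space) could be sketched as follows. The least-squares mapping, the greedy alignment, and the toy vectors are assumptions made for illustration, not the exact formulation in the paper.

```python
# Sketch, not the paper's exact method: learn a linear map from source-language
# embeddings to the target space via least squares over dictionary pairs, then
# score two sentences by greedily aligning words on cosine similarity.
import numpy as np

def learn_translation_matrix(src_dict_vecs, tgt_dict_vecs):
    """W minimizing ||src @ W - tgt||^2 over the word-translation pairs."""
    W, *_ = np.linalg.lstsq(src_dict_vecs, tgt_dict_vecs, rcond=None)
    return W

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def cross_lingual_similarity(sent_src, sent_tgt, emb_src, emb_tgt, W):
    """Average of best-match cosine similarities between projected source words
    and target words (a simple symmetric greedy alignment)."""
    proj = [emb_src[w] @ W for w in sent_src if w in emb_src]
    tgt = [emb_tgt[w] for w in sent_tgt if w in emb_tgt]
    if not proj or not tgt:
        return 0.0
    s2t = np.mean([max(cos(p, t) for t in tgt) for p in proj])
    t2s = np.mean([max(cos(t, p) for p in proj) for t in tgt])
    return float((s2t + t2s) / 2.0)

# Toy example with random embeddings and a three-pair "dictionary".
rng = np.random.default_rng(2)
emb_de = {w: rng.normal(size=8) for w in ["haus", "hund", "läuft"]}
emb_en = {w: rng.normal(size=8) for w in ["house", "dog", "runs"]}
pairs = [("haus", "house"), ("hund", "dog"), ("läuft", "runs")]
W = learn_translation_matrix(np.array([emb_de[s] for s, _ in pairs]),
                             np.array([emb_en[t] for _, t in pairs]))
print(cross_lingual_similarity(["der", "hund", "läuft"], ["the", "dog", "runs"],
                               emb_de, emb_en, W))
```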

    Control unit based on 8-bit and 16-bit PIC microcontroller platforms

    This work deals with the design and implementation of a control unit based on two 8-bit PIC microcontrollers with an embedded programmable logic controller (PLC). The device is designed primarily for controlling a three-phase asynchronous motor.

    One-time and regular investments through insurance products

    The thesis analyzes products on the Czech insurance market with respect to investment opportunities and works through different scenarios for potential clients of insurance companies. It describes the origin, history, and purpose of insurance and the types of insurance products and their characteristics, gives a description of selected investment products of insurance companies, and compares the performance of selected investment portfolios.

    Unsupervised methods for language modeling: technical report no. DCSE/TR-2012-03

    Language models are crucial for many tasks in NLP, and n-grams are the best way to build them. Huge effort is being invested in improving n-gram language models. By introducing external information (morphology, syntax, partitioning into documents, etc.) into the models, a significant improvement can be achieved. The models can, however, also be improved with no external information, and smoothing is an excellent example of such an improvement. This thesis summarizes the state-of-the-art approaches to unsupervised language modeling with emphasis on inflectional languages, which are particularly hard to model. It focuses on methods that can discover hidden patterns that are already present in the training corpora. These patterns can be very useful for enhancing the performance of language modeling, and they do not require additional information sources.
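    Since smoothing is singled out above as an improvement that needs no external information, here is a minimal sketch of a bigram language model with interpolated add-k smoothing; the toy corpus, the value of k, and the interpolation weight are illustrative assumptions.

```python
# Minimal sketch: a bigram language model whose probabilities are smoothed by
# interpolating an add-k bigram estimate with a unigram estimate. The toy
# corpus, k, and the interpolation weight LAMBDA are illustrative assumptions.
import math
from collections import Counter

K, LAMBDA = 0.5, 0.7

def train(sentences):
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def prob(word, prev, unigrams, bigrams):
    vocab = len(unigrams)
    p_uni = (unigrams[word] + K) / (sum(unigrams.values()) + K * vocab)
    p_bi = (bigrams[(prev, word)] + K) / (unigrams[prev] + K * vocab)
    return LAMBDA * p_bi + (1.0 - LAMBDA) * p_uni

def perplexity(sentence, unigrams, bigrams):
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    logp = sum(math.log(prob(w, p, unigrams, bigrams))
               for p, w in zip(tokens, tokens[1:]))
    return math.exp(-logp / (len(tokens) - 1))

uni, bi = train(["the cat sat", "the dog sat", "a cat ran"])
print(perplexity("the cat ran", uni, bi))
```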

    Linear transformations for cross-lingual semantic textual similarity

    Cross-lingual semantic textual similarity systems estimate the degree of meaning similarity between two sentences, each in a different language. State-of-the-art algorithms usually employ machine translation and combine a vast number of features, making the approach strongly supervised, resource rich, and difficult to use for poorly resourced languages. In this paper, we study linear transformations, which project monolingual semantic spaces into a shared space using bilingual dictionaries. We propose a novel transformation that builds on the best ideas from prior work. We experiment with unsupervised techniques for sentence similarity based only on semantic spaces, and we show that they can be significantly improved by word weighting. Our transformation outperforms other methods and, together with word weighting, leads to very promising results on several datasets in different languages.
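    To make the pipeline concrete, the sketch below shows one common family of linear transformations (an orthogonal map obtained from the SVD of the dictionary cross-covariance) combined with weighted averaging of word vectors into sentence vectors. The orthogonality constraint, the weighting hook (left uniform in the toy example), and the data are assumptions chosen for illustration, not necessarily the transformation proposed in the paper.

```python
# Sketch only: one possible linear transformation (orthogonal, via SVD of the
# dictionary cross-covariance) plus word-weighted sentence vectors for
# cross-lingual sentence similarity. Weights and toy data are placeholders.
import numpy as np

def orthogonal_map(src_dict_vecs, tgt_dict_vecs):
    """Orthogonal W best aligning the dictionary pairs (Procrustes solution)."""
    u, _, vt = np.linalg.svd(src_dict_vecs.T @ tgt_dict_vecs)
    return u @ vt

def sentence_vector(tokens, emb, weights):
    """Weighted average of word vectors; out-of-vocabulary words are skipped."""
    vecs = [weights.get(t, 1.0) * emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(next(iter(emb.values())).shape)

def similarity(sent_src, sent_tgt, emb_src, emb_tgt, W, w_src, w_tgt):
    a = sentence_vector(sent_src, emb_src, w_src) @ W   # project into target space
    b = sentence_vector(sent_tgt, emb_tgt, w_tgt)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Toy example: random embeddings, a three-pair seed dictionary, uniform weights.
rng = np.random.default_rng(3)
emb_cs = {w: rng.normal(size=8) for w in ["pes", "kočka", "běží"]}
emb_en = {w: rng.normal(size=8) for w in ["dog", "cat", "runs"]}
pairs = [("pes", "dog"), ("kočka", "cat"), ("běží", "runs")]
W = orthogonal_map(np.array([emb_cs[s] for s, _ in pairs]),
                   np.array([emb_en[t] for _, t in pairs]))
print(similarity(["pes", "běží"], ["dog", "runs"], emb_cs, emb_en, W, {}, {}))
```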