
    Structure Learning of Partitioned Markov Networks

    We learn the structure of a Markov Network (MN) between two groups of random variables from joint observations. Since modelling and learning the full MN structure may be hard, learning the links between the two groups directly may be a preferable option. We introduce a novel concept called the \emph{partitioned ratio}, whose factorization directly associates with the Markovian properties of random variables across the two groups. A simple one-shot convex optimization procedure is proposed for learning \emph{sparse} factorizations of the partitioned ratio, and it is theoretically guaranteed to recover the correct inter-group structure under mild conditions. The performance of the proposed method is compared experimentally with state-of-the-art MN structure learning methods using ROC curves. Real applications to analyzing bipartisanship in the US Congress and to pairwise DNA/time-series alignments are also reported. (Comment: camera-ready for ICML 2016; fixed some minor typos.)
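    The abstract does not spell out the estimator itself, so the sketch below shows only a much simpler baseline in the same spirit: neighborhood selection, where each variable of one group is lasso-regressed on the variables of the other group and inter-group links are read off the nonzero coefficients. The names X, Y, and alpha are illustrative assumptions, not from the paper.

```python
# A minimal sketch (not the paper's partitioned-ratio estimator): recovering the
# inter-group edges of a Gaussian MN by neighborhood selection, i.e. lasso-regressing
# each variable of group X on the variables of group Y.
import numpy as np
from sklearn.linear_model import Lasso

def inter_group_edges(X, Y, alpha=0.1, tol=1e-6):
    """X: (n, p) samples of group one, Y: (n, q) samples of group two.
    Returns a boolean (p, q) matrix marking estimated inter-group links."""
    p, q = X.shape[1], Y.shape[1]
    edges = np.zeros((p, q), dtype=bool)
    for j in range(p):
        coef = Lasso(alpha=alpha).fit(Y, X[:, j]).coef_
        edges[j] = np.abs(coef) > tol  # nonzero coefficient -> candidate inter-group edge
    return edges

# Toy example: a single true cross-link between X[:, 0] and Y[:, 0].
rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 3))
X = rng.normal(size=(500, 2))
X[:, 0] += 0.8 * Y[:, 0]
print(inter_group_edges(X, Y))
```

    Sweeping alpha and scoring the recovered links against a known ground-truth pattern is one way to trace ROC curves like those used in the comparison above.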

    Text Normalisation of Dialectal Finnish

    Text normalization is the process of converting non-standard written language into a standardized form. Dialects are one example of non-standard language, and they can differ considerably from the standardized common language. Moreover, Finnish orthography is largely phonemic, which makes it possible to render features of spoken language in writing as well. Especially on informal platforms and in colloquial contexts, such as social media, Finnish speakers may write words the way they would pronounce them in normal speech. Material consisting of such non-standard language can also be found for natural language processing purposes, for example on Twitter. However, natural language processing tools designed for traditional standard-language text do not necessarily achieve the desired results when applied to colloquial material, in which case text normalization can be used as an intermediate step. In the normalization process, the colloquial or otherwise non-standard input text is converted into a standardized written form that natural language processing tools understand better. This work builds on previous research on the normalization of Finnish dialects. Earlier studies have found that character-based BRNN (Bidirectional Recurrent Neural Network) models achieve good results in normalizing Finnish dialects when the input consists of words in chunks of three: the system receives three words at a time, with each word further split into characters separated by spaces. This work uses the same methods and data as the previous research so that the results are comparable. The data come from the Suomen kielen näytteitä (Samples of Spoken Finnish) corpus maintained by the Institute for the Languages of Finland (Kotimaisten kielten keskus), and normalization is carried out with the open-source library OpenNMT. The results of the experiments conducted in this work appear to confirm the findings of the earlier studies, but there are also indications that the neural network models might benefit from inputs consisting of longer chunks. In addition to the BRNN model, other neural network architectures were also tested, but a comparison of Word Error Rate (WER) shows that the BRNN model performs better on the normalization task than the other architectures.
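    As an illustration of the input format described above, here is a minimal Python sketch that splits words into space-separated characters and groups them into chunks of three. The word-boundary symbol "_" is an assumption made for illustration; the thesis and its OpenNMT preprocessing may use a different convention.

```python
# Words are grouped into chunks of three and each word is split into
# space-separated characters, with a separator token between words.
def to_char_chunks(words, chunk_size=3, word_sep="_"):
    chunks = []
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        chunks.append(f" {word_sep} ".join(" ".join(w) for w in chunk))
    return chunks

dialect = "mää meen huomen sinne kyl".split()
for line in to_char_chunks(dialect):
    print(line)
# first chunk: "m ä ä _ m e e n _ h u o m e n"
```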

    Analyzing and Visualizing State Sequences in R with TraMineR

    This article describes the many capabilities offered by the TraMineR toolbox for categorical sequence data. It focuses more specifically on the analysis and rendering of state sequences. Addressed features include the description of sets of sequences by means of transversal aggregated views, the computation of longitudinal characteristics of individual sequences and the measure of pairwise dissimilarities. Special emphasis is put on the multiple ways of visualizing sequences. The core element of the package is the state sequence object in which we store the set of sequences together with attributes such as the alphabet, state labels and the color palette. The functions can then easily retrieve this information to ensure presentation homogeneity across all printed and graphical displays. The article also demonstrates how TraMineR's outcomes give access to advanced analyses such as clustering and statistical modeling of sequence data.
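    TraMineR itself is an R package, so the following is only a conceptual Python sketch of the kind of pairwise dissimilarity it computes: optimal matching, i.e. an edit distance with a constant indel cost and a substitution-cost matrix. The states, costs, and example sequences are illustrative assumptions, not TraMineR's API.

```python
# Optimal-matching dissimilarity between two state sequences: dynamic-programming
# edit distance with a constant indel cost and a substitution-cost lookup.
import numpy as np

def optimal_matching(seq_a, seq_b, sub_cost, indel=1.0):
    """seq_a, seq_b: lists of states; sub_cost: dict[(state, state)] -> cost."""
    n, m = len(seq_a), len(seq_b)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1) * indel
    d[0, :] = np.arange(m + 1) * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if seq_a[i-1] == seq_b[j-1] else sub_cost[(seq_a[i-1], seq_b[j-1])]
            d[i, j] = min(d[i-1, j] + indel,      # delete from seq_a
                          d[i, j-1] + indel,      # insert into seq_a
                          d[i-1, j-1] + sub)      # substitute / match
    return d[n, m]

states = ["employed", "unemployed"]
sub = {(a, b): 2.0 for a in states for b in states if a != b}
s1 = ["employed"] * 4 + ["unemployed"] * 2
s2 = ["employed"] * 3 + ["unemployed"] * 3
print(optimal_matching(s1, s2, sub))  # 2.0: one substitution
```

    Computing this for every pair of sequences yields the pairwise dissimilarity matrix on which clustering and related analyses are based.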

    Transformer-based NMT : modeling, training and implementation

    International trade and industrial collaboration enable countries and regions to concentrate their development on specific industries while making the most of other countries' specializations, which significantly accelerates global development. However, globalization also increases the demand for cross-region communication, and language barriers make deep collaboration between groups speaking different languages difficult, increasing the need for translation. Language technology, specifically Machine Translation (MT), holds the promise of enabling efficient, real-time communication between languages at minimal cost. Although modern hardware delivers translations with very low latency, and although the evolution from Statistical Machine Translation (SMT) to Neural Machine Translation (NMT) with advanced deep learning algorithms has significantly boosted translation quality, current machine translation systems are still far from translating all input accurately. How to further improve state-of-the-art NMT models therefore remains a valuable open research question that has received wide attention.

    In the research presented in this thesis, we first investigate the long-distance relation modeling ability of the state-of-the-art NMT model, the Transformer. We propose to learn source phrase representations and incorporate them into the Transformer translation model, aiming to enhance its ability to capture long-distance dependencies. Second, although previous work (Bapna et al., 2018) suggests that deep Transformers have difficulty converging, we empirically find that the convergence of deep Transformers depends on the interaction between the layer normalization and the residual connections employed to stabilize training. We conduct a theoretical study of how to ensure the convergence of Transformers, especially deep Transformers, and propose to ensure convergence by placing a Lipschitz constraint on parameter initialization. Finally, we investigate how to dynamically determine proper and efficient batch sizes during Transformer training. We find that the gradient direction stabilizes with increasing batch size during gradient accumulation, so we propose to adjust the batch size dynamically by monitoring the change in gradient direction within gradient accumulation and to stop accumulating once the gradient direction starts to fluctuate.

    For the research in this thesis we also implement our own NMT toolkit, Neutron, an implementation of the Transformer and its variants. In addition to the fundamental features underlying the approaches presented in this thesis, it supports many advanced features from recent cutting-edge research, and implementations of all our approaches are included and open-sourced in the toolkit. To compare with previous approaches, we mainly conducted our experiments on data from the WMT 14 English to German (En-De) and English to French (En-Fr) news translation tasks, except when studying the convergence of deep Transformers, where we used the WMT 15 Czech to English (Cs-En) news translation task in place of the WMT 14 En-Fr task in order to compare with Bapna et al. (2018). The sizes of these datasets range from medium (WMT 14 En-De, ~4.5M sentence pairs) to very large (WMT 14 En-Fr, ~36M sentence pairs), so we expect our approaches to help improve translation quality between popular language pairs that are widely used and have sufficient data. Funding: China Scholarship Council.
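    As a rough illustration of the dynamic batch-size idea described above (a sketch under assumptions, not the thesis's Neutron implementation), the following PyTorch snippet keeps accumulating micro-batch gradients while each new micro-batch gradient still points in roughly the same direction as the gradient accumulated so far, and triggers the optimizer step once the direction starts to fluctuate. The cosine threshold and accumulation cap are placeholder hyperparameters.

```python
import torch
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector

def grad_vector(model):
    # Flatten the current .grad buffers into one vector (zeros where grad is None).
    return parameters_to_vector(
        [p.grad if p.grad is not None else torch.zeros_like(p) for p in model.parameters()]
    )

def accumulate_and_step(model, optimizer, loss_fn, micro_batches,
                        cos_threshold=0.9, max_accum=16):
    """micro_batches: iterable of (inputs, targets); threshold and cap are assumptions."""
    optimizer.zero_grad()
    for step, (x, y) in enumerate(micro_batches, start=1):
        accumulated = grad_vector(model)          # gradient accumulated so far
        loss_fn(model(x), y).backward()           # adds this micro-batch's gradient to .grad
        new_grad = grad_vector(model) - accumulated
        if step > 1:
            cos = F.cosine_similarity(accumulated, new_grad, dim=0)
            if cos < cos_threshold or step >= max_accum:
                break                             # direction fluctuates: stop growing the batch
    optimizer.step()
    optimizer.zero_grad()
```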

    Modeling and Recognition of Smart Grid Faults by a Combined Approach of Dissimilarity Learning and One-Class Classification

    Detecting faults in electrical power grids is of paramount importance from both the electricity operator's and the consumer's viewpoints. Modern electric power grids (smart grids) are equipped with smart sensors that make it possible to gather real-time information on the physical status of all the component elements of the infrastructure (e.g., cables and their insulation, transformers, breakers, and so on). In real-world smart grid systems, additional information related to the operational status of the grid itself, such as meteorological information, is usually also collected. Designing a suitable recognition (discrimination) model of faults in a real-world smart grid system is hence a challenging task. This follows, first, from the heterogeneity of the information that actually determines a typical fault condition, and second, from the fact that, for synthesizing a recognition model, in practice only the conditions of observed faults are usually meaningful. Therefore, a suitable recognition model should be synthesized using the observed fault conditions only. In this paper, we deal with the problem of modeling and recognizing faults in a real-world smart grid system, which supplies the entire city of Rome, Italy. Recognition of faults is addressed through a combined approach of customizing multiple dissimilarity measures and applying one-class classification techniques. We provide an in-depth study of the available data and of the models synthesized by the proposed one-class classifier, and we also offer a comprehensive analysis of the fault recognition results by exploiting a fuzzy-set-based reliability decision rule.
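    As a generic illustration of the two ingredients named above, and not the paper's actual model, the sketch below combines per-feature dissimilarities of heterogeneous fault attributes with fixed weights and applies a simple one-class rule trained on fault records only. The feature names, weights, and quantile threshold are all illustrative assumptions.

```python
import numpy as np

def combined_dissimilarity(a, b, weights):
    """a, b: dicts of heterogeneous fault attributes; weights: per-feature importance."""
    d_num = abs(a["cable_age_years"] - b["cable_age_years"]) / 50.0   # scaled numeric gap
    d_wea = abs(a["temperature_c"] - b["temperature_c"]) / 40.0       # meteorological gap
    d_cat = 0.0 if a["component"] == b["component"] else 1.0          # categorical mismatch
    return weights[0] * d_num + weights[1] * d_wea + weights[2] * d_cat

def one_class_threshold(faults, weights, quantile=0.95):
    """Threshold = high quantile of leave-one-out nearest-neighbour dissimilarities
    among the known fault records (the only labeled conditions available)."""
    nn = []
    for i, f in enumerate(faults):
        others = [combined_dissimilarity(f, g, weights)
                  for j, g in enumerate(faults) if j != i]
        nn.append(min(others))
    return float(np.quantile(nn, quantile))

def is_fault_like(sample, faults, weights, threshold):
    return min(combined_dissimilarity(sample, f, weights) for f in faults) <= threshold

faults = [
    {"cable_age_years": 30, "temperature_c": 35, "component": "joint"},
    {"cable_age_years": 28, "temperature_c": 33, "component": "joint"},
    {"cable_age_years": 40, "temperature_c": 31, "component": "termination"},
]
w = (0.5, 0.3, 0.2)
thr = one_class_threshold(faults, w)
print(is_fault_like({"cable_age_years": 29, "temperature_c": 34, "component": "joint"},
                    faults, w, thr))
```

    In this toy setup the dissimilarity weights are fixed by hand; the paper instead customizes the dissimilarity measures and uses a dedicated one-class classifier with a fuzzy-set-based reliability rule.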