59 research outputs found

    E-SNLI: Natural language inference with natural language explanations

    In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time. In this work, we extend the Stanford Natural Language Inference dataset with an additional layer of human-annotated natural language explanations of the entailment relations. We further implement models that incorporate these explanations into their training process and output them at test time. We show how our corpus of explanations, which we call e-SNLI, can be used for various goals, such as obtaining full sentence justifications of a model's decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets. Our dataset thus opens up a range of research directions for using natural language explanations, both for improving models and for asserting their trustworthiness.
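    A minimal sketch of what one such annotated example might look like, as a plain data record (the field names and the example text are illustrative assumptions, not the released e-SNLI schema):

```python
from dataclasses import dataclass

# Illustrative structure for an NLI example paired with a free-text
# human explanation. Field names are assumptions for illustration,
# not the official e-SNLI column names.
@dataclass
class ExplainedNLIExample:
    premise: str
    hypothesis: str
    label: str        # one of: "entailment", "neutral", "contradiction"
    explanation: str  # natural language justification of the label

example = ExplainedNLIExample(
    premise="A man is playing a guitar on stage.",
    hypothesis="A musician is performing.",
    label="entailment",
    explanation="A man playing a guitar on stage is a musician performing.",
)

print(example.label)
```

    A model trained on such records can be supervised both on the label and on generating the explanation string.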

    Murabahah financing in increasing the operational income of PT. Bank Syariah Mandiri KCP Wonocolo Surabaya

    This research on strategies for increasing operational income was conducted out of the researcher's concern about how products are implemented in Islamic banking. Bank Syariah Mandiri KCP Wonocolo Surabaya offers many financing products that support the growth of the bank's operational income. The murabahah product at PT. Bank Syariah Mandiri is the largest contributor to operational income, even though theory does not state that murabahah financing increases operational income. In practice, however, murabahah financing is the largest contributor, which raised the question for the author of whether murabahah financing can truly increase operational income. Such an increase does not happen by itself; other supporting factors may be involved. This research was therefore conducted to examine whether murabahah financing really does increase operational income. The phenomenon was studied scientifically through qualitative research in order to obtain accountable answers. The finding of this research is that murabahah financing increases operational income because the product was indeed designed as a strategy for increasing operational income. The murabahah product is in high demand among customers, so PT. Bank Syariah Mandiri KCP Wonocolo Surabaya has made it its flagship product. Beyond its popularity, another reason it became the flagship product is that murabahah is easier to operate and supervise. PT. Bank Syariah Mandiri KCP Wonocolo Surabaya does not need to involve itself in customers' businesses, on the grounds that the bank does not sell goods or run commercial ventures of its own but provides lending services to customers who need funds. Because murabahah is the flagship product, the bank promotes it differently from its other products, and through this promotion it seeks to steer customers toward productive murabahah financing. After receiving product information, customers should ideally choose a product themselves, guided by education and the assistance of bank officers. Because of the effectiveness of murabahah, PT. Bank Syariah Mandiri KCP Wonocolo Surabaya has made it its flagship product for increasing income.

    Maximum Entropy Markov models for semantic role labelling

    No full text

    Language as a latent variable: Discrete generative models for sentence compression

    No full text
    In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution. We formulate a variational auto-encoder for inference in this model and apply it to the task of compressing sentences. In this application the generative model first draws a latent summary sentence from a background language model, and then subsequently draws the observed sentence conditioned on this latent summary. In our empirical evaluation we show that generative formulations of both abstractive and extractive compression yield state-of-the-art results when trained on a large amount of supervised data. Further, we explore semi-supervised compression scenarios where we show that it is possible to achieve performance competitive with previously proposed supervised models while training on a fraction of the supervised data.
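    The training objective behind such a model is the standard evidence lower bound with a discrete latent. A toy numeric sketch, with the latent "summary" collapsed to a single small categorical variable so the bound can be checked by exact enumeration (all probability values below are made up for illustration):

```python
import numpy as np

# Evidence lower bound for a discrete-latent generative model:
#   log p(x) >= E_{q(z|x)}[ log p(x|z) + log p(z) - log q(z|x) ]
# With a small discrete latent we can enumerate z exactly and verify
# the bound against the true log marginal likelihood.
# All probabilities below are made-up illustrative numbers.

p_z = np.array([0.5, 0.3, 0.2])             # prior over latent summaries
p_x_given_z = np.array([0.10, 0.40, 0.05])  # likelihood of the observed sentence per z
q_z = np.array([0.2, 0.7, 0.1])             # variational posterior q(z | x)

elbo = np.sum(q_z * (np.log(p_x_given_z) + np.log(p_z) - np.log(q_z)))
log_px = np.log(np.sum(p_z * p_x_given_z))  # exact log marginal likelihood

print(elbo, log_px)
```

    In the actual model the latent is a full sentence, so the expectation cannot be enumerated and is instead estimated with samples from the inference network.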

    Stochastic collapsed variational inference for sequential data

    No full text
    Stochastic variational inference for collapsed models has recently been successfully applied to large scale topic modelling. In this paper, we propose a stochastic collapsed variational inference algorithm in the sequential data setting. Our algorithm is applicable to both finite hidden Markov models and hierarchical Dirichlet process hidden Markov models, and to any datasets generated by emission distributions in the exponential family. Our experimental results on two discrete datasets show that our inference is both more efficient and more accurate than its uncollapsed version, stochastic variational inference.
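    The core stochastic-approximation pattern shared by such algorithms can be sketched as follows: global expected sufficient statistics are updated from a rescaled minibatch estimate with a decaying step size. The specific statistics, step-size schedule, and scaling below are illustrative assumptions, not the paper's exact update equations:

```python
import numpy as np

# Generic stochastic variational update: interpolate running global
# statistics toward a minibatch estimate rescaled to corpus size,
# with a Robbins-Monro step size rho_t. Numbers are illustrative.

rng = np.random.default_rng(0)
N = 10_000        # total number of sequences in the corpus
batch_size = 100  # sequences processed per stochastic update
K = 3             # number of hidden states

stats = np.ones((K, K))  # running expected transition counts (uniform init)

for t in range(1, 51):
    rho = (t + 10.0) ** -0.6                  # decaying step size, rho_t -> 0
    batch_stats = rng.random((K, K))          # stand-in for per-batch expected counts
    noisy_estimate = (N / batch_size) * batch_stats  # rescale batch to corpus scale
    stats = (1.0 - rho) * stats + rho * noisy_estimate

print(stats.shape)
```

    In a collapsed model the per-batch expected counts would come from local inference over the minibatch sequences with the other variables marginalised out; here they are random placeholders.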

    Collapsed Variational Bayesian Inference for Hidden Markov Models

    No full text

    Stochastic collapsed variational inference for hidden Markov models

    No full text
    Stochastic variational inference for collapsed models has recently been successfully applied to large scale topic modelling. In this paper, we propose a stochastic collapsed variational inference algorithm for hidden Markov models, in a sequential data setting. Given a collapsed hidden Markov model, we break its long Markov chain into a set of short subchains. We propose a novel sum-product algorithm to update the posteriors of the subchains, taking into account their boundary transitions due to the sequential dependencies. Our experiments on two discrete datasets show that our collapsed algorithm is scalable to very large datasets, memory efficient and significantly more accurate than the existing uncollapsed algorithm.
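    The subchain idea can be illustrated with a generic forward-backward (sum-product) pass that receives messages at both boundaries instead of assuming the subchain starts and ends the full sequence. This is a minimal sketch of how boundary transitions enter the recursions, not the paper's exact update equations:

```python
import numpy as np

def subchain_posteriors(A, likelihoods, left_msg, right_msg):
    """Forward-backward over one short subchain of an HMM.

    A          : (K, K) transitions, A[i, j] = p(z_t = j | z_{t-1} = i)
    likelihoods: (T, K) emission likelihoods p(x_t | z_t) for the subchain
    left_msg   : (K,) message over the state just before the subchain
    right_msg  : (K,) message from the state just after the subchain
    Returns (T, K) marginal posteriors over the subchain's states.
    """
    T, K = likelihoods.shape
    alpha = np.zeros((T, K))
    alpha[0] = (left_msg @ A) * likelihoods[0]   # left boundary transition
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * likelihoods[t]
    beta = np.zeros((T, K))
    beta[-1] = A @ right_msg                     # right boundary transition
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (likelihoods[t + 1] * beta[t + 1])
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
lik = np.array([[0.7, 0.3],
                [0.4, 0.6],
                [0.1, 0.9]])
post = subchain_posteriors(A, lik,
                           left_msg=np.array([0.5, 0.5]),
                           right_msg=np.array([0.5, 0.5]))
print(post)
```

    Breaking the chain this way is what makes minibatching over subchains possible: each subchain only needs its local emissions plus the two boundary messages.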

    The Role of Syntax in Vector Space Models of Compositional Semantics

    No full text