
    Transportation-cost inequalities for diffusions driven by Gaussian processes

    We prove transportation-cost inequalities for the laws of solutions to SDEs driven by general Gaussian processes. Examples include fractional Brownian motion, but also more general processes such as bifractional Brownian motion. In the case of multiplicative noise, our main tool is Lyons' rough paths theory. We also give a new proof of Talagrand's transportation-cost inequality on Gaussian Fréchet spaces. Finally, we show that establishing a transportation-cost inequality yields an easy criterion for proving Gaussian tail estimates for functions defined on that space. This result can be seen as a further generalization of the "generalized Fernique theorem" on Gaussian spaces [Friz-Hairer 2014, Theorem 11.7] used in rough paths theory.
    Comment: The paper was completely revised. In particular, we gave a new proof for Theorem 1.
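
    For orientation, the quadratic transportation-cost inequality referred to above is usually stated in the following Talagrand-type form; this is the standard textbook formulation, not the paper's precise hypotheses or constants.

```latex
% Standard quadratic transportation-cost inequality T_2(C) (Talagrand-type),
% shown for orientation only; the paper's exact constants and hypotheses may differ.
% A probability measure \mu on a metric space (E,d) satisfies T_2(C) if, for every
% probability measure \nu absolutely continuous with respect to \mu,
\[
  W_2(\nu,\mu) \;\le\; \sqrt{2\,C\,H(\nu\,\|\,\mu)},
\]
% where W_2 is the L^2-Wasserstein distance induced by d and
% H(\nu\|\mu) = \int \log\!\big(\tfrac{d\nu}{d\mu}\big)\,d\nu is the relative entropy.
% Classically, such an inequality implies Gaussian tail estimates for Lipschitz
% functions of \mu, which is the link exploited in the abstract above.
```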

    A simple proof of distance bounds for Gaussian rough paths

    We derive explicit distance bounds for Stratonovich iterated integrals along two Gaussian processes (also known as signatures of Gaussian rough paths) based on regularity assumptions on their covariance functions. Similar estimates have been obtained recently in [Friz-Riedel, AIHP, to appear]. One advantage of our argument is that we obtain the bound for the third-level iterated integrals merely from the first two levels, which reflects the intrinsic nature of rough paths. Our estimates are sharp when both covariance functions have finite 1-variation, which includes a large class of Gaussian processes. Two applications of our estimates are discussed. The first gives the a.s. convergence rates for approximate solutions to rough differential equations driven by Gaussian processes. In the second, we show how to recover the optimal time regularity for solutions of some rough SPDEs.
    Comment: 20 pages, updated abstract and introduction
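
    As a reminder of the objects involved (standard notation, not necessarily the paper's), the signature levels are the iterated Stratonovich integrals of the path:

```latex
% Level-n signature term of a path X over [s,t] (standard definition):
\[
  \mathbf{X}^{n}_{s,t}
  = \int_{s<u_1<\cdots<u_n<t}
      \circ\,dX_{u_1}\otimes\cdots\otimes\circ\,dX_{u_n},
  \qquad n = 1,2,3,\ldots
\]
% The estimate discussed above bounds the level-3 term using only the
% level-1 and level-2 terms of the two Gaussian rough paths.
```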

    End-to-End Differentiable Proving

    We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.
    Comment: NIPS 2017 camera-ready
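
    A minimal sketch of the kernel-based soft unification described above, in Python with toy embeddings; the exact kernel form, parameterization, and symbol representations used in the paper may differ.

```python
import numpy as np

def rbf_unification_score(u: np.ndarray, v: np.ndarray, mu: float = 1.0) -> float:
    """Soft 'unification' of two symbols via an RBF kernel on their embeddings.

    The score is 1 when the embeddings coincide and decays smoothly with
    distance, so it is differentiable and trainable by gradient descent.
    (One common RBF form; the paper's exact kernel may differ.)
    """
    return float(np.exp(-mu * np.sum((u - v) ** 2)))

# Toy symbol embeddings, purely illustrative (not learned, not from the paper).
emb = {
    "grandpaOf": np.array([0.9, 0.1]),
    "grandfatherOf": np.array([0.85, 0.15]),  # similar relation -> nearby vector
    "bornIn": np.array([-0.7, 0.6]),          # unrelated relation -> far away
}

print(rbf_unification_score(emb["grandpaOf"], emb["grandfatherOf"]))  # near 1
print(rbf_unification_score(emb["grandpaOf"], emb["bornIn"]))         # near 0
```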

    Wronging a Right: Generating Better Errors to Improve Grammatical Error Detection

    Grammatical error correction, like other machine learning tasks, greatly benefits from large quantities of high-quality training data, which is typically expensive to produce. While writing a program to automatically generate realistic grammatical errors would be difficult, one can learn the distribution of naturally occurring errors and attempt to introduce them into other datasets. Initial work on inducing errors in this way using statistical machine translation has shown promise; we investigate cheaply constructing synthetic samples, given a small corpus of human-annotated data, using an off-the-shelf attentive sequence-to-sequence model and a straightforward post-processing procedure. Our approach yields error-filled artificial data that helps a vanilla bidirectional LSTM to outperform the previous state of the art at grammatical error detection, and helps a previously introduced model gain further improvements of over 5% in F_{0.5} score. When attempting to determine whether a given sentence is synthetic, a human annotator at best achieves a 39.39 F_1 score, indicating that our model generates mostly human-like instances.
    Comment: Accepted as a short paper at EMNLP 2018
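
    The F_{0.5} and F_1 figures above are instances of the F_beta score; a small self-contained sketch of that metric follows (illustrative numbers only, not results from the paper).

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F_beta score; beta < 1 weights precision more heavily than recall.

    F_{0.5} is the usual metric for grammatical error detection, where false
    alarms are considered costlier than missed errors.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative numbers only (not results from the paper):
print(round(f_beta(0.60, 0.40, beta=0.5), 3))  # precision-weighted F_{0.5}
print(round(f_beta(0.60, 0.40, beta=1.0), 3))  # ordinary F_1 for comparison
```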