
    Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation

    We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating it on the One Billion Word Benchmark shows that SNM n-gram LMs perform almost as well as the well-established Kneser-Ney (KN) models. When using skip-gram features the models are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength, promising an approach that has the same flexibility in combining arbitrary features effectively and yet should scale to very large amounts of data as gracefully as n-gram LMs do.
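    As a concrete illustration of the feature side, below is a minimal Python sketch in which skip-gram features are drawn (with gaps allowed) from a short context window, and an SNM-style probability is computed as a normalized sum of non-negative feature weights. The feature encoding, the `weights` table, and the toy normalization over the whole vocabulary are assumptions for illustration, not the paper's estimation procedure.

```python
from itertools import combinations

def skip_gram_features(context, max_span=4):
    """Enumerate skip-gram features from the trailing `max_span` words of
    a context: every non-empty subset of (position, word) pairs, so gaps
    ("skips") are allowed. The encoding is illustrative, not the paper's."""
    window = list(enumerate(context[-max_span:]))
    feats = []
    for r in range(1, len(window) + 1):
        feats.extend(combinations(window, r))
    return feats

def snm_probability(word, context, weights, vocab):
    """Toy SNM-style estimate: P(word | context) is proportional to the sum
    of non-negative weights of the active (feature, word) pairs, normalized
    over the vocabulary. `weights` is an assumed pre-trained sparse table."""
    feats = skip_gram_features(context)
    score = lambda w: sum(weights.get((f, w), 0.0) for f in feats)
    denom = sum(score(w) for w in vocab) or 1.0
    return score(word) / denom
```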

    Dual Rectified Linear Units (DReLUs): A Replacement for Tanh Activation Functions in Quasi-Recurrent Neural Networks

    In this paper, we introduce a novel type of Rectified Linear Unit (ReLU), called a Dual Rectified Linear Unit (DReLU). A DReLU, which comes with an unbounded positive and negative image, can be used as a drop-in replacement for a tanh activation function in the recurrent step of Quasi-Recurrent Neural Networks (QRNNs) (Bradbury et al. (2017)). Similar to ReLUs, DReLUs are less prone to the vanishing gradient problem, they are noise robust, and they induce sparse activations. We independently reproduce the QRNN experiments of Bradbury et al. (2017) and compare our DReLU-based QRNNs with the original tanh-based QRNNs and Long Short-Term Memory networks (LSTMs) on sentiment classification and word-level language modeling. Additionally, we evaluate on character-level language modeling, showing that we are able to stack up to eight QRNN layers with DReLUs, thus making it possible to improve the current state-of-the-art in character-level language modeling over shallow architectures based on LSTMs.
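    A minimal sketch of the activation itself, assuming the two-stream formulation DReLU(a, b) = max(a, 0) - max(b, 0), i.e. the difference of two ReLUs applied to two separate pre-activations; the toy weights and the recurrent-candidate usage below are illustrative assumptions, not the paper's exact QRNN cell.

```python
import numpy as np

def drelu(a, b):
    """Dual Rectified Linear Unit: difference of two ReLUs on two separate
    pre-activations, so the image is unbounded both above and below
    (a single ReLU is bounded below by 0; tanh is bounded on both sides)."""
    return np.maximum(a, 0.0) - np.maximum(b, 0.0)

# Illustrative use as a tanh replacement for a recurrent candidate:
# compute two pre-activation streams and combine them with drelu.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)           # toy input vector
W1 = rng.standard_normal((4, 8))     # assumed candidate weights, stream 1
W2 = rng.standard_normal((4, 8))     # assumed candidate weights, stream 2
c = drelu(W1 @ x, W2 @ x)            # sparse, unbounded in both directions
```

    Like a ReLU, a unit is exactly zero whenever both pre-activation streams are non-positive, which is where the sparse activations come from.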

    Enhancing Domain Word Embedding via Latent Semantic Imputation

    We present a novel method named Latent Semantic Imputation (LSI) to transfer external knowledge into semantic space for enhancing word embedding. The method integrates graph theory to extract the latent manifold structure of the entities in the affinity space and leverages non-negative least squares with standard simplex constraints and the power iteration method to derive spectral embeddings. It provides an effective and efficient approach to combining entity representations defined in different Euclidean spaces. Specifically, our approach generates and imputes reliable embedding vectors for low-frequency words in the semantic space and benefits downstream language tasks that depend on word embedding. We conduct comprehensive experiments on a carefully designed classification problem and language modeling and demonstrate the superiority of the enhanced embedding via LSI over several well-known benchmark embeddings. We also confirm the consistency of the results under different parameter settings of our method.
    Comment: ACM SIGKDD 2019
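    A rough Python sketch of the two ingredients named above, under stated assumptions: non-negative least squares (here SciPy's nnls) with a renormalization onto the standard simplex to get per-word neighbor weights, and a power-iteration-style propagation that imputes the unknown embedding rows while holding known rows fixed. The function names, the fixed iteration count, and the renormalization step are illustrative, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def simplex_weights(x, neighbors):
    """Weights w >= 0 with sum(w) = 1 reconstructing x from its neighbors:
    solve min ||neighbors.T @ w - x|| with w >= 0 (NNLS), then renormalize
    onto the standard simplex. `neighbors` is a (k, d) matrix of k vectors."""
    w, _ = nnls(neighbors.T, x)
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))

def impute_embeddings(E, known_mask, W, iters=100):
    """Power-iteration-style propagation: repeatedly replace each unknown
    row of the embedding matrix E (shape (n, d)) by a W-weighted average
    of all rows, where W is the (n, n) row-stochastic affinity assembled
    from simplex_weights, keeping the known rows fixed."""
    E = E.copy()
    for _ in range(iters):
        prop = W @ E
        E[~known_mask] = prop[~known_mask]
    return E
```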

    Finite Boolean Algebras for Solid Geometry using Julia's Sparse Arrays

    The goal of this paper is to introduce a new method in computer-aided geometry of solid modeling. We put forth a novel algebraic technique to evaluate any variadic expression between polyhedral d-solids (d = 2, 3) with regularized operators of union, intersection, and difference, i.e., any CSG tree. The result is obtained in three steps: first, by computing an independent set of generators for the d-space partition induced by the input; then, by reducing the solid expression to an equivalent logical formula between Boolean terms made by zeros and ones; and, finally, by evaluating this expression using bitwise operators. This method is implemented in Julia using sparse arrays. The computational evaluation of every possible solid expression, usually denoted as CSG (Constructive Solid Geometry), is reduced to an equivalent logical expression of a finite set algebra over the cells of a space partition, and solved by native bitwise operators.
    Comment: revised version submitted to Computer-Aided Geometric Design
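    The final bitwise step is easy to illustrate: once an independent set of generators fixes the cells of the space partition, each cell becomes a bit position, every solid becomes a bitmask over those cells, and the regularized union, intersection, and difference reduce to OR, AND, and AND-NOT. The toy below uses plain Python integers instead of Julia's sparse arrays, and the nested-tuple CSG encoding is a hypothetical stand-in.

```python
# Cells of the space partition are bit positions; a solid is the set of
# cells it covers, encoded as an integer bitmask.
A = 0b001111            # a solid covering cells 0..3
B = 0b111100            # a solid covering cells 2..5

union        = A | B    # 0b111111
intersection = A & B    # 0b001100
difference   = A & ~B   # 0b000011 (within this 6-cell universe)

def evaluate_csg(expr, atoms):
    """Evaluate a CSG tree given as nested tuples, e.g.
    ('-', ('+', 'A', 'B'), ('&', 'A', 'B')); string leaves name atoms."""
    if isinstance(expr, str):
        return atoms[expr]
    op, lhs, rhs = expr
    l, r = evaluate_csg(lhs, atoms), evaluate_csg(rhs, atoms)
    return {'+': l | r, '&': l & r, '-': l & ~r}[op]

print(bin(evaluate_csg(('-', ('+', 'A', 'B'), ('&', 'A', 'B')),
                       {'A': A, 'B': B})))  # symmetric difference: 0b110011
```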