3,140 research outputs found

    A Quantum Many-body Wave Function Inspired Language Modeling Approach

    The recently proposed quantum language model (QLM) aims at a principled approach to modeling term dependency by applying quantum probability theory. The latest development toward a more effective QLM adopts word embeddings as a kind of global dependency information and integrates the quantum-inspired idea into a neural network architecture. While these quantum-inspired LMs are theoretically more general and practically effective, they have two major limitations. First, they do not take into account the interaction among words with multiple meanings, which is common and important in understanding natural language text. Second, the integration of the quantum-inspired LM with the neural network mainly served the effective training of parameters and lacked a theoretical foundation accounting for such integration. To address these two issues, in this paper we propose a Quantum Many-body Wave Function (QMWF) inspired language modeling approach. The QMWF-inspired LM adopts the tensor product to model the aforesaid interaction among words. It also enables us to reveal the inherent necessity of using a Convolutional Neural Network (CNN) in QMWF language modeling. Furthermore, our approach delivers a simple algorithm to represent and match text/sentence pairs. Systematic evaluation shows the effectiveness of the proposed QMWF-LM algorithm, in comparison with state-of-the-art quantum-inspired LMs and a couple of CNN-based methods, on three typical Question Answering (QA) datasets.
    Comment: 10 pages, 4 figures, CIK
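
    As a rough illustration of the tensor-product representation described above, the sketch below builds a many-body-style product state for a short word sequence and scores a pair of sequences by contracting their tensors. It is a minimal sketch, not the paper's model; the vocabulary, embedding size, and scoring rule are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the authors' implementation): each word is a unit-norm state
# vector, and a short word sequence is represented by the tensor (outer) product of
# those vectors, i.e. a many-body-style product state. Embedding size and the toy
# vocabulary below are illustrative assumptions.
rng = np.random.default_rng(0)
dim = 4
vocab = ["quantum", "language", "model", "wave"]
states = {w: (v := rng.normal(size=dim)) / np.linalg.norm(v) for w in vocab}

def sentence_tensor(words):
    """Order-n tensor product of the word state vectors (n = sequence length)."""
    t = states[words[0]]
    for w in words[1:]:
        t = np.tensordot(t, states[w], axes=0)   # outer product grows the order
    return t

# Matching a question/answer pair can then be an inner product of their tensors;
# for these rank-1 product states it factorizes into per-position dot products,
# which is the kind of projection a convolutional layer can realize.
q = sentence_tensor(["quantum", "language", "model"])
a = sentence_tensor(["quantum", "wave", "model"])
score = np.tensordot(q, a, axes=3)               # contract all three word slots
print(round(float(score), 4))
```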

    Enhanced Nonlinear System Identification by Interpolating Low-Rank Tensors

    Function approximation from input and output data is one of the most investigated problems in signal processing. This problem has been tackled with various signal processing and machine learning methods. Although tensors have a rich history across numerous disciplines, tensor-based estimation has only recently become of particular interest in system identification. In this paper we focus on the problem of adaptive nonlinear system identification solved with interpolated tensor methods. We introduce three novel approaches in which we combine existing tensor-based estimation techniques with multidimensional linear interpolation. To keep complexity low, we adhere to the concept in which the algorithms employ a Wiener or Hammerstein structure and the tensors are combined with the well-known LMS algorithm. The update of the tensor is based on a stochastic gradient descent concept. Moreover, an appropriate step-size normalization for the update of the tensors and the LMS supports convergence. Finally, in several experiments we show that the proposed algorithms almost always clearly outperform the state-of-the-art methods at lower or comparable complexity.
    Comment: 12 pages, 4 figures, 3 tables
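
    The sketch below gives a minimal one-dimensional impression of the ingredients named above, not the paper's algorithm: a lookup table with linear interpolation models the static nonlinearity of a Hammerstein system, an FIR filter models the linear dynamics, and both are adapted with normalized LMS-style updates. The grid size, step sizes, and toy tanh target are assumptions.

```python
import numpy as np

# Hedged sketch in the spirit of the abstract, not the paper's algorithm: a 1-D
# lookup table with linear interpolation models the static nonlinearity, an FIR
# filter models the linear dynamics, and both are adapted with normalized
# LMS-style updates. Grid, step sizes, and the toy tanh target are assumptions.
rng = np.random.default_rng(1)
grid = np.linspace(-2.0, 2.0, 17)            # interpolation nodes
table = grid.copy()                          # nonlinearity initialized as identity
h = np.zeros(8); h[0] = 1.0                  # FIR taps, initialized as a delta
mu_t, mu_h, eps = 0.1, 0.1, 1e-6

def interp(x):
    """Node index and linear-interpolation weights for input x."""
    i = int(np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2))
    a = (x - grid[i]) / (grid[i + 1] - grid[i])
    return i, np.array([1.0 - a, a])

x_buf = np.zeros(len(h))
for n in range(20000):
    u = float(np.clip(rng.normal(), -2.0, 2.0))
    d = np.tanh(u)                           # toy target: memoryless tanh system
    i, w = interp(u)
    x_buf = np.roll(x_buf, 1); x_buf[0] = w @ table[i:i + 2]
    e = d - h @ x_buf
    # Normalized stochastic-gradient updates; the table update keeps only the
    # current sample's contribution (through h[0]) for simplicity.
    h += mu_h * e * x_buf / (x_buf @ x_buf + eps)
    table[i:i + 2] += mu_t * e * h[0] * w / (w @ w + eps)

# Rough check: the learned table should follow tanh up to a gain shared with h.
print(np.round(table - np.tanh(grid), 2))
```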

    Language Modeling with Power Low Rank Ensembles

    We present power low rank ensembles (PLRE), a flexible framework for n-gram language modeling in which ensembles of low-rank matrices and tensors are used to obtain smoothed probability estimates of words in context. Our method can be understood as a generalization of n-gram modeling to non-integer n, and includes standard techniques such as absolute discounting and Kneser-Ney smoothing as special cases. PLRE training is efficient, and our approach outperforms state-of-the-art modified Kneser-Ney baselines in terms of perplexity on large corpora as well as in BLEU score on a downstream machine translation task.
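
    The sketch below conveys the general flavor of low-rank n-gram smoothing rather than the PLRE estimator itself (which additionally uses power transforms and discounting): a sparse bigram count matrix is replaced by a rank-k reconstruction and interpolated with a unigram model. The toy corpus, rank, and interpolation weight are assumptions.

```python
import numpy as np

# Hedged sketch of low-rank n-gram smoothing in general, not the exact PLRE
# estimator: replace a sparse bigram count matrix by a low-rank reconstruction,
# then interpolate with the unigram distribution.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

counts = np.zeros((V, V))
for prev, cur in zip(corpus, corpus[1:]):
    counts[idx[prev], idx[cur]] += 1

# Rank-k truncated SVD of the count matrix acts as the low-rank smoother.
k = 2
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
low_rank = np.clip(U[:, :k] * s[:k] @ Vt[:k], 0.0, None)

unigram = counts.sum(axis=0) + 1.0
unigram /= unigram.sum()

def prob(prev, cur, lam=0.7):
    """Interpolated estimate P(cur | prev) from the low-rank bigram and the unigram."""
    row = low_rank[idx[prev]]
    bigram = row[idx[cur]] / row.sum() if row.sum() > 0 else 0.0
    return lam * bigram + (1.0 - lam) * unigram[idx[cur]]

print(round(prob("the", "cat"), 3), round(prob("the", "rug"), 3))
```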

    Rheological Model for Wood

    Wood, as the most important natural and renewable building material, plays an important role in the construction sector. Nevertheless, its hygroscopic character affects essentially all related mechanical properties, leading to degradation of material stiffness and strength over the service life. Accordingly, to attain reliable design of timber structures, the influence of moisture evolution and the role of time- and moisture-dependent behavior have to be taken into account. For this purpose, the current study presents a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive constitutive model for wood, with all material constants defined as functions of moisture content. The corresponding numerical integration approach, with additive decomposition of the total strain, is developed and implemented within the framework of the finite element method (FEM). Moreover, to preserve a quadratic rate of asymptotic convergence, the consistent tangent operator for the whole model is derived. The functionality and capability of the presented material model are evaluated by performing several numerical verification simulations of wood components under different combinations of mechanical loading and moisture variation. Additionally, the flexibility and universality of the introduced model in predicting the mechanical behavior of different species are demonstrated by the analysis of a hybrid wood element. Furthermore, the proposed numerical approach is validated by comparing computational evaluations with experimental results.
    Comment: 37 pages, 13 figures, 10 tables
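
    For orientation, a common form of the additive strain decomposition mentioned above is sketched below; the exact set of terms and the notation used in the paper may differ.

```latex
% Hedged sketch: a common form of the additive strain split for such wood models;
% the exact set of terms and the notation in the paper may differ.
\[
  \varepsilon_{ij}
    = \varepsilon_{ij}^{\mathrm{el}}   % elastic (moisture-dependent orthotropic stiffness)
    + \varepsilon_{ij}^{\mathrm{pl}}   % plastic
    + \varepsilon_{ij}^{\mathrm{ve}}   % visco-elastic (creep)
    + \varepsilon_{ij}^{\mathrm{ms}}   % mechano-sorptive
    + \varepsilon_{ij}^{\omega}        % moisture-induced swelling/shrinkage
\]
```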

    Learning neural trans-dimensional random field language models with noise-contrastive estimation

    Trans-dimensional random field language models (TRF LMs), in which sentences are modeled as a collection of random fields, have shown performance close to that of LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs to large training corpora. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend neural TRF LMs by marrying a deep convolutional neural network (CNN) and a bidirectional LSTM into the potential function to extract deep hierarchical features and bidirectional sequential features. Utilizing all the above techniques enables successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 of the training time, and further reduces the WER by a relative 4.7% on top of a strong LSTM LM baseline.
    Comment: 5 pages and 2 figures
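
    In standard notation (the symbols below are assumptions, not taken from the paper), the exponential-tilting form and the NCE objective referred to above can be sketched as follows: NCE discriminates data from noise drawn from p_n with ratio \nu, which allows the normalization constant to be treated as a trainable parameter.

```latex
% Hedged sketch in standard notation (symbols are assumptions, not copied from the
% paper): a TRF written as an exponential tilting of a reference distribution q,
% and the NCE objective with noise distribution p_n and noise ratio \nu.
\[
  p_\theta(x) = \frac{1}{Z_\theta}\, q(x)\, \exp\big(\phi(x;\theta)\big),
  \qquad
  P(D{=}1 \mid x) = \frac{p_\theta(x)}{p_\theta(x) + \nu\, p_n(x)},
\]
\[
  J(\theta) = \mathbb{E}_{x \sim \text{data}}\big[\log P(D{=}1 \mid x)\big]
            + \nu\, \mathbb{E}_{x \sim p_n}\big[\log P(D{=}0 \mid x)\big].
\]
```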

    Prediction and Tracking of Moving Objects in Image Sequences

    We employ a prediction model, derived from Bayesian theory, for estimating the velocity and location of moving objects. The optical flow of a given moving object depends on the history of its previous values. A joint optical flow estimation and moving object segmentation algorithm is used to initialize the tracking algorithm. The segmentation of the moving objects is determined by appropriately classifying the unlabeled and the occluding regions. Segmentation and optical flow tracking are then used to predict future frames.
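
    As a concrete (though not necessarily the authors') instance of such a Bayesian predictor, the sketch below runs a constant-velocity Kalman filter over an object's centroid, predicting the next location from past estimates; the noise covariances are made-up values.

```python
import numpy as np

# Hedged sketch: one common Bayesian predictor for object location, a constant-
# velocity Kalman filter over the centroid. This is an illustrative instance,
# not the paper's model; the noise levels are assumptions.
dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise (assumption)
R = 1.0 * np.eye(2)              # measurement noise (assumption)

x = np.zeros(4)                  # initial state
P = 10.0 * np.eye(4)             # initial uncertainty

def step(z):
    """One predict/update cycle given a measured centroid z = (x, y)."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                    # predict next location
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (np.asarray(z, float) - H @ x)       # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                                     # filtered centroid

for t in range(5):
    print(np.round(step((2.0 * t, 1.0 * t)), 2))     # tracks a linearly moving object
```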