
    Convolutional Neural Networks over Tree Structures for Programming Language Processing

    Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also aroused growing interest in the artificial intelligence community. However, unlike a natural language sentence, a program contains rich, explicit, and complicated structural information, so traditional NLP models may be inappropriate for programs. In this paper, we propose a novel tree-based convolutional neural network (TBCNN) for programming language processing, in which a convolution kernel is designed over programs' abstract syntax trees to capture structural information. TBCNN is a generic architecture for programming language processing; our experiments show its effectiveness in two different program analysis tasks: classifying programs according to functionality, and detecting code snippets of certain patterns. TBCNN outperforms baseline methods, including several neural models for NLP. Comment: Accepted at AAAI-1
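    The core operation the abstract describes, a convolution unit applied over an AST node and its children, can be sketched roughly as follows. This is an illustrative simplification, not the paper's exact formulation: the actual TBCNN uses position-dependent child weight matrices, whereas here a single matrix `Wc` is averaged uniformly over the children.

```python
import math

def matvec(W, v):
    """Multiply a small weight matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def tree_conv_unit(node, children, Wn, Wc, b):
    """One convolution unit over an AST node and its direct children:
    y = tanh(Wn @ node + mean_c(Wc @ child) + b).
    The paper's TBCNN uses position-dependent child weights; a single
    averaged Wc is a simplification for illustration."""
    acc = matvec(Wn, node)
    if children:
        for child in children:
            contrib = matvec(Wc, child)
            acc = [a + c / len(children) for a, c in zip(acc, contrib)]
    return [math.tanh(a + bi) for a, bi in zip(acc, b)]
```

    A full network would apply such units over every subtree and pool the results before classification.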

    Statistical Model Evaluation Using Reproducing Kernels and Stein’s Method

    Advances in computing have enabled us to develop increasingly complex statistical models. However, their complexity poses challenges in their evaluation. The central theme of the thesis is addressing intractability and interpretability in model evaluations. The key tools considered in the thesis are kernel and Stein's methods: kernel methods provide flexible means of specifying features for comparing models, and Stein's method further allows us to incorporate model structures in evaluation. The first part of the thesis addresses the question of intractability. The focus is on latent variable models, a large class of models used in practice, including factor models, topic models for text, and hidden Markov models. The kernel Stein discrepancy (KSD), a kernel-based discrepancy, is extended to deal with this model class. Based on this extension, a statistical hypothesis test of relative goodness of fit is developed, enabling us to compare competing latent variable models that are known up to normalization. The second part of the thesis concerns the question of interpretability with two contributed works. First, interpretable relative goodness-of-fit tests are developed using the kernel-based discrepancies proposed in Chwialkowski et al. (2015); Jitkrittum et al. (2016); Jitkrittum et al. (2017). These tests allow the user to choose features for comparison and discover aspects distinguishing two models. Second, a convergence property of the KSD is established. Specifically, the KSD is shown to control an integral probability metric defined by a class of polynomially growing continuous functions. In particular, this development allows us to evaluate both unnormalized statistical models and sample approximations to posterior distributions in terms of moments.
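    For readers unfamiliar with the kernel Stein discrepancy mentioned above, a minimal one-dimensional sketch follows. It uses the standard RBF-kernel Stein kernel and a V-statistic estimate; the bandwidth `h` and the toy model p = N(0, 1) are illustrative choices, not taken from the thesis.

```python
import math
import random

def ksd_squared(xs, score, h=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy in 1-D,
    with RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h^2)).
    Only the score function grad log p is needed, so p may be unnormalized."""
    n = len(xs)
    total = 0.0
    for x in xs:
        for y in xs:
            d = x - y
            k = math.exp(-d * d / (2 * h * h))
            dk_dx = -d / (h * h) * k                      # dk/dx
            dk_dy = d / (h * h) * k                       # dk/dy
            dk_dxdy = (1.0 / (h * h) - d * d / h**4) * k  # d2k/dxdy
            sx, sy = score(x), score(y)
            total += sx * sy * k + sx * dk_dy + sy * dk_dx + dk_dxdy
    return total / (n * n)

# Toy model p = N(0, 1), whose score is grad log p(x) = -x.
score = lambda x: -x
random.seed(0)
good = [random.gauss(0, 1) for _ in range(200)]  # drawn from the model
bad = [random.gauss(2, 1) for _ in range(200)]   # drawn from a shifted model
```

    Samples that actually follow the model yield a much smaller discrepancy than samples from the shifted distribution, which is what a goodness-of-fit test built on the KSD exploits.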

    Stellar Inversion Techniques

    Stellar seismic inversions have proved to be powerful techniques for probing the internal structure of stars and paving the way for a better understanding of the underlying physics by revealing some of the shortcomings in current stellar models. In this lecture, we provide an introduction to this topic by explaining kernel-based inversion techniques. Specifically, we explain how various kernels are obtained from the pulsation equations, and describe inversion techniques such as the Regularised Least-Squares (RLS) and Optimally Localised Averages (OLA) methods. Comment: 20 pages, 8 figures. Lecture presented at the IVth Azores International Advanced School in Space Sciences on "Asteroseismology and Exoplanets: Listening to the Stars and Searching for New Worlds" (arXiv:1709.00645), which took place in Horta, Azores Islands, Portugal in July 201
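    As a toy illustration of the Regularised Least-Squares approach mentioned in the abstract, the sketch below fits a two-parameter structural correction by minimising ||K a - d||² + λ||a||² via the normal equations. The kernel matrix, data vector, and regularisation value are invented for illustration; real inversions involve many modes and discretised integral kernels.

```python
def rls_invert(K, d, lam):
    """Regularised least-squares (RLS) inversion for a two-parameter model:
    minimise ||K a - d||^2 + lam * ||a||^2 by solving the 2x2 normal
    equations (K^T K + lam I) a = K^T d with Cramer's rule."""
    n = len(K)  # number of observed mode frequencies (data points)
    A = [[sum(K[i][r] * K[i][c] for i in range(n)) + (lam if r == c else 0.0)
          for c in range(2)] for r in range(2)]
    rhs = [sum(K[i][r] * d[i] for i in range(n)) for r in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    a0 = (rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det
    a1 = (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det
    return [a0, a1]
```

    With a small λ the solution reproduces the least-squares fit; increasing λ trades fidelity to the data for a smaller, more stable solution, which is the essence of regularisation in ill-posed inversions.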

    Model-based kernel sum rule: kernel Bayesian inference with probabilistic model

    Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes’ rule. However, the current framework is fully nonparametric, and it does not allow a user to flexibly combine nonparametric and model-based inferences. This is inefficient when there are good probabilistic models (or simulation models) available for some parts of a graphical model; this is particularly true in scientific fields where “models” are the central topic of study. Our contribution in this paper is to introduce a novel approach, termed the model-based kernel sum rule (Mb-KSR), to combine a probabilistic model and kernel Bayesian inference. By combining the Mb-KSR with the existing kernelized probabilistic rules, one can develop various algorithms for hybrid (i.e., nonparametric and model-based) inferences. As an illustrative example, we consider Bayesian filtering in a state space model, where typically there exists an accurate probabilistic model for the state transition process. We propose a novel filtering method that combines model-based inference for the state transition process and data-driven, nonparametric inference for the observation generating process. We empirically validate our approach with synthetic and real-data experiments, the latter being the problem of vision-based mobile robot localization in robotics, illustrating the effectiveness of the proposed hybrid approach.
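    The hybrid idea in the filtering example can be sketched in a particle-filter-flavoured form: propagate particles through the known transition model, then reweight them with a nonparametric surrogate for the observation model learned from data. This is an analogue for intuition only, not the paper's kernel-embedding (Mb-KSR) algorithm; the Gaussian smoother, bandwidth, and training-pair format are all invented here.

```python
import math
import random

def filter_step(particles, y_obs, transition, train_xy, h=0.5):
    """One hybrid filtering step: model-based prediction via a known
    state-transition function, then nonparametric reweighting using
    (state, observation) training pairs and a Gaussian kernel.
    Illustrative sketch only; not the paper's Mb-KSR algorithm."""
    # Model-based prediction: push each particle through the transition model.
    pred = [transition(x) for x in particles]
    # Data-driven observation step: predict the expected observation for each
    # particle from nearby training states, then score it against y_obs.
    weights = []
    for x in pred:
        num = den = 0.0
        for xs, ys in train_xy:
            w = math.exp(-(x - xs) ** 2 / (2 * h * h))
            num += w * ys
            den += w
        y_hat = num / den
        weights.append(math.exp(-(y_obs - y_hat) ** 2 / (2 * h * h)))
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample particles in proportion to their observation weights.
    return random.choices(pred, weights=weights, k=len(pred))
```

    Particles whose predicted observations are far from the measurement receive negligible weight and are discarded at the resampling step, so the data-driven observation model steers the model-based prediction.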

    Combining Thesaurus Knowledge and Probabilistic Topic Models

    In this paper we present an approach for introducing thesaurus knowledge into probabilistic topic models. The main idea is based on the assumption that the frequencies of semantically related words and phrases that occur in the same texts should be enhanced, which increases their contribution to the topics found in those texts. We have conducted experiments with several thesauri and found that domain-specific knowledge is useful for improving topic models. If a general thesaurus, such as WordNet, is used, the thesaurus-based improvement of topic models can be achieved by excluding hyponymy relations in combined topic models. Comment: Accepted to AIST-2017 conference (http://aistconf.ru/). The final publication will be available at link.springer.co
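    The frequency-enhancement idea described in the abstract can be sketched as a simple count adjustment: a word's count in a text is boosted in proportion to the counts of its thesaurus neighbours occurring in the same text, before topic inference runs. The `weight` parameter and the dictionary format are hypothetical, not taken from the paper.

```python
def boost_related(counts, related, weight=0.5):
    """Increase the count of a word by `weight` times the counts of its
    thesaurus neighbours present in the same text, so semantically related
    words contribute more to the topics inferred from that text."""
    boosted = dict(counts)
    for word, partners in related.items():
        if word in counts:
            bonus = sum(counts.get(p, 0) for p in partners)
            boosted[word] = counts[word] + weight * bonus
    return boosted
```

    A topic model trained on the boosted counts then sees related words as more strongly co-occurring, nudging them into the same topics.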