
    Integrals of monomials over the orthogonal group

    A recursion formula is derived that allows one to evaluate invariant integrals over the orthogonal group O(N), where the integrand is an arbitrary finite monomial in the matrix elements of the group. The value of such an integral is expressible as a finite sum of partial fractions in N. The recursion formula largely extends the presently available integration formulas for the orthogonal group.
    Comment: 9 pages, no figures
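    For orientation, the lowest-order cases of such invariant integrals are classical and already show the partial-fraction structure in N described in the abstract (these standard values are quoted for context, not taken from the paper; dμ denotes the normalized Haar measure on O(N)):

    ```latex
    \int_{O(N)} O_{11}^{2}\, d\mu = \frac{1}{N}, \qquad
    \int_{O(N)} O_{11}^{4}\, d\mu = \frac{3}{N(N+2)}, \qquad
    \int_{O(N)} O_{11}^{2} O_{12}^{2}\, d\mu = \frac{1}{N(N+2)}.
    ```

    Any monomial in which some matrix element appears an odd number of times integrates to zero by symmetry.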

    A computer algebra user interface manifesto

    Many computer algebra systems have more than 1000 built-in functions, making expertise difficult. Using mock dialog boxes, this article describes a proposed interactive general-purpose wizard for organizing optional transformations and allowing easy fine-grained control over the form of the result, even by amateurs. This wizard integrates ideas including:
    * flexible subexpression selection;
    * complete control over the ordering of variables and commutative operands, with well-chosen defaults;
    * interleaving the choice of successively less main variables with applicable function choices, to provide detailed control without incurring a combinatorial number of applicable alternatives at any one level;
    * quick applicability tests to reduce the listing of inapplicable transformations;
    * using an organizing principle to order the alternatives in a helpful manner;
    * labeling quickly-computed alternatives in dialog boxes with a preview of their results, using ellipsis elisions if necessary or helpful;
    * allowing the user to retreat from a sequence of choices to explore other branches of the tree of alternatives, or to return quickly to branches already visited;
    * allowing the user to accumulate more than one of the alternative forms;
    * integrating direct manipulation into the wizard; and
    * supporting not only the usual input-result pair mode, but also the useful alternative derivational and in situ replacement modes in a unified window.
    Comment: 38 pages, 12 figures; to be published in Communications in Computer Algebra
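    The "quick applicability tests" idea can be sketched in a few lines: before the wizard lists alternatives, each transformation runs a cheap syntactic predicate and is hidden when it cannot apply. The function names and string-based predicates below are purely hypothetical illustrations, not the article's actual design.

    ```python
    # Hypothetical sketch: quick applicability tests prune the menu of
    # transformations before the wizard lists alternatives.
    # The predicates here are deliberately cheap, purely syntactic checks.

    def factor_applicable(expr):
        # only offer "factor" when the expression is (syntactically) a sum
        return "+" in expr

    def expand_applicable(expr):
        # only offer "expand" when there is a product or power to expand
        return "*" in expr or "^" in expr

    TRANSFORMATIONS = {
        "factor": factor_applicable,
        "expand": expand_applicable,
    }

    def menu(expr):
        """Return only the transformations whose quick test passes."""
        return [name for name, test in TRANSFORMATIONS.items() if test(expr)]
    ```

    In a real system the predicates would inspect the expression tree rather than its string form, but the effect is the same: the user never sees transformations that cannot apply.
    
    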

    Advanced Probabilistic Models for Clustering and Projection

    Probabilistic modeling for data mining and machine learning problems is a fundamental research area. The general approach is to assume a generative model underlying the observed data and to estimate model parameters via likelihood maximization. It rests on deep probability theory as its mathematical background and enjoys a large body of methods from statistical learning, sampling theory and Bayesian statistics. In this thesis we study several advanced probabilistic models for data clustering and feature projection, two important unsupervised learning problems.
    The goal of clustering is to group similar data points together to uncover the data clusters. While numerous methods exist for various clustering tasks, one important question remains: how to automatically determine the number of clusters. The first part of the thesis answers this question from a mixture-modeling perspective. A finite mixture model is first introduced for clustering, in which each mixture component is assumed, for generality, to be an exponential family distribution. The model is then extended to an infinite mixture model, and its strong connection to the Dirichlet process (DP), a non-parametric Bayesian framework, is uncovered. A variational Bayesian algorithm called VBDMA is derived from this new insight to learn the number of clusters automatically, and empirical studies on some 2D data sets and an image data set verify the effectiveness of this algorithm.
    In feature projection, we are interested in dimensionality reduction and aim to find a low-dimensional feature representation of the data. We first review the well-known principal component analysis (PCA) and its probabilistic interpretation (PPCA), and then generalize PPCA to a novel probabilistic model for the non-linear projection known as kernel PCA. An expectation-maximization (EM) algorithm is derived for kernel PCA so that it is fast and applicable to large data sets. We then propose a novel supervised projection method called MORP, which takes the output information into account in a supervised learning context. Empirical studies on various data sets show much better results compared to unsupervised projection and other supervised projection methods. Finally we generalize MORP probabilistically to obtain SPPCA for supervised projection, and naturally extend the model to S2PPCA, a semi-supervised projection method. This allows us to incorporate both the label information and the unlabeled data into the projection process.
    In the third part of the thesis, we introduce a unified probabilistic model which handles data clustering and feature projection jointly. The model can be viewed as a clustering model with projected features, and as a projection model with structured documents. A variational Bayesian learning algorithm is derived, and it turns out to iterate the clustering and projection operations until convergence. Superior performance is obtained for both clustering and projection.
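    The likelihood-maximization recipe underlying all of these models is easiest to see in the simplest case the thesis starts from: EM for a finite Gaussian mixture. The following is a minimal, self-contained one-dimensional sketch (it is generic textbook EM, not the thesis's VBDMA algorithm, and the min/max initialization is an assumption for illustration):

    ```python
    # Minimal EM for a 1-D Gaussian mixture: alternate computing soft
    # cluster responsibilities (E-step) and re-estimating means, variances
    # and mixing weights from them (M-step).
    import math

    def em_gmm_1d(xs, k=2, iters=50):
        """Fit a k-component 1-D Gaussian mixture by EM (illustrative sketch)."""
        # crude deterministic initialization: spread the means over the data range
        lo, hi = min(xs), max(xs)
        mus = [lo + (hi - lo) * j / max(k - 1, 1) for j in range(k)]
        sigmas = [1.0] * k
        pis = [1.0 / k] * k
        for _ in range(iters):
            # E-step: responsibility of each component for each point
            # (the 1/sqrt(2*pi) constant cancels in the normalization)
            resp = []
            for x in xs:
                ws = [pis[j] * math.exp(-0.5 * ((x - mus[j]) / sigmas[j]) ** 2)
                      / sigmas[j] for j in range(k)]
                s = sum(ws)
                resp.append([w / s for w in ws])
            # M-step: weighted maximum-likelihood updates
            for j in range(k):
                nj = sum(r[j] for r in resp)
                mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
                var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
                sigmas[j] = max(math.sqrt(var), 1e-3)
                pis[j] = nj / len(xs)
        return mus, sigmas, pis
    ```

    The infinite-mixture and variational extensions described in the abstract replace the fixed k above with a Dirichlet-process prior, which is what lets the number of clusters be learned rather than specified.
    
    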

    Quantum Statistical Field Theory and Combinatorics

    This is a set of review notes on combinatorial aspects of bosonic quantum field theory. We collect together several related issues concerning moments of distributions, moments of stochastic processes and Ito's formula, and Green's functions and cumulant moments in quantum field theory.
    Comment: 50 pages, several figures; extended notes with updated references
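    A central combinatorial fact in this circle of ideas is Wick's (Isserlis') theorem: the 2n-th moment of a standard Gaussian counts the perfect pairings of 2n points, which equals (2n-1)!! = (2n)!/(2^n n!). A short check of this identity, computing both sides independently:

    ```python
    # Wick/Isserlis counting: the 2n-th Gaussian moment equals the number
    # of perfect matchings (pair partitions) of 2n labelled points.
    import math

    def num_pairings(m):
        """Count perfect matchings of m labelled points (0 for odd m)."""
        if m % 2:
            return 0
        if m == 0:
            return 1
        # pair point 1 with any of the other m-1 points, then recurse
        return (m - 1) * num_pairings(m - 2)

    def gaussian_moment(n):
        """E[x^(2n)] for x ~ N(0, 1), via the closed form (2n)!/(2^n n!)."""
        return math.factorial(2 * n) // (2 ** n * math.factorial(n))
    ```

    For example, the fourth moment of a standard Gaussian is 3, matching the three ways to pair four points; the same pairing structure is what organizes Feynman-diagram expansions of Green's functions.
    
    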

    Formalized Class Group Computations and Integral Points on Mordell Elliptic Curves

    Diophantine equations are a popular and active area of research in number theory. In this paper we consider Mordell equations, which are of the form y^2 = x^3 + d, where d is a (given) nonzero integer and all solutions in integers x and y have to be determined. One non-elementary approach to this problem is resolution via descent and class groups. Along these lines, we formalized in Lean 3 the resolution of Mordell equations for several instances of d < 0. In order to achieve this, we needed to formalize several other theories from number theory that are interesting in their own right, such as ideal norms, quadratic fields and rings, and explicit computations of the class number. Moreover, we introduced new computational tactics in order to carry out computations in quadratic rings and beyond efficiently.
    Comment: 14 pages. Submitted to CPP '23. Source code available at https://github.com/lean-forward/class-group-and-mordell-equatio
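    As a concrete companion to the problem statement: integral points on a Mordell curve within a finite window are easy to enumerate by brute force; the value of the descent and class-group machinery formalized in the paper is that it *proves* such a list is complete, which no finite search can. The search bound below is an assumption for illustration only.

    ```python
    # Enumerate integral points on y^2 = x^3 + d with |x| <= x_max.
    # This only finds points in a window; completeness needs descent arguments.
    import math

    def integral_points(d, x_max=1000):
        points = []
        for x in range(-x_max, x_max + 1):
            rhs = x ** 3 + d
            if rhs < 0:
                continue
            y = math.isqrt(rhs)       # integer square root (exact)
            if y * y == rhs:
                points.append((x, y))
                if y:                 # add the mirror point unless y == 0
                    points.append((x, -y))
        return sorted(points)
    ```

    For d = -2 the search returns only (3, 5) and (3, -5), and a classical result going back to Fermat shows these are in fact the only integral solutions.
    
    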

    Physics-inspired Replica Approaches to Computer Science Problems

    We study machine learning classification problems and combinatorial optimization problems using physics-inspired replica approaches. In the current work, we focus on the traveling salesman problem, one of the most famous problems in the entire field of combinatorial optimization. Our approach is specifically motivated by the desire to avoid trapping in metastable local minima, a common occurrence in hard problems with multiple extrema. Our method involves (i) coupling otherwise independent simulations of a system (“replicas”) via geometrical distances, as well as (ii) probabilistic inference applied to the solutions found by individual replicas. In particular, we apply our method to the well-known “k-opt” algorithm and examine two particular cases: k = 2 and k = 3. With the aid of geometrical coupling alone, we are able to determine the optimum tour length on systems of up to 280 cities (an order of magnitude larger than the largest systems typically solved by the bare 3-opt). The probabilistic replica-based inference approach improves k-opt even further and determines the optimal solution of a problem with 318 cities.
    In this work, we also formulate a supervised machine learning algorithm for classification problems called the “Stochastic Replica Voting Machine” (SRVM). The method is based on representations of known data via multiple linear expansions in terms of various stochastic functions. The algorithm is developed, implemented and applied to binary and 3-class classification problems in materials science. Here, we employ SRVM to predict candidate compounds capable of forming a cubic perovskite structure and further classify binary (AB) solids. We demonstrate that our SRVM method exceeds the well-known Support Vector Machine (SVM) in accuracy when predicting the cubic perovskite structure. The algorithm has also been tested on 8 diverse training data sets of various types and feature-space dimensions from the UCI machine learning repository. It has been shown to consistently match or exceed the accuracy of existing algorithms, while avoiding many of their pitfalls.
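    The bare 2-opt move that the replica method builds on is simple: repeatedly reverse a segment of the tour whenever doing so shortens it, until no reversal helps. The sketch below is that plain local search only; the replica coupling and inference layers described in the abstract are omitted.

    ```python
    # Plain 2-opt local search for the traveling salesman problem:
    # reverse tour segments greedily until no reversal shortens the tour.
    import math

    def tour_length(tour, pts):
        """Total length of the closed tour visiting pts in the given order."""
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(tour, pts):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour) + 1):
                    # candidate tour with the segment tour[i:j] reversed
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                        tour, improved = cand, True
        return tour
    ```

    On a unit square, 2-opt immediately uncrosses a crossing tour down to the length-4 perimeter; on large instances it gets stuck in the metastable local minima that motivate coupling many such searches as replicas.
    
    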