
    SU(3) Landau gauge gluon and ghost propagators using the logarithmic lattice gluon field definition

    We study the Landau gauge gluon and ghost propagators of SU(3) gauge theory, employing the logarithmic definition for the lattice gluon fields and implementing the corresponding form of the Faddeev-Popov matrix. This is necessary for a consistent comparison of lattice data for the bare propagators with that of higher-loop numerical stochastic perturbation theory (NSPT). In this paper we provide such a comparison and introduce what is needed for an efficient lattice study. Comparing our data for the logarithmic definition with that of the standard lattice Landau gauge, we clearly see that the propagators are multiplicatively related, and the data for the associated ghost-gluon coupling match almost completely. For the explored lattice spacings and sizes, discretization artifacts as well as finite-size and Gribov-copy effects are small. At weak coupling and large momentum, the bare propagators and the ghost-gluon coupling approach those of higher-order NSPT.
    Comment: 18 pages, 19 figures, 5 tables
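    For orientation, the two gluon field definitions being compared can be written schematically as follows; this is the standard relation from the lattice literature (conventions for midpoint arguments and overall signs vary), not a formula quoted from the paper:

        a g_0 A_\mu^{\mathrm{lin}}(x) \;=\; \frac{1}{2i}\Big[U_\mu(x) - U_\mu^\dagger(x)\Big]_{\mathrm{traceless}},
        \qquad
        a g_0 A_\mu^{\mathrm{log}}(x) \;=\; \frac{1}{i}\,\log U_\mu(x),

    with the matrix logarithm taken on its principal branch, so that the logarithmic field lies exactly in the su(3) algebra. The two definitions agree at leading order in the lattice spacing a, which is consistent with the multiplicative relation between the propagators reported above.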

    Lectures on Computational Numerical Analysis of Partial Differential Equations

    From Chapter 1: The purpose of these lectures is to present a set of straightforward numerical methods with applicability to essentially any problem associated with a partial differential equation (PDE) or system of PDEs, independent of type, spatial dimension, or form of nonlinearity.
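    As a taste of the kind of straightforward method the lectures have in mind, here is a minimal explicit finite-difference march for the 1-D heat equation u_t = alpha * u_xx; the scheme and its stability bound are textbook-standard, and all names below are illustrative rather than taken from the text:

        import numpy as np

        def heat_explicit(u0, alpha, dx, dt, steps):
            """Forward-Euler in time, centered second difference in space.
            Stable only if alpha*dt/dx**2 <= 0.5 (standard restriction)."""
            r = alpha * dt / dx**2
            assert r <= 0.5, "explicit scheme unstable for this dt/dx pair"
            u = u0.copy()
            for _ in range(steps):
                # interior update: u_i += r * (u_{i+1} - 2 u_i + u_{i-1})
                u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
                # homogeneous Dirichlet boundaries stay fixed
            return u

        # usage: diffuse an initial Gaussian spike on [0, 1]
        x = np.linspace(0.0, 1.0, 101)
        u0 = np.exp(-200.0 * (x - 0.5) ** 2)
        u = heat_explicit(u0, alpha=1.0, dx=x[1] - x[0], dt=4e-5, steps=500)

    Here alpha*dt/dx**2 = 0.4, safely inside the stability region; implicit schemes relax this restriction at the cost of a linear solve per step.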

    Relaxation-Based Coarsening for Multilevel Hypergraph Partitioning

    Multilevel partitioning methods that are inspired by principles of multiscaling are the most powerful practical hypergraph partitioning solvers. Hypergraph partitioning has many applications in disciplines ranging from scientific computing to data science. In this paper we introduce the concept of algebraic distance on hypergraphs and demonstrate its use as an algorithmic component in the coarsening stage of multilevel hypergraph partitioning solvers. The algebraic distance is a vertex distance measure that extends hyperedge weights to capture the local connectivity of vertices, which is critical for hypergraph coarsening schemes. The practical effectiveness of the proposed measure and the corresponding coarsening scheme is demonstrated through extensive computational experiments on a diverse set of problems. Finally, we propose a benchmark of hypergraph partitioning problems for comparing the quality of partitioning solvers.
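    To make the idea concrete, here is a hedged sketch of a relaxation-based algebraic distance on a hypergraph. The relaxation rule below (each vertex moves toward the mean of the means of its incident hyperedges) is one simple stand-in for the paper's Jacobi-style iteration, and all names are illustrative:

        import numpy as np

        def algebraic_distance(hyperedges, n, n_vectors=5, sweeps=20, omega=0.5, seed=0):
            """hyperedges: list of vertex-index lists; returns a distance function."""
            rng = np.random.default_rng(seed)
            # incidence: for each vertex, the hyperedges containing it
            incident = [[] for _ in range(n)]
            for e, verts in enumerate(hyperedges):
                for v in verts:
                    incident[v].append(e)
            X = rng.uniform(-1.0, 1.0, size=(n_vectors, n))  # random test vectors
            for _ in range(sweeps):
                edge_means = np.array([[x[verts].mean() for verts in hyperedges] for x in X])
                for k in range(n_vectors):
                    target = np.array([edge_means[k][incident[v]].mean() if incident[v]
                                       else X[k, v] for v in range(n)])
                    X[k] = (1.0 - omega) * X[k] + omega * target
                # recenter and rescale so the vectors do not collapse to a constant
                X -= X.mean(axis=1, keepdims=True)
                X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
            def rho(u, v):
                # distance: worst-case gap over the relaxed test vectors
                return np.abs(X[:, u] - X[:, v]).max()
            return rho

        # vertices sharing many hyperedges end up algebraically close
        dist = algebraic_distance([[0, 1, 2], [1, 2, 3], [4, 5]], n=6)
        print(dist(1, 2), dist(0, 5))

    A coarsening scheme would then prefer to merge vertex pairs with small rho, exactly where plain hyperedge weights under-resolve local connectivity.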

    Support Vector Machines in R

    Support vector machines are among the most popular and efficient classification and regression methods currently available, and implementations exist in almost every popular programming language. Currently, four R packages contain SVM-related software. The purpose of this paper is to present and compare these implementations.
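    The abstract does not name the four packages, so we leave them unspecified. As a language-neutral illustration of the interface such implementations typically expose, here is a minimal scikit-learn (Python) analogue; it is an assumption for illustration, not one of the R packages under comparison:

        from sklearn import datasets
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # typical SVM workflow: fit an RBF-kernel classifier, measure accuracy
        X, y = datasets.load_iris(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # cost and kernel knobs
        clf.fit(X_tr, y_tr)                            # analogous across packages
        print("test accuracy:", clf.score(X_te, y_te))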

    Nonparallel support vector machines for pattern classification

    We propose a novel nonparallel classifier, called the nonparallel support vector machine (NPSVM), for binary classification. Our NPSVM is fundamentally different from existing nonparallel classifiers, such as the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM), and has several distinct advantages: 1) the two primal problems are constructed to implement the structural risk minimization principle; 2) the dual problems of these two primal problems have the same advantages as those of the standard SVM, so the kernel trick can be applied directly, whereas existing TWSVMs have to construct another two primal problems for nonlinear cases based on approximate kernel-generated surfaces, and their nonlinear problems do not degenerate to the linear case even when the linear kernel is used; 3) the dual problems have the same elegant formulation as those of the standard SVM and can be solved efficiently by a sequential minimal optimization (SMO) algorithm, whereas the existing GEPSVM and TWSVMs are not suitable for large-scale problems; 4) the NPSVM has the same inherent sparseness as the standard SVM; 5) existing TWSVMs are special cases of the NPSVM when its parameters are chosen appropriately. Experimental results on a large number of datasets demonstrate the effectiveness of our method in both sparseness and classification accuracy, further confirming the conclusions above. In this sense, the NPSVM is a new starting point for nonparallel classifiers.
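    For concreteness, here is a plausible reconstruction of the first of the two primal problems (the one attached to the positive-class hyperplane), combining an epsilon-insensitive loss on the class's own points with a hinge loss pushing the other class beyond unit distance. This is our paraphrase of the published NPSVM formulation, not text from this abstract, and the symbols are ours:

        \min_{w_+,\,b_+,\,\xi,\,\xi^*,\,\eta}\;
            \tfrac{1}{2}\lVert w_+\rVert^2
            + C_1 \sum_{i \in \mathcal{I}_+} (\xi_i + \xi_i^*)
            + C_2 \sum_{j \in \mathcal{I}_-} \eta_j
        \quad \text{s.t.}\quad
            w_+^\top x_i + b_+ \le \varepsilon + \xi_i,\quad
            -(w_+^\top x_i + b_+) \le \varepsilon + \xi_i^*,\quad i \in \mathcal{I}_+,
        \qquad
            -(w_+^\top x_j + b_+) \ge 1 - \eta_j,\quad j \in \mathcal{I}_-,
        \qquad
            \xi_i,\ \xi_i^*,\ \eta_j \ge 0.

    The second primal problem swaps the roles of the two classes. Both duals then take the standard-SVM QP shape, which is what makes the kernel trick and SMO directly applicable, in contrast to TWSVM's kernel-generated surfaces.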

    Graph coarsening: From scientific computing to machine learning

    The general method of graph coarsening or graph reduction has been a remarkably useful and ubiquitous tool in scientific computing, and it is now starting to have a similar impact in machine learning. The goal of this paper is to take a broad look at coarsening techniques that have been successfully deployed in scientific computing and to see how similar principles are finding their way into more recent applications related to machine learning. In scientific computing, coarsening plays a central role in algebraic multigrid methods as well as in the related class of multilevel incomplete LU factorizations. In machine learning, graph coarsening goes under various names, e.g., graph downsampling or graph reduction. Its goal in most cases is to replace the original graph by one with fewer nodes whose structure and characteristics are similar to those of the original graph. As will be seen, a common strategy in these methods is to rely on spectral properties to define the coarse graph.
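    As a concrete instance of the "rely on spectral properties" strategy, here is a toy sketch (our illustration, with hypothetical names) that contracts a matching of neighboring vertices whose Fiedler-vector values are close, so the coarse graph preserves the slowly varying modes of the Laplacian:

        import numpy as np

        def spectral_pair_coarsen(A):
            """Toy spectral coarsening of a dense symmetric adjacency matrix A:
            merge neighbor pairs with the closest Fiedler-vector entries."""
            n = A.shape[0]
            L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
            _, vecs = np.linalg.eigh(L)
            fiedler = vecs[:, 1]                      # second-smallest eigenvector
            matched, parent = np.zeros(n, bool), np.arange(n)
            for u in np.argsort(fiedler):             # visit in spectral order
                if matched[u]:
                    continue
                nbrs = [v for v in np.flatnonzero(A[u]) if not matched[v] and v != u]
                if nbrs:
                    v = min(nbrs, key=lambda w: abs(fiedler[w] - fiedler[u]))
                    matched[u] = matched[v] = True
                    parent[v] = u                     # contract v into u
            # coarse adjacency: sum edge weights between merged groups
            labels = {r: i for i, r in enumerate(sorted(set(parent)))}
            P = np.zeros((n, len(labels)))
            for v in range(n):
                P[v, labels[parent[v]]] = 1.0
            Ac = P.T @ A @ P
            np.fill_diagonal(Ac, 0.0)                 # drop self-loops for simplicity
            return Ac

        # usage: coarsen a 6-cycle down to 3 supernodes
        A = np.zeros((6, 6))
        for i in range(6):
            A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
        print(spectral_pair_coarsen(A))

    Production coarseners in multigrid and graph learning are considerably more careful (weights, multiple eigenvectors, restriction operators), but the spectral-similarity criterion above is the common thread the survey highlights.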