13 research outputs found

    Information Splitting for Big Data Analytics

    Full text link
    Many statistical models require the estimation of unknown (co)variance parameters. The estimate is usually obtained by maximizing a log-likelihood that involves log-determinant terms. In principle, the Newton method requires the observed information, i.e. the negative Hessian matrix (the second derivative of the log-likelihood), to obtain an accurate maximum likelihood estimator. When one uses the Fisher information, the expected value of the observed information, a simpler algorithm than the Newton method is obtained: the Fisher scoring algorithm. With the advance of high-throughput technologies in the biological sciences, recommendation systems, and social networks, the sizes of data sets, and of the corresponding statistical models, have grown by several orders of magnitude. Neither the observed information nor the Fisher information is easy to obtain for these big data sets. This paper introduces an information splitting technique to simplify the computation. By splitting the mean of the observed information and the Fisher information, a simpler approximate Hessian matrix for the log-likelihood can be obtained. This approximate Hessian significantly reduces the computation and makes the linear mixed model applicable to big data sets. The splitting and the resulting simpler formulas rely heavily on matrix algebra transforms and are applicable to large-scale breeding models and genome-wide association analysis. (arXiv admin note: text overlap with arXiv:1605.0764)
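    A minimal sketch of the splitting idea, stated in standard average-information (AI) REML notation rather than the paper's own (the covariance model V(θ) = Σ_i θ_i V_i, the design matrix X, and the projection matrix P = V⁻¹ − V⁻¹X(XᵀV⁻¹X)⁻¹XᵀV⁻¹ are assumptions, not taken from the abstract): writing the observed information as I_O(θ) = −∂²ℓ/∂θ∂θᵀ and the Fisher information as I_E(θ) = E[I_O(θ)], their mean cancels the expensive trace terms and leaves a data-only quadratic form that serves as the approximate Hessian in a Newton-type update. The paper's exact splitting may differ in detail.

        \mathcal{I}_A(\theta) = \tfrac{1}{2}\bigl(\mathcal{I}_O(\theta) + \mathcal{I}_E(\theta)\bigr),
        \qquad
        \bigl[\mathcal{I}_A(\theta)\bigr]_{ij} = \tfrac{1}{2}\, y^{\top} P V_i P V_j P\, y,
        \qquad
        \theta^{(k+1)} = \theta^{(k)} + \mathcal{I}_A\bigl(\theta^{(k)}\bigr)^{-1} \nabla \ell\bigl(\theta^{(k)}\bigr).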

    AutoAMG(θ): An Auto-tuned AMG Method Based on Deep Learning for Strong Threshold

    Full text link
    Algebraic Multigrid (AMG) is one of the most widely used iterative algorithms for solving large sparse linear equations Ax = b. In AMG, the coarse grid is a key component that affects the efficiency of the algorithm, and its construction relies on the strong threshold parameter θ. This parameter is generally chosen empirically, with a default value in many current AMG solvers of 0.25 for 2D problems and 0.5 for 3D problems. However, for many practical problems, the quality of the coarse grid and the efficiency of the AMG algorithm are sensitive to θ; the default value is rarely optimal, and is sometimes far from it. How to choose a better θ is therefore an important question. In this paper, we propose a deep learning based auto-tuning method, AutoAMG(θ), for multiscale sparse linear equations, which arise widely in practical problems. The method uses Graph Neural Networks (GNNs) to extract matrix features and a Multilayer Perceptron (MLP) to build the mapping between matrix features and the optimal θ, so it can adaptively output θ values for different matrices. Numerical experiments show that AutoAMG(θ) achieves significant speedup compared to the default θ value.
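    A hedged sketch of such a pipeline, assuming PyTorch (the encoder, the per-row features, and all layer sizes below are placeholders rather than the authors' architecture): a message-passing encoder pools features defined on the sparsity pattern of A into a single matrix embedding, and an MLP maps that embedding to a threshold constrained to (0, 1).

        import torch
        import torch.nn as nn

        class SimpleGNNEncoder(nn.Module):
            """Toy message-passing encoder: per-row features are aggregated over the
            sparsity pattern of A and mean-pooled into one matrix embedding.
            Hypothetical stand-in for the paper's GNN; sizes are assumptions."""
            def __init__(self, in_dim=2, hidden=32, layers=2):
                super().__init__()
                dims = [in_dim] + [hidden] * layers
                self.lins = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

            def forward(self, adj, x):
                # adj: (n, n) 0/1 sparsity pattern of A; x: (n, in_dim) per-row features
                for lin in self.lins:
                    x = torch.relu(lin(adj @ x))   # aggregate over neighbours, then transform
                return x.mean(dim=0)               # mean-pool to a single embedding per matrix

        class ThetaRegressor(nn.Module):
            """MLP mapping the matrix embedding to a strong threshold in (0, 1)."""
            def __init__(self, hidden=32):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

            def forward(self, emb):
                return torch.sigmoid(self.mlp(emb))  # constrain theta to (0, 1)

        # Illustration on a random symmetric sparsity pattern (not a real problem matrix):
        n = 50
        A = (torch.rand(n, n) < 0.1).float()
        A = ((A + A.t()) > 0).float()
        x = torch.stack([A.sum(dim=1), A.diagonal()], dim=1)  # per-row features: degree, diagonal entry
        theta = ThetaRegressor()(SimpleGNNEncoder()(A, x))
        print(float(theta))                                   # predicted strong threshold

    In a training setting, each matrix would be labelled with the θ that minimizes AMG iteration count or solve time, and the encoder and regressor would be fit to those labels; the abstract only states that the learned mapping is from matrix features to the optimal θ.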

    Some Progress on Parallel Modal and Vibration Analysis Using the JAUMIN Framework

    No full text
    In the development of large and complex equipment, large-scale, highly efficient finite element analysis (FEA) is often required. This paper reports progress on the parallel solution of large-scale modal and vibration FE problems. Some predominant algorithms for modal and vibration analysis are first reviewed and studied. Based on the newly developed JAUMIN framework, the corresponding procedures are developed and integrated into a parallel modal and vibration solution system; the details of the parallel implementation are given. Numerical experiments are carried out to evaluate the parallel scalability of the procedures, and the results show that the maximum solution scale reaches ninety million degrees of freedom (DOFs) on up to 8192 parallel CPU processors with favorable computing efficiency.
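    For context, and as a standard formulation rather than anything specific to the paper: with FE stiffness matrix K and mass matrix M, modal analysis computes the lowest eigenpairs of the generalized eigenproblem, and (undamped) harmonic vibration analysis solves the corresponding frequency-domain system,

        K \varphi = \lambda M \varphi, \qquad \lambda = \omega^{2},
        \qquad
        \bigl(K - \omega^{2} M\bigr)\, u(\omega) = f(\omega),

    where ω is the excitation frequency; the eigenpairs give natural frequencies and mode shapes, and u(ω) gives the steady-state vibration response.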