    Grassland: A Rapid Algebraic Modeling System for Million-variable Optimization

    An algebraic modeling system (AMS) is a type of mathematical software for optimization problems, which allows users to define symbolic mathematical models in a specific language, instantiate them with a given source of data, and solve them with the aid of external solver engines. With the rapidly growing scale of business models and the increasing need for timeliness, traditional AMSs are not sufficient to meet the following industry needs: 1) million-variable models need to be instantiated from raw data very efficiently; 2) strictly feasible solutions of million-variable models need to be delivered rapidly to make up-to-date decisions in highly dynamic environments. Grassland is a rapid AMS that provides an end-to-end solution to these new challenges. It integrates a parallelized instantiation scheme for large-scale linear constraints and a sequential decomposition method that accelerates model solving exponentially with an acceptable loss of optimality. Extensive benchmarks on both classical models and a real enterprise scenario demonstrate a 6-10x speedup of Grassland over state-of-the-art solutions on model instantiation. Our proposed system has been deployed in Huawei's large-scale production planning scenario. With the aid of our decomposition method, Grassland successfully accelerated Huawei's million-variable production planning simulation pipeline from hours to 3-5 minutes, supporting near-real-time production planning decisions in a highly dynamic supply-demand environment.
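    The abstract does not show Grassland's modeling language, so as a point of reference for the define/instantiate/solve workflow that any AMS provides, here is a minimal sketch using PuLP, an open-source Python modeling library; the product set, data, and model below are purely illustrative and unrelated to Grassland's actual interface.

```python
# Minimal AMS-style workflow sketch (PuLP, open source); the data and model
# are illustrative only and do not reflect Grassland's interface.
import pulp

# Symbolic model: choose production quantities to maximize profit under capacity.
products = ["A", "B"]                       # hypothetical product set
profit   = {"A": 3.0, "B": 5.0}             # profit per unit
hours    = {"A": 1.0, "B": 2.0}             # machine-hours per unit
capacity = 100.0                            # total machine-hours available

model = pulp.LpProblem("production_plan", pulp.LpMaximize)
x = pulp.LpVariable.dicts("make", products, lowBound=0)

model += pulp.lpSum(profit[p] * x[p] for p in products)             # objective
model += pulp.lpSum(hours[p] * x[p] for p in products) <= capacity  # constraint

# Instantiation happens when the symbolic expressions meet the data above;
# an external solver engine (CBC here) then produces the plan.
model.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: x[p].value() for p in products})
```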

    Performance Portable Solid Mechanics via Matrix-Free p-Multigrid

    Finite element analysis of solid mechanics is a foundational tool of modern engineering, with low-order finite element methods and assembled sparse matrices representing the industry standard for implicit analysis. We use performance models and numerical experiments to demonstrate that high-order methods greatly reduce the cost of reaching engineering tolerances while enabling effective use of GPUs. We demonstrate the reliability, efficiency, and scalability of matrix-free p-multigrid methods with algebraic multigrid coarse solvers through large-deformation hyperelastic simulations of multiscale structures. We investigate accuracy, cost, and execution time on multi-node CPU and GPU systems for moderate to large models using AMD MI250X (OLCF Crusher), NVIDIA A100 (NERSC Perlmutter), and V100 (LLNL Lassen and OLCF Summit), obtaining order-of-magnitude efficiency improvements over a broad range of model properties and scales. We discuss efficient matrix-free representation of Jacobians and demonstrate how automatic differentiation enables rapid development of nonlinear material models without impacting debuggability or workflows targeting GPUs.
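    The paper's matrix-free Jacobian representation is not spelled out in the abstract; the sketch below uses JAX's jax.jvp on a toy 1-D nonlinear residual (purely illustrative, not the paper's hyperelastic formulation) to show how automatic differentiation yields Jacobian-vector products without assembling a sparse matrix, which is the building block a matrix-free smoother or Krylov solver consumes.

```python
# Matrix-free Jacobian action via automatic differentiation on a toy 1-D
# nonlinear "material" residual (illustrative only).
import jax
import jax.numpy as jnp

def residual(u):
    # Hypothetical nonlinear residual: cubic springs between neighbouring
    # nodes with fixed ends, standing in for a discretized hyperelastic operator.
    du = jnp.diff(jnp.concatenate([jnp.zeros(1), u, jnp.zeros(1)]))
    force = du + 0.1 * du**3            # nonlinear constitutive law
    return force[:-1] - force[1:]       # nodal equilibrium

u0 = jnp.linspace(0.0, 0.1, 50)         # current displacement iterate

def jacobian_vector_product(v):
    # J(u0) @ v without ever forming J: jax.jvp differentiates the residual
    # along the direction v, which is all a Krylov or multigrid cycle needs.
    _, Jv = jax.jvp(residual, (u0,), (v,))
    return Jv

v = jnp.ones_like(u0)
print(jacobian_vector_product(v)[:5])
```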

    High-Performance Computing: Dos and Don’ts

    Computational fluid dynamics (CFD) is the main field of computational mechanics that has historically benefited from advances in high-performance computing. High-performance computing involves several techniques for making a simulation efficient and fast, such as distributed memory parallelism, shared memory parallelism, vectorization, and memory access optimizations. As an introduction, we present the anatomy of supercomputers, with special emphasis on the HPC aspects relevant to CFD. We then develop some of the HPC concepts and numerical techniques applied to the complete CFD simulation framework: from preprocessing (meshing) to postprocessing (visualization), through the simulation itself (assembly and iterative solvers).
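    The chapter's own examples are not reproduced here; as a small illustration of one of the techniques it lists (vectorization), the following NumPy sketch contrasts an element-wise loop with the equivalent whole-array update for a 1-D Jacobi-style stencil, with the array size chosen arbitrarily.

```python
# Illustrative only: vectorization of a 1-D Jacobi-style stencil update.
import time
import numpy as np

n = 2_000_000
u = np.random.rand(n)

def jacobi_loop(u):
    # Element-by-element update: easy to read, but each iteration pays
    # interpreter overhead and defeats SIMD/vector units.
    out = np.empty(u.size - 2)
    for i in range(1, u.size - 1):
        out[i - 1] = 0.5 * (u[i - 1] + u[i + 1])
    return out

def jacobi_vectorized(u):
    # Same arithmetic on whole array slices: the loop runs in compiled,
    # SIMD-friendly code over contiguous memory.
    return 0.5 * (u[:-2] + u[2:])

for f in (jacobi_loop, jacobi_vectorized):
    t0 = time.perf_counter()
    f(u)
    print(f"{f.__name__}: {time.perf_counter() - t0:.3f} s")
```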

    ToDD: Topological Compound Fingerprinting in Computer-Aided Drug Discovery

    In computer-aided drug discovery (CADD), virtual screening (VS) is used to identify, within a large library of compounds, the drug candidates most likely to bind to a molecular target. Most VS methods to date have focused on using canonical compound representations (e.g., SMILES strings, Morgan fingerprints) or generating alternative fingerprints of the compounds by training progressively more complex variational autoencoders (VAEs) and graph neural networks (GNNs). Although VAEs and GNNs have led to significant improvements in VS performance, these methods suffer from reduced performance when scaling to large virtual compound datasets, and their performance has shown only incremental improvements in the past few years. To address this problem, we developed a novel method using multiparameter persistence (MP) homology that produces topological fingerprints of the compounds as multidimensional vectors. Our primary contribution is framing the VS process as a new topology-based graph ranking problem by partitioning a compound into chemical substructures informed by the periodic properties of its atoms and extracting their persistent homology features at multiple resolution levels. We show that margin-loss fine-tuning of pretrained Triplet networks attains highly competitive results in differentiating between compounds in the embedding space and ranking their likelihood of becoming effective drug candidates. We further establish theoretical guarantees for the stability properties of our proposed MP signatures and demonstrate that our models, enhanced by the MP signatures, outperform state-of-the-art methods on benchmark datasets by a wide and highly statistically significant margin (e.g., a 93% gain for Cleves-Jain and a 54% gain for the DUD-E Diverse dataset). Published at NeurIPS 2022 (36th Conference on Neural Information Processing Systems).
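    The MP fingerprints themselves cannot be reconstructed from the abstract; the sketch below only illustrates the margin-loss fine-tuning step on a Triplet network in PyTorch, with random vectors standing in for the topological fingerprints and all dimensions chosen arbitrarily.

```python
# Triplet-margin fine-tuning sketch; random vectors stand in for the
# multiparameter-persistence fingerprints described in the abstract.
import torch
import torch.nn as nn

fingerprint_dim, embed_dim = 256, 64       # hypothetical sizes

encoder = nn.Sequential(                   # stand-in for the pretrained network
    nn.Linear(fingerprint_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
)
loss_fn = nn.TripletMarginLoss(margin=1.0)
optim = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# anchor: a query compound; positive: an active for the same target;
# negative: a decoy. Real inputs would be MP topological fingerprints.
anchor, positive, negative = (torch.randn(32, fingerprint_dim) for _ in range(3))

optim.zero_grad()
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optim.step()
print(float(loss))
```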

    A quasi‐cache‐aware model for optimal domain partitioning in parallel geometric multigrid

    Stencil computations form the heart of numerical simulations that solve Partial Differential Equations using Finite Difference, Finite Element, and Finite Volume methods. Geometric Multigrid is an optimal O(N) hierarchical tool employing stencil computations in its chief constituents, namely smoothing, restriction, and interpolation. When Multigrid is parallelized over distributed-shared memory architectures, the domain partitioning traditionally creates cubic partitions of the mesh to minimize overall communication. Thus, the orthodox approach considers only load balancing and communication minimization when determining the domain partitioning. In this article, we show that these two factors are not sufficient to obtain optimal partitions for Parallel Geometric Multigrid. To this end, we develop and validate a high-level analytical model showing that "close to 2-D" partitions for Geometric Multigrid can give higher performance than the partitions returned by the MPI_Dims_create() function, which minimizes the communication volume by default. We quantify sub-domain-level cache misses in Parallel Geometric Multigrid and obtain families of optimal domain partitions. We conclude that, in addition to communication minimization and load balance, the sub-domain-level cache misses of the application-specific stencil kernel and of the communicated planes should be taken into account to obtain optimal partitions for Parallel Geometric Multigrid.
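    The article's analytical model is not reproduced here; the toy script below only enumerates candidate process grids for a cubic mesh and scores them by per-rank halo (communication) volume, the communication-only criterion that an MPI_Dims_create-style partitioning minimizes and that the article argues is insufficient on its own. Mesh size and rank count are illustrative.

```python
# Enumerate 3-D process grids for an N^3 mesh on P ranks and report the
# per-rank halo volume; the paper's cache-miss term is NOT modeled here.
from itertools import product

N, P = 512, 64                           # illustrative mesh size and rank count

def process_grids(P):
    # All factorizations of P into a (px, py, pz) process grid.
    for px, py, pz in product(range(1, P + 1), repeat=3):
        if px * py * pz == P:
            yield px, py, pz

def halo_volume(px, py, pz):
    # One ghost layer per shared face of the (N/px, N/py, N/pz) subdomain.
    nx, ny, nz = N // px, N // py, N // pz
    return 2 * (nx * ny * (pz > 1) + nx * nz * (py > 1) + ny * nz * (px > 1))

# Cubic-ish grids come out first under this communication-only metric,
# which is exactly the criterion the article shows to be incomplete.
for grid in sorted(process_grids(P), key=lambda g: halo_volume(*g))[:3]:
    print(grid, halo_volume(*grid))
```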

    Heterogeneous CPU/GPU co-execution of CFD simulations on the POWER9 architecture: Application to airplane aerodynamics

    High-fidelity Computational Fluid Dynamics simulations are generally associated with large computing requirements, which become progressively more acute with each new generation of supercomputers. However, significant research effort is required to unlock the computing power of leading-edge systems, currently referred to as pre-Exascale systems, which are based on increasingly complex architectures. In this paper, we present the approach implemented in the computational mechanics code Alya. We describe in detail the parallelization strategy implemented to fully exploit the different levels of parallelism, together with a novel co-execution method for the efficient utilization of heterogeneous CPU/GPU architectures. The latter is based on a multi-code co-execution approach with a dynamic load balancing mechanism. The performance of all the proposed strategies has been assessed for airplane simulations on the POWER9 architecture accelerated with NVIDIA Volta V100 GPUs.
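    Alya's co-execution mechanism is not detailed in the abstract; the schematic sketch below only conveys the general idea of dynamic load balancing between a CPU and a GPU partition, with made-up throughput numbers standing in for measured kernel timings.

```python
# Schematic dynamic load balancing between a CPU and a GPU partition: after
# each iteration the work split is re-proportioned to the observed speeds.
# This is NOT Alya's actual scheme; throughputs below are hypothetical.
total_elements = 1_000_000
gpu_fraction = 0.5                              # initial guess for the GPU share

def measured_time(device, n_elements):
    # Stand-in for a timed assembly/solver step on each device.
    throughput = {"gpu": 5.0e6, "cpu": 1.0e6}[device]   # elements per second
    return n_elements / throughput

for step in range(5):
    n_gpu = int(gpu_fraction * total_elements)
    n_cpu = total_elements - n_gpu
    t_gpu, t_cpu = measured_time("gpu", n_gpu), measured_time("cpu", n_cpu)

    # Rebalance: give each device work proportional to its observed speed,
    # so both finish the next iteration at roughly the same time.
    speed_gpu, speed_cpu = n_gpu / t_gpu, n_cpu / t_cpu
    gpu_fraction = speed_gpu / (speed_gpu + speed_cpu)
    print(f"step {step}: t_gpu={t_gpu:.3f}s t_cpu={t_cpu:.3f}s "
          f"-> gpu_fraction={gpu_fraction:.2f}")
```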

    Computational Intelligent Models for Alzheimer's Prediction Using Audio Transcript Data

    Alzheimer's dementia (AD) is characterized by memory loss, which is one of the earliest symptoms to develop. In this study, we investigated audio transcript data of patients with Alzheimer's dementia. The study involved three intelligent computational approaches for the automatic detection of linguistic indicators for early diagnosis of Alzheimer's dementia: conventional machine learning (Support Vector Machine, Random Forest, Decision Tree), sequential deep learning (LSTM, bidirectional LSTM, CNN-LSTM), and transfer learning (BERT, XLNet) models. These models were trained on the DementiaBank clinical transcript dataset. A grid-search approach was used to tune the hyperparameter values. Text vectorization was done using the Term Frequency-Inverse Document Frequency (TF-IDF) information retrieval approach; TF-IDF builds on the Bag of Words (BoW) paradigm and weights the more and less relevant words in a transcript. Results were evaluated and compared using several performance metrics. The state-of-the-art techniques implemented on the DementiaBank dataset in our methodology achieved better performance in terms of accuracy. Transfer learning models showed better classification results than sequential deep learning models, while sequential deep learning models outperformed traditional machine learning models. Overall, BERT and XLNet were the most accurate, with accuracies of 93% and 92%, respectively.
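    DementiaBank transcripts cannot be redistributed here, so the sketch below only shows the shape of the TF-IDF plus conventional-classifier branch with grid-search tuning, using two dummy transcript strings and labels in place of the real data.

```python
# TF-IDF + SVM pipeline with grid-search tuning; the transcripts and labels
# below are dummies standing in for DementiaBank data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

transcripts = ["the boy is um taking the the cookie jar",
               "the boy reaches for the cookie jar on the shelf"] * 10
labels = [1, 0] * 10                      # 1 = dementia, 0 = control (dummy)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # bag-of-words weighting
    ("svm", SVC()),
])
param_grid = {"svm__C": [0.1, 1, 10], "svm__kernel": ["linear", "rbf"]}

search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(transcripts, labels)
print(search.best_params_, search.best_score_)
```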