    Development, Implementation, and Optimization of a Modern, Subsonic/Supersonic Panel Method

    In the early stages of aircraft design, engineers consider many different design concepts, examining the trade-offs between different component arrangements and sizes, thrust and power requirements, etc. Because so many designs are considered, it is best in the early stages to use simulation tools that are fast; accuracy is secondary. A common simulation tool for early design and analysis is the panel method. Panel methods were first developed in the 1950s and 1960s with the advent of modern computers. Despite being reasonably accurate and very fast, their development was abandoned in the late 1980s in favor of more complex and accurate simulation methods. The panel methods developed in the 1980s are still in use by aircraft designers today because of their accuracy and speed; however, they are cumbersome to use and limited in applicability. The purpose of this work is to reexamine panel methods in a modern context. In particular, this work focuses on the application of panel methods to supersonic aircraft (aircraft that fly faster than the speed of sound). Various aspects of the panel method, including the distributions of the unknown flow variables on the surface of the aircraft and how to efficiently solve for these unknowns, are discussed. Trade-offs between alternative formulations are examined and recommendations are given. This work also serves to bring together, clarify, and condense much of the previously published literature on panel methods so as to assist future developers of panel methods.
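
    A minimal illustration of the core idea (not the solver developed in this work): the surface is discretized into panels carrying unknown singularity strengths, flow tangency at each control point yields a dense linear system, and surface velocities follow from the solved strengths. The sketch below treats 2-D incompressible flow over a circular cylinder with constant-strength source panels, lumping the off-diagonal influence coefficients as point sources for brevity; all names and parameters are illustrative.

import numpy as np

# Hedged sketch: 2-D constant-strength source panel method for non-lifting flow
# over a unit cylinder.  Off-diagonal influences are lumped as point sources at
# panel midpoints; the self-induced normal velocity of a constant source panel
# is lambda/2.  Exact surface result for comparison: Cp = 1 - 4 sin^2(theta).
N, V_inf = 100, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, N + 1)       # panel end points (CCW)
xe, ye = np.cos(theta), np.sin(theta)
xc = 0.5 * (xe[:-1] + xe[1:])                      # control points (panel midpoints)
yc = 0.5 * (ye[:-1] + ye[1:])
sx, sy = np.diff(xe), np.diff(ye)
S = np.hypot(sx, sy)                               # panel lengths
tx, ty = sx / S, sy / S                            # unit tangents
nx, ny = ty, -tx                                   # outward unit normals

# Normal-velocity influence of unit source strength on panel j at control point i.
dx = xc[:, None] - xc[None, :]
dy = yc[:, None] - yc[None, :]
r2 = dx**2 + dy**2
np.fill_diagonal(r2, 1.0)                          # placeholder; diagonal set explicitly below
A = (S[None, :] / (2.0 * np.pi)) * (dx * nx[:, None] + dy * ny[:, None]) / r2
np.fill_diagonal(A, 0.5)                           # self-induced normal velocity term

b = -V_inf * nx                                    # flow tangency: total normal velocity = 0
lam = np.linalg.solve(A, b)                        # unknown source strengths per unit length

# Surface tangential velocity and pressure coefficient.
T = (S[None, :] / (2.0 * np.pi)) * (dx * tx[:, None] + dy * ty[:, None]) / r2
np.fill_diagonal(T, 0.0)                           # a panel induces no tangential velocity on itself
Vt = V_inf * tx + T @ lam
Cp = 1.0 - (Vt / V_inf)**2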

    A Newton-MR algorithm with complexity guarantees for nonconvex smooth unconstrained optimization

    In this paper, we consider variants of the Newton-MR algorithm for solving unconstrained, smooth, but non-convex optimization problems. Unlike the overwhelming majority of Newton-type methods, which rely on the conjugate gradient algorithm as the primary workhorse for their respective sub-problems, Newton-MR employs the minimum residual (MINRES) method. Recently, it has been established that MINRES has an inherent ability to detect non-positive curvature directions as soon as they arise, and that certain useful monotonicity properties are satisfied before such detection. We leverage these recent results and show that our algorithms come with desirable properties, including competitive first- and second-order worst-case complexities. Numerical examples demonstrate the performance of our proposed algorithms.
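
    A hedged sketch of the basic structure (not the paper's exact algorithm, which exploits nonpositive-curvature detection inside MINRES and uses a different line search): the Newton system is solved inexactly with MINRES rather than CG, with a simple Armijo backtracking safeguard. The Rosenbrock test function and all parameter choices below are illustrative.

import numpy as np
from scipy.sparse.linalg import minres

# Smooth nonconvex test objective: 2-D Rosenbrock function.
def f(z):
    x, y = z
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad(z):
    x, y = z
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2), 200 * (y - x**2)])

def hess(z):
    x, y = z
    return np.array([[2 + 1200 * x**2 - 400 * y, -400 * x],
                     [-400 * x, 200.0]])

z = np.array([-1.2, 1.0])
for k in range(100):
    g = grad(z)
    if np.linalg.norm(g) < 1e-8:
        break
    p, _ = minres(hess(z), -g, maxiter=50)          # inexact Newton step via MINRES
    if g @ p > -1e-12 * np.linalg.norm(g) * np.linalg.norm(p):
        p = -g                                      # fall back to steepest descent if not a descent direction
    t = 1.0
    while f(z + t * p) > f(z) + 1e-4 * t * (g @ p): # Armijo backtracking
        t *= 0.5
    z = z + t * p
print(k, z, f(z))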

    Speed limits and locality in many-body quantum dynamics

    We review the mathematical speed limits on quantum information processing in many-body systems. After the proof of the Lieb-Robinson theorem in 1972, the past two decades have seen substantial developments in its application to other questions, such as the simulatability of quantum systems on classical or quantum computers, the generation of entanglement, and even the properties of ground states of gapped systems. Moreover, Lieb-Robinson bounds have been extended in non-trivial ways to demonstrate speed limits in systems with power-law interactions or interacting bosons, and even to prove notions of locality that arise in cartoon models of quantum gravity with all-to-all interactions. We survey the progress that has occurred, highlight the most promising results and techniques, and discuss some central questions that remain open. To help bring newcomers to the field up to speed, we provide self-contained proofs of the field's most essential results. (Comment: review article; 93 pages, 10 figures, 1 table; v2: minor changes.)
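
    For reference, one common form of the bound at the center of this review (constants and exponents vary between formulations) is

\[
  \bigl\| [A_X(t), B_Y] \bigr\| \;\le\; C\,\|A_X\|\,\|B_Y\|\,\min(|X|,|Y|)\; e^{-\mu\,\bigl(d(X,Y) - v_{\mathrm{LR}}|t|\bigr)},
\]

    where $A_X$ and $B_Y$ are operators supported on regions $X$ and $Y$, $d(X,Y)$ is the distance between the regions, and $v_{\mathrm{LR}}$ is the Lieb-Robinson velocity: commutators outside the "light cone" $d(X,Y) \lesssim v_{\mathrm{LR}}|t|$ are exponentially small.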

    Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning

    One of the main challenges in molecular dynamics is overcoming the ‘timescale barrier’: in many realistic molecular systems, biologically important rare transitions occur on timescales that are not accessible to direct numerical simulation, even on the largest or specifically dedicated supercomputers. This article discusses how to circumvent the timescale barrier with a collection of transfer-operator-based techniques that have emerged from dynamical systems theory, numerical mathematics, and machine learning over the last two decades. We focus on how transfer operators can be used to approximate the dynamical behaviour on long timescales, review the introduction of this approach into molecular dynamics, and outline the respective theory as well as the algorithmic development, from the early numerics-based methods, via variational reformulations, to modern data-based techniques that utilize and improve concepts from machine learning. Furthermore, the relation to rare-event simulation techniques is explained, revealing a broad equivalence of variational principles for long-time quantities in molecular dynamics. The article takes a mainly mathematical perspective and leaves the application to real-world molecular systems to the more than 1000 research articles already written on this subject.
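
    A hedged sketch of the simplest data-driven transfer-operator approximation in this line of work, a Markov state model: a discretized trajectory is converted into a row-stochastic transition matrix at a chosen lag time, and the leading eigenvalues give the slow implied timescales. The synthetic trajectory and all parameters below are placeholders; in practice the discrete states come from clustering MD coordinates.

import numpy as np

rng = np.random.default_rng(0)
dtraj = rng.integers(0, 3, size=10000)          # placeholder discrete trajectory (3 states)
tau = 10                                        # lag time, in trajectory steps

# Count transitions at lag tau and build a row-stochastic transition matrix.
n = dtraj.max() + 1
C = np.zeros((n, n))
for i, j in zip(dtraj[:-tau], dtraj[tau:]):
    C[i, j] += 1
C = 0.5 * (C + C.T)                             # naive symmetrization (enforces detailed balance)
T = C / C.sum(axis=1, keepdims=True)

# Implied timescales t_i = -tau / ln(lambda_i) from the non-unit eigenvalues.
evals = np.sort(np.linalg.eigvals(T).real)[::-1]
implied_timescales = -tau / np.log(np.clip(evals[1:], 1e-12, 1 - 1e-12))
print(implied_timescales)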

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    Fast and Memory Efficient Algorithms for Structured Matrix Spectrum Approximation

    Approximating the singular values or eigenvalues of a matrix, i.e. spectrum approximation, is a fundamental task in data science and machine learning applications. While approximation of the top singular values has received considerable attention in numerical linear algebra, provably efficient algorithms for other spectrum approximation tasks, such as spectral-sum estimation and spectral density estimation, have started to emerge only recently. Two crucial components that have enabled efficient algorithms for spectrum approximation are access to randomness and structure in the underlying matrix. In this thesis, we study how randomization and the underlying structure of the matrix can be exploited to design fast and memory-efficient algorithms for spectral-sum estimation and spectral density estimation. In particular, we look at two classes of structure: sparsity and graph structure. In the first part of this thesis, we show that sparsity can be exploited to give low-memory algorithms for spectral summarization tasks such as approximating some Schatten norms, the Estrada index, and the logarithm of the determinant (log-det) of a sparse matrix. Surprisingly, we show that the space complexity of our algorithms is independent of the underlying dimension of the matrix. Complementing our result for sparse matrices, we show that matrices that satisfy a certain smooth definition of sparsity, but are potentially dense in the conventional sense, can be approximated to spectral-norm error by a truly sparse matrix. Our method is based on a simple sampling scheme that can be implemented in linear time. In the second part, we give the first truly sublinear-time algorithm to approximate the spectral density of the (normalized) adjacency matrix of an undirected, unweighted graph in earth-mover distance. In addition to our sublinear-time result, we give theoretical guarantees for a variant of the widely used Kernel Polynomial Method and propose a new moment-matching-based method for spectral density estimation of Hermitian matrices.
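
    A hedged sketch of the Kernel Polynomial Method flavor of spectral density estimation referenced above (the textbook version, without the thesis's guarantees, sublinear-time access model, or Jackson damping): Chebyshev moments of the spectral density are estimated with Hutchinson-style random probes and then used to reconstruct the density on a grid. The matrix is assumed symmetric with spectrum in [-1, 1], as for a normalized adjacency matrix; all names and parameters are illustrative.

import numpy as np

def chebyshev_moments(A, num_moments=50, num_probes=20, rng=None):
    """Estimate mu_k = tr(T_k(A)) / n with Hutchinson probes and the Chebyshev recurrence."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    mu = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe vector
        v_prev, v = z, A @ z                         # T_0(A) z and T_1(A) z
        mu[0] += z @ v_prev
        if num_moments > 1:
            mu[1] += z @ v
        for k in range(2, num_moments):
            v_prev, v = v, 2 * (A @ v) - v_prev      # three-term Chebyshev recurrence
            mu[k] += z @ v
    return mu / (num_probes * n)

def spectral_density(mu, grid):
    """Reconstruct rho(x) = (1/n) sum_i delta(x - lambda_i) on a grid in (-1, 1)."""
    T = np.cos(np.arange(len(mu))[:, None] * np.arccos(grid)[None, :])   # T_k(x)
    coeffs = np.concatenate(([mu[0]], 2 * mu[1:]))
    return (coeffs @ T) / (np.pi * np.sqrt(1 - grid**2))

# example usage (hypothetical symmetric A with spectrum in [-1, 1]):
# grid = np.linspace(-0.99, 0.99, 200)
# rho = spectral_density(chebyshev_moments(A), grid)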

    Krylov-aware stochastic trace estimation

    We introduce an algorithm for estimating the trace of a matrix function $f(\mathbf{A})$ using implicit products with a symmetric matrix $\mathbf{A}$. Existing methods for implicit trace estimation of a matrix function tend to treat matrix-vector products with $f(\mathbf{A})$ as a black box to be computed by a Krylov subspace method. Like other recent algorithms for implicit trace estimation, our approach is based on a combination of deflation and stochastic trace estimation. However, we take a closer look at how products with $f(\mathbf{A})$ are integrated into these approaches, which enables several efficiencies not present in previously studied methods. In particular, we describe a Krylov subspace method for computing a low-rank approximation of a matrix function by a computationally efficient projection onto a Krylov subspace. (Comment: Figure 5.1 differs somewhat from the published version due to a clerical error made when uploading the images to the journal.)
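
    For context, a hedged sketch of the black-box baseline that such Krylov-aware methods refine: plain Girard-Hutchinson probing for tr(f(A)), with each product f(A)z approximated by the standard Lanczos rule f(A)z ~ ||z|| Q f(T) e_1 (no deflation or reuse of Krylov information). Function names and parameters below are illustrative; there is no reorthogonalization or breakdown handling.

import numpy as np

def lanczos_fAb(A, b, f, m=30):
    """Approximate f(A) b for symmetric A with m Lanczos steps."""
    n = b.size
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q, q_prev = b / np.linalg.norm(b), np.zeros(n)
    for j in range(m):
        Q[:, j] = q
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, V = np.linalg.eigh(T)
    fT_e1 = V @ (f(evals) * V[0, :])               # f(T) e_1 via eigendecomposition of T
    return np.linalg.norm(b) * (Q @ fT_e1)

def hutchinson_trace_f(A, f, num_probes=30, m=30, rng=None):
    """Estimate tr(f(A)) with Rademacher probes, treating f(A)z as a Lanczos black box."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    est = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ lanczos_fAb(A, z, f, m)
    return est / num_probes

# example usage (hypothetical SPD matrix A): log-determinant estimate
# logdet_estimate = hutchinson_trace_f(A, np.log)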

    Randomized algorithms for low-rank matrix approximation: Design, analysis, and applications

    This survey explores modern approaches for computing low-rank approximations of high-dimensional matrices by means of the randomized SVD, randomized subspace iteration, and randomized block Krylov iteration. The paper compares the procedures via theoretical analyses and numerical studies to highlight how the best choice of algorithm depends on the spectral properties of the matrix and the computational resources available. Despite superior performance for many problems, randomized block Krylov iteration has not been widely adopted in computational science. The paper strengthens the case for this method in three ways. First, it presents new pseudocode that can significantly reduce computational costs. Second, it provides a new analysis that yields simple, precise, and informative error bounds. Last, it showcases applications to challenging scientific problems, including principal component analysis for genetic data and spectral clustering for molecular dynamics data. (Comment: 60 pages, 14 figures.)
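
    A hedged sketch of the baseline randomized SVD with subspace (power) iteration that the survey builds on; this is the textbook Halko-Martinsson-Tropp template, not the survey's new block Krylov pseudocode, and parameter choices are illustrative.

import numpy as np

def randomized_svd(A, rank, oversample=10, power_iters=2, rng=None):
    """Rank-`rank` approximate SVD of A via a Gaussian sketch and subspace iteration."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                        # orthonormal range sketch
    for _ in range(power_iters):                          # subspace (power) iteration
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                                           # small (rank + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

    Roughly speaking, randomized block Krylov iteration replaces the repeated re-orthonormalization above by projecting onto the orthonormalized concatenation of the whole block Krylov basis built from the sketch, which is the approach the survey advocates for matrices with slowly decaying spectra.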

    An Improved Classical Singular Value Transformation for Quantum Machine Learning

    We study quantum speedups in quantum machine learning (QML) by analyzing the quantum singular value transformation (QSVT) framework. QSVT, introduced by [GSLW, STOC'19, arXiv:1806.01838], unifies all major types of quantum speedup; in particular, a wide variety of QML proposals are applications of QSVT on low-rank classical data. We challenge these proposals by providing a classical algorithm that matches the performance of QSVT in this regime up to a small polynomial overhead. We show that, given a matrix $A \in \mathbb{C}^{m\times n}$, a vector $b \in \mathbb{C}^{n}$, a bounded degree-$d$ polynomial $p$, and linear-time pre-processing, we can output a description of a vector $v$ such that $\|v - p(A)b\| \leq \varepsilon\|b\|$ in $\widetilde{\mathcal{O}}(d^{11}\|A\|_{\mathrm{F}}^4/(\varepsilon^2\|A\|^4))$ time. This improves upon the best known classical algorithm [CGLLTW, STOC'20, arXiv:1910.06151], which requires $\widetilde{\mathcal{O}}(d^{22}\|A\|_{\mathrm{F}}^6/(\varepsilon^6\|A\|^6))$ time, and narrows the gap with QSVT, which, after linear-time pre-processing to load the input into a quantum-accessible memory, can estimate the magnitude of an entry of $p(A)b$ to $\varepsilon\|b\|$ error in $\widetilde{\mathcal{O}}(d\|A\|_{\mathrm{F}}/(\varepsilon\|A\|))$ time. Our key insight is to combine the Clenshaw recurrence, an iterative method for computing matrix polynomials, with sketching techniques to simulate QSVT classically. We introduce several new classical techniques in this work, including (a) a non-oblivious matrix sketch for approximately preserving bilinear forms, (b) a new stability analysis for the Clenshaw recurrence, and (c) a new technique to bound arithmetic progressions of the coefficients appearing in the Chebyshev series expansion of bounded functions, each of which may be of independent interest. (Comment: 62 pages; v3: fixed bug, runtime exponent now 11 instead of 9; v2: revised abstract to clarify results.)
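
    A hedged sketch of the exact Clenshaw recurrence for applying a Chebyshev-series polynomial p(A) = sum_k c_k T_k(A) to a vector, the starting point of the paper; the paper's classical algorithm additionally replaces the matrix-vector products with sketched approximations. Assumes a symmetric A with spectral norm at most 1; names are illustrative.

import numpy as np

def clenshaw_poly_apply(A, b, coeffs):
    """Return p(A) b for p(x) = sum_k coeffs[k] * T_k(x) via the Clenshaw recurrence."""
    u_next = np.zeros_like(b)          # u_{k+1}
    u_next2 = np.zeros_like(b)         # u_{k+2}
    for c in coeffs[:0:-1]:            # k = d, d-1, ..., 1
        u = c * b + 2 * (A @ u_next) - u_next2
        u_next2, u_next = u_next, u
    return coeffs[0] * b + A @ u_next - u_next2

# quick self-check on a 1x1 "matrix": T_3(0.5) = 4*(0.5)^3 - 3*(0.5) = -1.0
# A = np.array([[0.5]]); b = np.array([1.0])
# clenshaw_poly_apply(A, b, np.array([0.0, 0.0, 0.0, 1.0]))   # -> approx [-1.0]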