209 research outputs found

    Lecture 10: Preconditioned Iterative Methods for Linear Systems

    Iterative methods for the solution of linear systems of equations – such as stationary, semi-iterative, and Krylov subspace methods – are classical methods taught in numerical analysis courses, but adapting these methods to run efficiently at large scale on high-performance computers is challenging and a constantly evolving topic. Preconditioners – necessary to aid the convergence of iterative methods – come in many forms, from algebraic to physics-based; they are regularly being developed for linear systems from different classes of problems and are likewise evolving with high-performance computers. This lecture will cover the background and some recent developments on iterative methods and preconditioning in the context of high-performance parallel computers. Topics include asynchronous iterative methods that avoid the potentially high synchronization cost when there are very large numbers of computational threads, parallel sparse approximate inverse preconditioners, parallel incomplete factorization preconditioners and sparse triangular solvers, and preconditioning with hierarchical rank-structured matrices for kernel matrix equations.
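As a minimal illustration of the preconditioned Krylov methods the lecture covers (this is not code from the lecture; the 1-D Laplacian test system and all names are chosen here only for the sketch), a Jacobi-preconditioned conjugate gradient in NumPy:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner.

    M_inv_diag holds 1/diag(A): applying the preconditioner is an
    elementwise multiply, which is trivially parallel; one reason
    simple algebraic preconditioners remain attractive on HPC systems.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD system: the tridiagonal 1-D Laplacian.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
res = np.linalg.norm(A @ x - b)    # small residual
```

Swapping the elementwise multiply for a sparse triangular solve (as in incomplete factorization preconditioners) changes only the preconditioner application step, which is exactly where the parallelization difficulties discussed in the lecture arise.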

    Excitation and Imaging of Resonant Optical Modes of Au Triangular Nano-Antennas Using Cathodoluminescence Spectroscopy

    Cathodoluminescence (CL) imaging spectroscopy is an important technique for understanding the resonant behavior of optical nanoantennas. We report high-resolution CL spectroscopy of triangular gold nanoantennas designed with a near-vacuum effective index and a very small metal-substrate interface. This design helped address issues related to background luminescence and the shifting of dipole modes beyond the visible spectrum. Spatial and spectral investigations of various plasmonic modes are reported. Out-of-plane dipole modes excited with a vertically incident electron beam showed high-contrast tip illumination in panchromatic imaging. By tilting the nanostructures during fabrication, in-plane dipole modes of the antennas were excited. Finite-difference time-domain simulations for electron and optical excitations of different modes showed excellent agreement with experimental results. Our approach of efficiently exciting antenna modes by using low-index substrates is confirmed both with experiments and numerical simulations. This should provide further insight into optical antennas for various applications. Comment: To be published in JVST B (accepted, Sep 2010); 15 pages, 6 figures; originally presented at EIPBN 2010.

    Data-Driven Linear Complexity Low-Rank Approximation of General Kernel Matrices: A Geometric Approach

    A general, rectangular kernel matrix may be defined as $K_{ij} = \kappa(x_i, y_j)$, where $\kappa(x,y)$ is a kernel function and where $X=\{x_i\}_{i=1}^m$ and $Y=\{y_i\}_{i=1}^n$ are two sets of points. In this paper, we seek a low-rank approximation to a kernel matrix where the sets of points $X$ and $Y$ are large and are not well-separated (e.g., the points in $X$ and $Y$ may be "intermingled"). Such rectangular kernel matrices may arise, for example, in Gaussian process regression, where $X$ corresponds to the training data and $Y$ corresponds to the test data. In this case, the points are often high-dimensional. Since the point sets are large, we must exploit the fact that the matrix arises from a kernel function and avoid forming the matrix, thus ruling out most algebraic techniques. In particular, we seek methods that can scale linearly, i.e., with computational complexity $O(m)$ or $O(n)$ for a fixed accuracy or rank. The main idea in this paper is to geometrically select appropriate subsets of points to construct a low-rank approximation. An analysis in this paper guides how this selection should be performed.
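The general idea of geometrically selecting points to build a low-rank approximation without forming the full matrix can be sketched with a Nyström-style pseudoskeleton approximation. This is an illustration under assumed choices (Gaussian kernel, farthest-point sampling for the landmarks), not the paper's algorithm:

```python
import numpy as np

def gauss_kernel(X, Y, h=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-|x - y|^2 / (2 h^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

def farthest_points(P, k):
    """Greedy farthest-point sampling: a simple geometric selection rule."""
    idx = [0]
    d = np.linalg.norm(P - P[0], axis=1)
    for _ in range(k - 1):
        j = int(np.argmax(d))
        idx.append(j)
        d = np.minimum(d, np.linalg.norm(P - P[j], axis=1))
    return np.array(idx)

def lowrank_kernel(X, Y, k, h=1.0):
    """Rank-k pseudoskeleton approximation K ~ U @ W @ V with
    U = k(X, Z), W = pinv(k(Z, Z)), V = k(Z, Y), where the k landmark
    points Z are chosen geometrically from the union of X and Y.
    The full m-by-n matrix is never formed: cost is O((m + n) k^2)."""
    P = np.vstack([X, Y])
    Z = P[farthest_points(P, k)]
    U = gauss_kernel(X, Z, h)                   # m x k
    W = np.linalg.pinv(gauss_kernel(Z, Z, h))   # k x k
    V = gauss_kernel(Z, Y, h)                   # k x n
    return U, W, V

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))   # "training" points
Y = rng.normal(size=(200, 2))   # "test" points, intermingled with X
U, W, V = lowrank_kernel(X, Y, k=40)
K = gauss_kernel(X, Y)          # formed here only to measure the error
err = np.linalg.norm(K - U @ W @ V) / np.linalg.norm(K)
```

The product $U W V$ is stored in factored form, so a matrix-vector product costs $O((m+n)k)$ instead of $O(mn)$, which is the kind of linear scaling the abstract targets.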

    Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses

    Abstract. This paper presents a new combinatorial approach to constructing a sparse, implicit basis for the null space of a sparse, under-determined matrix. Our approach is to compute a column space basis that has a sparse inverse, which can be used to represent a null space basis in implicit form. We investigate three different algorithms for computing column space bases: two greedy algorithms implemented using graph matchings, and a third that employs a divide-and-conquer strategy implemented with hypergraph partitioning followed by a matching. Our results show that for many matrices from linear programming, structural analysis, and circuit simulation, it is possible to compute column space bases having sparse inverses, contrary to conventional wisdom. The hypergraph partitioning method yields sparser basis inverses and has low computational time requirements relative to the greedy approaches. We also discuss the complexity of selecting a column space basis when it is known that such a basis exists in block diagonal form with a given small block size. Key words. sparse column space basis, sparse null space basis, block angular matrix, block diagonal matrix, matching, hypergraph partitioning, inverse of a basis. AMS subject classifications. 65F50, 68R10, 90C20.
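The implicit null space representation this paper targets can be sketched in a few lines: if $A = [B \mid N]$ with $B$ invertible, then $Z = \begin{bmatrix} -B^{-1}N \\ I \end{bmatrix}$ spans the null space of $A$. The combinatorial matching and partitioning algorithms for choosing a $B$ with a sparse inverse are the paper's contribution and are not reproduced here; in this sketch $B$ is simply the first $m$ columns, which suffices for a generic dense example:

```python
import numpy as np

def implicit_null_basis(A):
    """For a full-row-rank m x n matrix A (m < n), split A = [B | N]
    with B invertible.  Then Z = [[-inv(B) @ N], [I]] spans null(A),
    and Z stays sparse whenever inv(B) @ N is sparse.  Here B is just
    the first m columns (fine for a generic dense A); choosing B
    combinatorially so that its inverse is sparse is the hard part
    the paper addresses.
    """
    m, n = A.shape
    B, N = A[:, :m], A[:, m:]
    T = np.linalg.solve(B, N)          # inv(B) @ N
    return np.vstack([-T, np.eye(n - m)])

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 7))
Z = implicit_null_basis(A)
res = np.abs(A @ Z).max()              # ~0, since A @ Z = -N + N
```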