
    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e., the rapidly scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. 3-D field solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for efficiently storing and solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be advantageous.
    Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
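
    A minimal illustrative sketch (not taken from the paper) of why tensors help with high-dimensional storage: a tensor generalizes vectors (order 1) and matrices (order 2) to higher orders, and a low-rank factorization such as a CP decomposition stores an order-d array far more compactly than the dense n**d entries. The sizes and rank below are hypothetical.

    # Illustrative sketch: low-rank tensor storage vs. dense storage (hypothetical sizes)
    import numpy as np

    d, n, r = 6, 10, 3                                   # order, mode size, assumed CP rank
    factors = [np.random.rand(n, r) for _ in range(d)]   # CP factor matrices

    dense_entries = n ** d        # dense storage grows as n**d
    cp_entries = d * n * r        # factored storage grows only as d*n*r
    print(dense_entries, cp_entries)                     # 1000000 vs 180

    # Reconstruct a single entry T[i1,...,id] = sum_k prod_m factors[m][i_m, k]
    idx = (1, 2, 3, 4, 5, 0)
    entry = np.sum(np.prod([factors[m][idx[m]] for m in range(d)], axis=0))
    print(entry)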

    Riemannian preconditioned algorithms for tensor completion via tensor ring decomposition

    We propose Riemannian preconditioned algorithms for the tensor completion problem via tensor ring decomposition. A new Riemannian metric is developed on the product space of the mode-2 unfolding matrices of the core tensors in the tensor ring decomposition. The construction of this metric aims to approximate the Hessian of the cost function by its diagonal blocks, paving the way for various Riemannian optimization methods. Specifically, we propose Riemannian gradient descent and Riemannian conjugate gradient algorithms. We prove that both algorithms globally converge to a stationary point. In the implementation, we exploit the tensor structure and adopt an economical procedure that avoids forming and computing with large matrices in the gradients, which significantly reduces the computational cost. Numerical experiments on various synthetic and real-world datasets -- movie ratings, hyperspectral images, and high-dimensional functions -- suggest that the proposed algorithms are more efficient and have better reconstruction ability than other candidates.
    Comment: 25 pages, 7 figures, 5 tables
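
    A minimal sketch (assumptions noted in the comments, not code from the paper) of the tensor ring format the abstract refers to: each core G_k has shape (r_k, n_k, r_{k+1}) with the ranks closing into a ring (r_{d+1} = r_1), an entry of the full tensor is the trace of a product of core slices, and the mode-2 unfolding of a core is the n_k x (r_k * r_{k+1}) matrix on which the proposed metric is defined. Sizes and ranks below are hypothetical.

    # Illustrative sketch of tensor ring (TR) cores, entry evaluation, and mode-2 unfolding
    import numpy as np

    shape, ranks = (4, 5, 6), (2, 3, 2, 2)   # hypothetical sizes; ranks[0] == ranks[-1]
    cores = [np.random.rand(ranks[k], shape[k], ranks[k + 1]) for k in range(len(shape))]

    def tr_entry(cores, idx):
        """Entry T[idx] = trace(G_1[:, i1, :] @ G_2[:, i2, :] @ ... @ G_d[:, id, :])."""
        prod = cores[0][:, idx[0], :]
        for k in range(1, len(cores)):
            prod = prod @ cores[k][:, idx[k], :]
        return np.trace(prod)

    def mode2_unfolding(core):
        """Reshape a core of shape (r_k, n_k, r_{k+1}) into its n_k x (r_k * r_{k+1}) mode-2 unfolding."""
        r_left, n, r_right = core.shape
        return core.transpose(1, 0, 2).reshape(n, r_left * r_right)

    print(tr_entry(cores, (0, 1, 2)))
    print(mode2_unfolding(cores[1]).shape)   # (5, 6)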