154 research outputs found

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e. the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. discretizations in 3-D field solvers and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for efficiently storing and solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be of advantage. Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
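    The abstract's starting point, that a tensor generalizes a matrix and a vector, can be made concrete with a minimal numpy sketch (illustrative only, not code from the paper; the `unfold` helper is a standard matricization, not an API the authors define):

```python
import numpy as np

# A 3rd-order tensor is a 3-D array, generalizing a matrix (order 2)
# and a vector (order 1).
X = np.arange(24, dtype=float).reshape(2, 3, 4)

def unfold(T, mode):
    """Mode-n unfolding: matricize T so that rows index the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Tensor algorithms typically operate on these matricized views rather
# than on one huge flattened matrix of all parameters at once.
X0 = unfold(X, 0)  # shape (2, 12)
X1 = unfold(X, 1)  # shape (3, 8)
```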

    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure interpretability of the sources extracted by tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
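    To illustrate the general idea only (this is a crude hard-assignment step, not the paper's sparse-coding formulation), one way to force a factor into a known dictionary is to snap each estimated column to its most correlated atom:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 8))
D /= np.linalg.norm(D, axis=0)                 # dictionary of unit-norm atoms

# Noisy estimate of a 3-column factor whose true columns are atoms 1, 5, 2.
A_est = D[:, [1, 5, 2]] + 0.05 * rng.standard_normal((20, 3))

# Hard assignment: replace each estimated column by its best-matching atom,
# so the constrained factor lies exactly in the dictionary.
A_unit = A_est / np.linalg.norm(A_est, axis=0)
idx = np.argmax(np.abs(D.T @ A_unit), axis=0)
A_dict = D[:, idx]
```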

    Using the Proximal Gradient and the Accelerated Proximal Gradient as Canonical Polyadic Tensor Decomposition Algorithms in Difficult Situations

    Canonical Polyadic (CP) tensor decomposition is useful in many real-world applications due to its uniqueness and the ease of interpretation of its factor matrices. This work addresses the problem of calculating the CP decomposition of tensors in difficult cases where the factor matrices in one or all modes are almost collinear, i.e. where bottleneck or swamp problems arise. This is done by introducing a constraint on the coherences of the factor matrices that ensures the existence of a best low-rank approximation, which makes it possible to estimate these highly correlated factors. Two new algorithms for computing the CP decomposition, based on proximal methods, are proposed. Simulation results demonstrate the good behaviour of these algorithms, as well as a better compromise between accuracy and convergence speed than other algorithms in the literature.
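    For intuition, here is a minimal sketch of one proximal-gradient update of a single CP factor, using a nonnegativity prox as a simple stand-in (the paper's coherence constraint and its prox are more involved and are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 5, 4, 3, 2
A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C)        # exact rank-R tensor

def khatri_rao(U, V):
    """Column-wise Kronecker product, shape (rows_U * rows_V, R)."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

X1 = X.reshape(I, -1)                          # mode-1 unfolding (C order)
G = khatri_rao(B, C)                           # chosen so that X1 = A @ G.T

A0 = A + 0.1 * rng.random((I, R))              # perturbed starting point
grad = (A0 @ G.T - X1) @ G                     # gradient of 0.5*||X1 - A G^T||^2
step = 1.0 / np.linalg.norm(G.T @ G, 2)        # 1/L, L = Lipschitz constant
A_new = np.maximum(A0 - step * grad, 0.0)      # gradient step + nonneg prox
```

    With the step size 1/L, one proximal-gradient step never increases the objective, which is the basic descent property these methods build on.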

    Fast truncation of mode ranks for bilinear tensor operations

    We propose a fast algorithm for mode-rank truncation of the result of a bilinear operation on 3-tensors given in the Tucker or canonical form. If the arguments and the result have mode sizes n and mode ranks r, the computation costs O(nr^3 + r^4). The algorithm is based on the cross approximation of Gram matrices, and the accuracy of the resulting Tucker approximation is limited by the square root of machine precision. Comment: 9 pages, 2 tables. Submitted to Numerical Linear Algebra and Applications, special edition for ICSMT conference, Hong Kong, January 201
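    As background for what "mode-rank truncation" means, the sketch below shows the straightforward SVD-based route (truncated HOSVD), which is the baseline the paper's Gram-matrix cross-approximation approach is designed to speed up; none of this code is from the paper:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along `mode` (rows of M index the new mode)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def truncate_mode_ranks(T, ranks):
    """Truncated HOSVD: keep leading left singular vectors of each unfolding."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)      # compress each mode
    return core, factors

# A tensor of exact multilinear rank (2, 2, 2): truncating to those
# mode ranks is lossless.
rng = np.random.default_rng(2)
G = rng.standard_normal((2, 2, 2))
A = rng.standard_normal((5, 2))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 2))
T = mode_product(mode_product(mode_product(G, A, 0), B, 1), C, 2)
core, facs = truncate_mode_ranks(T, (2, 2, 2))
```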

    A new penalized nonnegative third order tensor decomposition using a block coordinate proximal gradient approach: application to 3D fluorescence spectroscopy

    In this article, we address the problem of tensor factorization subject to certain constraints. We focus on the Canonical Polyadic Decomposition (CPD), also known as Parafac. The interest of this multi-linear decomposition coupled with 3D fluorescence spectroscopy is now well established in the fields of environmental data analysis, biochemistry and chemistry. When real experimental data (possibly corrupted by noise) are processed, the actual rank of the "observed" tensor is generally unknown. Moreover, when the amount of data is very large, this inverse problem may become numerically ill-posed and consequently hard to solve. The use of proper constraints reflecting a priori knowledge about the latent (or hidden) tracked variables, and/or of additional information introduced through penalty functions, can prove very helpful in estimating more relevant components rather than totally arbitrary ones. The counterpart is that the cost functions to be considered can be non-convex and sometimes even non-differentiable, which makes their optimization more difficult and leads to higher computing time and slower convergence. Block alternating proximal approaches offer a rigorous and flexible framework to properly address this problem, since they are applicable to a large class of cost functions while remaining quite easy to implement. Here, we suggest a new block coordinate variable metric forward-backward method, which can be seen as a special case of Majorize-Minimize (MM) approaches, to derive a new penalized nonnegative third-order CPD algorithm. Its interest, efficiency, robustness and flexibility are illustrated through computer simulations carried out on both simulated and real experimental 3D fluorescence spectroscopy data.
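    A toy block-coordinate sketch in the spirit of this abstract: plain alternating projected-gradient updates under a nonnegativity constraint, cycling over the three factor blocks (the paper's variable-metric forward-backward scheme and its penalty terms are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, R = 6, 5, 4, 3
true = [rng.random((n, R)) for n in (I, J, K)]
X = np.einsum('ir,jr,kr->ijk', *true)          # nonnegative rank-R tensor

def khatri_rao(U, V):
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def err(F):
    return np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', *F))

F = [rng.random((n, R)) for n in (I, J, K)]    # random nonnegative start
e0 = err(F)
for _ in range(200):                           # sweep over the three blocks
    for m in range(3):
        others = [F[i] for i in range(3) if i != m]
        G = khatri_rao(*others)                # matches the mode-m unfolding
        Xm = np.moveaxis(X, m, 0).reshape(X.shape[m], -1)
        grad = (F[m] @ G.T - Xm) @ G           # gradient for block m
        step = 1.0 / np.linalg.norm(G.T @ G, 2)
        F[m] = np.maximum(F[m] - step * grad, 0.0)   # nonnegativity prox
```

    Each block update is non-increasing in the fit error, so the loop as a whole can only improve the starting point; the paper's contribution lies in handling penalties and ill-posedness far beyond this bare scheme.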