Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
very fast-scaling computational burden produced by large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field solvers discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for both storing and solving efficiently high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
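The storage argument behind this framework can be made concrete with a toy example. The sketch below (an illustration of the general idea, not code from the paper) shows that a separable 3-way tensor is fully described by its CP factors, so 3n numbers replace n^3 entries:

```python
import numpy as np

# Illustrative sketch (not from the paper): a separable 3-way tensor
# T[i,j,k] = a[i]*b[j]*c[k] has CP rank 1, so its three factor vectors
# need only 3*n numbers while the full array needs n**3.
rng = np.random.default_rng(0)
n = 20
a, b, c = rng.random(n), rng.random(n), rng.random(n)

T = np.einsum('i,j,k->ijk', a, b, c)          # full tensor: 20**3 = 8000 entries
full_entries = T.size                          # 8000
factor_entries = a.size + b.size + c.size      # 60

# Rebuilding from the factors reproduces the tensor exactly.
assert np.allclose(T, np.einsum('i,j,k->ijk', a, b, c))
print(full_entries, factor_entries)            # 8000 60
```

For higher-order problems the gap widens exponentially, which is the "curse of dimensionality" the abstract refers to.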
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike that of matrix methods, is guaranteed under very mild and
natural conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
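The Canonical Polyadic model mentioned above is typically fit by alternating least squares (ALS). The following is a minimal NumPy sketch under simplifying assumptions (fixed iteration count, no normalization or convergence checks); the helper names `unfold`, `khatri_rao`, and `cp_als` are chosen for this example:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    r = X.shape[1]
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, r)

def cp_als(T, rank, iters=500, seed=0):
    """Minimal CP-ALS for a 3-way tensor (sketch: no convergence test)."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        # Each factor update is a linear least-squares solve against the
        # Khatri-Rao product of the other two factors.
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Fit an exactly rank-2 tensor and check the reconstruction.
rng = np.random.default_rng(1)
G = [rng.standard_normal((s, 2)) for s in (4, 5, 6)]
T = np.einsum('ir,jr,kr->ijk', *G)
A, B, C = cp_als(T, rank=2)
rel_err = (np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
           / np.linalg.norm(T))
```

The uniqueness guarantees discussed in the abstract are what make the recovered factors interpretable as latent components, which has no analogue in unconstrained matrix factorization.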
Rank-1 Tensor Approximation Methods and Application to Deflation
Because of the attractiveness of the canonical polyadic (CP) tensor
decomposition in various applications, several algorithms have been designed to
compute it, but efficient ones are still lacking. Iterative deflation
algorithms based on successive rank-1 approximations can be used to perform
this task, since the latter are rather easy to compute. We first present an
algebraic rank-1 approximation method that performs better than the standard
higher-order singular value decomposition (HOSVD) for three-way tensors.
Second, we propose a new iterative rank-1 approximation algorithm that improves
upon any other rank-1 approximation method. Third, we describe a probabilistic
framework that allows studying the convergence of deflation-based CP decomposition
(DCPD) algorithms based on successive rank-1 approximations. A set of computer
experiments then validates theoretical results and demonstrates the efficiency
of DCPD algorithms compared to alternatives.
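The deflation idea described above, peeling off one rank-1 term at a time, can be sketched in a few lines. Deflation is only guaranteed to recover the CPD exactly in special cases such as orthogonally decomposable tensors, which is precisely why a probabilistic convergence analysis is needed; the function names below are assumptions for this illustration, not the paper's algorithms:

```python
import numpy as np

def rank1_approx(T, iters=200):
    """Rank-1 fit of a 3-way tensor: SVD init + higher-order power iteration."""
    a = np.linalg.svd(T.reshape(T.shape[0], -1))[0][:, 0]
    b = np.linalg.svd(np.moveaxis(T, 1, 0).reshape(T.shape[1], -1))[0][:, 0]
    c = np.linalg.svd(np.moveaxis(T, 2, 0).reshape(T.shape[2], -1))[0][:, 0]
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

def deflation_cpd(T, rank, iters=200):
    """Greedy deflation: subtract successive rank-1 approximations."""
    terms, R = [], T.copy()
    for _ in range(rank):
        lam, a, b, c = rank1_approx(R, iters)
        terms.append((lam, a, b, c))
        R = R - lam * np.einsum('i,j,k->ijk', a, b, c)
    return terms, R

# For an orthogonally decomposable tensor, deflation recovers the CPD exactly.
rng = np.random.default_rng(0)
Qs = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (4, 5, 6)]
T = np.einsum('ir,jr,kr,r->ijk', *Qs, np.array([3.0, 1.0]))
terms, R = deflation_cpd(T, rank=2)
```

For generic (non-orthogonal) tensors the residual after subtracting a best rank-1 term can have higher rank than the original, so the loop above need not terminate with a zero residual; that failure mode motivates the probabilistic framework of the paper.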
The average condition number of most tensor rank decomposition problems is infinite
The tensor rank decomposition, or canonical polyadic decomposition, is the
decomposition of a tensor into a sum of rank-1 tensors. The condition number of
the tensor rank decomposition measures the sensitivity of the rank-1 summands
with respect to structured perturbations, i.e., perturbations that preserve
the rank of the tensor being decomposed. By contrast, the angular
condition number measures the perturbations of the rank-1 summands up to
scaling.
We show for random rank-2 tensors with Gaussian density that the expected
value of the condition number is infinite. Under a mild additional
assumption, we show that the same holds for most higher ranks as well. In
fact, as the dimensions of the tensor tend to infinity, asymptotically all
ranks are covered by our analysis. In contrast, we show that rank-2
Gaussian tensors have finite expected angular condition number.
Our results underline the high computational complexity of computing tensor
rank decompositions. We discuss consequences of our results for algorithm
design and for testing algorithms that compute the CPD. Finally, we supply
numerical experiments.
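A classic toy construction, included here as an illustration of ill-conditioning rather than anything taken from the paper, makes the sensitivity concrete: two rank-1 summands whose norms blow up like 1/eps while their sum stays bounded, so tiny changes to the tensor can induce huge changes to the summands:

```python
import numpy as np

# Toy illustration of an ill-conditioned CPD (not from the paper):
# T_eps = ((a + eps*b)^{x3} - a^{x3}) / eps has bounded norm as eps -> 0,
# yet each of its two rank-1 summands grows like 1/eps.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def summands(eps):
    u = a + eps * b
    T1 = np.einsum('i,j,k->ijk', u, u, u) / eps    # first rank-1 term
    T2 = -np.einsum('i,j,k->ijk', a, a, a) / eps   # second rank-1 term
    return T1, T2

T1, T2 = summands(1e-2)
S1, S2 = summands(1e-4)
growth = np.linalg.norm(S1) / np.linalg.norm(T1)   # ~100: summands scale ~1/eps
tensor_norm = np.linalg.norm(S1 + S2)              # stays O(1)
```

Along such a path the tensor converges (here to a rank-3 limit) while its rank-2 summands diverge, which is the geometric mechanism behind an unbounded, and on average infinite, condition number.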