Decoupling Multivariate Polynomials Using First-Order Information
We present a method to decompose a set of multivariate real polynomials into
linear combinations of univariate polynomials in linear forms of the input
variables. The method proceeds by collecting the first-order information of the
polynomials in a set of operating points, which is captured by the Jacobian
matrix evaluated at the operating points. The polyadic canonical decomposition
of the three-way tensor of Jacobian matrices directly returns the unknown
linear relations, as well as the necessary information to reconstruct the
univariate polynomials. The conditions under which this decoupling procedure
works are discussed, and the method is illustrated on several numerical
examples.
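A minimal one-branch sketch of the idea: for f(u) = w·g(vᵀu) with g(x) = x³, the Jacobian J(u) = 3(vᵀu)²·w vᵀ is rank one at every operating point, so the stacked Jacobian tensor has polyadic rank equal to the number of branches and its factors reveal the linear forms. All names and sizes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# One-branch toy model: f(u) = w * g(v^T u), g(x) = x^3.
rng = np.random.default_rng(0)
m, n = 4, 3
w = rng.standard_normal(n)            # output mixing vector (assumed)
v = rng.standard_normal(m)            # linear form of the inputs (assumed)

def jacobian(u):
    # J(u) = g'(v^T u) * w v^T with g'(x) = 3 x^2 -- rank one for one branch.
    return 3.0 * (v @ u) ** 2 * np.outer(w, v)

points = rng.standard_normal((5, m))  # operating points
J = np.stack([jacobian(u) for u in points], axis=2)  # n x m x N Jacobian tensor

# With a single branch, every slice is rank one; an SVD of a slice already
# recovers w and v up to scaling (the general case needs a tensor CPD).
U, s, Vt = np.linalg.svd(J[:, :, 0])
w_est, v_est = U[:, 0], Vt[0]
print(s[1] / s[0] < 1e-12)            # effectively rank one
```

With several branches the slices are no longer individually rank one, which is why the canonical polyadic decomposition of the full Jacobian tensor is needed.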
Symmetric Tensor Decomposition by an Iterative Eigendecomposition Algorithm
We present an iterative algorithm, called the symmetric tensor eigen-rank-one
iterative decomposition (STEROID), for decomposing a symmetric tensor into a
real linear combination of symmetric rank-1 unit-norm outer factors using only
eigendecompositions and least-squares fitting. Originally designed for a
symmetric tensor with an order being a power of two, STEROID is shown to be
applicable to any order through an innovative tensor embedding technique.
Numerical examples demonstrate the high efficiency and accuracy of the proposed
scheme even for large scale problems. Furthermore, we show how STEROID readily
solves a problem in nonlinear block-structured system identification and
nonlinear state-space identification.
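The order-2 base case behind eigendecomposition-based symmetric decompositions can be shown in a few lines: a symmetric matrix equals a real linear combination of unit-norm symmetric rank-1 terms λᵢ·xᵢxᵢᵀ. This is a minimal illustration of the building block STEROID iterates on, not the algorithm itself.

```python
import numpy as np

# A symmetric order-2 tensor (matrix) decomposed into unit-norm rank-1
# symmetric factors via a single eigendecomposition.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B + B.T                               # symmetric order-2 tensor

lam, X = np.linalg.eigh(A)                # eigenvalues, orthonormal eigenvectors
A_rec = sum(l * np.outer(x, x) for l, x in zip(lam, X.T))
print(np.allclose(A, A_rec))              # exact rank-1 expansion: True
```

Higher orders require the repeated eigendecompositions, least-squares fits, and (for orders that are not powers of two) the embedding step described in the abstract.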
Tensor-based framework for training flexible neural networks
Activation functions (AFs) are an important part of the design of neural
networks (NNs), and their choice plays a predominant role in the performance of
a NN. In this work, we are particularly interested in the estimation of
flexible activation functions using tensor-based solutions, where the AFs are
expressed as a weighted sum of predefined basis functions. To do so, we propose
a new learning algorithm which solves a constrained coupled matrix-tensor
factorization (CMTF) problem. This technique fuses the first and zeroth order
information of the NN, where the first-order information is contained in a
Jacobian tensor, following a constrained canonical polyadic decomposition
(CPD). The proposed algorithm can handle different decomposition bases. The
goal of this method is to compress large pretrained NN models, by replacing
subnetworks, i.e., one or multiple layers of the original network, by a
new flexible layer. The approach is applied to a pretrained convolutional
neural network (CNN) used for character classification. Comment: 26 pages, 13 figures
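The flexible-AF parameterization itself is simple to sketch: the activation is a weighted sum of predefined basis functions. The polynomial basis and the least-squares fit to tanh below are illustrative assumptions, not the paper's constrained CMTF algorithm.

```python
import numpy as np

# Flexible activation: phi(x) = sum_k w_k * b_k(x) over a predefined basis.
bases = [lambda x: np.ones_like(x), lambda x: x,
         lambda x: x ** 2, lambda x: x ** 3]

def flexible_af(x, weights):
    # Weighted sum of the basis functions evaluated at x.
    return sum(w * b(x) for w, b in zip(weights, bases))

grid = np.linspace(-2.0, 2.0, 101)
Phi = np.stack([b(grid) for b in bases], axis=1)
# In the paper the weights come out of the coupled matrix-tensor
# factorization; here we simply fit them by least squares to mimic tanh,
# purely to show the parameterization.
weights, *_ = np.linalg.lstsq(Phi, np.tanh(grid), rcond=None)

approx = flexible_af(grid, weights)
print(float(np.max(np.abs(approx - np.tanh(grid)))))  # small fit error
```

Swapping the basis (e.g. splines or Chebyshev polynomials) changes only the `bases` list, which mirrors the paper's claim that the algorithm handles different decomposition bases.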
Stochastic Testing Simulator for Integrated Circuits and MEMS: Hierarchical and Sparse Techniques
Process variations are a major concern in today's chip design since they can
significantly degrade chip performance. To predict such degradation, existing
circuit and MEMS simulators rely on Monte Carlo algorithms, which are typically
too slow. Therefore, novel fast stochastic simulators are highly desired. This
paper first reviews our recently developed stochastic testing simulator that
can achieve speedup factors of hundreds to thousands over Monte Carlo. Then, we
develop a fast hierarchical stochastic spectral simulator to simulate a complex
circuit or system consisting of several blocks. We further present a fast
simulation approach based on anchored ANOVA (analysis of variance) for some
design problems with many process variations. This approach can reduce the
simulation cost and can identify which variation sources have strong impacts on
the circuit's performance. The simulation results of some circuit and MEMS
examples are reported to show the effectiveness of our simulator. Comment: Accepted to IEEE Custom Integrated Circuits Conference in June 2014.
arXiv admin note: text overlap with arXiv:1407.302
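The variance-attribution idea behind the ANOVA-based approach can be sketched with a toy model: decompose the output variance by input to see which variation sources dominate. The "performance" function and all names below are illustrative assumptions, not a circuit or MEMS simulator.

```python
import numpy as np

# Toy performance model driven by three process variations; source 0 is
# deliberately made dominant.
rng = np.random.default_rng(2)

def performance(x):                      # x: (N, 3) variation samples
    return 3.0 * x[:, 0] + 0.1 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]

N = 200_000
x = rng.standard_normal((N, 3))
y = performance(x)

# First-order (main-effect) variance share per source, estimated by
# conditioning the output mean on 20 quantile bins of each input.
shares = []
for j in range(3):
    edges = np.quantile(x[:, j], np.linspace(0.0, 1.0, 21))
    idx = np.clip(np.digitize(x[:, j], edges) - 1, 0, 19)
    cond = np.array([y[idx == b].mean() for b in range(20)])
    cnt = np.array([(idx == b).sum() for b in range(20)])
    shares.append(float((cnt * (cond - y.mean()) ** 2).sum() / N / y.var()))

print([round(s, 3) for s in shares])     # source 0 carries nearly all variance
```

Ranking sources by these shares is what lets the simulator spend effort only on the variations that actually affect performance.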
Decomposing Overcomplete 3rd Order Tensors using Sum-of-Squares Algorithms
Tensor rank and low-rank tensor decompositions have many applications in
learning and complexity theory. Most known algorithms use unfoldings of
tensors and can only handle rank up to $n^{\lfloor p/2 \rfloor}$ for a $p$-th
order tensor in $\mathbb{R}^{n^p}$. Previously no efficient algorithm could
decompose 3rd order tensors when the rank is super-linear in the dimension.
Using ideas from the sum-of-squares hierarchy, we give the first
quasi-polynomial time algorithm that can decompose a random 3rd order tensor
when the rank is as large as $n^{3/2}/\mathrm{polylog}\, n$.
We also give a polynomial time algorithm for certifying the injective norm of
random low rank tensors. Our tensor decomposition algorithm exploits the
relationship between injective norm and the tensor components. The proof relies
on interesting tools for decoupling random variables to prove better matrix
concentration bounds, which can be useful in other settings.
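The classical slice-based approach that the abstract contrasts with (and that is limited to rank at most the dimension) can be sketched via Jennrich-style simultaneous diagonalization. The toy sizes and names below are illustrative, not the paper's sum-of-squares algorithm.

```python
import numpy as np

# Build a random 3rd order tensor T = sum_i a_i (x) b_i (x) c_i with r <= n.
rng = np.random.default_rng(3)
n, r = 6, 6
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Two random weighted combinations of slices share the same eigenvectors:
#   M1 = A diag(C^T x) B^T,  M2 = A diag(C^T y) B^T,
# so M1 M2^{-1} = A D A^{-1} and its eigenvectors recover the columns of A.
x, y = rng.standard_normal(n), rng.standard_normal(n)
M1 = np.einsum('ijk,k->ij', T, x)
M2 = np.einsum('ijk,k->ij', T, y)
vals, vecs = np.linalg.eig(M1 @ np.linalg.inv(M2))
vecs = np.real_if_close(vecs)

# Each eigenvector matches some column of A up to sign and scale.
cos = np.abs((A / np.linalg.norm(A, axis=0)).T
             @ (vecs / np.linalg.norm(vecs, axis=0)))
print(np.allclose(cos.max(axis=0), 1.0, atol=1e-6))
```

This recovery breaks down once the rank exceeds the dimension, which is exactly the overcomplete regime the sum-of-squares algorithm addresses.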